id (string) | title (string) | abstract (string) | full_text (dict) | qas (dict) | figures_and_tables (dict) | question (sequence) | retrieval_gt (sequence) | answer_gt (sequence) | __index_level_0__ (int64)
---|---|---|---|---|---|---|---|---|---
1909.00694 | Minimally Supervised Learning of Affective Events Using Discourse Relations | Recognizing affective events that trigger positive or negative sentiment has a wide range of natural language processing applications but remains a challenging problem mainly because the polarity of an event is not necessarily predictable from its constituent words. In this paper, we propose to propagate affective polarity using discourse relations. Our method is simple and only requires a very small seed lexicon and a large raw corpus. Our experiments using Japanese data show that our method learns affective events effectively without manually labeled data. It also improves supervised learning results when labeled data are small. | {
"paragraphs": [
[
"Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive).",
"Learning affective events is challenging because, as the examples above suggest, the polarity of an event is not necessarily predictable from its constituent words. Combined with the unbounded combinatorial nature of language, the non-compositionality of affective polarity entails the need for large amounts of world knowledge, which can hardly be learned from small annotated data.",
"In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.",
"We trained the models using a Japanese web corpus. Given the minimum amount of supervision, they performed well. In addition, the combination of annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were small."
],
[
"Learning affective events is closely related to sentiment analysis. Whereas sentiment analysis usually focuses on the polarity of what are described (e.g., movies), we work on how people are typically affected by events. In sentiment analysis, much attention has been paid to compositionality. Word-level polarity BIBREF5, BIBREF6, BIBREF7 and the roles of negation and intensification BIBREF8, BIBREF6, BIBREF9 are among the most important topics. In contrast, we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching cold).",
"Label propagation from seed instances is a common approach to inducing sentiment polarities. While BIBREF5 and BIBREF10 worked on word- and phrase-level polarities, BIBREF0 dealt with event-level polarities. BIBREF5 and BIBREF10 linked instances using co-occurrence information and/or phrase-level coordinations (e.g., “$A$ and $B$” and “$A$ but $B$”). We shift our scope to event pairs that are more complex than phrase pairs, and consequently exploit discourse connectives as event-level counterparts of phrase-level conjunctions.",
"BIBREF0 constructed a network of events using word embedding-derived similarities. Compared with this method, our discourse relation-based linking of events is much simpler and more intuitive.",
"Some previous studies made use of document structure to understand the sentiment. BIBREF11 proposed a sentiment-specific pre-training strategy using unlabeled dialog data (tweet-reply pairs). BIBREF12 proposed a method of building a polarity-tagged corpus (ACP Corpus). They automatically gathered sentences that had positive or negative opinions utilizing HTML layout structures in addition to linguistic patterns. Our method depends only on raw texts and thus has wider applicability.",
""
],
[
""
],
[
"",
"Our goal is to learn the polarity function $p(x)$, which predicts the sentiment polarity score of an event $x$. We approximate $p(x)$ by a neural network with the following form:",
"${\\rm Encoder}$ outputs a vector representation of the event $x$. ${\\rm Linear}$ is a fully-connected layer and transforms the representation into a scalar. ${\\rm tanh}$ is the hyperbolic tangent and transforms the scalar into a score ranging from $-1$ to 1. In Section SECREF21, we consider two specific implementations of ${\\rm Encoder}$.",
""
],
[
"Our method requires a very small seed lexicon and a large raw corpus. We assume that we can automatically extract discourse-tagged event pairs, $(x_{i1}, x_{i2})$ ($i=1, \\cdots $) from the raw corpus. We refer to $x_{i1}$ and $x_{i2}$ as former and latter events, respectively. As shown in Figure FIGREF1, we limit our scope to two discourse relations: Cause and Concession.",
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types.",
""
],
[
"The seed lexicon matches (1) the latter event but (2) not the former event, and (3) their discourse relation type is Cause or Concession. If the discourse relation type is Cause, the former event is given the same score as the latter. Likewise, if the discourse relation type is Concession, the former event is given the opposite of the latter's score. They are used as reference scores during training.",
""
],
[
"The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Cause. We assume the two events have the same polarities.",
""
],
[
"The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Concession. We assume the two events have the reversed polarities.",
""
],
[
"Using AL, CA, and CO data, we optimize the parameters of the polarity function $p(x)$. We define a loss function for each of the three types of event pairs and sum up the multiple loss functions.",
"We use mean squared error to construct loss functions. For the AL data, the loss function is defined as:",
"where $x_{i1}$ and $x_{i2}$ are the $i$-th pair of the AL data. $r_{i1}$ and $r_{i2}$ are the automatically-assigned scores of $x_{i1}$ and $x_{i2}$, respectively. $N_{\\rm AL}$ is the total number of AL pairs, and $\\lambda _{\\rm AL}$ is a hyperparameter.",
"For the CA data, the loss function is defined as:",
"$y_{i1}$ and $y_{i2}$ are the $i$-th pair of the CA pairs. $N_{\\rm CA}$ is the total number of CA pairs. $\\lambda _{\\rm CA}$ and $\\mu $ are hyperparameters. The first term makes the scores of the two events closer while the second term prevents the scores from shrinking to zero.",
"The loss function for the CO data is defined analogously:",
"The difference is that the first term makes the scores of the two events distant from each other.",
""
],
[
""
],
[
""
],
[
"As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.",
". 重大な失敗を犯したので、仕事をクビになった。",
"Because [I] made a serious mistake, [I] got fired.",
"From this sentence, we extracted the event pair of “重大な失敗を犯す” ([I] make a serious mistake) and “仕事をクビになる” ([I] get fired), and tagged it with Cause.",
"We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16."
],
[
"We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:",
". 作業が楽だ。",
"The work is easy.",
". 駐車場がない。",
"There is no parking lot.",
"Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.",
"The objective function for supervised training is:",
"",
"where $v_i$ is the $i$-th event, $R_i$ is the reference score of $v_i$, and $N_{\\rm ACP}$ is the number of the events of the ACP Corpus.",
"To optimize the hyperparameters, we used the dev set of the ACP Corpus. For the evaluation, we used the test set of the ACP Corpus. The model output was classified as positive if $p(x) > 0$ and negative if $p(x) \\le 0$.",
""
],
[
"As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.",
"BERT BIBREF17 is a pre-trained multi-layer bidirectional Transformer BIBREF18 encoder. Its output is the final hidden state corresponding to the special classification tag ([CLS]). For the details of ${\\rm Encoder}$, see Sections SECREF30.",
"We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\\mathcal {L}_{\\rm AL}$, $\\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$, $\\mathcal {L}_{\\rm ACP}$, and $\\mathcal {L}_{\\rm ACP} + \\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$.",
""
],
[
"",
"Table TABREF23 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexicon. We can see that the seed lexicon itself had practically no impact on prediction.",
"The models in the top block performed considerably better than the random baselines. The performance gaps with their (semi-)supervised counterparts, shown in the middle block, were less than 7%. This demonstrates the effectiveness of discourse relation-based label propagation.",
"Comparing the model variants, we obtained the highest score with the BiGRU encoder trained with the AL+CA+CO dataset. BERT was competitive but its performance went down if CA and CO were used in addition to AL. We conjecture that BERT was more sensitive to noises found more frequently in CA and CO.",
"Contrary to our expectations, supervised models (ACP) outperformed semi-supervised models (ACP+AL+CA+CO). This suggests that the training set of 0.6 million events is sufficiently large for training the models. For comparison, we trained the models with a subset (6,000 events) of the ACP dataset. As the results shown in Table TABREF24 demonstrate, our method is effective when labeled data are small.",
"The result of hyperparameter optimization for the BiGRU encoder was as follows:",
"As the CA and CO pairs were equal in size (Table TABREF16), $\\lambda _{\\rm CA}$ and $\\lambda _{\\rm CO}$ were comparable values. $\\lambda _{\\rm CA}$ was about one-third of $\\lambda _{\\rm CO}$, and this indicated that the CA pairs were noisier than the CO pairs. A major type of CA pairs that violates our assumption was in the form of “$\\textit {problem}_{\\text{negative}}$ causes $\\textit {solution}_{\\text{positive}}$”:",
". (悪いところがある, よくなるように努力する)",
"(there is a bad point, [I] try to improve [it])",
"The polarities of the two events were reversed in spite of the Cause relation, and this lowered the value of $\\lambda _{\\rm CA}$.",
"Some examples of model outputs are shown in Table TABREF26. The first two examples suggest that our model successfully learned negation without explicit supervision. Similarly, the next two examples differ only in voice but the model correctly recognized that they had opposite polarities. The last two examples share the predicate “落とす\" (drop) and only the objects are different. The second event “肩を落とす\" (lit. drop one's shoulders) is an idiom that expresses a disappointed feeling. The examples demonstrate that our model correctly learned non-compositional expressions.",
""
],
[
"In this paper, we proposed to use discourse relations to effectively propagate polarities of affective events from seeds. Experiments show that, even with a minimal amount of supervision, the proposed method performed well.",
"Although event pairs linked by discourse analysis are shown to be useful, they nevertheless contain noises. Adding linguistically-motivated filtering rules would help improve the performance."
],
[
"We thank Nobuhiro Kaji for providing the ACP Corpus and Hirokazu Kiyomaru and Yudai Kishimoto for their help in extracting event pairs. This work was partially supported by Yahoo! Japan Corporation."
],
[
"喜ぶ (rejoice), 嬉しい (be glad), 楽しい (be pleasant), 幸せ (be happy), 感動 (be impressed), 興奮 (be excited), 懐かしい (feel nostalgic), 好き (like), 尊敬 (respect), 安心 (be relieved), 感心 (admire), 落ち着く (be calm), 満足 (be satisfied), 癒される (be healed), and スッキリ (be refreshed)."
],
[
"怒る (get angry), 悲しい (be sad), 寂しい (be lonely), 怖い (be scared), 不安 (feel anxious), 恥ずかしい (be embarrassed), 嫌 (hate), 落ち込む (feel down), 退屈 (be bored), 絶望 (feel hopeless), 辛い (have a hard time), 困る (have trouble), 憂鬱 (be depressed), 心配 (be worried), and 情けない (be sorry)."
],
[
"The dimension of the embedding layer was 256. The embedding layer was initialized with the word embeddings pretrained using the Web corpus. The input sentences were segmented into words by the morphological analyzer Juman++. The vocabulary size was 100,000. The number of hidden layers was 2. The dimension of hidden units was 256. The optimizer was Momentum SGD BIBREF21. The mini-batch size was 1024. We ran 100 epochs and selected the snapshot that achieved the highest score for the dev set."
],
[
"We used a Japanese BERT model pretrained with Japanese Wikipedia. The input sentences were segmented into words by Juman++, and words were broken into subwords by applying BPE BIBREF20. The vocabulary size was 32,000. The maximum length of an input sequence was 128. The number of hidden layers was 12. The dimension of hidden units was 768. The number of self-attention heads was 12. The optimizer was Adam BIBREF19. The mini-batch size was 32. We ran 1 epoch."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed Method",
"Proposed Method ::: Polarity Function",
"Proposed Method ::: Discourse Relation-Based Event Pairs",
"Proposed Method ::: Discourse Relation-Based Event Pairs ::: AL (Automatically Labeled Pairs)",
"Proposed Method ::: Discourse Relation-Based Event Pairs ::: CA (Cause Pairs)",
"Proposed Method ::: Discourse Relation-Based Event Pairs ::: CO (Concession Pairs)",
"Proposed Method ::: Loss Functions",
"Experiments",
"Experiments ::: Dataset",
"Experiments ::: Dataset ::: AL, CA, and CO",
"Experiments ::: Dataset ::: ACP (ACP Corpus)",
"Experiments ::: Model Configurations",
"Experiments ::: Results and Discussion",
"Conclusion",
"Acknowledgments",
"Appendices ::: Seed Lexicon ::: Positive Words",
"Appendices ::: Seed Lexicon ::: Negative Words",
"Appendices ::: Settings of Encoder ::: BiGRU",
"Appendices ::: Settings of Encoder ::: BERT"
]
} | {
"answers": [
{
"annotation_id": [
"31e85022a847f37c15fd0415f3c450c74c8e4755",
"95da0a6e1b08db74a405c6a71067c9b272a50ff5"
],
"answer": [
{
"evidence": [
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types."
],
"extractive_spans": [],
"free_form_answer": "a vocabulary of positive and negative predicates that helps determine the polarity score of an event",
"highlighted_evidence": [
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event.",
"It is a "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types."
],
"extractive_spans": [
"seed lexicon consists of positive and negative predicates"
],
"free_form_answer": "",
"highlighted_evidence": [
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
},
{
"annotation_id": [
"1e5e867244ea656c4b7632628086209cf9bae5fa"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Performance of various models on the ACP test set.",
"FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data.",
"As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.",
"We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\\mathcal {L}_{\\rm AL}$, $\\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$, $\\mathcal {L}_{\\rm ACP}$, and $\\mathcal {L}_{\\rm ACP} + \\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$."
],
"extractive_spans": [],
"free_form_answer": "Using all data to train: AL -- BiGRU achieved 0.843 accuracy, AL -- BERT achieved 0.863 accuracy, AL+CA+CO -- BiGRU achieved 0.866 accuracy, AL+CA+CO -- BERT achieved 0.835, accuracy, ACP -- BiGRU achieved 0.919 accuracy, ACP -- BERT achived 0.933, accuracy, ACP+AL+CA+CO -- BiGRU achieved 0.917 accuracy, ACP+AL+CA+CO -- BERT achieved 0.913 accuracy. \nUsing a subset to train: BERT achieved 0.876 accuracy using ACP (6K), BERT achieved 0.886 accuracy using ACP (6K) + AL, BiGRU achieved 0.830 accuracy using ACP (6K), BiGRU achieved 0.879 accuracy using ACP (6K) + AL + CA + CO.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Performance of various models on the ACP test set.",
"FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data.",
"As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. ",
"We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\\mathcal {L}_{\\rm AL}$, $\\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$, $\\mathcal {L}_{\\rm ACP}$, and $\\mathcal {L}_{\\rm ACP} + \\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
},
{
"annotation_id": [
"49a78a07d2eed545556a835ccf2eb40e5eee9801",
"acd6d15bd67f4b1496ee8af1c93c33e7d59c89e1"
],
"answer": [
{
"evidence": [
"In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event."
],
"extractive_spans": [],
"free_form_answer": "based on the relation between events, the suggested polarity of one event can determine the possible polarity of the other event ",
"highlighted_evidence": [
"As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.",
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types."
],
"extractive_spans": [],
"free_form_answer": "cause relation: both events in the relation should have the same polarity; concession relation: events should have opposite polarity",
"highlighted_evidence": [
"As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.",
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
},
{
"annotation_id": [
"36926a4c9e14352c91111150aa4c6edcc5c0770f",
"75b6dd28ccab20a70087635d89c2b22d0e99095c"
],
"answer": [
{
"evidence": [
"As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.",
"We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16.",
"FLOAT SELECTED: Table 1: Statistics of the AL, CA, and CO datasets.",
"We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:",
"Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.",
"FLOAT SELECTED: Table 2: Details of the ACP dataset."
],
"extractive_spans": [],
"free_form_answer": "7000000 pairs of events were extracted from the Japanese Web corpus, 529850 pairs of events were extracted from the ACP corpus",
"highlighted_evidence": [
"As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. ",
"From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16.",
"FLOAT SELECTED: Table 1: Statistics of the AL, CA, and CO datasets.",
"We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well.",
"Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.",
"FLOAT SELECTED: Table 2: Details of the ACP dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Details of the ACP dataset."
],
"extractive_spans": [],
"free_form_answer": "The ACP corpus has around 700k events split into positive and negative polarity ",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Details of the ACP dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2d8c7df145c37aad905e48f64d8caa69e54434d4"
],
"answer": [
{
"evidence": [
"Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive)."
],
"extractive_spans": [
"negative",
"positive"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"df4372b2e8d9bb2039a5582f192768953b01d904"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data."
],
"extractive_spans": [],
"free_form_answer": "3%",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"5c5bbc8af91c16af89b4ddd57ee6834be018e4e7"
],
"answer": [
{
"evidence": [
"In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event."
],
"extractive_spans": [],
"free_form_answer": "by exploiting discourse relations to propagate polarity from seed predicates to final sentiment polarity",
"highlighted_evidence": [
"In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0206f2131f64a3e02498cedad1250971b78ffd0c"
],
"answer": [
{
"evidence": [
"We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16."
],
"extractive_spans": [],
"free_form_answer": "30 words",
"highlighted_evidence": [
"We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"c36bad2758c4f9866d64c357c475d370595d937f"
],
"answer": [
{
"evidence": [
"As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.",
"We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16."
],
"extractive_spans": [
"100 million sentences"
],
"free_form_answer": "",
"highlighted_evidence": [
"As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. ",
"From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is the seed lexicon?",
"What are the results?",
"How are relations used to propagate polarity?",
"How big is the Japanese data?",
"What are labels available in dataset for supervision?",
"How big are improvements of supervszed learning results trained on smalled labeled data enhanced with proposed approach copared to basic approach?",
"How does their model learn using mostly raw data?",
"How big is seed lexicon used for training?",
"How large is raw corpus used for training?"
],
"question_id": [
"753990d0b621d390ed58f20c4d9e4f065f0dc672",
"9d578ddccc27dd849244d632dd0f6bf27348ad81",
"02e4bf719b1a504e385c35c6186742e720bcb281",
"44c4bd6decc86f1091b5fc0728873d9324cdde4e",
"86abeff85f3db79cf87a8c993e5e5aa61226dc98",
"c029deb7f99756d2669abad0a349d917428e9c12",
"39f8db10d949c6b477fa4b51e7c184016505884f",
"d0bc782961567dc1dd7e074b621a6d6be44bb5b4",
"a592498ba2fac994cd6fad7372836f0adb37e22a"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: An overview of our method. We focus on pairs of events, the former events and the latter events, which are connected with a discourse relation, CAUSE or CONCESSION. Dropped pronouns are indicated by brackets in English translations. We divide the event pairs into three types: AL, CA, and CO. In AL, the polarity of a latter event is automatically identified as either positive or negative, according to the seed lexicon (the positive word is colored red and the negative word blue). We propagate the latter event’s polarity to the former event. The same polarity as the latter event is used for the discourse relation CAUSE, and the reversed polarity for CONCESSION. In CA and CO, the latter event’s polarity is not known. Depending on the discourse relation, we encourage the two events’ polarities to be the same (CA) or reversed (CO). Details are given in Section 3.2.",
"Table 1: Statistics of the AL, CA, and CO datasets.",
"Table 2: Details of the ACP dataset.",
"Table 5: Examples of polarity scores predicted by the BiGRU model trained with AL+CA+CO.",
"Table 3: Performance of various models on the ACP test set.",
"Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Table5-1.png",
"5-Table3-1.png",
"5-Table4-1.png"
]
} | [
"What is the seed lexicon?",
"What are the results?",
"How are relations used to propagate polarity?",
"How big is the Japanese data?",
"How big are improvements of supervszed learning results trained on smalled labeled data enhanced with proposed approach copared to basic approach?",
"How does their model learn using mostly raw data?",
"How big is seed lexicon used for training?"
] | [
[
"1909.00694-Proposed Method ::: Discourse Relation-Based Event Pairs-1"
],
[
"1909.00694-Experiments ::: Model Configurations-2",
"1909.00694-5-Table4-1.png",
"1909.00694-Experiments ::: Model Configurations-0",
"1909.00694-5-Table3-1.png"
],
[
"1909.00694-Proposed Method ::: Discourse Relation-Based Event Pairs-1",
"1909.00694-Introduction-2"
],
[
"1909.00694-Experiments ::: Dataset ::: AL, CA, and CO-4",
"1909.00694-4-Table1-1.png",
"1909.00694-Experiments ::: Dataset ::: AL, CA, and CO-0",
"1909.00694-Experiments ::: Dataset ::: ACP (ACP Corpus)-5",
"1909.00694-Experiments ::: Dataset ::: ACP (ACP Corpus)-0",
"1909.00694-4-Table2-1.png"
],
[
"1909.00694-5-Table4-1.png"
],
[
"1909.00694-Introduction-2"
],
[
"1909.00694-Experiments ::: Dataset ::: AL, CA, and CO-4"
]
] | [
"a vocabulary of positive and negative predicates that helps determine the polarity score of an event",
"Using all data to train: AL -- BiGRU achieved 0.843 accuracy, AL -- BERT achieved 0.863 accuracy, AL+CA+CO -- BiGRU achieved 0.866 accuracy, AL+CA+CO -- BERT achieved 0.835, accuracy, ACP -- BiGRU achieved 0.919 accuracy, ACP -- BERT achived 0.933, accuracy, ACP+AL+CA+CO -- BiGRU achieved 0.917 accuracy, ACP+AL+CA+CO -- BERT achieved 0.913 accuracy. \nUsing a subset to train: BERT achieved 0.876 accuracy using ACP (6K), BERT achieved 0.886 accuracy using ACP (6K) + AL, BiGRU achieved 0.830 accuracy using ACP (6K), BiGRU achieved 0.879 accuracy using ACP (6K) + AL + CA + CO.",
"cause relation: both events in the relation should have the same polarity; concession relation: events should have opposite polarity",
"The ACP corpus has around 700k events split into positive and negative polarity ",
"3%",
"by exploiting discourse relations to propagate polarity from seed predicates to final sentiment polarity",
"30 words"
] | 0 |
1705.09665 | Community Identity and User Engagement in a Multi-Community Landscape | A community's identity defines and shapes its internal dynamics. Our current understanding of this interplay is mostly limited to glimpses gathered from isolated studies of individual communities. In this work we provide a systematic exploration of the nature of this relation across a wide variety of online communities. To this end we introduce a quantitative, language-based typology reflecting two key aspects of a community's identity: how distinctive, and how temporally dynamic it is. By mapping almost 300 Reddit communities into the landscape induced by this typology, we reveal regularities in how patterns of user engagement vary with the characteristics of a community. Our results suggest that the way new and existing users engage with a community depends strongly and systematically on the nature of the collective identity it fosters, in ways that are highly consequential to community maintainers. For example, communities with distinctive and highly dynamic identities are more likely to retain their users. However, such niche communities also exhibit much larger acculturation gaps between existing users and newcomers, which potentially hinder the integration of the latter. More generally, our methodology reveals differences in how various social phenomena manifest across communities, and shows that structuring the multi-community landscape can lead to a better understanding of the systematic nature of this diversity. | {
"paragraphs": [
[
"“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”",
"",
"— Italo Calvino, Invisible Cities",
"A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within.",
"One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. Are there certain types of communities where we can expect similar or contrasting engagement patterns?",
"To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities. Organizing the multi-community landscape would allow us to both characterize individual points within this space, and reason about systematic variations in patterns of user engagement across the space.",
"Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identityand user engagement we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and dynamics of its temporal evolution.",
"Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format.",
"Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics. In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features.",
"Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.",
"More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities.",
"More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity."
],
[
"A community's identity derives from its members' common interests and shared experiences BIBREF15 , BIBREF20 . In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.",
"We now proceed to outline our quantitative typology, which maps communities along these two dimensions. We start by providing an intuition through inspecting a few example communities. We then introduce a generalizable language-based methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity."
],
[
"In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.",
"We consider four communities from Reddit: in Seahawks, fans of the Seahawks football team gather to discuss games and players; in BabyBumps, expecting mothers trade advice and updates on their pregnancy; Cooking consists of recipe ideas and general discussion about cooking; while in pics, users share various images of random things (like eels and hornets). We note that these communities are topically contrasting and foster fairly disjoint user bases. Additionally, these communities exhibit varied patterns of user engagement. While Seahawks maintains a devoted set of users from month to month, pics is dominated by transient users who post a few times and then depart.",
"Discussions within these communities also span varied sets of interests. Some of these interests are more specific to the community than others: risotto, for example, is seldom a discussion point beyond Cooking. Additionally, some interests consistently recur, while others are specific to a particular time: kitchens are a consistent focus point for cooking, but mint is only in season during spring. Coupling specificity and consistency we find interests such as easter, which isn't particularly specific to BabyBumps but gains prominence in that community around Easter (see Figure FIGREF3 .A for further examples).",
"These specific interests provide a window into the nature of the communities' interests as a whole, and by extension their community identities. Overall, discussions in Cooking focus on topics which are highly distinctive and consistently recur (like risotto). In contrast, discussions in Seahawks are highly dynamic, rapidly shifting over time as new games occur and players are traded in and out. In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity (examples in Figure FIGREF3 .B)."
],
[
"Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. We then extend these word-level primitives to characterize entire comments, and the community itself.",
"Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution, in order to identify instances of linguistic variation BIBREF21 , BIBREF19 . Our particular framework makes this comparison by way of pointwise mutual information (PMI).",
"In the following, we use INLINEFORM0 to denote one community within a set INLINEFORM1 of communities, and INLINEFORM2 to denote one time period within the entire history INLINEFORM3 of INLINEFORM4 . We account for temporal as well as inter-community variation by computing word-level measures for each time period of each community's history, INLINEFORM5 . Given a word INLINEFORM6 used within a particular community INLINEFORM7 at time INLINEFORM8 , we define two word-level measures:",
"Specificity. We quantify the specificity INLINEFORM0 of INLINEFORM1 to INLINEFORM2 by calculating the PMI of INLINEFORM3 and INLINEFORM4 , relative to INLINEFORM5 , INLINEFORM6 ",
"where INLINEFORM0 is INLINEFORM1 's frequency in INLINEFORM2 . INLINEFORM3 is specific to INLINEFORM4 if it occurs more frequently in INLINEFORM5 than in the entire set INLINEFORM6 , hence distinguishing this community from the rest. A word INLINEFORM7 whose occurrence is decoupled from INLINEFORM8 , and thus has INLINEFORM9 close to 0, is said to be generic.",
"We compute values of INLINEFORM0 for each time period INLINEFORM1 in INLINEFORM2 ; in the above description we drop the time-based subscripts for clarity.",
"Volatility. We quantify the volatility INLINEFORM0 of INLINEFORM1 to INLINEFORM2 as the PMI of INLINEFORM3 and INLINEFORM4 relative to INLINEFORM5 , the entire history of INLINEFORM6 : INLINEFORM7 ",
"A word INLINEFORM0 is volatile at time INLINEFORM1 in INLINEFORM2 if it occurs more frequently at INLINEFORM3 than in the entire history INLINEFORM4 , behaving as a fad within a small window of time. A word that occurs with similar frequency across time, and hence has INLINEFORM5 close to 0, is said to be stable.",
"Extending to utterances. Using our word-level primitives, we define the specificity of an utterance INLINEFORM0 in INLINEFORM1 , INLINEFORM2 as the average specificity of each word in the utterance. The volatility of utterances is defined analogously.",
""
],
[
"Having described these word-level measures, we now proceed to establish the primary axes of our typology:",
"Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 . We refer to a community with a less distinctive identity as being generic.",
"Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . We refer to a community whose language is relatively consistent throughout time as being stable.",
"In our subsequent analyses, we focus mostly on examing the average distinctiveness and dynamicity of a community over time, denoted INLINEFORM0 and INLINEFORM1 ."
],
[
"We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.",
"Dataset description. Reddit is a popular website where users form and participate in discussion-based communities called subreddits. Within these communities, users post content—such as images, URLs, or questions—which often spark vibrant lengthy discussions in thread-based comment sections.",
"The website contains many highly active subreddits with thousands of active subscribers. These communities span an extremely rich variety of topical interests, as represented by the examples described earlier. They also vary along a rich multitude of structural dimensions, such as the number of users, the amount of conversation and social interaction, and the social norms determining which types of content become popular. The diversity and scope of Reddit's multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics.",
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 ).",
"Estimating linguistic measures. We estimate word frequencies INLINEFORM0 , and by extension each downstream measure, in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour. First, we only consider top-level comments which are initial responses to a post, as the content of lower-level responses might reflect conventions of dialogue more than a community's high-level interests. Next, in order to prevent a few highly active users from dominating our frequency estimates, we count each unique word once per user, ignoring successive uses of the same word by the same user. This ensures that our word-level characterizations are not skewed by a small subset of highly active contributors.",
"In our subsequent analyses, we will only look at these measures computed over the nouns used in comments. In principle, our framework can be applied to any choice of vocabulary. However, in the case of Reddit using nouns provides a convenient degree of interpretability. We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist, but interpreting the overuse of verbs or function words such as take or of is less straightforward. Additionally, in focusing on nouns we adopt the view emphasized in modern “third wave” accounts of sociolinguistic variation, that stylistic variation is inseparable from topical content BIBREF23 . In the case of online communities, the choice of what people choose to talk about serves as a primary signal of social identity. That said, a typology based on more purely stylistic differences is an interesting avenue for future work.",
"Accounting for rare words. One complication when using measures such as PMI, which are based off of ratios of frequencies, is that estimates for very infrequent words could be overemphasized BIBREF24 . Words that only appear a few times in a community tend to score at the extreme ends of our measures (e.g. as highly specific or highly generic), obfuscating the impact of more frequent words in the community. To address this issue, we discard the long tail of infrequent words in our analyses, using only the top 5th percentile of words, by frequency within each INLINEFORM0 , to score comments and communities.",
"Typology output on Reddit. The distribution of INLINEFORM0 and INLINEFORM1 across Reddit communities is shown in Figure FIGREF3 .B, along with examples of communities at the extremes of our typology. We find that interpretable groupings of communities emerge at various points within our axes. For instance, highly distinctive and dynamic communities tend to focus on rapidly-updating interests like sports teams and games, while generic and consistent communities tend to be large “link-sharing” hubs where users generally post content with no clear dominating themes. More examples of communities at the extremes of our typology are shown in Table TABREF9 .",
"We note that these groupings capture abstract properties of a community's content that go beyond its topic. For instance, our typology relates topically contrasting communities such as yugioh (which is about a popular trading card game) and Seahawks through the shared trait that their content is particularly distinctive. Additionally, the axes can clarify differences between topically similar communities: while startrek and thewalkingdead both focus on TV shows, startrek is less dynamic than the median community, while thewalkingdead is among the most dynamic communities, as the show was still airing during the years considered."
],
[
"We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity. This section shows that there is an informative and highly predictive relationship between a community's position in this typology and its user engagement patterns. We find that communities with distinctive and dynamic identities have higher rates of user engagement, and further show that a community's position in our identity-based landscape holds important predictive information that is complementary to a strong activity baseline.",
"In particular user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 . We quantify how successful communities are at retaining users in terms of both short and long-term commitment. Our results indicate that rates of user retention vary drastically, yet systematically according to how distinctive and dynamic a community is (Figure FIGREF3 ).",
"We find a strong, explanatory relationship between the temporal consistency of a community's identity and rates of user engagement: dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement. The relationship between distinctiveness and engagement is less universal, but still highly informative: niche communities tend to engender strong, focused interest from users at one particular point in time, though this does not necessarily translate into long-term retention."
],
[
"We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's INLINEFORM0 = 0.70, INLINEFORM1 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's INLINEFORM2 = 0.33, INLINEFORM3 0.001; Figure FIGREF11 .A, right).",
"Monthly retention is formally defined as the proportion of users who contribute in month INLINEFORM0 and then return to contribute again in month INLINEFORM1 . Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95% bootstrapped confidence intervals, cluster-resampled at the level of subreddit BIBREF25 , to account for differences in the number of months each subreddit contributes to the data.",
"Importantly, we find that in the task of predicting community-level user retention our identity-based typology holds additional predictive value on top of strong baseline features based on community-size (# contributing users) and activity levels (mean # contributions per user), which are commonly used for churn prediction BIBREF7 . We compared out-of-sample predictive performance via leave-one-community-out cross validation using random forest regressors with ensembles of size 100, and otherwise default hyperparameters BIBREF26 . A model predicting average monthly retention based on a community's average distinctiveness and dynamicity achieves an average mean squared error ( INLINEFORM0 ) of INLINEFORM1 and INLINEFORM2 , while an analogous model predicting based on a community's size and average activity level (both log-transformed) achieves INLINEFORM4 and INLINEFORM5 . The difference between the two models is not statistically significant ( INLINEFORM6 , Wilcoxon signed-rank test). However, combining features from both models results in a large and statistically significant improvement over each independent model ( INLINEFORM7 , INLINEFORM8 , INLINEFORM9 Bonferroni-corrected pairwise Wilcoxon tests). These results indicate that our typology can explain variance in community-level retention rates, and provides information beyond what is present in standard activity-based features."
],
[
"As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's INLINEFORM0 = 0.41, INLINEFORM1 0.001, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's INLINEFORM2 = 0.03, INLINEFORM3 0.77; Figure FIGREF11 .B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content.",
"To measure user tenures we focused on one slice of data (May, 2013) and measured how many months a user spends in each community, on average—the average number of months between a user's first and last comment in each community. We have activity data up until May 2015, so the maximum tenure is 24 months in this set-up, which is exceptionally long relative to the average community member (throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community)."
],
[
"The previous section shows that there is a strong connection between the nature of a community's identity and its basic user engagement patterns. In this section, we probe the relationship between a community's identity and how permeable, or accessible, it is to outsiders.",
"We measure this phenomenon using what we call the acculturation gap, which compares the extent to which engaged vs. non-engaged users employ community-specific language. While previous work has found this gap to be large and predictive of future user engagement in two beer-review communities BIBREF5 , we find that the size of the acculturation gap depends strongly on the nature of a community's identity, with the gap being most pronounced in stable, highly distinctive communities (Figure FIGREF13 ).",
"This finding has important implications for our understanding of online communities. Though many works have analyzed the dynamics of “linguistic belonging” in online communities BIBREF16 , BIBREF28 , BIBREF5 , BIBREF17 , our results suggest that the process of linguistically fitting in is highly contingent on the nature of a community's identity. At one extreme, in generic communities like pics or worldnews there is no distinctive, linguistic identity for users to adopt.",
"To measure the acculturation gap for a community, we follow Danescu-Niculescu-Mizil et al danescu-niculescu-mizilno2013 and build “snapshot language models” (SLMs) for each community, which capture the linguistic state of a community at one point of time. Using these language models we can capture how linguistically close a particular utterance is to the community by measuring the cross-entropy of this utterance relative to the SLM: DISPLAYFORM0 ",
"where INLINEFORM0 is the probability assigned to bigram INLINEFORM1 from comment INLINEFORM2 in community-month INLINEFORM3 . We build the SLMs by randomly sampling 200 active users—defined as users with at least 5 comments in the respective community and month. For each of these 200 active users we select 5 random 10-word spans from 5 unique comments. To ensure robustness and maximize data efficiency, we construct 100 SLMs for each community-month pair that has enough data, bootstrap-resampling from the set of active users.",
"We compute a basic measure of the acculturation gap for a community-month INLINEFORM0 as the relative difference of the cross-entropy of comments by users active in INLINEFORM1 with that of singleton comments by outsiders—i.e., users who only ever commented once in INLINEFORM2 , but who are still active in Reddit in general: DISPLAYFORM0 ",
" INLINEFORM0 denotes the distribution over singleton comments, INLINEFORM1 denotes the distribution over comments from users active in INLINEFORM2 , and INLINEFORM3 the expected values of the cross-entropy over these respective distributions. For each bootstrap-sampled SLM we compute the cross-entropy of 50 comments by active users (10 comments from 5 randomly sampled active users, who were not used to construct the SLM) and 50 comments from randomly-sampled outsiders.",
"Figure FIGREF13 .A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is. Highly distinctive communities have far higher acculturation gaps, while dynamicity exhibits a non-linear relationship: relatively stable communities have a higher linguistic `entry barrier', as do very dynamic ones. Thus, in communities like IAmA (a general Q&A forum) that are very generic, with content that is highly, but not extremely dynamic, outsiders are at no disadvantage in matching the community's language. In contrast, the acculturation gap is large in stable, distinctive communities like Cooking that have consistent community-specific language. The gap is also large in extremely dynamic communities like Seahawks, which perhaps require more attention or interest on the part of active users to keep up-to-date with recent trends in content.",
"These results show that phenomena like the acculturation gap, which were previously observed in individual communities BIBREF28 , BIBREF5 , cannot be easily generalized to a larger, heterogeneous set of communities. At the same time, we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary."
],
[
"Through the acculturation gap, we have shown that communities exhibit large yet systematic variations in their permeability to outsiders. We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity, by focusing on two particular ways in which such gaps might manifest among users: through different levels of engagement with specific content and with temporally volatile content.",
"Echoing previous results, we find that community type mediates the extent and nature of the divide in content affinity. While in distinctive communities active members have a higher affinity for both community-specific content and for highly volatile content, the opposite is true for generic communities, where it is the outsiders who engage more with volatile content.",
"We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders. Concretely, for each community INLINEFORM0 , we define the specificity gap INLINEFORM1 as the relative difference between the average specificity of comments authored by active members, and by outsiders, where these measures are macroaveraged over users. Large, positive INLINEFORM2 then occur in communities where active users tend to engage with substantially more community-specific content than outsiders.",
"We analogously define the volatility gap INLINEFORM0 as the relative difference in volatilities of active member and outsider comments. Large, positive values of INLINEFORM1 characterize communities where active users tend to have more volatile interests than outsiders, while negative values indicate communities where active users tend to have more stable interests.",
"We find that in 94% of communities, INLINEFORM0 , indicating (somewhat unsurprisingly) that in almost all communities, active users tend to engage with more community-specific content than outsiders. However, the magnitude of this divide can vary greatly: for instance, in Homebrewing, which is dedicated to brewing beer, the divide is very pronounced ( INLINEFORM1 0.33) compared to funny, a large hub where users share humorous content ( INLINEFORM2 0.011).",
"The nature of the volatility gap is comparatively more varied. In Homebrewing ( INLINEFORM0 0.16), as in 68% of communities, active users tend to write more volatile comments than outsiders ( INLINEFORM1 0). However, communities like funny ( INLINEFORM2 -0.16), where active users contribute relatively stable comments compared to outsiders ( INLINEFORM3 0), are also well-represented on Reddit.",
"To understand whether these variations manifest systematically across communities, we examine the relationship between divides in content affinity and community type. In particular, following the intuition that active users have a relatively high affinity for a community's niche, we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps. Indeed, we find a strong correlation between a community's distinctiveness and its specificity gap (Spearman's INLINEFORM0 0.34, INLINEFORM1 0.001).",
"We also find a strong correlation between distinctiveness and community volatility gaps (Spearman's INLINEFORM0 0.53, INLINEFORM1 0.001). In particular, we see that among the most distinctive communities (i.e., the top third of communities by distinctiveness), active users tend to write more volatile comments than outsiders (mean INLINEFORM2 0.098), while across the most generic communities (i.e., the bottom third), active users tend to write more stable comments (mean INLINEFORM3 -0.047, Mann-Whitney U test INLINEFORM4 0.001). The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community, without necessarily engaging users in the long term."
],
[
"Our language-based typology and analysis of user engagement draws on and contributes to several distinct research threads, in addition to the many foundational studies cited in the previous sections.",
"Multicommunity studies. Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups, such as email listservs. Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . In focusing on the linguistic content of communities, we extend this research by providing a content-based framework through which user engagement can be examined.",
"Reddit has been a particularly useful setting for studying multiple communities in prior work. Such studies have mostly focused on characterizing how individual users engage across a multi-community platform BIBREF34 , BIBREF35 , or on specific user engagement patterns such as loyalty to particular communities BIBREF22 . We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them.",
"Typologies of online communities. Prior attempts to typologize online communities have primarily been qualitative and based on hand-designed categories, making them difficult to apply at scale. These typologies often hinge on having some well-defined function the community serves, such as supporting a business or non-profit cause BIBREF36 , which can be difficult or impossible to identify in massive, anonymous multi-community settings. Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 , BIBREF38 , which are important but preclude analyzing differences between communities within the same multi-community platform. Similarly, previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 , or platform-specific affordances such as evaluation mechanisms BIBREF39 .",
"Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 , BIBREF41 . While the focus of those studies is to identify and characterize sub-communities within a larger social network, our typology provides a characterization of pre-defined communities based on the nature of their identity.",
"Broader work on collective identity. Our focus on community identity dovetails with a long line of research on collective identity and user engagement, in both online and offline communities BIBREF42 , BIBREF1 , BIBREF2 . These studies focus on individual-level psychological manifestations of collective (or social) identity, and their relationship to user engagement BIBREF42 , BIBREF43 , BIBREF44 , BIBREF0 .",
"In contrast, we seek to characterize community identities at an aggregate level and in an interpretable manner, with the goal of systematically organizing the diverse space of online communities. Typologies of this kind are critical to these broader, social-psychological studies of collective identity: they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities."
],
[
"Our current understanding of engagement patterns in online communities is patched up from glimpses offered by several disparate studies focusing on a few individual communities. This work calls into attention the need for a method to systematically reason about similarities and differences across communities. By proposing a way to structure the multi-community space, we find not only that radically contrasting engagement patterns emerge in different parts of this space, but also that this variation can be at least partly explained by the type of identity each community fosters.",
"Our choice in this work is to structure the multi-community space according to a typology based on community identity, as reflected in language use. We show that this effectively explains cross-community variation of three different user engagement measures—retention, acculturation and content affinity—and complements measures based on activity and size with additional interpretable information. For example, we find that in niche communities established members are more likely to engage with volatile content than outsiders, while the opposite is true in generic communities. Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities.",
"One main area of future research is to examine the temporal dynamics in the multi-community landscape. By averaging our measures of distinctiveness and dynamicity across time, our present study treated community identity as a static property. However, as communities experience internal changes and respond to external events, we can expect the nature of their identity to shift as well. For instance, the relative consistency of harrypotter may be disrupted by the release of a new novel, while Seahawks may foster different identities during and between football seasons. Conversely, a community's type may also mediate the impact of new events. Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities, and also provide a more nuanced view of user engagement—for instance, are communities more welcoming to newcomers at certain points in their lifecycle?",
"Another important avenue of future work is to explore other ways of mapping the landscape of online communities. For example, combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identity-based measures could further characterize and explain variations in user engagement patterns. Furthermore, extending the present analyses to even more diverse communities supported by different platforms (e.g., GitHub, StackExchange, Wikipedia) could enable the characterization of more complex behavioral patterns such as collaboration and altruism, which become salient in different multicommunity landscapes."
],
[
"The authors thank Liye Fu, Jack Hessel, David Jurgens and Lillian Lee for their helpful comments. This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell, NSF CNS-1010921, IIS-1149837, IIS-1514268 NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, DARPA NGS2, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen. "
]
],
"section_name": [
"Introduction",
"A typology of community identity",
"Overview and intuition",
"Language-based formalization",
"Community-level measures",
"Applying the typology to Reddit",
"Community identity and user retention",
"Community-type and monthly retention",
"Community-type and user tenure",
"Community identity and acculturation",
"Community identity and content affinity",
"Further related work",
"Conclusion and future work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"04ae0cc420f69540ca11707ab8ecc07a89f803f7",
"31d8f8ed7ba40b27c480f7caf7cfb48fba47bb07"
],
"answer": [
{
"evidence": [
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 )."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. "
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"8a080f37fbbb5c6700422a346b944ef535fa725b"
],
"answer": [
{
"evidence": [
"We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's INLINEFORM0 = 0.70, INLINEFORM1 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's INLINEFORM2 = 0.33, INLINEFORM3 0.001; Figure FIGREF11 .A, right).",
"As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's INLINEFORM0 = 0.41, INLINEFORM1 0.001, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's INLINEFORM2 = 0.03, INLINEFORM3 0.77; Figure FIGREF11 .B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content."
],
"extractive_spans": [],
"free_form_answer": "Dynamic communities have substantially higher rates of monthly user retention than more stable communities. More distinctive communities exhibit moderately higher monthly retention rates than more generic communities. There is also a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community - a short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content.\n",
"highlighted_evidence": [
"We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's INLINEFORM0 = 0.70, INLINEFORM1 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's INLINEFORM2 = 0.33, INLINEFORM3 0.001; Figure FIGREF11 .A, right).",
"As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's INLINEFORM0 = 0.41, INLINEFORM1 0.001, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's INLINEFORM2 = 0.03, INLINEFORM3 0.77; Figure FIGREF11 .B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"f64ff06cfd16f9bd339512a6e85f0a7bc8b670f4"
],
"answer": [
{
"evidence": [
"Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.",
"More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities."
],
"extractive_spans": [
"communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members",
"within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers "
],
"free_form_answer": "",
"highlighted_evidence": [
"We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.",
"More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2c804f9b9543e3b085fbd1fff87f0fde688f1484",
"78de92427e9e37b0dfdc19f57b735e65cec40e0a"
],
"answer": [
{
"evidence": [
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 )."
],
"extractive_spans": [],
"free_form_answer": "They selected all the subreddits from January 2013 to December 2014 with at least 500 words in the vocabulary and at least 4 months of the subreddit's history. They also removed communities with the bulk of the contributions are in foreign language.",
"highlighted_evidence": [
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 )."
],
"extractive_spans": [],
"free_form_answer": "They collect subreddits from January 2013 to December 2014,2 for which there are at\nleast 500 words in the vocabulary used to estimate the measures,\nin at least 4 months of the subreddit’s history. They compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language.",
"highlighted_evidence": [
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"62d30e963bf86e9b2d454adbd4b2c4dc3107cd11"
],
"answer": [
{
"evidence": [
"Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . We refer to a community whose language is relatively consistent throughout time as being stable."
],
"extractive_spans": [
"the average volatility of all utterances"
],
"free_form_answer": "",
"highlighted_evidence": [
". A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"21484dfac315192bb69aee597ebf5d100ff5925b"
],
"answer": [
{
"evidence": [
"Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 . We refer to a community with a less distinctive identity as being generic."
],
"extractive_spans": [
" the average specificity of all utterances"
],
"free_form_answer": "",
"highlighted_evidence": [
"A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"How do the various social phenomena examined manifest in different types of communities?",
"What patterns do they observe about how user engagement varies with the characteristics of a community?",
"How did the select the 300 Reddit communities for comparison?",
"How do the authors measure how temporally dynamic a community is?",
"How do the authors measure how distinctive a community is?"
],
"question_id": [
"003f884d3893532f8c302431c9f70be6f64d9be8",
"bb97537a0a7c8f12a3f65eba73cefa6abcd2f2b2",
"eea089baedc0ce80731c8fdcb064b82f584f483a",
"edb2d24d6d10af13931b3a47a6543bd469752f0c",
"938cf30c4f1d14fa182e82919e16072fdbcf2a82",
"93f4ad6568207c9bd10d712a52f8de25b3ebadd4"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: A: Within a community certain words are more community-specific and temporally volatile than others. For instance, words like onesies are highly specific to the BabyBumps community (top left corner), while words like easter are temporally ephemeral. B: Extending these word-level measures to communities, we can measure the overall distinctiveness and dynamicity of a community, which are highly associated with user retention rates (colored heatmap; see Section 3). Communities like Seahawks (a football team) and Cooking use highly distinctive language. Moreover, Seahawks uses very dynamic language, as the discussion continually shifts throughout the football season. In contrast, the content of Cooking remains stable over time, as does the content of pics; though these communities do have ephemeral fads, the overall themes discussed generally remain stable.",
"Table 1: Examples of communities on Reddit which occur at the extremes (top and bottom quartiles) of our typology.",
"Figure 2: A: The monthly retention rate for communities differs drastically according to their position in our identity-based typology, with dynamicity being the strongest signal of higher user retention (x-axes bin community-months by percentiles; in all subsequent plots, error bars indicate 95% bootstrapped confidence intervals). B: Dynamicity also correlates with long-term user retention, measured as the number of months the average user spends in the community; however, distinctiveness does not correlate with this longer-term variant of user retention.",
"Figure 3: A: There is substantial variation in the direction and magnitude of the acculturation gap, which quantifies the extent to which established members of a community are linguistically differentiated from outsiders. Among 60% of communities this gap is positive, indicating that established users match the community’s language more than outsiders. B: The size of the acculturation gap varies systematically according to how dynamic and distinctive a community is. Distinctive communities exhibit larger gaps; as do relatively stable, and very dynamic communities."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png"
]
} | [
"How do the various social phenomena examined manifest in different types of communities?",
"How did the select the 300 Reddit communities for comparison?"
] | [
[
"1705.09665-Community-type and user tenure-0",
"1705.09665-Community-type and monthly retention-0"
],
[
"1705.09665-Applying the typology to Reddit-3"
]
] | [
"Dynamic communities have substantially higher rates of monthly user retention than more stable communities. More distinctive communities exhibit moderately higher monthly retention rates than more generic communities. There is also a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community - a short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content.\n",
"They collect subreddits from January 2013 to December 2014,2 for which there are at\nleast 500 words in the vocabulary used to estimate the measures,\nin at least 4 months of the subreddit’s history. They compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language."
] | 2 |
1908.06606 | Question Answering based Clinical Text Structuring Using Pre-trained Language Model | Clinical text structuring is a critical and fundamental task for clinical research. Traditional methods such as task-specific end-to-end models and pipeline models usually suffer from the lack of datasets and from error propagation. In this paper, we present a question answering based clinical text structuring (QA-CTS) task to unify different specific tasks and make datasets shareable. A novel model that aims to introduce domain-specific features (e.g., clinical named entity information) into a pre-trained language model is also proposed for the QA-CTS task. Experimental results on Chinese pathology reports collected from Ruijing Hospital demonstrate that our presented QA-CTS task is very effective in improving the performance on specific tasks. Our proposed model also competes favorably with strong baseline models on specific tasks. | {
"paragraphs": [
[
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.",
"However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.",
"Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance.",
"To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.",
"We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and make dataset shareable. We also propose an effective model to integrate clinical named entity information into pre-trained language model.",
"Experimental results show that QA-CTS task leads to significant improvement due to shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we also show that two-stage training mechanism has a great improvement on QA-CTS task.",
"The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present question answer based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6."
],
[
"Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.",
"Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely extremely on heuristics and handcrafted extraction rules which is more of an art than a science and incurring extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers by dictionaries and several features of protein names. Wang et al. BIBREF1 developed some linguistic rules (i.e. normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique and expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. This kind of approach features its interpretability and easy modifiability. However, with the increase of the rule amount, supplementing new rules to existing system will turn to be a rule disaster.",
"Task-specific end-to-end methods BIBREF3, BIBREF4 use large amount of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five output. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models could be used to another task due to output format difference. This makes building a new model for a new task a costly job.",
"Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attributes extraction which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components like noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitter, named entity linking BIBREF19, BIBREF20, BIBREF21, relation extraction BIBREF22, BIBREF23. This kind of method focus on language itself, so it can handle tasks more general. However, as the depth of pipeline grows, it is obvious that error propagation will be more and more serious. In contrary, using less components to decrease the pipeline depth will lead to a poor performance. So the upper limit of this method depends mainly on the worst component."
],
[
"Recently, some works focused on pre-trained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27 which makes language model a shared model to all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language model. Peters et al. BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag which causes pretrain-finetune discrepancy that BERT is subject to.",
"The main motivation of introducing pre-trained language model is to solve the shortage of labeled data and polysemy problem. Although polysemy problem is not a common phenomenon in biomedical domain, shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT on large-scale biomedical unannotated data and achieved improvement on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT into multi-type named entity recognition and discovered new entities. Both of them demonstrates the usefulness of introducing pre-trained language model into biomedical domain."
],
[
"Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded to extract or generate a key-value pair where key $Q$ is typically a query term such as proximal resection margin and value $V$ is a result of query term $Q$ according to the paragraph text $X$.",
"Generally, researchers solve CTS problem in two steps. Firstly, the answer-related text is pick out. And then several steps such as entity names conversion and negative words recognition are deployed to generate the final answer. While final answer varies from task to task, which truly causes non-uniform output formats, finding the answer-related text is a common action among all tasks. Traditional methods regard both the steps as a whole. In this paper, we focus on finding the answer-related substring $Xs = <X_i, X_i+1, X_i+2, ... X_j> (1 <= i < j <= n)$ from paragraph text $X$. For example, given sentence UTF8gkai“远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm\" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and query UTF8gkai“上切缘距离\"(proximal resection margin), the answer should be 6.0cm which is located in original text from index 32 to 37. With such definition, it unifies the output format of CTS tasks and therefore make the training data shareable, in order to reduce the training data quantity requirement.",
"Since BERT BIBREF26 has already demonstrated the usefulness of shared model, we suppose extracting commonality of this problem and unifying the output format will make the model more powerful than dedicated model and meanwhile, for a specific clinical task, use the data for other tasks to supplement the training data."
],
[
"In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word."
],
[
"For any clinical free-text paragraph $X$ and query $Q$, contextualized representation is to generate the encoded vector of both of them. Here we use pre-trained language model BERT-base BIBREF26 model to capture contextual information.",
"The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For Chinese sentence, each word in this input will be mapped to a pre-trained embedding $e_i$. To tell the model $Q$ and $X$ is two different sentence, a sentence type input is generated which is a binary label sequence to denote what sentence each character in the input belongs to. Positional encoding and mask matrix is also constructed automatically to bring in absolute position information and eliminate the impact of zero padding respectively. Then a hidden vector $V_s$ which contains both query and text information is generated through BERT-base model."
],
[
"Since BERT is trained on general corpus, its performance on biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.",
"The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags. Each character of the original sentence will be tagged a label following a tag scheme. In this paper we recognize the entities by the model of our previous work BIBREF12 but trained on another corpus which has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of named entity information sequence is demonstrated in Table TABREF2. In Table TABREF2, UTF8gkai“远端胃切除\" is tagged as an operation, `11.5' is a number word and `cm' is an unit word. The named entity tag sequence is organized in one-hot type. We denote the sequence for clinical sentence and query term as $I_{nt}$ and $I_{nq}$, respectively."
],
[
"There are two ways to integrate two named entity information vectors $I_{nt}$ and $I_{nq}$ or hidden contextualized representation $V_s$ and named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first one is to concatenate them together because they have sequence output with a common dimension. The second one is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.",
"While for the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows where $h$ is the number of heads and $W_o$ is used to projects back the dimension of concatenated matrix.",
"$Attention$ denotes the traditional attention and it can be defined as follows.",
"where $d_k$ is the length of hidden vector."
],
[
"The final step is to use integrated representation $H_i$ to predict the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word. We use a feed forward network (FFN) to compress and calculate the score of each word $H_f$ which makes the dimension to $\\left\\langle l_s, 2\\right\\rangle $ where $l_s$ denotes the length of sequence.",
"Then we permute the two dimensions for softmax calculation. The calculation process of loss function can be defined as followed.",
"where $O_s = softmax(permute(H_f)_0)$ denotes the probability score of each word to be the start word and similarly $O_e = softmax(permute(H_f)_1)$ denotes the end. $y_s$ and $y_e$ denotes the true answer of the output for start word and end word respectively."
],
[
"Two-stage training mechanism is previously applied on bilinear model in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained at first for coarse-graind features while freezing the parameter of the other. Then unfreeze the other one and train the entire model in a low learning rate for fetching fine-grained features.",
"Inspired by this and due to the large amount of parameters in BERT model, to speed up the training process, we fine tune the BERT model with new prediction layer first to achieve a better contextualized representation performance. Then we deploy the proposed model and load the fine tuned BERT weights, attach named entity information layers and retrain the model."
],
[
"In this section, we devote to experimentally evaluating our proposed task and approach. The best results in tables are in bold."
],
[
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.",
"In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score metric is a looser metric measures the average overlap between the prediction and ground truth answer."
],
[
"To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to $5\\times 10^{-5}$. Batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts."
],
[
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23.",
"Table TABREF23 indicates that our proposed model achieved the best performance both in EM-score and F$_1$-score with EM-score of 91.84% and F$_1$-score of 93.75%. QANet outperformed BERT-Base with 3.56% score in F$_1$-score but underperformed it with 0.75% score in EM-score. Compared with BERT-Base, our model led to a 5.64% performance improvement in EM-score and 3.69% in F$_1$-score. Although our model didn't outperform much with QANet in F$_1$-score (only 0.13%), our model significantly outperformed it with 6.39% score in EM-score."
],
[
"To further investigate the effects of named entity information and two-stage training mechanism for our model, we apply ablation analysis to see the improvement brought by each of them, where $\\times $ refers to removing that part from our model.",
"As demonstrated in Table TABREF25, with named entity information enabled, two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without two-stage training mechanism, named entity information led to an improvement by 1.28% in EM-score but it also led to a weak deterioration by 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% score improvement in EM-score and a 3.69% score improvement in F$_1$-score. The experimental results show that both named entity information and two-stage training mechanism are helpful to our model."
],
[
"There are two methods to integrate named entity information into existing model, we experimentally compare these two integration methods. As named entity recognition has been applied on both pathology report text and query text, there will be two integration here. One is for two named entity information and the other is for contextualized representation and integrated named entity information. For multi-head attention BIBREF30, we set heads number $h = 16$ with 256-dimension hidden vector size for each head.",
"From Table TABREF27, we can observe that applying concatenation on both periods achieved the best performance on both EM-score and F$_1$-score. Unfortunately, applying multi-head attention on both period one and period two can not reach convergence in our experiments. This probably because it makes the model too complex to train. The difference on other two methods are the order of concatenation and multi-head attention. Applying multi-head attention on two named entity information $I_{nt}$ and $I_{nq}$ first achieved a better performance with 89.87% in EM-score and 92.88% in F$_1$-score. Applying Concatenation first can only achieve 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of hidden vectors and dataset size. BERT's output has been modified after many layers but named entity information representation is very close to input. With big amount of parameters in multi-head attention, it requires massive training to find out the optimal parameters. However, our dataset is significantly smaller than what pre-trained BERT uses. This probably can also explain why applying multi-head attention method on both periods can not converge.",
"Although Table TABREF27 shows the best integration method is concatenation, multi-head attention still has great potential. Due to the lack of computational resources, our experiment fixed the head number and hidden vector size. However, tuning these hyper parameters may have impact on the result. Tuning integration method and try to utilize larger datasets may give help to improving the performance."
],
[
"To investigate how shared task and shared model can benefit, we split our dataset by query types, train our proposed model with different datasets and demonstrate their performance on different datasets. Firstly, we investigate the performance on model without two-stage training and named entity information.",
"As indicated in Table TABREF30, The model trained by mixed data outperforms 2 of the 3 original tasks in EM-score with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% score in EM-score and 3.14% score in F$_1$-score but they were still above 90%. 0.69% and 0.37% score improvement in EM-score was brought by shared model for proximal and distal resection margin prediction. Meanwhile F$_1$-score for those two tasks declined 3.11% and 0.77% score.",
"Then we investigate the performance on model with two-stage training and named entity information. In this experiment, pre-training process only use the specific dataset not the mixed data. From Table TABREF31 we can observe that the performance on proximal and distal resection margin achieved the best performance on both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score. Meanwhile, the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Other performances also usually improved a lot. This proves the usefulness of two-stage training and named entity information as well.",
"Lastly, we fine tune the model for each task with a pre-trained parameter. Table TABREF32 summarizes the result. (Add some explanations for the Table TABREF32). Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters can significantly improve the model performance than task-specific data trained model. Except tumor size, the result was improved by 0.52% score in EM-score, 1.39% score in F$_1$-score for proximal resection margin and 2.6% score in EM-score, 2.96% score in F$_1$-score for distal resection margin. This proves mixed-data pre-trained parameters can lead to a great benefit for specific task. Meanwhile, the model performance on other tasks which are not trained in the final stage was also improved from around 0 to 60 or 70 percent. This proves that there is commonality between different tasks and our proposed QA-CTS task make this learnable. In conclusion, to achieve the best performance for a specific dataset, pre-training the model in multiple datasets and then fine tuning the model on the specific dataset is the best way."
],
[
"In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilize different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to QA-CTS task. Initially, sequential results of named entity recognition on both paragraph and query texts are integrated together. Contextualized representation on both paragraph and query texts are transformed by a pre-trained language model. Then, the integrated named entity information and contextualized representation are integrated together and fed into a feed forward network for final prediction. Experimental results on real-world dataset demonstrate that our proposed model competes favorably with strong baseline models in all three specific tasks. The shared task and shared model introduced by QA-CTS task has also been proved to be useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance for a specific dataset is to pre-train the model in multiple datasets and then fine tune it on the specific dataset."
],
[
"We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research\" (No. 2018YFC0910500)."
]
],
"section_name": [
"Introduction",
"Related Work ::: Clinical Text Structuring",
"Related Work ::: Pre-trained Language Model",
"Question Answering based Clinical Text Structuring",
"The Proposed Model for QA-CTS Task",
"The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text",
"The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information",
"The Proposed Model for QA-CTS Task ::: Integration Method",
"The Proposed Model for QA-CTS Task ::: Final Prediction",
"The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism",
"Experimental Studies",
"Experimental Studies ::: Dataset and Evaluation Metrics",
"Experimental Studies ::: Experimental Settings",
"Experimental Studies ::: Comparison with State-of-the-art Methods",
"Experimental Studies ::: Ablation Analysis",
"Experimental Studies ::: Comparisons Between Two Integration Methods",
"Experimental Studies ::: Data Integration Analysis",
"Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"0ab604dbe114dba174da645cc06a713e12a1fd9d",
"1f1495d06d0abe86ee52124ec9f2f0b25a536147"
],
"answer": [
{
"evidence": [
"To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to $5\\times 10^{-5}$. Batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts."
],
"extractive_spans": [
"Chinese general corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0de2087bf0e46b14042de2a6e707bbf544a04556",
"c14d9acff1d3e6f47901e7104a7f01a10a727050"
],
"answer": [
{
"evidence": [
"Experimental Studies ::: Comparison with State-of-the-art Methods",
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23."
],
"extractive_spans": [
"BERT-Base",
"QANet"
],
"free_form_answer": "",
"highlighted_evidence": [
"Experimental Studies ::: Comparison with State-of-the-art Methods\nSince BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23.",
"FLOAT SELECTED: TABLE III COMPARATIVE RESULTS BETWEEN BERT AND OUR PROPOSED MODEL"
],
"extractive_spans": [
"QANet BIBREF39",
"BERT-Base BIBREF26"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. ",
"FLOAT SELECTED: TABLE III COMPARATIVE RESULTS BETWEEN BERT AND OUR PROPOSED MODEL"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"6d56080358bb7f22dd764934ffcd6d4e93fef0b2",
"da233cce57e642941da2446d3e053349c2ab1a15"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Fig. 1. An illustrative example of QA-CTS task.",
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.",
"To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows."
],
"extractive_spans": [
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained.",
"Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. "
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Fig. 1. An illustrative example of QA-CTS task.",
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.",
"Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.",
"However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.",
"To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows."
],
"extractive_spans": [],
"free_form_answer": "CTS is extracting structural data from medical research data (unstructured). Authors define QA-CTS task that aims to discover most related text from original text.",
"highlighted_evidence": [
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly.",
"However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size).",
"To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7138d812ea70084e7610e5a2422039da1404afd7",
"b732d5561babcf37393ebf6cbb051d04b0b66bd5"
],
"answer": [
{
"evidence": [
"To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.",
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.",
"In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilize different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to QA-CTS task. Initially, sequential results of named entity recognition on both paragraph and query texts are integrated together. Contextualized representation on both paragraph and query texts are transformed by a pre-trained language model. Then, the integrated named entity information and contextualized representation are integrated together and fed into a feed forward network for final prediction. Experimental results on real-world dataset demonstrate that our proposed model competes favorably with strong baseline models in all three specific tasks. The shared task and shared model introduced by QA-CTS task has also been proved to be useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance for a specific dataset is to pre-train the model in multiple datasets and then fine tune it on the specific dataset."
],
"extractive_spans": [
" three types of questions, namely tumor size, proximal resection margin and distal resection margin"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data.",
"All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. ",
"Experimental results on real-world dataset demonstrate that our proposed model competes favorably with strong baseline models in all three specific tasks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1d4d4965fd44fefbfed0b3267ef5875572994b66"
],
"answer": [
{
"evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20."
],
"extractive_spans": [],
"free_form_answer": "the dataset consists of pathology reports including sentences and questions and answers about tumor size and resection margins so it does include additional sentences ",
"highlighted_evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"229cc59d1545c9e8f47d43053465e2dfd1b763cc"
],
"answer": [
{
"evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20."
],
"extractive_spans": [],
"free_form_answer": "2,714 ",
"highlighted_evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"e2fe2a3438f28758724d992502a44615051eda90"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"2a73264b743b6dd183c200f7dcd04aed4029f015"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"5f125408e657282669f90a1866d8227c0f94332e"
],
"answer": [
{
"evidence": [
"We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and make dataset shareable. We also propose an effective model to integrate clinical named entity information into pre-trained language model.",
"In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word."
],
"extractive_spans": [
"integrate clinical named entity information into pre-trained language model"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also propose an effective model to integrate clinical named entity information into pre-trained language model.",
"In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e70d8110563d53282f1a26e823d27e6f235772db"
]
},
{
"annotation_id": [
"24c7023a5221b509d34dd6703d6e0607b2777e78"
],
"answer": [
{
"evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20."
],
"extractive_spans": [
"17,833 sentences, 826,987 characters and 2,714 question-answer pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e70d8110563d53282f1a26e823d27e6f235772db"
]
},
{
"annotation_id": [
"d046d9ea83c5ffe607465e2fbc8817131c11e037"
],
"answer": [
{
"evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20."
],
"extractive_spans": [
"17,833 sentences, 826,987 characters and 2,714 question-answer pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e70d8110563d53282f1a26e823d27e6f235772db"
]
},
{
"annotation_id": [
"b3a3d6e707a67bab827053b40e446f30e416887f"
],
"answer": [
{
"evidence": [
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23."
],
"extractive_spans": [
"state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e70d8110563d53282f1a26e823d27e6f235772db"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"five",
"five",
"five",
"five",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What data is the language model pretrained on?",
"What baselines is the proposed model compared against?",
"How is the clinical text structuring task defined?",
"What are the specific tasks being unified?",
"Is all text in this dataset a question, or are there unrelated sentences in between questions?",
"How many questions are in the dataset?",
"What is the perWhat are the tasks evaluated?",
"Are there privacy concerns with clinical data?",
"How they introduce domain-specific features into pre-trained language model?",
"How big is QA-CTS task dataset?",
"How big is dataset of pathology reports collected from Ruijing Hospital?",
"What are strong baseline models in specific tasks?"
],
"question_id": [
"71a7153e12879defa186bfb6dbafe79c74265e10",
"85d1831c28d3c19c84472589a252e28e9884500f",
"1959e0ebc21fafdf1dd20c6ea054161ba7446f61",
"77cf4379106463b6ebcb5eb8fa5bb25450fa5fb8",
"06095a4dee77e9a570837b35fc38e77228664f91",
"19c9cfbc4f29104200393e848b7b9be41913a7ac",
"6743c1dd7764fc652cfe2ea29097ea09b5544bc3",
"14323046220b2aea8f15fba86819cbccc389ed8b",
"08a5f8d36298b57f6a4fcb4b6ae5796dc5d944a4",
"975a4ac9773a4af551142c324b64a0858670d06e",
"326e08a0f5753b90622902bd4a9c94849a24b773",
"bd78483a746fda4805a7678286f82d9621bc45cf"
],
"question_writer": [
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"question answering",
"question answering",
"question answering",
"question answering",
"Question Answering",
"Question Answering",
"Question Answering",
"Question Answering",
"",
"",
"",
""
],
"topic_background": [
"research",
"research",
"research",
"research",
"familiar",
"familiar",
"familiar",
"familiar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. An illustrative example of QA-CTS task.",
"TABLE I AN ILLUSTRATIVE EXAMPLE OF NAMED ENTITY FEATURE TAGS",
"Fig. 2. The architecture of our proposed model for QA-CTS task",
"TABLE II STATISTICS OF DIFFERENT TYPES OF QUESTION ANSWER INSTANCES",
"TABLE V COMPARATIVE RESULTS FOR DIFFERENT INTEGRATION METHOD OF OUR PROPOSED MODEL",
"TABLE III COMPARATIVE RESULTS BETWEEN BERT AND OUR PROPOSED MODEL",
"TABLE VI COMPARATIVE RESULTS FOR DATA INTEGRATION ANALYSIS (WITHOUT TWO-STAGE TRAINING AND NAMED ENTITY INFORMATION)",
"TABLE VII COMPARATIVE RESULTS FOR DATA INTEGRATION ANALYSIS (WITH TWO-STAGE TRAINING AND NAMED ENTITY INFORMATION)",
"TABLE VIII COMPARATIVE RESULTS FOR DATA INTEGRATION ANALYSIS (USING MIXED-DATA PRE-TRAINED PARAMETERS)"
],
"file": [
"1-Figure1-1.png",
"2-TableI-1.png",
"3-Figure2-1.png",
"4-TableII-1.png",
"5-TableV-1.png",
"5-TableIII-1.png",
"6-TableVI-1.png",
"6-TableVII-1.png",
"6-TableVIII-1.png"
]
} | [
"How is the clinical text structuring task defined?",
"Is all text in this dataset a question, or are there unrelated sentences in between questions?",
"How many questions are in the dataset?"
] | [
[
"1908.06606-Introduction-0",
"1908.06606-1-Figure1-1.png",
"1908.06606-Introduction-1",
"1908.06606-Introduction-3"
],
[
"1908.06606-Experimental Studies ::: Dataset and Evaluation Metrics-0"
],
[
"1908.06606-Experimental Studies ::: Dataset and Evaluation Metrics-0"
]
] | [
"CTS is extracting structural data from medical research data (unstructured). Authors define QA-CTS task that aims to discover most related text from original text.",
"the dataset consists of pathology reports including sentences and questions and answers about tumor size and resection margins so it does include additional sentences ",
"2,714 "
] | 3 |
1811.00942 | Progress and Tradeoffs in Neural Language Models | In recent years, we have witnessed a dramatic shift towards techniques driven by neural networks for a variety of NLP tasks. Undoubtedly, neural language models (NLMs) have reduced perplexity by impressive amounts. This progress, however, comes at a substantial cost in performance, in terms of inference latency and energy consumption, which is particularly of concern in deployments on mobile devices. This paper, which examines the quality-performance tradeoff of various language modeling techniques, represents, to our knowledge, the first to make this observation. We compare state-of-the-art NLMs with "classic" Kneser-Ney (KN) LMs in terms of energy usage, latency, perplexity, and prediction accuracy using two standard benchmarks. On a Raspberry Pi, we find that orders of increase in latency and energy usage correspond to less change in perplexity, while the difference is much less pronounced on a desktop. | {
"paragraphs": [
[
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 .",
"Specifically focused on language modeling, this paper examines an issue that to our knowledge has not been explored: advances in neural language models have come at a significant cost in terms of increased computational complexity. Computing the probability of a token sequence using non-neural techniques requires a number of phrase lookups and perhaps a few arithmetic operations, whereas model inference with NLMs require large matrix multiplications consuming perhaps millions of floating point operations (FLOPs). These performance tradeoffs are worth discussing.",
"In truth, language models exist in a quality–performance tradeoff space. As model quality increases (e.g., lower perplexity), performance as measured in terms of energy consumption, query latency, etc. tends to decrease. For applications primarily running in the cloud—say, machine translation—practitioners often solely optimize for the lowest perplexity. This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment.",
"There are, however, applications of NLMs that require less one-sided optimizations. On mobile devices such as smartphones and tablets, for example, NLMs may be integrated into software keyboards for next-word prediction, allowing much faster text entry. Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype. The greater computational costs of NLMs lead to higher energy usage in model inference, translating into shorter battery life.",
"In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performances tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\\times $ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\\times $ longer and requires 32 $\\times $ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguable a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point."
],
[
" BIBREF3 evaluate recent neural language models; however, their focus is not on the computational footprint of each model, but rather the perplexity. To further reduce perplexity, many neural language model extensions exist, such as continuous cache pointer BIBREF5 and mixture of softmaxes BIBREF6 . Since our focus is on comparing “core” neural and non-neural approaches, we disregard these extra optimizations techniques in all of our models.",
"Other work focus on designing lightweight models for resource-efficient inference on mobile devices. BIBREF7 explore LSTMs BIBREF8 with binary weights for language modeling; BIBREF9 examine shallow feedforward neural networks for natural language processing.",
"AWD-LSTM. BIBREF4 show that a simple three-layer LSTM, with proper regularization and optimization techniques, can achieve state of the art on various language modeling datasets, surpassing more complex models. Specifically, BIBREF4 apply randomized backpropagation through time, variational dropout, activation regularization, embedding dropout, and temporal activation regularization. A novel scheduler for optimization, non-monotonically triggered ASGD (NT-ASGD) is also introduced. BIBREF4 name their three-layer LSTM model trained with such tricks, “AWD-LSTM.”",
"Quasi-Recurrent Neural Networks. Quasi-recurrent neural networks (QRNNs; BIBREF10 ) achieve current state of the art in word-level language modeling BIBREF11 . A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights, and a recurrent pooling layer. Given an input $\\mathbf {X} \\in \\mathbb {R}^{k \\times n}$ , the convolution layer is $\n\\mathbf {Z} = \\tanh (\\mathbf {W}_z \\cdot \\mathbf {X})\\\\\n\\mathbf {F} = \\sigma (\\mathbf {W}_f \\cdot \\mathbf {X})\\\\\n\\mathbf {O} = \\sigma (\\mathbf {W}_o \\cdot \\mathbf {X})\n$ ",
"where $\\sigma $ denotes the sigmoid function, $\\cdot $ represents masked convolution across time, and $\\mathbf {W}_{\\lbrace z, f, o\\rbrace } \\in \\mathbb {R}^{m \\times k \\times r}$ are convolution weights with $k$ input channels, $m$ output channels, and a window size of $r$ . In the recurrent pooling layer, the convolution outputs are combined sequentially: $\n\\mathbf {c}_t &= \\mathbf {f}_t \\odot \\mathbf {c}_{t-1} + (1 -\n\\mathbf {f}_t) \\odot \\mathbf {z}_t\\\\\n\\mathbf {h}_t &= \\mathbf {o}_t \\odot \\mathbf {c}_t\n$ ",
"Multiple QRNN layers can be stacked for deeper hierarchical representation, with the output $\\mathbf {h}_{1:t}$ being fed as the input into the subsequent layer: In language modeling, a four-layer QRNN is a standard architecture BIBREF11 .",
"Perplexity–Recall Scale. Word-level perplexity does not have a strictly monotonic relationship with recall-at- $k$ , the fraction of top $k$ predictions that contain the correct word. A given R@ $k$ imposes a weak minimum perplexity constraint—there are many free parameters that allow for large variability in the perplexity given a certain R@ $k$ . Consider the corpus, “choo choo train,” with an associated unigram model $P(\\text{``choo''}) = 0.1$ , $P(\\text{``train''}) = 0.9$ , resulting in an R@1 of $1/3$ and perplexity of $4.8$ . Clearly, R@1 $ =1/3$ for all $P(\\text{``choo''}) \\le 0.5$ ; thus, perplexity can drop as low as 2 without affecting recall."
],
[
"We conducted our experiments on Penn Treebank (PTB; BIBREF12 ) and WikiText-103 (WT103; BIBREF13 ). Preprocessed by BIBREF14 , PTB contains 887K tokens for training, 70K for validation, and 78K for test, with a vocabulary size of 10,000. On the other hand, WT103 comprises 103 million tokens for training, 217K for validation, and 245K for test, spanning a vocabulary of 267K unique tokens.",
"For the neural language model, we used a four-layer QRNN BIBREF10 , which achieves state-of-the-art results on a variety of datasets, such as WT103 BIBREF11 and PTB. To compare against more common LSTM architectures, we also evaluated AWD-LSTM BIBREF4 on PTB. For the non-neural approach, we used a standard five-gram model with modified Kneser-Ney smoothing BIBREF15 , as explored in BIBREF16 on PTB. We denote the QRNN models for PTB and WT103 as ptb-qrnn and wt103-qrnn, respectively.",
"For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set."
],
[
"The QRNN models followed the exact training procedure and architecture delineated in the official codebase from BIBREF11 . For ptb-qrnn, we trained the model for 550 epochs using NT-ASGD BIBREF4 , then finetuned for 300 epochs using ASGD BIBREF17 , all with a learning rate of 30 throughout. For wt103-qrnn, we followed BIBREF11 and trained the QRNN for 14 epochs, using the Adam optimizer with a learning rate of $10^{-3}$ . We also applied regularization techniques from BIBREF4 ; all the specific hyperparameters are the same as those in the repository. Our model architecture consists of 400-dimensional tied embedding weights BIBREF18 and four QRNN layers, with 1550 hidden units per layer on PTB and 2500 per layer on WT103. Both QRNN models have window sizes of $r=2$ for the first layer and $r=1$ for the rest.",
"For the KN-5 model, we trained an off-the-shelf five-gram model using the popular SRILM toolkit BIBREF19 . We did not specify any special hyperparameters."
],
[
"We trained the QRNNs with PyTorch (0.4.0; commit 1807bac) on a Titan V GPU. To evaluate the models under a resource-constrained environment, we deployed them on a Raspberry Pi 3 (Model B) running Raspbian Stretch (4.9.41-v7+). The Raspberry Pi (RPi) is not only a standard platform, but also a close surrogate to mobile phones, using the same Cortex-A7 in many phones. We then transferred the trained models to the RPi, using the same frameworks for evaluation. We plugged the RPi into a Watts Up Pro meter, a power meter that can be read programatically over USB at a frequency of 1 Hz. For the QRNNs, we used the first 350 words of the test set, and averaged the ms/query and mJ/query. For KN-5, we used the entire test set for evaluation, since the latency was much lower. To adjust for the base power load, we subtracted idle power draw from energy usage.",
"For a different perspective, we further evaluated all the models under a desktop environment, using an i7-4790k CPU and Titan V GPU. Because the base power load for powering a desktop is much higher than running neural language models, we collected only latency statistics. We used the entire test set, since the QRNN runs quickly.",
"In addition to energy and latency, another consideration for the NLP developer selecting an operating point is the cost of underlying hardware. For our setup, the RPi costs $35 USD, the CPU costs $350 USD, and the GPU costs $3000 USD."
],
[
"To demonstrate the effectiveness of the QRNN models, we present the results of past and current state-of-the-art neural language models in Table 1 ; we report the Skip- and AWD-LSTM results as seen in the original papers, while we report our QRNN results. Skip LSTM denotes the four-layer Skip LSTM in BIBREF3 . BIBREF20 focus on Hebbian softmax, a model extension technique—Rae-LSTM refers to their base LSTM model without any extensions. In our results, KN-5 refers to the traditional five-gram model with modified Kneser-Ney smoothing, and AWD is shorthand for AWD-LSTM.",
"Perplexity–recall scale. In Figure 1 , using KN-5 as the model, we plot the log perplexity (cross entropy) and R@3 error ( $1 - \\text{R@3}$ ) for every sentence in PTB and WT103. The horizontal clusters arise from multiple perplexity points representing the same R@3 value, as explained in Section \"Infrastructure\" . We also observe that the perplexity–recall scale is non-linear—instead, log perplexity appears to have a moderate linear relationship with R@3 error on PTB ( $r=0.85$ ), and an even stronger relationship on WT103 ( $r=0.94$ ). This is partially explained by WT103 having much longer sentences, and thus less noisy statistics.",
"From Figure 1 , we find that QRNN models yield strongly linear log perplexity–recall plots as well, where $r=0.88$ and $r=0.93$ for PTB and WT103, respectively. Note that, due to the improved model quality over KN-5, the point clouds are shifted downward compared to Figure 1 . We conclude that log perplexity, or cross entropy, provides a more human-understandable indicator of R@3 than perplexity does. Overall, these findings agree with those from BIBREF21 , which explores the log perplexity–word error rate scale in language modeling for speech recognition.",
"Quality–performance tradeoff. In Table 2 , from left to right, we report perplexity results on the validation and test sets, R@3 on test, and finally per-query latency and energy usage. On the RPi, KN-5 is both fast and power-efficient to run, using only about 7 ms/query and 6 mJ/query for PTB (Table 2 , row 1), and 264 ms/q and 229 mJ/q on WT103 (row 5). Taking 220 ms/query and consuming 300 mJ/query, AWD-LSTM and ptb-qrnn are still viable for mobile phones: The modern smartphone holds upwards of 10,000 joules BIBREF22 , and the latency is within usability standards BIBREF23 . Nevertheless, the models are still 49 $\\times $ slower and 32 $\\times $ more power-hungry than KN-5. The wt103-qrnn model is completely unusable on phones, taking over 1.2 seconds per next-word prediction. Neural models achieve perplexity drops of 60–80% and R@3 increases of 22–34%, but these improvements come at a much higher cost in latency and energy usage.",
"In Table 2 (last two columns), the desktop yields very different results: the neural models on PTB (rows 2–3) are 9 $\\times $ slower than KN-5, but the absolute latency is only 8 ms/q, which is still much faster than what humans perceive as instantaneous BIBREF23 . If a high-end commodity GPU is available, then the models are only twice as slow as KN-5 is. From row 5, even better results are noted with wt103-qrnn: On the CPU, the QRNN is only 60% slower than KN-5 is, while the model is faster by 11 $\\times $ on a GPU. These results suggest that, if only latency is considered under a commodity desktop environment, the QRNN model is humanly indistinguishable from the KN-5 model, even without using GPU acceleration."
],
[
"In the present work, we describe and examine the tradeoff space between quality and performance for the task of language modeling. Specifically, we explore the quality–performance tradeoffs between KN-5, a non-neural approach, and AWD-LSTM and QRNN, two neural language models. We find that with decreased perplexity comes vastly increased computational requirements: In one of the NLMs, a perplexity reduction by 2.5 $\\times $ results in a 49 $\\times $ rise in latency and 32 $\\times $ increase in energy usage, when compared to KN-5."
]
],
"section_name": [
"Introduction",
"Background and Related Work",
"Experimental Setup",
"Hyperparameters and Training",
"Infrastructure",
"Results and Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"c17796e0bd3bfcc64d5a8e844d23d8d39274af6b"
],
"answer": [
{
"evidence": [
"For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set."
],
"extractive_spans": [],
"free_form_answer": "Quality measures using perplexity and recall, and performance measured using latency and energy usage. ",
"highlighted_evidence": [
"For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"715840b32a89c33e0a1de1ab913664eb9694bd34"
],
"answer": [
{
"evidence": [
"In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performances tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\\times $ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\\times $ longer and requires 32 $\\times $ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguable a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point."
],
"extractive_spans": [
"Kneser–Ney smoothing"
],
"free_form_answer": "",
"highlighted_evidence": [
"Kneser–Ney smoothing",
"In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"062dcccfdfb5af1c6ee886885703f9437d91a9dc",
"1cc952fc047d0bb1a961c3ce65bada2e983150d1"
],
"answer": [
{
"evidence": [
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
],
"extractive_spans": [
"perplexity"
],
"free_form_answer": "",
"highlighted_evidence": [
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
],
"extractive_spans": [
"perplexity"
],
"free_form_answer": "",
"highlighted_evidence": [
"recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What aspects have been compared between various language models?",
"what classic language models are mentioned in the paper?",
"What is a commonly used evaluation metric for language models?"
],
"question_id": [
"dd155f01f6f4a14f9d25afc97504aefdc6d29c13",
"a9d530d68fb45b52d9bad9da2cd139db5a4b2f7c",
"e07df8f613dbd567a35318cd6f6f4cb959f5c82d"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Comparison of neural language models on Penn Treebank and WikiText-103.",
"Figure 1: Log perplexity–recall error with KN-5.",
"Figure 2: Log perplexity–recall error with QRNN.",
"Table 2: Language modeling results on performance and model quality."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"4-Figure2-1.png",
"4-Table2-1.png"
]
} | [
"What aspects have been compared between various language models?"
] | [
[
"1811.00942-Experimental Setup-2"
]
] | [
"Quality measures using perplexity and recall, and performance measured using latency and energy usage. "
] | 4 |
1907.05664 | Saliency Maps Generation for Automatic Text Summarization | Saliency map generation techniques are at the forefront of explainable AI literature for a broad range of machine learning applications. Our goal is to question the limits of these approaches on more complex tasks. In this paper we apply Layer-Wise Relevance Propagation (LRP) to a sequence-to-sequence attention model trained on a text summarization dataset. We obtain unexpected saliency maps and discuss the rightfulness of these"explanations". We argue that we need a quantitative way of testing the counterfactual case to judge the truthfulness of the saliency maps. We suggest a protocol to check the validity of the importance attributed to the input and show that the saliency maps obtained sometimes capture the real use of the input features by the network, and sometimes do not. We use this example to discuss how careful we need to be when accepting them as explanation. | {
"paragraphs": [
[
"Ever since the LIME algorithm BIBREF0 , \"explanation\" techniques focusing on finding the importance of input features in regard of a specific prediction have soared and we now have many ways of finding saliency maps (also called heat-maps because of the way we like to visualize them). We are interested in this paper by the use of such a technique in an extreme task that highlights questions about the validity and evaluation of the approach. We would like to first set the vocabulary we will use. We agree that saliency maps are not explanations in themselves and that they are more similar to attribution, which is only one part of the human explanation process BIBREF1 . We will prefer to call this importance mapping of the input an attribution rather than an explanation. We will talk about the importance of the input relevance score in regard to the model's computation and not make allusion to any human understanding of the model as a result.",
"There exist multiple ways to generate saliency maps over the input for non-linear classifiers BIBREF2 , BIBREF3 , BIBREF4 . We refer the reader to BIBREF5 for a survey of explainable AI in general. We use in this paper Layer-Wise Relevance Propagation (LRP) BIBREF2 which aims at redistributing the value of the classifying function on the input to obtain the importance attribution. It was first created to “explain\" the classification of neural networks on image recognition tasks. It was later successfully applied to text using convolutional neural networks (CNN) BIBREF6 and then Long-Short Term Memory (LSTM) networks for sentiment analysis BIBREF7 .",
"Our goal in this paper is to test the limits of the use of such a technique for more complex tasks, where the notion of input importance might not be as simple as in topic classification or sentiment analysis. We changed from a classification task to a generative task and chose a more complex one than text translation (in which we can easily find a word to word correspondence/importance between input and output). We chose text summarization. We consider abstractive and informative text summarization, meaning that we write a summary “in our own words\" and retain the important information of the original text. We refer the reader to BIBREF8 for more details on the task and the different variants that exist. Since the success of deep sequence-to-sequence models for text translation BIBREF9 , the same approaches have been applied to text summarization tasks BIBREF10 , BIBREF11 , BIBREF12 which use architectures on which we can apply LRP.",
"We obtain one saliency map for each word in the generated summaries, supposed to represent the use of the input features for each element of the output sequence. We observe that all the saliency maps for a text are nearly identical and decorrelated with the attention distribution. We propose a way to check their validity by creating what could be seen as a counterfactual experiment from a synthesis of the saliency maps, using the same technique as in Arras et al. Arras2017. We show that in some but not all cases they help identify the important input features and that we need to rigorously check importance attributions before trusting them, regardless of whether or not the mapping “makes sense\" to us. We finally argue that in the process of identifying the important input features, verifying the saliency maps is as important as the generation step, if not more."
],
[
"We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it."
],
[
"The CNN/Daily mail dataset BIBREF12 is a text summarization dataset adapted from the Deepmind question-answering dataset BIBREF13 . It contains around three hundred thousand news articles coupled with summaries of about three sentences. These summaries are in fact “highlights\" of the articles provided by the media themselves. Articles have an average length of 780 words and the summaries of 50 words. We had 287 000 training pairs and 11 500 test pairs. Similarly to See et al. See2017, we limit during training and prediction the input text to 400 words and generate summaries of 200 words. We pad the shorter texts using an UNKNOWN token and truncate the longer texts. We embed the texts and summaries using a vocabulary of size 50 000, thus recreating the same parameters as See et al. See2017."
],
[
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
[
"We train the 21 350 992 parameters of the network for about 60 epochs until we achieve results that are qualitatively equivalent to the results of See et al. See2017. We obtain summaries that are broadly relevant to the text but do not match the target summaries very well. We observe the same problems such as wrong reproduction of factual details, replacing rare words with more common alternatives or repeating non-sense after the third sentence. We can see in Figure 1 an example of summary obtained compared to the target one.",
"The “summaries\" we generate are far from being valid summaries of the information in the texts but are sufficient to look at the attribution that LRP will give us. They pick up the general subject of the original text."
],
[
"We present in this section the Layer-Wise Relevance Propagation (LRP) BIBREF2 technique that we used to attribute importance to the input features, together with how we adapted it to our model and how we generated the saliency maps. LRP redistributes the output of the model from the output layer to the input by transmitting information backwards through the layers. We call this propagated backwards importance the relevance. LRP has the particularity to attribute negative and positive relevance: a positive relevance is supposed to represent evidence that led to the classifier's result while negative relevance represents evidence that participated negatively in the prediction."
],
[
"We initialize the relevance of the output layer to the value of the predicted class before softmax and we then describe locally the propagation backwards of the relevance from layer to layer. For normal neural network layers we use the form of LRP with epsilon stabilizer BIBREF2 . We write down $R_{i\\leftarrow j}^{(l, l+1)}$ the relevance received by the neuron $i$ of layer $l$ from the neuron $j$ of layer $l+1$ : ",
"$$\\begin{split}\n\nR_{i\\leftarrow j}^{(l, l+1)} &= \\dfrac{w_{i\\rightarrow j}^{l,l+1}\\textbf {z}^l_i + \\dfrac{\\epsilon \\textrm { sign}(\\textbf {z}^{l+1}_j) + \\textbf {b}^{l+1}_j}{D_l}}{\\textbf {z}^{l+1}_j + \\epsilon * \\textrm { sign}(\\textbf {z}^{l+1}_j)} * R_j^{l+1} \\\\\n\\end{split}$$ (Eq. 7) ",
"where $w_{i\\rightarrow j}^{l,l+1}$ is the network's weight parameter set during training, $\\textbf {b}^{l+1}_j$ is the bias for neuron $j$ of layer $l+1$ , $\\textbf {z}^{l}_i$ is the activation of neuron $i$ on layer $l$ , $\\epsilon $ is the stabilizing term set to 0.00001 and $D_l$ is the dimension of the $l$ -th layer.",
"The relevance of a neuron is then computed as the sum of the relevance he received from the above layer(s).",
"For LSTM cells we use the method from Arras et al.Arras2017 to solve the problem posed by the element-wise multiplications of vectors. Arras et al. noted that when such computation happened inside an LSTM cell, it always involved a “gate\" vector and another vector containing information. The gate vector containing only value between 0 and 1 is essentially filtering the second vector to allow the passing of “relevant\" information. Considering this, when we propagate relevance through an element-wise multiplication operation, we give all the upper-layer's relevance to the “information\" vector and none to the “gate\" vector."
],
[
"We use the same method to transmit relevance through the attention mechanism back to the encoder because Bahdanau's attention BIBREF9 uses element-wise multiplications as well. We depict in Figure 2 the transmission end-to-end from the output layer to the input through the decoder, attention mechanism and then the bidirectional encoder. We then sum up the relevance on the word embedding to get the token's relevance as Arras et al. Arras2017.",
"The way we generate saliency maps differs a bit from the usual context in which LRP is used as we essentially don't have one classification, but 200 (one for each word in the summary). We generate a relevance attribution for the 50 first words of the generated summary as after this point they often repeat themselves.",
"This means that for each text we obtain 50 different saliency maps, each one supposed to represent the relevance of the input for a specific generated word in the summary."
],
[
"In this section, we present our results from extracting attributions from the sequence-to-sequence model trained for abstractive text summarization. We first have to discuss the difference between the 50 different saliency maps we obtain and then we propose a protocol to validate the mappings."
],
[
"The first observation that is made is that for one text, the 50 saliency maps are almost identical. Indeed each mapping highlights mainly the same input words with only slight variations of importance. We can see in Figure 3 an example of two nearly identical attributions for two distant and unrelated words of the summary. The saliency map generated using LRP is also uncorrelated with the attention distribution that participated in the generation of the output word. The attention distribution changes drastically between the words in the generated summary while not impacting significantly the attribution over the input text. We deleted in an experiment the relevance propagated through the attention mechanism to the encoder and didn't observe much changes in the saliency map.",
"It can be seen as evidence that using the attention distribution as an “explanation\" of the prediction can be misleading. It is not the only information received by the decoder and the importance it “allocates\" to this attention state might be very low. What seems to happen in this application is that most of the information used is transmitted from the encoder to the decoder and the attention mechanism at each decoding step just changes marginally how it is used. Quantifying the difference between attention distribution and saliency map across multiple tasks is a possible future work.",
"The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video\" highlighted in the input text, which seems to be important for the output.",
"This allows us to question how good the saliency maps are in the sense that we question how well they actually represent the network's use of the input features. We will call that truthfulness of the attribution in regard to the computation, meaning that an attribution is truthful in regard to the computation if it actually highlights the important input features that the network attended to during prediction. We proceed to measure the truthfulness of the attributions by validating them quantitatively."
],
[
"We propose to validate the saliency maps in a similar way as Arras et al. Arras2017 by incrementally deleting “important\" words from the input text and observe the change in the resulting generated summaries.",
"We first define what “important\" (and “unimportant\") input words mean across the 50 saliency maps per texts. Relevance transmitted by LRP being positive or negative, we average the absolute value of the relevance across the saliency maps to obtain one ranking of the most “relevant\" words. The idea is that input words with negative relevance have an impact on the resulting generated word, even if it is not participating positively, while a word with a relevance close to zero should not be important at all. We did however also try with different methods, like averaging the raw relevance or averaging a scaled absolute value where negative relevance is scaled down by a constant factor. The absolute value average seemed to deliver the best results.",
"We delete incrementally the important words (words with the highest average) in the input and compared it to the control experiment that consists of deleting the least important word and compare the degradation of the resulting summaries. We obtain mitigated results: for some texts, we observe a quick degradation when deleting important words which are not observed when deleting unimportant words (see Figure 4 ), but for other test examples we don't observe a significant difference between the two settings (see Figure 5 ).",
"One might argue that the second summary in Figure 5 is better than the first one as it makes better sentences but as the model generates inaccurate summaries, we do not wish to make such a statement.",
"This however allows us to say that the attribution generated for the text at the origin of the summaries in Figure 4 are truthful in regard to the network's computation and we may use it for further studies of the example, whereas for the text at the origin of Figure 5 we shouldn't draw any further conclusions from the attribution generated.",
"One interesting point is that one saliency map didn't look “better\" than the other, meaning that there is no apparent way of determining their truthfulness in regard of the computation without doing a quantitative validation. This brings us to believe that even in simpler tasks, the saliency maps might make sense to us (for example highlighting the animal in an image classification task), without actually representing what the network really attended too, or in what way.",
"We defined without saying it the counterfactual case in our experiment: “Would the important words in the input be deleted, we would have a different summary\". Such counterfactuals are however more difficult to define for image classification for example, where it could be applying a mask over an image, or just filtering a colour or a pattern. We believe that defining a counterfactual and testing it allows us to measure and evaluate the truthfulness of the attributions and thus weight how much we can trust them."
],
[
"In this work, we have implemented and applied LRP to a sequence-to-sequence model trained on a more complex task than usual: text summarization. We used previous work to solve the difficulties posed by LRP in LSTM cells and adapted the same technique for Bahdanau et al. Bahdanau2014 attention mechanism.",
"We observed a peculiar behaviour of the saliency maps for the words in the output summary: they are almost all identical and seem uncorrelated with the attention distribution. We then proceeded to validate our attributions by averaging the absolute value of the relevance across the saliency maps. We obtain a ranking of the word from the most important to the least important and proceeded to delete one or another.",
"We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are.",
"Future work would look into the saliency maps generated by applying LRP to pointer-generator networks and compare to our current results as well as mathematically justifying the average that we did when validating our saliency maps. Some additional work is also needed on the validation of the saliency maps with counterfactual tests. The exploitation and evaluation of saliency map are a very important step and should not be overlooked."
]
],
"section_name": [
"Introduction",
"The Task and the Model",
"Dataset and Training Task",
"The Model",
"Obtained Summaries",
"Layer-Wise Relevance Propagation",
"Mathematical Description",
"Generation of the Saliency Maps",
"Experimental results",
"First Observations",
"Validating the Attributions",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0850b7c0555801d057062480de6bb88adb81cae3",
"93216bca45711b73083372495d9a2667736fbac9"
],
"answer": [
{
"evidence": [
"We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it.",
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"extractive_spans": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"free_form_answer": "",
"highlighted_evidence": [
"We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset.",
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"extractive_spans": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"free_form_answer": "",
"highlighted_evidence": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"101dbdd2108b3e676061cb693826f0959b47891b"
]
},
{
"annotation_id": [
"e0ca6b95c1c051723007955ce6804bd29f325379"
],
"answer": [
{
"evidence": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"extractive_spans": [],
"free_form_answer": "one",
"highlighted_evidence": [
"The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"101dbdd2108b3e676061cb693826f0959b47891b"
]
},
{
"annotation_id": [
"79e54a7b9ba9cde5813c3434e64a02d722f13b23"
],
"answer": [
{
"evidence": [
"We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are.",
"The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video\" highlighted in the input text, which seems to be important for the output."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"But we also showed that in some cases the saliency maps seem to not capture the important input features. ",
"The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates"
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"101dbdd2108b3e676061cb693826f0959b47891b"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which baselines did they compare?",
"How many attention layers are there in their model?",
"Is the explanation from saliency map correct?"
],
"question_id": [
"6e2ad9ad88cceabb6977222f5e090ece36aa84ea",
"aacb0b97aed6fc6a8b471b8c2e5c4ddb60988bf5",
"710c1f8d4c137c8dad9972f5ceacdbf8004db208"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"saliency",
"saliency",
"saliency"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 2: Representation of the propagation of the relevance from the output to the input. It passes through the decoder and attention mechanism for each previous decoding time-step, then is passed onto the encoder which takes into account the relevance transiting in both direction due to the bidirectional nature of the encoding LSTM cell.",
"Figure 3: Left : Saliency map over the truncated input text for the second generated word “the”. Right : Saliency map over the truncated input text for the 25th generated word “investigation”. We see that the difference between the mappings is marginal.",
"Figure 4: Summary from Figure 1 generated after deleting important and unimportant words from the input text. We observe a significant difference in summary degradation between the two experiments, where the decoder just repeats the UNKNOWN token over and over."
],
"file": [
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png"
]
} | [
"How many attention layers are there in their model?"
] | [
[
"1907.05664-The Model-0"
]
] | [
"one"
] | 6 |
1910.14497 | Probabilistic Bias Mitigation in Word Embeddings | It has been shown that word embeddings derived from large corpora tend to incorporate biases present in their training data. Various methods for mitigating these biases have been proposed, but recent work has demonstrated that these methods hide but fail to truly remove the biases, which can still be observed in word nearest-neighbor statistics. In this work we propose a probabilistic view of word embedding bias. We leverage this framework to present a novel method for mitigating bias which relies on probabilistic observations to yield a more robust bias mitigation algorithm. We demonstrate that this method effectively reduces bias according to three separate measures of bias while maintaining embedding quality across various popular benchmark semantic tasks | {
"paragraphs": [
[
"Word embeddings, or vector representations of words, are an important component of Natural Language Processing (NLP) models and necessary for many downstream tasks. However, word embeddings, including embeddings commonly deployed for public use, have been shown to exhibit unwanted societal stereotypes and biases, raising concerns about disparate impact on axes of gender, race, ethnicity, and religion BIBREF0, BIBREF1. The impact of this bias has manifested in a range of downstream tasks, ranging from autocomplete suggestions BIBREF2 to advertisement delivery BIBREF3, increasing the likelihood of amplifying harmful biases through the use of these models.",
"The most well-established method thus far for mitigating bias relies on projecting target words onto a bias subspace (such as a gender subspace) and subtracting out the difference between the resulting distances BIBREF0. On the other hand, the most popular metric for measuring bias is the WEAT statistic BIBREF1, which compares the cosine similarities between groups of words. However, WEAT has been recently shown to overestimate bias as a result of implicitly relying on similar frequencies for the target words BIBREF4, and BIBREF5 demonstrated that evidence of bias can still be recovered after geometric bias mitigation by examining the neighborhood of a target word among socially-biased words.",
"In response to this, we propose an alternative framework for bias mitigation in word embeddings that approaches this problem from a probabilistic perspective. The motivation for this approach is two-fold. First, most popular word embedding algorithms are probabilistic at their core – i.e., they are trained (explicitly or implicitly BIBREF6) to minimize some form of word co-occurrence probabilities. Thus, we argue that a framework for measuring and treating bias in these embeddings should take into account, in addition to their geometric aspect, their probabilistic nature too. On the other hand, the issue of bias has also been approached (albeit in different contexts) in the fairness literature, where various intuitive notions of equity such as equalized odds have been formalized through probabilistic criteria. By considering analogous criteria for the word embedding setting, we seek to draw connections between these two bodies of work.",
"We present experiments on various bias mitigation benchmarks and show that our framework is comparable to state-of-the-art alternatives according to measures of geometric bias mitigation and that it performs far better according to measures of neighborhood bias. For fair comparison, we focus on mitigating a binary gender bias in pre-trained word embeddings using SGNS (skip-gram with negative-sampling), though we note that this framework and methods could be extended to other types of bias and word embedding algorithms."
],
[
"Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\\mathcal {P} = \\lbrace (he,she),(man,woman),(king,queen)...\\rbrace $. The projection of a vector $v$ onto $B$ (the subspace) is defined by $v_B = \\sum _{j=1}^{k} (v \\cdot b_j) b_j$ where a subspace $B$ is defined by k orthogonal unit vectors $B = {b_1,...,b_k}$."
],
[
"The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:",
"Where $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \\in A} cos(w,a) - mean_{b \\in B} cos(w,a)$, and $X$,$Y$,$A$, and $B$ are groups of words for which the association is measured. Possible values range from $-2$ to 2 depending on the association of the words groups, and a value of zero indicates $X$ and $Y$ are equally associated with $A$ and $B$. See BIBREF4 for further details on WEAT."
],
[
"The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector $v$ with respect to a relation vector $b$. The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bound by $[-||w||,||w||]$."
],
[
"The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word’s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias."
],
[
"Our objective here is to extend and complement the geometric notions of word embedding bias described in the previous section with an alternative, probabilistic, approach. Intuitively, we seek a notion of equality akin to that of demographic parity in the fairness literature, which requires that a decision or outcome be independent of a protected attribute such as gender. BIBREF7. Similarly, when considering a probabilistic definition of unbiased in word embeddings, we can consider the conditional probabilities of word pairs, ensuring for example that $p(doctor|man) \\approx p(doctor|woman)$, and can extend this probabilistic framework to include the neighborhood of a target word, addressing the potential pitfalls of geometric bias mitigation.",
"Conveniently, most word embedding frameworks allow for immediate computation of the conditional probabilities $P(w|c)$. Here, we focus our attention on the Skip-Gram method with Negative Sampling (SGNS) of BIBREF8, although our framework can be equivalently instantiated for most other popular embedding methods, owing to their core similarities BIBREF6, BIBREF9. Leveraging this probabilistic nature, we construct a bias mitigation method in two steps, and examine each step as an independent method as well as the resulting composite method."
],
[
"This component of our bias mitigation framework seeks to enforce that the probability of prediction or outcome cannot depend on a protected class such as gender. We can formalize this intuitive goal through a loss function that penalizes the discrepancy between the conditional probabilities of a target word (i.e., one that should not be affected by the protected attribute) conditioned on two words describing the protected attribute (e.g., man and woman in the case of gender). That is, for every target word we seek to minimize:",
"where $\\mathcal {P} = \\lbrace (he,she),(man,woman),(king,queen), \\dots \\rbrace $ is a set of word pairs characterizing the protected attribute, akin to that used in previous work BIBREF0.",
"At this point, the specific form of the objective will depend on the type of word embeddings used. For our expample of SGNS, recall that this algorithm models the conditional probability of a target word given a context word as a function of the inner product of their representations. Though an exact method for calculating the conditional probability includes summing over conditional probability of all the words in the vocabulary, we can use the estimation of log conditional probability proposed by BIBREF8, i.e., $ \\log p(w_O|w_I) \\approx \\log \\sigma ({v^{\\prime }_{wo}}^T v_{wI}) + \\sum _{i=1}^{k} [\\log {\\sigma ({{-v^{\\prime }_{wi}}^T v_{wI}})}] $."
],
[
"Based on observations by BIBREF5, we extend our method to consider the composition of the neighborhood of socially-gendered words of a target word. We note that bias in a word embedding depends not only on the relationship between a target word and explicitly gendered words like man and woman, but also between a target word and socially-biased male or female words. Bolukbasi et al BIBREF0 proposed a method for eliminating this kind of indirect bias through geometric bias mitigation, but it is shown to be ineffective by the neighborhood metric BIBREF5.",
"Instead, we extend our method of bias mitigation to account for this neighborhood effect. Specifically, we examine the conditional probabilities of a target word given the $k/2$ nearest neighbors from the male socially-biased words as well as given the $k/2$ female socially-biased words (in sorted order, from smallest to largest). The groups of socially-biased words are constructed as described in the neighborhood metric. If the word is unbiased according to the neighborhood metric, these probabilities should be comparable. We then use the following as our loss function:",
"",
"where $m$ and $f$ represent the male and female neighbors sorted by distance to the target word $t$ (we use $L1$ distance)."
],
[
"We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.",
"We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.",
"We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics."
],
[
"We proposed a simple method of bias mitigation based on this probabilistic notions of fairness, and showed that it leads to promising results in various benchmark bias mitigation tasks. Future work should include considering a more rigorous definition and non-binary of bias and experimenting with various embedding algorithms and network architectures."
],
[
"The authors would like to thank Tommi Jaakkola for stimulating discussions during the initial stages of this work."
],
[
"For Equation 4, as described in the original work, in regards to the k sample words $w_i$ is drawn from the corpus using the Unigram distribution raised to the 3/4 power.",
"For reference, the most male socially-biased words include words such as:’john’, ’jr’, ’mlb’, ’dick’, ’nfl’, ’cfl’, ’sgt’, ’abbot’, ’halfback’, ’jock’, ’mike’, ’joseph’,while the most female socially-biased words include words such as:’feminine’, ’marital’, ’tatiana’, ’pregnancy’, ’eva’, ’pageant’, ’distress’, ’cristina’, ’ida’, ’beauty’, ’sexuality’,’fertility’"
],
[
"'accountant', 'acquaintance', 'actor', 'actress', 'administrator', 'adventurer', 'advocate', 'aide', 'alderman', 'ambassador', 'analyst', 'anthropologist', 'archaeologist', 'archbishop', 'architect', 'artist', 'assassin', 'astronaut', 'astronomer', 'athlete', 'attorney', 'author', 'baker', 'banker', 'barber', 'baron', 'barrister', 'bartender', 'biologist', 'bishop', 'bodyguard', 'boss', 'boxer', 'broadcaster', 'broker', 'businessman', 'butcher', 'butler', 'captain', 'caretaker', 'carpenter', 'cartoonist', 'cellist', 'chancellor', 'chaplain', 'character', 'chef', 'chemist', 'choreographer', 'cinematographer', 'citizen', 'cleric', 'clerk', 'coach', 'collector', 'colonel', 'columnist', 'comedian', 'comic', 'commander', 'commentator', 'commissioner', 'composer', 'conductor', 'confesses', 'congressman', 'constable', 'consultant', 'cop', 'correspondent', 'counselor', 'critic', 'crusader', 'curator', 'dad', 'dancer', 'dean', 'dentist', 'deputy', 'detective', 'diplomat', 'director', 'doctor', 'drummer', 'economist', 'editor', 'educator', 'employee', 'entertainer', 'entrepreneur', 'envoy', 'evangelist', 'farmer', 'filmmaker', 'financier', 'fisherman', 'footballer', 'foreman', 'gangster', 'gardener', 'geologist', 'goalkeeper', 'guitarist', 'headmaster', 'historian', 'hooker', 'illustrator', 'industrialist', 'inspector', 'instructor', 'inventor', 'investigator', 'journalist', 'judge', 'jurist', 'landlord', 'lawyer', 'lecturer', 'legislator', 'librarian', 'lieutenant', 'lyricist', 'maestro', 'magician', 'magistrate', 'maid', 'manager', 'marshal', 'mathematician', 'mechanic', 'midfielder', 'minister', 'missionary', 'monk', 'musician', 'nanny', 'narrator', 'naturalist', 'novelist', 'nun', 'nurse', 'observer', 'officer', 'organist', 'painter', 'pastor', 'performer', 'philanthropist', 'philosopher', 'photographer', 'physician', 'physicist', 'pianist', 'planner', 'playwright', 'poet', 'policeman', 'politician', 'preacher', 'president', 'priest', 'principal', 'prisoner', 'professor', 'programmer', 'promoter', 'proprietor', 'prosecutor', 'protagonist', 'provost', 'psychiatrist', 'psychologist', 'rabbi', 'ranger', 'researcher', 'sailor', 'saint', 'salesman', 'saxophonist', 'scholar', 'scientist', 'screenwriter', 'sculptor', 'secretary', 'senator', 'sergeant', 'servant', 'singer', 'skipper', 'sociologist', 'soldier', 'solicitor', 'soloist', 'sportsman', 'statesman', 'steward', 'student', 'substitute', 'superintendent', 'surgeon', 'surveyor', 'swimmer', 'teacher', 'technician', 'teenager', 'therapist', 'trader', 'treasurer', 'trooper', 'trumpeter', 'tutor', 'tycoon', 'violinist', 'vocalist', 'waiter', 'waitress', 'warden', 'warrior', 'worker', 'wrestler', 'writer'"
],
[
"Words used for WEAT statistic, consisting of baseline bias tests and gender bias tests in the format X vs Y / A vs B",
"Flowers vs Insects / Pleasant vs Unpleasant",
"X: \"aster\", \"clover\", \"hyacinth\", \"marigold\", \"poppy\", \"azalea\", \"crocus\", \"iris\", \"orchid\", \"rose\", \"bluebell\", \"daffodil\", \"lilac\", \"pansy\", \"tulip\", \"buttercup\", \"daisy\", \"lily\", \"peony\", \"violet\", \"carnation\", \"gladiola\", \"magnolia\", \"petunia\", \"zinnia\"",
"Y: \"ant\", \"caterpillar\", \"flea\", \"locust\", \"spider\", \"bedbug\", \"centipede\", \"fly\", \"maggot\", \"tarantula\", \"bee\", \"cockroach\", \"gnat\", \"mosquito\", \"termite\", \"beetle\", \"cricket\", \"hornet\", \"moth\", \"wasp\", \"blackfly\", \"dragonfly\", \"horsefly\", \"roach\", \"weevil\"",
"A: \"caress\", \"freedom\", \"health\", \"love\", \"peace\", \"cheer\", \"friend\", \"heaven\", \"loyal\", \"pleasure\", \"diamond\", \"gentle\", \"honest\", \"lucky\", \"rainbow\", \"diploma\", \"gift\", \"honor\", \"miracle\", \"sunrise\", \"family\", \"happy\", \"laughter\", \"paradise\", \"vacation\"",
"B: \"abuse\", \"crash\", \"filth\", \"murder\", \"sickness\", \"accident\", \"death\", \"grief\", \"poison\", \"stink\", \"assault\", \"disaster\", \"hatred\", \"pollute\", \"tragedy\", \"divorce\", \"jail\", \"poverty\", \"ugly\", \"cancer\", \"kill\", \"rotten\", \"vomit\", \"agony\", \"prison\"",
"Instruments vs Weapons / Pleasant vs Unpleasant:",
"X: \"bagpipe\", \"cello\", \"guitar\", \"lute\", \"trombone\", \"banjo\", \"clarinet\", \"harmonica\", \"mandolin\", \"trumpet\", \"bassoon\", \"drum\", \"harp\", \"oboe\", \"tuba\", \"bell\", \"fiddle\", \"harpsichord\", \"piano\", \"viola\", \"bongo\", \"flute\", \"horn\", \"saxophone\", \"violin\"",
"Y: \"arrow\", \"club\", \"gun\", \"missile\", \"spear\", \"ax\", \"dagger\", \"harpoon\", \"pistol\", \"sword\", \"blade\", \"dynamite\", \"hatchet\", \"rifle\", \"tank\", \"bomb\", \"firearm\", \"knife\", \"shotgun\", \"teargas\", \"cannon\", \"grenade\", \"mace\", \"slingshot\", \"whip\"",
"A: \"caress\", \"freedom\", \"health\", \"love\", \"peace\", \"cheer\", \"friend\", \"heaven\", \"loyal\", \"pleasure\", \"diamond\", \"gentle\", \"honest\", \"lucky\", \"rainbow\", \"diploma\", \"gift\", \"honor\", \"miracle\", \"sunrise\", \"family\", \"happy\", \"laughter\", \"paradise\", \"vacation\"",
"B: \"abuse\", \"crash\", \"filth\", \"murder\", \"sickness\", \"accident\", \"death\", \"grief\", \"poison\", \"stink\", \"assault\", \"disaster\", \"hatred\", \"pollute\", \"tragedy\", \"divorce\", \"jail\", \"poverty\", \"ugly\", \"cancer\", \"kill\", \"rotten\", \"vomit\", \"agony\", \"prison\"",
"Male vs Female / Career vs Family:",
"X: \"brother\", \"father\", \"uncle\", \"grandfather\", \"son\", \"he\", \"his\", \"him\", \"man\", \"himself\", \"men\", \"husband\", \"boy\", \"uncle\", \"nephew\", \"boyfriend\", \"king\", \"actor\"",
"Y: \"sister\", \"mother\", \"aunt\", \"grandmother\", \"daughter\", \"she\", \"hers\", \"her\", \"woman\", \"herself\", \"women\", \"wife\", \"aunt\", \"niece\", \"girlfriend\", \"queen\", \"actress\"",
"A: \"executive\", \"management\", \"professional\", \"corporation\", \"salary\", \"office\", \"business\", \"career\", \"industry\", \"company\", \"promotion\", \"profession\", \"CEO\", \"manager\", \"coworker\", \"entrepreneur\"",
"B: \"home\", \"parents\", \"children\", \"family\", \"cousins\", \"marriage\", \"wedding\", \"relatives\", \"grandparents\", \"grandchildren\", \"nurture\", \"child\", \"toddler\", \"infant\", \"teenager\"",
"Math vs Art / Male vs Female:",
"X: \"math\", \"algebra\", \"geometry\", \"calculus\", \"equations\", \"computation\", \"numbers\", \"addition\", \"trigonometry\", \"arithmetic\", \"logic\", \"proofs\", \"multiplication\", \"mathematics\"",
"Y: \"poetry\", \"art\", \"Shakespeare\", \"dance\", \"literature\", \"novel\", \"symphony\", \"drama\", \"orchestra\", \"music\", \"ballet\", \"arts\", \"creative\", \"sculpture\"",
"A: \"brother\", \"father\", \"uncle\", \"grandfather\", \"son\", \"he\", \"his\", \"him\", \"man\", \"himself\", \"men\", \"husband\", \"boy\", \"uncle\", \"nephew\", \"boyfriend\", \"king\", \"actor\"",
"B: \"sister\", \"mother\", \"aunt\", \"grandmother\", \"daughter\", \"she\", \"hers\", \"her\", \"woman\", \"herself\", \"women\", \"wife\", \"aunt\", \"niece\", \"girlfriend\", \"queen\", \"actress\"",
"Science vs Art / Male8 vs Female8:",
"X:\"science\", \"technology\", \"physics\", \"chemistry\", \"Einstein\", \"NASA\", \"experiment\", \"astronomy\", \"biology\", \"aeronautics\", \"mechanics\", \"thermodynamics\"",
"Y: \"poetry\", \"art\", \"Shakespeare\", \"dance\", \"literature\", \"novel\", \"symphony\", \"drama\", \"orchestra\", \"music\", \"ballet\", \"arts\", \"creative\", \"sculpture\"",
"A: \"brother\", \"father\", \"uncle\", \"grandfather\", \"son\", \"he\", \"his\", \"him\", \"man\", \"himself\", \"men\", \"husband\", \"boy\", \"uncle\", \"nephew\", \"boyfriend\"",
"B: \"sister\", \"mother\", \"aunt\", \"grandmother\", \"daughter\", \"she\", \"hers\", \"her\", \"woman\", \"herself\", \"women\", \"wife\", \"aunt\", \"niece\", \"girlfriend\""
]
],
"section_name": [
"Introduction",
"Background ::: Geometric Bias Mitigation",
"Background ::: Geometric Bias Mitigation ::: WEAT",
"Background ::: Geometric Bias Mitigation ::: RIPA",
"Background ::: Geometric Bias Mitigation ::: Neighborhood Metric",
"A Probabilistic Framework for Bias Mitigation",
"A Probabilistic Framework for Bias Mitigation ::: Probabilistic Bias Mitigation",
"A Probabilistic Framework for Bias Mitigation ::: Nearest Neighbor Bias Mitigation",
"Experiments",
"Discussion",
"Discussion ::: Acknowledgements",
"Experiment Notes",
"Professions",
"WEAT Word Sets"
]
} | {
"answers": [
{
"annotation_id": [
"50e0354ccb4d7d6fda33c34e69133daaa8978a2f",
"eb66f1f7e89eca5dcf2ae6ef450b1693a43f4e69"
],
"answer": [
{
"evidence": [
"We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.",
"We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.",
"We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics."
],
"extractive_spans": [
"We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics."
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.",
"We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.",
"We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"08a22700ab88c5fb568745e6f7c1b5da25782626"
],
"answer": [
{
"evidence": [
"Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\\mathcal {P} = \\lbrace (he,she),(man,woman),(king,queen)...\\rbrace $. The projection of a vector $v$ onto $B$ (the subspace) is defined by $v_B = \\sum _{j=1}^{k} (v \\cdot b_j) b_j$ where a subspace $B$ is defined by k orthogonal unit vectors $B = {b_1,...,b_k}$.",
"The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:",
"Where $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \\in A} cos(w,a) - mean_{b \\in B} cos(w,a)$, and $X$,$Y$,$A$, and $B$ are groups of words for which the association is measured. Possible values range from $-2$ to 2 depending on the association of the words groups, and a value of zero indicates $X$ and $Y$ are equally associated with $A$ and $B$. See BIBREF4 for further details on WEAT.",
"The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector $v$ with respect to a relation vector $b$. The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bound by $[-||w||,||w||]$.",
"The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word’s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias.",
"FLOAT SELECTED: Table 1: Remaining Bias (as measured by RIPA and Neighborhood metrics) in fastText embeddings for baseline (top two rows) and our (bottom three) methods. Figure 2: Remaining Bias (WEAT score)"
],
"extractive_spans": [],
"free_form_answer": "RIPA, Neighborhood Metric, WEAT",
"highlighted_evidence": [
"Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0.",
"The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:\n\nWhere $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \\in A} cos(w,a) - mean_{b \\in B} cos(w,a)$, and $X$,$Y$,$A$, and $B$ are groups of words for which the association is measured.",
"The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. ",
"The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector.",
"FLOAT SELECTED: Table 1: Remaining Bias (as measured by RIPA and Neighborhood metrics) in fastText embeddings for baseline (top two rows) and our (bottom three) methods. Figure 2: Remaining Bias (WEAT score)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"9b4792d66cec53f8ea37bccd5cf7cb9c22290d82"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How is embedding quality assessed?",
"What are the three measures of bias which are reduced in experiments?",
"What are the probabilistic observations which contribute to the more robust algorithm?"
],
"question_id": [
"47726be8641e1b864f17f85db9644ce676861576",
"8958465d1eaf81c8b781ba4d764a4f5329f026aa",
"31b6544346e9a31d656e197ad01756813ee89422"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"bias",
"bias",
"bias"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Word embedding semantic quality benchmarks for each bias mitigation method (higher is better). See Jastrzkebski et al. [11] for details of each metric.",
"Table 1: Remaining Bias (as measured by RIPA and Neighborhood metrics) in fastText embeddings for baseline (top two rows) and our (bottom three) methods. Figure 2: Remaining Bias (WEAT score)"
],
"file": [
"4-Figure1-1.png",
"4-Table1-1.png"
]
} | [
"What are the three measures of bias which are reduced in experiments?"
] | [
[
"1910.14497-Background ::: Geometric Bias Mitigation ::: RIPA-0",
"1910.14497-4-Table1-1.png",
"1910.14497-Background ::: Geometric Bias Mitigation ::: Neighborhood Metric-0",
"1910.14497-Background ::: Geometric Bias Mitigation ::: WEAT-1",
"1910.14497-Background ::: Geometric Bias Mitigation-0",
"1910.14497-Background ::: Geometric Bias Mitigation ::: WEAT-0"
]
] | [
"RIPA, Neighborhood Metric, WEAT"
] | 7 |
2002.02224 | Citation Data of Czech Apex Courts | In this paper, we introduce the citation data of the Czech apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). This dataset was automatically extracted from the corpus of texts of Czech court decisions - CzCDC 1.0. We obtained the citation data by building the natural language processing pipeline for extraction of the court decision identifiers. The pipeline included the (i) document segmentation model and the (ii) reference recognition model. Furthermore, the dataset was manually processed to achieve high-quality citation data as a base for subsequent qualitative and quantitative analyses. The dataset will be made available to the general public. | {
"paragraphs": [
[
"Analysis of the way court decisions refer to each other provides us with important insights into the decision-making process at courts. This is true both for the common law courts and for their counterparts in the countries belonging to the continental legal system. Citation data can be used for both qualitative and quantitative studies, casting light in the behavior of specific judges through document analysis or allowing complex studies into changing the nature of courts in transforming countries.",
"That being said, it is still difficult to create sufficiently large citation datasets to allow a complex research. In the case of the Czech Republic, it was difficult to obtain a relevant dataset of the court decisions of the apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). Due to its size, it is nearly impossible to extract the references manually. One has to reach out for an automation of such task. However, study of court decisions displayed many different ways that courts use to cite even decisions of their own, not to mention the decisions of other courts.The great diversity in citations led us to the use of means of the natural language processing for the recognition and the extraction of the citation data from court decisions of the Czech apex courts.",
"In this paper, we describe the tool ultimately used for the extraction of the references from the court decisions, together with a subsequent way of manual processing of the raw data to achieve a higher-quality dataset. Section SECREF2 maps the related work in the area of legal citation analysis (SectionSECREF1), reference recognition (Section SECREF2), text segmentation (Section SECREF4), and data availability (Section SECREF3). Section SECREF3 describes the method we used for the citation extraction, listing the individual models and the way we have combined these models into the NLP pipeline. Section SECREF4 presents results in the terms of evaluation of the performance of our pipeline, the statistics of the raw data, further manual processing and statistics of the final citation dataset. Section SECREF5 discusses limitations of our work and outlines the possible future development. Section SECREF6 concludes this paper."
],
[
"The legal citation analysis is an emerging phenomenon in the field of the legal theory and the legal empirical research.The legal citation analysis employs tools provided by the field of network analysis.",
"In spite of the long-term use of the citations in the legal domain (eg. the use of Shepard's Citations since 1873), interest in the network citation analysis increased significantly when Fowler et al. published the two pivotal works on the case law citations by the Supreme Court of the United States BIBREF0, BIBREF1. Authors used the citation data and network analysis to test the hypotheses about the function of stare decisis the doctrine and other issues of legal precedents. In the continental legal system, this work was followed by Winkels and de Ruyter BIBREF2. Authors adopted similar approach to Fowler to the court decisions of the Dutch Supreme Court. Similar methods were later used by Derlén and Lindholm BIBREF3, BIBREF4 and Panagis and Šadl BIBREF5 for the citation data of the Court of Justice of the European Union, and by Olsen and Küçüksu for the citation data of the European Court of Human Rights BIBREF6.",
"Additionally, a minor part in research in the legal network analysis resulted in the past in practical tools designed to help lawyers conduct the case law research. Kuppevelt and van Dijck built prototypes employing these techniques in the Netherlands BIBREF7. Görög a Weisz introduced the new legal information retrieval system, Justeus, based on a large database of the legal sources and partly on the network analysis methods. BIBREF8"
],
[
"The area of reference recognition already contains a large amount of work. It is concerned with recognizing text spans in documents that are referring to other documents. As such, it is a classical topic within the AI & Law literature.",
"The extraction of references from the Italian legislation based on regular expressions was reported by Palmirani et al. BIBREF9. The main goal was to bring references under a set of common standards to ensure the interoperability between different legal information systems.",
"De Maat et al. BIBREF10 focused on an automated detection of references to legal acts in Dutch language. Their approach consisted of a grammar covering increasingly complex citation patterns.",
"Opijnen BIBREF11 aimed for a reference recognition and a reference standardization using regular expressions accounting for multiple the variant of the same reference and multiple vendor-specific identifiers.",
"The language specific work by Kríž et al. BIBREF12 focused on the detecting and classification references to other court decisions and legal acts. Authors used a statistical recognition (HMM and Perceptron algorithms) and reported F1-measure over 90% averaged over all entities. It is the state-of-art in the automatic recognition of references in the Czech court decisions. Unfortunately, it allows only for the detection of docket numbers and it is unable to recognize court-specific or vendor-specific identifiers in the court decisions.",
"Other language specific-work includes our previous reference recognition model presented in BIBREF13. Prediction model is based on conditional random fields and it allows recognition of different constituents which then establish both explicit and implicit case-law and doctrinal references. Parts of this model were used in the pipeline described further within this paper in Section SECREF3."
],
[
"Large scale quantitative and qualitative studies are often hindered by the unavailability of court data. Access to court decisions is often hindered by different obstacles. In some countries, court decisions are not available at all, while in some other they are accessible only through legal information systems, often proprietary. This effectively restricts the access to court decisions in terms of the bulk data. This issue was already approached by many researchers either through making available selected data for computational linguistics studies or by making available datasets of digitized data for various purposes. Non-exhaustive list of publicly available corpora includes British Law Report Corpus BIBREF14, The Corpus of US Supreme Court Opinions BIBREF15,the HOLJ corpus BIBREF16, the Corpus of Historical English Law Reports, Corpus de Sentencias Penales BIBREF17, Juristisches Referenzkorpus BIBREF18 and many others.",
"Language specific work in this area is presented by the publicly available Czech Court Decisions Corpus (CzCDC 1.0) BIBREF19. This corpus contains majority of court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court, hence allowing a large-scale extraction of references to yield representative results. The CzCDC 1.0 was used as a dataset for extraction of the references as is described further within this paper in Section SECREF3. Unfortunately, despite containing 237 723 court decisions issued between 1st January 1993 and 30th September 2018, it is not complete. This fact is reflected in the analysis of the results."
],
[
"A large volume of legal information is available in unstructured form, which makes processing these data a challenging task – both for human lawyers and for computers. Schweighofer BIBREF20 called for generic tools allowing a document segmentation to ease the processing of unstructured data by giving them some structure.",
"Topic-based segmentation often focuses on the identifying specific sentences that present borderlines of different textual segments.",
"The automatic segmentation is not an individual goal – it always serves as a prerequisite for further tasks requiring structured data. Segmentation is required for the text summarization BIBREF21, BIBREF22, keyword extraction BIBREF23, textual information retrieval BIBREF24, and other applications requiring input in the form of structured data.",
"Major part of research is focused on semantic similarity methods.The computing similarity between the parts of text presumes that a decrease of similarity means a topical border of two text segments. This approach was introduced by Hearst BIBREF22 and was used by Choi BIBREF25 and Heinonen BIBREF26 as well.",
"Another approach takes word frequencies and presumes a border according to different key words extracted. Reynar BIBREF27 authored graphical method based on statistics called dotplotting. Similar techniques were used by Ye BIBREF28 or Saravanan BIBREF29. Bommarito et al. BIBREF30 introduced a Python library combining different features including pre-trained models to the use for automatic legal text segmentation. Li BIBREF31 included neural network into his method to segment Chinese legal texts.",
"Šavelka and Ashley BIBREF32 similarly introduced the machine learning based approach for the segmentation of US court decisions texts into seven different parts. Authors reached high success rates in recognizing especially the Introduction and Analysis parts of the decisions.",
"Language specific work includes the model presented by Harašta et al. BIBREF33. This work focuses on segmentation of the Czech court decisions into pre-defined topical segments. Parts of this segmentation model were used in the pipeline described further within this paper in Section SECREF3."
],
[
"In this paper, we present and describe the citation dataset of the Czech top-tier courts. To obtain this dataset, we have processed the court decisions contained in CzCDC 1.0 dataset by the NLP pipeline consisting of the segmentation model introduced in BIBREF33, and parts of the reference recognition model presented in BIBREF13. The process is described in this section."
],
[
"Novotná and Harašta BIBREF19 prepared a dataset of the court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court. The dataset contains 237,723 decisions published between 1st January 1993 and the 30th September 2018. These decisions are organised into three sub-corpora. The sub-corpus of the Supreme Court contains 111,977 decisions, the sub-corpus of the Supreme Administrative Court contains 52,660 decisions and the sub-corpus of the Constitutional Court contains 73,086 decisions. Authors in BIBREF19 assessed that the CzCDC currently contains approximately 91% of all decisions of the Supreme Court, 99,5% of all decisions of the Constitutional Court, and 99,9% of all decisions of the Supreme Administrative Court. As such, it presents the best currently available dataset of the Czech top-tier court decisions."
],
[
"Harašta and Šavelka BIBREF13 introduced a reference recognition model trained specifically for the Czech top-tier courts. Moreover, authors made their training data available in the BIBREF34. Given the lack of a single citation standard, references in this work consist of smaller units, because these were identified as more uniform and therefore better suited for the automatic detection. The model was trained using conditional random fields, which is a random field model that is globally conditioned on an observation sequence O. The states of the model correspond to event labels E. Authors used a first-order conditional random fields. Model was trained for each type of the smaller unit independently."
],
[
"Harašta et al. BIBREF33, authors introduced the model for the automatic segmentation of the Czech court decisions into pre-defined multi-paragraph parts. These segments include the Header (introduction of given case), History (procedural history prior the apex court proceeding), Submission/Rejoinder (petition of plaintiff and response of defendant), Argumentation (argumentation of the court hearing the case), Footer (legally required information, such as information about further proceedings), Dissent and Footnotes. The model for automatic segmentation of the text was trained using conditional random fields. The model was trained for each type independently."
],
[
"In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.",
"As the first step, every document in the CzCDC 1.0 was segmented using the text segmentation model. This allowed us to treat different parts of processed court documents differently in the further text processing. Specifically, it allowed us to subject only the specific part of a court decision, in this case the court argumentation, to further the reference recognition and extraction. A textual segment recognised as the court argumentation is then processed further.",
"As the second step, parts recognised by the text segmentation model as a court argumentation was processed using the reference recognition model. After carefully studying the evaluation of the model's performance in BIBREF13, we have decided to use only part of the said model. Specifically, we have employed the recognition of the court identifiers, as we consider the rest of the smaller units introduced by Harašta and Šavelka of a lesser value for our task. Also, deploying only the recognition of the court identifiers allowed us to avoid the problematic parsing of smaller textual units into the references. The text spans recognised as identifiers of court decisions are then processed further.",
"At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification.",
"Further processing included:",
"control and repair of incompletely identified court identifiers (manual);",
"identification and sorting of identifiers as belonging to Supreme Court, Supreme Administrative Court or Constitutional Court (rule-based, manual);",
"standardisation of different types of court identifiers (rule-based, manual);",
"parsing of identifiers with court decisions available in CzCDC 1.0."
],
[
"Overall, through the process described in Section SECREF3, we have retrieved three datasets of extracted references - one dataset per each of the apex courts. These datasets consist of the individual pairs containing the identification of the decision from which the reference was retrieved, and the identification of the referred documents. As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court Decisions. These are numbers of text spans identified as references prior the further processing described in Section SECREF3.",
"These references include all identifiers extracted from the court decisions contained in the CzCDC 1.0. Therefore, this number includes all other court decisions, including lower courts, the Court of Justice of the European Union, the European Court of Human Rights, decisions of other public authorities etc. Therefore, it was necessary to classify these into references referring to decisions of the Supreme Court, Supreme Administrative Court, Constitutional Court and others. These groups then underwent a standardisation - or more precisely a resolution - of different court identifiers used by the Czech courts. Numbers of the references resulting from this step are shown in Table TABREF16.",
"Following this step, we linked court identifiers with court decisions contained in the CzCDC 1.0. Given that, the CzCDC 1.0 does not contain all the decisions of the respective courts, we were not able to parse all the references. Numbers of the references resulting from this step are shown in Table TABREF17."
],
[
"This paper introduced the first dataset of citation data of the three Czech apex courts. Understandably, there are some pitfalls and limitations to our approach.",
"As we admitted in the evaluation in Section SECREF9, the models we included in our NLP pipelines are far from perfect. Overall, we were able to achieve a reasonable recall and precision rate, which was further enhanced by several round of manual processing of the resulting data. However, it is safe to say that we did not manage to extract all the references. Similarly, because the CzCDC 1.0 dataset we used does not contain all the decisions of the respective courts, we were not able to parse all court identifiers to the documents these refer to. Therefore, the future work in this area may include further development of the resources we used. The CzCDC 1.0 would benefit from the inclusion of more documents of the Supreme Court, the reference recognition model would benefit from more refined training methods etc.",
"That being said, the presented dataset is currently the only available resource of its kind focusing on the Czech court decisions that is freely available to research teams. This significantly reduces the costs necessary to conduct these types of studies involving network analysis, and the similar techniques requiring a large amount of citation data."
],
[
"In this paper, we have described the process of the creation of the first dataset of citation data of the three Czech apex courts. The dataset is publicly available for download at https://github.com/czech-case-law-relevance/czech-court-citations-dataset."
],
[
"J.H., and T.N. gratefully acknowledge the support from the Czech Science Foundation under grant no. GA-17-20645S. T.N. also acknowledges the institutional support of the Masaryk University. This paper was presented at CEILI Workshop on Legal Data Analysis held in conjunction with Jurix 2019 in Madrid, Spain."
]
],
"section_name": [
"Introduction",
"Related work ::: Legal Citation Analysis",
"Related work ::: Reference Recognition",
"Related work ::: Data Availability",
"Related work ::: Document Segmentation",
"Methodology",
"Methodology ::: Dataset and models ::: CzCDC 1.0 dataset",
"Methodology ::: Dataset and models ::: Reference recognition model",
"Methodology ::: Dataset and models ::: Text segmentation model",
"Methodology ::: Pipeline",
"Results",
"Discussion",
"Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"3bf5c275ced328b66fd9a07b30a4155fa476d779",
"ae80f5c5b782ad02d1dde21b7384bc63472f5796"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.",
"At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.",
"At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"ca22977516b8d2f165904d7e9742421ad8d742e2"
],
"answer": [
{
"evidence": [
"In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.",
"At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification."
],
"extractive_spans": [
"it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification."
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.",
"At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"0bdc7f448e47059d71a0ad3c075303900370856a"
],
"answer": [
{
"evidence": [
"Overall, through the process described in Section SECREF3, we have retrieved three datasets of extracted references - one dataset per each of the apex courts. These datasets consist of the individual pairs containing the identification of the decision from which the reference was retrieved, and the identification of the referred documents. As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court Decisions. These are numbers of text spans identified as references prior the further processing described in Section SECREF3."
],
"extractive_spans": [],
"free_form_answer": "903019 references",
"highlighted_evidence": [
"As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court Decisions. These are numbers of text spans identified as references prior the further processing described in Section SECREF3."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Did they experiment on this dataset?",
"How is quality of the citation measured?",
"How big is the dataset?"
],
"question_id": [
"ac706631f2b3fa39bf173cd62480072601e44f66",
"8b71ede8170162883f785040e8628a97fc6b5bcb",
"fa2a384a23f5d0fe114ef6a39dced139bddac20e"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: NLP pipeline including the text segmentation, reference recognition and parsing of references to the specific document",
"Table 1: Model performance",
"Table 2: References sorted by categories, unlinked",
"Table 3: References linked with texts in CzCDC"
],
"file": [
"4-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png"
]
} | [
"How big is the dataset?"
] | [
[
"2002.02224-Results-0"
]
] | [
"903019 references"
] | 10 |
2003.07433 | LAXARY: A Trustworthy Explainable Twitter Analysis Model for Post-Traumatic Stress Disorder Assessment | Veteran mental health is a significant national problem as a large number of veterans are returning from the recent war in Iraq and continued military presence in Afghanistan. While significant existing works have investigated twitter posts-based Post Traumatic Stress Disorder (PTSD) assessment using blackbox machine learning techniques, these frameworks cannot be trusted by the clinicians due to the lack of clinical explainability. To obtain the trust of clinicians, we explore the big question, can twitter posts provide enough information to fill up clinical PTSD assessment surveys that have been traditionally trusted by clinicians? To answer the above question, we propose the LAXARY (Linguistic Analysis-based Explainable Inquiry) model, a novel Explainable Artificial Intelligence (XAI) model to detect and represent PTSD assessment of twitter users using a modified Linguistic Inquiry and Word Count (LIWC) analysis. First, we employ clinically validated survey tools for collecting clinical PTSD assessment data from real twitter users and develop a PTSD Linguistic Dictionary using the PTSD assessment survey results. Then, we use the PTSD Linguistic Dictionary along with a machine learning model to fill up the survey tools towards detecting the PTSD status and its intensity for the corresponding twitter users. Our experimental evaluation on 210 clinically validated veteran twitter users provides promising accuracies of both PTSD classification and its intensity estimation. We also evaluate our developed PTSD Linguistic Dictionary's reliability and validity. | {
"paragraphs": [
[
"Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high risk activities including engaging in interpersonal violence, attempting suicide, committing suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veteran Administration's National Center for PTSD (NCPTSD) suggests to reconceptualize PTSD not just in terms of a psychiatric symptom cluster, but focusing instead on the specific high risk behaviors associated with it, as these may be directly addressed though behavioral change efforts BIBREF0. Consensus prevalence estimates suggest that PTSD impacts between 15-20% of the veteran population which is typically chronic and treatment resistant BIBREF0. The PTSD patients support programs organized by different veterans peer support organization use a set of surveys for local weekly assessment to detect the intensity of PTSD among the returning veterans. However, recent advanced evidence-based care for PTSD sufferers surveys have showed that veterans, suffered with chronic PTSD are reluctant in participating assessments to the professionals which is another significant symptom of war returning veterans with PTSD. Several existing researches showed that, twitter posts of war veterans could be a significant indicator of their mental health and could be utilized to predict PTSD sufferers in time before going out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, all of the proposed methods relied on either blackbox machine learning methods or language models based sentiments extraction of posted texts which failed to obtain acceptability and trust of clinicians due to the lack of their explainability.",
"In the context of the above research problem, we aim to answer the following research questions",
"Given clinicians have trust on clinically validated PTSD assessment surveys, can we fill out PTSD assessment surveys using twitter posts analysis of war-veterans?",
"If possible, what sort of analysis and approach are needed to develop such XAI model to detect the prevalence and intensity of PTSD among war-veterans only using the social media (twitter) analysis where users are free to share their everyday mental and social conditions?",
"How much quantitative improvement do we observe in our model's ability to explain both detection and intensity estimation of PTSD?",
"In this paper, we propose LAXARY, an explainable and trustworthy representation of PTSD classification and its intensity for clinicians.",
"The key contributions of our work are summarized below,",
"The novelty of LAXARY lies on the proposed clinical surveys-based PTSD Linguistic dictionary creation with words/aspects which represents the instantaneous perturbation of twitter-based sentiments as a specific pattern and help calculate the possible scores of each survey question.",
"LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment.",
"Finally, we evaluate the accuracy of LAXARY model performance and reliability-validity of generated PTSD Linguistic Dictionary using real twitter users' posts. Our results show that, given normal weekly messages posted in twitter, LAXARY can provide very high accuracy in filling up surveys towards identifying PTSD ($\\approx 96\\%$) and its intensity ($\\approx 1.2$ mean squared error)."
],
[
"Fig. FIGREF7 shows a schematic representation of our proposed model. It consists of the following logical steps: (i) Develop PTSD Detection System using twitter posts of war-veterans(ii) design real surveys from the popular symptoms based mental disease assessment surveys; (iii) define single category and create PTSD Linguistic Dictionary for each survey question and multiple aspect/words for each question; (iv) calculate $\\alpha $-scores for each category and dimension based on linguistic inquiry and word count as well as the aspects/words based dictionary; (v) calculate scaling scores ($s$-scores) for each dimension based on the $\\alpha $-scores and $s$-scores of each category based on the $s$-scores of its dimensions; (vi) rank features according to the contributions of achieving separation among categories associated with different $\\alpha $-scores and $s$-scores; and select feature sets that minimize the overlap among categories as associated with the target classifier (SGD); and finally (vii) estimate the quality of selected features-based classification for filling up surveys based on classified categories i.e. PTSD assessment which is trustworthy among the psychiatry community."
],
[
"Twitter activity based mental health assessment has been utmost importance to the Natural Language Processing (NLP) researchers and social media analysts for decades. Several studies have turned to social media data to study mental health, since it provides an unbiased collection of a person's language and behavior, which has been shown to be useful in diagnosing conditions. BIBREF9 used n-gram language model (CLM) based s-score measure setting up some user centric emotional word sets. BIBREF10 used positive and negative PTSD data to train three classifiers: (i) one unigram language model (ULM); (ii) one character n-gram language model (CLM); and 3) one from the LIWC categories $\\alpha $-scores and found that last one gives more accuracy than other ones. BIBREF11 used two types of $s$-scores taking the ratio of negative and positive language models. Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2, forum posts for depression BIBREF3, self narratives for PTSD (BIBREF4, BIBREF5), and chat rooms for bipolar BIBREF6. Specifically in social media, differences have previously been observed between depressed and control groups (as assessed by internet-administered batteries) via LIWC: depressed users more frequently use first person pronouns (BIBREF7) and more frequently use negative emotion words and anger words on Twitter, but show no differences in positive emotion word usage (BIBREF8). Similarly, an increase in negative emotion and first person pronouns, and a decrease in third person pronouns, (via LIWC) is observed, as well as many manifestations of literature findings in the pattern of life of depressed users (e.g., social engagement, demographics) (BIBREF12). Differences in language use in social media via LIWC have also been observed between PTSD and control groups (BIBREF13).",
"All of the prior works used some random dictionary related to the human sentiment (positive/negative) word sets as category words to estimate the mental health but very few of them addressed the problem of explainability of their solution to obtain trust of clinicians. Islam et. al proposed an explainable topic modeling framework to rank different mental health features using Local Interpretable Model-Agnostic Explanations and visualize them to understand the features involved in mental health status classification using the BIBREF14 which fails to provide trust of clinicians due to its lack of interpretability in clinical terms. In this paper, we develop LAXARY model where first we start investigating clinically validated survey tools which are trustworthy methods of PTSD assessment among clinicians, build our category sets based on the survey questions and use these as dictionary words in terms of first person singular number pronouns aspect for next level LIWC algorithm. Finally, we develop a modified LIWC algorithm to estimate survey scores (similar to sentiment category scores of naive LIWC) which is both explainable and trustworthy to clinicians."
],
[
"There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of individual's daily activities such as, The Berlin Social Support Scales (BSSS) BIBREF16 and Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess the PTSD among war veterans and consider rest of them as irrelevant to PTSD. The details of dryhootch chosen survey scale are stated in Table TABREF13. Table!TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he is in critical situation in terms of risk taking symptoms of PTSD. Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )",
"High risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.",
"Moderate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.",
"Low risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.",
"No PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD."
],
[
"To develop an explainable model, we first need to develop twitter-based PTSD detection algorithm. In this section, we describe the data collection and the development of our core LAXARY model."
],
[
"We use an automated regular expression based searching to find potential veterans with PTSD in twitter, and then refine the list manually. First, we select different keywords to search twitter users of different categories. For example, to search self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD for example, post trauma, post traumatic disorder, PTSD etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet posts. To search veterans, we mostly visit to different twitter accounts of veterans organizations such as \"MA Women Veterans @WomenVeterans\", \"Illinois Veterans @ILVetsAffairs\", \"Veterans Benefits @VAVetBenefits\" etc. We define an inclusion criteria as follows: one twitter user will be part of this study if he/she describes himself/herself as a veteran in the introduction and have at least 25 tweets in last week. After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to be diagnosed with PTSD in their twitter posts. We find 685 matching tweets which are manually reviewed to determine if they indicate a genuine statement of a diagnosis for PTSD. Next, we select the username that authored each of these tweets and retrieve last week's tweets via the Twitter API. We then filtered out users with less than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system.) This filtering left us with 305 users as positive examples. We repeated this process for a group of randomly selected users. We randomly selected 3,000 twitter users who are veterans as per their introduction and have at least 25 tweets in last one week. After filtering (as above) in total 2,423 users remain, whose tweets are used as negative examples developing a 2,728 user's entire weeks' twitter posts where 305 users are self-claimed PTSD sufferers. We distributed Dryhootch chosen surveys among 1,200 users (305 users are self claimed PTSD sufferers and rest of them are randomly chosen from previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed as PTSD by any of the three surveys and rest of the 118 users are diagnosed with NO PTSD. Among the clinically diagnosed PTSD sufferers, 17 of them were not self-identified before. However, 7 of the self-identified PTSD sufferers are assessed with no PTSD by PTSD assessment tools. The response rates of PTSD and NO PTSD users are 27% and 12%. In summary, we have collected one week of tweets from 2,728 veterans where 305 users claimed to have diagnosed with PTSD. After distributing Dryhootch surveys, we have a dataset of 210 veteran twitter users among them 92 users are assessed with PTSD and 118 users are diagnosed with no PTSD using clinically validated surveys. The severity of the PTSD are estimated as Non-existent, light, moderate and high PTSD based on how many surveys support the existence of PTSD among the participants according to dryhootch manual BIBREF18, BIBREF19."
],
[
"We download 210 users' all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12,385 tweets. Fig FIGREF16 shows each of the 210 veteran twitter users' monthly average tweets. We categorize these Tweets into two groups: Tweets related to work and Tweets not related to work. That is, only the Tweets that use a form of the word “work*” (e.g. work,worked, working, worker, etc.) or “job*” (e.g. job, jobs, jobless, etc.) are identified as work-related Tweets, with the remaining categorized as non-work-related Tweets. This categorization method increases the likelihood that most Tweets in the work group are indeed talking about work or job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related Tweets, about 5.4% of all Tweets written in English (and 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of Tweets (i.e. overall Tweets, work-related Tweets, and non work-related Tweets) on a daily basis, and create a text file for each week for each group."
],
[
"We use Coppersmith proposed PTSD classification algorithm to develop our baseline blackbox model BIBREF11. We utilize our positive and negative PTSD data (+92,-118) to train three classifiers: (i) unigram language model (ULM) examining individual whole words, (ii) character n-gram language model (CLM), and (iii) LIWC based categorical models above all of the prior ones. The LMs have been shown effective for Twitter classification tasks BIBREF9 and LIWC has been previously used for analysis of mental health in Twitter BIBREF10. The language models measure the probability that a word (ULM) or a string of characters (CLM) was generated by the same underlying process as the training data. We first train one of each language model ($clm^{+}$ and $ulm^{+}$) from the tweets of PTSD users, and another model ($clm^{-}$ and $ulm^{-}$) from the tweets from No PTSD users. Each test tweet $t$ is scored by comparing probabilities from each LM called $s-score$",
"A threshold of 1 for $s-score$ divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a loglinear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contained a URL, as these often pertain to events external to the user.",
"We conduct a LIWC analysis of the PTSD and non-PTSD tweets to determine if there are differences in the language usage of PTSD users. We applied the LIWC battery and examined the distribution of words in their language. Each tweet was tokenized by separating on whitespace. For each user, for a subset of the LIWC categories, we measured the proportion of tweets that contained at least one word from that category. Specifically, we examined the following nine categories: first, second and third person pronouns, swear, anger, positive emotion, negative emotion, death, and anxiety words. Second person pronouns were used significantly less often by PTSD users, while third person pronouns and words about anxiety were used significantly more often."
],
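The baseline scoring above compares, for each tweet, its probability under a language model trained on PTSD users' tweets against its probability under a model trained on no-PTSD users' tweets. Below is a minimal character n-gram sketch of that idea; add-one smoothing and the use of a log-probability ratio (the paper's threshold of 1 on the ratio corresponds to 0 on the log ratio) are illustrative assumptions, not the exact formulation of Coppersmith et al.

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    return [text[i:i + n] for i in range(max(len(text) - n + 1, 0))]

class CharLM:
    """Tiny character n-gram model with add-one smoothing (illustrative only)."""
    def __init__(self, texts, n=3):
        self.n = n
        self.counts = Counter(g for t in texts for g in char_ngrams(t, n))
        self.total = sum(self.counts.values())
        self.vocab = len(self.counts) + 1

    def logprob(self, text):
        return sum(
            math.log((self.counts[g] + 1) / (self.total + self.vocab))
            for g in char_ngrams(text, self.n)
        )

def s_score(tweet, clm_pos, clm_neg):
    """Log-ratio score; values above 0 lean towards the PTSD-trained model."""
    return clm_pos.logprob(tweet) - clm_neg.logprob(tweet)

clm_pos = CharLM(["i cant sleep again nightmares all night"])
clm_neg = CharLM(["great run this morning then coffee with friends"])
print(s_score("another sleepless night", clm_pos, clm_neg) > 0)
```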
[
"The heart of LAXARY framework is the construction of PTSD Linguistic Dictionary. Prior works show that linguistic dictionary based text analysis has been much effective in twitter based sentiment analysis BIBREF21, BIBREF22. Our work is the first of its kind that develops its own linguistic dictionary to explain automatic PTSD assessment to confirm trustworthiness to clinicians."
],
[
"We use LIWC developed WordStat dictionary format for our text analysis BIBREF23. The LIWC application relies on an internal default dictionary that defines which words should be counted in the target text files. To avoid confusion in the subsequent discussion, text words that are read and analyzed by WordStat are referred to as target words. Words in the WordStat dictionary file will be referred to as dictionary words. Groups of dictionary words that tap a particular domain (e.g., negative emotion words) are variously referred to as subdictionaries or word categories. Fig FIGREF8 is a sample WordStat dictionary. There are several steps to use this dictionary which are stated as follows:",
"Pronoun selection: At first we have to define the pronouns of the target sentiment. Here we used first person singular number pronouns (i.e., I, me, mine etc.) that means we only count those sentences or segments which are only related to first person singular number i.e., related to the person himself.",
"Category selection: We have to define the categories of each word set thus we can analyze the categories as well as dimensions' text analysis scores. We chose three categories based on the three different surveys: 1) DOSPERT scale; 2) BSSS scale; and 3) VIAS scale.",
"Dimension selection: We have to define the word sets (also called dimension) for each category. We chose one dimension for each of the questions under each category to reflect real survey system evaluation. Our chosen categories are state in Fig FIGREF20.",
"Score calculation $\\alpha $-score: $\\alpha $-scores refer to the Cronbach's alphas for the internal reliability of the specific words within each category. The binary alphas are computed on the ratio of occurrence and non-occurrence of each dictionary word whereas the raw or uncorrected alphas are based on the percentage of use of each of the category words within texts."
],
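The steps above describe a WordStat-style dictionary organised as survey category -> question dimension -> word set, applied only to first-person-singular segments. The snippet below shows one way such a structure could be represented and queried; the category, dimension and word entries are placeholders, not the contents of the actual PTSD Linguistic Dictionary.

```python
# Placeholder dictionary: survey category -> question dimension -> word stems.
PTSD_DICTIONARY = {
    "DOSPERT": {
        "q1_health_safety": ["drink", "speed", "reckless"],
        "q2_gambling": ["bet", "casino", "poker"],
    },
    "BSSS": {"q1_sensation_seeking": ["thrill", "rush", "wild"]},
    "VIAS": {"q1_self_regulation": ["control", "calm", "patience"]},
}

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def dimension_hits(segment, dictionary=PTSD_DICTIONARY):
    """Count dictionary-word occurrences per (category, dimension) for one segment."""
    tokens = segment.lower().split()
    if not FIRST_PERSON_SINGULAR.intersection(tokens):
        return {}          # only first-person-singular segments are counted
    return {
        (category, dimension): sum(tokens.count(word) for word in words)
        for category, dims in dictionary.items()
        for dimension, words in dims.items()
    }

print(dimension_hits("i could not control the rush last night"))
```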
[
"After the PTSD Linguistic Dictionary has been created, we empirically evaluate its psychometric properties such as reliability and validity as per American Standards for educational and psychological testing guideline BIBREF24. In psychometrics, reliability is most commonly evaluated by Cronbach's alpha, which assesses internal consistency based on inter-correlations and the number of measured items. In the text analysis scenario, each word in our PTSD Linguistic dictionary is considered an item, and reliability is calculated based on each text file's response to each word item, which forms an $N$(number of text files) $\\times $ $J$(number of words or stems in a dictionary) data matrix. There are two ways to quantify such responses: using percentage data (uncorrected method), or using \"present or not\" data (binary method) BIBREF23. For the uncorrected method, the data matrix comprises percentage values of each word/stem are calculated from each text file. For the binary method, the data matrix quantifies whether or not a word was used in a text file where \"1\" represents yes and \"0\" represents no. Once the data matrix is created, it is used to calculate Cronbach's alpha based on its inter-correlation matrix among the word percentages. We assess reliability based on our selected 210 users' Tweets which further generated a 23,562 response matrix after running the PTSD Linguistic Dictionary for each user. The response matrix yields reliability of .89 based on the uncorrected method, and .96 based on the binary method, which confirm the high reliability of our PTSD Dictionary created PTSD survey based categories. After assessing the reliability of the PTSD Linguistic dictionary, we focus on the two most common forms of construct validity: convergent validity and discriminant validity BIBREF25. Convergent validity provides evidence that two measures designed to assess the same construct are indeed related; discriminate validity involves evidence that two measures designed to assess different constructs are not too strongly related. In theory, we expect that the PTSD Linguistic dictionary should be positively correlated with other negative PTSD constructs to show convergent validity, and not strongly correlated with positive PTSD constructs to show discriminant validity. To test these two types of validity, we use the same 210 users' tweets used for the reliability assessment. The results revealed that the PTSD Linguistic dictionary is indeed positively correlated with negative construct dictionaries, including the overall negative PTSD dictionary (r=3.664,p$<$.001). Table TABREF25 shows all 16 categorical dictionaries. These results provide strong support for the measurement validity for our newly created PTSD Linguistic dictionary.",
""
],
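The reliability computation described above reduces to Cronbach's alpha over an N x J response matrix, filled either with per-file word percentages (uncorrected method) or with 0/1 occurrence indicators (binary method). The sketch below uses the standard alpha formula; the random matrices stand in for the real 210-user responses.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an N (text files) x J (dictionary words) response matrix."""
    x = np.asarray(responses, dtype=float)
    j = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()   # sum of per-word variances
    total_variance = x.sum(axis=1).var(ddof=1)     # variance of per-file total scores
    return (j / (j - 1)) * (1.0 - item_variances / total_variance)

rng = np.random.default_rng(0)
binary = rng.integers(0, 2, size=(210, 60))        # "present or not" responses
uncorrected = rng.random((210, 60))                # percentage-style responses
print(cronbach_alpha(binary), cronbach_alpha(uncorrected))
```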
[
"We use the exact similar method of LIWC to extract $\\alpha $-scores for each dimension and categories except we use our generated PTSD Linguistic Dictionary for the task BIBREF23. Thus we have total 16 $\\alpha $-scores in total. Meanwhile, we propose a new type of feature in this regard, which we called scaling-score ($s$-score). $s$-score is calculated from $\\alpha $-scores. The purpose of using $s$-score is to put exact scores of each of the dimension and category thus we can apply the same method used in real weekly survey system. The idea is, we divide each category into their corresponding scale factor (i.e., for DOSPERT scale, BSSS scale and VIAS scales) and divide them into 8, 3 and 5 scaling factors which are used in real survey system. Then we set the $s$-score from the scaling factors from the $\\alpha $-scores of the corresponding dimension of the questions. The algorithm is stated in Figure FIGREF23. Following Fig FIGREF23, we calculate the $s$-score for each dimension. Then we add up all the $s$-score of the dimensions to calculate cumulative $s$-score of particular categories which is displayed in Fig FIGREF22. Finally, we have total 32 features among them 16 are $\\alpha $-scores and 16 are $s$-scores for each category (i.e. each question). We add both of $\\alpha $ and $s$ scores together and scale according to their corresponding survey score scales using min-max standardization. Then, the final output is a 16 valued matrix which represent the score for each questions from three different Dryhootch surveys. We use the output to fill up each survey, estimate the prevalence of PTSD and its intensity based on each tool's respective evaluation metric."
],
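The alpha-score to s-score conversion above is only summarised in the text (the full algorithm is in Fig FIGREF23), so the sketch below should be read as one plausible interpretation: each dimension's alpha-score is min-max standardized within its category and mapped onto that survey's scale factor (8 for DOSPERT, 3 for BSSS, 5 for VIAS), with rounding to the nearest scale step as an assumption.

```python
SCALE_FACTORS = {"DOSPERT": 8, "BSSS": 3, "VIAS": 5}

def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def s_scores(alpha_scores, survey):
    """Map a category's per-dimension alpha-scores onto the survey's scale."""
    scale = SCALE_FACTORS[survey]
    return [round(v * scale) for v in min_max(alpha_scores)]

def category_s_score(alpha_scores, survey):
    """Cumulative s-score of a category = sum over its question dimensions."""
    return sum(s_scores(alpha_scores, survey))

print(category_s_score([0.21, 0.54, 0.87, 0.40, 0.66], "DOSPERT"))
```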
[
"To validate the performance of LAXARY framework, we first divide the entire 210 users' twitter posts into training and test dataset. Then, we first developed PTSD Linguistic Dictionary from the twitter posts from training dataset and apply LAXARY framework on test dataset."
],
[
"To provide an initial results, we take 50% of users' last week's (the week they responded of having PTSD) data to develop PTSD Linguistic dictionary and apply LAXARY framework to fill up surveys on rest of 50% dataset. The distribution of this training-test dataset segmentation followed a 50% distribution of PTSD and No PTSD from the original dataset. Our final survey based classification results showed an accuracy of 96% in detecting PTSD and mean squared error of 1.2 in estimating its intensity given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively. Table TABREF29 shows the classification details of our experiment which provide the very good accuracy of our classification. To compare the outperformance of our method, we also implemented Coppersmith et. al. proposed method and achieved an 86% overall accuracy of detecting PTSD users BIBREF11 following the same training-test dataset distribution. Fig FIGREF28 illustrates the comparisons between LAXARY and Coppersmith et. al. proposed method. Here we can see, the outperformance of our proposed method as well as the importance of $s-score$ estimation. We also illustrates the importance of $\\alpha -score$ and $S-score$ in Fig FIGREF30. Fig FIGREF30 illustrates that if we change the number of training samples (%), LAXARY models outperforms Coppersmith et. al. proposed model under any condition. In terms of intensity, Coppersmith et. al. totally fails to provide any idea however LAXARY provides extremely accurate measures of intensity estimation for PTSD sufferers (as shown in Fig FIGREF31) which can be explained simply providing LAXARY model filled out survey details. Table TABREF29 shows the details of accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows the classification accuracy changes over the training sample sizes for each survey which shows that DOSPERT scale outperform other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week diagnosis of PTSD was taken), there are no significant patterns of PTSD detection."
],
[
"LAXARY is a highly ambitious model that targets to fill up clinically validated survey tools using only twitter posts. Unlike the previous twitter based mental health assessment tools, LAXARY provides a clinically interpretable model which can provide better classification accuracy and intensity of PTSD assessment and can easily obtain the trust of clinicians. The central challenge of LAXARY is to search twitter users from twitter search engine and manually label them for analysis. While developing PTSD Linguistic Dictionary, although we followed exactly same development idea of LIWC WordStat dictionary and tested reliability and validity, our dictionary was not still validated by domain experts as PTSD detection is highly sensitive issue than stress/depression detection. Moreover, given the extreme challenges of searching veterans in twitter using our selection and inclusion criteria, it was extremely difficult to manually find the evidence of the self-claimed PTSD sufferers. Although, we have shown extremely promising initial findings about the representation of a blackbox model into clinically trusted tools, using only 210 users' data is not enough to come up with a trustworthy model. Moreover, more clinical validation must be done in future with real clinicians to firmly validate LAXARY model provided PTSD assessment outcomes. In future, we aim to collect more data and run not only nationwide but also international-wide data collection to establish our innovation into a real tool. Apart from that, as we achieved promising results in detecting PTSD and its intensity using only twitter data, we aim to develop Linguistic Dictionary for other mental health issues too. Moreover, we will apply our proposed method in other types of mental illness such as depression, bipolar disorder, suicidal ideation and seasonal affective disorder (SAD) etc. As we know, accuracy of particular social media analysis depends on the dataset mostly. We aim to collect more data engaging more researchers to establish a set of mental illness specific Linguistic Database and evaluation technique to solidify the genralizability of our proposed method."
],
[
"To promote better comfort to the trauma patients, it is really important to detect Post Traumatic Stress Disorder (PTSD) sufferers in time before going out of control that may result catastrophic impacts on society, people around or even sufferers themselves. Although, psychiatrists invented several clinical diagnosis tools (i.e., surveys) by assessing symptoms, signs and impairment associated with PTSD, most of the times, the process of diagnosis happens at the severe stage of illness which may have already caused some irreversible damages of mental health of the sufferers. On the other hand, due to lack of explainability, existing twitter based methods are not trusted by the clinicians. In this paper, we proposed, LAXARY, a novel method of filling up PTSD assessment surveys using weekly twitter posts. As the clinical surveys are trusted and understandable method, we believe that this method will be able to gain trust of clinicians towards early detection of PTSD. Moreover, our proposed LAXARY model, which is first of its kind, can be used to develop any type of mental disorder Linguistic Dictionary providing a generalized and trustworthy mental health assessment framework of any kind."
]
],
"section_name": [
"Introduction",
"Overview",
"Related Works",
"Demographics of Clinically Validated PTSD Assessment Tools",
"Twitter-based PTSD Detection",
"Twitter-based PTSD Detection ::: Data Collection",
"Twitter-based PTSD Detection ::: Pre-processing",
"Twitter-based PTSD Detection ::: PTSD Detection Baseline Model",
"LAXARY: Explainable PTSD Detection Model",
"LAXARY: Explainable PTSD Detection Model ::: PTSD Linguistic Dictionary Creation",
"LAXARY: Explainable PTSD Detection Model ::: Psychometric Validation of PTSD Linguistic Dictionary",
"LAXARY: Explainable PTSD Detection Model ::: Feature Extraction and Survey Score Estimation",
"Experimental Evaluation",
"Experimental Evaluation ::: Results",
"Challenges and Future Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"4e3a79dc56c6f39d1bec7bac257c57f279431967"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"fcf589c48d32bdf0ef4eab547f9ae22412f5805a"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"5fb7cea5f88219c0c6b7de07c638124a52ef5701",
"b62b56730f7536bfcb03b0e784d74674badcc806"
],
"answer": [
{
"evidence": [
"To provide an initial results, we take 50% of users' last week's (the week they responded of having PTSD) data to develop PTSD Linguistic dictionary and apply LAXARY framework to fill up surveys on rest of 50% dataset. The distribution of this training-test dataset segmentation followed a 50% distribution of PTSD and No PTSD from the original dataset. Our final survey based classification results showed an accuracy of 96% in detecting PTSD and mean squared error of 1.2 in estimating its intensity given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively. Table TABREF29 shows the classification details of our experiment which provide the very good accuracy of our classification. To compare the outperformance of our method, we also implemented Coppersmith et. al. proposed method and achieved an 86% overall accuracy of detecting PTSD users BIBREF11 following the same training-test dataset distribution. Fig FIGREF28 illustrates the comparisons between LAXARY and Coppersmith et. al. proposed method. Here we can see, the outperformance of our proposed method as well as the importance of $s-score$ estimation. We also illustrates the importance of $\\alpha -score$ and $S-score$ in Fig FIGREF30. Fig FIGREF30 illustrates that if we change the number of training samples (%), LAXARY models outperforms Coppersmith et. al. proposed model under any condition. In terms of intensity, Coppersmith et. al. totally fails to provide any idea however LAXARY provides extremely accurate measures of intensity estimation for PTSD sufferers (as shown in Fig FIGREF31) which can be explained simply providing LAXARY model filled out survey details. Table TABREF29 shows the details of accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows the classification accuracy changes over the training sample sizes for each survey which shows that DOSPERT scale outperform other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week diagnosis of PTSD was taken), there are no significant patterns of PTSD detection."
],
"extractive_spans": [],
"free_form_answer": "Given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively, the estimated intensity is established as mean squared error.",
"highlighted_evidence": [
" Our final survey based classification results showed an accuracy of 96% in detecting PTSD and mean squared error of 1.2 in estimating its intensity given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of individual's daily activities such as, The Berlin Social Support Scales (BSSS) BIBREF16 and Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess the PTSD among war veterans and consider rest of them as irrelevant to PTSD. The details of dryhootch chosen survey scale are stated in Table TABREF13. Table!TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he is in critical situation in terms of risk taking symptoms of PTSD. Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )",
"High risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.",
"Moderate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.",
"Low risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.",
"No PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD."
],
"extractive_spans": [],
"free_form_answer": "defined into four categories from high risk, moderate risk, to low risk",
"highlighted_evidence": [
"Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )\n\nHigh risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.\n\nModerate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.\n\nLow risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.\n\nNo PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"348b89ed7cf9b893cd45d99de412e0f424f97f2a",
"9a5f2c8b73ad98f1e28c384471b29b92bcf38de5"
],
"answer": [
{
"evidence": [
"A threshold of 1 for $s-score$ divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a loglinear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contained a URL, as these often pertain to events external to the user."
],
"extractive_spans": [
" For each user, we calculate the proportion of tweets scored positively by each LIWC category."
],
"free_form_answer": "",
"highlighted_evidence": [
"For each user, we calculate the proportion of tweets scored positively by each LIWC category. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment."
],
"extractive_spans": [
"to calculate the possible scores of each survey question using PTSD Linguistic Dictionary "
],
"free_form_answer": "",
"highlighted_evidence": [
"LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"10d346425fb3693cdf36e224fb28ca37d57b71a0"
],
"answer": [
{
"evidence": [
"We download 210 users' all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12,385 tweets. Fig FIGREF16 shows each of the 210 veteran twitter users' monthly average tweets. We categorize these Tweets into two groups: Tweets related to work and Tweets not related to work. That is, only the Tweets that use a form of the word “work*” (e.g. work,worked, working, worker, etc.) or “job*” (e.g. job, jobs, jobless, etc.) are identified as work-related Tweets, with the remaining categorized as non-work-related Tweets. This categorization method increases the likelihood that most Tweets in the work group are indeed talking about work or job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related Tweets, about 5.4% of all Tweets written in English (and 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of Tweets (i.e. overall Tweets, work-related Tweets, and non work-related Tweets) on a daily basis, and create a text file for each week for each group."
],
"extractive_spans": [
"210"
],
"free_form_answer": "",
"highlighted_evidence": [
"We download 210 users' all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12,385 tweets. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"6185d05f806ff3e054ec5bb7fd773679b7fbb6d9"
],
"answer": [
{
"evidence": [
"We use an automated regular expression based searching to find potential veterans with PTSD in twitter, and then refine the list manually. First, we select different keywords to search twitter users of different categories. For example, to search self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD for example, post trauma, post traumatic disorder, PTSD etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet posts. To search veterans, we mostly visit to different twitter accounts of veterans organizations such as \"MA Women Veterans @WomenVeterans\", \"Illinois Veterans @ILVetsAffairs\", \"Veterans Benefits @VAVetBenefits\" etc. We define an inclusion criteria as follows: one twitter user will be part of this study if he/she describes himself/herself as a veteran in the introduction and have at least 25 tweets in last week. After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to be diagnosed with PTSD in their twitter posts. We find 685 matching tweets which are manually reviewed to determine if they indicate a genuine statement of a diagnosis for PTSD. Next, we select the username that authored each of these tweets and retrieve last week's tweets via the Twitter API. We then filtered out users with less than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system.) This filtering left us with 305 users as positive examples. We repeated this process for a group of randomly selected users. We randomly selected 3,000 twitter users who are veterans as per their introduction and have at least 25 tweets in last one week. After filtering (as above) in total 2,423 users remain, whose tweets are used as negative examples developing a 2,728 user's entire weeks' twitter posts where 305 users are self-claimed PTSD sufferers. We distributed Dryhootch chosen surveys among 1,200 users (305 users are self claimed PTSD sufferers and rest of them are randomly chosen from previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed as PTSD by any of the three surveys and rest of the 118 users are diagnosed with NO PTSD. Among the clinically diagnosed PTSD sufferers, 17 of them were not self-identified before. However, 7 of the self-identified PTSD sufferers are assessed with no PTSD by PTSD assessment tools. The response rates of PTSD and NO PTSD users are 27% and 12%. In summary, we have collected one week of tweets from 2,728 veterans where 305 users claimed to have diagnosed with PTSD. After distributing Dryhootch surveys, we have a dataset of 210 veteran twitter users among them 92 users are assessed with PTSD and 118 users are diagnosed with no PTSD using clinically validated surveys. The severity of the PTSD are estimated as Non-existent, light, moderate and high PTSD based on how many surveys support the existence of PTSD among the participants according to dryhootch manual BIBREF18, BIBREF19.",
"There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of individual's daily activities such as, The Berlin Social Support Scales (BSSS) BIBREF16 and Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess the PTSD among war veterans and consider rest of them as irrelevant to PTSD. The details of dryhootch chosen survey scale are stated in Table TABREF13. Table!TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he is in critical situation in terms of risk taking symptoms of PTSD. Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )"
],
"extractive_spans": [
"DOSPERT, BSSS and VIAS"
],
"free_form_answer": "",
"highlighted_evidence": [
"We distributed Dryhootch chosen surveys among 1,200 users (305 users are self claimed PTSD sufferers and rest of them are randomly chosen from previous 2,423 users) and received 210 successful responses. ",
"Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they evaluate only on English datasets?",
"Do the authors mention any possible confounds in this study?",
"How is the intensity of the PTSD established?",
"How is LIWC incorporated into this system?",
"How many twitter users are surveyed using the clinically validated survey?",
"Which clinically validated survey tools are used?"
],
"question_id": [
"53712f0ce764633dbb034e550bb6604f15c0cacd",
"0bffc3d82d02910d4816c16b390125e5df55fd01",
"bdd8368debcb1bdad14c454aaf96695ac5186b09",
"3334f50fe1796ce0df9dd58540e9c08be5856c23",
"7081b6909cb87b58a7b85017a2278275be58bf60",
"1870f871a5bcea418c44f81f352897a2f53d0971"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Overview of our framework",
"Fig. 2. WordStat dictionary sample",
"TABLE I DRYHOOTCH CHOSEN PTSD ASSESSMENT SURVEYS (D: DOSPERT, B: BSSS AND V: VIAS) DEMOGRAPHICS",
"TABLE II SAMPLE DRYHOOTCH CHOSEN QUESTIONS FROM DOSPERT",
"Fig. 3. Each 210 users’ average tweets per month",
"Fig. 4. Category Details",
"Fig. 5. S-score table details",
"Fig. 6. Comparisons between Coppersmith et. al. and our method",
"TABLE V LAXARY MODEL BASED CLASSIFICATION DETAILS",
"Fig. 7. Percentages of Training dataset and their PTSD detection accuracy results comparisons. Rest of the dataset has been used for testing",
"Fig. 9. Percentages of Training dataset and their Accuracies for each Survey Tool. Rest of the dataset has been used for testing",
"Fig. 8. Percentages of Training dataset and their Mean Squared Error (MSE) of PTSD Intensity. Rest of the dataset has been used for testing",
"Fig. 10. Weekly PTSD detection accuracy change comparisons with baseline model"
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-TableI-1.png",
"3-TableII-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png",
"5-Figure5-1.png",
"6-Figure6-1.png",
"6-TableV-1.png",
"7-Figure7-1.png",
"7-Figure9-1.png",
"7-Figure8-1.png",
"7-Figure10-1.png"
]
} | [
"How is the intensity of the PTSD established?"
] | [
[
"2003.07433-Demographics of Clinically Validated PTSD Assessment Tools-4",
"2003.07433-Demographics of Clinically Validated PTSD Assessment Tools-0",
"2003.07433-Demographics of Clinically Validated PTSD Assessment Tools-3",
"2003.07433-Demographics of Clinically Validated PTSD Assessment Tools-1",
"2003.07433-Demographics of Clinically Validated PTSD Assessment Tools-2",
"2003.07433-Experimental Evaluation ::: Results-0"
]
] | [
"defined into four categories from high risk, moderate risk, to low risk"
] | 11 |
1904.09678 | UniSent: Universal Adaptable Sentiment Lexica for 1000+ Languages | In this paper, we introduce UniSent, universal sentiment lexica for 1000 languages, created using an English sentiment lexicon and a massively parallel corpus in the Bible domain. To the best of our knowledge, UniSent is the largest sentiment resource to date in terms of the number of covered languages, including many low-resource languages. To create UniSent, we propose Adapted Sentiment Pivot, a novel method that combines annotation projection, vocabulary expansion, and unsupervised domain adaptation. We evaluate the quality of UniSent for Macedonian, Czech, German, Spanish, and French and show that its quality is comparable to manually or semi-manually created sentiment resources. With the publication of this paper, we release the UniSent lexica as well as the code for the Adapted Sentiment Pivot method. | {
"paragraphs": [
[
"Sentiment classification is an important task which requires either word level or document level sentiment annotations. Such resources are available for at most 136 languages BIBREF0 , preventing accurate sentiment classification in a low resource setup. Recent research efforts on cross-lingual transfer learning enable to train models in high resource languages and transfer this information into other, low resource languages using minimal bilingual supervision BIBREF1 , BIBREF2 , BIBREF3 . Besides that, little effort has been spent on the creation of sentiment lexica for low resource languages (e.g., BIBREF0 , BIBREF4 , BIBREF5 ). We create and release Unisent, the first massively cross-lingual sentiment lexicon in more than 1000 languages. An extensive evaluation across several languages shows that the quality of Unisent is close to manually created resources. Our method is inspired by BIBREF6 with a novel combination of vocabulary expansion and domain adaptation using embedding spaces. Similar to our work, BIBREF7 also use massively parallel corpora to project POS tags and dependency relations across languages. However, their approach is based on assignment of the most probable label according to the alignment model from the source to the target language and does not include any vocabulary expansion or domain adaptation and do not use the embedding graphs."
],
[
"Our method, Adapted Sentiment Pivot requires a sentiment lexicon in one language (e.g. English) as well as a massively parallel corpus. Following steps are performed on this input."
],
[
"Our goal is to evaluate the quality of UniSent against several manually created sentiment lexica in different domains to ensure its quality for the low resource languages. We do this in several steps.",
"As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 . These lexica contain general domain words (as opposed to Twitter or Bible). As gold standard for twitter domain we use emoticon dataset and perform emoticon sentiment prediction BIBREF16 , BIBREF17 .",
"We use the (manually created) English sentiment lexicon (WKWSCI) in BIBREF18 as a resource to be projected over languages. For the projection step (Section SECREF1 ) we use the massively parallel Bible corpus in BIBREF8 . We then propagate the projected sentiment polarities to all words in the Wikipedia corpus. We chose Wikipedia here because its domain is closest to the manually annotated sentiment lexica we use to evaluate UniSent. In the adaptation step, we compute the shift between the vocabularies in the Bible and Wikipedia corpora. To show that our adaptation method also works well on domains like Twitter, we propose a second evaluation in which we use Adapted Sentiment Pivot to predict the sentiment of emoticons in Twitter.",
"To create our test sets, we first split UniSent and our gold standard lexica as illustrated in Figure FIGREF11 . We then form our training and test sets as follows:",
"(i) UniSent-Lexicon: we use words in UniSent for the sentiment learning in the target domain; for this purpose, we use words INLINEFORM0 .",
"(ii) Baseline-Lexicon: we use words in the gold standard lexicon for the sentiment learning in the target domain; for this purpose we use words INLINEFORM0 .",
"(iii) Evaluation-Lexicon: we randomly exclude a set of words the baseline-lexicon INLINEFORM0 . In selection of the sampling size we make sure that INLINEFORM1 and INLINEFORM2 would contain a comparable number of words.",
""
],
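The three lexicon subsets above correspond to simple set operations over the target embedding vocabulary, the UniSent lexicon and the gold-standard lexicon (sets A, B and C of Figure FIGREF11). A small sketch under hypothetical variable names, with the evaluation sample drawn from the baseline set:

```python
import random

def build_splits(unisent_words, gold_words, embedding_vocab, eval_fraction=0.5, seed=0):
    """Form the UniSent-Lexicon, Baseline-Lexicon and Evaluation-Lexicon word sets."""
    unisent = set(unisent_words) & set(embedding_vocab)
    gold = set(gold_words) & set(embedding_vocab)

    c = unisent & gold            # shared words (set C)
    a = unisent - c               # UniSent-Lexicon training words (set A)
    b = gold - c                  # Baseline-Lexicon words (set B)

    rng = random.Random(seed)
    evaluation = set(rng.sample(sorted(b), int(eval_fraction * len(b))))
    baseline_train = b - evaluation
    return a, baseline_train, evaluation
```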
[
"In Table TABREF13 we compare the quality of UniSent with the Baseline-Lexicon as well as with the gold standard lexicon for general domain data. The results show that (i) UniSent clearly outperforms the baseline for all languages (ii) the quality of UniSent is close to manually annotated data (iii) the domain adaptation method brings small improvements for morphologically poor languages. The modest gains could be because our drift weighting method (Section SECREF3 ) mainly models a sense shift between words which is not always equivalent to a polarity shift.",
"In Table TABREF14 we compare the quality of UniSent with the gold standard emoticon lexicon in the Twitter domain. The results show that (i) UniSent clearly outperforms the baseline and (ii) our domain adaptation technique brings small improvements for French and Spanish."
],
[
"Using our novel Adapted Sentiment Pivot method, we created UniSent, a sentiment lexicon covering over 1000 (including many low-resource) languages in several domains. The only necessary resources to create UniSent are a sentiment lexicon in any language and a massively parallel corpus that can be small and domain specific. Our evaluation showed that the quality of UniSent is closed to manually annotated resources.",
" "
]
],
"section_name": [
"Introduction",
"Method",
"Experimental Setup",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"97009bed24107de806232d7cf069f51053d7ba5e",
"e38ed05ec140abd97006a8fa7af9a7b4930247df"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonians, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline is constantly considering the majority label. The last two columns indicate the performance of UniSent after drift weighting."
],
"extractive_spans": [],
"free_form_answer": "Accuracy and the macro-F1 (averaged F1 over positive and negative classes) are used as a measure of quality.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonians, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline is constantly considering the majority label. The last two columns indicate the performance of UniSent after drift weighting."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"d1204f71bd3c78a11b133016f54de78e8eaecf6e"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"17db53c0c6f13fe1d43eee276a9554677f007eef"
],
"answer": [
{
"evidence": [
"As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 . These lexica contain general domain words (as opposed to Twitter or Bible). As gold standard for twitter domain we use emoticon dataset and perform emoticon sentiment prediction BIBREF16 , BIBREF17 ."
],
"extractive_spans": [
"manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15"
],
"free_form_answer": "",
"highlighted_evidence": [
"As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"how is quality measured?",
"how many languages exactly is the sentiment lexica for?",
"what sentiment sources do they compare with?"
],
"question_id": [
"8f87215f4709ee1eb9ddcc7900c6c054c970160b",
"b04098f7507efdffcbabd600391ef32318da28b3",
"8fc14714eb83817341ada708b9a0b6b4c6ab5023"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 1: Neighbors of word ’sensual’ in Spanish, in bible embedding graph (a) and twitter embedding graph (b). Our unsupervised drift weighting method found this word in Spanish to be the most changing word from bible context to the twitter context. Looking more closely at the neighbors, the word sensual in the biblical context has been associated with a negative sentiment of sins. However, in the twitter domain, it has a positive sentiment. This example shows how our unsupervised method can improve the quality of sentiment lexicon.",
"Figure 2: Data split used in the experimental setup of UniSent evaluation: Set (C) is the intersection of the target embedding space words (Wikipedia/Emoticon) and the UniSent lexicon as well as the manually created lexicon. Set (A) is the intersection of the target embedding space words and the UniSent lexicon, excluding set (C). Set (B) is the intersection of the target embedding space words and the manually created lexicon, excluding set (C).",
"Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonians, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline is constantly considering the majority label. The last two columns indicate the performance of UniSent after drift weighting.",
"Table 2: Comparison of domain adapted and vanilla UniSent for Emoticon sentiment prediction using monlingual twitter embeddings in German, Italian, French, and Spanish."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} | [
"how is quality measured?"
] | [
[
"1904.09678-4-Table1-1.png"
]
] | [
"Accuracy and the macro-F1 (averaged F1 over positive and negative classes) are used as a measure of quality."
] | 13 |
1910.04269 | Spoken Language Identification using ConvNets | Language Identification (LI) is an important first step in several speech processing systems. With a growing number of voice-based assistants, speech LI has emerged as a widely researched field. To approach the problem of identifying languages, we can either adopt an implicit approach where only the speech for a language is present or an explicit one where text is available with its corresponding transcript. This paper focuses on an implicit approach due to the absence of transcriptive data. This paper benchmarks existing models and proposes a new attention based model for language identification which uses log-Mel spectrogram images as input. We also present the effectiveness of raw waveforms as features to neural network models for LI tasks. For training and evaluation of models, we classified six languages (English, French, German, Spanish, Russian and Italian) with an accuracy of 95.4% and four languages (English, French, German, Spanish) with an accuracy of 96.3% obtained from the VoxForge dataset. This approach can further be scaled to incorporate more languages. | {
"paragraphs": [
[
"Language Identification (LI) is a problem which involves classifying the language being spoken by a speaker. LI systems can be used in call centers to route international calls to an operator who is fluent in that identified language BIBREF0. In speech-based assistants, LI acts as the first step which chooses the corresponding grammar from a list of available languages for its further semantic analysis BIBREF1. It can also be used in multi-lingual voice-controlled information retrieval systems, for example, Apple Siri and Amazon Alexa.",
"Over the years, studies have utilized many prosodic and acoustic features to construct machine learning models for LI systems BIBREF2. Every language is composed of phonemes, which are distinct unit of sounds in that language, such as b of black and g of green. Several prosodic and acoustic features are based on phonemes, which become the underlying features on whom the performance of the statistical model depends BIBREF3, BIBREF4. If two languages have many overlapping phonemes, then identifying them becomes a challenging task for a classifier. For example, the word cat in English, kat in Dutch, katze in German have different consonants but when used in a speech they all would sound quite similar.",
"Due to such drawbacks several studies have switched over to using Deep Neural Networks (DNNs) to harness their novel auto-extraction techniques BIBREF1, BIBREF5. This work follows an implicit approach for identifying six languages with overlapping phonemes on the VoxForge BIBREF6 dataset and achieves 95.4% overall accuracy.",
"In previous studies BIBREF1, BIBREF7, BIBREF5, authors use log-Mel spectrum of a raw audio as inputs to their models. One of our contributions is to enhance the performance of this approach by utilising recent techniques like Mixup augmentation of inputs and exploring the effectiveness of Attention mechanism in enhancing performance of neural network. As log-Mel spectrum needs to be computed for each raw audio input and processing time for generating log-Mel spectrum increases linearly with length of audio, this acts as a bottleneck for these models. Hence, we propose the use of raw audio waveforms as inputs to deep neural network which boosts performance by avoiding additional overhead of computing log-Mel spectrum for each audio. Our 1D-ConvNet architecture auto-extracts and classifies features from this raw audio input.",
"The structure of the work is as follows. In Section 2 we discuss about the previous related studies in this field. The model architecture for both the raw waveforms and log-Mel spectrogram images is discussed in Section 3 along with the a discussion on hyperparameter space exploration. In Section 4 we present the experimental results. Finally, in Section 5 we discuss the conclusions drawn from the experiment and future work."
],
[
"Extraction of language dependent features like prosody and phonemes was a popular approach to classify spoken languages BIBREF8, BIBREF9, BIBREF10. Following their success in speaker verification systems, i-vectors have also been used as features in various classification networks. These approaches required significant domain knowledge BIBREF11, BIBREF9. Nowadays most of the attempts on spoken language identification rely on neural networks for meaningful feature extraction and classification BIBREF12, BIBREF13.",
"Revay et al. BIBREF5 used the ResNet50 BIBREF14 architecture for classifying languages by generating the log-Mel spectra of each raw audio. The model uses a cyclic learning rate where learning rate increases and then decreases linearly. Maximum learning rate for a cycle is set by finding the optimal learning rate using fastai BIBREF15 library. The model classified six languages – English, French, Spanish, Russian, Italian and German – and achieving an accuracy of 89.0%.",
"Gazeau et al. BIBREF16 in his research showed how Neural Networks, Support Vector Machine and Hidden Markov Model (HMM) can be used to identify French, English, Spanish and German. Dataset was prepared using voice samples from Youtube News BIBREF17and VoxForge BIBREF6 datasets. Hidden Markov models convert speech into a sequence of vectors, was used to capture temporal features in speech. HMMs trained on VoxForge BIBREF6 dataset performed best in comparison to other models proposed by him on same VoxForge dataset. They reported an accuracy of 70.0%.",
"Bartz et al. BIBREF1 proposed two different hybrid Convolutional Recurrent Neural Networks for language identification. They proposed a new architecture for extracting spatial features from log-Mel spectra of raw audio using CNNs and then using RNNs for capturing temporal features to identify the language. This model achieved an accuracy of 91.0% on Youtube News Dataset BIBREF17. In their second architecture they used the Inception-v3 BIBREF18 architecture to extract spatial features which were then used as input for bi-directional LSTMs to predict the language accurately. This model achieved an accuracy of 96.0% on four languages which were English, German, French and Spanish. They also trained their CNN model (obtained after removing RNN from CRNN model) and the Inception-v3 on their dataset. However they were not able to achieve better results achieving and reported 90% and 95% accuracies, respectively.",
"Kumar et al. BIBREF0 used Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction coefficients (PLP), Bark Frequency Cepstral Coefficients (BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) as features for language identification. BFCC and RPLP are hybrid features derived using MFCC and PLP. They used two different models based on Vector Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model (GMM) for classification. These classification models were trained with different features. The authors were able to show that these models worked better with hybrid features (BFCC and RPLP) as compared to conventional features (MFCC and PLP). GMM combined with RPLP features gave the most promising results and achieved an accuracy of 88.8% on ten languages. They designed their own dataset comprising of ten languages being Dutch, English, French, German, Italian, Russian, Spanish, Hindi, Telegu, and Bengali.",
"Montavon BIBREF7 generated Mel spectrogram as features for a time-delay neural network (TDNN). This network had two-dimensional convolutional layers for feature extraction. An elaborate analysis of how deep architectures outperform their shallow counterparts is presented in this reseacrch. The difficulties in classifying perceptually similar languages like German and English were also put forward in this work. It is mentioned that the proposed approach is less robust to new speakers present in the test dataset. This method was able to achieve an accuracy of 91.2% on dataset comprising of 3 languages – English, French and German.",
"In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)."
],
[
"Several state-of-the-art results on various audio classification tasks have been obtained by using log-Mel spectrograms of raw audio, as features BIBREF19. Convolutional Neural Networks have demonstrated an excellent performance gain in classification of these features BIBREF20, BIBREF21 against other machine learning techniques. It has been shown that using attention layers with ConvNets further enhanced their performance BIBREF22. This motivated us to develop a CNN-based architecture with attention since this approach hasn’t been applied to the task of language identification before.",
"Recently, using raw audio waveform as features to neural networks has become a popular approach in audio classification BIBREF23, BIBREF22. Raw waveforms have several artifacts which are not effectively captured by various conventional feature extraction techniques like Mel Frequency Cepstral Coefficients (MFCC), Constant Q Transform (CQT), Fast Fourier Transform (FFT), etc.",
"Audio files are a sequence of spoken words, hence they have temporal features too.A CNN is better at capturing spatial features only and RNNs are better at capturing temporal features as demonstrated by Bartz et al. BIBREF1 using audio files. Therefore, we combined both of these to make a CRNN model.",
"We propose three types of models to tackle the problem with different approaches, discussed as follows."
],
[
"As an average human's voice is around 300 Hz and according to Nyquist-Shannon sampling theorem all the useful frequencies (0-300 Hz) are preserved with sampling at 8 kHz, therefore, we sampled raw audio files from all six languages at 8 kHz",
"The average length of audio files in this dataset was about 10.4 seconds and standard deviation was 2.3 seconds. For our experiments, the audio length was set to 10 seconds. If the audio files were shorter than 10 second, then the data was repeated and concatenated. If audio files were longer, then the data was truncated."
],
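A minimal sketch of the preprocessing described above: load each file at 8 kHz, then repeat-and-concatenate short clips and truncate long ones to exactly 10 seconds. LibROSA is used for loading because the paper uses it later for feature extraction; the loading details are otherwise an assumption.

```python
import numpy as np
import librosa

SAMPLE_RATE = 8000
TARGET_LENGTH = 10 * SAMPLE_RATE        # 10 s at 8 kHz = 80,000 samples

def load_fixed_length(path):
    """Load an audio file at 8 kHz and force it to exactly 10 seconds."""
    audio, _ = librosa.load(path, sr=SAMPLE_RATE, mono=True)
    if len(audio) < TARGET_LENGTH:
        repeats = int(np.ceil(TARGET_LENGTH / len(audio)))
        audio = np.tile(audio, repeats)  # repeat and concatenate short clips
    return audio[:TARGET_LENGTH]         # truncate long clips
```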
[
"We applied the following design principles to all our models:",
"Every convolutional layer is always followed by an appropriate max pooling layer. This helps in containing the explosion of parameters and keeps the model small and nimble.",
"Convolutional blocks are defined as an individual block with multiple pairs of one convolutional layer and one max pooling layer. Each convolutional block is preceded or succeded by a convolutional layer.",
"Batch Normalization and Rectified linear unit activations were applied after each convolutional layer. Batch Normalization helps speed up convergence during training of a neural network.",
"Model ends with a dense layer which acts the final output layer."
],
[
"As the sampling rate is 8 kHz and audio length is 10 s, hence the input is raw audio to the models with input size of (batch size, 1, 80000). In Table TABREF10, we present a detailed layer-by-layer illustration of the model along with its hyperparameter.",
"-10pt"
],
[
"Tuning hyperparameters is a cumbersome process as the hyperparamter space expands exponentially with the number of parameters, therefore efficient exploration is needed for any feasible study. We used the random search algorithm supported by Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF12, various hyperparameters we considered are plotted against the validation accuracy as violin plots. Our observations for each hyperparameter are summarized below:",
"Number of filters in first layer: We observe that having 128 filters gives better results as compared to other filter values of 32 and 64 in the first layer. A higher number of filters in the first layer of network is able to preserve most of the characteristics of input.",
"Kernel Size: We varied the receptive fields of convolutional layers by choosing the kernel size from among the set of {3, 5, 7, 9}. We observe that a kernel size of 9 gives better accuracy at the cost of increased computation time and larger number of parameters. A large kernel size is able to capture longer patterns in its input due to bigger receptive power which results in an improved accuracy.",
"Dropout: Dropout randomly turns-off (sets to 0) various individual nodes during training of the network. In a deep CNN it is important that nodes do not develop a co-dependency amongst each other during training in order to prevent overfitting on training data BIBREF25. Dropout rate of $0.1$ works well for our model. When using a higher dropout rate the network is not able to capture the patterns in training dataset.",
"Batch Size: We chose batch sizes from amongst the set {32, 64, 128}. There is more noise while calculating error in a smaller batch size as compared to a larger one. This tends to have a regularizing effect during training of the network and hence gives better results. Thus, batch size of 32 works best for the model.",
"Layers in Convolutional block 1 and 2: We varied the number of layers in both the convolutional blocks. If the number of layers is low, then the network does not have enough depth to capture patterns in the data whereas having large number of layers leads to overfitting on the data. In our network, two layers in the first block and one layer in the second block give optimal results."
],
[
"Log-Mel spectrogram is the most commonly used method for converting audio into the image domain. The audio data was again sampled at 8 kHz. The input to this model was the log-Mel spectra. We generated log-Mel spectrogram using the LibROSA BIBREF26 library. In Table TABREF16, we present a detailed layer-by-layer illustration of the model along with its hyperparameter."
],
[
"We took some specific design choices for this model, which are as follows:",
"We added residual connections with each convolutional layer. Residual connections in a way makes the model selective of the contributing layers, determines the optimal number of layers required for training and solves the problem of vanishing gradients. Residual connections or skip connections skip training of those layers that do not contribute much in the overall outcome of model.",
"We added spatial attention BIBREF27 networks to help the model in focusing on specific regions or areas in an image. Spatial attention aids learning irrespective of transformations, scaling and rotation done on the input images making the model more robust and helping it to achieve better results.",
"We added Channel Attention networks so as to help the model to find interdependencies among color channels of log-Mel spectra. It adaptively assigns importance to each color channel in a deep convolutional multi-channel network. In our model we apply channel and spatial attention just before feeding the input into bi-directional GRU. This helps the model to focus on selected regions and at the same time find patterns among channels to better determine the language."
],
[
"We used the random search algorithm supported by Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF19 ,various hyperparameters we tuned are plotted against the validation accuracy. Our observations for each hyperparameter are summarized below:",
"Filter Size: 64 filters in the first layer of network can preserve most of the characteristics of input, but increasing it to 128 is inefficient as overfitting occurs.",
"Kernel Size: There is a trade-off between kernel size and capturing complex non-linear features. Using a small kernel size will require more layers to capture features whereas using a large kernel size will require less layers. Large kernels capture simple non-linear features whereas using a smaller kernel will help us capture more complex non-linear features. However, with more layers, backpropagation necessitates the need for a large memory. We experimented with large kernel size and gradually increased the layers in order to capture more complex features. The results are not conclusive and thus we chose kernel size of 7 against 3.",
"Dropout: Dropout rate of 0.1 works well for our data. When using a higher dropout rate the network is not able to capture the patterns in training dataset.",
"Batch Size: There is always a trade-off between batch size and getting accurate gradients. Using a large batch size helps the model to get more accurate gradients since the model tries to optimize gradients over a large set of images. We found that using a batch size of 128 helped the model to train faster and get better results than using a batch size less than 128.",
"Number of hidden units in bi-directional GRU: Varying the number of hidden units and layers in GRU helps the model to capture temporal features which can play a significant role in identifying the language correctly. The optimal number of hidden units and layers depends on the complexity of the dataset. Using less number of hidden units may capture less features whereas using large number of hidden units may be computationally expensive. In our case we found that using 1536 hidden units in a single bi-directional GRU layer leads to the best result.",
"Image Size: We experimented with log-Mel spectra images of sizes $64 \\times 64$ and $128 \\times 128$ pixels and found that our model worked best with images of size of $128 \\times 128$ pixels.",
"We also evaluated our model on data with mixup augmentation BIBREF28. It is a data augmentation technique that also acts as a regularization technique and prevents overfitting. Instead of directly taking images from the training dataset as input, mixup takes a linear combination of any two random images and feeds it as input. The following equations were used to prepared a mixed-up dataset:",
"and",
"where $\\alpha \\in [0, 1]$ is a random variable from a $\\beta $-distribution, $I_1$."
],
[
"This model is a similar model to 2D-ConvNet with Attention and bi-directional GRU described in section SECREF13 except that it lacks skip connections, attention layers, bi-directional GRU and the embedding layer incorporated in the previous model."
],
[
"We classified six languages (English, French, German, Spanish, Russian and Italian) from the VoxForge BIBREF6 dataset. VoxForge is an open-source speech corpus which primarily consists of samples recorded and submitted by users using their own microphone. This results in significant variation of speech quality between samples making it more representative of real world scenarios.",
"Our dataset consists of 1,500 samples for each of six languages. Out of 1,500 samples for each language, 1,200 were randomly selected as training dataset for that language and rest 300 as validation dataset using k-fold cross-validation. To sum up, we trained our model on 7,200 samples and validated it on 1800 samples comprising six languages. The results are discussed in next section."
],
[
"This paper discusses two end-to-end approaches which achieve state-of-the-art results in both the image as well as audio domain on the VoxForge dataset BIBREF6. In Table TABREF25, we present all the classification accuracies of the two models of the cases with and without mixup for six and four languages.",
"In the audio domain (using raw audio waveform as input), 1D-ConvNet achieved a mean accuracy of 93.7% with a standard deviation of 0.3% on running k-fold cross validation. In Fig FIGREF27 (a) we present the confusion matrix for the 1D-ConvNet model.",
"In the image domain (obtained by taking log-Mel spectra of raw audio), 2D-ConvNet with 2D attention (channel and spatial attention) and bi-directional GRU achieved a mean accuracy of 95.0% with a standard deviation of 1.2% for six languages. This model performed better when mixup regularization was applied. 2D-ConvNet achieved a mean accuracy of 95.4% with standard deviation of 0.6% on running k-fold cross validation for six languages when mixup was applied. In Fig FIGREF27 (b) we present the confusion matrix for the 2D-ConvNet model. 2D attention models focused on the important features extracted by convolutional layers and bi-directional GRU captured the temporal features."
],
[
"Several of the spoken languages in Europe belong to the Indo-European family. Within this family, the languages are divided into three phyla which are Romance, Germanic and Slavic. Of the 6 languages that we selected Spanish (Es), French (Fr) and Italian (It) belong to the Romance phyla, English and German belong to Germanic phyla and Russian in Slavic phyla. Our model also confuses between languages belonging to the similar phyla which acts as an insanity check since languages in same phyla have many similar pronounced words such as cat in English becomes Katze in German and Ciao in Italian becomes Chao in Spanish.",
"Our model confuses between French (Fr) and Russian (Ru) while these languages belong to different phyla, many words from French were adopted into Russian such as automate (oot-oo-mate) in French becomes ABTOMaT (aff-taa-maat) in Russian which have similar pronunciation.",
""
],
[
"The performance of raw audio waveforms as input features to ConvNet can be further improved by applying silence removal in the audio. Also, there is scope for improvement by augmenting available data through various conventional techniques like pitch shifting, adding random noise and changing speed of audio. These help in making neural networks more robust to variations which might be present in real world scenarios. There can be further exploration of various feature extraction techniques like Constant-Q transform and Fast Fourier Transform and assessment of their impact on Language Identification.",
"There can be further improvements in neural network architectures like concatenating the high level features obtained from 1D-ConvNet and 2D-ConvNet, before performing classification. There can be experiments using deeper networks with skip connections and Inception modules. These are known to have positively impacted the performance of Convolutional Neural Networks."
],
[
"There are two main contributions of this paper in the domain of spoken language identification. Firstly, we presented an extensive analysis of raw audio waveforms as input features to 1D-ConvNet. We experimented with various hyperparameters in our 1D-ConvNet and evaluated their effect on validation accuracy. This method is able to bypass the computational overhead of conventional approaches which depend on generation of spectrograms as a necessary pre-procesing step. We were able to achieve an accauracy of 93.7% using this technique.",
"Next, we discussed the enhancement in performance of 2D-ConvNet using mixup augmentation, which is a recently developed technique to prevent overfitting on test data.This approach achieved an accuracy of 95.4%. We also analysed how attention mechanism and recurrent layers impact the performance of networks. This approach achieved an accuracy of 95.0%."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed Method ::: Motivations",
"Proposed Method ::: Description of Features",
"Proposed Method ::: Model Description",
"Proposed Method ::: Model Details: 1D ConvNet",
"Proposed Method ::: Model Details: 1D ConvNet ::: Hyperparameter Optimization:",
"Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU",
"Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: ",
"Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: Hyperparameter Optimization:",
"Proposed Method ::: Model details: 2D-ConvNet",
"Proposed Method ::: Dataset",
"Results and Discussion",
"Results and Discussion ::: Misclassification",
"Results and Discussion ::: Future Scope",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"32dee5de8cb44c67deef309c16e14e0634a7a95e"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: Results of the two models and all its variations"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Results of the two models and all its variations"
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"1a51115249ab15633d834cd3ea7d986f6cc8d7c1",
"55b711611cb5f52eab6c38051fb155c5c37234ff"
],
"answer": [
{
"evidence": [
"In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2405966a3c4bcf65f3b59888f345e2b0cc5ef7b0"
],
"answer": [
{
"evidence": [
"In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table 1)\nPrevious state-of-the art on same dataset: ResNet50 89% (6 languages), SVM-HMM 70% (4 languages)",
"highlighted_evidence": [
"In Table TABREF1, we summarize the quantitative results of the above previous studies.",
"In Table TABREF1, we summarize the quantitative results of the above previous studies."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Does the model use both spectrogram images and raw waveforms as features?",
"Is the performance compared against a baseline model?",
"What is the accuracy reported by state-of-the-art methods?"
],
"question_id": [
"dc1fe3359faa2d7daa891c1df33df85558bc461b",
"922f1b740f8b13fdc8371e2a275269a44c86195e",
"b39f2249a1489a2cef74155496511cc5d1b2a73d"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"language identification",
"language identification",
"language identification"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 2: Architecture of the 1D-ConvNet model",
"Fig. 1: Effect of hyperparameter variation of the hyperparameter on the classification accuracy for the case of 1D-ConvNet. Orange colored violin plots show the most favored choice of the hyperparameter and blue shows otherwise. One dot represents one sample.",
"Table 3: Architecture of the 2D-ConvNet model",
"Fig. 2: Effect of hyperparameter variation of the six selected hyperparameter on the classification accuracy for the case of 2D-ConvNet. Orange colored violin plots show the most favored choice of the hyperparameter and blue shows otherwise. One dot represents one sample.",
"Table 4: Results of the two models and all its variations",
"Fig. 3: Confusion matrix for classification of six languages with our (a) 1DConvNet and (b) 2D-ConvNet model. Asterisk (*) marks a value less than 0.1%."
],
"file": [
"6-Table2-1.png",
"7-Figure1-1.png",
"8-Table3-1.png",
"9-Figure2-1.png",
"11-Table4-1.png",
"12-Figure3-1.png"
]
} | [
"What is the accuracy reported by state-of-the-art methods?"
] | [
[
"1910.04269-Related Work-6"
]
] | [
"Answer with content missing: (Table 1)\nPrevious state-of-the art on same dataset: ResNet50 89% (6 languages), SVM-HMM 70% (4 languages)"
] | 15 |
2001.00137 | Stacked DeBERT: All Attention in Incomplete Data for Text Classification | In this paper, we propose Stacked DeBERT, short for Stacked Denoising Bidirectional Encoder Representations from Transformers. This novel model improves robustness in incomplete data, when compared to existing systems, by designing a novel encoding scheme in BERT, a powerful language representation model solely based on attention mechanisms. Incomplete data in natural language processing refer to text with missing or incorrect words, and its presence can hinder the performance of current models that were not implemented to withstand such noises, but must still perform well even under duress. This is due to the fact that current approaches are built for and trained with clean and complete data, and thus are not able to extract features that can adequately represent incomplete data. Our proposed approach consists of obtaining intermediate input representations by applying an embedding layer to the input tokens followed by vanilla transformers. These intermediate features are given as input to novel denoising transformers which are responsible for obtaining richer input representations. The proposed approach takes advantage of stacks of multilayer perceptrons for the reconstruction of missing words' embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. We consider two datasets for training and evaluation: the Chatbot Natural Language Understanding Evaluation Corpus and Kaggle's Twitter Sentiment Corpus. Our model shows improved F1-scores and better robustness in informal/incorrect texts present in tweets and in texts with Speech-to-Text error in the sentiment and intent classification tasks. | {
"paragraphs": [
[
"Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of internet and social networks, which allowed language and modern communication to be been rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard to correct sentence grammar or word spelling BIBREF3.",
"Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.",
"The advancement of deep neural networks have immensely aided in the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification, have been possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need of research in that area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regards for sentence correctness.",
"Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.",
"The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utter importance for high performance achieving methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:",
"Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.",
"Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related with these tasks.",
"The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusion and future works."
],
[
"We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheming for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.",
"The initial part of the model is the conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special character `[CLS]' and sufixes each sentence with a `[SEP]' character. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embedddings, segmentation embeddings and position embeddings. The first one, token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first `[SEP]' character appears (indicating segment A) and then it becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which outputs a hidden embedding that can be used by our novel layers of denoising transformers.",
"Although BERT has shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words. With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words’ embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.",
"The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each. The first set is responsible for compressing the $h_{inc}$ into a latent-space representation, extracting more abstract features into lower dimension vectors $z_1$, $z_2$ and $\\mathbf {z}$ with shape $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):",
"where $f(\\cdot )$ is the parameterized function mapping $h_{inc}$ to the hidden state $\\mathbf {z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\\mathbf {z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):",
"where $g(\\cdot )$ is the parameterized function that reconstructs $\\mathbf {z}$ as $h_{rec}$.",
"The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq. (DISPLAY_FORM7):",
"After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.",
"Classification is done with a feedforward network and softmax activation function. Softmax $\\sigma $ is a discrete probability distribution function for $N_C$ classes, with the sum of the classes probability being 1 and the maximum value being the predicted class. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):",
"where $o = W t + b$, the output of the feedforward layer used for classification."
],
[
"In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.",
"Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.",
"After obtaining the correct sentences, our two-class dataset has class distribution as shown in Table TABREF14. There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples being used in the evaluation stage, with 25 negative and 25 positive. This totals in 300 samples, with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning."
],
[
"In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis on this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.",
"The dataset used to evaluate the models' performance is the Chatbot Natural Language Unerstanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names.",
"The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts , a Google Text-to-Speech python library, and macsay , a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai , freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen according to code availability and whether it's freely available or has high daily usage limitations.",
"Table TABREF24 exemplifies a complete and its respective incomplete sentences with different TTS-STT combinations, thus varying rates of missing and incorrect words. The level of noise in the STT imbued sentences is denoted by a inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is denoted in Eq. (DISPLAY_FORM23):",
"where BLEU is a common metric usually used in machine translation tasks BIBREF21. We decide to showcase that instead of regular BLEU because it is more indicative to the amount of noise in the incomplete text, where the higher the iBLEU, the higher the noise."
],
[
"Besides the already mentioned BERT, the following baseline models are also used for comparison."
],
[
"We focus on the three following services, where the first two are commercial services and last one is open source with two separate backends: Google Dialogflow (formerly Api.ai) , SAP Conversational AI (formerly Recast.ai) and Rasa (spacy and tensorflow backend) ."
],
[
"Shridhar et al. BIBREF12 proposed a word embedding method that doesn't suffer from out-of-vocabulary issues. The authors achieve this by using hash tokens in the alphabet instead of a single word, making it vocabulary independent. For classification, classifiers such as Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications are given in Section SECREF31."
],
[
"The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: complete data and one for each of the TTS-STT combinations (gtts-witai and macsay-witai). Regarding the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: original text, corrected text and incorrect with correct texts. The reported F1 scores are the best accuracies obtained from 10 runs."
],
[
"No settable training configurations available in the online platforms."
],
[
"Trained on 3-gram, feature vector size of 768 as to match the BERT embedding size, and 13 classifiers with parameters set as specified in the authors' paper so as to allow comparison: MLP with 3 hidden layers of sizes $[300, 100, 50]$ respectively; Random Forest with 50 estimators or trees; 5-fold Grid Search with Random Forest classifier and estimator $([50, 60, 70]$; Linear Support Vector Classifier with L1 and L2 penalty and tolerance of $10^{-3}$; Regularized linear classifier with Stochastic Gradient Descent (SGD) learning with regularization term $alpha=10^{-4}$ and L1, L2 and Elastic-Net penalty; Nearest Centroid with Euclidian metric, where classification is done by representing each class with a centroid; Bernoulli Naive Bayes with smoothing parameter $alpha=10^{-2}$; K-means clustering with 2 clusters and L2 penalty; and Logistic Regression classifier with L2 penalty, tolerance of $10^{-4}$ and regularization term of $1.0$. Most often, the best performing classifier was MLP."
],
[
"Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks $L$, hidden size $H$ of 768, and 12 self-attention heads $A$. The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with Adam Optimizer, learning rate of $2*10^{-5}$, maximum sequence length of 128, and warm up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus."
],
[
"Our proposed model is trained in end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and train batch size. The stack of multilayer perceptrons are trained for 100 and 1,000 epochs with Adam Optimizer, learning rate of $10^{-3}$, weight decay of $10^{-5}$, MSE loss criterion and batch size the same as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus)."
],
[
"Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$ against BERT's 72$\\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\\%$ accuracy against BERT's 76$\\%$, an improvement of 6$\\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\\%$ for our model and 74$\\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.",
"In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the target/target labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes. In the Inc dataset, the true class 1 in BERT performs with approximately 50%. However, Stacked DeBERT is able to improve that to 72%, although to a cost of a small decrease in performance of class 0. A similar situation happens in the remaining two datasets, with improved accuracy in class 0 from 64% to 84% and 60% to 76% respectively."
],
[
"Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.",
"The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean higher quantity of noise. As expected, the models' accuracy degrade with the increase in noise, thus F1-scores of gtts-witai are higher than macsay-witai. However, while the other models decay rapidly in the presence of noise, our model does not only outperform them but does so with a wider margin. This is shown with the increasing robustness curve in Fig. FIGREF41 and can be demonstrated by macsay-witai outperforming the baseline models by twice the gap achieved by gtts-witai.",
"Further analysis of the results in Table TABREF40 show that, BERT decay is almost constant with the addition of noise, with the difference between the complete data and gtts-witai being 1.88 and gtts-witai and macsay-witai being 1.89. Whereas in Stacked DeBERT, that difference is 1.89 and 0.94 respectively. This is stronger indication of our model's robustness in the presence of noise.",
"Additionally, we also present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one."
],
[
"In this work, we proposed a novel deep neural network, robust to noisy text in the form of sentences with missing and/or incorrect words, called Stacked DeBERT. The idea was to improve the accuracy performance by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with neural classifier. Our model showed better performance when evaluated on F1 scores in both Twitter sentiment and intent text with STT error classification tasks. The per-class F1 score was also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading-off small decreases in high achieving class for significant improvements in lower performing ones. In the Chatbot dataset, accuracy improvement was achieved even without trade-off, with the highest achieving classes maintaining their accuracy while the lower achieving class saw improvement. Further evaluation on the F1-scores decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Not only that, experiments on the Twitter dataset also showed improved accuracy in clean data, with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future works, we plan on evaluating the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search for more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer."
],
[
"This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (50%) and the Technology Innovation Program: Industrial Strategic Technology Development Program (No: 10073162) funded By the Ministry of Trade, Industry & Energy (MOTIE, Korea) (50%)."
]
],
"section_name": [
"Introduction",
"Proposed model",
"Dataset ::: Twitter Sentiment Classification",
"Dataset ::: Intent Classification from Text with STT Error",
"Experiments ::: Baseline models",
"Experiments ::: Baseline models ::: NLU service platforms",
"Experiments ::: Baseline models ::: Semantic hashing with classifier",
"Experiments ::: Training specifications",
"Experiments ::: Training specifications ::: NLU service platforms",
"Experiments ::: Training specifications ::: Semantic hashing with classifier",
"Experiments ::: Training specifications ::: BERT",
"Experiments ::: Training specifications ::: Stacked DeBERT",
"Experiments ::: Results on Sentiment Classification from Incorrect Text",
"Experiments ::: Results on Intent Classification from Text with STT Error",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"c7a83f3225e54b6306ef3372507539e471c155d0"
],
"answer": [
{
"evidence": [
"Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.",
"The dataset used to evaluate the models' performance is the Chatbot Natural Language Unerstanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.",
"The dataset used to evaluate the models' performance is the Chatbot Natural Language Unerstanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"7c44e07bb8f2884cd73dd023e86dfeb7241e999c"
],
"answer": [
{
"evidence": [
"Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of internet and social networks, which allowed language and modern communication to be been rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard to correct sentence grammar or word spelling BIBREF3."
],
"extractive_spans": [],
"free_form_answer": "typos in spellings or ungrammatical words",
"highlighted_evidence": [
"Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"6d5e9774c1d04b3cac91fcc7ac9fd6ff56d9bc63"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"b36dcc41db3d7aa7503fe85cbc1793b27473e4ed",
"f0dd380c67caba4c7c3fe0ee9b8185f4923ed868"
],
"answer": [
{
"evidence": [
"The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts , a Google Text-to-Speech python library, and macsay , a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai , freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen according to code availability and whether it's freely available or has high daily usage limitations."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.",
"In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis on this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.",
"In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2b4f582794c836ce6cde20b07b5f754cb67f8e20",
"c6bacbe8041fdef389e98b119b050cb03cce14e1"
],
"answer": [
{
"evidence": [
"Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$ against BERT's 72$\\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\\%$ accuracy against BERT's 76$\\%$, an improvement of 6$\\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\\%$ for our model and 74$\\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$ against BERT's 72$\\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. "
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheming for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"1f4a6fce4f78662774735b1e27744f55b0efd7a8"
],
"answer": [
{
"evidence": [
"Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$ against BERT's 72$\\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\\%$ accuracy against BERT's 76$\\%$, an improvement of 6$\\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\\%$ for our model and 74$\\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.",
"Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.",
"FLOAT SELECTED: Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5))."
],
"extractive_spans": [],
"free_form_answer": "In the sentiment classification task by 6% to 8% and in the intent classification task by 0.94% on average",
"highlighted_evidence": [
"Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. ",
"Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.",
"FLOAT SELECTED: Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5))."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they report results only on English datasets?",
"How do the authors define or exemplify 'incorrect words'?",
"How many vanilla transformers do they use after applying an embedding layer?",
"Do they test their approach on a dataset without incomplete data?",
"Should their approach be applied only when dealing with incomplete data?",
"By how much do they outperform other models in the sentiment in intent classification tasks?"
],
"question_id": [
"637aa32a34b20b4b0f1b5dfa08ef4e0e5ed33d52",
"4b8257cdd9a60087fa901da1f4250e7d910896df",
"7e161d9facd100544fa339b06f656eb2fc64ed28",
"abc5836c54fc2ac8465aee5a83b9c0f86c6fd6f5",
"4debd7926941f1a02266b1a7be2df8ba6e79311a",
"3b745f086fb5849e7ce7ce2c02ccbde7cfdedda5"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"twitter",
"twitter",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The proposed model Stacked DeBERT is organized in three layers: embedding, conventional bidirectional transformers and denoising bidirectional transformer.",
"Table 1: Types of mistakes on the Twitter dataset.",
"Table 2: Examples of original tweets and their corrected version.",
"Table 3: Details about our Twitter Sentiment Classification dataset, composed of incorrect and correct data.",
"Table 4: Details about our Incomplete Intent Classification dataset based on the Chatbot NLU Evaluation Corpus.",
"Figure 2: Diagram of 2-step process to obtain dataset with STT error in text.",
"Table 5: Example of sentence from Chatbot NLU Corpus with different TTS-STT combinations and their respective inverted BLEU score (denotes the level of noise in the text).",
"Table 6: F1-micro scores for Twitter Sentiment Classification task on Kaggle’s Sentiment140 Corpus. Note that: (Inc) is the original dataset, with naturally incorrect tweets, (Corr) is the corrected version of the dataset and (Inc+Corr) contains both.",
"Figure 3: Normalized confusion matrix for the Twitter Sentiment Classification dataset. The first row has the confusion matrices for BERT in the original Twitter dataset (Inc), the corrected version (Corr) and both original and corrected tweets (Inc+Corr) respectively. The second row contains the confusion matrices for Stacked DeBERT in the same order.",
"Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5)).",
"Figure 4: Robustness curve for the Chatbot NLU Corpus with STT error.",
"Figure 5: Normalized confusion matrix for the Chatbot NLU Intent Classification dataset for complete data and data with STT error. The first column has the confusion matrices for BERT and the second for Stacked DeBERT."
],
"file": [
"5-Figure1-1.png",
"8-Table1-1.png",
"9-Table2-1.png",
"9-Table3-1.png",
"10-Table4-1.png",
"11-Figure2-1.png",
"12-Table5-1.png",
"14-Table6-1.png",
"15-Figure3-1.png",
"16-Table7-1.png",
"17-Figure4-1.png",
"17-Figure5-1.png"
]
} | [
"How do the authors define or exemplify 'incorrect words'?",
"By how much do they outperform other models in the sentiment in intent classification tasks?"
] | [
[
"2001.00137-Introduction-0"
],
[
"2001.00137-Experiments ::: Results on Sentiment Classification from Incorrect Text-0",
"2001.00137-Experiments ::: Results on Intent Classification from Text with STT Error-0",
"2001.00137-16-Table7-1.png"
]
] | [
"typos in spellings or ungrammatical words",
"In the sentiment classification task by 6% to 8% and in the intent classification task by 0.94% on average"
] | 19 |
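The record above (paper 2001.00137, Stacked DeBERT) obtains its incomplete intent-classification sentences by passing complete sentences through a Text-to-Speech module and then a Speech-to-Text module (the gtts-witai and macsay-witai combinations mentioned in the evidence). Below is a minimal sketch of that two-step noise-injection pipeline, not the authors' code; it assumes the gTTS, pydub (with ffmpeg), and SpeechRecognition Python packages, and it uses the free Google Web Speech recognizer as a stand-in for the wit.ai STT used in the paper, so the resulting noise profile will differ.

```python
# Minimal sketch of the TTS -> STT noise-injection step described above.
# Assumes gTTS, pydub (with ffmpeg available) and SpeechRecognition are
# installed; the Google Web Speech recognizer stands in for the paper's
# wit.ai STT, so the noise it introduces will not match the paper's exactly.
from gtts import gTTS
from pydub import AudioSegment
import speech_recognition as sr


def add_stt_noise(sentence: str, tmp_prefix: str = "tmp_utt") -> str:
    """Return a possibly incomplete / mis-transcribed version of `sentence`."""
    mp3_path, wav_path = f"{tmp_prefix}.mp3", f"{tmp_prefix}.wav"

    # Step 1 (TTS): synthesize speech from the complete sentence.
    gTTS(text=sentence, lang="en").save(mp3_path)
    AudioSegment.from_mp3(mp3_path).export(wav_path, format="wav")

    # Step 2 (STT): transcribe the audio back to text; recognition errors
    # yield the missing or incorrectly transcribed words described above.
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    try:
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        return ""  # the recognizer produced no hypothesis


if __name__ == "__main__":
    print(add_stt_noise("Please set an alarm for seven thirty tomorrow morning."))
```

Running each complete training sentence through a function like this would produce the noisy counterparts that are then paired with the original intent labels, which is the role the TTS-STT step plays in the record above.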
2002.06644 | Towards Detection of Subjective Bias using Contextualized Word Embeddings | Subjective bias detection is critical for applications like propaganda detection, content recommendation, sentiment analysis, and bias neutralization. This bias is introduced in natural language via inflammatory words and phrases, casting doubt over facts, and presupposing the truth. In this work, we perform comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus(WNC). The dataset consists of $360k$ labeled instances, from Wikipedia edits that remove various instances of the bias. We further propose BERT-based ensembles that outperform state-of-the-art methods like $BERT_{large}$ by a margin of $5.6$ F1 score. | {
"paragraphs": [
[
"In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculationsBIBREF0, often influenced by one's emotional state and viewpoints. Writers and editors of texts like news and textbooks try to avoid the use of biased language, yet subjective bias is pervasive in these texts. More than $56\\%$ of Americans believe that news sources do not report the news objectively , thus implying the prevalence of the bias. Therefore, when presenting factual information, it becomes necessary to differentiate subjective language from objective language.",
"There has been considerable work on capturing subjectivity using text-classification models ranging from linguistic-feature-based modelsBIBREF1 to finetuned pre-trained word embeddings like BERTBIBREF2. The detection of bias-inducing words in a Wikipedia statement was explored in BIBREF1. The authors propose the \"Neutral Point of View\" (NPOV) corpus made using Wikipedia revision history, containing Wikipedia edits that are specifically designed to remove subjective bias. They use logistic regression with linguistic features, including factive verbs, hedges, and subjective intensifiers to detect bias-inducing words. In BIBREF2, the authors extend this work by mitigating subjective bias after detecting bias-inducing words using a BERT-based model. However, they primarily focused on detecting and mitigating subjective bias for single-word edits. We extend their work by incorporating multi-word edits by detecting bias at the sentence level. We further use their version of the NPOV corpus called Wiki Neutrality Corpus(WNC) for this work.",
"The task of detecting sentences containing subjective bias rather than individual words inducing the bias has been explored in BIBREF3. However, they conduct majority of their experiments in controlled settings, limiting the type of articles from which the revisions were extracted. Their attempt to test their models in a general setting is dwarfed by the fact that they used revisions from a single Wikipedia article resulting in just 100 instances to evaluate their proposed models robustly. Consequently, we perform our experiments in the complete WNC corpus, which consists of $423,823$ revisions in Wikipedia marked by its editors over a period of 15 years, to simulate a more general setting for the bias.",
"In this work, we investigate the application of BERT-based models for the task of subjective language detection. We explore various BERT-based models, including BERT, RoBERTa, ALBERT, with their base and large specifications along with their native classifiers. We propose an ensemble model exploiting predictions from these models using multiple ensembling techniques. We show that our model outperforms the baselines by a margin of $5.6$ of F1 score and $5.95\\%$ of Accuracy."
],
[
"In this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection."
],
[
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset."
],
[
"Optimized BERT-based models: We use BERT-based models optimized as in BIBREF6 and BIBREF7, pretrained on a dataset as large as twelve times as compared to $BERT_{large}$, with bigger batches, and longer sequences. ALBERT, introduced in BIBREF7, uses factorized embedding parameterization and cross-layer parameter sharing for parameter reduction. These optimizations have led both the models to outperform $BERT_{large}$ in various benchmarking tests, like GLUE for text classification and SQuAD for Question Answering.",
"Distilled BERT-based models: Secondly, we propose to use distilled BERT-based models, as introduced in BIBREF8. They are smaller general-purpose language representation model, pre-trained by leveraging distillation knowledge. This results in significantly smaller and faster models with performance comparable to their undistilled versions. We finetune these pretrained distilled models on the training corpus to efficiently detect subjectivity.",
"BERT-based ensemble models: Lastly, we use the weighted-average ensembling technique to exploit the predictions made by different variations of the above models. Ensembling methodology entails engendering a predictive model by utilizing predictions from multiple models in order to improve Accuracy and F1, decrease variance, and bias. We experiment with variations of $RoBERTa_{large}$, $ALBERT_{xxlarge.v2}$, $DistilRoBERTa$ and $BERT$ and outline selected combinations in tab:experimental-results."
],
[
"We perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre and post neutralized sentences made by Wikipedia editors under the neutral point of view. It contains $180k$ biased sentences, and their neutral counterparts crawled from $423,823$ Wikipedia revisions between 2004 and 2019. We randomly shuffled these sentences and split this dataset into two parts in a $90:10$ Train-Test split and perform the evaluation on the held-out test dataset.",
"For all BERT-based models, we use a learning rate of $2*10^{-5}$, a maximum sequence length of 50, and a weight decay of $0.01$ while finetuning the model. We use FastText's recently open-sourced automatic hyperparameter optimization functionality while training the model. For the BiLSTM baseline, we use a dropout of $0.05$ along with a recurrent dropout of $0.2$ in two 64 unit sized stacked BiLSTMs, using softmax activation layer as the final dense layer."
],
[
"tab:experimental-results shows the performance of different models on the WNC corpus evaluated on the following four metrics: Precision, Recall, F1, and Accuracy. Our proposed methodology, the use of finetuned optimized BERT based models, and BERT-based ensemble models outperform the baselines for all the metrics.",
"Among the optimized BERT based models, $RoBERTa_{large}$ outperforms all other non-ensemble models and the baselines for all metrics. It further achieves a maximum recall of $0.681$ for all the proposed models. We note that DistillRoBERTa, a distilled model, performs competitively, achieving $69.69\\%$ accuracy, and $0.672$ F1 score. This observation shows that distilled pretrained models can replace their undistilled counterparts in a low-computing environment.",
"We further observe that ensemble models perform better than optimized BERT-based models and distilled pretrained models. Our proposed ensemble comprising of $RoBERTa_{large}$, $ALBERT_{xxlarge.v2}$, $DistilRoBERTa$ and $BERT$ outperforms all the proposed models obtaining $0.704$ F1 score, $0.733$ precision, and $71.61\\%$ Accuracy."
],
[
"In this paper, we investigated BERT-based architectures for sentence level subjective bias detection. We perform our experiments on a general Wikipedia corpus consisting of more than $360k$ pre and post subjective bias neutralized sentences. We found our proposed architectures to outperform the existing baselines significantly. BERT-based ensemble consisting of RoBERTa, ALBERT, DistillRoBERTa, and BERT led to the highest F1 and Accuracy. In the future, we would like to explore document-level detection of subjective bias, multi-word mitigation of the bias, applications of detecting the bias in recommendation systems."
]
],
"section_name": [
"Introduction",
"Baselines and Approach",
"Baselines and Approach ::: Baselines",
"Baselines and Approach ::: Proposed Approaches",
"Experiments ::: Dataset and Experimental Settings",
"Experiments ::: Experimental Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"dfc487e35ee5131bc5054463ace009e6bd8fc671"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"23c76dd5ac11dd015f81868f3a8e1bafdf3d424c",
"2c63f673e8658e64600cc492bc7d6a48b56c2119"
],
"answer": [
{
"evidence": [
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset."
],
"extractive_spans": [
"FastText",
"BiLSTM",
"BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Baselines and Approach",
"In this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection.",
"Baselines and Approach ::: Baselines",
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset.",
"FLOAT SELECTED: Table 1: Experimental Results for the Subjectivity Detection Task"
],
"extractive_spans": [
"FastText",
"BERT ",
"two-layer BiLSTM architecture with GloVe word embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baselines and Approach\nIn this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection.\n\n",
"Baselines and Approach ::: Baselines\nFastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.\n\nBiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.\n\nBERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset.",
"FLOAT SELECTED: Table 1: Experimental Results for the Subjectivity Detection Task"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"293dcdfb800de157c1c4be7641cd05512cc26fb2"
],
"answer": [
{
"evidence": [
"In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculationsBIBREF0, often influenced by one's emotional state and viewpoints. Writers and editors of texts like news and textbooks try to avoid the use of biased language, yet subjective bias is pervasive in these texts. More than $56\\%$ of Americans believe that news sources do not report the news objectively , thus implying the prevalence of the bias. Therefore, when presenting factual information, it becomes necessary to differentiate subjective language from objective language.",
"In this work, we investigate the application of BERT-based models for the task of subjective language detection. We explore various BERT-based models, including BERT, RoBERTa, ALBERT, with their base and large specifications along with their native classifiers. We propose an ensemble model exploiting predictions from these models using multiple ensembling techniques. We show that our model outperforms the baselines by a margin of $5.6$ of F1 score and $5.95\\%$ of Accuracy.",
"Experiments ::: Dataset and Experimental Settings",
"We perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre and post neutralized sentences made by Wikipedia editors under the neutral point of view. It contains $180k$ biased sentences, and their neutral counterparts crawled from $423,823$ Wikipedia revisions between 2004 and 2019. We randomly shuffled these sentences and split this dataset into two parts in a $90:10$ Train-Test split and perform the evaluation on the held-out test dataset."
],
"extractive_spans": [],
"free_form_answer": "They used BERT-based models to detect subjective language in the WNC corpus",
"highlighted_evidence": [
"In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculationsBIBREF0, often influenced by one's emotional state and viewpoints.",
"In this work, we investigate the application of BERT-based models for the task of subjective language detection.",
"Experiments ::: Dataset and Experimental Settings\nWe perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre and post neutralized sentences made by Wikipedia editors under the neutral point of view. It contains $180k$ biased sentences, and their neutral counterparts crawled from $423,823$ Wikipedia revisions between 2004 and 2019"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do the authors report only on English?",
"What is the baseline for the experiments?",
"Which experiments are perfomed?"
],
"question_id": [
"830de0bd007c4135302138ffa8f4843e4915e440",
"680dc3e56d1dc4af46512284b9996a1056f89ded",
"bd5379047c2cf090bea838c67b6ed44773bcd56f"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"bias",
"bias",
"bias"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Experimental Results for the Subjectivity Detection Task"
],
"file": [
"2-Table1-1.png"
]
} | [
"Which experiments are perfomed?"
] | [
[
"2002.06644-Introduction-3",
"2002.06644-Experiments ::: Dataset and Experimental Settings-0",
"2002.06644-Introduction-0"
]
] | [
"They used BERT-based models to detect subjective language in the WNC corpus"
] | 21 |
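The record above (paper 2002.06644) reports that its best system is a weighted-average ensemble over the class probabilities of RoBERTa-large, ALBERT-xxlarge-v2, DistilRoBERTa, and BERT. The sketch below shows only that ensembling step, applied to pre-computed probabilities; the weight values and the random probabilities are illustrative assumptions, since the record does not state the weights that were used.

```python
# Weighted-average ensembling over per-model class probabilities.
# The weights are illustrative assumptions; the record does not report them.
import numpy as np


def ensemble_predict(prob_list, weights):
    """prob_list: one (n_examples, n_classes) probability array per model."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # normalize the weights
    stacked = np.stack(prob_list, axis=0)            # (n_models, n, c)
    avg = np.tensordot(w, stacked, axes=1)           # weighted mean -> (n, c)
    return avg.argmax(axis=-1), avg


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake subjective-vs-neutral probabilities from four finetuned models
    # (standing in for RoBERTa-large, ALBERT-xxlarge-v2, DistilRoBERTa, BERT).
    probs = [rng.dirichlet([1.0, 1.0], size=5) for _ in range(4)]
    labels, averaged = ensemble_predict(probs, weights=[0.3, 0.3, 0.2, 0.2])
    print(labels)
    print(averaged.round(2))
```

With equal weights this reduces to plain probability averaging; in practice the weights would be tuned on a held-out split, which is the usual design choice when the component models differ in strength.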
1809.08731 | Sentence-Level Fluency Evaluation: References Help, But Can Be Spared! | Motivated by recent findings on the probabilistic modeling of acceptability judgments, we propose syntactic log-odds ratio (SLOR), a normalized language model score, as a metric for referenceless fluency evaluation of natural language generation output at the sentence level. We further introduce WPSLOR, a novel WordPiece-based version, which harnesses a more compact language model. Even though word-overlap metrics like ROUGE are computed with the help of hand-written references, our referenceless methods obtain a significantly higher correlation with human fluency scores on a benchmark dataset of compressed sentences. Finally, we present ROUGE-LM, a reference-based metric which is a natural extension of WPSLOR to the case of available references. We show that ROUGE-LM yields a significantly higher correlation with human judgments than all baseline metrics, including WPSLOR on its own. | {
"paragraphs": [
[
"Producing sentences which are perceived as natural by a human addressee—a property which we will denote as fluency throughout this paper —is a crucial goal of all natural language generation (NLG) systems: it makes interactions more natural, avoids misunderstandings and, overall, leads to higher user satisfaction and user trust BIBREF0 . Thus, fluency evaluation is important, e.g., during system development, or for filtering unacceptable generations at application time. However, fluency evaluation of NLG systems constitutes a hard challenge: systems are often not limited to reusing words from the input, but can generate in an abstractive way. Hence, it is not guaranteed that a correct output will match any of a finite number of given references. This results in difficulties for current reference-based evaluation, especially of fluency, causing word-overlap metrics like ROUGE BIBREF1 to correlate only weakly with human judgments BIBREF2 . As a result, fluency evaluation of NLG is often done manually, which is costly and time-consuming.",
"Evaluating sentences on their fluency, on the other hand, is a linguistic ability of humans which has been the subject of a decade-long debate in cognitive science. In particular, the question has been raised whether the grammatical knowledge that underlies this ability is probabilistic or categorical in nature BIBREF3 , BIBREF4 , BIBREF5 . Within this context, lau2017grammaticality have recently shown that neural language models (LMs) can be used for modeling human ratings of acceptability. Namely, they found SLOR BIBREF6 —sentence log-probability which is normalized by unigram log-probability and sentence length—to correlate well with acceptability judgments at the sentence level.",
"However, to the best of our knowledge, these insights have so far gone disregarded by the natural language processing (NLP) community. In this paper, we investigate the practical implications of lau2017grammaticality's findings for fluency evaluation of NLG, using the task of automatic compression BIBREF7 , BIBREF8 as an example (cf. Table 1 ). Specifically, we test our hypothesis that SLOR should be a suitable metric for evaluation of compression fluency which (i) does not rely on references; (ii) can naturally be applied at the sentence level (in contrast to the system level); and (iii) does not need human fluency annotations of any kind. In particular the first aspect, i.e., SLOR not needing references, makes it a promising candidate for automatic evaluation. Getting rid of human references has practical importance in a variety of settings, e.g., if references are unavailable due to a lack of resources for annotation, or if obtaining references is impracticable. The latter would be the case, for instance, when filtering system outputs at application time.",
"We further introduce WPSLOR, a novel, WordPiece BIBREF9 -based version of SLOR, which drastically reduces model size and training time. Our experiments show that both approaches correlate better with human judgments than traditional word-overlap metrics, even though the latter do rely on reference compressions. Finally, investigating the case of available references and how to incorporate them, we combine WPSLOR and ROUGE to ROUGE-LM, a novel reference-based metric, and increase the correlation with human fluency ratings even further."
],
[
"Acceptability judgments, i.e., speakers' judgments of the well-formedness of sentences, have been the basis of much linguistics research BIBREF10 , BIBREF11 : a speakers intuition about a sentence is used to draw conclusions about a language's rules. Commonly, “acceptability” is used synonymously with “grammaticality”, and speakers are in practice asked for grammaticality judgments or acceptability judgments interchangeably. Strictly speaking, however, a sentence can be unacceptable, even though it is grammatical – a popular example is Chomsky's phrase “Colorless green ideas sleep furiously.” BIBREF3 In turn, acceptable sentences can be ungrammatical, e.g., in an informal context or in poems BIBREF12 .",
"Scientists—linguists, cognitive scientists, psychologists, and NLP researcher alike—disagree about how to represent human linguistic abilities. One subject of debates are acceptability judgments: while, for many, acceptability is a binary condition on membership in a set of well-formed sentences BIBREF3 , others assume that it is gradient in nature BIBREF13 , BIBREF2 . Tackling this research question, lau2017grammaticality aimed at modeling human acceptability judgments automatically, with the goal to gain insight into the nature of human perception of acceptability. In particular, they tried to answer the question: Do humans judge acceptability on a gradient scale? Their experiments showed a strong correlation between human judgments and normalized sentence log-probabilities under a variety of LMs for artificial data they had created by translating and back-translating sentences with neural models. While they tried different types of LMs, best results were obtained for neural models, namely recurrent neural networks (RNNs).",
"In this work, we investigate if approaches which have proven successful for modeling acceptability can be applied to the NLP problem of automatic fluency evaluation."
],
[
"In this section, we first describe SLOR and the intuition behind this score. Then, we introduce WordPieces, before explaining how we combine the two."
],
[
"SLOR assigns to a sentence $S$ a score which consists of its log-probability under a given LM, normalized by unigram log-probability and length: ",
"$$\\text{SLOR}(S) = &\\frac{1}{|S|} (\\ln (p_M(S)) \\\\\\nonumber &- \\ln (p_u(S)))$$ (Eq. 8) ",
" where $p_M(S)$ is the probability assigned to the sentence under the LM. The unigram probability $p_u(S)$ of the sentence is calculated as ",
"$$p_u(S) = \\prod _{t \\in S}p(t)$$ (Eq. 9) ",
"with $p(t)$ being the unconditional probability of a token $t$ , i.e., given no context.",
"The intuition behind subtracting unigram log-probabilities is that a token which is rare on its own (in contrast to being rare at a given position in the sentence) should not bring down the sentence's rating. The normalization by sentence length is necessary in order to not prefer shorter sentences over equally fluent longer ones. Consider, for instance, the following pair of sentences: ",
"$$\\textrm {(i)} ~ ~ &\\textrm {He is a citizen of France.}\\nonumber \\\\\n\\textrm {(ii)} ~ ~ &\\textrm {He is a citizen of Tuvalu.}\\nonumber $$ (Eq. 11) ",
" Given that both sentences are of equal length and assuming that France appears more often in a given LM training set than Tuvalu, the length-normalized log-probability of sentence (i) under the LM would most likely be higher than that of sentence (ii). However, since both sentences are equally fluent, we expect taking each token's unigram probability into account to lead to a more suitable score for our purposes.",
"We calculate the probability of a sentence with a long-short term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus. More details on LSTM LMs can be found, e.g., in sundermeyer2012lstm. The unigram probabilities for SLOR are estimated using the same corpus."
],
[
"Sub-word units like WordPieces BIBREF9 are getting increasingly important in NLP. They constitute a compromise between characters and words: On the one hand, they yield a smaller vocabulary, which reduces model size and training time, and improve handling of rare words, since those are partitioned into more frequent segments. On the other hand, they contain more information than characters.",
"WordPiece models are estimated using a data-driven approach which maximizes the LM likelihood of the training corpus as described in wu2016google and 6289079."
],
[
"We propose a novel version of SLOR, by incorporating a LM which is trained on a corpus which has been split by a WordPiece model. This leads to a smaller vocabulary, resulting in a LM with less parameters, which is faster to train (around 12h compared to roughly 5 days for the word-based version in our experiments). We will refer to the word-based SLOR as WordSLOR and to our newly proposed WordPiece-based version as WPSLOR."
],
[
"Now, we present our main experiment, in which we assess the performances of WordSLOR and WPSLOR as fluency evaluation metrics."
],
[
"We experiment on the compression dataset by toutanova2016dataset. It contains single sentences and two-sentence paragraphs from the Open American National Corpus (OANC), which belong to 4 genres: newswire, letters, journal, and non-fiction. Gold references are manually created and the outputs of 4 compression systems (ILP (extractive), NAMAS (abstractive), SEQ2SEQ (extractive), and T3 (abstractive); cf. toutanova2016dataset for details) for the test data are provided. Each example has 3 to 5 independent human ratings for content and fluency. We are interested in the latter, which is rated on an ordinal scale from 1 (disfluent) through 3 (fluent). We experiment on the 2955 system outputs for the test split.",
"Average fluency scores per system are shown in Table 2 . As can be seen, ILP produces the best output. In contrast, NAMAS is the worst system for fluency. In order to be able to judge the reliability of the human annotations, we follow the procedure suggested by TACL732 and used by toutanova2016dataset, and compute the quadratic weighted $\\kappa $ BIBREF14 for the human fluency scores of the system-generated compressions as $0.337$ ."
],
[
"We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data.",
"The hyperparameters of all LMs are tuned using perplexity on a held-out part of Gigaword, since we expect LM perplexity and final evaluation performance of WordSLOR and, respectively, WPSLOR to correlate. Our best networks consist of two layers with 512 hidden units each, and are trained for $2,000,000$ steps with a minibatch size of 128. For optimization, we employ ADAM BIBREF16 ."
],
[
"Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.",
"We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.",
"We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as ",
"$$\\text{NCE}(S) = \\tfrac{1}{|S|} \\ln (p_M(S))$$ (Eq. 22) ",
"with $p_M(S)$ being the probability assigned to the sentence by a LM. We employ the same LMs as for SLOR, i.e., LMs trained on words (WordNCE) and WordPieces (WPNCE).",
"Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy: ",
"$$\\text{PPL}(S) = \\exp (-\\text{NCE}(S))$$ (Eq. 24) ",
"Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments."
],
[
"Following earlier work BIBREF2 , we evaluate our metrics using Pearson correlation with human judgments. It is defined as the covariance divided by the product of the standard deviations: ",
"$$\\rho _{X,Y} = \\frac{\\text{cov}(X,Y)}{\\sigma _X \\sigma _Y}$$ (Eq. 28) ",
"Pearson cannot accurately judge a metric's performance for sentences of very similar quality, i.e., in the extreme case of rating outputs of identical quality, the correlation is either not defined or 0, caused by noise of the evaluation model. Thus, we additionally evaluate using mean squared error (MSE), which is defined as the squares of residuals after a linear transformation, divided by the sample size: ",
"$$\\text{MSE}_{X,Y} = \\underset{f}{\\min }\\frac{1}{|X|}\\sum \\limits _{i = 1}^{|X|}{(f(x_i) - y_i)^2}$$ (Eq. 30) ",
"with $f$ being a linear function. Note that, since MSE is invariant to linear transformations of $X$ but not of $Y$ , it is a non-symmetric quasi-metric. We apply it with $Y$ being the human ratings. An additional advantage as compared to Pearson is that it has an interpretable meaning: the expected error made by a given metric as compared to the human rating."
],
[
"As shown in Table 3 , WordSLOR and WPSLOR correlate best with human judgments: WordSLOR (respectively WPSLOR) has a $0.025$ (respectively $0.008$ ) higher Pearson correlation than the best word-overlap metric ROUGE-L-mult, even though the latter requires multiple reference compressions. Furthermore, if we consider with ROUGE-L-single a setting with a single given reference, the distance to WordSLOR increases to $0.048$ for Pearson correlation. Note that, since having a single reference is very common, this result is highly relevant for practical applications. Considering MSE, the top two metrics are still WordSLOR and WPSLOR, with a $0.008$ and, respectively, $0.002$ lower error than the third best metric, ROUGE-L-mult. ",
"Comparing WordSLOR and WPSLOR, we find no significant differences: $0.017$ for Pearson and $0.006$ for MSE. However, WPSLOR uses a more compact LM and, hence, has a shorter training time, since the vocabulary is smaller ( $16,000$ vs. $128,000$ tokens).",
"Next, we find that WordNCE and WPNCE perform roughly on par with word-overlap metrics. This is interesting, since they, in contrast to traditional metrics, do not require reference compressions. However, their correlation with human fluency judgments is strictly lower than that of their respective SLOR counterparts. The difference between WordSLOR and WordNCE is bigger than that between WPSLOR and WPNCE. This might be due to accounting for differences in frequencies being more important for words than for WordPieces. Both WordPPL and WPPPL clearly underperform as compared to all other metrics in our experiments.",
"The traditional word-overlap metrics all perform similarly. ROUGE-L-mult and LR2-F-mult are best and worst, respectively.",
"Results are shown in Table 7 . First, we can see that using SVR (line 1) to combine ROUGE-L-mult and WPSLOR outperforms both individual scores (lines 3-4) by a large margin. This serves as a proof of concept: the information contained in the two approaches is indeed complementary.",
"Next, we consider the setting where only references and no annotated examples are available. In contrast to SVR (line 1), ROUGE-LM (line 2) has only the same requirements as conventional word-overlap metrics (besides a large corpus for training the LM, which is easy to obtain for most languages). Thus, it can be used in the same settings as other word-overlap metrics. Since ROUGE-LM—an uninformed combination—performs significantly better than both ROUGE-L-mult and WPSLOR on their own, it should be the metric of choice for evaluating fluency with given references."
],
[
"The results per compression system (cf. Table 4 ) look different from the correlations in Table 3 : Pearson and MSE are both lower. This is due to the outputs of each given system being of comparable quality. Therefore, the datapoints are similar and, thus, easier to fit for the linear function used for MSE. Pearson, in contrast, is lower due to its invariance to linear transformations of both variables. Note that this effect is smallest for ILP, which has uniformly distributed targets ( $\\text{Var}(Y) = 0.35$ vs. $\\text{Var}(Y) = 0.17$ for SEQ2SEQ).",
"Comparing the metrics, the two SLOR approaches perform best for SEQ2SEQ and T3. In particular, they outperform the best word-overlap metric baseline by $0.244$ and $0.097$ Pearson correlation as well as $0.012$ and $0.012$ MSE, respectively. Since T3 is an abstractive system, we can conclude that WordSLOR and WPSLOR are applicable even for systems that are not limited to make use of a fixed repertoire of words.",
"For ILP and NAMAS, word-overlap metrics obtain best results. The differences in performance, however, are with a maximum difference of $0.072$ for Pearson and ILP much smaller than for SEQ2SEQ. Thus, while the differences are significant, word-overlap metrics do not outperform our SLOR approaches by a wide margin. Recall, additionally, that word-overlap metrics rely on references being available, while our proposed approaches do not require this."
],
[
"Looking next at the correlations for all models but different domains (cf. Table 5 ), we first observe that the results across domains are similar, i.e., we do not observe the same effect as in Subsection \"Analysis I: Fluency Evaluation per Compression System\" . This is due to the distributions of scores being uniform ( $\\text{Var}(Y) \\in [0.28, 0.36]$ ).",
"Next, we focus on an important question: How much does the performance of our SLOR-based metrics depend on the domain, given that the respective LMs are trained on Gigaword, which consists of news data?",
"Comparing the evaluation performance for individual metrics, we observe that, except for letters, WordSLOR and WPSLOR perform best across all domains: they outperform the best word-overlap metric by at least $0.019$ and at most $0.051$ Pearson correlation, and at least $0.004$ and at most $0.014$ MSE. The biggest difference in correlation is achieved for the journal domain. Thus, clearly even LMs which have been trained on out-of-domain data obtain competitive performance for fluency evaluation. However, a domain-specific LM might additionally improve the metrics' correlation with human judgments. We leave a more detailed analysis of the importance of the training data's domain for future work."
],
[
"ROUGE was shown to correlate well with ratings of a generated text's content or meaning at the sentence level BIBREF2 . We further expect content and fluency ratings to be correlated. In fact, sometimes it is difficult to distinguish which one is problematic: to illustrate this, we show some extreme examples—compressions which got the highest fluency rating and the lowest possible content rating by at least one rater, but the lowest fluency score and the highest content score by another—in Table 6 . We, thus, hypothesize that ROUGE should contain information about fluency which is complementary to SLOR, and want to make use of references for fluency evaluation, if available. In this section, we experiment with two reference-based metrics – one trainable one, and one that can be used without fluency annotations, i.e., in the same settings as pure word-overlap metrics."
],
[
"First, we assume a setting in which we have the following available: (i) system outputs whose fluency is to be evaluated, (ii) reference generations for evaluating system outputs, (iii) a small set of system outputs with references, which has been annotated for fluency by human raters, and (iv) a large unlabeled corpus for training a LM. Note that available fluency annotations are often uncommon in real-world scenarios; the reason we use them is that they allow for a proof of concept. In this setting, we train scikit's BIBREF18 support vector regression model (SVR) with the default parameters on predicting fluency, given WPSLOR and ROUGE-L-mult. We use 500 of our total 2955 examples for each of training and development, and the remaining 1955 for testing.",
"Second, we simulate a setting in which we have only access to (i) system outputs which should be evaluated on fluency, (ii) reference compressions, and (iii) large amounts of unlabeled text. In particular, we assume to not have fluency ratings for system outputs, which makes training a regression model impossible. Note that this is the standard setting in which word-overlap metrics are applied. Under these conditions, we propose to normalize both given scores by mean and variance, and to simply add them up. We call this new reference-based metric ROUGE-LM. In order to make this second experiment comparable to the SVR-based one, we use the same 1955 test examples."
],
[
"Fluency evaluation is related to grammatical error detection BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 and grammatical error correction BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . However, it differs from those in several aspects; most importantly, it is concerned with the degree to which errors matter to humans.",
"Work on automatic fluency evaluation in NLP has been rare. heilman2014predicting predicted the fluency (which they called grammaticality) of sentences written by English language learners. In contrast to ours, their approach is supervised. stent2005evaluating and cahill2009correlating found only low correlation between automatic metrics and fluency ratings for system-generated English paraphrases and the output of a German surface realiser, respectively. Explicit fluency evaluation of NLG, including compression and the related task of summarization, has mostly been performed manually. vadlapudi-katragadda:2010:SRW used LMs for the evaluation of summarization fluency, but their models were based on part-of-speech tags, which we do not require, and they were non-neural. Further, they evaluated longer texts, not single sentences like we do. toutanova2016dataset compared 80 word-overlap metrics for evaluating the content and fluency of compressions, finding only low correlation with the latter. However, they did not propose an alternative evaluation. We aim at closing this gap."
],
[
"Automatic compression evaluation has mostly had a strong focus on content. Hence, word-overlap metrics like ROUGE BIBREF1 have been widely used for compression evaluation. However, they have certain shortcomings, e.g., they correlate best for extractive compression, while we, in contrast, are interested in an approach which generalizes to abstractive systems. Alternatives include success rate BIBREF28 , simple accuracy BIBREF29 , which is based on the edit distance between the generation and the reference, or word accuracy BIBREF30 , the equivalent for multiple references."
],
[
"In the sense that we promote an explicit evaluation of fluency, our work is in line with previous criticism of evaluating NLG tasks with a single score produced by word-overlap metrics.",
"The need for better evaluation for machine translation (MT) was expressed, e.g., by callison2006re, who doubted the meaningfulness of BLEU, and claimed that a higher BLEU score was neither a necessary precondition nor a proof of improved translation quality. Similarly, song2013bleu discussed BLEU being unreliable at the sentence or sub-sentence level (in contrast to the system-level), or for only one single reference. This was supported by isabelle-cherry-foster:2017:EMNLP2017, who proposed a so-called challenge set approach as an alternative. graham-EtAl:2016:COLING performed a large-scale evaluation of human-targeted metrics for machine translation, which can be seen as a compromise between human evaluation and fully automatic metrics. They also found fully automatic metrics to correlate only weakly or moderately with human judgments. bojar2016ten further confirmed that automatic MT evaluation methods do not perform well with a single reference. The need of better metrics for MT has been addressed since 2008 in the WMT metrics shared task BIBREF31 , BIBREF32 .",
"For unsupervised dialogue generation, liu-EtAl:2016:EMNLP20163 obtained close to no correlation with human judgements for BLEU, ROUGE and METEOR. They contributed this in a large part to the unrestrictedness of dialogue answers, which makes it hard to match given references. They emphasized that the community should move away from these metrics for dialogue generation tasks, and develop metrics that correlate more strongly with human judgments. elliott-keller:2014:P14-2 reported the same for BLEU and image caption generation. duvsek2017referenceless suggested an RNN to evaluate NLG at the utterance level, given only the input meaning representation."
],
[
"We empirically confirmed the effectiveness of SLOR, a LM score which accounts for the effects of sentence length and individual unigram probabilities, as a metric for fluency evaluation of the NLG task of automatic compression at the sentence level. We further introduced WPSLOR, an adaptation of SLOR to WordPieces, which reduced both model size and training time at a similar evaluation performance. Our experiments showed that our proposed referenceless metrics correlate significantly better with fluency ratings for the outputs of compression systems than traditional word-overlap metrics on a benchmark dataset. Additionally, they can be applied even in settings where no references are available, or would be costly to obtain. Finally, for given references, we proposed the reference-based metric ROUGE-LM, which consists of a combination of WPSLOR and ROUGE. Thus, we were able to obtain an even more accurate fluency evaluation."
],
[
"We would like to thank Sebastian Ebert and Samuel Bowman for their detailed and helpful feedback."
]
],
"section_name": [
"Introduction",
"On Acceptability",
"Method",
"SLOR",
"WordPieces",
"WPSLOR",
"Experiment",
"Dataset",
"LM Hyperparameters and Training",
"Baseline Metrics",
"Correlation and Evaluation Scores",
"Results and Discussion",
"Analysis I: Fluency Evaluation per Compression System",
"Analysis II: Fluency Evaluation per Domain",
"Incorporation of Given References",
"Experimental Setup",
"Fluency Evaluation",
"Compression Evaluation",
"Criticism of Common Metrics for NLG",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"24ebf6cd50b3f873f013cd206aa999a4aa841317",
"d04c757c5a09e8a9f537d15bdd93ac4043c7a3e9"
],
"answer": [
{
"evidence": [
"Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.",
"We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.",
"We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as",
"Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:",
"Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks.",
"We compare to the best n-gram-overlap metrics from toutanova2016dataset;",
"We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length.",
"Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:",
"Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . "
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.",
"We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.",
"We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as",
"Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:",
"Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments."
],
"extractive_spans": [],
"free_form_answer": "No, other baseline metrics they use besides ROUGE-L are n-gram overlap, negative cross-entropy, perplexity, and BLEU.",
"highlighted_evidence": [
"Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks.",
"We compare to the best n-gram-overlap metrics from toutanova2016dataset;",
"We further compare to the negative LM cross-entropy",
"Our next baseline is perplexity, ",
"Due to its popularity, we also performed initial experiments with BLEU BIBREF17 "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"5ecd71796a1b58d848a20b0fe4be06ee50ea40fb"
],
"answer": [
{
"evidence": [
"We calculate the probability of a sentence with a long-short term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus. More details on LSTM LMs can be found, e.g., in sundermeyer2012lstm. The unigram probabilities for SLOR are estimated using the same corpus.",
"We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data."
],
"extractive_spans": [
"LSTM LMs"
],
"free_form_answer": "",
"highlighted_evidence": [
"We calculate the probability of a sentence with a long-short term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus.",
"We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"3fd01f74c49811127a1014b99a0681072e1ec34d"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"yes",
"yes",
"yes"
],
"question": [
"Is ROUGE their only baseline?",
"what language models do they use?",
"what questions do they ask human judges?"
],
"question_id": [
"7aa8375cdf4690fc3b9b1799b0f5a9ec1c1736ed",
"3ac30bd7476d759ea5d9a5abf696d4dfc480175b",
"0e57a0983b4731eba9470ba964d131045c8c7ea7"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"social",
"social",
"social"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Example compressions from our dataset with their fluency scores; scores in [1, 3], higher is better.",
"Table 2: Average fluency ratings for each compression system in the dataset by Toutanova et al. (2016).",
"Table 3: Pearson correlation (higher is better) and MSE (lower is better) for all metrics; best results in bold; refs=number of references used to compute the metric.",
"Table 4: Pearson correlation (higher is better) and MSE (lower is better), reported by compression system; best results in bold; refs=number of references used to compute the metric.",
"Table 5: Pearson correlation (higher is better) and MSE (lower is better), reported by domain of the original sentence or paragraph; best results in bold; refs=number of references used to compute the metric.",
"Table 6: Sentences for which raters were unsure if they were perceived as problematic due to fluency or content issues, together with the model which generated them.",
"Table 7: Combinations; all differences except for 3 and 4 are statistically significant; refs=number of references used to compute the metric; ROUGE=ROUGE-L-mult; best results in bold."
],
"file": [
"1-Table1-1.png",
"3-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"7-Table7-1.png"
]
} | [
"Is ROUGE their only baseline?"
] | [
[
"1809.08731-Baseline Metrics-1",
"1809.08731-Baseline Metrics-7",
"1809.08731-Baseline Metrics-0"
]
] | [
"No, other baseline metrics they use besides ROUGE-L are n-gram overlap, negative cross-entropy, perplexity, and BLEU."
] | 22 |
1809.04960 | Unsupervised Machine Commenting with Neural Variational Topic Model | Article comments can provide supplementary opinions and facts for readers, thereby increase the attraction and engagement of articles. Therefore, automatically commenting is helpful in improving the activeness of the community, such as online forums and news websites. Previous work shows that training an automatic commenting system requires large parallel corpora. Although part of articles are naturally paired with the comments on some websites, most articles and comments are unpaired on the Internet. To fully exploit the unpaired data, we completely remove the need for parallel data and propose a novel unsupervised approach to train an automatic article commenting model, relying on nothing but unpaired articles and comments. Our model is based on a retrieval-based commenting framework, which uses news to retrieve comments based on the similarity of their topics. The topic representation is obtained from a neural variational topic model, which is trained in an unsupervised manner. We evaluate our model on a news comment dataset. Experiments show that our proposed topic-based approach significantly outperforms previous lexicon-based models. The model also profits from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios. | {
"paragraphs": [
[
"Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans. It provides more challenges because commenting requires the abilities of comprehending the article, summarizing the main ideas, mining the opinions, and generating the natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors.",
"Because of the advantage and importance described above, more recent studies have focused on building a machine commenting system with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset. However, the naturally paired commenting dataset is loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sample 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. It indicates that the naturally paired article-comment dataset contains a lot of loose pairs, which is a potential harm to the supervised models. Besides, most articles and comments are unpaired on the Internet. For example, a lot of articles do not have the corresponding comments on the news websites, and the comments regarding the news are more likely to appear on social media like Twitter. Since comments on social media are more various and recent, it is important to exploit these unpaired data.",
"Another issue is that there is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comment does not always tell the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). However, existing methods for machine commenting do not model the topics of articles, which is a potential harm to the generated comments.",
"To this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need of parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve the comments by the similarity of their topics. The topic is represented with a variational topic, which is trained in an unsupervised manner.",
"The contributions of this work are as follows:"
],
[
"In this section, we highlight the research challenges of machine commenting, and provide some solutions to deal with these challenges."
],
[
"Here, we first introduce the challenges of building a well-performed machine commenting system.",
"The generative model, such as the popular sequence-to-sequence model, is a direct choice for supervised machine commenting. One can use the title or the content of the article as the encoder input, and the comments as the decoder output. However, we find that the mode collapse problem is severed with the sequence-to-sequence model. Despite the input articles being various, the outputs of the model are very similar. The reason mainly comes from the contradiction between the complex pattern of generating comments and the limited parallel data. In other natural language generation tasks, such as machine translation and text summarization, the target output of these tasks is strongly related to the input, and most of the required information is involved in the input text. However, the comments are often weakly related to the input articles, and part of the information in the comments is external. Therefore, it requires much more paired data for the supervised model to alleviate the mode collapse problem.",
"One article can have multiple correct comments, and these comments can be very semantically different from each other. However, in the training set, there is only a part of the correct comments, so the other correct comments will be falsely regarded as the negative samples by the supervised model. Therefore, many interesting and informative comments will be discouraged or neglected, because they are not paired with the articles in the training set.",
"There is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comments often have some external information, or even tell an opposite opinion from the articles. Therefore, it is difficult to automatically mine the relationship between articles and comments."
],
[
"Facing the above challenges, we provide three solutions to the problems.",
"Given a large set of candidate comments, the retrieval model can select some comments by matching articles with comments. Compared with the generative model, the retrieval model can achieve more promising performance. First, the retrieval model is less likely to suffer from the mode collapse problem. Second, the generated comments are more predictable and controllable (by changing the candidate set). Third, the retrieval model can be combined with the generative model to produce new comments (by adding the outputs of generative models to the candidate set).",
"The unsupervised learning method is also important for machine commenting to alleviate the problems descried above. Unsupervised learning allows the model to exploit more data, which helps the model to learn more complex patterns of commenting and improves the generalization of the model. Many comments provide some unique opinions, but they do not have paired articles. For example, many interesting comments on social media (e.g. Twitter) are about recent news, but require redundant work to match these comments with the corresponding news articles. With the help of the unsupervised learning method, the model can also learn to generate these interesting comments. Additionally, the unsupervised learning method does not require negative samples in the training stage, so that it can alleviate the negative sampling bias.",
"Although there is semantic gap between the articles and the comments, we find that most articles and comments share the same topics. Therefore, it is possible to bridge the semantic gap by modeling the topics of both articles and comments. It is also similar to how humans generate comments. Humans do not need to go through the whole article but are capable of making a comment after capturing the general topics."
],
[
"We now introduce our proposed approach as an implementation of the solutions above. We first give the definition and the denotation of the problem. Then, we introduce the retrieval-based commenting framework. After that, a neural variational topic model is introduced to model the topics of the comments and the articles. Finally, semi-supervised training is used to combine the advantage of both supervised and unsupervised learning."
],
[
"Given an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1 . The comment pool is formed from a large scale of candidate comments INLINEFORM2 , where INLINEFORM3 is the number of the unique comments in the pool. In this work, we have 4.5 million human comments in the candidate set, and the comments are various, covering different topics from pets to sports.",
"The retrieval-based model should score the matching between the upcoming article and each comments, and return the comments which is matched with the articles the most. Therefore, there are two main challenges in retrieval-based commenting. One is how to evaluate the matching of the articles and comments. The other is how to efficiently compute the matching scores because the number of comments in the pool is large.",
"To address both problems, we select the “dot-product” operation to compute matching scores. More specifically, the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 . Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the “dot-product” operation: DISPLAYFORM0 ",
"The dot-product scoring method has proven a successful in a matching model BIBREF1 . The problem of finding datapoints with the largest dot-product values is called Maximum Inner Product Search (MIPS), and there are lots of solutions to improve the efficiency of solving this problem. Therefore, even when the number of candidate comments is very large, the model can still find comments with the highest efficiency. However, the study of the MIPS is out of the discussion in this work. We refer the readers to relevant articles for more details about the MIPS BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Another advantage of the dot-product scoring method is that it does not require any extra parameters, so it is more suitable as a part of the unsupervised model."
],
[
"We obtain the representations of articles INLINEFORM0 and comments INLINEFORM1 with a neural variational topic model. The neural variational topic model is based on the variational autoencoder framework, so it can be trained in an unsupervised manner. The model encodes the source text into a representation, from which it reconstructs the text.",
"We concatenate the title and the body to represent the article. In our model, the representations of the article and the comment are obtained in the same way. For simplicity, we denote both the article and the comment as “document”. Since the articles are often very long (more than 200 words), we represent the documents into bag-of-words, for saving both the time and memory cost. We denote the bag-of-words representation as INLINEFORM0 , where INLINEFORM1 is the one-hot representation of the word at INLINEFORM2 position, and INLINEFORM3 is the number of words in the vocabulary. The encoder INLINEFORM4 compresses the bag-of-words representations INLINEFORM5 into topic representations INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are the trainable parameters. Then the decoder INLINEFORM4 reconstructs the documents by independently generating each words in the bag-of-words: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 is the number of words in the bag-of-words, and INLINEFORM1 is a trainable matrix to map the topic representation into the word distribution.",
"In order to model the topic information, we use a Dirichlet prior rather than the standard Gaussian prior. However, it is difficult to develop an effective reparameterization function for the Dirichlet prior to train VAE. Therefore, following BIBREF6 , we use the Laplace approximation BIBREF7 to Dirichlet prior INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 denotes the logistic normal distribution, INLINEFORM1 is the number of topics, and INLINEFORM2 is a parameter vector. Then, the variational lower bound is written as: DISPLAYFORM0 ",
"where the first term is the KL-divergence loss and the second term is the reconstruction loss. The mean INLINEFORM0 and the variance INLINEFORM1 are computed as follows: DISPLAYFORM0 DISPLAYFORM1 ",
"We use the INLINEFORM0 and INLINEFORM1 to generate the samples INLINEFORM2 by sampling INLINEFORM3 , from which we reconstruct the input INLINEFORM4 .",
"At the training stage, we train the neural variational topic model with the Eq. EQREF22 . At the testing stage, we use INLINEFORM0 to compute the topic representations of the article INLINEFORM1 and the comment INLINEFORM2 ."
],
[
"In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0 ",
"where INLINEFORM0 is the loss function of the unsupervised learning (Eq. refloss), INLINEFORM1 is the loss function of the supervised learning (e.g. the cross-entropy loss of Seq2Seq model), and INLINEFORM2 is a hyper-parameter to balance two parts of the loss function. Hence, the model is trained on both unpaired data INLINEFORM3 , and paired data INLINEFORM4 ."
],
[
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words."
],
[
"The hidden size of the model is 512, and the batch size is 64. The number of topics INLINEFORM0 is 100. The weight INLINEFORM1 in Eq. EQREF26 is 1.0 under the semi-supervised setting. We prune the vocabulary, and only leave 30,000 most frequent words in the vocabulary. We train the model for 20 epochs with the Adam optimizing algorithms BIBREF8 . In order to alleviate the KL vanishing problem, we set the initial learning to INLINEFORM2 , and use batch normalization BIBREF9 in each layer. We also gradually increase the KL term from 0 to 1 after each epoch."
],
[
"We compare our model with several unsupervised models and supervised models.",
"Unsupervised baseline models are as follows:",
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.",
"LDA (Topic, Non-Neural) is a popular unsupervised topic model, which discovers the abstract \"topics\" that occur in a collection of documents. We train the LDA with the articles and comments in the training set. The model retrieves the comments by the similarity of the topic representations.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"The supervised baseline models are:",
"S2S (Generative) BIBREF11 is a supervised generative model based on the sequence-to-sequence network with the attention mechanism BIBREF12 . The model uses the titles and the bodies of the articles as the encoder input, and generates the comments with the decoder.",
"IR (Retrieval) BIBREF0 is a supervised retrieval-based model, which trains a convolutional neural network (CNN) to take the articles and a comment as inputs, and output the relevance score. The positive instances for training are the pairs in the training set, and the negative instances are randomly sampled using the negative sampling technique BIBREF13 ."
],
[
"For text generation, automatically evaluate the quality of the generated text is an open problem. In particular, the comment of a piece of news can be various, so it is intractable to find out all the possible references to be compared with the model outputs. Inspired by the evaluation methods of dialogue models, we formulate the evaluation as a ranking problem. Given a piece of news and a set of candidate comments, the comment model should return the rank of the candidate comments. The candidate comment set consists of the following parts:",
"Correct: The ground-truth comments of the corresponding news provided by the human.",
"Plausible: The 50 most similar comments to the news. We use the news as the query to retrieve the comments that appear in the training set based on the cosine similarity of their tf-idf values. We select the top 50 comments that are not the correct comments as the plausible comments.",
"Popular: The 50 most popular comments from the dataset. We count the frequency of each comments in the training set, and select the 50 most frequent comments to form the popular comment set. The popular comments are the general and meaningless comments, such as “Yes”, “Great”, “That's right', and “Make Sense”. These comments are dull and do not carry any information, so they are regarded as incorrect comments.",
"Random: After selecting the correct, plausible, and popular comments, we fill the candidate set with randomly selected comments from the training set so that there are 200 unique comments in the candidate set.",
"Following previous work, we measure the rank in terms of the following metrics:",
"Recall@k: The proportion of human comments found in the top-k recommendations.",
"Mean Rank (MR): The mean rank of the human comments.",
"Mean Reciprocal Rank (MRR): The mean reciprocal rank of the human comments.",
"The evaluation protocol is compatible with both retrieval models and generative models. The retrieval model can directly rank the comments by assigning a score for each comment, while the generative model can rank the candidates by the model's log-likelihood score.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.",
"We also evaluate two popular supervised models, i.e. seq2seq and IR. Since the articles are very long, we find either RNN-based or CNN-based encoders cannot hold all the words in the articles, so it requires limiting the length of the input articles. Therefore, we use an MLP-based encoder, which is the same as our model, to encode the full length of articles. In our preliminary experiments, the MLP-based encoder with full length articles achieves better scores than the RNN/CNN-based encoder with limited length articles. It shows that the seq2seq model gets low scores on all relevant metrics, mainly because of the mode collapse problem as described in Section Challenges. Unlike seq2seq, IR is based on a retrieval framework, so it achieves much better performance."
],
[
"Following previous work BIBREF0 , we evaluate the models under the generative evaluation setting. The retrieval-based models generate the comments by selecting a comment from the candidate set. The candidate set contains the comments in the training set. Unlike the retrieval evaluation, the reference comments may not appear in the candidate set, which is closer to real-world settings. Generative-based models directly generate comments without a candidate set. We compare the generated comments of either the retrieval-based models or the generative models with the five reference comments. We select four popular metrics in text generation to compare the model outputs with the references: BLEU BIBREF14 , METEOR BIBREF15 , ROUGE BIBREF16 , CIDEr BIBREF17 .",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios."
],
[
"We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired data. Figure FIGREF39 shows the curve (blue) of the recall1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. It concludes that the proposed model can exploit the unpaired data to further improve the performance of the supervised model.",
"Although our proposed model can achieve better performance than previous models, there are still remaining two questions: why our model can outperform them, and how to further improve the performance. To address these queries, we perform error analysis to analyze the error types of our model and the baseline models. We select TF-IDF, S2S, and IR as the representative baseline models. We provide 200 unique comments as the candidate sets, which consists of four types of comments as described in the above retrieval evaluation setting: Correct, Plausible, Popular, and Random. We rank the candidate comment set with four models (TF-IDF, S2S, IR, and Proposed+IR), and record the types of top-1 comments.",
"Figure FIGREF40 shows the percentage of different types of top-1 comments generated by each model. It shows that TF-IDF prefers to rank the plausible comments as the top-1 comments, mainly because it matches articles with the comments based on the similarity of the lexicon. Therefore, the plausible comments, which are more similar in the lexicon, are more likely to achieve higher scores than the correct comments. It also shows that the S2S model is more likely to rank popular comments as the top-1 comments. The reason is the S2S model suffers from the mode collapse problem and data sparsity, so it prefers short and general comments like “Great” or “That's right”, which appear frequently in the training set. The correct comments often contain new information and different language models from the training set, so they do not obtain a high score from S2S.",
"IR achieves better performance than TF-IDF and S2S. However, it still suffers from the discrimination between the plausible comments and correct comments. This is mainly because IR does not explicitly model the underlying topics. Therefore, the correct comments which are more relevant in topic with the articles get lower scores than the plausible comments which are more literally relevant with the articles. With the help of our proposed model, proposed+IR achieves the best performance, and achieves a better accuracy to discriminate the plausible comments and the correct comments. Our proposed model incorporates the topic information, so the correct comments which are more similar to the articles in topic obtain higher scores than the other types of comments. According to the analysis of the error types of our model, we still need to focus on avoiding predicting the plausible comments."
],
[
"There are few studies regarding machine commenting. Qin et al. QinEA2018 is the first to propose the article commenting task and a dataset, which is used to evaluate our model in this work. More studies about the comments aim to automatically evaluate the quality of the comments. Park et al. ParkSDE16 propose a system called CommentIQ, which assist the comment moderators in identifying high quality comments. Napoles et al. NapolesTPRP17 propose to discriminating engaging, respectful, and informative conversations. They present a Yahoo news comment threads dataset and annotation scheme for the new task of identifying “good” online conversations. More recently, Kolhaatkar and Taboada KolhatkarT17 propose a model to classify the comments into constructive comments and non-constructive comments. In this work, we are also inspired by the recent related work of natural language generation models BIBREF18 , BIBREF19 ."
],
[
"Topic models BIBREF20 are among the most widely used models for learning unsupervised representations of text. One of the most popular approaches for modeling the topics of the documents is the Latent Dirichlet Allocation BIBREF21 , which assumes a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents. In order to explore the space of different modeling assumptions, some black-box inference methods BIBREF22 , BIBREF23 are proposed and applied to the topic models.",
"Kingma and Welling vae propose the Variational Auto-Encoder (VAE) where the generative model and the variational posterior are based on neural networks. VAE has recently been applied to modeling the representation and the topic of the documents. Miao et al. NVDM model the representation of the document with a VAE-based approach called the Neural Variational Document Model (NVDM). However, the representation of NVDM is a vector generated from a Gaussian distribution, so it is not very interpretable unlike the multinomial mixture in the standard LDA model. To address this issue, Srivastava and Sutton nvlda propose the NVLDA model that replaces the Gaussian prior with the Logistic Normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space. More recently, Nallapati et al. sengen present a variational auto-encoder approach which models the posterior over the topic assignments to sentences using an RNN."
],
[
"We explore a novel way to train a machine commenting model in an unsupervised manner. According to the properties of the task, we propose using the topics to bridge the semantic gap between articles and comments. We introduce a variation topic model to represent the topics, and match the articles and comments by the similarity of their topics. Experiments show that our topic-based approach significantly outperforms previous lexicon-based models. The model can also profit from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios."
]
],
"section_name": [
"Introduction",
"Machine Commenting",
"Challenges",
"Solutions",
"Proposed Approach",
"Retrieval-based Commenting",
"Neural Variational Topic Model",
"Training",
"Datasets",
"Implementation Details",
"Baselines",
"Retrieval Evaluation",
"Generative Evaluation",
"Analysis and Discussion",
"Article Comment",
"Topic Model and Variational Auto-Encoder",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"4cab4c27ed7f23d35b539bb3b1c7380ef603afe7",
"a951e1f37364826ddf170c9076b0d647f29db95a"
],
"answer": [
{
"evidence": [
"In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0"
],
"extractive_spans": [
"dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1"
],
"free_form_answer": "",
"highlighted_evidence": [
" In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.",
"We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired data. Figure FIGREF39 shows the curve (blue) of the recall1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. It concludes that the proposed model can exploit the unpaired data to further improve the performance of the supervised model."
],
"extractive_spans": [
"Chinese dataset BIBREF0"
],
"free_form_answer": "",
"highlighted_evidence": [
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model.",
"We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2d08e056385b01322aee0901a9b84cfc9a888ee1",
"a103500a032c68c4c921e371020286f6642f2eb5"
],
"answer": [
{
"evidence": [
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.",
"FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.",
"FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)"
],
"extractive_spans": [],
"free_form_answer": "Under the retrieval evaluation setting, their proposed model + IR2 had better MRR than NVDM by 0.3769, better MR by 4.6, and better Recall@10 by 20 . \nUnder the generative evaluation setting the proposed model + IR2 had better BLEU by 0.044 , better CIDEr by 0.033, better ROUGE by 0.032, and better METEOR by 0.029",
"highlighted_evidence": [
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. ",
"FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation.",
"FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.",
"FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)",
"FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)"
],
"extractive_spans": [],
"free_form_answer": "Proposed model is better than both lexical based models by significan margin in all metrics: BLEU 0.261 vs 0.250, ROUGLE 0.162 vs 0.155 etc.",
"highlighted_evidence": [
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation.",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation.",
"FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)",
"FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5244e8c8bd4b0b37950dfc4396147d6107ea361f"
],
"answer": [
{
"evidence": [
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic."
],
"extractive_spans": [
"TF-IDF",
"NVDM"
],
"free_form_answer": "",
"highlighted_evidence": [
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3b43bfea62e231d06768f9eb11ddfbfb0d8973a5"
],
"answer": [
{
"evidence": [
"We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired data. Figure FIGREF39 shows the curve (blue) of the recall1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. It concludes that the proposed model can exploit the unpaired data to further improve the performance of the supervised model."
],
"extractive_spans": [
"from 50K to 4.8M"
],
"free_form_answer": "",
"highlighted_evidence": [
"We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c16bd2e6d7fedcc710352b168120d7b82f78d55a"
],
"answer": [
{
"evidence": [
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words."
],
"extractive_spans": [
"198,112"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset consists of 198,112 news articles."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"bd7c9ed29ee02953c27630de0beee67f7b23eba0"
],
"answer": [
{
"evidence": [
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words."
],
"extractive_spans": [
"Chinese dataset BIBREF0"
],
"free_form_answer": "",
"highlighted_evidence": [
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"Which paired corpora did they use in the other experiment?",
"By how much does their system outperform the lexicon-based models?",
"Which lexicon-based models did they compare with?",
"How many comments were used?",
"How many articles did they have?",
"What news comment dataset was used?"
],
"question_id": [
"100cf8b72d46da39fedfe77ec939fb44f25de77f",
"8cc56fc44136498471754186cfa04056017b4e54",
"5fa431b14732b3c47ab6eec373f51f2bca04f614",
"33ccbc401b224a48fba4b167e86019ffad1787fb",
"cca74448ab0c518edd5fc53454affd67ac1a201c",
"b69ffec1c607bfe5aa4d39254e0770a3433a191b"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)",
"Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)",
"Figure 1: The performance of the supervised model and the semi-supervised model trained on different paired data size.",
"Figure 2: Error types of comments generated by different models."
],
"file": [
"5-Table2-1.png",
"5-Table3-1.png",
"6-Figure1-1.png",
"6-Figure2-1.png"
]
} | [
"By how much does their system outperform the lexicon-based models?"
] | [
[
"1809.04960-Baselines-2",
"1809.04960-Generative Evaluation-1",
"1809.04960-Retrieval Evaluation-10",
"1809.04960-Baselines-4",
"1809.04960-5-Table2-1.png",
"1809.04960-5-Table3-1.png"
]
] | [
"Proposed model is better than both lexical based models by significan margin in all metrics: BLEU 0.261 vs 0.250, ROUGLE 0.162 vs 0.155 etc."
] | 24 |
1708.05873 | What Drives the International Development Agenda? An NLP Analysis of the United Nations General Debate 1970-2016 | There is surprisingly little known about agenda setting for international development in the United Nations (UN) despite it having a significant influence on the process and outcomes of development efforts. This paper addresses this shortcoming using a novel approach that applies natural language processing techniques to countries' annual statements in the UN General Debate. Every year UN member states deliver statements during the General Debate on their governments' perspective on major issues in world politics. These speeches provide invaluable information on state preferences on a wide range of issues, including international development, but have largely been overlooked in the study of global politics. This paper identifies the main international development topics that states raise in these speeches between 1970 and 2016, and examine the country-specific drivers of international development rhetoric. | {
"paragraphs": [
[
"Decisions made in international organisations are fundamental to international development efforts and initiatives. It is in these global governance arenas that the rules of the global economic system, which have a huge impact on development outcomes are agreed on; decisions are made about large-scale funding for development issues, such as health and infrastructure; and key development goals and targets are agreed on, as can be seen with the Millennium Development Goals (MDGs). More generally, international organisations have a profound influence on the ideas that shape international development efforts BIBREF0 .",
"Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the global governance of development BIBREF1 . More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.",
"The lack of knowledge about the agenda setting process in the global governance of development is in large part due to the absence of obvious data sources on states' preferences about international development issues. To address this gap we employ a novel approach based on the application of natural language processing (NLP) to countries' speeches in the UN. Every September, the heads of state and other high-level country representatives gather in New York at the start of a new session of the United Nations General Assembly (UNGA) and address the Assembly in the General Debate. The General Debate (GD) provides the governments of the almost two hundred UN member states with an opportunity to present their views on key issues in international politics – including international development. As such, the statements made during GD are an invaluable and, largely untapped, source of information on governments' policy preferences on international development over time.",
"An important feature of these annual country statements is that they are not institutionally connected to decision-making in the UN. This means that governments face few external constraints when delivering these speeches, enabling them to raise the issues that they consider the most important. Therefore, the General Debate acts “as a barometer of international opinion on important issues, even those not on the agenda for that particular session” BIBREF2 . In fact, the GD is usually the first item for each new session of the UNGA, and as such it provides a forum for governments to identify like-minded members, and to put on the record the issues they feel the UNGA should address. Therefore, the GD can be viewed as a key forum for governments to put different policy issues on international agenda.",
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements."
],
[
"In the analysis we consider the nature of international development issues raised in the UN General Debates, and the effect of structural covariates on the level of developmental rhetoric in the GD statements. To do this, we first implement a structural topic model BIBREF4 . This enables us to identify the key international development topics discussed in the GD. We model topic prevalence in the context of the structural covariates. In addition, we control for region fixed effects and time trend. The aim is to allow the observed metadata to affect the frequency with which a topic is discussed in General Debate speeches. This allows us to test the degree of association between covariates (and region/time effects) and the average proportion of a document discussing a topic."
],
[
"We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose semantic coherence measure, which is closely related to point-wise mutual information measure posited by BIBREF6 to evaluate topic quality. BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments.",
"Exclusivity scores for each topic follows BIBREF7 . Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. Cohesive and exclusive topics are more semantically useful. Following BIBREF8 we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence. The topic quality is usually evaluated by highest probability words, which is presented in Figure FIGREF4 ."
],
[
"Figure FIGREF4 provides a list of the main topics (and the highest probability words associated these topics) that emerge from the STM of UN General Debate statements. In addition to the highest probability words, we use several other measures of key words (not presented here) to interpret the dimensions. This includes the FREX metric (which combines exclusivity and word frequency), the lift (which gives weight to words that appear less frequently in other topics), and the score (which divides the log frequency of the word in the topic by the log frequency of the word in other topics). We provide a brief description of each of the 16 topics here.",
"Topic 1 - Security and cooperation in Europe.",
"The first topic is related to issues of security and cooperation, with a focus on Central and Eastern Europe.",
"Topic 2 - Economic development and the global system.",
"This topic is related to economic development, particularly around the global economic system. The focus on `trade', `growth', `econom-', `product', `growth', `financ-', and etc. suggests that Topic 2 represent a more traditional view of international development in that the emphasis is specifically on economic processes and relations.",
"Topic 3 - Nuclear disarmament.",
"This topic picks up the issue of nuclear weapons, which has been a major issue in the UN since its founding.",
"Topic 4 - Post-conflict development.",
"This topic relates to post-conflict development. The countries that feature in the key words (e.g. Rwanda, Liberia, Bosnia) have experienced devastating civil wars, and the emphasis on words such as `develop', `peace', `hope', and `democrac-' suggest that this topic relates to how these countries recover and move forward.",
"Topic 5 - African independence / decolonisation.",
"This topic picks up the issue of African decolonisation and independence. It includes the issue of apartheid in South Africa, as well as racism and imperialism more broadly.",
"Topic 6 - Africa.",
"While the previous topic focused explicitly on issues of African independence and decolonisation, this topic more generally picks up issues linked to Africa, including peace, governance, security, and development.",
"Topic 7 - Sustainable development.",
"This topic centres on sustainable development, picking up various issues linked to development and climate change. In contrast to Topic 2, this topic includes some of the newer issues that have emerged in the international development agenda, such as sustainability, gender, education, work and the MDGs.",
"Topic 8 - Functional topic.",
"This topic appears to be comprised of functional or process-oriented words e.g. `problem', `solution', `effort', `general', etc.",
"Topic 9 - War.",
"This topic directly relates to issues of war. The key words appear to be linked to discussions around ongoing wars.",
"Topic 10 - Conflict in the Middle East.",
"This topic clearly picks up issues related to the Middle East – particularly around peace and conflict in the Middle East.",
"Topic 11 - Latin America.",
"This is another topic with a regional focus, picking up on issues related to Latin America.",
"Topic 12 - Commonwealth.",
"This is another of the less obvious topics to emerge from the STM in that the key words cover a wide range of issues. However, the places listed (e.g. Australia, Sri Lanka, Papua New Guinea) suggest the topic is related to the Commonwealth (or former British colonies).",
"Topic 13 - International security.",
"This topic broadly captures international security issues (e.g. terrorism, conflict, peace) and in particularly the international response to security threats, such as the deployment of peacekeepers.",
"Topic 14 - International law.",
"This topic picks up issues related to international law, particularly connected to territorial disputes.",
"Topic 15 - Decolonisation.",
"This topic relates more broadly to decolonisation. As well as specific mention of decolonisation, the key words include a range of issues and places linked to the decolonisation process.",
"Topic 16 - Cold War.",
"This is another of the less tightly defined topics. The topics appears to pick up issues that are broadly related to the Cold War. There is specific mention of the Soviet Union, and detente, as well as issues such as nuclear weapons, and the Helsinki Accords.",
"Based on these topics, we examine Topic 2 and Topic 7 as the principal “international development” topics. While a number of other topics – for example post-conflict development, Africa, Latin America, etc. – are related to development issues, Topic 2 and Topic 7 most directly capture aspects of international development. We consider these two topics more closely by contrasting the main words linked to these two topics. In Figure FIGREF6 , the word clouds show the 50 words most likely to mentioned in relation to each of the topics.",
"The word clouds provide further support for Topic 2 representing a more traditional view of international development focusing on economic processes. In addition to a strong emphasis on 'econom-', other key words, such as `trade', `debt', `market', `growth', `industri-', `financi-', `technolog-', `product', and `argicultur-', demonstrate the narrower economic focus on international development captured by Topic 2. In contrast, Topic 7 provides a much broader focus on development, with key words including `climat-', `sustain', `environ-', `educ-', `health', `women', `work', `mdgs', `peac-', `govern-', and `right'. Therefore, Topic 7 captures many of the issues that feature in the recent Sustainable Development Goals (SDGs) agenda BIBREF9 .",
"Figure FIGREF7 calculates the difference in probability of a word for the two topics, normalized by the maximum difference in probability of any word between the two topics. The figure demonstrates that while there is a much high probability of words, such as `econom-', `trade', and even `develop-' being used to discuss Topic 2; words such as `climat-', `govern-', `sustain', `goal', and `support' being used in association with Topic 7. This provides further support for the Topic 2 representing a more economistic view of international development, while Topic 7 relating to a broader sustainable development agenda.",
"We also assess the relationship between topics in the STM framework, which allows correlations between topics to be examined. This is shown in the network of topics in Figure FIGREF8 . The figure shows that Topic 2 and Topic 7 are closely related, which we would expect as they both deal with international development (and share key words on development, such as `develop-', `povert-', etc.). It is also worth noting that while Topic 2 is more closely correlated with the Latin America topic (Topic 11), Topic 7 is more directly correlated with the Africa topic (Topic 6)."
],
[
"We next look at the relationship between topic proportions and structural factors. The data for these structural covariates is taken from the World Bank's World Development Indicators (WDI) unless otherwise stated. Confidence intervals produced by the method of composition in STM allow us to pick up statistical uncertainty in the linear regression model.",
"Figure FIGREF9 demonstrates the effect of wealth (GDP per capita) on the the extent to which states discuss the two international development topics in their GD statements. The figure shows that the relationship between wealth and the topic proportions linked to international development differs across Topic 2 and Topic 7. Discussion of Topic 2 (economic development) remains far more constant across different levels of wealth than Topic 7. The poorest states tend to discuss both topics more than other developing nations. However, this effect is larger for Topic 7. There is a decline in the proportion of both topics as countries become wealthier until around $30,000 when there is an increase in discussion of Topic 7. There is a further pronounced increase in the extent countries discuss Topic 7 at around $60,000 per capita. However, there is a decline in expected topic proportions for both Topic 2 and Topic 7 for the very wealthiest countries.",
"Figure FIGREF10 shows the expected topic proportions for Topic 2 and Topic 7 associated with different population sizes. The figure shows a slight surge in the discussion of both development topics for countries with the very smallest populations. This reflects the significant amount of discussion of development issues, particularly sustainable development (Topic 7) by the small island developing states (SIDs). The discussion of Topic 2 remains relatively constant across different population sizes, with a slight increase in the expected topic proportion for the countries with the very largest populations. However, with Topic 7 there is an increase in expected topic proportion until countries have a population of around 300 million, after which there is a decline in discussion of Topic 7. For countries with populations larger than 500 million there is no effect of population on discussion of Topic 7. It is only with the very largest populations that we see a positive effect on discussion of Topic 7.",
"We would also expect the extent to which states discuss international development in their GD statements to be impacted by the amount of aid or official development assistance (ODA) they receive. Figure FIGREF11 plots the expected topic proportion according to the amount of ODA countries receive. Broadly-speaking the discussion of development topics remains largely constant across different levels of ODA received. There is, however, a slight increase in the expected topic proportions of Topic 7 according to the amount of ODA received. It is also worth noting the spikes in discussion of Topic 2 and Topic 7 for countries that receive negative levels of ODA. These are countries that are effectively repaying more in loans to lenders than they are receiving in ODA. These countries appear to raise development issues far more in their GD statements, which is perhaps not altogether surprising.",
"We also consider the effects of democracy on the expected topic proportions of both development topics using the Polity IV measure of democracy BIBREF10 . Figure FIGREF12 shows the extent to which states discuss the international development topics according to their level of democracy. Discussion of Topic 2 is fairly constant across different levels of democracy (although there are some slight fluctuations). However, the extent to which states discuss Topic 7 (sustainable development) varies considerably across different levels of democracy. Somewhat surprisingly the most autocratic states tend to discuss Topic 7 more than the slightly less autocratic states. This may be because highly autocratic governments choose to discuss development and environmental issues to avoid a focus on democracy and human rights. There is then an increase in the expected topic proportion for Topic 7 as levels of democracy increase reaching a peak at around 5 on the Polity scale, after this there is a gradual decline in discussion of Topic 7. This would suggest that democratizing or semi-democratic countries (which are more likely to be developing countries with democratic institutions) discuss sustainable development more than established democracies (that are more likely to be developed countries).",
"We also plot the results of the analysis as the difference in topic proportions for two different values of the effect of conflict. Our measure of whether a country is experiencing a civil conflict comes from the UCDP/PRIO Armed Conflict Dataset BIBREF11 . Point estimates and 95% confidence intervals are plotted in Figure FIGREF13 . The figure shows that conflict affects only Topic 7 and not Topic 2. Countries experiencing conflict are less likely to discuss Topic 7 (sustainable development) than countries not experiencing conflict. The most likely explanation is that these countries are more likely to devote a greater proportion of their annual statements to discussing issues around conflict and security than development. The fact that there is no effect of conflict on Topic 2 is interesting in this regard.",
"Finally, we consider regional effects in Figure FIGREF14 . We use the World Bank's classifications of regions: Latin America and the Caribbean (LCN), South Asia (SAS), Sub-Saharan Africa (SSA), Europe and Central Asia (ECS), Middle East and North Africa (MEA), East Asia and the Pacific (EAS), North America (NAC). The figure shows that states in South Asia, and Latin America and the Caribbean are likely to discuss Topic 2 the most. States in South Asia and East Asia and the Pacific discuss Topic 7 the most. The figure shows that countries in North America are likely to speak about Topic 7 least.",
"The analysis of discussion of international development in annual UN General Debate statements therefore uncovers two principle development topics: economic development and sustainable development. We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). However, we find that the extent to which countries discuss sustainable development (Topic 7) in their annual GD statements varies considerably according to these different structural factors. The results suggest that broadly-speaking we do not observe linear trends in the relationship between these country-specific factors and discussion of Topic 7. Instead, we find that there are significant fluctuations in the relationship between factors such as wealth, democracy, etc., and the extent to which these states discuss sustainable development in their GD statements. These relationships require further analysis and exploration."
],
[
"Despite decisions taken in international organisations having a huge impact on development initiatives and outcomes, we know relatively little about the agenda-setting process around the global governance of development. Using a novel approach that applies NLP methods to a new dataset of speeches in the UN General Debate, this paper has uncovered the main development topics discussed by governments in the UN, and the structural factors that influence the degree to which governments discuss international development. In doing so, the paper has shed some light on state preferences regarding the international development agenda in the UN. The paper more broadly demonstrates how text analytic approaches can help us to better understand different aspects of global governance."
]
],
"section_name": [
"Introduction",
"The UN General Debate and international development",
"Estimation of topic models",
"Topics in the UN General Debate",
"Explaining the rhetoric",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"9a8d3b251090979a6b4c6d04ed95386a881bbd1c"
],
"answer": [
{
"evidence": [
"Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the global governance of development BIBREF1 . More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.",
"The analysis of discussion of international development in annual UN General Debate statements therefore uncovers two principle development topics: economic development and sustainable development. We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). However, we find that the extent to which countries discuss sustainable development (Topic 7) in their annual GD statements varies considerably according to these different structural factors. The results suggest that broadly-speaking we do not observe linear trends in the relationship between these country-specific factors and discussion of Topic 7. Instead, we find that there are significant fluctuations in the relationship between factors such as wealth, democracy, etc., and the extent to which these states discuss sustainable development in their GD statements. These relationships require further analysis and exploration."
],
"extractive_spans": [
"wealth ",
"democracy ",
"population",
"levels of ODA",
"conflict "
],
"free_form_answer": "",
"highlighted_evidence": [
" More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.",
" We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"3976a227b981d398255fd5581bce0111300e6916",
"45b831b84ca84f2bd169ab070e005947b848d2e8"
],
"answer": [
{
"evidence": [
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . "
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements.",
"FLOAT SELECTED: Fig. 2. Topic quality. 20 highest probability words for the 16-topic model."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . ",
"FLOAT SELECTED: Fig. 2. Topic quality. 20 highest probability words for the 16-topic model."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"b5c02e8f62e47bd5c139f9741433bd8cec5ae9bb"
],
"answer": [
{
"evidence": [
"We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose semantic coherence measure, which is closely related to point-wise mutual information measure posited by BIBREF6 to evaluate topic quality. BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments.",
"Exclusivity scores for each topic follows BIBREF7 . Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. Cohesive and exclusive topics are more semantically useful. Following BIBREF8 we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence. The topic quality is usually evaluated by highest probability words, which is presented in Figure FIGREF4 ."
],
"extractive_spans": [],
"free_form_answer": " They focus on exclusivity and semantic coherence measures: Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. They select select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence.",
"highlighted_evidence": [
"We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures.",
"Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. ",
"Following BIBREF8 we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence. The topic quality is usually evaluated by highest probability words, which is presented in Figure FIGREF4 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"",
"",
""
],
"question": [
"What are the country-specific drivers of international development rhetoric?",
"Is the dataset multilingual?",
"How are the main international development topics that states raise identified?"
],
"question_id": [
"a2103e7fe613549a9db5e65008f33cf2ee0403bd",
"13b36644357870008d70e5601f394ec3c6c07048",
"e4a19b91b57c006a9086ae07f2d6d6471a8cf0ce"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Fig. 1. Optimal model search. Semantic coherence and exclusivity results for a model search from 3 to 50 topics. Models above the regression line provide a better trade off. Largest positive residual is a 16-topic model.",
"Fig. 2. Topic quality. 20 highest probability words for the 16-topic model.",
"Fig. 3. Topic content. 50 highest probability words for the 2nd and 7th topics.",
"Fig. 4. Comparing Topics 2 and 7 quality. 50 highest probability words contrasted between Topics 2 and 7.",
"Fig. 5. Network of topics. Correlation of topics.",
"Fig. 6. Effect of wealth. Main effect and 95% confidence interval.",
"Fig. 7. Effect of population. Main effect and 95% confidence interval.",
"Fig. 9. Effect of democracy. Main effect and 95% confidence interval.",
"Fig. 8. Effect of ODA. Main effect and 95% confidence interval.",
"Fig. 10. Effect of conflict. Point estimates and 95% confidence intervals.",
"Fig. 11. Regional effects. Point estimates and 95% confidence intervals."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"4-Figure5-1.png",
"4-Figure6-1.png",
"5-Figure7-1.png",
"5-Figure9-1.png",
"5-Figure8-1.png",
"6-Figure10-1.png",
"6-Figure11-1.png"
]
} | [
"How are the main international development topics that states raise identified?"
] | [
[
"1708.05873-Estimation of topic models-0",
"1708.05873-Estimation of topic models-1"
]
] | [
" They focus on exclusivity and semantic coherence measures: Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. They select select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence."
] | 29 |
1909.12140 | DisSim: A Discourse-Aware Syntactic Text Simplification Frameworkfor English and German | We introduce DisSim, a discourse-aware sentence splitting framework for English and German whose goal is to transform syntactically complex sentences into an intermediate representation that presents a simple and more regular structure which is easier to process for downstream semantic applications. For this purpose, we turn input sentences into a two-layered semantic hierarchy in the form of core facts and accompanying contexts, while identifying the rhetorical relations that hold between them. In that way, we preserve the coherence structure of the input and, hence, its interpretability for downstream tasks. | {
"paragraphs": [
[
"We developed a syntactic text simplification (TS) approach that can be used as a preprocessing step to facilitate and improve the performance of a wide range of artificial intelligence (AI) tasks, such as Machine Translation, Information Extraction (IE) or Text Summarization. Since shorter sentences are generally better processed by natural language processing (NLP) systems BIBREF0, the goal of our approach is to break down a complex source sentence into a set of minimal propositions, i.e. a sequence of sound, self-contained utterances, with each of them presenting a minimal semantic unit that cannot be further decomposed into meaningful propositions BIBREF1.",
"However, any sound and coherent text is not simply a loose arrangement of self-contained units, but rather a logical structure of utterances that are semantically connected BIBREF2. Consequently, when carrying out syntactic simplification operations without considering discourse implications, the rewriting may easily result in a disconnected sequence of simplified sentences that lack important contextual information, making the text harder to interpret. Thus, in order to preserve the coherence structure and, hence, the interpretability of the input, we developed a discourse-aware TS approach based on Rhetorical Structure Theory (RST) BIBREF3. It establishes a contextual hierarchy between the split components, and identifies and classifies the semantic relationship that holds between them. In that way, a complex source sentence is turned into a so-called discourse tree, consisting of a set of hierarchically ordered and semantically interconnected sentences that present a simplified syntax which is easier to process for downstream semantic applications and may support a faster generalization in machine learning tasks."
],
[
"We present DisSim, a discourse-aware sentence splitting approach for English and German that creates a semantic hierarchy of simplified sentences. It takes a sentence as input and performs a recursive transformation process that is based upon a small set of 35 hand-crafted grammar rules for the English version and 29 rules for the German approach. These patterns were heuristically determined in a comprehensive linguistic analysis and encode syntactic and lexical features that can be derived from a sentence's parse tree. Each rule specifies (1) how to split up and rephrase the input into structurally simplified sentences and (2) how to set up a semantic hierarchy between them. They are recursively applied on a given source sentence in a top-down fashion. When no more rule matches, the algorithm stops and returns the generated discourse tree."
],
[
"In a first step, source sentences that present a complex linguistic form are turned into clean, compact structures by decomposing clausal and phrasal components. For this purpose, the transformation rules encode both the splitting points and rephrasing procedure for reconstructing proper sentences."
],
[
"Each split will create two or more sentences with a simplified syntax. To establish a semantic hierarchy between them, two subtasks are carried out:"
],
[
"First, we set up a contextual hierarchy between the split sentences by connecting them with information about their hierarchical level, similar to the concept of nuclearity in RST. For this purpose, we distinguish core sentences (nuclei), which carry the key information of the input, from accompanying contextual sentences (satellites) that disclose additional information about it. To differentiate between those two types of constituents, the transformation patterns encode a simple syntax-based approach where subordinate clauses/phrases are classified as context sentences, while superordinate as well as coordinate clauses/phrases are labelled as core."
],
[
"Second, we aim to restore the semantic relationship between the disembedded components. For this purpose, we identify and classify the rhetorical relations that hold between the simplified sentences, making use of both syntactic features, which are derived from the input's parse tree structure, and lexical features in the form of cue phrases. Following the work of Taboada13, they are mapped to a predefined list of rhetorical cue words to infer the type of rhetorical relation."
],
[
"DisSim can be either used as a Java API, imported as a Maven dependency, or as a service which we provide through a command line interface or a REST-like web service that can be deployed via docker. It takes as input NL text in the form of a single sentence. Alternatively, a file containing a sequence of sentences can be loaded. The result of the transformation process is either written to the console or stored in a specified output file in JSON format. We also provide a browser-based user interface, where the user can directly type in sentences to be processed (see Figure FIGREF1)."
],
[
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress."
],
[
"An extrinsic evaluation was carried out on the task of Open IE BIBREF7. It revealed that when applying DisSim as a preprocessing step, the performance of state-of-the-art Open IE systems can be improved by up to 346% in precision and 52% in recall, i.e. leading to a lower information loss and a higher accuracy of the extracted relations. For details, the interested reader may refer to niklaus-etal-2019-transforming.",
"Moreover, most current Open IE approaches output only a loose arrangement of extracted tuples that are hard to interpret as they ignore the context under which a proposition is complete and correct and thus lack the expressiveness needed for a proper interpretation of complex assertions BIBREF8. As illustrated in Figure FIGREF9, with the help of the semantic hierarchy generated by our discourse-aware sentence splitting approach the output of Open IE systems can be easily enriched with contextual information that allows to restore the semantic relationship between a set of propositions and, hence, preserve their interpretability in downstream tasks."
],
[
"We developed and implemented a discourse-aware syntactic TS approach that recursively splits and rephrases complex English or German sentences into a semantic hierarchy of simplified sentences. The resulting lightweight semantic representation can be used to facilitate and improve a variety of AI tasks."
]
],
"section_name": [
"Introduction",
"System Description",
"System Description ::: Split into Minimal Propositions",
"System Description ::: Establish a Semantic Hierarchy",
"System Description ::: Establish a Semantic Hierarchy ::: Constituency Type Classification.",
"System Description ::: Establish a Semantic Hierarchy ::: Rhetorical Relation Identification.",
"Usage",
"Experiments",
"Application in Downstream Tasks",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"4083f879cdc02cfa51c88a45ce16e30707a8a63e",
"d12ac9d62a47d355ba1fdd0799c58e59877d5eb8"
],
"answer": [
{
"evidence": [
"An extrinsic evaluation was carried out on the task of Open IE BIBREF7. It revealed that when applying DisSim as a preprocessing step, the performance of state-of-the-art Open IE systems can be improved by up to 346% in precision and 52% in recall, i.e. leading to a lower information loss and a higher accuracy of the extracted relations. For details, the interested reader may refer to niklaus-etal-2019-transforming."
],
"extractive_spans": [],
"free_form_answer": "Yes, Open IE",
"highlighted_evidence": [
"An extrinsic evaluation was carried out on the task of Open IE BIBREF7."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Moreover, most current Open IE approaches output only a loose arrangement of extracted tuples that are hard to interpret as they ignore the context under which a proposition is complete and correct and thus lack the expressiveness needed for a proper interpretation of complex assertions BIBREF8. As illustrated in Figure FIGREF9, with the help of the semantic hierarchy generated by our discourse-aware sentence splitting approach the output of Open IE systems can be easily enriched with contextual information that allows to restore the semantic relationship between a set of propositions and, hence, preserve their interpretability in downstream tasks."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As illustrated in Figure FIGREF9, with the help of the semantic hierarchy generated by our discourse-aware sentence splitting approach the output of Open IE systems can be easily enriched with contextual information that allows to restore the semantic relationship between a set of propositions and, hence, preserve their interpretability in downstream tasks."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"c3e99448c2420d3cb04bd3efce32a638d0e62a31"
],
"answer": [
{
"evidence": [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress."
],
"extractive_spans": [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains",
"The evaluation of the German version is in progress."
],
"free_form_answer": "",
"highlighted_evidence": [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains ",
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains ",
"The evaluation of the German version is in progress."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"f819d17832ad50d4b30bba15edae222e7cf068c1"
],
"answer": [
{
"evidence": [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress."
],
"extractive_spans": [],
"free_form_answer": "the English version is evaluated. The German version evaluation is in progress ",
"highlighted_evidence": [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Is the semantic hierarchy representation used for any task?",
"What are the corpora used for the task?",
"Is the model evaluated?"
],
"question_id": [
"f8281eb49be3e8ea0af735ad3bec955a5dedf5b3",
"a5ee9b40a90a6deb154803bef0c71c2628acb571",
"e286860c41a4f704a3a08e45183cb8b14fa2ad2f"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"German",
"German",
"German"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: DISSIM’s browser-based user interface. The simplified output is displayed in the form of a directed graph where the split sentences are connected by arrows whose labels denote the semantic relationship that holds between a pair of simplified sentences and whose direction indicates their contextual hierarchy. The colors signal different context layers. In that way, a semantic hierarchy of minimal, self-contained propositions is established.",
"Figure 2: Comparison of the propositions extracted by Supervised-OIE (Stanovsky et al., 2018) with (5-11) and without (1-4) using our discourse-aware TS approach as a preprocessing step."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png"
]
} | [
"Is the semantic hierarchy representation used for any task?",
"Is the model evaluated?"
] | [
[
"1909.12140-Application in Downstream Tasks-0",
"1909.12140-Application in Downstream Tasks-1"
],
[
"1909.12140-Experiments-0"
]
] | [
"Yes, Open IE",
"the English version is evaluated. The German version evaluation is in progress "
] | 33 |
1909.08859 | Procedural Reasoning Networks for Understanding Multimodal Procedures | This paper addresses the problem of comprehending procedural commonsense knowledge. This is a challenging task as it requires identifying key entities, keeping track of their state changes, and understanding temporal and causal relations. Contrary to most of the previous work, in this study, we do not rely on strong inductive bias and explore the question of how multimodality can be exploited to provide a complementary semantic signal. Towards this end, we introduce a new entity-aware neural comprehension model augmented with external relational memory units. Our model learns to dynamically update entity states in relation to each other while reading the text instructions. Our experimental analysis on the visual reasoning tasks in the recently proposed RecipeQA dataset reveals that our approach improves the accuracy of the previously reported models by a large margin. Moreover, we find that our model learns effective dynamic representations of entities even though we do not use any supervision at the level of entity states. | {
"paragraphs": [
[
"A great deal of commonsense knowledge about the world we live is procedural in nature and involves steps that show ways to achieve specific goals. Understanding and reasoning about procedural texts (e.g. cooking recipes, how-to guides, scientific processes) are very hard for machines as it demands modeling the intrinsic dynamics of the procedures BIBREF0, BIBREF1, BIBREF2. That is, one must be aware of the entities present in the text, infer relations among them and even anticipate changes in the states of the entities after each action. For example, consider the cheeseburger recipe presented in Fig. FIGREF2. The instruction “salt and pepper each patty and cook for 2 to 3 minutes on the first side” in Step 5 entails mixing three basic ingredients, the ground beef, salt and pepper, together and then applying heat to the mix, which in turn causes chemical changes that alter both the appearance and the taste. From a natural language understanding perspective, the main difficulty arises when a model sees the word patty again at a later stage of the recipe. It still corresponds to the same entity, but its form is totally different.",
"Over the past few years, many new datasets and approaches have been proposed that address this inherently hard problem BIBREF0, BIBREF1, BIBREF3, BIBREF4. To mitigate the aforementioned challenges, the existing works rely mostly on heavy supervision and focus on predicting the individual state changes of entities at each step. Although these models can accurately learn to make local predictions, they may lack global consistency BIBREF3, BIBREF4, not to mention that building such annotated corpora is very labor-intensive. In this work, we take a different direction and explore the problem from a multimodal standpoint. Our basic motivation, as illustrated in Fig. FIGREF2, is that accompanying images provide complementary cues about causal effects and state changes. For instance, it is quite easy to distinguish raw meat from cooked one in visual domain.",
"In particular, we take advantage of recently proposed RecipeQA dataset BIBREF2, a dataset for multimodal comprehension of cooking recipes, and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures. To this end, inspired from BIBREF5, we propose Procedural Reasoning Networks (PRN) that incorporates entities into the comprehension process and allows to keep track of entities, understand their interactions and accordingly update their states across time. We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA, which test understanding causal and temporal relations from images and text. We further show that the dynamic entity representations can capture semantics of the state information in the corresponding steps."
],
[
"In our study, we particularly focus on the visual reasoning tasks of RecipeQA, namely visual cloze, visual coherence, and visual ordering tasks, each of which examines a different reasoning skill. We briefly describe these tasks below.",
"Visual Cloze. In the visual cloze task, the question is formed by a sequence of four images from consecutive steps of a recipe where one of them is replaced by a placeholder. A model should select the correct one from a multiple-choice list of four answer candidates to fill in the missing piece. In that regard, the task inherently requires aligning visual and textual information and understanding temporal relationships between the cooking actions and the entities.",
"Visual Coherence. The visual coherence task tests the ability to identify the image within a sequence of four images that is inconsistent with the text instructions of a cooking recipe. To succeed in this task, a model should have a clear understanding of the procedure described in the recipe and at the same time connect language and vision.",
"Visual Ordering. The visual ordering task is about grasping the temporal flow of visual events with the help of the given recipe text. The questions show a set of four images from the recipe and the task is to sort jumbled images into the correct order. Here, a model needs to infer the temporal relations between the images and align them with the recipe steps."
],
[
"In the following, we explain our Procedural Reasoning Networks model. Its architecture is based on a bi-directional attention flow (BiDAF) model BIBREF6, but also equipped with an explicit reasoning module that acts on entity-specific relational memory units. Fig. FIGREF4 shows an overview of the network architecture. It consists of five main modules: An input module, an attention module, a reasoning module, a modeling module, and an output module. Note that the question answering tasks we consider here are multimodal in that while the context is a procedural text, the question and the multiple choice answers are composed of images.",
"Input Module extracts vector representations of inputs at different levels of granularity by using several different encoders.",
"Reasoning Module scans the procedural text and tracks the states of the entities and their relations through a recurrent relational memory core unit BIBREF5.",
"Attention Module computes context-aware query vectors and query-aware context vectors as well as query-aware memory vectors.",
"Modeling Module employs two multi-layered RNNs to encode previous layers outputs.",
"Output Module scores a candidate answer from the given multiple-choice list.",
"At a high level, as the model is reading the cooking recipe, it continually updates the internal memory representations of the entities (ingredients) based on the content of each step – it keeps track of changes in the states of the entities, providing an entity-centric summary of the recipe. The response to a question and a possible answer depends on the representation of the recipe text as well as the last states of the entities. All this happens in a series of implicit relational reasoning steps and there is no need for explicitly encoding the state in terms of a predefined vocabulary."
],
[
"Let the triple $(\\mathbf {R},\\mathbf {Q},\\mathbf {A})$ be a sample input. Here, $\\mathbf {R}$ denotes the input recipe which contains textual instructions composed of $N$ words in total. $\\mathbf {Q}$ represents the question that consists of a sequence of $M$ images. $\\mathbf {A}$ denotes an answer that is either a single image or a series of $L$ images depending on the reasoning task. In particular, for the visual cloze and the visual coherence type questions, the answer contains a single image ($L=1$) and for the visual ordering task, it includes a sequence.",
"We encode the input recipe $\\mathbf {R}$ at character, word, and step levels. Character-level embedding layer uses a convolutional neural network, namely CharCNN model by BIBREF7, which outputs character level embeddings for each word and alleviates the issue of out-of-vocabulary (OOV) words. In word embedding layer, we use a pretrained GloVe model BIBREF8 and extract word-level embeddings. The concatenation of the character and the word embeddings are then fed to a two-layer highway network BIBREF10 to obtain a contextual embedding for each word in the recipe. This results in the matrix $\\mathbf {R}^{\\prime } \\in \\mathbb {R}^{2d \\times N}$.",
"On top of these layers, we have another layer that encodes the steps of the recipe in an individual manner. Specifically, we obtain a step-level contextual embedding of the input recipe containing $T$ steps as $\\mathcal {S}=(\\mathbf {s}_1,\\mathbf {s}_2,\\dots ,\\mathbf {s}_T)$ where $\\mathbf {s}_i$ represents the final state of a BiLSTM encoding the $i$-th step of the recipe obtained from the character and word-level embeddings of the tokens exist in the corresponding step.",
"We represent both the question $\\mathbf {Q}$ and the answer $\\mathbf {A}$ in terms of visual embeddings. Here, we employ a pretrained ResNet-50 model BIBREF11 trained on ImageNet dataset BIBREF12 and represent each image as a real-valued 2048-d vector using features from the penultimate average-pool layer. Then these embeddings are passed first to a multilayer perceptron (MLP) and then its outputs are fed to a BiLSTM. We then form a matrix $\\mathbf {Q}^{\\prime } \\in \\mathbb {R}^{2d \\times M}$ for the question by concatenating the cell states of the BiLSTM. For the visual ordering task, to represent the sequence of images in the answer with a single vector, we additionally use a BiLSTM and define the answering embedding by the summation of the cell states of the BiLSTM. Finally, for all tasks, these computations produce answer embeddings denoted by $\\mathbf {a} \\in \\mathbb {R}^{2d \\times 1}$."
],
[
"As mentioned before, comprehending a cooking recipe is mostly about entities (basic ingredients) and actions (cooking activities) described in the recipe instructions. Each action leads to changes in the states of the entities, which usually affects their visual characteristics. A change rarely occurs in isolation; in most cases, the action affects multiple entities at once. Hence, in our reasoning module, we have an explicit memory component implemented with relational memory units BIBREF5. This helps us to keep track of the entities, their state changes and their relations in relation to each other over the course of the recipe (see Fig. FIGREF14). As we will examine in more detail in Section SECREF4, it also greatly improves the interpretability of model outputs.",
"Specifically, we set up the memory with a memory matrix $\\mathbf {E} \\in \\mathbb {R}^{d_E \\times K}$ by extracting $K$ entities (ingredients) from the first step of the recipe. We initialize each memory cell $\\mathbf {e}_i$ representing a specific entity by its CharCNN and pre-trained GloVe embeddings. From now on, we will use the terms memory cells and entities interchangeably throughout the paper. Since the input recipe is given in the form of a procedural text decomposed into a number of steps, we update the memory cells after each step, reflecting the state changes happened on the entities. This update procedure is modelled via a relational recurrent neural network (R-RNN), recently proposed by BIBREF5. It is built on a 2-dimensional LSTM model whose matrix of cell states represent our memory matrix $\\mathbf {E}$. Here, each row $i$ of the matrix $\\mathbf {E}$ refers to a specific entity $\\mathbf {e}_i$ and is updated after each recipe step $t$ as follows:",
"where $\\mathbf {s}_{t}$ denotes the embedding of recipe step $t$ and $\\mathbf {\\phi }_{i,t}=(\\mathbf {h}_{i,t},\\mathbf {e}_{i,t})$ is the cell state of the R-RNN at step $t$ with $\\mathbf {h}_{i,t}$ and $\\mathbf {e}_{i,t}$ being the $i$-th row of the hidden state of the R-RNN and the dynamic representation of entity $\\mathbf {e}_{i}$ at the step $t$, respectively. The R-RNN model exploits a multi-headed self-attention mechanism BIBREF13 that allows memory cells to interact with each other and attend multiple locations simultaneously during the update phase.",
"In Fig. FIGREF14, we illustrate how this interaction takes place in our relational memory module by considering a sample cooking recipe and by presenting how the attention matrix changes throughout the recipe. In particular, the attention matrix at a specific time shows the attention flow from one entity (memory cell) to another along with the attention weights to the corresponding recipe step (offset column). The color intensity shows the magnitude of the attention weights. As can be seen from the figure, the internal representations of the entities are actively updated at each step. Moreover, as argued in BIBREF5, this can be interpreted as a form of relational reasoning as each update on a specific memory cell is operated in relation to others. Here, we should note that it is often difficult to make sense of these attention weights. However, we observe that the attention matrix changes very gradually near the completion of the recipe."
],
[
"Attention module is in charge of linking the question with the recipe text and the entities present in the recipe. It takes the matrices $\\mathbf {Q^{\\prime }}$ and $\\mathbf {R}^{\\prime }$ from the input module, and $\\mathbf {E}$ from the reasoning module and constructs the question-aware recipe representation $\\mathbf {G}$ and the question-aware entity representation $\\mathbf {Y}$. Following the attention flow mechanism described in BIBREF14, we specifically calculate attentions in four different directions: (1) from question to recipe, (2) from recipe to question, (3) from question to entities, and (4) from entities to question.",
"The first two of these attentions require computing a shared affinity matrix $\\mathbf {S}^R \\in \\mathbb {R}^{N \\times M}$ with $\\mathbf {S}^R_{i,j}$ indicating the similarity between $i$-th recipe word and $j$-th image in the question estimated by",
"where $\\mathbf {w}^{\\top }_{R}$ is a trainable weight vector, $\\circ $ and $[;]$ denote elementwise multiplication and concatenation operations, respectively.",
"Recipe-to-question attention determines the images within the question that is most relevant to each word of the recipe. Let $\\mathbf {\\tilde{Q}} \\in \\mathbb {R}^{2d \\times N}$ represent the recipe-to-question attention matrix with its $i$-th column being given by $ \\mathbf {\\tilde{Q}}_i=\\sum _j \\mathbf {a}_{ij}\\mathbf {Q}^{\\prime }_j$ where the attention weight is computed by $\\mathbf {a}_i=\\operatorname{softmax}(\\mathbf {S}^R_{i}) \\in \\mathbb {R}^M$.",
"Question-to-recipe attention signifies the words within the recipe that have the closest similarity to each image in the question, and construct an attended recipe vector given by $ \\tilde{\\mathbf {r}}=\\sum _{i}\\mathbf {b}_i\\mathbf {R}^{\\prime }_i$ with the attention weight is calculated by $\\mathbf {b}=\\operatorname{softmax}(\\operatorname{max}_{\\mathit {col}}(\\mathbf {S}^R)) \\in \\mathbb {R}^{N}$ where $\\operatorname{max}_{\\mathit {col}}$ denotes the maximum function across the column. The question-to-recipe matrix is then obtained by replicating $\\tilde{\\mathbf {r}}$ $N$ times across the column, giving $\\tilde{\\mathbf {R}} \\in \\mathbb {R}^{2d \\times N}$.",
"Then, we construct the question aware representation of the input recipe, $\\mathbf {G}$, with its $i$-th column $\\mathbf {G}_i \\in \\mathbb {R}^{8d \\times N}$ denoting the final embedding of $i$-th word given by",
"Attentions from question to entities, and from entities to question are computed in a way similar to the ones described above. The only difference is that it uses a different shared affinity matrix to be computed between the memory encoding entities $\\mathbf {E}$ and the question $\\mathbf {Q}^{\\prime }$. These attentions are then used to construct the question aware representation of entities, denoted by $\\mathbf {Y}$, that links and integrates the images in the question and the entities in the input recipe."
],
[
"Modeling module takes the question-aware representations of the recipe $\\mathbf {G}$ and the entities $\\mathbf {Y}$, and forms their combined vector representation. For this purpose, we first use a two-layer BiLSTM to read the question-aware recipe $\\mathbf {G}$ and to encode the interactions among the words conditioned on the question. For each direction of BiLSTM , we use its hidden state after reading the last token as its output. In the end, we obtain a vector embedding $\\mathbf {c} \\in \\mathbb {R}^{2d \\times 1}$. Similarly, we employ a second BiLSTM, this time, over the entities $\\mathbf {Y}$, which results in another vector embedding $\\mathbf {f} \\in \\mathbb {R}^{2d_E \\times 1}$. Finally, these vector representations are concatenated and then projected to a fixed size representation using $\\mathbf {o}=\\varphi _o(\\left[\\mathbf {c}; \\mathbf {f}\\right]) \\in \\mathbb {R}^{2d \\times 1}$ where $\\varphi _o$ is a multilayer perceptron with $\\operatorname{tanh}$ activation function."
],
[
"The output module takes the output of the modeling module, encoding vector embeddings of the question-aware recipe and the entities $\\mathbf {Y}$, and the embedding of the answer $\\mathbf {A}$, and returns a similarity score which is used while determining the correct answer. Among all the candidate answer, the one having the highest similarity score is chosen as the correct answer. To train our proposed procedural reasoning network, we employ a hinge ranking loss BIBREF15, similar to the one used in BIBREF2, given below.",
"where $\\gamma $ is the margin parameter, $\\mathbf {a}_+$ and $\\mathbf {a}_{-}$ are the correct and the incorrect answers, respectively."
],
[
"In this section, we describe our experimental setup and then analyze the results of the proposed Procedural Reasoning Networks (PRN) model."
],
[
"Given a recipe, we automatically extract the entities from the initial step of a recipe by using a dictionary of ingredients. While determining the ingredients, we exploit Recipe1M BIBREF16 and Kaggle What’s Cooking Recipes BIBREF17 datasets, and form our dictionary using the most commonly used ingredients in the training set of RecipeQA. For the cases when no entity can be extracted from the recipe automatically (20 recipes in total), we manually annotate those recipes with the related entities."
],
[
"In our experiments, we separately trained models on each task, as well as we investigated multi-task learning where a single model is trained to solve all these tasks at once. In total, the PRN architecture consists of $\\sim $12M trainable parameters. We implemented our models in PyTorch BIBREF18 using AllenNLP library BIBREF6. We used Adam optimizer with a learning rate of 1e-4 with an early stopping criteria with the patience set to 10 indicating that the training procedure ends after 10 iterations if the performance would not improve. We considered a batch size of 32 due to our hardware constraints. In the multi-task setting, batches are sampled round-robin from all tasks, where each batch is solely composed of examples from one task. We performed our experiments on a system containing four NVIDIA GTX-1080Ti GPUs, and training a single model took around 2 hours. We employed the same hyperparameters for all the baseline systems. We plan to share our code and model implementation after the review process."
],
[
"We compare our model with several baseline models as described below. We note that the results of the first two are previously reported in BIBREF2.",
"Hasty Student BIBREF2 is a heuristics-based simple model which ignores the recipe and gives an answer by examining only the question and the answer set using distances in the visual feature space.",
"Impatient Reader BIBREF19 is a simple neural model that takes its name from the fact that it repeatedly computes attention over the recipe after observing each image in the query.",
"BiDAF BIBREF14 is a strong reading comprehension model that employs a bi-directional attention flow mechanism to obtain a question-aware representation and bases its predictions on this representation. Originally, it is a span-selection model from the input context. Here, we adapt it to work in a multimodal setting and answer multiple choice questions instead.",
"BiDAF w/ static memory is an extended version of the BiDAF model which resembles our proposed PRN model in that it includes a memory unit for the entities. However, it does not make any updates on the memory cells. That is, it uses the static entity embeeddings initialized with GloVe word vectors. We propose this baseline to test the significance of the use of relational memory updates."
],
[
"Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In single-task training setting, PRN gives state-of-the-art results compared to other neural models. Moreover, it achieves the best performance on average. These results demonstrate the importance of having a dynamic memory and keeping track of entities extracted from the recipe. In multi-task training setting where a single model is trained to solve all the tasks at once, PRN and BIDAF w/ static memory perform comparably and give much better results than BIDAF. Note that the model performances in the multi-task training setting are worse than single-task performances. We believe that this is due to the nature of the tasks that some are more difficult than the others. We think that the performance could be improved by employing a carefully selected curriculum strategy BIBREF20.",
"In Fig. FIGREF28, we illustrate the entity embeddings space by projecting the learned embeddings from the step-by-step memory snapshots through time with t-SNE to 3-d space from 200-d vector space. Color codes denote the categories of the cooking recipes. As can be seen, these step-aware embeddings show clear clustering of these categories. Moreover, within each cluster, the entities are grouped together in terms of their state characteristics. For instance, in the zoomed parts of the figure, chopped and sliced, or stirred and whisked entities are placed close to each other.",
"Fig. FIGREF30 demonstrates the entity arithmetics using the learned embeddings from each entity step. Here, we show that the learned embedding from the memory snapshots can effectively capture the contextual information about the entities at each time point in the corresponding step while taking into account of the recipe data. This basic arithmetic operation suggests that the proposed model can successfully capture the semantics of each entity's state in the corresponding step."
],
[
"In recent years, tracking entities and their state changes have been explored in the literature from a variety of perspectives. In an early work, BIBREF21 proposed a dynamic memory based network which updates entity states using a gating mechanism while reading the text. BIBREF22 presented a more structured memory augmented model which employs memory slots for representing both entities and their relations. BIBREF23 suggested a conceptually similar model in which the pairwise relations between attended memories are utilized to encode the world state. The main difference between our approach and these works is that by utilizing relational memory core units we also allow memories to interact with each other during each update.",
"BIBREF24 showed that similar ideas can be used to compile supporting memories in tracking dialogue state. BIBREF25 has shown the importance of coreference signals for reading comprehension task. More recently, BIBREF26 introduced a specialized recurrent layer which uses coreference annotations for improving reading comprehension tasks. On language modeling task, BIBREF27 proposed a language model which can explicitly incorporate entities while dynamically updating their representations for a variety of tasks such as language modeling, coreference resolution, and entity prediction.",
"Our work builds upon and contributes to the growing literature on tracking states changes in procedural text. BIBREF0 presented a neural model that can learn to explicitly predict state changes of ingredients at different points in a cooking recipe. BIBREF1 proposed another entity-aware model to track entity states in scientific processes. BIBREF3 demonstrated that the prediction quality can be boosted by including hard and soft constraints to eliminate unlikely or favor probable state changes. In a follow-up work, BIBREF4 exploited the notion of label consistency in training to enforce similar predictions in similar procedural contexts. BIBREF28 proposed a model that dynamically constructs a knowledge graph while reading the procedural text to track the ever-changing entities states. As discussed in the introduction, however, these previous methods use a strong inductive bias and assume that state labels are present during training. In our study, we deliberately focus on unlabeled procedural data and ask the question: Can multimodality help to identify and provide insights to understanding state changes."
],
[
"We have presented a new neural architecture called Procedural Reasoning Networks (PRN) for multimodal understanding of step-by-step instructions. Our proposed model is based on the successful BiDAF framework but also equipped with an explicit memory unit that provides an implicit mechanism to keep track of the changes in the states of the entities over the course of the procedure. Our experimental analysis on visual reasoning tasks in the RecipeQA dataset shows that the model significantly improves the results of the previous models, indicating that it better understands the procedural text and the accompanying images. Additionally, we carefully analyze our results and find that our approach learns meaningful dynamic representations of entities without any entity-level supervision. Although we achieve state-of-the-art results on RecipeQA, clearly there is still room for improvement compared to human performance. We also believe that the PRN architecture will be of value to other visual and textual sequential reasoning tasks."
],
[
"We thank the anonymous reviewers and area chairs for their invaluable feedback. This work was supported by TUBA GEBIP fellowship awarded to E. Erdem; and by the MMVC project via an Institutional Links grant (Project No. 217E054) under the Newton-Katip Çelebi Fund partnership funded by the Scientific and Technological Research Council of Turkey (TUBITAK) and the British Council. We also thank NVIDIA Corporation for the donation of GPUs used in this research."
]
],
"section_name": [
"Introduction",
"Visual Reasoning in RecipeQA",
"Procedural Reasoning Networks",
"Procedural Reasoning Networks ::: Input Module",
"Procedural Reasoning Networks ::: Reasoning Module",
"Procedural Reasoning Networks ::: Attention Module",
"Procedural Reasoning Networks ::: Modeling Module",
"Procedural Reasoning Networks ::: Output Module",
"Experiments",
"Experiments ::: Entity Extraction",
"Experiments ::: Training Details",
"Experiments ::: Baselines",
"Experiments ::: Results",
"Related Work",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"4e5d6e5c9fcd614bd589bc0ea42cc2997bcf28eb",
"9a39d77579baa6cde733cb84ad043de21ec9d0d5"
],
"answer": [
{
"evidence": [
"In the following, we explain our Procedural Reasoning Networks model. Its architecture is based on a bi-directional attention flow (BiDAF) model BIBREF6, but also equipped with an explicit reasoning module that acts on entity-specific relational memory units. Fig. FIGREF4 shows an overview of the network architecture. It consists of five main modules: An input module, an attention module, a reasoning module, a modeling module, and an output module. Note that the question answering tasks we consider here are multimodal in that while the context is a procedural text, the question and the multiple choice answers are composed of images."
],
"extractive_spans": [
"context is a procedural text, the question and the multiple choice answers are composed of images"
],
"free_form_answer": "",
"highlighted_evidence": [
"Note that the question answering tasks we consider here are multimodal in that while the context is a procedural text, the question and the multiple choice answers are composed of images."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In particular, we take advantage of recently proposed RecipeQA dataset BIBREF2, a dataset for multimodal comprehension of cooking recipes, and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures. To this end, inspired from BIBREF5, we propose Procedural Reasoning Networks (PRN) that incorporates entities into the comprehension process and allows to keep track of entities, understand their interactions and accordingly update their states across time. We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA, which test understanding causal and temporal relations from images and text. We further show that the dynamic entity representations can capture semantics of the state information in the corresponding steps."
],
"extractive_spans": [
"images and text"
],
"free_form_answer": "",
"highlighted_evidence": [
"In particular, we take advantage of recently proposed RecipeQA dataset BIBREF2, a dataset for multimodal comprehension of cooking recipes, and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures. ",
"We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA, which test understanding causal and temporal relations from images and text. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"4c7a5de9be2822f80cb4ff3b2b5e2467f53c3668"
],
"answer": [
{
"evidence": [
"We compare our model with several baseline models as described below. We note that the results of the first two are previously reported in BIBREF2.",
"Hasty Student BIBREF2 is a heuristics-based simple model which ignores the recipe and gives an answer by examining only the question and the answer set using distances in the visual feature space.",
"Impatient Reader BIBREF19 is a simple neural model that takes its name from the fact that it repeatedly computes attention over the recipe after observing each image in the query.",
"BiDAF BIBREF14 is a strong reading comprehension model that employs a bi-directional attention flow mechanism to obtain a question-aware representation and bases its predictions on this representation. Originally, it is a span-selection model from the input context. Here, we adapt it to work in a multimodal setting and answer multiple choice questions instead.",
"BiDAF w/ static memory is an extended version of the BiDAF model which resembles our proposed PRN model in that it includes a memory unit for the entities. However, it does not make any updates on the memory cells. That is, it uses the static entity embeeddings initialized with GloVe word vectors. We propose this baseline to test the significance of the use of relational memory updates."
],
"extractive_spans": [
"Hasty Student",
"Impatient Reader",
"BiDAF",
"BiDAF w/ static memory"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our model with several baseline models as described below. We note that the results of the first two are previously reported in BIBREF2.\n\nHasty Student BIBREF2 is a heuristics-based simple model which ignores the recipe and gives an answer by examining only the question and the answer set using distances in the visual feature space.\n\nImpatient Reader BIBREF19 is a simple neural model that takes its name from the fact that it repeatedly computes attention over the recipe after observing each image in the query.\n\nBiDAF BIBREF14 is a strong reading comprehension model that employs a bi-directional attention flow mechanism to obtain a question-aware representation and bases its predictions on this representation. Originally, it is a span-selection model from the input context.",
"BiDAF w/ static memory is an extended version of the BiDAF model which resembles our proposed PRN model in that it includes a memory unit for the entities."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ec1378e356486a4ae207f3c0cd9adc9dab841863"
],
"answer": [
{
"evidence": [
"Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In single-task training setting, PRN gives state-of-the-art results compared to other neural models. Moreover, it achieves the best performance on average. These results demonstrate the importance of having a dynamic memory and keeping track of entities extracted from the recipe. In multi-task training setting where a single model is trained to solve all the tasks at once, PRN and BIDAF w/ static memory perform comparably and give much better results than BIDAF. Note that the model performances in the multi-task training setting are worse than single-task performances. We believe that this is due to the nature of the tasks that some are more difficult than the others. We think that the performance could be improved by employing a carefully selected curriculum strategy BIBREF20.",
"FLOAT SELECTED: Table 1: Quantitative comparison of the proposed PRN model against the baselines."
],
"extractive_spans": [],
"free_form_answer": "Average accuracy of proposed model vs best prevous result:\nSingle-task Training: 57.57 vs 55.06\nMulti-task Training: 50.17 vs 50.59",
"highlighted_evidence": [
"Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In single-task training setting, PRN gives state-of-the-art results compared to other neural models.",
"In multi-task training setting where a single model is trained to solve all the tasks at once, PRN and BIDAF w/ static memory perform comparably and give much better results than BIDAF.",
"FLOAT SELECTED: Table 1: Quantitative comparison of the proposed PRN model against the baselines."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What multimodality is available in the dataset?",
"What are previously reported models?",
"How better is accuracy of new model compared to previously reported models?"
],
"question_id": [
"a883bb41449794e0a63b716d9766faea034eb359",
"5d83b073635f5fd8cd1bdb1895d3f13406583fbd",
"171ebfdc9b3a98e4cdee8f8715003285caeb2f39"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A recipe for preparing a cheeseburger (adapted from the cooking instructions available at https: //www.instructables.com/id/In-N-Out-Double-Double-Cheeseburger-Copycat). Each basic ingredient (entity) is highlighted by a different color in the text and with bounding boxes on the accompanying images. Over the course of the recipe instructions, ingredients interact with each other, change their states by each cooking action (underlined in the text), which in turn alter the visual and physical properties of entities. For instance, the tomato changes it form by being sliced up and then stacked on a hamburger bun.",
"Figure 2: An illustration of our Procedural Reasoning Networks (PRN). For a sample question from visual coherence task in RecipeQA, while reading the cooking recipe, the model constantly performs updates on the representations of the entities (ingredients) after each step and makes use of their representations along with the whole recipe when it scores a candidate answer. Please refer to the main text for more details.",
"Figure 3: Sample visualizations of the self-attention weights demonstrating both the interactions among the ingredients and between the ingredients and the textual instructions throughout the steps of a sample cooking recipe from RecipeQA (darker colors imply higher attention weights). The attention maps do not change much after the third step as the steps after that mostly provide some redundant information about the completed recipe.",
"Figure 4: t-SNE visualizations of learned embeddings from each memory snapshot mapping to each entity and their corresponding states from each step for visual cloze task.",
"Table 1: Quantitative comparison of the proposed PRN model against the baselines.",
"Figure 5: Step-aware entity representations can be used to discover the changes occurred in the states of the ingredients between two different recipe steps. The difference vector between two entities can then be added to other entities to find their next states. For instance, in the first example, the difference vector encodes the chopping action done on onions. In the second example, it encodes the pouring action done on the water. When these vectors are added to the representations of raw tomatoes and milk, the three most likely next states capture the semantics of state changes in an accurate manner."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Figure3-1.png",
"7-Figure4-1.png",
"7-Table1-1.png",
"8-Figure5-1.png"
]
} | [
"How better is accuracy of new model compared to previously reported models?"
] | [
[
"1909.08859-Experiments ::: Results-0",
"1909.08859-7-Table1-1.png"
]
] | [
"Average accuracy of proposed model vs best prevous result:\nSingle-task Training: 57.57 vs 55.06\nMulti-task Training: 50.17 vs 50.59"
] | 35 |
1908.08419 | Active Learning for Chinese Word Segmentation in Medical Text | Electronic health records (EHRs) stored in hospital information systems completely reflect the patients' diagnosis and treatment processes, which are essential to clinical data mining. Chinese word segmentation (CWS) is a fundamental and important task for Chinese natural language processing. Currently, most state-of-the-art CWS methods greatly depend on large-scale manually-annotated data, which is very time-consuming and expensive to produce, especially for annotation in the medical field. In this paper, we present an active learning method for CWS in medical text. To effectively utilize the complete segmentation history, a new scoring model for the sampling strategy is proposed, which combines information entropy with a neural network. Besides, to capture interactions between adjacent characters, K-means clustering features are additionally added to the word segmenter. We experimentally evaluate our proposed CWS method on medical text. Experimental results based on EHRs collected from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine show that our proposed method outperforms other reference methods and can effectively save the cost of manual annotation. | {
"paragraphs": [
[
"Electronic health records (EHRs) systematically collect patients' clinical information, such as health profiles, histories of present illness, past medical histories, examination results and treatment plans BIBREF0 . By analyzing EHRs, many useful information, closely related to patients, can be discovered BIBREF1 . Since Chinese EHRs are recorded without explicit word delimiters (e.g., “UTF8gkai糖尿病酮症酸中毒” (diabetic ketoacidosis)), Chinese word segmentation (CWS) is a prerequisite for processing EHRs. Currently, state-of-the-art CWS methods usually require large amounts of manually-labeled data to reach their full potential. However, there are many challenges inherent in labeling EHRs. First, EHRs have many medical terminologies, such as “UTF8gkai高血压性心脏病” (hypertensive heart disease) and “UTF8gkai罗氏芬” (Rocephin), so only annotators with medical backgrounds can be qualified to label EHRs. Second, EHRs may involve personal privacies of patients. Therefore, they cannot be openly published on a large scale for labeling. The above two problems lead to the high annotation cost and insufficient training corpus in the research of CWS in medical text.",
"CWS was usually formulated as a sequence labeling task BIBREF2 , which can be solved by supervised learning approaches, such as hidden markov model (HMM) BIBREF3 and conditional random field (CRF) BIBREF4 . However, these methods rely heavily on handcrafted features. To relieve the efforts of feature engineering, neural network-based methods are beginning to thrive BIBREF5 , BIBREF6 , BIBREF7 . However, due to insufficient annotated training data, conventional models for CWS trained on open corpus often suffer from significant performance degradation when transferred to a domain-specific text. Moreover, the task in medical domain is rarely dabbled, and only one related work on transfer learning is found in recent literatures BIBREF8 . However, researches related to transfer learning mostly remain in general domains, causing a major problem that a considerable amount of manually annotated data is required, when introducing the models into specific domains.",
"One of the solutions for this obstacle is to use active learning, where only a small scale of samples are selected and labeled in an active manner. Active learning methods are favored by the researchers in many natural language processing (NLP) tasks, such as text classification BIBREF9 and named entity recognition (NER) BIBREF10 . However, only a handful of works are conducted on CWS BIBREF2 , and few focuses on medical domain tasks.",
"Given the aforementioned challenges and current researches, we propose a word segmentation method based on active learning. To model the segmentation history, we incorporate a sampling strategy consisting of word score, link score and sequence score, which effectively evaluates the segmentation decisions. Specifically, we combine information branch and gated neural network to determine if the segment is a legal word, i.e., word score. Meanwhile, we use the hidden layer output of the long short-term memory (LSTM) BIBREF11 to find out how the word is linked to its surroundings, i.e., link score. The final decision on the selection of labeling samples is made by calculating the average of word and link scores on the whole segmented sentence, i.e., sequence score. Besides, to capture coherence over characters, we additionally add K-means clustering features to the input of CRF-based word segmenter.",
"To sum up, the main contributions of our work are summarized as follows:",
"The rest of this paper is organized as follows. Section SECREF2 briefly reviews the related work on CWS and active learning. Section SECREF3 presents an active learning method for CWS. We experimentally evaluate our proposed method in Section SECREF4 . Finally, Section SECREF5 concludes the paper and envisions on future work."
],
[
"In past decades, researches on CWS have a long history and various methods have been proposed BIBREF13 , BIBREF14 , BIBREF15 , which is an important task for Chinese NLP BIBREF7 . These methods are mainly focus on two categories: supervised learning and deep learning BIBREF2 .",
"Supervised Learning Methods. Initially, supervised learning methods were widely-used in CWS. Xue BIBREF13 employed a maximum entropy tagger to automatically assign Chinese characters. Zhao et al. BIBREF16 used a conditional random field for tag decoding and considered both feature template selection and tag set selection. However, these methods greatly rely on manual feature engineering BIBREF17 , while handcrafted features are difficult to design, and the size of these features is usually very large BIBREF6 .",
"Deep Learning Methods. Recently, neural networks have been applied in CWS tasks. To name a few, Zheng et al. BIBREF14 used deep layers of neural networks to learn feature representations of characters. Chen et al. BIBREF6 adopted LSTM to capture the previous important information. Chen et al. BIBREF18 proposed a gated recursive neural network (GRNN), which contains reset and update gates to incorporate the complicated combinations of characters. Jiang and Tang BIBREF19 proposed a sequence-to-sequence transformer model to avoid overfitting and capture character information at the distant site of a sentence. Yang et al. BIBREF20 investigated subword information for CWS and integrated subword embeddings into a Lattice LSTM (LaLSTM) network. However, general word segmentation models do not work well in specific field due to lack of annotated training data.",
"Currently, a handful of domain-specific CWS approaches have been studied, but they focused on decentralized domains. In the metallurgical field, Shao et al. BIBREF15 proposed a domain-specific CWS method based on Bi-LSTM model. In the medical field, Xing et al. BIBREF8 proposed an adaptive multi-task transfer learning framework to fully leverage domain-invariant knowledge from high resource domain to medical domain. Meanwhile, transfer learning still greatly focuses on the corpus in general domain. When it comes to the specific domain, large amounts of manually-annotated data is necessary. Active learning can solve this problem to a certain extent. However, due to the challenges faced by performing active learning on CWS, only a few studies have been conducted. On judgements, Yan et al. BIBREF21 adopted the local annotation strategy, which selects substrings around the informative characters in active learning. However, their method still stays at the statistical level. Unlike the above method, we propose an active learning approach for CWS in medical text, which combines information entropy with neural network to effectively reduce annotation cost."
],
[
"Active learning BIBREF22 mainly aims to ease the data collection process by automatically deciding which instances should be labeled by annotators to train a model as quickly and effectively as possible BIBREF23 . The sampling strategy plays a key role in active learning. In the past decade, the rapid development of active learning has resulted in various sampling strategies, such as uncertainty sampling BIBREF24 , query-by-committee BIBREF25 and information gain BIBREF26 . Currently, the most mainstream sampling strategy is uncertainty sampling. It focuses its selection on samples closest to the decision boundary of the classifier and then chooses these samples for annotators to relabel BIBREF27 .",
"The formal definition of uncertainty sampling is to select a sample INLINEFORM0 that maximizes the entropy INLINEFORM1 over the probability of predicted classes: DISPLAYFORM0 ",
"where INLINEFORM0 is a multi-dimensional feature vector, INLINEFORM1 is its binary label, and INLINEFORM2 is the predicted probability, through which a classifier trained on training sets can map features to labels. However, in some complicated tasks, such as CWS and NER, only considering the uncertainty of classifier is obviously not enough."
],
[
"Active learning methods can generally be described into two parts: a learning engine and a selection engine BIBREF28 . The learning engine is essentially a classifier, which is mainly used for training of classification problems. The selection engine is based on the sampling strategy, which chooses samples that need to be relabeled by annotators from unlabeled data. Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively.",
"Fig. FIGREF7 and Algorithm SECREF3 demonstrate the procedure of CWS based on active learning. First, we train a CRF-based segmenter by train set. Then, the segmenter is employed to annotate the unlabeled set roughly. Subsequently, information entropy based scoring model picks INLINEFORM0 -lowest ranking samples for annotators to relabel. Meanwhile, the train sets and unlabeled sets are updated. Finally, we re-train the segmenter. The above steps iterate until the desired accuracy is achieved or the number of iterations has reached a predefined threshold. [!ht] Active Learning for Chinese Word Segmentation labeled data INLINEFORM1 , unlabeled data INLINEFORM2 , the number of iterations INLINEFORM3 , the number of samples selected per iteration INLINEFORM4 , partitioning function INLINEFORM5 , size INLINEFORM6 a word segmentation model INLINEFORM7 with the smallest test set loss INLINEFORM8 Initialize: INLINEFORM9 ",
" train a word segmenter INLINEFORM0 ",
" estimate the test set loss INLINEFORM0 ",
" label INLINEFORM0 by INLINEFORM1 ",
" INLINEFORM0 to INLINEFORM1 INLINEFORM2 compute INLINEFORM3 by branch information entropy based scoring model",
" select INLINEFORM0 -lowest ranking samples INLINEFORM1 ",
"relabel INLINEFORM0 by annotators",
"form a new labeled dataset INLINEFORM0 ",
"form a new unlabeled dataset INLINEFORM0 ",
"train a word segmenter INLINEFORM0 ",
"estimate the new test loss INLINEFORM0 ",
"compute the loss reduction INLINEFORM0 ",
" INLINEFORM0 INLINEFORM1 ",
" INLINEFORM0 ",
" INLINEFORM0 INLINEFORM1 with the smallest test set loss INLINEFORM2 INLINEFORM3 "
],
[
"CWS can be formalized as a sequence labeling problem with character position tags, which are (`B', `M', `E', `S'). So, we convert the labeled data into the `BMES' format, in which each character in the sequence is assigned into a label as follows one by one: B=beginning of a word, M=middle of a word, E=end of a word and S=single word.",
"In this paper, we use CRF as a training model for CWS task. Given the observed sequence, CRF has a single exponential model for the joint probability of the entire sequence of labels, while maximum entropy markov model (MEMM) BIBREF29 uses per-state exponential models for the conditional probabilities of next states BIBREF4 . Therefore, it can solve the label bias problem effectively. Compared with neural networks, it has less dependency on the corpus size.",
"First, we pre-process EHRs at the character-level, separating each character of raw EHRs. For instance, given a sentence INLINEFORM0 , where INLINEFORM1 represents the INLINEFORM2 -th character, the separated form is INLINEFORM3 . Then, we employ Word2Vec BIBREF30 to train pre-processed EHRs to get character embeddings. To capture interactions between adjacent characters, K-means clustering algorithm BIBREF31 is utilized to feature the coherence over characters. In general, K-means divides INLINEFORM4 EHR characters into INLINEFORM5 groups of clusters and the similarity of EHR characters in the same cluster is higher. With each iteration, K-means can classify EHR characters into the nearest cluster based on distance to the mean vector. Then, recalculating and adjusting the mean vectors of these clusters until the mean vector converges. K-means features explicitly show the difference between two adjacent characters and even multiple characters. Finally, we additionally add K-means clustering features to the input of CRF-based segmenter. The segmenter makes positional tagging decisions over individual characters. For example, a Chinese segmented sentence UTF8gkai“病人/长期/于/我院/肾病科/住院/治疗/。/\" (The patient was hospitalized for a long time in the nephrology department of our hospital.) is labeled as `BEBESBEBMEBEBES'."
],
[
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model.",
"We use gated neural network and information entropy to capture the likelihood of the segment being a legal word. The architecture of word score model is depicted in Fig. FIGREF12 .",
"Gated Combination Neural Network (GCNN)",
"To effectively learn word representations through character embeddings, we use GCNN BIBREF32 . The architecture of GCNN is demonstrated in Fig. FIGREF13 , which includes update gate and reset gate. The gated mechanism not only captures the characteristics of the characters themselves, but also utilizes the interaction between the characters. There are two types of gates in this network structure: reset gates and update gates. These two gated vectors determine the final output of the gated recurrent neural network, where the update gate helps the model determine what to be passed, and the reset gate primarily helps the model decide what to be cleared. In particular, the word embedding of a word with INLINEFORM0 characters can be computed as: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are update gates for new combination vector INLINEFORM2 and the i-th character INLINEFORM3 respectively, the combination vector INLINEFORM4 is formalized as: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are reset gates for characters.",
"Left and Right Branch Information Entropy In general, each string in a sentence may be a word. However, compared with a string which is not a word, the string of a word is significantly more independent. The branch information entropy is usually used to judge whether each character in a string is tightly linked through the statistical characteristics of the string, which reflects the likelihood of a string being a word. The left and right branch information entropy can be formalized as follows: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 denotes the INLINEFORM1 -th candidate word, INLINEFORM2 denotes the character set, INLINEFORM3 denotes the probability that character INLINEFORM4 is on the left of word INLINEFORM5 and INLINEFORM6 denotes the probability that character INLINEFORM7 is on the right of word INLINEFORM8 . INLINEFORM9 and INLINEFORM10 respectively represent the left and right branch information entropy of the candidate word INLINEFORM11 . If the left and right branch information entropy of a candidate word is relatively high, the probability that the candidate word can be combined with the surrounded characters to form a word is low, thus the candidate word is likely to be a legal word.",
"To judge whether the candidate words in a segmented sentence are legal words, we compute the left and right entropy of each candidate word, then take average as the measurement standard: DISPLAYFORM0 ",
"We represent a segmented sentence with INLINEFORM0 candidate words as [ INLINEFORM1 , INLINEFORM2 ,..., INLINEFORM3 ], so the INLINEFORM4 ( INLINEFORM5 ) of the INLINEFORM6 -th candidate word is computed by its average entropy: DISPLAYFORM0 ",
"In this paper, we use LSTM to capture the coherence between words in a segmented sentence. This neural network is mainly an optimization for traditional RNN. RNN is widely used to deal with time-series prediction problems. The result of its current hidden layer is determined by the input of the current layer and the output of the previous hidden layer BIBREF33 . Therefore, RNN can remember historical results. However, traditional RNN has problems of vanishing gradient and exploding gradient when training long sequences BIBREF34 . By adding a gated mechanism to RNN, LSTM effectively solves these problems, which motivates us to get the link score with LSTM. Formally, the LSTM unit performs the following operations at time step INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are the inputs of LSTM, all INLINEFORM3 and INLINEFORM4 are a set of parameter matrices to be trained, and INLINEFORM5 is a set of bias parameter matrices to be trained. INLINEFORM6 and INLINEFORM7 operation respectively represent matrix element-wise multiplication and sigmoid function. In the LSTM unit, there are two hidden layers ( INLINEFORM8 , INLINEFORM9 ), where INLINEFORM10 is the internal memory cell for dealing with vanishing gradient, while INLINEFORM11 is the main output of the LSTM unit for complex operations in subsequent layers.",
"We denotes INLINEFORM0 as the word embedding of time step INLINEFORM1 , a prediction INLINEFORM2 of next word embedding INLINEFORM3 can be computed by hidden layer INLINEFORM4 : DISPLAYFORM0 ",
"Therefore, link score of next word embedding INLINEFORM0 can be computed as: DISPLAYFORM0 ",
"Due to the structure of LSTM, vector INLINEFORM0 contains important information of entire segmentation decisions. In this way, the link score gets the result of the sequence-level word segmentation, not just word-level.",
"Intuitively, we can compute the score of a segmented sequence by summing up word scores and link scores. However, we find that a sequence with more candidate words tends to have higher sequence scores. Therefore, to alleviate the impact of the number of candidate words on sequence scores, we calculate final scores as follows: DISPLAYFORM0 ",
"where INLINEFORM0 denotes the INLINEFORM1 -th segmented sequence with INLINEFORM2 candidate words, and INLINEFORM3 represents the INLINEFORM4 -th candidate words in the segmented sequence.",
"When training the model, we seek to minimize the sequence score of the corrected segmented sentence and the predicted segmented sentence. DISPLAYFORM0 ",
"where INLINEFORM0 is the loss function."
],
[
"We collect 204 EHRs with cardiovascular diseases from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine and each contains 27 types of records. We choose 4 different types with a total of 3868 records from them, which are first course reports, medical records, chief ward round records and discharge records. The detailed information of EHRs are listed in Table TABREF32 .",
"We split our datasets as follows. First, we randomly select 3200 records from 3868 records as unlabeled set. Then, we manually annotate remaining 668 records as labeled set, which contains 1170 sentences. Finally, we divide labeled set into train set and test set with the ratio of 7:3 randomly. Statistics of datasets are listed in Table TABREF33 ."
],
[
"To determine suitable parameters, we divide training set into two sets, the first 80% sentences as training set and the rest 20% sentences as validation set.",
"Character embedding dimensions and K-means clusters are two main parameters in the CRF-based word segmenter.",
"In this paper, we choose character-based CRF without any features as baseline. First, we use Word2Vec to train character embeddings with dimensions of [`50', `100', `150', `200', `300', `400'] respectively, thus we obtain 6 different dimensional character embeddings. Second, these six types of character embeddings are used as the input to K-means algorithm with the number of clusters [`50', `100', `200', `300', `400', `500', `600'] respectively to capture the corresponding features of character embeddings. Then, we add K-means clustering features to baseline for training. As can be seen from Fig. FIGREF36 , when the character embedding dimension INLINEFORM0 = 150 and the number of clusters INLINEFORM1 = 400, CRF-based word segmenter performs best, so these two parameters are used in subsequent experiments.",
"Hyper-parameters of neural network have a great impact on the performance. The hyper-parameters we choose are listed in Table TABREF38 .",
"The dimension of character embeddings is set as same as the parameter used in CRF-based word segmenter and the number of hidden units is also set to be the same as it. Maximum word length is ralated to the number of parameters in GCNN unit. Since there are many long medical terminologies in EHRs, we set the maximum word length as 6. In addition, dropout is an effective way to prevent neural networks from overfitting BIBREF35 . To avoid overfitting, we drop the input layer of the scoring model with the rate of 20%."
],
[
"Our work experimentally compares two mainstream CWS tools (LTP and Jieba) on training and testing sets. These two tools are widely used and recognized due to their high INLINEFORM0 -score of word segmentation in general fields. However, in specific fields, there are many terminologies and uncommon words, which lead to the unsatisfactory performance of segmentation results. To solve the problem of word segmentation in specific fields, these two tools provide a custom dictionary for users. In the experiments, we also conduct a comparative experiment on whether external domain dictionary has an effect on the experimental results. We manually construct the dictionary when labeling EHRs.",
"From the results in Table TABREF41 , we find that Jieba benefits a lot from the external dictionary. However, the Recall of LTP decreases when joining the domain dictionary. Generally speaking, since these two tools are trained by general domain corpus, the results are not ideal enough to cater to the needs of subsequent NLP of EHRs when applied to specific fields.",
"To investigate the effectiveness of K-means features in CRF-based segmenter, we also compare K-means with 3 different clustering features, including MeanShift BIBREF36 , SpectralClustering BIBREF37 and DBSCAN BIBREF38 on training and testing sets. From the results in Table TABREF43 , by adding additional clustering features in CRF-based segmenter, there is a significant improvement of INLINEFORM0 -score, which indicates that clustering features can effectively capture the semantic coherence between characters. Among these clustering features, K-means performs best, so we utlize K-means results as additional features for CRF-based segmenter.",
"In this experiment, since uncertainty sampling is the most popular strategy in real applications for its simpleness and effectiveness BIBREF27 , we compare our proposed strategy with uncertainty sampling in active learning. We conduct our experiments as follows. First, we employ CRF-based segmenter to annotate the unlabeled set. Then, sampling strategy in active learning selects a part of samples for annotators to relabel. Finally, the relabeled samples are added to train set for segmenter to re-train. Our proposed scoring strategy selects samples according to the sequence scores of the segmented sentences, while uncertainty sampling suggests relabeling samples that are closest to the segmenter’s decision boundary.",
"Generally, two main parameters in active learning are the numbers of iterations and samples selected per iteration. To fairly investigate the influence of two parameters, we compare our proposed strategy with uncertainty sampling on the same parameter. We find that though the number of iterations is large enough, it has a limited impact on the performance of segmenter. Therefore, we choose 30 as the number of iterations, which is a good trade-off between speed and performance. As for the number of samples selected per iteration, there are 6078 sentences in unlabeled set, considering the high cost of relabeling, we set four sizes of samples selected per iteration, which are 2%, 5%, 8% and 11%.",
"The experimental results of two sampling strategies with 30 iterations on four different proportions of relabeled data are shown in Fig. FIGREF45 , where x-axis represents the number of iterations and y-axis denotes the INLINEFORM0 -score of the segmenter. Scoring strategy shows consistent improvements over uncertainty sampling in the early iterations, indicating that scoring strategy is more capable of selecting representative samples.",
"Furthermore, we also investigate the relations between the best INLINEFORM0 -score and corresponding number of iteration on two sampling strategies, which is depicted in Fig. FIGREF46 .",
"It is observed that in our proposed scoring model, with the proportion of relabeled data increasing, the iteration number of reaching the optimal word segmentation result is decreasing, but the INLINEFORM0 -score of CRF-based word segmenter is also gradually decreasing. When the proportion is 2%, the segmenter reaches the highest INLINEFORM1 -score: 90.62%. Obviously, our proposed strategy outperforms uncertainty sampling by a large margin. Our proposed method needs only 2% relabeled samples to obtain INLINEFORM2 -score of 90.62%, while uncertainty sampling requires 8% samples to reach its best INLINEFORM3 -score of 88.98%, which indicates that with our proposed method, we only need to manually relabel a small number of samples to achieve a desired segmentation result."
],
[
"To relieve the efforts of EHRs annotation, we propose an effective word segmentation method based on active learning, in which the sampling strategy is a scoring model combining information entropy with neural network. Compared with the mainstream uncertainty sampling, our strategy selects samples from statistical perspective and deep learning level. In addition, to capture coherence between characters, we add K-means clustering features to CRF-based word segmenter. Based on EHRs collected from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, we evaluate our method on CWS task. Compared with uncertainty sampling, our method requires 6% less relabeled samples to achieve better performance, which proves that our method can save the cost of manual annotation to a certain extent.",
"In future, we plan to employ other widely-used deep neural networks, such as convolutional neural network and attention mechanism, in the research of EHRs segmentation. Then, we believe that our method can be applied to other tasks as well, so we will fully investigate the application of our method in other tasks, such as NER and relation extraction."
],
[
"The authors would like to appreciate any suggestions or comments from the anonymous reviewers. This work was supported by the National Natural Science Foundation of China (No. 61772201) and the National Key R&D Program of China for “Precision medical research\" (No. 2018YFC0910550)."
]
],
"section_name": [
"Introduction",
"Chinese Word Segmentation",
"Active Learning",
"Active Learning for Chinese Word Segmentation",
"CRF-based Word Segmenter",
"Information Entropy Based Scoring Model",
"Datasets",
"Parameter Settings",
"Experimental Results",
"Conclusion and Future Work",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"7f52a42b5c714e3a236ad19e17d6118d7150020d",
"dfd42925ad6801aefc716d18331afc2671840e52"
],
"answer": [
{
"evidence": [
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
],
"extractive_spans": [
"First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word"
],
"free_form_answer": "",
"highlighted_evidence": [
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
],
"extractive_spans": [
" the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history"
],
"free_form_answer": "",
"highlighted_evidence": [
"The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"589355ec9f709793c89446fbfa5eba29dcd02fa5"
],
"answer": [
{
"evidence": [
"Active learning methods can generally be described into two parts: a learning engine and a selection engine BIBREF28 . The learning engine is essentially a classifier, which is mainly used for training of classification problems. The selection engine is based on the sampling strategy, which chooses samples that need to be relabeled by annotators from unlabeled data. Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively."
],
"extractive_spans": [],
"free_form_answer": "Active learning methods has a learning engine (mainly used for training of classification problems) and the selection engine (which chooses samples that need to be relabeled by annotators from unlabeled data). Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively.",
"highlighted_evidence": [
"Active learning methods can generally be described into two parts: a learning engine and a selection engine BIBREF28 . The learning engine is essentially a classifier, which is mainly used for training of classification problems. The selection engine is based on the sampling strategy, which chooses samples that need to be relabeled by annotators from unlabeled data. Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"91d6990deb8ffb2a24a890eea56dd15de40b3546"
],
"answer": [
{
"evidence": [
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
],
"extractive_spans": [
"gated neural network "
],
"free_form_answer": "",
"highlighted_evidence": [
"A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How does the scoring model work?",
"How does the active learning model work?",
"Which neural network architectures are employed?"
],
"question_id": [
"3c3cb51093b5fd163e87a773a857496a4ae71f03",
"53a0763eff99a8148585ac642705637874be69d4",
"0bfed6f9cfe93617c5195c848583e3945f2002ff"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"word segmentation",
"word segmentation",
"word segmentation"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. The diagram of active learning for the Chinese word segmentation.",
"Fig. 2. The architecture of the information entropy based scoring model, where ‘/’ represents candidate word separator, xi represents the one-hot encoding of the i-th character, cj represents the j-th character embedding learned by Word2Vec, wm represents the distributed representation of the mth candidate word and pn represents the prediction of the (n+1)-th candidate word.",
"Fig. 3. The architecture of word score, where ‘/’ represents candidate word separator, ci represents the i-th character embedding, wj represents the j-th candidate word embedding and ScoreWord(wk) represents the word score of the k-th candidate word.",
"Fig. 4. The architecture of GCNN.",
"TABLE I DETAILED INFORMATION OF EHRS",
"TABLE III HYPER-PARAMETER SETTING.",
"TABLE IV EXPERIMENTAL RESULTS WITH DIFFERENT WORD SEGMENTATION TOOLS.",
"Fig. 5. The relation between F1-score and K-means class with different character embedding dimensions.",
"TABLE II STATISTICS OF DATASETS",
"TABLE V COMPARISON WITH DIFFERENT CLUSTERING FEATURES.",
"Fig. 7. The relations between the best F1-score and corresponding iteration on two sampling strategies with different relabeled sample sizes.",
"Fig. 6. The results of two sampling strategies with different relabeled sample sizes."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"5-TableI-1.png",
"6-TableIII-1.png",
"6-TableIV-1.png",
"6-Figure5-1.png",
"6-TableII-1.png",
"7-TableV-1.png",
"7-Figure7-1.png",
"7-Figure6-1.png"
]
} | [
"How does the active learning model work?"
] | [
[
"1908.08419-Active Learning for Chinese Word Segmentation-0"
]
] | [
"Active learning methods has a learning engine (mainly used for training of classification problems) and the selection engine (which chooses samples that need to be relabeled by annotators from unlabeled data). Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively."
] | 36 |
1703.05260 | InScript: Narrative texts annotated with script information | This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). InScript is a corpus of 1,000 stories centered around 10 different scenarios. Verbs and noun phrases are annotated with event and participant types, respectively. Additionally, the text is annotated with coreference information. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing. | {
"paragraphs": [
[
"A script is “a standardized sequence of events that describes some stereotypical human activity such as going to a restaurant or visiting a doctor” BIBREF0 . Script events describe an action/activity along with the involved participants. For example, in the script describing a visit to a restaurant, typical events are entering the restaurant, ordering food or eating. Participants in this scenario can include animate objects like the waiter and the customer, as well as inanimate objects such as cutlery or food.",
"Script knowledge has been shown to play an important role in text understanding (cullingford1978script, miikkulainen1995script, mueller2004understanding, Chambers2008, Chambers2009, modi2014inducing, rudinger2015learning). It guides the expectation of the reader, supports coreference resolution as well as common-sense knowledge inference and enables the appropriate embedding of the current sentence into the larger context. Figure 1 shows the first few sentences of a story describing the scenario taking a bath. Once the taking a bath scenario is evoked by the noun phrase (NP) “a bath”, the reader can effortlessly interpret the definite NP “the faucet” as an implicitly present standard participant of the taking a bath script. Although in this story, “entering the bath room”, “turning on the water” and “filling the tub” are explicitly mentioned, a reader could nevertheless have inferred the “turning on the water” event, even if it was not explicitly mentioned in the text. Table 1 gives an example of typical events and participants for the script describing the scenario taking a bath.",
"A systematic study of the influence of script knowledge in texts is far from trivial. Typically, text documents (e.g. narrative texts) describing various scenarios evoke many different scripts, making it difficult to study the effect of a single script. Efforts have been made to collect scenario-specific script knowledge via crowdsourcing, for example the OMICS and SMILE corpora (singh2002open, Regneri:2010, Regneri2013), but these corpora describe script events in a pointwise telegram style rather than in full texts.",
"This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). It is a corpus of simple narrative texts in the form of stories, wherein each story is centered around a specific scenario. The stories have been collected via Amazon Mechanical Turk (M-Turk). In this experiment, turkers were asked to write down a concrete experience about a bus ride, a grocery shopping event etc. We concentrated on 10 scenarios and collected 100 stories per scenario, giving a total of 1,000 stories with about 200,000 words. Relevant verbs and noun phrases in all stories are annotated with event types and participant types respectively. Additionally, the texts have been annotated with coreference information in order to facilitate the study of the interdependence between script structure and coreference.",
"The InScript corpus is a unique resource that provides a basis for studying various aspects of the role of script knowledge in language processing by humans. The acquisition of this corpus is part of a larger research effort that aims at using script knowledge to model the surprisal and information density in written text. Besides InScript, this project also released a corpus of generic descriptions of script activities called DeScript (for Describing Script Structure, Wanzare2016). DeScript contains a range of short and textually simple phrases that describe script events in the style of OMICS or SMILE (singh2002open, Regneri:2010). These generic telegram-style descriptions are called Event Descriptions (EDs); a sequence of such descriptions that cover a complete script is called an Event Sequence Description (ESD). Figure 2 shows an excerpt of a script in the baking a cake scenario. The figure shows event descriptions for 3 different events in the DeScript corpus (left) and fragments of a story in the InScript corpus (right) that instantiate the same event type."
],
[
"We selected 10 scenarios from different available scenario lists (e.g. Regneri:2010 , VanDerMeer2009, and the OMICS corpus BIBREF1 ), including scripts of different complexity (Taking a bath vs. Flying in an airplane) and specificity (Riding a public bus vs. Repairing a flat bicycle tire). For the full scenario list see Table 2 .",
"Texts were collected via the Amazon Mechanical Turk platform, which provides an opportunity to present an online task to humans (a.k.a. turkers). In order to gauge the effect of different M-Turk instructions on our task, we first conducted pilot experiments with different variants of instructions explaining the task. We finalized the instructions for the full data collection, asking the turkers to describe a scenario in form of a story as if explaining it to a child and to use a minimum of 150 words. The selected instruction variant resulted in comparably simple and explicit scenario-related stories. In the future we plan to collect more complex stories using different instructions. In total 190 turkers participated. All turkers were living in the USA and native speakers of English. We paid USD $0.50 per story to each turker. On average, the turkers took 9.37 minutes per story with a maximum duration of 17.38 minutes."
],
[
"Statistics for the corpus are given in Table 2 . On average, each story has a length of 12 sentences and 217 words with 98 word types on average. Stories are coherent and concentrate mainly on the corresponding scenario. Neglecting auxiliaries, modals and copulas, on average each story has 32 verbs, out of which 58% denote events related to the respective scenario. As can be seen in Table 2 , there is some variation in stories across scenarios: The flying in an airplane scenario, for example, is most complex in terms of the number of sentences, tokens and word types that are used. This is probably due to the inherent complexity of the scenario: Taking a flight, for example, is more complicated and takes more steps than taking a bath. The average count of sentences, tokens and types is also very high for the baking a cake scenario. Stories from the scenario often resemble cake recipes, which usually contain very detailed steps, so people tend to give more detailed descriptions in the stories.",
"For both flying in an airplane and baking a cake, the standard deviation is higher in comparison to other scenarios. This indicates that different turkers described the scenario with a varying degree of detail and can also be seen as an indicator for the complexity of both scenarios. In general, different people tend to describe situations subjectively, with a varying degree of detail. In contrast, texts from the taking a bath and planting a tree scenarios contain a relatively smaller number of sentences and fewer word types and tokens. Both planting a tree and taking a bath are simpler activities, which results in generally less complex texts.",
"The average pairwise word type overlap can be seen as a measure of lexical variety among stories: If it is high, the stories resemble each other more. We can see that stories in the flying in an airplane and baking a cake scenarios have the highest values here, indicating that most turkers used a similar vocabulary in their stories.",
"In general, the response quality was good. We had to discard 9% of the stories as these lacked the quality we were expecting. In total, we selected 910 stories for annotation."
],
[
"This section deals with the annotation of the data. We first describe the final annotation schema. Then, we describe the iterative process of corpus annotation and the refinement of the schema. This refinement was necessary due to the complexity of the annotation."
],
[
"For each of the scenarios, we designed a specific annotation template. A script template consists of scenario-specific event and participant labels. An example of a template is shown in Table 1 . All NP heads in the corpus were annotated with a participant label; all verbs were annotated with an event label. For both participants and events, we also offered the label unclear if the annotator could not assign another label. We additionally annotated coreference chains between NPs. Thus, the process resulted in three layers of annotation: event types, participant types and coreference annotation. These are described in detail below.",
"As a first layer, we annotated event types. There are two kinds of event type labels, scenario-specific event type labels and general labels. The general labels are used across every scenario and mark general features, for example whether an event belongs to the scenario at all. For the scenario-specific labels, we designed an unique template for every scenario, with a list of script-relevant event types that were used as labels. Such labels include for example ScrEv_close_drain in taking a bath as in Example UID10 (see Figure 1 for a complete list for the taking a bath scenario)",
"I start by closing $_{\\textsc {\\scriptsize ScrEv\\_close\\_drain}}$ the drain at the bottom of the tub.",
"The general labels that were used in addition to the script-specific labels in every scenario are listed below:",
"ScrEv_other. An event that belongs to the scenario, but its event type occurs too infrequently (for details, see below, Section \"Modification of the Schema\" ). We used the label “other\" because event classification would become too finegrained otherwise.",
"Example: After I am dried I put my new clothes on and clean up $_{\\textsc {\\scriptsize ScrEv\\_other}}$ the bathroom.",
"RelNScrEv. Related non-script event. An event that can plausibly happen during the execution of the script and is related to it, but that is not part of the script.",
"Example: After finding on what I wanted to wear, I went into the bathroom and shut $_{\\textsc {\\scriptsize RelNScrEv}}$ the door.",
"UnrelEv. An event that is unrelated to the script.",
"Example: I sank into the bubbles and took $_{\\textsc {\\scriptsize UnrelEv}}$ a deep breath.",
"Additionally, the annotators were asked to annotate verbs and phrases that evoke the script without explicitly referring to a script event with the label Evoking, as shown in Example UID10 . Today I took a bath $_{\\textsc {\\scriptsize Evoking}}$ in my new apartment.",
"As in the case of the event type labels, there are two kinds of participant labels: general labels and scenario-specific labels. The latter are part of the scenario-specific templates, e.g. ScrPart_drain in the taking a bath scenario, as can be seen in Example UID15 .",
"I start by closing the drain $_{\\textsc {\\scriptsize ScrPart\\_drain}}$ at the bottom of the tub.",
"The general labels that are used across all scenarios mark noun phrases with scenario-independent features. There are the following general labels:",
"ScrPart_other. A participant that belongs to the scenario, but its participant type occurs only infrequently.",
"Example: I find my bath mat $_{\\textsc {\\scriptsize ScrPart\\_other}}$ and lay it on the floor to keep the floor dry.",
"NPart. Non-participant. A referential NP that does not belong to the scenario.",
"Example: I washed myself carefully because I did not want to spill water onto the floor $_{\\textsc {\\scriptsize NPart}}$ .labeled",
"SuppVComp. A support verb complement. For further discussion of this label, see Section \"Special Cases\" ",
"Example: I sank into the bubbles and took a deep breath $_{\\textsc {\\scriptsize SuppVComp}}$ .",
"Head_of_Partitive. The head of a partitive or a partitive-like construction. For a further discussion of this label cf. Section \"Special Cases\" ",
"Example: I grabbed a bar $_{\\textsc {\\scriptsize Head\\_of\\_Partitive}}$ of soap and lathered my body.",
"No_label. A non-referential noun phrase that cannot be labeled with another label. Example: I sat for a moment $_{\\textsc {\\scriptsize No\\_label}}$ , relaxing, allowing the warm water to sooth my skin.",
"All NPs labeled with one of the labels SuppVComp, Head_of_Partitive or No_label are considered to be non-referential. No_label is used mainly in four cases in our data: non-referential time expressions (in a while, a million times better), idioms (no matter what), the non-referential “it” (it felt amazing, it is better) and other abstracta (a lot better, a little bit).",
"In the first annotation phase, annotators were asked to mark verbs and noun phrases that have an event or participant type, that is not listed in the template, as MissScrEv/ MissScrPart (missing script event or participant, resp.). These annotations were used as a basis for extending the templates (see Section \"Modification of the Schema\" ) and replaced later by newly introduced labels or ScrEv_other and ScrPart_other respectively.",
"All noun phrases were annotated with coreference information indicating which entities denote the same discourse referent. The annotation was done by linking heads of NPs (see Example UID21 , where the links are indicated by coindexing). As a rule, we assume that each element of a coreference chain is marked with the same participant type label.",
"I $ _{\\textsc {\\scriptsize Coref1}}$ washed my $ _{\\textsc {\\scriptsize Coref1}}$ entire body $ _{\\textsc {\\scriptsize Coref2}}$ , starting with my $ _{\\textsc {\\scriptsize Coref1}}$ face $ _{\\textsc {\\scriptsize Coref3}} $ and ending with the toes $ _{\\textsc {\\scriptsize Coref4}} $ . I $ _{\\textsc {\\scriptsize Coref1}}$ always wash my $ _{\\textsc {\\scriptsize Coref1}}$ toes $_{\\textsc {\\scriptsize Coref4}}$ very thoroughly ...",
"The assignment of an entity to a referent is not always trivial, as is shown in Example UID21 . There are some cases in which two discourse referents are grouped in a plural NP. In the example, those things refers to the group made up of shampoo, soap and sponge. In this case, we asked annotators to introduce a new coreference label, the name of which indicates which referents are grouped together (Coref_group_washing_tools). All NPs are then connected to the group phrase, resulting in an additional coreference chain.",
"I $ _{\\textsc {\\scriptsize Coref1}}$ made sure that I $ _{\\textsc {\\scriptsize Coref1}}$ have my $ _{\\textsc {\\scriptsize Coref1}}$ shampoo $ _{\\textsc {\\scriptsize Coref2 + Coref\\_group\\_washing\\_tools}}$ , soap $_{\\textsc {\\scriptsize Coref3 + Coref\\_group\\_washing\\_tools}}$ and sponge $ _{\\textsc {\\scriptsize Coref4 + Coref\\_group\\_washing\\_tools}}$ ready to get in. Once I $ _{\\textsc {\\scriptsize Coref1}}$ have those things $ _{\\textsc {\\scriptsize Coref\\_group\\_washing\\_tools}}$ I $ _{\\textsc {\\scriptsize Coref1}}$ sink into the bath. ... I $ _{\\textsc {\\scriptsize Coref1}}$ applied some soap $ _{\\textsc {\\scriptsize Coref1}}$0 on my $ _{\\textsc {\\scriptsize Coref1}}$1 body and used the sponge $ _{\\textsc {\\scriptsize Coref1}}$2 to scrub a bit. ... I $ _{\\textsc {\\scriptsize Coref1}}$3 rinsed the shampoo $ _{\\textsc {\\scriptsize Coref1}}$4 . Example UID21 thus contains the following coreference chains: Coref1: I $ _{\\textsc {\\scriptsize Coref1}}$5 I $ _{\\textsc {\\scriptsize Coref1}}$6 my $ _{\\textsc {\\scriptsize Coref1}}$7 I $ _{\\textsc {\\scriptsize Coref1}}$8 I $ _{\\textsc {\\scriptsize Coref1}}$9 I $ _{\\textsc {\\scriptsize Coref1}}$0 my $ _{\\textsc {\\scriptsize Coref1}}$1 I",
"Coref2: shampoo $\\rightarrow $ shampoo",
"Coref3: soap $\\rightarrow $ soap",
"Coref4: sponge $\\rightarrow $ sponge",
"Coref_group_washing_ tools: shampoo $\\rightarrow $ soap $\\rightarrow $ sponge $\\rightarrow $ things"
],
[
"The templates were carefully designed in an iterated process. For each scenario, one of the authors of this paper provided a preliminary version of the template based on the inspection of some of the stories. For a subset of the scenarios, preliminary templates developed at our department for a psycholinguistic experiment on script knowledge were used as a starting point. Subsequently, the authors manually annotated 5 randomly selected texts for each of the scenarios based on the preliminary template. Necessary extensions and changes in the templates were discussed and agreed upon. Most of the cases of disagreement were related to the granularity of the event and participant types. We agreed on the script-specific functional equivalence as a guiding principle. For example, reading a book, listening to music and having a conversation are subsumed under the same event label in the flight scenario, because they have the common function of in-flight entertainment in the scenario. In contrast, we assumed different labels for the cake tin and other utensils (bowls etc.), since they have different functions in the baking a cake scenario and accordingly occur with different script events.",
"Note that scripts and templates as such are not meant to describe an activity as exhaustively as possible and to mention all steps that are logically necessary. Instead, scripts describe cognitively prominent events in an activity. An example can be found in the flight scenario. While more than a third of the turkers mentioned the event of fastening the seat belts in the plane (buckle_seat_belt), no person wrote about undoing their seat belts again, although in reality both events appear equally often. Consequently, we added an event type label for buckling up, but no label for undoing the seat belts."
],
[
"We used the WebAnno annotation tool BIBREF2 for our project. The stories from each scenario were distributed among four different annotators. In a calibration phase, annotators were presented with some sample texts for test annotations; the results were discussed with the authors. Throughout the whole annotation phase, annotators could discuss any emerging issues with the authors. All annotations were done by undergraduate students of computational linguistics. The annotation was rather time-consuming due to the complexity of the task, and thus we decided for single annotation mode. To assess annotation quality, a small sample of texts was annotated by all four annotators and their inter-annotator agreement was measured (see Section \"Inter-Annotator Agreement\" ). It was found to be sufficiently high.",
"Annotation of the corpus together with some pre- and post-processing of the data required about 500 hours of work. All stories were annotated with event and participant types (a total of 12,188 and 43,946 instances, respectively). On average there were 7 coreference chains per story with an average length of 6 tokens."
],
[
"After the first annotation round, we extended and changed the templates based on the results. As mentioned before, we used MissScrEv and MissScrPart labels to mark verbs and noun phrases instantiating events and participants for which no appropriate labels were available in the templates. Based on the instances with these labels (a total of 941 and 1717 instances, respectively), we extended the guidelines to cover the sufficiently frequent cases. In order to include new labels for event and participant types, we tried to estimate the number of instances that would fall under a certain label. We added new labels according to the following conditions:",
"For the participant annotations, we added new labels for types that we expected to appear at least 10 times in total in at least 5 different stories (i.e. in approximately 5% of the stories).",
"For the event annotations, we chose those new labels for event types that would appear in at least 5 different stories.",
"In order to avoid too fine a granularity of the templates, all other instances of MissScrEv and MissScrPart were re-labeled with ScrEv_other and ScrPart_other. We also relabeled participants and events from the first annotation phase with ScrEv_other and ScrPart_other, if they did not meet the frequency requirements. The event label air_bathroom (the event of letting fresh air into the room after the bath), for example, was only used once in the stories, so we relabeled that instance to ScrEv_other.",
"Additionally, we looked at the DeScript corpus BIBREF3 , which contains manually clustered event paraphrase sets for the 10 scenarios that are also covered by InScript (see Section \"Comparison to the DeScript Corpus\" ). Every such set contains event descriptions that describe a certain event type. We extended our templates with additional labels for these events, if they were not yet part of the template."
],
[
"Noun-noun compounds were annotated twice with the same label (whole span plus the head noun), as indicated by Example UID31 . This redundant double annotation is motivated by potential processing requirements.",
"I get my (wash (cloth $ _{\\textsc {\\scriptsize ScrPart\\_washing\\_tools}} ))$ , $_{\\textsc {\\scriptsize ScrPart\\_washing\\_tools}} $ and put it under the water.",
"A special treatment was given to support verb constructions such as take time, get home or take a seat in Example UID32 . The semantics of the verb itself is highly underspecified in such constructions; the event type is largely dependent on the object NP. As shown in Example UID32 , we annotate the head verb with the event type described by the whole construction and label its object with SuppVComp (support verb complement), indicating that it does not have a proper reference.",
"I step into the tub and take $ _{\\textsc {\\scriptsize ScrEv\\_sink\\_water}} $ a seat $ _{\\textsc {\\scriptsize SuppVComp}} $ .",
"We used the Head_of_Partitive label for the heads in partitive constructions, assuming that the only referential part of the construction is the complement. This is not completely correct, since different partitive heads vary in their degree of concreteness (cf. Examples UID33 and UID33 ), but we did not see a way to make the distinction sufficiently transparent to the annotators. Our seats were at the back $ _{\\textsc {\\scriptsize Head\\_of\\_Partitive}} $ of the train $ _{\\textsc {\\scriptsize ScrPart\\_train}} $ . In the library you can always find a couple $ _{\\textsc {\\scriptsize Head\\_of\\_Partitive}} $ of interesting books $ _{\\textsc {\\scriptsize ScrPart\\_book}} $ .",
"Group denoting NPs sometimes refer to groups whose members are instances of different participant types. In Example UID34 , the first-person plural pronoun refers to the group consisting of the passenger (I) and a non-participant (my friend). To avoid a proliferation of event type labels, we labeled these cases with Unclear.",
"I $ _{\\textsc {\\scriptsize {ScrPart\\_passenger}}}$ wanted to visit my $_{\\textsc {\\scriptsize {ScrPart\\_passenger}}}$ friend $ _{\\textsc {\\scriptsize {NPart}}}$ in New York. ... We $_{\\textsc {\\scriptsize Unclear}}$ met at the train station.",
"We made an exception for the Getting a Haircut scenario, where the mixed participant group consisting of the hairdresser and the customer occurs very often, as in Example UID34 . Here, we introduced the additional ad-hoc participant label Scr_Part_hairdresser_customer.",
"While Susan $_{\\textsc {\\scriptsize {ScrPart\\_hairdresser}}}$ is cutting my $_{\\textsc {\\scriptsize {ScrPart\\_customer}}}$ hair we $_{\\textsc {\\scriptsize Scr\\_Part\\_hairdresser\\_customer}}$ usually talk a bit."
],
[
"In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 . Interestingly, if we calculated the Kappa only on the subset of cases that were annotated with script-specific event and participant labels by all annotators, results were better than those of the evaluation on all labeled instances (including also unrelated and related non-script events). This indicates one of the challenges of the annotation task: In many cases it is difficult to decide whether a particular event should be considered a central script event, or an event loosely related or unrelated to the script.",
"For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement."
],
[
"Figure 5 gives an overview of the number of event and participant types provided in the templates. Taking a flight and getting a haircut stand out with a large number of both event and participant types, which is due to the inherent complexity of the scenarios. In contrast, planting a tree and going on a train contain the fewest labels. There are 19 event and participant types on average.",
"Figure 6 presents overview statistics about the usage of event labels, participant labels and coreference chain annotations. As can be seen, there are usually many more mentions of participants than events. For coreference chains, there are some chains that are really long (which also results in a large scenario-wise standard deviation). Usually, these chains describe the protagonist.",
"We also found again that the flying in an airplane scenario stands out in terms of participant mentions, event mentions and average number of coreference chains.",
"Figure 7 shows for every participant label in the baking a cake scenario the number of stories which they occurred in. This indicates how relevant a participant is for the script. As can be seen, a small number of participants are highly prominent: cook, ingredients and cake are mentioned in every story. The fact that the protagonist appears most often consistently holds for all other scenarios, where the acting person appears in every story, and is mentioned most frequently.",
"Figure 8 shows the distribution of participant/event type labels over all appearances over all scenarios on average. The groups stand for the most frequently appearing label, the top 2 to 5 labels in terms of frequency and the top 6 to 10. ScrEv_other and ScrPart_other are shown separately. As can be seen, the most frequently used participant label (the protagonist) makes up about 40% of overall participant instances. The four labels that follow the protagonist in terms of frequency together appear in 37% of the cases. More than 2 out of 3 participants in total belong to one of only 5 labels.",
"In contrast, the distribution for events is more balanced. 14% of all event instances have the most prominent event type. ScrEv_other and ScrPart_other both appear as labels in at most 5% of all event and participant instantiations: The specific event and participant type labels in our templates cover by far most of the instances.",
"In Figure 9 , we grouped participants similarly into the first, the top 2-5 and top 6-10 most frequently appearing participant types. The figure shows for each of these groups the average frequency per story, and in the rightmost column the overall average. The results correspond to the findings from the last paragraph."
],
[
"As mentioned previously, the InScript corpus is part of a larger research project, in which also a corpus of a different kind, the DeScript corpus, was created. DeScript covers 40 scenarios, and also contains the 10 scenarios from InScript. This corpus contains texts that describe scripts on an abstract and generic level, while InScript contains instantiations of scripts in narrative texts. Script events in DeScript are described in a very simple, telegram-style language (see Figure 2 ). Since one of the long-term goals of the project is to align the InScript texts with the script structure given from DeScript, it is interesting to compare both resources.",
"The InScript corpus exhibits much more lexical variation than DeScript. Many approaches use the type-token ratio to measure this variance. However, this measure is known to be sensitive to text length (see e.g. Tweedie1998), which would result in very small values for InScript and relatively large ones for DeScript, given the large average difference of text lengths between the corpora. Instead, we decided to use the Measure of Textual Lexical Diversity (MTLD) (McCarthy2010, McCarthy2005), which is familiar in corpus linguistics. This metric measures the average number of tokens in a text that are needed to retain a type-token ratio above a certain threshold. If the MTLD for a text is high, many tokens are needed to lower the type-token ratio under the threshold, so the text is lexically diverse. In contrast, a low MTLD indicates that only a few words are needed to make the type-token ratio drop, so the lexical diversity is smaller. We use the threshold of 0.71, which is proposed by the authors as a well-proven value.",
"Figure 10 compares the lexical diversity of both resources. As can be seen, the InScript corpus with its narrative texts is generally much more diverse than the DeScript corpus with its short event descriptions, across all scenarios. For both resources, the flying in an airplane scenario is most diverse (as was also indicated above by the mean word type overlap). However, the difference in the variation of lexical variance of scenarios is larger for DeScript than for InScript. Thus, the properties of a scenario apparently influence the lexical variance of the event descriptions more than the variance of the narrative texts. We used entropy BIBREF6 over lemmas to measure the variance of lexical realizations for events. We excluded events for which there were less than 10 occurrences in DeScript or InScript. Since there is only an event annotation for 50 ESDs per scenario in DeScript, we randomly sampled 50 texts from InScript for computing the entropy to make the numbers more comparable.",
"Figure 11 shows as an example the entropy values for the event types in the going on a train scenario. As can be seen in the graph, the entropy for InScript is in general higher than for DeScript. In the stories, a wider variety of verbs is used to describe events. There are also large differences between events: While wait has a really low entropy, spend_time_train has an extremely high entropy value. This event type covers many different activities such as reading, sleeping etc."
],
[
"In this paper we described the InScript corpus of 1,000 narrative texts annotated with script structure and coreference information. We described the annotation process, various difficulties encountered during annotation and different remedies that were taken to overcome these. One of the future research goals of our project is also concerned with finding automatic methods for text-to-script mapping, i.e. for the alignment of text segments with script states. We consider InScript and DeScript together as a resource for studying this alignment. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing."
],
[
"This research was funded by the German Research Foundation (DFG) as part of SFB 1102 'Information Density and Linguistic Encoding'."
]
],
"section_name": [
"Motivation",
"Collection via Amazon M-Turk",
"Data Statistics",
"Annotation",
"Annotation Schema",
"Development of the Schema",
"First Annotation Phase",
"Modification of the Schema",
"Special Cases",
"Inter-Annotator Agreement",
"Annotated Corpus Statistics",
"Comparison to the DeScript Corpus",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"697e318cbd3c0685caf6f8670044f74eeca2dd29"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"06fa905d7f2aaced6dc72e9511c71a2a51e8aead"
]
},
{
"annotation_id": [
"fccbfbfd1cb203422c01866dd2ef25ff342de6d1",
"31f0262a036f427ffe0c75ba54ab33d723ed818d"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 4: Inter-annotator agreement statistics.",
"In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 . Interestingly, if we calculated the Kappa only on the subset of cases that were annotated with script-specific event and participant labels by all annotators, results were better than those of the evaluation on all labeled instances (including also unrelated and related non-script events). This indicates one of the challenges of the annotation task: In many cases it is difficult to decide whether a particular event should be considered a central script event, or an event loosely related or unrelated to the script.",
"For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement."
],
"extractive_spans": [],
"free_form_answer": "For event types and participant types, there was a moderate to substantial level of agreement using the Fleiss' Kappa. For coreference chain annotation, there was average agreement of 90.5%.",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 4: Inter-annotator agreement statistics.",
" The results are shown in Figure 4 and indicate moderate to substantial agreement",
"For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 . Interestingly, if we calculated the Kappa only on the subset of cases that were annotated with script-specific event and participant labels by all annotators, results were better than those of the evaluation on all labeled instances (including also unrelated and related non-script events). This indicates one of the challenges of the annotation task: In many cases it is difficult to decide whether a particular event should be considered a central script event, or an event loosely related or unrelated to the script.",
"For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement.",
"FLOAT SELECTED: Figure 4: Inter-annotator agreement statistics."
],
"extractive_spans": [],
"free_form_answer": "Moderate agreement of 0.64-0.68 Fleiss’ Kappa over event type labels, 0.77 Fleiss’ Kappa over participant labels, and good agreement of 90.5% over coreference information.",
"highlighted_evidence": [
"The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 .",
"We take the result of 90.5% between annotators to be a good agreement.",
"FLOAT SELECTED: Figure 4: Inter-annotator agreement statistics."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"06fa905d7f2aaced6dc72e9511c71a2a51e8aead",
"4857c606a55a83454e8d81ffe17e05cf8bc4b75f"
]
},
{
"annotation_id": [
"f9ae2e4623e644564b6b0851573a5cd257eb2208"
],
"answer": [
{
"evidence": [
"We used the WebAnno annotation tool BIBREF2 for our project. The stories from each scenario were distributed among four different annotators. In a calibration phase, annotators were presented with some sample texts for test annotations; the results were discussed with the authors. Throughout the whole annotation phase, annotators could discuss any emerging issues with the authors. All annotations were done by undergraduate students of computational linguistics. The annotation was rather time-consuming due to the complexity of the task, and thus we decided for single annotation mode. To assess annotation quality, a small sample of texts was annotated by all four annotators and their inter-annotator agreement was measured (see Section \"Inter-Annotator Agreement\" ). It was found to be sufficiently high."
],
"extractive_spans": [
" four different annotators"
],
"free_form_answer": "",
"highlighted_evidence": [
"The stories from each scenario were distributed among four different annotators. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"06fa905d7f2aaced6dc72e9511c71a2a51e8aead"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What are the key points in the role of script knowledge that can be studied?",
"Did the annotators agreed and how much?",
"How many subjects have been used to create the annotations?"
],
"question_id": [
"352c081c93800df9654315e13a880d6387b91919",
"18fbf9c08075e3b696237d22473c463237d153f5",
"a37ef83ab6bcc6faff3c70a481f26174ccd40489"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: An excerpt from a story on the TAKING A BATH script.",
"Figure 2: Connecting DeScript and InScript: an example from the BAKING A CAKE scenario (InScript participant annotation is omitted for better readability).",
"Table 1: Bath scenario template (labels added in the second phase of annotation are marked in bold).",
"Table 2: Corpus statistics for different scenarios (standard deviation given in parentheses). The maximum per column is highlighted in boldface, the minimum in boldface italics.",
"Figure 3: Sample event and participant annotation for the TAKING A BATH script.",
"Figure 4: Inter-annotator agreement statistics.",
"Figure 5: The number of participants and events in the templates.",
"Figure 6: Annotation statistics over all scenarios.",
"Figure 8: Distribution of participants (left) and events (right) for the 1, the top 2-5, top 6-10 most frequently appearing events/participants, SCREV/SCRPART OTHER and the rest.",
"Figure 9: Average number of participant mentions for a story, for the first, the top 2-5, top 6-10 most frequently appearing events/participants, and the overall average.",
"Figure 7: The number of stories in the BAKING A CAKE scenario that contain a certain participant label.",
"Figure 10: MTLD values for DeScript and InScript, per scenario.",
"Figure 11: Entropy over verb lemmas for events (left y-axis, H(x)) in the GOING ON A TRAIN SCENARIO. Bars in the background indicate the absolute number of occurrence of instances (right y-axis, N(x))."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"2-Table1-1.png",
"3-Table2-1.png",
"4-Figure3-1.png",
"6-Figure4-1.png",
"6-Figure5-1.png",
"6-Figure6-1.png",
"7-Figure8-1.png",
"7-Figure9-1.png",
"7-Figure7-1.png",
"8-Figure10-1.png",
"8-Figure11-1.png"
]
} | [
"Did the annotators agreed and how much?"
] | [
[
"1703.05260-Inter-Annotator Agreement-1",
"1703.05260-6-Figure4-1.png",
"1703.05260-Inter-Annotator Agreement-0"
]
] | [
"Moderate agreement of 0.64-0.68 Fleiss’ Kappa over event type labels, 0.77 Fleiss’ Kappa over participant labels, and good agreement of 90.5% over coreference information."
] | 37 |
1905.00563 | Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications | Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data. Existing approaches, however, primarily focus on improving accuracy and overlook other aspects such as robustness and interpretability. In this paper, we propose adversarial modifications for link prediction models: identifying the fact to add into or remove from the knowledge graph that changes the prediction for a target fact after the model is retrained. Using these single modifications of the graph, we identify the most influential fact for a predicted link and evaluate the sensitivity of the model to the addition of fake facts. We introduce an efficient approach to estimate the effect of such modifications by approximating the change in the embeddings when the knowledge graph changes. To avoid the combinatorial search over all possible facts, we train a network to decode embeddings to their corresponding graph components, allowing the use of gradient-based optimization to identify the adversarial modification. We use these techniques to evaluate the robustness of link prediction models (by measuring sensitivity to additional facts), study interpretability through the facts most responsible for predictions (by identifying the most influential neighbors), and detect incorrect facts in the knowledge base. | {
"paragraphs": [
[
"Knowledge graphs (KG) play a critical role in many real-world applications such as search, structured data management, recommendations, and question answering. Since KGs often suffer from incompleteness and noise in their facts (links), a number of recent techniques have proposed models that embed each entity and relation into a vector space, and use these embeddings to predict facts. These dense representation models for link prediction include tensor factorization BIBREF0 , BIBREF1 , BIBREF2 , algebraic operations BIBREF3 , BIBREF4 , BIBREF5 , multiple embeddings BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , and complex neural models BIBREF10 , BIBREF11 . However, there are only a few studies BIBREF12 , BIBREF13 that investigate the quality of the different KG models. There is a need to go beyond just the accuracy on link prediction, and instead focus on whether these representations are robust and stable, and what facts they make use of for their predictions. In this paper, our goal is to design approaches that minimally change the graph structure such that the prediction of a target fact changes the most after the embeddings are relearned, which we collectively call Completion Robustness and Interpretability via Adversarial Graph Edits (). First, we consider perturbations that red!50!blackremove a neighboring link for the target fact, thus identifying the most influential related fact, providing an explanation for the model's prediction. As an example, consider the excerpt from a KG in Figure 1 with two observed facts, and a target predicted fact that Princes Henriette is the parent of Violante Bavaria. Our proposed graph perturbation, shown in Figure 1 , identifies the existing fact that Ferdinal Maria is the father of Violante Bavaria as the one when removed and model retrained, will change the prediction of Princes Henriette's child. We also study attacks that green!50!blackadd a new, fake fact into the KG to evaluate the robustness and sensitivity of link prediction models to small additions to the graph. An example attack for the original graph in Figure 1 , is depicted in Figure 1 . Such perturbations to the the training data are from a family of adversarial modifications that have been applied to other machine learning tasks, known as poisoning BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 .",
"Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model, to decode the embeddings to their corresponding graph components, making the search of facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\\%$ and $50.7\\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\\%$ accuracy in detecting errors."
],
[
"In this section, we briefly introduce some notations, and existing relational embedding approaches that model knowledge graph completion using dense vectors. In KGs, facts are represented using triples of subject, relation, and object, $\\langle s, r, o\\rangle $ , where $s,o\\in \\xi $ , the set of entities, and $r\\in $ , the set of relations. To model the KG, a scoring function $\\psi :\\xi \\times \\times \\xi \\rightarrow $ is learned to evaluate whether any given fact is true. In this work, we focus on multiplicative models of link prediction, specifically DistMult BIBREF2 because of its simplicity and popularity, and ConvE BIBREF10 because of its high accuracy. We can represent the scoring function of such methods as $\\psi (s,r,o) = , ) \\cdot $ , where $,,\\in ^d$ are embeddings of the subject, relation, and object respectively. In DistMult, $, ) = \\odot $ , where $\\odot $ is element-wise multiplication operator. Similarly, in ConvE, $, )$ is computed by a convolution on the concatenation of $$ and $s,o\\in \\xi $0 .",
"We use the same setup as BIBREF10 for training, i.e., incorporate binary cross-entropy loss over the triple scores. In particular, for subject-relation pairs $(s,r)$ in the training data $G$ , we use binary $y^{s,r}_o$ to represent negative and positive facts. Using the model's probability of truth as $\\sigma (\\psi (s,r,o))$ for $\\langle s,r,o\\rangle $ , the loss is defined as: (G) = (s,r)o ys,ro(((s,r,o)))",
"+ (1-ys,ro)(1 - ((s,r,o))). Gradient descent is used to learn the embeddings $,,$ , and the parameters of $, if any.\n$ "
],
[
"For adversarial modifications on KGs, we first define the space of possible modifications. For a target triple $\\langle s, r, o\\rangle $ , we constrain the possible triples that we can remove (or inject) to be in the form of $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ i.e $s^{\\prime }$ and $r^{\\prime }$ may be different from the target, but the object is not. We analyze other forms of modifications such as $\\langle s, r^{\\prime }, o^{\\prime }\\rangle $ and $\\langle s, r^{\\prime }, o\\rangle $ in appendices \"Modifications of the Form 〈s,r ' ,o ' 〉\\langle s, r^{\\prime }, o^{\\prime } \\rangle \" and \"Modifications of the Form 〈s,r ' ,o〉\\langle s, r^{\\prime }, o \\rangle \" , and leave empirical evaluation of these modifications for future work."
],
[
"For explaining a target prediction, we are interested in identifying the observed fact that has the most influence (according to the model) on the prediction. We define influence of an observed fact on the prediction as the change in the prediction score if the observed fact was not present when the embeddings were learned. Previous work have used this concept of influence similarly for several different tasks BIBREF19 , BIBREF20 . Formally, for the target triple ${s,r,o}$ and observed graph $G$ , we want to identify a neighboring triple ${s^{\\prime },r^{\\prime },o}\\in G$ such that the score $\\psi (s,r,o)$ when trained on $G$ and the score $\\overline{\\psi }(s,r,o)$ when trained on $G-\\lbrace {s^{\\prime },r^{\\prime },o}\\rbrace $ are maximally different, i.e. *argmax(s', r')Nei(o) (s',r')(s,r,o) where $\\Delta _{(s^{\\prime },r^{\\prime })}(s,r,o)=\\psi (s, r, o)-\\overline{\\psi }(s,r,o)$ , and $\\text{Nei}(o)=\\lbrace (s^{\\prime },r^{\\prime })|\\langle s^{\\prime },r^{\\prime },o \\rangle \\in G \\rbrace $ ."
],
[
"We are also interested in investigating the robustness of models, i.e., how sensitive are the predictions to small additions to the knowledge graph. Specifically, for a target prediction ${s,r,o}$ , we are interested in identifying a single fake fact ${s^{\\prime },r^{\\prime },o}$ that, when added to the knowledge graph $G$ , changes the prediction score $\\psi (s,r,o)$ the most. Using $\\overline{\\psi }(s,r,o)$ as the score after training on $G\\cup \\lbrace {s^{\\prime },r^{\\prime },o}\\rbrace $ , we define the adversary as: *argmax(s', r') (s',r')(s,r,o) where $\\Delta _{(s^{\\prime },r^{\\prime })}(s,r,o)=\\psi (s, r, o)-\\overline{\\psi }(s,r,o)$ . The search here is over any possible $s^{\\prime }\\in \\xi $ , which is often in the millions for most real-world KGs, and $r^{\\prime }\\in $ . We also identify adversaries that increase the prediction score for specific false triple, i.e., for a target fake fact ${s,r,o}$ , the adversary is ${s^{\\prime },r^{\\prime },o}$0 , where ${s^{\\prime },r^{\\prime },o}$1 is defined as before."
],
[
"There are a number of crucial challenges when conducting such adversarial attack on KGs. First, evaluating the effect of changing the KG on the score of the target fact ( $\\overline{\\psi }(s,r,o)$ ) is expensive since we need to update the embeddings by retraining the model on the new graph; a very time-consuming process that is at least linear in the size of $G$ . Second, since there are many candidate facts that can be added to the knowledge graph, identifying the most promising adversary through search-based methods is also expensive. Specifically, the search size for unobserved facts is $|\\xi | \\times ||$ , which, for example in YAGO3-10 KG, can be as many as $4.5 M$ possible facts for a single target prediction."
],
[
"In this section, we propose algorithms to address mentioned challenges by (1) approximating the effect of changing the graph on a target prediction, and (2) using continuous optimization for the discrete search over potential modifications."
],
[
"We first study the addition of a fact to the graph, and then extend it to cover removal as well. To capture the effect of an adversarial modification on the score of a target triple, we need to study the effect of the change on the vector representations of the target triple. We use $$ , $$ , and $$ to denote the embeddings of $s,r,o$ at the solution of $\\operatornamewithlimits{argmin} (G)$ , and when considering the adversarial triple $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ , we use $$ , $$ , and $$ for the new embeddings of $s,r,o$ , respectively. Thus $$0 is a solution to $$1 , which can also be written as $$2 . Similarly, $$3 s', r', o $$4 $$5 $$6 $$7 o $$8 $$9 $$0 $$1 $$2 $$3 O(n3) $$4 $$5 $$6 (s,r,o)-(s, r, o) $$7 - $$8 s, r = ,) $$9 - $s,r,o$0 (G)= (G)+(s', r', o ) $s,r,o$1 $s,r,o$2 s', r' = ',') $s,r,o$3 = ((s',r',o)) $s,r,o$4 eo (G)=0 $s,r,o$5 eo (G) $s,r,o$6 Ho $s,r,o$7 dd $s,r,o$8 o $s,r,o$9 $\\operatornamewithlimits{argmin} (G)$0 - $\\operatornamewithlimits{argmin} (G)$1 -= $\\operatornamewithlimits{argmin} (G)$2 Ho $\\operatornamewithlimits{argmin} (G)$3 Ho + (1-) s',r's',r' $\\operatornamewithlimits{argmin} (G)$4 Ho $\\operatornamewithlimits{argmin} (G)$5 dd $\\operatornamewithlimits{argmin} (G)$6 d $\\operatornamewithlimits{argmin} (G)$7 s,r,s',r'd $\\operatornamewithlimits{argmin} (G)$8 s, r, o $\\operatornamewithlimits{argmin} (G)$9 s', r', o $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $0 $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $1 $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $2 "
],
[
"Using the approximations provided in the previous section, Eq. () and (), we can use brute force enumeration to find the adversary $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ . This approach is feasible when removing an observed triple since the search space of such modifications is usually small; it is the number of observed facts that share the object with the target. On the other hand, finding the most influential unobserved fact to add requires search over a much larger space of all possible unobserved facts (that share the object). Instead, we identify the most influential unobserved fact $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ by using a gradient-based algorithm on vector $_{s^{\\prime },r^{\\prime }}$ in the embedding space (reminder, $_{s^{\\prime },r^{\\prime }}=^{\\prime },^{\\prime })$ ), solving the following continuous optimization problem in $^d$ : *argmaxs', r' (s',r')(s,r,o). After identifying the optimal $_{s^{\\prime }, r^{\\prime }}$ , we still need to generate the pair $(s^{\\prime },r^{\\prime })$ . We design a network, shown in Figure 2 , that maps the vector $_{s^{\\prime },r^{\\prime }}$ to the entity-relation space, i.e., translating it into $(s^{\\prime },r^{\\prime })$ . In particular, we train an auto-encoder where the encoder is fixed to receive the $s$ and $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $0 as one-hot inputs, and calculates $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $1 in the same way as the DistMult and ConvE encoders respectively (using trained embeddings). The decoder is trained to take $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $2 as input and produce $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $3 and $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $4 , essentially inverting $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $5 s, r $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $6 s $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $7 r $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $8 s, r $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $9 We evaluate the performance of our inverter networks (one for each model/dataset) on correctly recovering the pairs of subject and relation from the test set of our benchmarks, given the $_{s^{\\prime },r^{\\prime }}$0 . The accuracy of recovered pairs (and of each argument) is given in Table 1 . As shown, our networks achieve a very high accuracy, demonstrating their ability to invert vectors $_{s^{\\prime },r^{\\prime }}$1 to $_{s^{\\prime },r^{\\prime }}$2 pairs."
],
[
"We evaluate by ( \"Influence Function vs \" ) comparing estimate with the actual effect of the attacks, ( \"Robustness of Link Prediction Models\" ) studying the effect of adversarial attacks on evaluation metrics, ( \"Interpretability of Models\" ) exploring its application to the interpretability of KG representations, and ( \"Finding Errors in Knowledge Graphs\" ) detecting incorrect triples."
],
[
"To evaluate the quality of our approximations and compare with influence function (IF), we conduct leave one out experiments. In this setup, we take all the neighbors of a random target triple as candidate modifications, remove them one at a time, retrain the model each time, and compute the exact change in the score of the target triple. We can use the magnitude of this change in score to rank the candidate triples, and compare this exact ranking with ranking as predicted by: , influence function with and without Hessian matrix, and the original model score (with the intuition that facts that the model is most confident of will have the largest impact when removed). Similarly, we evaluate by considering 200 random triples that share the object entity with the target sample as candidates, and rank them as above. The average results of Spearman's $\\rho $ and Kendall's $\\tau $ rank correlation coefficients over 10 random target samples is provided in Table 3 . performs comparably to the influence function, confirming that our approximation is accurate. Influence function is slightly more accurate because they use the complete Hessian matrix over all the parameters, while we only approximate the change by calculating the Hessian over $$ . The effect of this difference on scalability is dramatic, constraining IF to very small graphs and small embedding dimensionality ( $d\\le 10$ ) before we run out of memory. In Figure 3 , we show the time to compute a single adversary by IF compared to , as we steadily grow the number of entities (randomly chosen subgraphs), averaged over 10 random triples. As it shows, is mostly unaffected by the number of entities while IF increases quadratically. Considering that real-world KGs have tens of thousands of times more entities, making IF unfeasible for them."
],
[
"Now we evaluate the effectiveness of to successfully attack link prediction by adding false facts. The goal here is to identify the attacks for triples in the test data, and measuring their effect on MRR and Hits@ metrics (ranking evaluations) after conducting the attack and retraining the model.",
"Since this is the first work on adversarial attacks for link prediction, we introduce several baselines to compare against our method. For finding the adversarial fact to add for the target triple $\\langle s, r, o \\rangle $ , we consider two baselines: 1) choosing a random fake fact $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ (Random Attack); 2) finding $(s^{\\prime }, r^{\\prime })$ by first calculating $, )$ and then feeding $-, )$ to the decoder of the inverter function (Opposite Attack). In addition to , we introduce two other alternatives of our method: (1) , that uses to increase the score of fake fact over a test triple, i.e., we find the fake fact the model ranks second after the test triple, and identify the adversary for them, and (2) that selects between and attacks based on which has a higher estimated change in score.",
"All-Test The result of the attack on all test facts as targets is provided in the Table 4 . outperforms the baselines, demonstrating its ability to effectively attack the KG representations. It seems DistMult is more robust against random attacks, while ConvE is more robust against designed attacks. is more effective than since changing the score of a fake fact is easier than of actual facts; there is no existing evidence to support fake facts. We also see that YAGO3-10 models are more robust than those for WN18. Looking at sample attacks (provided in Appendix \"Sample Adversarial Attacks\" ), mostly tries to change the type of the target object by associating it with a subject and a relation for a different entity type.",
"Uncertain-Test To better understand the effect of attacks, we consider a subset of test triples that 1) the model predicts correctly, 2) difference between their scores and the negative sample with the highest score is minimum. This “Uncertain-Test” subset contains 100 triples from each of the original test sets, and we provide results of attacks on this data in Table 4 . The attacks are much more effective in this scenario, causing a considerable drop in the metrics. Further, in addition to significantly outperforming other baselines, they indicate that ConvE's confidence is much more robust.",
"Relation Breakdown We perform additional analysis on the YAGO3-10 dataset to gain a deeper understanding of the performance of our model. As shown in Figure 4 , both DistMult and ConvE provide a more robust representation for isAffiliatedTo and isConnectedTo relations, demonstrating the confidence of models in identifying them. Moreover, the affects DistMult more in playsFor and isMarriedTo relations while affecting ConvE more in isConnectedTo relations.",
"Examples Sample adversarial attacks are provided in Table 5 . attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity types."
],
[
"To be able to understand and interpret why a link is predicted using the opaque, dense embeddings, we need to find out which part of the graph was most influential on the prediction. To provide such explanations for each predictions, we identify the most influential fact using . Instead of focusing on individual predictions, we aggregate the explanations over the whole dataset for each relation using a simple rule extraction technique: we find simple patterns on subgraphs that surround the target triple and the removed fact from , and appear more than $90\\%$ of the time. We only focus on extracting length-2 horn rules, i.e., $R_1(a,c)\\wedge R_2(c,b)\\Rightarrow R(a,b)$ , where $R(a,b)$ is the target and $R_2(c,b)$ is the removed fact. Table 6 shows extracted YAGO3-10 rules that are common to both models, and ones that are not. The rules show several interesting inferences, such that hasChild is often inferred via married parents, and isLocatedIn via transitivity. There are several differences in how the models reason as well; DistMult often uses the hasCapital as an intermediate step for isLocatedIn, while ConvE incorrectly uses isNeighbor. We also compare against rules extracted by BIBREF2 for YAGO3-10 that utilizes the structure of DistMult: they require domain knowledge on types and cannot be applied to ConvE. Interestingly, the extracted rules contain all the rules provided by , demonstrating that can be used to accurately interpret models, including ones that are not interpretable, such as ConvE. These are preliminary steps toward interpretability of link prediction models, and we leave more analysis of interpretability to future work."
],
[
"Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ in the neighborhood of the train triple $\\langle s, r, o\\rangle $ , we need to find the triple $\\langle s^{\\prime },r^{\\prime },o\\rangle $ that results in the least change $\\Delta _{(s^{\\prime },r^{\\prime })}(s,r,o)$ when removed from the graph.",
"To evaluate this application, we inject random triples into the graph, and measure the ability of to detect the errors using our optimization. We consider two types of incorrect triples: 1) incorrect triples in the form of $\\langle s^{\\prime }, r, o\\rangle $ where $s^{\\prime }$ is chosen randomly from all of the entities, and 2) incorrect triples in the form of $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ where $s^{\\prime }$ and $r^{\\prime }$ are chosen randomly. We choose 100 random triples from the observed graph, and for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood. Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts. The result of choosing the neighbor with the least influence on the target is provided in the Table 7 . When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that outperforms both of these with a considerable gap, obtaining an accuracy of $42\\%$ and $55\\%$ in detecting errors."
],
[
"Learning relational knowledge representations has been a focus of active research in the past few years, but to the best of our knowledge, this is the first work on conducting adversarial modifications on the link prediction task. Knowledge graph embedding There is a rich literature on representing knowledge graphs in vector spaces that differ in their scoring functions BIBREF21 , BIBREF22 , BIBREF23 . Although is primarily applicable to multiplicative scoring functions BIBREF0 , BIBREF1 , BIBREF2 , BIBREF24 , these ideas apply to additive scoring functions BIBREF18 , BIBREF6 , BIBREF7 , BIBREF25 as well, as we show in Appendix \"First-order Approximation of the Change For TransE\" .",
"Furthermore, there is a growing body of literature that incorporates an extra types of evidence for more informed embeddings such as numerical values BIBREF26 , images BIBREF27 , text BIBREF28 , BIBREF29 , BIBREF30 , and their combinations BIBREF31 . Using , we can gain a deeper understanding of these methods, especially those that build their embeddings wit hmultiplicative scoring functions.",
"Interpretability and Adversarial Modification There has been a significant recent interest in conducting an adversarial attacks on different machine learning models BIBREF16 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 to attain the interpretability, and further, evaluate the robustness of those models. BIBREF20 uses influence function to provide an approach to understanding black-box models by studying the changes in the loss occurring as a result of changes in the training data. In addition to incorporating their established method on KGs, we derive a novel approach that differs from their procedure in two ways: (1) instead of changes in the loss, we consider the changes in the scoring function, which is more appropriate for KG representations, and (2) in addition to searching for an attack, we introduce a gradient-based method that is much faster, especially for “adding an attack triple” (the size of search space make the influence function method infeasible). Previous work has also considered adversaries for KGs, but as part of training to improve their representation of the graph BIBREF37 , BIBREF38 . Adversarial Attack on KG Although this is the first work on adversarial attacks for link prediction, there are two approaches BIBREF39 , BIBREF17 that consider the task of adversarial attack on graphs. There are a few fundamental differences from our work: (1) they build their method on top of a path-based representations while we focus on embeddings, (2) they consider node classification as the target of their attacks while we attack link prediction, and (3) they conduct the attack on small graphs due to restricted scalability, while the complexity of our method does not depend on the size of the graph, but only the neighborhood, allowing us to attack real-world graphs."
],
[
"Motivated by the need to analyze the robustness and interpretability of link prediction models, we present a novel approach for conducting adversarial modifications to knowledge graphs. We introduce , completion robustness and interpretability via adversarial graph edits: identifying the fact to add into or remove from the KG that changes the prediction for a target fact. uses (1) an estimate of the score change for any target triple after adding or removing another fact, and (2) a gradient-based algorithm for identifying the most influential modification. We show that can effectively reduce ranking metrics on link prediction models upon applying the attack triples. Further, we incorporate the to study the interpretability of KG representations by summarizing the most influential facts for each relation. Finally, using , we introduce a novel automated error detection method for knowledge graphs. We have release the open-source implementation of our models at: https://pouyapez.github.io/criage."
],
[
"We would like to thank Matt Gardner, Marco Tulio Ribeiro, Zhengli Zhao, Robert L. Logan IV, Dheeru Dua and the anonymous reviewers for their detailed feedback and suggestions. This work is supported in part by Allen Institute for Artificial Intelligence (AI2) and in part by NSF awards #IIS-1817183 and #IIS-1756023. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies."
],
[
"We approximate the change on the score of the target triple upon applying attacks other than the $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ ones. Since each relation appears many times in the training triples, we can assume that applying a single attack will not considerably affect the relations embeddings. As a result, we just need to study the attacks in the form of $\\langle s, r^{\\prime }, o \\rangle $ and $\\langle s, r^{\\prime }, o^{\\prime } \\rangle $ . Defining the scoring function as $\\psi (s,r,o) = , ) \\cdot = _{s,r} \\cdot $ , we further assume that $\\psi (s,r,o) =\\cdot (, ) =\\cdot _{r,o}$ ."
],
[
"Using similar argument as the attacks in the form of $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ , we can calculate the effect of the attack, $\\overline{\\psi }{(s,r,o)}-\\psi (s, r, o)$ as: (s,r,o)-(s, r, o)=(-) s, r where $_{s, r} = (,)$ .",
"We now derive an efficient computation for $(-)$ . First, the derivative of the loss $(\\overline{G})= (G)+(\\langle s, r^{\\prime }, o^{\\prime } \\rangle )$ over $$ is: es (G) = es (G) - (1-) r', o' where $_{r^{\\prime }, o^{\\prime }} = (^{\\prime },^{\\prime })$ , and $\\varphi = \\sigma (\\psi (s,r^{\\prime },o^{\\prime }))$ . At convergence, after retraining, we expect $\\nabla _{e_s} (\\overline{G})=0$ . We perform first order Taylor approximation of $\\nabla _{e_s} (\\overline{G})$ to get: 0 - (1-)r',o'+",
"(Hs+(1-)r',o' r',o')(-) where $H_s$ is the $d\\times d$ Hessian matrix for $s$ , i.e. second order derivative of the loss w.r.t. $$ , computed sparsely. Solving for $-$ gives us: -=",
"(1-) (Hs + (1-) r',o'r',o')-1 r',o' In practice, $H_s$ is positive definite, making $H_s + \\varphi (1-\\varphi ) _{r^{\\prime },o^{\\prime }}^\\intercal _{r^{\\prime },o^{\\prime }}$ positive definite as well, and invertible. Then, we compute the score change as: (s,r,o)-(s, r, o)= r,o (-) =",
" ((1-) (Hs + (1-) r',o'r',o')-1 r',o')r,o."
],
[
"In this section we approximate the effect of attack in the form of $\\langle s, r^{\\prime }, o \\rangle $ . In contrast to $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ attacks, for this scenario we need to consider the change in the $$ , upon applying the attack, in approximation of the change in the score as well. Using previous results, we can approximate the $-$ as: -=",
"(1-) (Ho + (1-) s,r's,r')-1 s,r' and similarly, we can approximate $-$ as: -=",
" (1-) (Hs + (1-) r',or',o)-1 r',o where $H_s$ is the Hessian matrix over $$ . Then using these approximations: s,r(-) =",
" s,r ((1-) (Ho + (1-) s,r's,r')-1 s,r') and: (-) r,o=",
" ((1-) (Hs + (1-) r',or',o)-1 r',o) r,o and then calculate the change in the score as: (s,r,o)-(s, r, o)=",
" s,r.(-) +(-).r,o =",
" s,r ((1-) (Ho + (1-) s,r's,r')-1 s,r')+",
" ((1-) (Hs + (1-) r',or',o)-1 r',o) r, o"
],
[
"In here we derive the approximation of the change in the score upon applying an adversarial modification for TransE BIBREF18 . Using similar assumptions and parameters as before, to calculate the effect of the attack, $\\overline{\\psi }{(s,r,o)}$ (where $\\psi {(s,r,o)}=|+-|$ ), we need to compute $$ . To do so, we need to derive an efficient computation for $$ . First, the derivative of the loss $(\\overline{G})= (G)+(\\langle s^{\\prime }, r^{\\prime }, o \\rangle )$ over $$ is: eo (G) = eo (G) + (1-) s', r'-(s',r',o) where $_{s^{\\prime }, r^{\\prime }} = ^{\\prime }+ ^{\\prime }$ , and $\\varphi = \\sigma (\\psi (s^{\\prime },r^{\\prime },o))$ . At convergence, after retraining, we expect $\\nabla _{e_o} (\\overline{G})=0$ . We perform first order Taylor approximation of $\\nabla _{e_o} (\\overline{G})$ to get: 0",
" (1-) (s', r'-)(s',r',o)+(Ho - Hs',r',o)(-)",
" Hs',r',o = (1-)(s', r'-)(s', r'-)(s',r',o)2+",
" 1-(s',r',o)-(1-) (s', r'-)(s', r'-)(s',r',o)3 where $H_o$ is the $d\\times d$ Hessian matrix for $o$ , i.e., second order derivative of the loss w.r.t. $$ , computed sparsely. Solving for $$ gives us: = -(1-) (Ho - Hs',r',o)-1 (s', r'-)(s',r',o)",
" + Then, we compute the score change as: (s,r,o)= |+-|",
"= |++(1-) (Ho - Hs',r',o)-1",
" (s', r'-)(s',r',o) - |",
"Calculating this expression is efficient since $H_o$ is a $d\\times d$ matrix."
],
[
"In this section, we provide the output of the for some target triples. Sample adversarial attacks are provided in Table 5 . As it shows, attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity types."
]
],
"section_name": [
"Introduction",
"Background and Notation",
"Completion Robustness and Interpretability via Adversarial Graph Edits ()",
"Removing a fact ()",
"Adding a new fact ()",
"Challenges",
"Efficiently Identifying the Modification",
"First-order Approximation of Influence",
"Continuous Optimization for Search",
"Experiments",
"Influence Function vs ",
"Robustness of Link Prediction Models",
"Interpretability of Models",
"Finding Errors in Knowledge Graphs",
"Related Work",
"Conclusions",
"Acknowledgements",
"Appendix",
"Modifications of the Form 〈s,r ' ,o ' 〉\\langle s, r^{\\prime }, o^{\\prime } \\rangle ",
"Modifications of the Form 〈s,r ' ,o〉\\langle s, r^{\\prime }, o \\rangle ",
"First-order Approximation of the Change For TransE",
"Sample Adversarial Attacks"
]
} | {
"answers": [
{
"annotation_id": [
"8f1f61837454d9f482cd81ea51f1eabd07870b6f",
"a922089b7e48e898c731a414d8b871e45fc72666"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Data Statistics of the benchmarks."
],
"extractive_spans": [],
"free_form_answer": " Kinship and Nations knowledge graphs, YAGO3-10 and WN18KGs knowledge graphs ",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Data Statistics of the benchmarks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model, to decode the embeddings to their corresponding graph components, making the search of facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\\%$ and $50.7\\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\\%$ accuracy in detecting errors."
],
"extractive_spans": [
"WN18 and YAGO3-10"
],
"free_form_answer": "",
"highlighted_evidence": [
"WN18 and YAGO3-10",
"Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\\%$ and $50.7\\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"ea95d6212fa8ce6e137058f83fa16c11f6c1c871"
],
"answer": [
{
"evidence": [
"Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ in the neighborhood of the train triple $\\langle s, r, o\\rangle $ , we need to find the triple $\\langle s^{\\prime },r^{\\prime },o\\rangle $ that results in the least change $\\Delta _{(s^{\\prime },r^{\\prime })}(s,r,o)$ when removed from the graph."
],
"extractive_spans": [
"if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. "
],
"free_form_answer": "",
"highlighted_evidence": [
"if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data.",
"Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ in the neighborhood of the train triple $\\langle s, r, o\\rangle $ , we need to find the triple $\\langle s^{\\prime },r^{\\prime },o\\rangle $ that results in the least change $\\Delta _{(s^{\\prime },r^{\\prime })}(s,r,o)$ when removed from the graph."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"71d59a65743aca17c4b889d73bece4a6fac89739"
],
"answer": [
{
"evidence": [
"To evaluate this application, we inject random triples into the graph, and measure the ability of to detect the errors using our optimization. We consider two types of incorrect triples: 1) incorrect triples in the form of $\\langle s^{\\prime }, r, o\\rangle $ where $s^{\\prime }$ is chosen randomly from all of the entities, and 2) incorrect triples in the form of $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ where $s^{\\prime }$ and $r^{\\prime }$ are chosen randomly. We choose 100 random triples from the observed graph, and for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood. Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts. The result of choosing the neighbor with the least influence on the target is provided in the Table 7 . When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that outperforms both of these with a considerable gap, obtaining an accuracy of $42\\%$ and $55\\%$ in detecting errors."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that outperforms both of these with a considerable gap, obtaining an accuracy of $42\\%$ and $55\\%$ in detecting errors."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What datasets are used to evaluate this approach?",
"How is this approach used to detect incorrect facts?",
"Can this adversarial approach be used to directly improve model accuracy?"
],
"question_id": [
"bc9c31b3ce8126d1d148b1025c66f270581fde10",
"185841e979373808d99dccdade5272af02b98774",
"d427e3d41c4c9391192e249493be23926fc5d2e9"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"link prediction",
"link prediction",
"link prediction"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE): Change in the graph structure that changes the prediction of the retrained model, where (a) is the original sub-graph of the KG, (b) removes a neighboring link of the target, resulting in a change in the prediction, and (c) shows the effect of adding an attack triple on the target. These modifications were identified by our proposed approach.",
"Figure 2: Inverter Network The architecture of our inverter function that translate zs,r to its respective (s̃, r̃). The encoder component is fixed to be the encoder network of DistMult and ConvE respectively.",
"Table 1: Inverter Functions Accuracy, we calculate the accuracy of our inverter networks in correctly recovering the pairs of subject and relation from the test set of our benchmarks.",
"Table 2: Data Statistics of the benchmarks.",
"Figure 3: Influence function vs CRIAGE. We plot the average time (over 10 facts) of influence function (IF) and CRIAGE to identify an adversary as the number of entities in the Kinship KG is varied (by randomly sampling subgraphs of the KG). Even with small graphs and dimensionality, IF quickly becomes impractical.",
"Table 3: Ranking modifications by their impact on the target. We compare the true ranking of candidate triples with a number of approximations using ranking correlation coefficients. We compare our method with influence function (IF) with and without Hessian, and ranking the candidates based on their score, on two KGs (d = 10, averaged over 10 random targets). For the sake of brevity, we represent the Spearman’s ρ and Kendall’s τ rank correlation coefficients simply as ρ and τ .",
"Table 4: Robustness of Representation Models, the effect of adversarial attack on link prediction task. We consider two scenario for the target triples, 1) choosing the whole test dataset as the targets (All-Test) and 2) choosing a subset of test data that models are uncertain about them (Uncertain-Test).",
"Figure 4: Per-Relation Breakdown showing the effect of CRIAGE-Add on different relations in YAGO3-10.",
"Table 5: Extracted Rules for identifying the most influential link. We extract the patterns that appear more than 90% times in the neighborhood of the target triple. The output of CRIAGE-Remove is presented in red.",
"Table 6: Error Detection Accuracy in the neighborhood of 100 chosen samples. We choose the neighbor with the least value of ∆(s′,r′)(s, r, o) as the incorrect fact. This experiment assumes we know each target fact has exactly one error.",
"Table 7: Top adversarial triples for target samples."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"5-Figure3-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Figure4-1.png",
"7-Table5-1.png",
"8-Table6-1.png",
"12-Table7-1.png"
]
} | [
"What datasets are used to evaluate this approach?"
] | [
[
"1905.00563-5-Table2-1.png",
"1905.00563-Introduction-1"
]
] | [
" Kinship and Nations knowledge graphs, YAGO3-10 and WN18KGs knowledge graphs "
] | 38 |
2002.11893 | CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset | To advance multi-domain (cross-domain) dialogue modeling as well as alleviate the shortage of Chinese task-oriented datasets, we propose CrossWOZ, the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals that favor inter-domain dependency and encourage natural transition across domains in conversation. We also provide a user simulator and several benchmark models for pipelined task-oriented dialogue systems, which will facilitate researchers to compare and evaluate their models on this corpus. The large size and rich annotation of CrossWOZ make it suitable to investigate a variety of tasks in cross-domain dialogue modeling, such as dialogue state tracking, policy learning, user simulation, etc. | {
"paragraphs": [
[
"Recently, there have been a variety of task-oriented dialogue models thanks to the prosperity of neural architectures BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the research is still largely limited by the availability of large-scale high-quality dialogue data. Many corpora have advanced the research of task-oriented dialogue systems, most of which are single domain conversations, including ATIS BIBREF6, DSTC 2 BIBREF7, Frames BIBREF8, KVRET BIBREF9, WOZ 2.0 BIBREF10 and M2M BIBREF11.",
"Despite the significant contributions to the community, these datasets are still limited in size, language variation, or task complexity. Furthermore, there is a gap between existing dialogue corpora and real-life human dialogue data. In real-life conversations, it is natural for humans to transition between different domains or scenarios while still maintaining coherent contexts. Thus, real-life dialogues are much more complicated than those dialogues that are only simulated within a single domain. To address this issue, some multi-domain corpora have been proposed BIBREF12, BIBREF13. The most notable corpus is MultiWOZ BIBREF12, a large-scale multi-domain dataset which consists of crowdsourced human-to-human dialogues. It contains 10K dialogue sessions and 143K utterances for 7 domains, with annotation of system-side dialogue states and dialogue acts. However, the state annotations are noisy BIBREF14, and user-side dialogue acts are missing. The dependency across domains is simply embodied in imposing the same pre-specified constraints on different domains, such as requiring both a hotel and an attraction to locate in the center of the town.",
"In comparison to the abundance of English dialogue data, surprisingly, there is still no widely recognized Chinese task-oriented dialogue corpus. In this paper, we propose CrossWOZ, a large-scale Chinese multi-domain (cross-domain) task-oriented dialogue dataset. An dialogue example is shown in Figure FIGREF1. We compare CrossWOZ to other corpora in Table TABREF5 and TABREF6. Our dataset has the following features comparing to other corpora (particularly MultiWOZ BIBREF12):",
"The dependency between domains is more challenging because the choice in one domain will affect the choices in related domains in CrossWOZ. As shown in Figure FIGREF1 and Table TABREF6, the hotel must be near the attraction chosen by the user in previous turns, which requires more accurate context understanding.",
"It is the first Chinese corpus that contains large-scale multi-domain task-oriented dialogues, consisting of 6K sessions and 102K utterances for 5 domains (attraction, restaurant, hotel, metro, and taxi).",
"Annotation of dialogue states and dialogue acts is provided for both the system side and user side. The annotation of user states enables us to track the conversation from the user's perspective and can empower the development of more elaborate user simulators.",
"In this paper, we present the process of dialogue collection and provide detailed data analysis of the corpus. Statistics show that our cross-domain dialogues are complicated. To facilitate model comparison, benchmark models are provided for different modules in pipelined task-oriented dialogue systems, including natural language understanding, dialogue state tracking, dialogue policy learning, and natural language generation. We also provide a user simulator, which will facilitate the development and evaluation of dialogue models on this corpus. The corpus and the benchmark models are publicly available at https://github.com/thu-coai/CrossWOZ."
],
[
"According to whether the dialogue agent is human or machine, we can group the collection methods of existing task-oriented dialogue datasets into three categories. The first one is human-to-human dialogues. One of the earliest and well-known ATIS dataset BIBREF6 used this setting, followed by BIBREF8, BIBREF9, BIBREF10, BIBREF15, BIBREF16 and BIBREF12. Though this setting requires many human efforts, it can collect natural and diverse dialogues. The second one is human-to-machine dialogues, which need a ready dialogue system to converse with humans. The famous Dialogue State Tracking Challenges provided a set of human-to-machine dialogue data BIBREF17, BIBREF7. The performance of the dialogue system will largely influence the quality of dialogue data. The third one is machine-to-machine dialogues. It needs to build both user and system simulators to generate dialogue outlines, then use templates BIBREF3 to generate dialogues or further employ people to paraphrase the dialogues to make them more natural BIBREF11, BIBREF13. It needs much less human effort. However, the complexity and diversity of dialogue policy are limited by the simulators. To explore dialogue policy in multi-domain scenarios, and to collect natural and diverse dialogues, we resort to the human-to-human setting.",
"Most of the existing datasets only involve single domain in one dialogue, except MultiWOZ BIBREF12 and Schema BIBREF13. MultiWOZ dataset has attracted much attention recently, due to its large size and multi-domain characteristics. It is at least one order of magnitude larger than previous datasets, amounting to 8,438 dialogues and 115K turns in the training set. It greatly promotes the research on multi-domain dialogue modeling, such as policy learning BIBREF18, state tracking BIBREF19, and context-to-text generation BIBREF20. Recently the Schema dataset is collected in a machine-to-machine fashion, resulting in 16,142 dialogues and 330K turns for 16 domains in the training set. However, the multi-domain dependency in these two datasets is only embodied in imposing the same pre-specified constraints on different domains, such as requiring a restaurant and an attraction to locate in the same area, or the city of a hotel and the destination of a flight to be the same (Table TABREF6).",
"Table TABREF5 presents a comparison between our dataset with other task-oriented datasets. In comparison to MultiWOZ, our dataset has a comparable scale: 5,012 dialogues and 84K turns in the training set. The average number of domains and turns per dialogue are larger than those of MultiWOZ, which indicates that our task is more complex. The cross-domain dependency in our dataset is natural and challenging. For example, as shown in Table TABREF6, the system needs to recommend a hotel near the attraction chosen by the user in previous turns. Thus, both system recommendation and user selection will dynamically impact the dialogue. We also allow the same domain to appear multiple times in a user goal since a tourist may want to go to more than one attraction.",
"To better track the conversation flow and model user dialogue policy, we provide annotation of user states in addition to system states and dialogue acts. While the system state tracks the dialogue history, the user state is maintained by the user and indicates whether the sub-goals have been completed, which can be used to predict user actions. This information will facilitate the construction of the user simulator.",
"To the best of our knowledge, CrossWOZ is the first large-scale Chinese dataset for task-oriented dialogue systems, which will largely alleviate the shortage of Chinese task-oriented dialogue corpora that are publicly available."
],
[
"Our corpus is to simulate scenarios where a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized as below:",
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
],
[
"We collected 465 attractions, 951 restaurants, and 1,133 hotels in Beijing from the Web. Some statistics are shown in Table TABREF11. There are three types of slots for each entity: common slots such as name and address; binary slots for hotel services such as wake-up call; nearby attractions/restaurants/hotels slots that contain nearby entities in the attraction, restaurant, and hotel domains. Since it is not usual to find another nearby hotel in the hotel domain, we did not collect such information. This nearby relation allows us to generate natural cross-domain goals, such as \"find another attraction near the first one\" and \"find a restaurant near the attraction\". Nearest metro stations of HAR entities form the metro database. In contrast, we provided the pseudo car type and plate number for the taxi domain."
],
[
"To avoid generating overly complex goals, each goal has at most five sub-goals. To generate more natural goals, the sub-goals can be of the same domain, such as two attractions near each other. The goal is represented as a list of (sub-goal id, domain, slot, value) tuples, named as semantic tuples. The sub-goal id is used to distinguish sub-goals which may be in the same domain. There are two types of slots: informable slots which are the constraints that the user needs to inform the system, and requestable slots which are the information that the user needs to inquire from the system. As shown in Table TABREF13, besides common informable slots (italic values) whose values are determined before the conversation, we specially design cross-domain informable slots (bold values) whose values refer to other sub-goals. Cross-domain informable slots utilize sub-goal id to connect different sub-goals. Thus the actual constraints vary according to the different contexts instead of being pre-specified. The values of common informable slots are sampled randomly from the database. Based on the informable slots, users are required to gather the values of requestable slots (blank values in Table TABREF13) through conversation.",
"There are four steps in goal generation. First, we generate independent sub-goals in HAR domains. For each domain in HAR domains, with the same probability $\\mathcal {P}$ we generate a sub-goal, while with the probability of $1-\\mathcal {P}$ we do not generate any sub-goal for this domain. Each sub-goal has common informable slots and requestable slots. As shown in Table TABREF15, all slots of HAR domains can be requestable slots, while the slots with an asterisk can be common informable slots.",
"Second, we generate cross-domain sub-goals in HAR domains. For each generated sub-goal (e.g., the attraction sub-goal in Table TABREF13), if its requestable slots contain \"nearby hotels\", we generate an additional sub-goal in the hotel domain (e.g., the hotel sub-goal in Table TABREF13) with the probability of $\\mathcal {P}_{attraction\\rightarrow hotel}$. Of course, the selected hotel must satisfy the nearby relation to the attraction entity. Similarly, we do not generate any additional sub-goal in the hotel domain with the probability of $1-\\mathcal {P}_{attraction\\rightarrow hotel}$. This also works for the attraction and restaurant domains. $\\mathcal {P}_{hotel\\rightarrow hotel}=0$ since we do not allow the user to find the nearby hotels of one hotel.",
"Third, we generate sub-goals in the metro and taxi domains. With the probability of $\\mathcal {P}_{taxi}$, we generate a sub-goal in the taxi domain (e.g., the taxi sub-goal in Table TABREF13) to commute between two entities of HAR domains that are already generated. It is similar for the metro domain and we set $\\mathcal {P}_{metro}=\\mathcal {P}_{taxi}$. All slots in the metro or taxi domain appear in the sub-goals and must be filled. As shown in Table TABREF15, from and to slots are always cross-domain informable slots, while others are always requestable slots.",
"Last, we rearrange the order of the sub-goals to generate more natural and logical user goals. We require that a sub-goal should be followed by its referred sub-goal as immediately as possible.",
"To make the workers aware of this cross-domain feature, we additionally provide a task description for each user goal in natural language, which is generated from the structured goal by hand-crafted templates.",
"Compared with the goals whose constraints are all pre-specified, our goals impose much more dependency between different domains, which will significantly influence the conversation. The exact values of cross-domain informable slots are finally determined according to the dialogue context."
],
[
"We developed a specialized website that allows two workers to converse synchronously and make annotations online. On the website, workers are free to choose one of the two roles: tourist (user) or system (wizard). Then, two paired workers are sent to a chatroom. The user needs to accomplish the allocated goal through conversation while the wizard searches the database to provide the necessary information and gives responses. Before the formal data collection, we trained the workers to complete a small number of dialogues by giving them feedback. Finally, 90 well-trained workers are participating in the data collection.",
"In contrast, MultiWOZ BIBREF12 hired more than a thousand workers to converse asynchronously. Each worker received a dialogue context to review and need to respond for only one turn at a time. The collected dialogues may be incoherent because workers may not understand the context correctly and multiple workers contributed to the same dialogue session, possibly leading to more variance in the data quality. For example, some workers expressed two mutually exclusive constraints in two consecutive user turns and failed to eliminate the system's confusion in the next several turns. Compared with MultiWOZ, our synchronous conversation setting may produce more coherent dialogues."
],
[
"The user state is the same as the user goal before a conversation starts. At each turn, the user needs to 1) modify the user state according to the system response at the preceding turn, 2) select some semantic tuples in the user state, which indicates the dialogue acts, and 3) compose the utterance according to the selected semantic tuples. In addition to filling the required values and updating cross-domain informable slots with real values in the user state, the user is encouraged to modify the constraints when there is no result under such constraints. The change will also be recorded in the user state. Once the goal is completed (all the values in the user state are filled), the user can terminate the dialogue."
],
[
"We regard the database query as the system state, which records the constraints of each domain till the current turn. At each turn, the wizard needs to 1) fill the query according to the previous user response and search the database if necessary, 2) select the retrieved entities, and 3) respond in natural language based on the information of the selected entities. If none of the entities satisfy all the constraints, the wizard will try to relax some of them for a recommendation, resulting in multiple queries. The first query records original user constraints while the last one records the constraints relaxed by the system."
],
[
"After collecting the conversation data, we used some rules to annotate dialogue acts automatically. Each utterance can have several dialogue acts. Each dialogue act is a tuple that consists of intent, domain, slot, and value. We pre-define 6 types of intents and use the update of the user state and system state as well as keyword matching to obtain dialogue acts. For the user side, dialogue acts are mainly derived from the selection of semantic tuples that contain the information of domain, slot, and value. For example, if (1, Attraction, fee, free) in Table TABREF13 is selected by the user, then (Inform, Attraction, fee, free) is labelled. If (1, Attraction, name, ) is selected, then (Request, Attraction, name, none) is labelled. If (2, Hotel, name, near (id=1)) is selected, then (Select, Hotel, src_domain, Attraction) is labelled. This intent is specially designed for the \"nearby\" constraint. For the system side, we mainly applied keyword matching to label dialogue acts. Inform intent is derived by matching the system utterance with the information of selected entities. When the wizard selects multiple retrieved entities and recommend them, Recommend intent is labeled. When the wizard expresses that no result satisfies user constraints, NoOffer is labeled. For General intents such as \"goodbye\", \"thanks\" at both user and system sides, keyword matching is applied.",
"We also obtained a binary label for each semantic tuple in the user state, which indicates whether this semantic tuple has been selected to be expressed by the user. This annotation directly illustrates the progress of the conversation.",
"To evaluate the quality of the annotation of dialogue acts and states (both user and system states), three experts were employed to manually annotate dialogue acts and states for the same 50 dialogues (806 utterances), 10 for each goal type (see Section SECREF4). Since dialogue act annotation is not a classification problem, we didn't use Fleiss' kappa to measure the agreement among experts. We used dialogue act F1 and state accuracy to measure the agreement between each two experts' annotations. The average dialogue act F1 is 94.59% and the average state accuracy is 93.55%. We then compared our annotations with each expert's annotations which are regarded as gold standard. The average dialogue act F1 is 95.36% and the average state accuracy is 94.95%, which indicates the high quality of our annotations."
],
[
"After removing uncompleted dialogues, we collected 6,012 dialogues in total. The dataset is split randomly for training/validation/test, where the statistics are shown in Table TABREF25. The average number of sub-goals in our dataset is 3.24, which is much larger than that in MultiWOZ (1.80) BIBREF12 and Schema (1.84) BIBREF13. The average number of turns (16.9) is also larger than that in MultiWOZ (13.7). These statistics indicate that our dialogue data are more complex.",
"According to the type of user goal, we group the dialogues in the training set into five categories:",
"417 dialogues have only one sub-goal in HAR domains.",
"1573 dialogues have multiple sub-goals (2$\\sim $3) in HAR domains. However, these sub-goals do not have cross-domain informable slots.",
"691 dialogues have multiple sub-goals in HAR domains and at least one sub-goal in the metro or taxi domain (3$\\sim $5 sub-goals). The sub-goals in HAR domains do not have cross-domain informable slots.",
"1,759 dialogues have multiple sub-goals (2$\\sim $5) in HAR domains with cross-domain informable slots.",
"572 dialogues have multiple sub-goals in HAR domains with cross-domain informable slots and at least one sub-goal in the metro or taxi domain (3$\\sim $5 sub-goals).",
"The data statistics are shown in Table TABREF26. As mentioned in Section SECREF14, we generate independent multi-domain, cross multi-domain, and traffic domain sub-goals one by one. Thus in terms of the task complexity, we have S<M<CM and M<M+T<CM+T, which is supported by the average number of sub-goals, semantic tuples, and turns per dialogue in Table TABREF26. The average number of tokens also becomes larger when the goal becomes more complex. About 60% of dialogues (M+T, CM, and CM+T) have cross-domain informable slots. Because of the limit of maximal sub-goals number, the ratio of dialogue number of CM+T to CM is smaller than that of M+T to M.",
"CM and CM+T are much more challenging than other tasks because additional cross-domain constraints in HAR domains are strict and will result in more \"NoOffer\" situations (i.e., the wizard finds no result that satisfies the current constraints). In this situation, the wizard will try to relax some constraints and issue multiple queries to find some results for a recommendation while the user will compromise and change the original goal. The negotiation process is captured by \"NoOffer rate\", \"Multi-query rate\", and \"Goal change rate\" in Table TABREF26. In addition, \"Multi-query rate\" suggests that each sub-goal in M and M+T is as easy to finish as the goal in S.",
"The distribution of dialogue length is shown in Figure FIGREF27, which is an indicator of the task complexity. Most single-domain dialogues terminate within 10 turns. The curves of M and M+T are almost of the same shape, which implies that the traffic task requires two additional turns on average to complete the task. The curves of CM and CM+T are less similar. This is probably because CM goals that have 5 sub-goals (about 22%) can not further generate a sub-goal in traffic domains and become CM+T goals."
],
[
"Our corpus is unique in the following aspects:",
"Complex user goals are designed to favor inter-domain dependency and natural transition between multiple domains. In return, the collected dialogues are more complex and natural for cross-domain dialogue tasks.",
"A well-controlled, synchronous setting is applied to collect human-to-human dialogues. This ensures the high quality of the collected dialogues.",
"Explicit annotations are provided at not only the system side but also the user side. This feature allows us to model user behaviors or develop user simulators more easily."
],
[
"CrossWOZ can be used in different tasks or settings of a task-oriented dialogue system. To facilitate further research, we provided benchmark models for different components of a pipelined task-oriented dialogue system (Figure FIGREF32), including natural language understanding (NLU), dialogue state tracking (DST), dialogue policy learning, and natural language generation (NLG). These models are implemented using ConvLab-2 BIBREF21, an open-source task-oriented dialog system toolkit. We also provided a rule-based user simulator, which can be used to train dialogue policy and generate simulated dialogue data. The benchmark models and simulator will greatly facilitate researchers to compare and evaluate their models on our corpus."
],
[
"Task: The natural language understanding component in a task-oriented dialogue system takes an utterance as input and outputs the corresponding semantic representation, namely, a dialogue act. The task can be divided into two sub-tasks: intent classification that decides the intent type of an utterance, and slot tagging which identifies the value of a slot.",
"Model: We adapted BERTNLU from ConvLab-2. BERT BIBREF22 has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT BIBREF23 for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Since there may exist more than one intent in an utterance, we modify the traditional method accordingly. For dialogue acts of inform and recommend intents such as (intent=Inform, domain=Attraction, slot=fee, value=free) whose values appear in the sentence, we perform sequential labeling using an MLP which takes word embeddings (\"free\") as input and outputs tags in BIO schema (\"B-Inform-Attraction-fee\"). For each of the other dialogue acts (e.g., (intent=Request, domain=Attraction, slot=fee)) that do not have actual values, we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act. To incorporate context information, we use the same BERT to get the embedding of last three utterances. We separate the utterances with [SEP] tokens and insert a [CLS] token at the beginning. Then each original input of the two MLP is concatenated with the context embedding (embedding of [CLS]), serving as the new input. We also conducted an ablation test by removing context information. We trained models with both system-side and user-side utterances.",
"Result Analysis: The results of the dialogue act prediction (F1 score) are shown in Table TABREF31. We further tested the performance on different intent types, as shown in Table TABREF35. In general, BERTNLU performs well with context information. The performance on cross multi-domain dialogues (CM and CM+T) drops slightly, which may be due to the decrease of \"General\" intent and the increase of \"NoOffer\" as well as \"Select\" intent in the dialogue data. We also noted that the F1 score of \"Select\" intent is remarkably lower than those of other types, but context information can improve the performance significantly. Since recognizing domain transition is a key factor for a cross-domain dialogue system, natural language understanding models need to utilize context information more effectively."
],
[
"Task: Dialogue state tracking is responsible for recognizing user goals from the dialogue context and then encoding the goals into the pre-defined system state. Traditional state tracking models take as input user dialogue acts parsed by natural language understanding modules, while recently there are joint models obtaining the system state directly from the context.",
"Model: We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. RuleDST takes as input the previous system state and the last user dialogue acts. Then, the system state is updated according to hand-crafted rules. For example, If one of user dialogue acts is (intent=Inform, domain=Attraction, slot=fee, value=free), then the value of the \"fee\" slot in the attraction domain will be filled with \"free\". TRADE generates the system state directly from all the previous utterances using a copy mechanism. As mentioned in Section SECREF18, the first query of the system often records full user constraints, while the last one records relaxed constraints for recommendation. Thus the last one involves system policy, which is out of the scope of state tracking. We used the first query for these models and left state tracking with recommendation for future work.",
"Result Analysis: We evaluated the joint state accuracy (percentage of exact matching) of these two models (Table TABREF31). TRADE, the state-of-the-art model on MultiWOZ, performs poorly on our dataset, indicating that more powerful state trackers are necessary. At the test stage, RuleDST can access the previous gold system state and user dialogue acts, which leads to higher joint state accuracy than TRADE. Both models perform worse on cross multi-domain dialogues (CM and CM+T). To evaluate the ability of modeling cross-domain transition, we further calculated joint state accuracy for those turns that receive \"Select\" intent from users (e.g., \"Find a hotel near the attraction\"). The performances are 11.6% and 12.0% for RuleDST and TRADE respectively, showing that they are not able to track domain transition well."
],
[
"Task: Dialogue policy receives state $s$ and outputs system action $a$ at each turn. Compared with the state given by a dialogue state tracker, $s$ may have more information, such as the last user dialogue acts and the entities provided by the backend database.",
"Model: We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). The state $s$ consists of the last system dialogue acts, last user dialogue acts, system state of the current turn, the number of entities that satisfy the constraints in the current domain, and a terminal signal indicating whether the user goal is completed. The action $a$ is delexicalized dialogue acts of current turn which ignores the exact values of the slots, where the values will be filled back after prediction.",
"Result Analysis: As illustrated in Table TABREF31, there is a large gap between F1 score of exact dialogue act and F1 score of delexicalized dialogue act, which means we need a powerful system state tracker to find correct entities. The result also shows that cross multi-domain dialogues (CM and CM+T) are harder for system dialogue act prediction. Additionally, when there is \"Select\" intent in preceding user dialogue acts, the F1 score of exact dialogue act and delexicalized dialogue act are 41.53% and 54.39% respectively. This shows that the policy performs poorly for cross-domain transition."
],
[
"Task: Natural language generation transforms a structured dialogue act into a natural language sentence. It usually takes delexicalized dialogue acts as input and generates a template-style sentence that contains placeholders for slots. Then, the placeholders will be replaced by the exact values, which is called lexicalization.",
"Model: We provided a template-based model (named TemplateNLG) and SC-LSTM (Semantically Conditioned LSTM) BIBREF1 for natural language generation. For TemplateNLG, we extracted templates from the training set and manually added some templates for infrequent dialogue acts. For SC-LSTM we adapted the implementation on MultiWOZ and trained two SC-LSTM with system-side and user-side utterances respectively.",
"Result Analysis: We calculated corpus-level BLEU as used by BIBREF1. We took all utterances with the same delexcalized dialogue acts as references (100 references on average), which results in high BLEU score. For user-side utterances, the BLEU score for TemplateNLG is 0.5780, while the BLEU score for SC-LSTM is 0.7858. For system-side, the two scores are 0.6828 and 0.8595. As exemplified in Table TABREF39, the gap between the two models can be attributed to that SC-LSTM generates common pattern while TemplateNLG retrieves original sentence which has more specific information. We do not provide BLEU scores for different goal types (namely, S, M, CM, etc.) because BLEU scores on different corpus are not comparable."
],
[
"Task: A user simulator imitates the behavior of users, which is useful for dialogue policy learning and automatic evaluation. A user simulator at dialogue act level (e.g., the \"Usr Policy\" in Figure FIGREF32) receives the system dialogue acts and outputs user dialogue acts, while a user simulator at natural language level (e.g., the left part in Figure FIGREF32) directly takes system's utterance as input and outputs user's utterance.",
"Model: We built a rule-based user simulator that works at dialogue act level. Different from agenda-based BIBREF24 user simulator that maintains a stack-like agenda, our simulator maintains the user state straightforwardly (Section SECREF17). The simulator will generate a user goal as described in Section SECREF14. At each user turn, the simulator receives system dialogue acts, modifies its state, and outputs user dialogue acts according to some hand-crafted rules. For example, if the system inform the simulator that the attraction is free, then the simulator will fill the \"fee\" slot in the user state with \"free\", and ask for the next empty slot such as \"address\". The simulator terminates when all requestable slots are filled, and all cross-domain informable slots are filled by real values.",
"Result Analysis: During the evaluation, we initialized the user state of the simulator using the previous gold user state. The input to the simulator is the gold system dialogue acts. We used joint state accuracy (percentage of exact matching) to evaluate user state prediction and F1 score to evaluate the prediction of user dialogue acts. The results are presented in Table TABREF31. We can observe that the performance on complex dialogues (CM and CM+T) is remarkably lower than that on simple ones (S, M, and M+T). This simple rule-based simulator is provided to facilitate dialogue policy learning and automatic evaluation, and our corpus supports the development of more elaborated simulators as we provide the annotation of user-side dialogue states and dialogue acts."
],
[
"In addition to corpus-based evaluation for each module, we also evaluated the performance of a whole dialogue system using the user simulator as described above. Three configurations were explored:",
"Simulation at dialogue act level. As shown by the dashed connections in Figure FIGREF32, we used the aforementioned simulator at the user side and assembled the dialogue system with RuleDST and SL policy.",
"Simulation at natural language level using TemplateNLG. As shown by the solid connections in Figure FIGREF32, the simulator and the dialogue system were equipped with BERTNLU and TemplateNLG additionally.",
"Simulation at natural language level using SC-LSTM. TemplateNLG was replaced with SC-LSTM in the second configuration.",
"When all the slots in a user goal are filled by real values, the simulator terminates. This is regarded as \"task finish\". It's worth noting that \"task finish\" does not mean the task is success, because the system may provide wrong information. We calculated \"task finish rate\" on 1000 times simulations for each goal type (See Table TABREF31). Findings are summarized below:",
"Cross multi-domain tasks (CM and CM+T) are much harder to finish. Comparing M and M+T, although each module performs well in traffic domains, additional sub-goals in these domains are still difficult to accomplish.",
"The system-level performance is largely limited by RuleDST and SL policy. Although the corpus-based performance of NLU and NLG modules is high, the two modules still harm the performance. Thus more powerful models are needed for all components of a pipelined dialogue system.",
"TemplateNLG has a much lower BLEU score but performs better than SC-LSTM in natural language level simulation. This may be attributed to that BERTNLU prefers templates retrieved from the training set."
],
[
"In this paper, we present the first large-scale Chinese Cross-Domain task-oriented dialogue dataset, CrossWOZ. It contains 6K dialogues and 102K utterances for 5 domains, with the annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals, which encourage natural transition between related domains. Thanks to the rich annotation of dialogue states and dialogue acts at both user side and system side, this corpus provides a new testbed for a wide range of tasks to investigate cross-domain dialogue modeling, such as dialogue state tracking, policy learning, etc. Our experiments show that the cross-domain constraints are challenging for all these tasks. The transition between related domains is especially challenging to model. Besides corpus-based component-wise evaluation, we also performed system-level evaluation with a user simulator, which requires more powerful models for all components of a pipelined cross-domain dialogue system."
],
[
"This work was supported by the National Science Foundation of China (Grant No. 61936010/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT JointLab for the support. We would also like to thank Ryuichi Takanobu and Fei Mi for their constructive comments. We are grateful to our action editor, Bonnie Webber, and the anonymous reviewers for their valuable suggestions and feedback."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data Collection",
"Data Collection ::: Database Construction",
"Data Collection ::: Goal Generation",
"Data Collection ::: Dialogue Collection",
"Data Collection ::: Dialogue Collection ::: User Side",
"Data Collection ::: Dialogue Collection ::: Wizard Side",
"Data Collection ::: Dialogue Annotation",
"Statistics",
"Corpus Features",
"Benchmark and Analysis",
"Benchmark and Analysis ::: Natural Language Understanding",
"Benchmark and Analysis ::: Dialogue State Tracking",
"Benchmark and Analysis ::: Dialogue Policy Learning",
"Benchmark and Analysis ::: Natural Language Generation",
"Benchmark and Analysis ::: User Simulator",
"Benchmark and Analysis ::: Evaluation with User Simulation",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"d1dbe98f982bef1faf43aa1d472c8ed9ffd763fd",
"ff705c27c283670b07e788139cc9e91baa6f328d"
],
"answer": [
{
"evidence": [
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
],
"extractive_spans": [
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. ",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context.",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. "
],
"free_form_answer": "",
"highlighted_evidence": [
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database.",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. ",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. ",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our corpus is to simulate scenarios where a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized as below:",
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
],
"extractive_spans": [],
"free_form_answer": "They crawled travel information from the Web to build a database, created a multi-domain goal generator from the database, collected dialogue between workers an automatically annotated dialogue acts. ",
"highlighted_evidence": [
"The data collection process is summarized as below:\n\nDatabase Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.\n\nGoal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.\n\nDialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.\n\nDialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"e6c3ce2d618ab1a5518ad3fd1b92ffd367c2dba8"
],
"answer": [
{
"evidence": [
"Model: We adapted BERTNLU from ConvLab-2. BERT BIBREF22 has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT BIBREF23 for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Since there may exist more than one intent in an utterance, we modify the traditional method accordingly. For dialogue acts of inform and recommend intents such as (intent=Inform, domain=Attraction, slot=fee, value=free) whose values appear in the sentence, we perform sequential labeling using an MLP which takes word embeddings (\"free\") as input and outputs tags in BIO schema (\"B-Inform-Attraction-fee\"). For each of the other dialogue acts (e.g., (intent=Request, domain=Attraction, slot=fee)) that do not have actual values, we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act. To incorporate context information, we use the same BERT to get the embedding of last three utterances. We separate the utterances with [SEP] tokens and insert a [CLS] token at the beginning. Then each original input of the two MLP is concatenated with the context embedding (embedding of [CLS]), serving as the new input. We also conducted an ablation test by removing context information. We trained models with both system-side and user-side utterances.",
"Model: We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. RuleDST takes as input the previous system state and the last user dialogue acts. Then, the system state is updated according to hand-crafted rules. For example, If one of user dialogue acts is (intent=Inform, domain=Attraction, slot=fee, value=free), then the value of the \"fee\" slot in the attraction domain will be filled with \"free\". TRADE generates the system state directly from all the previous utterances using a copy mechanism. As mentioned in Section SECREF18, the first query of the system often records full user constraints, while the last one records relaxed constraints for recommendation. Thus the last one involves system policy, which is out of the scope of state tracking. We used the first query for these models and left state tracking with recommendation for future work.",
"Model: We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). The state $s$ consists of the last system dialogue acts, last user dialogue acts, system state of the current turn, the number of entities that satisfy the constraints in the current domain, and a terminal signal indicating whether the user goal is completed. The action $a$ is delexicalized dialogue acts of current turn which ignores the exact values of the slots, where the values will be filled back after prediction."
],
"extractive_spans": [
"BERTNLU from ConvLab-2",
"a rule-based model (RuleDST) ",
"TRADE (Transferable Dialogue State Generator) ",
"a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We adapted BERTNLU from ConvLab-2. ",
"We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. ",
"We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"a72845ebd9c3ddb40ace7a4fc7120028f693fa5c"
],
"answer": [
{
"evidence": [
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
],
"extractive_spans": [
"The workers were also asked to annotate both user states and system states",
"we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories"
],
"free_form_answer": "",
"highlighted_evidence": [
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.\n\nDialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How was the dataset collected?",
"What are the benchmark models?",
"How was the corpus annotated?"
],
"question_id": [
"2376c170c343e2305dac08ba5f5bda47c370357f",
"0137ecebd84a03b224eb5ca51d189283abb5f6d9",
"5f6fbd57cce47f20a0fda27d954543c00c4344c2"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [],
"file": []
} | [
"How was the dataset collected?"
] | [
[
"2002.11893-Data Collection-1",
"2002.11893-Data Collection-3",
"2002.11893-Data Collection-4",
"2002.11893-Data Collection-2",
"2002.11893-Data Collection-0"
]
] | [
"They crawled travel information from the Web to build a database, created a multi-domain goal generator from the database, collected dialogue between workers an automatically annotated dialogue acts. "
] | 40 |
1910.07181 | BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance | Pretraining deep contextualized representations using an unsupervised language modeling objective has led to large performance gains for a variety of NLP tasks. Despite this success, recent work by Schick and Schutze (2019) suggests that these architectures struggle to understand rare words. For context-independent word embeddings, this problem can be addressed by separately learning representations for infrequent words. In this work, we show that the same idea can also be applied to contextualized models and clearly improves their downstream task performance. Most approaches for inducing word embeddings into existing embedding spaces are based on simple bag-of-words models; hence they are not a suitable counterpart for deep neural network language models. To overcome this problem, we introduce BERTRAM, a powerful architecture based on a pretrained BERT language model and capable of inferring high-quality representations for rare words. In BERTRAM, surface form and contexts of a word directly interact with each other in a deep architecture. Both on a rare word probing task and on three downstream task datasets, BERTRAM considerably improves representations for rare and medium frequency words compared to both a standalone BERT model and previous work. | {
"paragraphs": [
[
"As traditional word embedding algorithms BIBREF1 are known to struggle with rare words, several techniques for improving their representations have been proposed over the last few years. These approaches exploit either the contexts in which rare words occur BIBREF2, BIBREF3, BIBREF4, BIBREF5, their surface-form BIBREF6, BIBREF7, BIBREF8, or both BIBREF9, BIBREF10. However, all of these approaches are designed for and evaluated on uncontextualized word embeddings.",
"With the recent shift towards contextualized representations obtained from pretrained deep language models BIBREF11, BIBREF12, BIBREF13, BIBREF14, the question naturally arises whether these approaches are facing the same problem. As all of them already handle rare words implicitly – using methods such as byte-pair encoding BIBREF15 and WordPiece embeddings BIBREF16, or even character-level CNNs BIBREF17 –, it is unclear whether these models even require special treatment of rare words. However, the listed methods only make use of surface-form information, whereas BIBREF9 found that for covering a wide range of rare words, it is crucial to consider both surface-form and contexts.",
"Consistently, BIBREF0 recently showed that for BERT BIBREF13, a popular pretrained language model based on a Transformer architecture BIBREF18, performance on a rare word probing task can significantly be improve by relearning representations of rare words using Attentive Mimicking BIBREF19. However, their proposed model is limited in two important respects:",
"For processing contexts, it uses a simple bag-of-words model, throwing away much of the available information.",
"It combines form and context only in a shallow fashion, thus preventing both input signals from sharing information in any sophisticated manner.",
"Importantly, this limitation applies not only to their model, but to all previous work on obtaining representations for rare words by leveraging form and context. While using bag-of-words models is a reasonable choice for uncontextualized embeddings, which are often themselves based on such models BIBREF1, BIBREF7, it stands to reason that they are suboptimal for contextualized embeddings based on position-aware deep neural architectures.",
"To overcome these limitations, we introduce Bertram (BERT for Attentive Mimicking), a novel architecture for understanding rare words that combines a pretrained BERT language model with Attentive Mimicking BIBREF19. Unlike previous approaches making use of language models BIBREF5, our approach integrates BERT in an end-to-end fashion and directly makes use of its hidden states. By giving Bertram access to both surface form and context information already at its very lowest layer, we allow for a deep connection and exchange of information between both input signals.",
"For various reasons, assessing the effectiveness of methods like Bertram in a contextualized setting poses a huge difficulty: While most previous work on rare words was evaluated on datasets explicitly focusing on such words BIBREF6, BIBREF3, BIBREF4, BIBREF5, BIBREF10, all of these datasets are tailored towards context-independent embeddings and thus not suitable for evaluating our proposed model. Furthermore, understanding rare words is of negligible importance for most commonly used downstream task datasets. To evaluate our proposed model, we therefore introduce a novel procedure that allows us to automatically turn arbitrary text classification datasets into ones where rare words are guaranteed to be important. This is achieved by replacing classification-relevant frequent words with rare synonyms obtained using semantic resources such as WordNet BIBREF20.",
"Using this procedure, we extract rare word datasets from three commonly used text (or text pair) classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. On both the WNLaMPro dataset of BIBREF0 and all three so-obtained datasets, our proposed Bertram model outperforms previous work by a large margin.",
"In summary, our contributions are as follows:",
"We show that a pretrained BERT instance can be integrated into Attentive Mimicking, resulting in much better context representations and a deeper connection of form and context.",
"We design a procedure that allows us to automatically transform text classification datasets into datasets for which rare words are guaranteed to be important.",
"We show that Bertram achieves a new state-of-the-art on the WNLaMPro probing task BIBREF0 and beats all baselines on rare word instances of AG's News, MNLI and DBPedia, resulting in an absolute improvement of up to 24% over a BERT baseline."
],
[
"Incorporating surface-form information (e.g., morphemes, characters or character $n$-grams) is a commonly used technique for improving word representations. For context-independent word embeddings, this information can either be injected into a given embedding space BIBREF6, BIBREF8, or a model can directly be given access to it during training BIBREF7, BIBREF24, BIBREF25. In the area of contextualized representations, many architectures employ subword segmentation methods BIBREF12, BIBREF13, BIBREF26, BIBREF14, whereas others use convolutional neural networks to directly access character-level information BIBREF27, BIBREF11, BIBREF17.",
"Complementary to surface form, another useful source of information for understanding rare words are the contexts in which they occur BIBREF2, BIBREF3, BIBREF4. As recently shown by BIBREF19, BIBREF9, combining form and context leads to significantly better results than using just one of both input signals for a wide range of tasks. While all aforementioned methods are based on simple bag-of-words models, BIBREF5 recently proposed an architecture based on the context2vec language model BIBREF28. However, in contrast to our work, they (i) do not incorporate surface-form information and (ii) do not directly access the hidden states of the language model, but instead simply use its output distribution.",
"There are several datasets explicitly focusing on rare words, e.g. the Stanford Rare Word dataset of BIBREF6, the Definitional Nonce dataset of BIBREF3 and the Contextual Rare Word dataset BIBREF4. However, all of these datasets are only suitable for evaluating context-independent word representations.",
"Our proposed method of generating rare word datasets is loosely related to adversarial example generation methods such as HotFlip BIBREF29, which manipulate the input to change a model's prediction. We use a similar mechanism to determine which words in a given sentence are most important and replace these words with rare synonyms."
],
[
"We review the architecture of the form-context model (FCM) BIBREF9, which forms the basis for our model. Given a set of $d$-dimensional high-quality embeddings for frequent words, FCM can be used to induce embeddings for infrequent words that are appropriate for the given embedding space. This is done as follows: Given a word $w$ and a context $C$ in which it occurs, a surface-form embedding $v_{(w,{C})}^\\text{form} \\in \\mathbb {R}^d$ is obtained similar to BIBREF7 by averaging over embeddings of all $n$-grams in $w$; these $n$-gram embeddings are learned during training. Similarly, a context embedding $v_{(w,{C})}^\\text{context} \\in \\mathbb {R}^d$ is obtained by averaging over the embeddings of all words in $C$. The so-obtained form and context embeddings are then combined using a gate",
"with parameters $w \\in \\mathbb {R}^{2d}, b \\in \\mathbb {R}$ and $\\sigma $ denoting the sigmoid function, allowing the model to decide for each pair $(x,y)$ of form and context embeddings how much attention should be paid to $x$ and $y$, respectively.",
"The final representation of $w$ is then simply a weighted sum of form and context embeddings:",
"where $\\alpha = g(v_{(w,C)}^\\text{form}, v_{(w,C)}^\\text{context})$ and $A$ is a $d\\times d$ matrix that is learned during training.",
"While the context-part of FCM is able to capture the broad topic of numerous rare words, in many cases it is not able to obtain a more concrete and detailed understanding thereof BIBREF9. This is hardly surprising given the model's simplicity; it does, for example, make no use at all of the relative positions of context words. Furthermore, the simple gating mechanism results in only a shallow combination of form and context. That is, the model is not able to combine form and context until the very last step: While it can choose how much to attend to form and context, respectively, the corresponding embeddings do not share any information and thus cannot influence each other in any way."
],
[
"To overcome both limitations described above, we introduce Bertram, an approach that combines a pretrained BERT language model BIBREF13 with Attentive Mimicking BIBREF19. To this end, let $d_h$ be the hidden dimension size and $l_\\text{max}$ be the number of layers for the BERT model being used. We denote with $e_{t}$ the (uncontextualized) embedding assigned to a token $t$ by BERT and, given a sequence of such uncontextualized embeddings $\\mathbf {e} = e_1, \\ldots , e_n$, we denote by $\\textbf {h}_j^l(\\textbf {e})$ the contextualized representation of the $j$-th token at layer $l$ when the model is given $\\mathbf {e}$ as input.",
"Given a word $w$ and a context $C = w_1, \\ldots , w_n$ in which it occurs, let $\\mathbf {t} = t_1, \\ldots , t_{m}$ with $m \\ge n$ be the sequence obtained from $C$ by (i) replacing $w$ with a [MASK] token and (ii) tokenizing the so-obtained sequence to match the BERT vocabulary; furthermore, let $i$ denote the index for which $t_i = \\texttt {[MASK]}$. Perhaps the most simple approach for obtaining a context embedding from $C$ using BERT is to define",
"where $\\mathbf {e} = e_{t_1}, \\ldots , e_{t_m}$. The so-obtained context embedding can then be combined with its form counterpart as described in Eq. DISPLAY_FORM8. While this achieves our first goal of using a more sophisticated context model that can potentially gain a deeper understanding of a word than just its broad topic, the so-obtained architecture still only combines form and context in a shallow fashion. We thus refer to it as the shallow variant of our model and investigate two alternative approaches (replace and add) that work as follows:",
"Replace: Before computing the context embedding, we replace the uncontextualized embedding of the [MASK] token with the word's surface-form embedding:",
"As during BERT pretraining, words chosen for prediction are replaced with [MASK] tokens only 80% of the time and kept unchanged 10% of the time, we hypothesize that even without further training, BERT is able to make use of form embeddings ingested this way.",
"Add: Before computing the context embedding, we prepad the input with the surface-form embedding of $w$, followed by a colon:",
"We also experimented with various other prefixes, but ended up choosing this particular strategy because we empirically found that after masking a token $t$, adding the sequence “$t :$” at the beginning helps BERT the most in recovering this very token at the masked position.",
"tnode/.style=rectangle, inner sep=0.1cm, minimum height=4ex, text centered,text height=1.5ex, text depth=0.25ex, opnode/.style=draw, rectangle, rounded corners, minimum height=4ex, minimum width=4ex, text centered, arrow/.style=draw,->,>=stealth",
"As for both variants, surface-form information is directly and deeply integrated into the computation of the context embedding, we do not require any further gating mechanism and may directly set $v_{(w,C)} = A \\cdot v^\\text{context}_{(w,C)}$.",
"However, we note that for the add variant, the contextualized representation of the [MASK] token is not the only natural candidate to be used for computing the final embedding: We might just as well look at the contextualized representation of the surface-form based embedding added at the very first position. Therefore, we also try a shallow combination of both embeddings. Note, however, that unlike FCM, we combine the contextualized representations – that is, the form part was already influenced by the context part and vice versa before combining them using a gate. For this combination, we define",
"with $A^{\\prime } \\in \\mathbb {R}^{d \\times d_h}$ being an additional learnable parameter. We then combine the two contextualized embeddings similar to Eq. DISPLAY_FORM8 as",
"where $\\alpha = g(h^\\text{form}_{(w,C)}, h^\\text{context}_{(w,C)})$. We refer to this final alternative as the add-gated approach. The model architecture for this variant can be seen in Figure FIGREF14 (left).",
"As in many cases, not just one, but a handful of contexts is known for a rare word, we follow the approach of BIBREF19 to deal with multiple contexts: We add an Attentive Mimicking head on top of our model, as can be seen in Figure FIGREF14 (right). That is, given a set of contexts $\\mathcal {C} = \\lbrace C_1, \\ldots , C_m\\rbrace $ and the corresponding embeddings $v_{(w,C_1)}, \\ldots , v_{(w,C_m)}$, we apply a self-attention mechanism to all embeddings, allowing the model to distinguish informative contexts from uninformative ones. The final embedding $v_{(w, \\mathcal {C})}$ is then a linear combination of the embeddings obtained from each context, where the weight of each embedding is determined based on the self-attention layer. For further details on this mechanism, we refer to BIBREF19."
],
[
"Like previous work, we use mimicking BIBREF8 as a training objective. That is, given a frequent word $w$ with known embedding $e_w$ and a set of corresponding contexts $\\mathcal {C}$, Bertram is trained to minimize $\\Vert e_w - v_{(w, \\mathcal {C})}\\Vert ^2$.",
"As training Bertram end-to-end requires much computation (processing a single training instance $(w,\\mathcal {C})$ is as costly as processing an entire batch of $|\\mathcal {C}|$ examples in the original BERT architecture), we resort to the following three-stage training process:",
"We train only the form part, i.e. our loss for a single example $(w, \\mathcal {C})$ is $\\Vert e_w - v^\\text{form}_{(w, \\mathcal {C})} \\Vert ^2$.",
"We train only the context part, minimizing $\\Vert e_w - A \\cdot v^\\text{context}_{(w, \\mathcal {C})} \\Vert ^2$ where the context embedding is obtained using the shallow variant of Bertram. Furthermore, we exclude all of BERT's parameters from our optimization.",
"We combine the pretrained form-only and context-only model and train all additional parameters.",
"Pretraining the form and context parts individually allows us to train the full model for much fewer steps with comparable results. Importantly, for the first two stages of our training procedure, we do not have to backpropagate through the entire BERT model to obtain all required gradients, drastically increasing the training speed."
],
[
"To measure the quality of rare word representations in a contextualized setting, we would ideally need text classification datasets with the following two properties:",
"A model that has no understanding of rare words at all should perform close to 0%.",
"A model that perfectly understands rare words should be able to classify every instance correctly.",
"Unfortunately, this requirement is not even remotely fulfilled by most commonly used datasets, simply because rare words occur in only a few entries and when they do, they are often of negligible importance.",
"To solve this problem, we devise a procedure to automatically transform existing text classification datasets such that rare words become important. For this procedure, we require a pretrained language model $M$ as a baseline, an arbitrary text classification dataset $\\mathcal {D}$ containing labelled instances $(\\mathbf {x}, y)$ and a substitution dictionary $S$, mapping each word $w$ to a set of rare synonyms $S(w)$. Given these ingredients, our procedure consists of three steps: (i) splitting the dataset into a train set and a set of test candidates, (ii) training the baseline model on the train set and (iii) modifying a subset of the test candidates to generate the final test set."
],
[
"We partition $\\mathcal {D}$ into a train set $\\mathcal {D}_\\text{train}$ and a set of test candidates, $\\mathcal {D}_\\text{cand}$, with the latter containing all instances $(\\mathbf {x},y) \\in \\mathcal {D}$ such that for at least one word $w$ in $\\mathbf {x}$, $S(w) \\ne \\emptyset $. Additionally, we require that the training set consists of at least one third of the entire data."
],
[
"We finetune $M$ on $\\mathcal {D}_\\text{train}$. Let $(\\mathbf {x}, y) \\in \\mathcal {D}_\\text{train}$ where $\\mathbf {x} = w_1, \\ldots , w_n$ is a sequence of words. We deviate from the standard finetuning procedure of BIBREF13 in three respects:",
"We randomly replace 5% of all words in $\\mathbf {x}$ with a [MASK] token. This allows the model to cope with missing or unknown words, a prerequisite for our final test set generation.",
"As an alternative to overwriting the language model's uncontextualized embeddings for rare words, we also want to allow models to simply add an alternative representation during test time, in which case we simply separate both representations by a slash. To accustom the language model to this duplication of words, we replace each word $w_i$ with “$w_i$ / $w_i$” with a probability of 10%. To make sure that the model does not simply learn to always focus on the first instance during training, we randomly mask each of the two repetitions with probability 25%.",
"We do not finetune the model's embedding layer. In preliminary experiments, we found this not to hurt performance."
],
[
"Let $p(y \\mid \\mathbf {x})$ be the probability that the finetuned model $M$ assigns to class $y$ given input $\\mathbf {x}$, and let",
"be the model's prediction for input $\\mathbf {x}$ where $\\mathcal {Y}$ denotes the set of all labels. For generating our test set, we only consider candidates that are classified correctly by the baseline model, i.e. candidates $(\\mathbf {x}, y) \\in \\mathcal {D}_\\text{cand}$ with $M(\\mathbf {x}) = y$. For each such entry, let $\\mathbf {x} = w_1, \\ldots , w_n$ and let $\\mathbf {x}_{w_i = t}$ be the sequence obtained from $\\mathbf {x}$ by replacing $w_i$ with $t$. We compute",
"i.e., we select the word $w_i$ whose masking pushes the model's prediction the furthest away from the correct label. If removing this word already changes the model's prediction – that is, $M(\\mathbf {x}_{w_i = \\texttt {[MASK]}}) \\ne y$ –, we select a random rare synonym $\\hat{w}_i \\in S(w_i)$ and add $(\\mathbf {x}_{w_i = \\hat{w}_i}, y)$ to the test set. Otherwise, we repeat the above procedure; if the label still has not changed after masking up to 5 words, we discard the corresponding entry. All so-obtained test set entries $(\\mathbf {x}_{w_{i_1} = \\hat{w}_{i_1}, \\ldots , w_{i_k} = \\hat{w}_{i_k} }, y)$ have the following properties:",
"If each $w_{i_j}$ is replaced by a [MASK] token, the entry is classified incorrectly by $M$. In other words, understanding the words $w_{i_j}$ is essential for $M$ to determine the correct label.",
"If the model's internal representation of each $\\hat{w}_{i_j}$ is equal to its representation of $w_{i_j}$, the entry is classified correctly by $M$. That is, if the model is able to understand the rare words $\\hat{w}_{i_j}$ and to identify them as synonyms of ${w_{i_j}}$, it predicts the correct label for each instance.",
"It is important to notice that the so-obtained test set is very closely coupled to the baseline model $M$, because we selected the words to replace based on the model's predictions. Importantly, however, the model is never queried with any rare synonym during test set generation, so its representations of rare words are not taken into account for creating the test set. Thus, while the test set is not suitable for comparing $M$ with an entirely different model $M^{\\prime }$, it allows us to compare various strategies for representing rare words in the embedding space of $M$. A similar constraint can be found in the Definitional Nonce dataset BIBREF3, which is tied to a given embedding space based on Word2Vec BIBREF1."
],
[
"For our evaluation of Bertram, we largely follow the experimental setup of BIBREF0. Our implementation of Bertram is based on PyTorch BIBREF30 and the Transformers library of BIBREF31. Throughout all of our experiments, we use BERT$_\\text{base}$ as the underlying language model for Bertram. To obtain embeddings for frequent multi-token words during training, we use one-token approximation BIBREF0. Somewhat surprisingly, we found in preliminary experiments that excluding BERT's parameters from the finetuning procedure outlined in Section SECREF17 improves performance while speeding up training; we thus exclude them in the third step of our training procedure.",
"While BERT was trained on BooksCorpus BIBREF32 and a large Wikipedia dump, we follow previous work and train Bertram on only the much smaller Westbury Wikipedia Corpus (WWC) BIBREF33; this of course gives BERT a clear advantage over our proposed method. In order to at least partially compensate for this, in our downstream task experiments we gather the set of contexts $\\mathcal {C}$ for a given rare word from both the WWC and BooksCorpus during inference."
],
[
"We evalute Bertram on the WNLaMPro dataset of BIBREF0. This dataset consists of cloze-style phrases like",
"and the task is to correctly fill the slot (____) with one of several acceptable target words (e.g., “fruit”, “bush” and “berry”), which requires knowledge of the phrase's keyword (“lingonberry” in the above example). As the goal of this dataset is to probe a language model's ability to understand rare words without any task-specific finetuning, BIBREF0 do not provide a training set. Furthermore, the dataset is partitioned into three subsets; this partition is based on the frequency of the keyword, with keywords occurring less than 10 times in the WWC forming the rare subset, those occurring between 10 and 100 times forming the medium subset, and all remaining words forming the frequent subset. As our focus is on improving representations for rare words, we evaluate our model only on the former two sets.",
"Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\\text{base}$ model, our architecture even outperforms a standalone BERT$_\\text{large}$ model by a large margin."
],
[
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35.",
"Just like for WNLaMPro, our default way of injecting Bertram embeddings into the baseline model is to replace the sequence of uncontextualized WordPiece tokens for a given rare word with its Bertram-based embedding. That is, given a sequence of uncontextualized token embeddings $\\mathbf {e} = e_1, \\ldots , e_n$ where $e_{i}, \\ldots , e_{i+j}$ with $1 \\le i \\le i+j \\le n$ is the sequence of WordPiece embeddings for a single rare word $w$, we replace $\\mathbf {e}$ with",
"By default, the set of contexts $\\mathcal {C}$ required for this replacement is obtained by collecting all sentences from the WWC and BooksCorpus in which $w$ occurs. As our model architecture allows us to easily include new contexts without requiring any additional training, we also try a variant where we add in-domain contexts by giving the model access to the texts found in the test set.",
"In addition to the procedure described above, we also try a variant where instead of replacing the original WordPiece embeddings for a given rare word, we merely add the Bertram-based embedding, separating both representations using a single slash:",
"As it performs best on the rare and medium subsets of WNLaMPro combined, we use only the add-gated variant of Bertram for all datasets. Results can be seen in Table TABREF37, where for each task, we report the accuracy on the entire dataset as well as scores obtained considering only instances where at least one word was replaced by a misspelling or a WordNet synonym, respectively. Consistent with results on WNLaMPro, combining BERT with Bertram outperforms both a standalone BERT model and one combined with Attentive Mimicking across all tasks. While keeping the original BERT embeddings in addition to Bertram's representation brings no benefit, adding in-domain data clearly helps for two out of three datasets. This makes sense as for rare words, every single additional context can be crucial for gaining a deeper understanding.",
"To further understand for which words using Bertram is helpful, in Figure FIGREF39 we look at the accuracy of BERT both with and without Bertram on all three tasks as a function of word frequency. That is, we compute the accuracy scores for both models when considering only entries $(\\mathbf {x}_{w_{i_1} = \\hat{w}_{i_1}, \\ldots , w_{i_k} = \\hat{w}_{i_k} }, y)$ where each substituted word $\\hat{w}_{i_j}$ occurs less than $c_\\text{max}$ times in WWC and BooksCorpus, for various values of $c_\\text{max}$. As one would expect, $c_\\text{max}$ is positively correlated with the accuracies of both models, showing that the rarer a word is, the harder it is to understand. Perhaps more interestingly, for all three datasets the gap between Bertram and BERT remains more or less constant regardless of $c_\\text{max}$. This indicates that using Bertram might also be useful for even more frequent words than the ones considered."
],
[
"We have introduced Bertram, a novel architecture for relearning high-quality representations of rare words. This is achieved by employing a powerful pretrained language model and deeply connecting surface-form and context information. By replacing important words with rare synonyms, we have created various downstream task datasets focusing on rare words; on all of these datasets, Bertram improves over a BERT model without special handling of rare words, demonstrating the usefulness of our proposed method.",
"As our analysis has shown that even for the most frequent words considered, using Bertram is still beneficial, future work might further investigate the limits of our proposed method. Furthermore, it would be interesting to explore more complex ways of incorporating surface-form information – e.g., by using a character-level CNN similar to the one of BIBREF27 – to balance out the potency of Bertram's form and context parts."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model ::: Form-Context Model",
"Model ::: Bertram",
"Model ::: Training",
"Generation of Rare Word Datasets",
"Generation of Rare Word Datasets ::: Dataset Splitting",
"Generation of Rare Word Datasets ::: Baseline Training",
"Generation of Rare Word Datasets ::: Test Set Generation",
"Evaluation ::: Setup",
"Evaluation ::: WNLaMPro",
"Evaluation ::: Downstream Task Datasets",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"d01e0f2398f8229187e2e368b2b09229b352b9a7"
],
"answer": [
{
"evidence": [
"Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\\text{base}$ model, our architecture even outperforms a standalone BERT$_\\text{large}$ model by a large margin."
],
"extractive_spans": [],
"free_form_answer": "Only Bert base and Bert large are compared to proposed approach.",
"highlighted_evidence": [
"Noticeably, despite being both based on and integrated into a BERT$_\\text{base}$ model, our architecture even outperforms a standalone BERT$_\\text{large}$ model by a large margin."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b4a55e4cc1e42a71095f3c6e06272669f6706228"
],
"answer": [
{
"evidence": [
"Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\\text{base}$ model, our architecture even outperforms a standalone BERT$_\\text{large}$ model by a large margin."
],
"extractive_spans": [
"improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking"
],
"free_form_answer": "",
"highlighted_evidence": [
"Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"709376b155cf4c245d587fd6177d3ce8b4e23a32",
"a7eec1f4a5f97265f08cfd09b1cec20b97c573f6"
],
"answer": [
{
"evidence": [
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35."
],
"extractive_spans": [
"MNLI BIBREF21",
"AG's News BIBREF22",
"DBPedia BIBREF23"
],
"free_form_answer": "",
"highlighted_evidence": [
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35."
],
"extractive_spans": [
"MNLI",
"AG's News",
"DBPedia"
],
"free_form_answer": "",
"highlighted_evidence": [
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0055f1c704b2380b3f9692330601906890b9b49d"
],
"answer": [
{
"evidence": [
"We evalute Bertram on the WNLaMPro dataset of BIBREF0. This dataset consists of cloze-style phrases like"
],
"extractive_spans": [
"WNLaMPro dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evalute Bertram on the WNLaMPro dataset of BIBREF0."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What models other than standalone BERT is new model compared to?",
"How much is representaton improved for rare/medum frequency words compared to standalone BERT and previous work?",
"What are three downstream task datasets?",
"What is dataset for word probing task?"
],
"question_id": [
"d6e2b276390bdc957dfa7e878de80cee1f41fbca",
"32537fdf0d4f76f641086944b413b2f756097e5e",
"ef081d78be17ef2af792e7e919d15a235b8d7275",
"537b2d7799124d633892a1ef1a485b3b071b303d"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Schematic representation of BERTRAM in the add-gated configuration processing the input word w = “washables” given a single context C1 = “other washables such as trousers . . .” (left) and given multiple contexts C = {C1, . . . , Cm} (right)",
"Table 1: Results on WNLaMPro test for baseline models and all BERTRAM variants",
"Table 2: Exemplary entries from the datasets obtained through our procedure. Replaced words from the original datasets are shown crossed out, their rare replacements are in bold.",
"Table 3: Results for BERT, Attentive Mimicking and BERTRAM on rare word datasets generated from AG’s News, MNLI and DBPedia. For each dataset, accuracy for all training instances as well as for those instances containing at least one misspelling (Msp) and those containing at least one rare WordNet synonym (WN) is shown.",
"Figure 2: Comparison of BERT and BERTRAM on three downstream tasks for varying maximum numbers of contexts cmax"
],
"file": [
"5-Figure1-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"9-Figure2-1.png"
]
} | [
"What models other than standalone BERT is new model compared to?"
] | [
[
"1910.07181-Evaluation ::: WNLaMPro-2"
]
] | [
"Only Bert base and Bert large are compared to proposed approach."
] | 41 |
1902.00330 | Joint Entity Linking with Deep Reinforcement Learning | Entity linking is the task of aligning mentions to corresponding entities in a given knowledge base. Previous studies have highlighted the necessity for entity linking systems to capture the global coherence. However, there are two common weaknesses in previous global models. First, most of them calculate the pairwise scores between all candidate entities and select the most relevant group of entities as the final result. In this process, the consistency among wrong entities as well as that among right ones are involved, which may introduce noise data and increase the model complexity. Second, the cues of previously disambiguated entities, which could contribute to the disambiguation of the subsequent mentions, are usually ignored by previous models. To address these problems, we convert the global linking into a sequence decision problem and propose a reinforcement learning model which makes decisions from a global perspective. Our model makes full use of the previous referred entities and explores the long-term influence of current selection on subsequent decisions. We conduct experiments on different types of datasets, the results show that our model outperforms state-of-the-art systems and has better generalization performance. | {
"paragraphs": [
[
"Entity Linking (EL), which is also called Entity Disambiguation (ED), is the task of mapping mentions in text to corresponding entities in a given knowledge Base (KB). This task is an important and challenging stage in text understanding because mentions are usually ambiguous, i.e., different named entities may share the same surface form and the same entity may have multiple aliases. EL is key for information retrieval (IE) and has many applications, such as knowledge base population (KBP), question answering (QA), etc.",
"Existing EL methods can be divided into two categories: local model and global model. Local models concern mainly on contextual words surrounding the mentions, where mentions are disambiguated independently. These methods are not work well when the context information is not rich enough. Global models take into account the topical coherence among the referred entities within the same document, where mentions are disambiguated jointly. Most of previous global models BIBREF0 , BIBREF1 , BIBREF2 calculate the pairwise scores between all candidate entities and select the most relevant group of entities. However, the consistency among wrong entities as well as that among right ones are involved, which not only increases the model complexity but also introduces some noises. For example, in Figure 1, there are three mentions \"France\", \"Croatia\" and \"2018 World Cup\", and each mention has three candidate entities. Here, \"France\" may refer to French Republic, France national basketball team or France national football team in KB. It is difficult to disambiguate using local models, due to the scarce common information in the contextual words of \"France\" and the descriptions of its candidate entities. Besides, the topical coherence among the wrong entities related to basketball team (linked by an orange dashed line) may make the global models mistakenly refer \"France\" to France national basketball team. So, how to solve these problems?",
"We note that, mentions in text usually have different disambiguation difficulty according to the quality of contextual information and the topical coherence. Intuitively, if we start with mentions that are easier to disambiguate and gain correct results, it will be effective to utilize information provided by previously referred entities to disambiguate subsequent mentions. In the above example, it is much easier to map \"2018 World Cup\" to 2018 FIFA World Cup based on their common contextual words \"France\", \"Croatia\", \"4-2\". Then, it is obvious that \"France\" and \"Croatia\" should be referred to the national football team because football-related terms are mentioned many times in the description of 2018 FIFA World Cup.",
"Inspired by this intuition, we design the solution with three principles: (i) utilizing local features to rank the mentions in text and deal with them in a sequence manner; (ii) utilizing the information of previously referred entities for the subsequent entity disambiguation; (iii) making decisions from a global perspective to avoid the error propagation if the previous decision is wrong.",
"In order to achieve these aims, we consider global EL as a sequence decision problem and proposed a deep reinforcement learning (RL) based model, RLEL for short, which consists of three modules: Local Encoder, Global Encoder and Entity Selector. For each mention and its candidate entities, Local Encoder encodes the local features to obtain their latent vector representations. Then, the mentions are ranked according to their disambiguation difficulty, which is measured by the learned vector representations. In order to enforce global coherence between mentions, Global Encoder encodes the local representations of mention-entity pairs in a sequential manner via a LSTM network, which maintains a long-term memory on features of entities which has been selected in previous states. Entity Selector uses a policy network to choose the target entities from the candidate set. For a single disambiguation decision, the policy network not only considers the pairs of current mention-entity representations, but also concerns the features of referred entities in the previous states which is pursued by the Global Encoder. In this way, Entity Selector is able to take actions based on the current state and previous ones. When eliminating the ambiguity of all mentions in the sequence, delayed rewards are used to adjust its policy in order to gain an optimized global decision.",
"Deep RL model, which learns to directly optimize the overall evaluation metrics, works much better than models which learn with loss functions that just evaluate a particular single decision. By this property, RL has been successfully used in many NLP tasks, such as information retrieval BIBREF3 , dialogue system BIBREF4 and relation classification BIBREF5 , etc. To the best of our knowledge, we are the first to design a RL model for global entity linking. And in this paper, our RL model is able to produce more accurate results by exploring the long-term influence of independent decisions and encoding the entities disambiguated in previous states.",
"In summary, the main contributions of our paper mainly include following aspects:"
],
[
"The overall structure of our RLEL model is shown in Figure 2. The proposed framework mainly includes three parts: Local Encoder which encodes local features of mentions and their candidate entities, Global Encoder which encodes the global coherence of mentions in a sequence manner and Entity Selector which selects an entity from the candidate set. As the Entity Selector and the Global Encoder are correlated mutually, we train them jointly. Moreover, the Local Encoder as the basis of the entire framework will be independently trained before the joint training process starts. In the following, we will introduce the technical details of these modules."
],
[
"Before introducing our model, we firstly define the entity linking task. Formally, given a document $D$ with a set of mentions $M = \\lbrace m_1, m_2,...,m_k\\rbrace $ , each mention $ m_t \\in D$ has a set of candidate entities $C_{m_t} = \\lbrace e_{t}^1, e_{t}^2,..., e_{t}^n\\rbrace $ . The task of entity linking is to map each mention $m_t$ to its corresponding correct target entity $e_{t}^+$ or return \"NIL\" if there is not correct target entity in the knowledge base. Before selecting the target entity, we need to generate a certain number of candidate entities for model selection.",
"Inspired by the previous works BIBREF6 , BIBREF7 , BIBREF8 , we use the mention's redirect and disambiguation pages in Wikipedia to generate candidate sets. For those mentions without corresponding disambiguation pages, we use its n-grams to retrieve the candidates BIBREF8 . In most cases, the disambiguation page contains many entities, sometimes even hundreds. To optimize the model's memory and avoid unnecessary calculations, the candidate sets need to be filtered BIBREF9 , BIBREF0 , BIBREF1 . Here we utilize the XGBoost model BIBREF10 as an entity ranker to reduce the size of candidate set. The features used in XGBoost can be divided into two aspects, the one is string similarity like the Jaro-Winkler distance between the entity title and the mention, the other is semantic similarity like the cosine distance between the mention context representation and the entity embedding. Furthermore, we also use the statistical features based on the pageview and hyperlinks in Wikipedia. Empirically, we get the pageview of the entity from the Wikipedia Tool Labs which counts the number of visits on each entity page in Wikipedia. After ranking the candidate sets based on the above features, we take the top k scored entities as final candidate set for each mention."
],
[
"Given a mention $m_t$ and the corresponding candidate set $\\lbrace e_t^1, e_t^2,..., \\\\ e_t^k\\rbrace $ , we aim to get their local representation based on the mention context and the candidate entity description. For each mention, we firstly select its $n$ surrounding words, and represent them as word embedding using a pre-trained lookup table BIBREF11 . Then, we use Long Short-Term Memory (LSTM) networks to encode the contextual word sequence $\\lbrace w_c^1, w_c^2,..., w_c^n\\rbrace $ as a fixed-size vector $V_{m_t}$ . The description of entity is encoded as $D_{e_t^i}$ in the same way. Apart from the description of entity, there are many other valuable information in the knowledge base. To make full use of these information, many researchers trained entity embeddings by combining the description, category, and relationship of entities. As shown in BIBREF0 , entity embeddings compress the semantic meaning of entities and drastically reduce the need for manually designed features or co-occurrence statistics. Therefore, we use the pre-trained entity embedding $E_{e_t^i}$ and concatenate it with the description vector $D_{e_t^i}$ to enrich the entity representation. The concatenation result is denoted by $V_{e_t^i}$ .",
"After getting $V_{e_t^i}$ , we concatenate it with $V_{m_t}$ and then pass the concatenation result to a multilayer perceptron (MLP). The MLP outputs a scalar to represent the local similarity between the mention $m_t$ and the candidate entity $e_t^i$ . The local similarity is calculated by the following equations: ",
"$$\\Psi (m_t, e_t^i) = MLP(V_{m_t}\\oplus {V_{e_t^i}})$$ (Eq. 9) ",
"Where $\\oplus $ indicates vector concatenation. With the purpose of distinguishing the correct target entity and wrong candidate entities when training the local encoder model, we utilize a hinge loss that ranks ground truth higher than others. The rank loss function is defined as follows: ",
"$$L_{local} = max(0, \\gamma -\\Psi (m_t, e_t^+)+\\Psi (m_t, e_t^-))$$ (Eq. 10) ",
"When optimizing the objective function, we minimize the rank loss similar to BIBREF0 , BIBREF1 . In this ranking model, a training instance is constructed by pairing a positive target entity $e_t^+$ with a negative entity $e_t^-$ . Where $\\gamma > 0$ is a margin parameter and our purpose is to make the score of the positive target entity $e_t^+$ is at least a margin $\\gamma $ higher than that of negative candidate entity $e_t^-$ .",
"With the local encoder, we obtain the representation of mention context and candidate entities, which will be used as the input into the global encoder and entity selector. In addition, the similarity scores calculated by MLP will be utilized for ranking mentions in the global encoder."
],
[
"In the global encoder module, we aim to enforce the topical coherence among the mentions and their target entities. So, we use an LSTM network which is capable of maintaining the long-term memory to encode the ranked mention sequence. What we need to emphasize is that our global encoder just encode the mentions that have been disambiguated by the entity selector which is denoted as $V_{a_t}$ .",
"As mentioned above, the mentions should be sorted according to their contextual information and topical coherence. So, we firstly divide the adjacent mentions into a segment by the order they appear in the document based on the observation that the topical consistency attenuates along with the distance between the mentions. Then, we sort mentions in a segment based on the local similarity and place the mention that has a higher similarity value in the front of the sequence. In Equation 1, we define the local similarity of $m_i$ and its corresponding candidate entity $e_t^i$ . On this basis, we define $\\Psi _{max}(m_i, e_i^a)$ as the the maximum local similarity between the $m_i$ and its candidate set $C_{m_i} = \\lbrace e_i^1, e_i^2,..., e_i^n\\rbrace $ . We use $\\Psi _{max}(m_i, e_i^a)$ as criterion when sorting mentions. For instance, if $\\Psi _{max}(m_i, e_i^a) > \\Psi _{max}(m_j, e_j^b)$ then we place $m_i$ before $m_j$ . Under this circumstances, the mentions in the front positions may not be able to make better use of global consistency, but their target entities have a high degree of similarity to the context words, which allows them to be disambiguated without relying on additional information. In the end, previous selected target entity information is encoded by global encoder and the encoding result will be served as input to the entity selector.",
"Before using entity selector to choose target entities, we pre-trained the global LSTM network. During the training process, we input not only positive samples but also negative ones to the LSTM. By doing this, we can enhance the robustness of the network. In the global encoder module, we adopt the following cross entropy loss function to train the model. ",
"$$L_{global} = -\\frac{1}{n}\\sum _x{\\left[y\\ln {y^{^{\\prime }}} + (1-y)\\ln (1-y^{^{\\prime }})\\right]}$$ (Eq. 12) ",
"Where $y\\in \\lbrace 0,1\\rbrace $ represents the label of the candidate entity. If the candidate entity is correct $y=1$ , otherwise $y=0$ . $y^{^{\\prime }}\\in (0,1)$ indicates the output of our model. After pre-training the global encoder, we start using the entity selector to choose the target entity for each mention and encode these selections."
],
[
"In the entity selector module, we choose the target entity from candidate set based on the results of local and global encoder. In the process of sequence disambiguation, each selection result will have an impact on subsequent decisions. Therefore, we transform the choice of the target entity into a reinforcement learning problem and view the entity selector as an agent. In particular, the agent is designed as a policy network which can learn a stochastic policy and prevents the agent from getting stuck at an intermediate state BIBREF12 . Under the guidance of policy, the agent can decide which action (choosing the target entity from the candidate set)should be taken at each state, and receive a delay reward when all the selections are made. In the following part, we first describe the state, action and reward. Then, we detail how to select target entity via a policy network.",
"The result of entity selection is based on the current state information. For time $t$ , the state vector $S_t$ is generated as follows: ",
"$$S_t = V_{m_i}^t\\oplus {V_{e_i}^t}\\oplus {V_{feature}^t}\\oplus {V_{e^*}^{t-1}}$$ (Eq. 15) ",
"Where $\\oplus $ indicates vector concatenation. The $V_{m_i}^t$ and $V_{e_i}^t$ respectively denote the vector of $m_i$ and $e_i$ at time $t$ . For each mention, there are multiple candidate entities correspond to it. With the purpose of comparing the semantic relevance between the mention and each candidate entity at the same time, we copy multiple copies of the mention vector. Formally, we extend $V_{m_i}^t \\in \\mathbb {R}^{1\\times {n}}$ to $V_{m_i}^t{^{\\prime }} \\in \\mathbb {R}^{k\\times {n}}$ and then combine it with $V_{e_i}^t \\in \\mathbb {R}^{k\\times {n}}$ . Since $V_{m_i}^t$ and $V_{m_i}^t$0 are mainly to represent semantic information, we add feature vector $V_{m_i}^t$1 to enrich lexical and statistical features. These features mainly include the popularity of the entity, the edit distance between the entity description and the mention context, the number of identical words in the entity description and the mention context etc. After getting these feature values, we combine them into a vector and add it to the current state. In addition, the global vector $V_{m_i}^t$2 is also added to $V_{m_i}^t$3 . As mentioned in global encoder module, $V_{m_i}^t$4 is the output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7 . Thus, the state $V_{m_i}^t$8 contains current information and previous decisions, while also covering the semantic representations and a variety of statistical features. Next, the concatenated vector will be fed into the policy network to generate action.",
"According to the status at each time step, we take corresponding action. Specifically, we define the action at time step $t$ is to select the target entity $e_t^*$ for $m_t$ . The size of action space is the number of candidate entities for each mention, where $a_i \\in \\lbrace 0,1,2...k\\rbrace $ indicates the position of the selected entity in the candidate entity list. Clearly, each action is a direct indicator of target entity selection in our model. After completing all the actions in the sequence we will get a delayed reward.",
"The agent takes the reward value as the feedback of its action and learns the policy based on it. Since current selection result has a long-term impact on subsequent decisions, we don't give an immediate reward when taking an action. Instead, a delay reward is given by follows, which can reflect whether the action improves the overall performance or not. ",
"$$R(a_t) = p(a_t)\\sum _{j=t}^{T}p(a_j) + (1 - p(a_t))(\\sum _{j=t}^{T}p(a_j) + t - T)$$ (Eq. 16) ",
"where $p(a_t)\\in \\lbrace 0,1\\rbrace $ indicates whether the current action is correct or not. When the action is correct $p(a_t)=1$ otherwise $p(a_t)=0$ . Hence $\\sum _{j=t}^{T}p(a_j)$ and $\\sum _{j=t}^{T}p(a_j) + t - T$ respectively represent the number of correct and wrong actions from time t to the end of episode. Based on the above definition, our delayed reward can be used to guide the learning of the policy for entity linking.",
"After defining the state, action, and reward, our main challenge becomes to choose an action from the action space. To solve this problem, we sample the value of each action by a policy network $\\pi _{\\Theta }(a|s)$ . The structure of the policy network is shown in Figure 3. The input of the network is the current state, including the mention context representation, candidate entity representation, feature representation, and encoding of the previous decisions. We concatenate these representations and fed them into a multilayer perceptron, for each hidden layer, we generate the output by: ",
"$$h_i(S_t) = Relu(W_i*h_{i-1}(S_t) + b_i)$$ (Eq. 17) ",
"Where $W_i$ and $ b_i$ are the parameters of the $i$ th hidden layer, through the $relu$ activation function we get the $h_i(S_t)$ . After getting the output of the last hidden layer, we feed it into a softmax layer which generates the probability distribution of actions. The probability distribution is generated as follows: ",
"$$\\pi (a|s) = Softmax(W * h_l(S) + b)$$ (Eq. 18) ",
"Where the $W$ and $b$ are the parameters of the softmax layer. For each mention in the sequence, we will take action to select the target entity from its candidate set. After completing all decisions in the episode, each action will get an expected reward and our goal is to maximize the expected total rewards. Formally, the objective function is defined as: ",
"$$\\begin{split}\nJ(\\Theta ) &= \\mathbb {E}_{(s_t, a_t){\\sim }P_\\Theta {(s_t, a_t)}}R(s_1{a_1}...s_L{a_L}) \\\\\n&=\\sum _{t}\\sum _{a}\\pi _{\\Theta }(a|s)R(a_t)\n\\end{split}$$ (Eq. 19) ",
"Where $P_\\Theta {(s_t, a_t)}$ is the state transfer function, $\\pi _{\\Theta }(a|s)$ indicates the probability of taking action $a$ under the state $s$ , $R(a_t)$ is the expected reward of action $a$ at time step $t$ . According to REINFORCE policy gradient algorithm BIBREF13 , we update the policy gradient by the way of equation 9. ",
"$$\\Theta \\leftarrow \\Theta + \\alpha \\sum _{t}R(a_t)\\nabla _{\\Theta }\\log \\pi _{\\Theta }(a|s)$$ (Eq. 20) ",
"As the global encoder and the entity selector are correlated mutually, we train them jointly after pre-training the two networks. The details of the joint learning are presented in Algorithm 1.",
"[t] The Policy Learning for Entity Selector [1] Training data include multiple documents $D = \\lbrace D_1, D_2, ..., D_N\\rbrace $ The target entity for mentions $\\Gamma = \\lbrace T_1, T_2, ..., T_N\\rbrace $ ",
"Initialize the policy network parameter $\\Theta $ , global LSTM network parameter $\\Phi $ ; $D_k$ in $D$ Generate the candidate set for each mention Divide the mentions in $D_k$ into multiple sequences $S = \\lbrace S_1, S_2, ..., S_N\\rbrace $ ; $S_k$ in $S$ Rank the mentions $M = \\lbrace m_1, m_2, ..., m_n\\rbrace $ in $S_k$ based on the local similarity; $\\Phi $0 in $\\Phi $1 Sample the target entity $\\Phi $2 for $\\Phi $3 with $\\Phi $4 ; Input the $\\Phi $5 and $\\Phi $6 to global LSTM network; $\\Phi $7 End of sampling, update parameters Compute delayed reward $\\Phi $8 for each action; Update the parameter $\\Phi $9 of policy network:",
" $\\Theta \\leftarrow \\Theta + \\alpha \\sum _{t}R(a_t)\\nabla _{\\Theta }\\log \\pi _{\\Theta }(a|s)$ ",
"Update the parameter $\\Phi $ in the global LSTM network"
],
[
"In order to evaluate the effectiveness of our method, we train the RLEL model and validate it on a series of popular datasets that are also used by BIBREF0 , BIBREF1 . To avoid overfitting with one dataset, we use both AIDA-Train and Wikipedia data in the training set. Furthermore, we compare the RLEL with some baseline methods, where our model achieves the state-of-the-art results. We implement our models in Tensorflow and run experiments on 4 Tesla V100 GPU."
],
[
"We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.",
"ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.",
"MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)",
"AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.",
"WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.",
"WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.",
"OURSELF-WIKI is crawled by ourselves from Wikipedia pages.",
"During the training of our RLEL model, we select top K candidate entities for each mention to optimize the memory and run time. In the top K candidate list, we define the recall of correct target entity is $R_t$ . According to our statistics, when K is set to 1, $R_t$ is 0.853, when K is 5, $R_t$ is 0.977, when K increases to 10, $R_t$ is 0.993. Empirically, we choose top 5 candidate entities as the input of our RLEL model. For the entity description, there are lots of redundant information in the wikipedia page, to reduce the impact of noise data, we use TextRank algorithm BIBREF19 to select 15 keywords as description of the entity. Simultaneously, we choose 15 words around mention as its context. In the global LSTM network, when the number of mentions does not reach the set length, we adopt the mention padding strategy. In short, we copy the last mention in the sequence until the number of mentions reaches the set length.",
"We set the dimensions of word embedding and entity embedding to 300, where the word embedding and entity embedding are released by BIBREF20 and BIBREF0 respectively. For parameters of the local LSTM network, the number of LSTM cell units is set to 512, the batch size is 64, and the rank margin $\\gamma $ is 0.1. Similarly, in global LSTM network, the number of LSTM cell units is 700 and the batch size is 16. In the above two LSTM networks, the learning rate is set to 1e-3, the probability of dropout is set to 0.8, and the Adam is utilized as optimizer. In addition, we set the number of MLP layers to 4 and extend the priori feature dimension to 50 in the policy network."
],
[
"We compare RLEL with a series of EL systems which report state-of-the-art results on the test datasets. There are various methods including classification model BIBREF17 , rank model BIBREF21 , BIBREF15 and probability graph model BIBREF18 , BIBREF14 , BIBREF22 , BIBREF0 , BIBREF1 . Except that, Cheng $et$ $al.$ BIBREF23 formulate their global decision problem as an Integer Linear Program (ILP) which incorporates the entity-relation inference. Globerson $et$ $al.$ BIBREF24 introduce a multi-focal attention model which allows each candidate to focus on limited mentions, Yamada $et$ $al.$ BIBREF25 propose a word and entity embedding model specifically designed for EL.",
"We use the standard Accuracy, Precision, Recall and F1 at mention level (Micro) as the evaluation metrics: ",
"$$Accuracy = \\frac{|M \\cap M^*|}{|M \\cup M^*|}$$ (Eq. 31) ",
"$$Precision = \\frac{|M \\cap M^*|}{|M|}$$ (Eq. 32) ",
"where $M^*$ is the golden standard set of the linked name mentions, $M$ is the set of linked name mentions outputted by an EL method.",
"Same as previous work, we use in-KB accuracy and micro F1 to evaluate our method. We first test the model on the AIDA-B dataset. From Table 2, we can observe that our model achieves the best result. Previous best results on this dataset are generated by BIBREF0 , BIBREF1 which both built CRF models. They calculate the pairwise scores between all candidate entities. Differently, our model only considers the consistency of the target entities and ignores the relationship between incorrect candidates. The experimental results show that our model can reduce the impact of noise data and improve the accuracy of disambiguation. Apart from experimenting on AIDA-B, we also conduct experiments on several different datasets to verify the generalization performance of our model.",
"From Table 3, we can see that RLEL has achieved relatively good performances on ACE2004, CWEB and WIKI. At the same time, previous models BIBREF0 , BIBREF1 , BIBREF23 achieve better performances on the news datasets such as MSNBC and AQUINT, but their results on encyclopedia datasets such as WIKI are relatively poor. To avoid overfitting with some datasets and improve the robustness of our model, we not only use AIDA-Train but also add Wikipedia data to the training set. In the end, our model achieve the best overall performance.",
"For most existing EL systems, entities with lower frequency are difficult to disambiguate. To gain further insight, we analyze the accuracy of the AIDA-B dataset for situations where gold entities have low popularity. We divide the gold entities according to their pageviews in wikipedia, the statistical disambiguation results are shown in Table 4. Since some pageviews can not be obtained, we only count part of gold entities. The result indicates that our model is still able to work well for low-frequency entities. But for medium-frequency gold entities, our model doesn't work well enough. The most important reason is that other candidate entities corresponding to these medium-frequency gold entities have higher pageviews and local similarities, which makes the model difficult to distinguish."
],
[
"To demonstrate the effects of RLEL, we evaluate our model under different conditions. First, we evaluate the effect of sequence length on global decision making. Second, we assess whether sorting the mentions have a positive effect on the results. Third, we analysis the results of not adding globally encoding during entity selection. Last, we compare our RL selection strategy with the greedy choice.",
"A document may contain multiple topics, so we do not add all mentions to a single sequence. In practice, we add some adjacent mentions to the sequence and use reinforcement learning to select entities from beginning to end. To analysis the impact of the number of mentions on joint disambiguation, we experiment with sequences on different lengths. The results on AIDA-B are shown in Figure 4. We can see that when the sequence is too short or too long, the disambiguation results are both very poor. When the sequence length is less than 3, delay reward can't work in reinforcement learning, and when the sequence length reaches 5 or more, noise data may be added. Finally, we choose the 4 adjacent mentions to form a sequence.",
"In this section, we test whether ranking mentions is helpful for entity selections. At first, we directly input them into the global encoder by the order they appear in the text. We record the disambiguation results and compare them with the method which adopts ranking mentions. As shown in Figure 5a, the model with ranking mentions has achieved better performances on most of datasets, indicating that it is effective to place the mention that with a higher local similarity in front of the sequence. It is worth noting that the effect of ranking mentions is not obvious on the MSNBC dataset, the reason is that most of mentions in MSNBC have similar local similarities, the order of disambiguation has little effect on the final result.",
"Most of previous methods mainly use the similarities between entities to correlate each other, but our model associates them by encoding the selected entity information. To assess whether the global encoding contributes to disambiguation rather than add noise, we compare the performance with and without adding the global information. When the global encoding is not added, the current state only contains the mention context representation, candidate entity representation and feature representation, notably, the selected target entity information is not taken into account. From the results in Figure 5b, we can see that the model with global encoding achieves an improvement of 4% accuracy over the method that without global encoding.",
"To illustrate the necessity for adopting the reinforcement learning for entity selection, we compare two entity selection strategies like BIBREF5 . Specifically, we perform entity selection respectively with reinforcement learning and greedy choice. The greedy choice is to select the entity with largest local similarity from candidate set. But the reinforcement learning selection is guided by delay reward, which has a global perspective. In the comparative experiment, we keep the other conditions consistent, just replace the RL selection with a greedy choice. Based on the results in Figure 5c, we can draw a conclusion that our entity selector perform much better than greedy strategies."
],
[
"Table 5 shows two entity selection examples by our RLEL model. For multiple mentions appearing in the document, we first sort them according to their local similarities, and select the target entities in order by the reinforcement learning model. From the results of sorting and disambiguation, we can see that our model is able to utilize the topical consistency between mentions and make full use of the selected target entity information."
],
[
"The related work can be roughly divided into two groups: entity linking and reinforcement learning."
],
[
"Entity linking falls broadly into two major approaches: local and global disambiguation. Early studies use local models to resolve mentions independently, they usually disambiguate mentions based on lexical matching between the mention's surrounding words and the entity profile in the reference KB. Various methods have been proposed to model mention's local context ranging from binary classification BIBREF17 to rank models BIBREF26 , BIBREF27 . In these methods, a large number of hand-designed features are applied. For some marginal mentions that are difficult to extract features, researchers also exploit the data retrieved by search engines BIBREF28 , BIBREF29 or Wikipedia sentences BIBREF30 . However, the feature engineering and search engine methods are both time-consuming and laborious. Recently, with the popularity of deep learning models, representation learning is utilized to automatically find semantic features BIBREF31 , BIBREF32 . The learned entity representations which by jointly modeling textual contexts and knowledge base are effective in combining multiple sources of information. To make full use of the information contained in representations, we also utilize the pre-trained entity embeddings in our model.",
"In recent years, with the assumption that the target entities of all mentions in a document shall be related, many novel global models for joint linking are proposed. Assuming the topical coherence among mentions, authors in BIBREF33 , BIBREF34 construct factor graph models, which represent the mention and candidate entities as variable nodes, and exploit factor nodes to denote a series of features. Two recent studies BIBREF0 , BIBREF1 use fully-connected pairwise Conditional Random Field(CRF) model and exploit loopy belief propagation to estimate the max-marginal probability. Moreover, PageRank or Random Walk BIBREF35 , BIBREF18 , BIBREF7 are utilized to select the target entity for each mention. The above probabilistic models usually need to predefine a lot of features and are difficult to calculate the max-marginal probability as the number of nodes increases. In order to automatically learn features from the data, Cao et al. BIBREF9 applies Graph Convolutional Network to flexibly encode entity graphs. However, the graph-based methods are computationally expensive because there are lots of candidate entity nodes in the graph.",
"To reduce the calculation between candidate entity pairs, Globerson et al. BIBREF24 introduce a coherence model with an attention mechanism, where each mention only focus on a fixed number of mentions. Unfortunately, choosing the number of attention mentions is not easy in practice. Two recent studies BIBREF8 , BIBREF36 finish linking all mentions by scanning the pairs of mentions at most once, they assume each mention only needs to be consistent with one another mention in the document. The limitation of their method is that the consistency information is too sparse, resulting in low confidence. Similar to us, Guo et al. BIBREF18 also sort mentions according to the difficulty of disambiguation, but they did not make full use of the information of previously referred entities for the subsequent entity disambiguation. Nguyen et al. BIBREF2 use the sequence model, but they simply encode the results of the greedy choice, and measure the similarities between the global encoding and the candidate entity representations. Their model does not consider the long-term impact of current decisions on subsequent choices, nor does they add the selected target entity information to the current state to help disambiguation."
],
[
"In the last few years, reinforcement learning has emerged as a powerful tool for solving complex sequential decision-making problems. It is well known for its great success in the game field, such as Go BIBREF37 and Atari games BIBREF38 . Recently, reinforcement learning has also been successfully applied to many natural language processing tasks and achieved good performance BIBREF12 , BIBREF39 , BIBREF5 . Feng et al. BIBREF5 used reinforcement learning for relation classification task by filtering out the noisy data from the sentence bag and they achieved huge improvements compared with traditional classifiers. Zhang et al. BIBREF40 applied the reinforcement learning on sentence representation by automatically discovering task-relevant structures. To automatic taxonomy induction from a set of terms, Han et al. BIBREF41 designed an end-to-end reinforcement learning model to determine which term to select and where to place it on the taxonomy, which effectively reduced the error propagation between two phases. Inspired by the above works, we also add reinforcement learning to our framework."
],
[
"In this paper we consider entity linking as a sequence decision problem and present a reinforcement learning based model. Our model learns the policy on selecting target entities in a sequential manner and makes decisions based on current state and previous ones. By utilizing the information of previously referred entities, we can take advantage of global consistency to disambiguate mentions. For each selection result in the current state, it also has a long-term impact on subsequent decisions, which allows learned policy strategy has a global view. In experiments, we evaluate our method on AIDA-B and other well-known datasets, the results show that our system outperforms state-of-the-art solutions. In the future, we would like to use reinforcement learning to detect mentions and determine which mention should be firstly disambiguated in the document.",
"This research is supported by the GS501100001809National Key Research and Development Program of China (No. GS5011000018092018YFB1004703), GS501100001809the Beijing Municipal Science and Technology Project under grant (No. GS501100001809",
"Z181100002718004), and GS501100001809the National Natural Science Foundation of China grants(No. GS50110000180961602466)."
]
],
"section_name": [
"Introduction",
"Methodology",
"Preliminaries",
"Local Encoder",
"Global Encoder",
"Entity Selector",
"Experiment",
"Experiment Setup",
"Comparing with Previous Work",
"Discussion on different RLEL variants",
"Case Study",
"Related Work",
"Entity Linking",
"Reinforcement Learning",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"42325ec6f5639d307e01d65ebd24c589954df837"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2846a1ba6ad38fa848bcf90df690ea6e75a070e4"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Compare our model with other baseline methods on different types of datasets. The evaluation metric is micro F1."
],
"extractive_spans": [],
"free_form_answer": "Comparing with the highest performing baseline: 1.3 points on ACE2004 dataset, 0.6 points on CWEB dataset, and 0.86 points in the average of all scores.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Compare our model with other baseline methods on different types of datasets. The evaluation metric is micro F1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"007037927f1cabc42b0b0cd366c3fcf15becbf73",
"e9393b6c500f4ea6a8a0cb2df9c7307139c5cb0c"
],
"answer": [
{
"evidence": [
"We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.",
"ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.",
"MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)",
"AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.",
"WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.",
"WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation."
],
"extractive_spans": [
"AIDA-B",
"ACE2004",
"MSNBC",
"AQUAINT",
"WNED-CWEB",
"WNED-WIKI"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. ",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.\n\nACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.\n\nMSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)\n\nAQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.\n\nWNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.\n\nWNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.",
"ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.",
"MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)",
"AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.",
"WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.",
"WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.",
"OURSELF-WIKI is crawled by ourselves from Wikipedia pages."
],
"extractive_spans": [
"AIDA-CoNLL",
"ACE2004",
"MSNBC",
"AQUAINT",
"WNED-CWEB",
"WNED-WIKI",
"OURSELF-WIKI"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct experiments on several different types of public datasets including news and encyclopedia corpus. ",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.\n\nACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.\n\nMSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)\n\nAQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.\n\nWNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.\n\nWNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.\n\nOURSELF-WIKI is crawled by ourselves from Wikipedia pages."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"af84319f3ae34ff40bb5f030903e56a43afe43ab"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 2: The overall structure of our RLEL model. It contains three parts: Local Encoder, Global Encoder and Entity Selector. In this framework, (Vmt ,Vekt ) denotes the concatenation of the mention context vector Vmt and one candidate entity vector Vekt . The policy network selects one entity from the candidate set, and Vat denotes the concatenation of the mention context vector Vmt and the selected entity vector Ve∗t . ht represents the hidden status of Vat , and it will be fed into St+1.",
"Where $\\oplus $ indicates vector concatenation. The $V_{m_i}^t$ and $V_{e_i}^t$ respectively denote the vector of $m_i$ and $e_i$ at time $t$ . For each mention, there are multiple candidate entities correspond to it. With the purpose of comparing the semantic relevance between the mention and each candidate entity at the same time, we copy multiple copies of the mention vector. Formally, we extend $V_{m_i}^t \\in \\mathbb {R}^{1\\times {n}}$ to $V_{m_i}^t{^{\\prime }} \\in \\mathbb {R}^{k\\times {n}}$ and then combine it with $V_{e_i}^t \\in \\mathbb {R}^{k\\times {n}}$ . Since $V_{m_i}^t$ and $V_{m_i}^t$0 are mainly to represent semantic information, we add feature vector $V_{m_i}^t$1 to enrich lexical and statistical features. These features mainly include the popularity of the entity, the edit distance between the entity description and the mention context, the number of identical words in the entity description and the mention context etc. After getting these feature values, we combine them into a vector and add it to the current state. In addition, the global vector $V_{m_i}^t$2 is also added to $V_{m_i}^t$3 . As mentioned in global encoder module, $V_{m_i}^t$4 is the output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7 . Thus, the state $V_{m_i}^t$8 contains current information and previous decisions, while also covering the semantic representations and a variety of statistical features. Next, the concatenated vector will be fed into the policy network to generate action."
],
"extractive_spans": [
"output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 2: The overall structure of our RLEL model. It contains three parts: Local Encoder, Global Encoder and Entity Selector. In this framework, (Vmt ,Vekt ) denotes the concatenation of the mention context vector Vmt and one candidate entity vector Vekt . The policy network selects one entity from the candidate set, and Vat denotes the concatenation of the mention context vector Vmt and the selected entity vector Ve∗t . ht represents the hidden status of Vat , and it will be fed into St+1.",
"As mentioned in global encoder module, $V_{m_i}^t$4 is the output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How fast is the model compared to baselines?",
"How big is the performance difference between this method and the baseline?",
"What datasets used for evaluation?",
"what are the mentioned cues?"
],
"question_id": [
"9aca4b89e18ce659c905eccc78eda76af9f0072a",
"b0376a7f67f1568a7926eff8ff557a93f434a253",
"dad8cc543a87534751f9f9e308787e1af06f0627",
"0481a8edf795768d062c156875d20b8fb656432c"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"Entity linking",
"Entity linking",
"Entity linking",
"Entity linking"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Illustration of mentions in the free text and their candidate entities in the knowledge base. Solid black lines point to the correct target entities corresponding to the mentions and to the descriptions of these correct target entities. Solid red lines indicate the consistency between correct target entities and the orange dashed lines denote the consistency between wrong candidate entities.",
"Figure 2: The overall structure of our RLEL model. It contains three parts: Local Encoder, Global Encoder and Entity Selector. In this framework, (Vmt ,Vekt ) denotes the concatenation of the mention context vector Vmt and one candidate entity vector Vekt . The policy network selects one entity from the candidate set, and Vat denotes the concatenation of the mention context vector Vmt and the selected entity vector Ve∗t . ht represents the hidden status of Vat , and it will be fed into St+1.",
"Figure 3: The architecture of policy network. It is a feedforward neural network and the input consists of four parts: mention context representation, candidate entity representation, feature representation, and encoding of the previous decisions.",
"Table 1: Statistics of document and mention numbers on experimental datasets.",
"Table 2: In-KB accuracy result on AIDA-B dataset.",
"Table 3: Compare our model with other baseline methods on different types of datasets. The evaluation metric is micro F1.",
"Figure 4: The performance of models with different sequence lengths on AIDA-B dataset.",
"Table 4: The micro F1 of gold entities with different pageviews on part of AIDA-B dataset.",
"Figure 5: The comparative experiments of RLEL model.",
"Table 5: Entity selection examples by our RLEL model."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Figure4-1.png",
"7-Table4-1.png",
"8-Figure5-1.png",
"8-Table5-1.png"
]
} | [
"How big is the performance difference between this method and the baseline?"
] | [
[
"1902.00330-7-Table3-1.png"
]
] | [
"Comparing with the highest performing baseline: 1.3 points on ACE2004 dataset, 0.6 points on CWEB dataset, and 0.86 points in the average of all scores."
] | 42 |
1909.00542 | Classification Betters Regression in Query-based Multi-document Summarisation Techniques for Question Answering: Macquarie University at BioASQ7b | Task B Phase B of the 2019 BioASQ challenge focuses on biomedical question answering. Macquarie University's participation applies query-based multi-document extractive summarisation techniques to generate a multi-sentence answer given the question and the set of relevant snippets. In past participation we explored the use of regression approaches using deep learning architectures and a simple policy gradient architecture. For the 2019 challenge we experiment with the use of classification approaches with and without reinforcement learning. In addition, we conduct a correlation analysis between various ROUGE metrics and the BioASQ human evaluation scores. | {
"paragraphs": [
[
"The BioASQ Challenge includes a question answering task (Phase B, part B) where the aim is to find the “ideal answer” — that is, an answer that would normally be given by a person BIBREF0. This is in contrast with most other question answering challenges where the aim is normally to give an exact answer, usually a fact-based answer or a list. Given that the answer is based on an input that consists of a biomedical question and several relevant PubMed abstracts, the task can be seen as an instance of query-based multi-document summarisation.",
"As in past participation BIBREF1, BIBREF2, we wanted to test the use of deep learning and reinforcement learning approaches for extractive summarisation. In contrast with past years where the training procedure was based on a regression set up, this year we experiment with various classification set ups. The main contributions of this paper are:",
"We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels.",
"We conduct correlation analysis between various ROUGE evaluation metrics and the human evaluations conducted at BioASQ and show that Precision and F1 correlate better than Recall.",
"Section SECREF2 briefly introduces some related work for context. Section SECREF3 describes our classification and regression experiments. Section SECREF4 details our experiments using deep learning architectures. Section SECREF5 explains the reinforcement learning approaches. Section SECREF6 shows the results of our correlation analysis between ROUGE scores and human annotations. Section SECREF7 lists the specific runs submitted at BioASQ 7b. Finally, Section SECREF8 concludes the paper."
],
[
"The BioASQ challenge has organised annual challenges on biomedical semantic indexing and question answering since 2013 BIBREF0. Every year there has been a task about semantic indexing (task a) and another about question answering (task b), and occasionally there have been additional tasks. The tasks defined for 2019 are:",
"Large Scale Online Biomedical Semantic Indexing.",
"Biomedical Semantic QA involving Information Retrieval (IR), Question Answering (QA), and Summarisation.",
"Medical Semantic Indexing in Spanish.",
"BioASQ Task 7b consists of two phases. Phase A provides a biomedical question as an input, and participants are expected to find relevant concepts from designated terminologies and ontologies, relevant articles from PubMed, relevant snippets from the relevant articles, and relevant RDF triples from designated ontologies. Phase B provides a biomedical question and a list of relevant articles and snippets, and participant systems are expected to return the exact answers and the ideal answers. The training data is composed of the test data from all previous years, and amounts to 2,747 samples. There has been considerable research on the use of machine learning approaches for tasks related to text summarisation, especially on single-document summarisation. Abstractive approaches normally use an encoder-decoder architecture and variants of this architecture incorporate attention BIBREF3 and pointer-generator BIBREF4. Recent approaches leveraged the use of pre-trained models BIBREF5. Recent extractive approaches to summarisation incorporate recurrent neural networks that model sequences of sentence extractions BIBREF6 and may incorporate an abstractive component and reinforcement learning during the training stage BIBREF7. But relatively few approaches have been proposed for query-based multi-document summarisation. Table TABREF8 summarises the approaches presented in the proceedings of the 2018 BioASQ challenge."
],
[
"Our past participation in BioASQ BIBREF1, BIBREF2 and this paper focus on extractive approaches to summarisation. Our decision to focus on extractive approaches is based on the observation that a relatively large number of sentences from the input snippets has very high ROUGE scores, thus suggesting that human annotators had a general tendency to copy text from the input to generate the target summaries BIBREF1. Our past participating systems used regression approaches using the following framework:",
"Train the regressor to predict the ROUGE-SU4 F1 score of the input sentence.",
"Produce a summary by selecting the top $n$ input sentences.",
"A novelty in the current participation is the introduction of classification approaches using the following framework.",
"Train the classifier to predict the target label (“summary” or “not summary”) of the input sentence.",
"Produce a summary by selecting all sentences predicted as “summary”.",
"If the total number of sentences selected is less than $n$, select $n$ sentences with higher probability of label “summary”.",
"Introducing a classifier makes labelling the training data not trivial, since the target summaries are human-generated and they do not have a perfect mapping to the input sentences. In addition, some samples have multiple reference summaries. BIBREF11 showed that different data labelling approaches influence the quality of the final summary, and some labelling approaches may lead to better results than using regression. In this paper we experiment with the following labelling approaches:",
": Label as “summary” all sentences from the input text that have a ROUGE score above a threshold $t$.",
": Label as “summary” the $m$ input text sentences with highest ROUGE score.",
"As in BIBREF11, The ROUGE score of an input sentence was the ROUGE-SU4 F1 score of the sentence against the set of reference summaries.",
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.",
"Preliminary experiments showed a relatively high number of cases where the classifier did not classify any of the input sentences as “summary”. To solve this problem, and as mentioned above, the summariser used in Table TABREF26 introduces a backoff step that extracts the $n$ sentences with highest predicted values when the summary has less than $n$ sentences. The value of $n$ is as reported in our prior work and shown in Table TABREF25.",
"The results confirm BIBREF11's finding that classification outperforms regression. However, the actual choice of optimal labelling scheme was different: whereas in BIBREF11 the optimal labelling was based on a labelling threshold of 0.1, our experiments show a better result when using the top 5 sentences as the target summary. The reason for this difference might be the fact that BIBREF11 used all sentences from the abstracts of the relevant PubMed articles, whereas we use only the snippets as the input to our summariser. Consequently, the number of input sentences is now much smaller. We therefore report the results of using the labelling schema of top 5 snippets in all subsequent classifier-based experiments of this paper.",
"barchart=[fill=black!20,draw=black] errorbar=[very thin,draw=black!75] sscale=[very thin,draw=black!75]"
],
[
"Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models. In particular, we add a sigmoid activation to the final layer, and use cross-entropy as the loss function. The complete architecture is shown in Fig. FIGREF28.",
"The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence. Of interest is the fact that “NNC SU4 F1” outperforms the neural regressor. We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1, which matches the full range of probability values that can be returned by the sigmoid activation of the classifier final layer.",
"Table TABREF26 also shows the standard deviation across the cross-validation folds. Whereas this standard deviation is fairly large compared with the differences in results, in general the results are compatible with the top part of the table and prior work suggesting that classification-based approaches improve over regression-based approaches."
],
[
"We also experiment with the use of reinforcement learning techniques. Again these experiments are based on BIBREF2, who uses REINFORCE to train a global policy. The policy predictor uses a simple feedforward network with a hidden layer.",
"The results reported by BIBREF2 used ROUGE Recall and indicated no improvement with respect to deep learning architectures. Human evaluation results are preferable over ROUGE but these were made available after the publication of the paper. When comparing the ROUGE and human evaluation results (Table TABREF29), we observe an inversion of the results. In particular, the reinforcement learning approaches (RL) of BIBREF2 receive good human evaluation results, and as a matter of fact they are the best of our runs in two of the batches. In contrast, the regression systems (NNR) fare relatively poorly. Section SECREF6 expands on the comparison between the ROUGE and human evaluation scores.",
"Encouraged by the results of Table TABREF29, we decided to continue with our experiments with reinforcement learning. We use the same features as in BIBREF2, namely the length (in number of sentences) of the summary generated so far, plus the $tf.idf$ vectors of the following:",
"Candidate sentence;",
"Entire input to summarise;",
"Summary generated so far;",
"Candidate sentences that are yet to be processed; and",
"Question.",
"The reward used by REINFORCE is the ROUGE value of the summary generated by the system. Since BIBREF2 observed a difference between the ROUGE values of the Python implementation of ROUGE and the original Perl version (partly because the Python implementation does not include ROUGE-SU4), we compare the performance of our system when trained with each of them. Table TABREF35 summarises some of our experiments. We ran the version trained on Python ROUGE once, and the version trained on Perl twice. The two Perl runs have different results, and one of them clearly outperforms the Python run. However, given the differences of results between the two Perl runs we advice to re-run the experiments multiple times and obtain the mean and standard deviation of the runs before concluding whether there is any statistical difference between the results. But it seems that there may be an improvement of the final evaluation results when training on the Perl ROUGE values, presumably because the final evaluation results are measured using the Perl implementation of ROUGE.",
"We have also tested the use of word embeddings instead of $tf.idf$ as input features to the policy model, while keeping the same neural architecture for the policy (one hidden layer using the same number of hidden nodes). In particular, we use the mean of word embeddings using 100 and 200 dimensions. These word embeddings were pre-trained using word2vec on PubMed documents provided by the organisers of BioASQ, as we did for the architectures described in previous sections. The results, not shown in the paper, indicated no major improvement, and re-runs of the experiments showed different results on different runs. Consequently, our submission to BioASQ included the original system using $tf.idf$ as input features in all batches but batch 2, as described in Section SECREF7."
],
[
"As mentioned in Section SECREF5, there appears to be a large discrepancy between ROUGE Recall and the human evaluations. This section describes a correlation analysis between human and ROUGE evaluations using the runs of all participants to all previous BioASQ challenges that included human evaluations (Phase B, ideal answers). The human evaluation results were scraped from the BioASQ Results page, and the ROUGE results were kindly provided by the organisers. We compute the correlation of each of the ROUGE metrics (recall, precision, F1 for ROUGE-2 and ROUGE-SU4) against the average of the human scores. The correlation metrics are Pearson, Kendall, and a revised Kendall correlation explained below.",
"The Pearson correlation between two variables is computed as the covariance of the two variables divided by the product of their standard deviations. This correlation is a good indication of a linear relation between the two variables, but may not be very effective when there is non-linear correlation.",
"The Spearman rank correlation and the Kendall rank correlation are two of the most popular among metrics that aim to detect non-linear correlations. The Spearman rank correlation between two variables can be computed as the Pearson correlation between the rank values of the two variables, whereas the Kendall rank correlation measures the ordinal association between the two variables using Equation DISPLAY_FORM36.",
"It is useful to account for the fact that the results are from 28 independent sets (3 batches in BioASQ 1 and 5 batches each year between BioASQ 2 and BioASQ 6). We therefore also compute a revised Kendall rank correlation measure that only considers pairs of variable values within the same set. The revised metric is computed using Equation DISPLAY_FORM37, where $S$ is the list of different sets.",
"Table TABREF38 shows the results of all correlation metrics. Overall, ROUGE-2 and ROUGE-SU4 give similar correlation values but ROUGE-SU4 is marginally better. Among precision, recall and F1, both precision and F1 are similar, but precision gives a better correlation. Recall shows poor correlation, and virtually no correlation when using the revised Kendall measure. For reporting the evaluation of results, it will be therefore more useful to use precision or F1. However, given the small difference between precision and F1, and given that precision may favour short summaries when used as a function to optimise in a machine learning setting (e.g. using reinforcement learning), it may be best to use F1 as the metric to optimise.",
"Fig. FIGREF40 shows the scatterplots of ROUGE-SU4 recall, precision and F1 with respect to the average human evaluation. We observe that the relation between ROUGE and the human evaluations is not linear, and that Precision and F1 have a clear correlation."
],
[
"Table TABREF41 shows the results and details of the runs submitted to BioASQ. The table uses ROUGE-SU4 Recall since this is the metric available at the time of writing this paper. However, note that, as explained in Section SECREF6, these results might differ from the final human evaluation results. Therefore we do not comment on the results, other than observing that the “first $n$” baseline produces the same results as the neural regressor. As mentioned in Section SECREF3, the labels used for the classification experiments are the 5 sentences with highest ROUGE-SU4 F1 score."
],
[
"Macquarie University's participation in BioASQ 7 focused on the task of generating the ideal answers. The runs use query-based extractive techniques and we experiment with classification, regression, and reinforcement learning approaches. At the time of writing there were no human evaluation results, and based on ROUGE-F1 scores under cross-validation on the training data we observed that classification approaches outperform regression approaches. We experimented with several approaches to label the individual sentences for the classifier and observed that the optimal labelling policy for this task differed from prior work.",
"We also observed poor correlation between ROUGE-Recall and human evaluation metrics and suggest to use alternative automatic evaluation metrics with better correlation, such as ROUGE-Precision or ROUGE-F1. Given the nature of precision-based metrics which could bias the system towards returning short summaries, ROUGE-F1 is probably more appropriate when using at development time, for example for the reward function used by a reinforcement learning system.",
"Reinforcement learning gives promising results, especially in human evaluations made on the runs submitted to BioASQ 6b. This year we introduced very small changes to the runs using reinforcement learning, and will aim to explore more complex reinforcement learning strategies and more complex neural models in the policy and value estimators."
]
],
"section_name": [
"Introduction",
"Related Work",
"Classification vs. Regression Experiments",
"Deep Learning Models",
"Reinforcement Learning",
"Evaluation Correlation Analysis",
"Submitted Runs",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"be76304cc653b787c5b7c0d4f88dbfbafd20e537"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ada830beff3690f98d83d92a55dc600fd8f87d0c",
"dd13f22ac95caf0d6996852322bdb192ffdf3ba9"
],
"answer": [
{
"evidence": [
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.",
"The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence. Of interest is the fact that “NNC SU4 F1” outperforms the neural regressor. We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1, which matches the full range of probability values that can be returned by the sigmoid activation of the classifier final layer."
],
"extractive_spans": [],
"free_form_answer": "classification, regression, neural methods",
"highlighted_evidence": [
"The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively.",
"The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.",
"Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models. In particular, we add a sigmoid activation to the final layer, and use cross-entropy as the loss function. The complete architecture is shown in Fig. FIGREF28."
],
"extractive_spans": [
" Support Vector Regression (SVR) and Support Vector Classification (SVC)",
"deep learning regression models of BIBREF2 to convert them to classification models"
],
"free_form_answer": "",
"highlighted_evidence": [
"The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively.",
"Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"00aa8254441466bf3eb8d92b5cb8e6f0ccba0fcb"
],
"answer": [
{
"evidence": [
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.",
"The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence. Of interest is the fact that “NNC SU4 F1” outperforms the neural regressor. We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1, which matches the full range of probability values that can be returned by the sigmoid activation of the classifier final layer."
],
"extractive_spans": [
"NNC SU4 F1",
"NNC top 5",
"Support Vector Classification (SVC)"
],
"free_form_answer": "",
"highlighted_evidence": [
"The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively.",
"The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"74f77e49538c04f04248ecb1687279386942ee72"
],
"answer": [
{
"evidence": [
"We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How did the author's work rank among other submissions on the challenge?",
"What approaches without reinforcement learning have been tried?",
"What classification approaches were experimented for this task?",
"Did classification models perform better than previous regression one?"
],
"question_id": [
"b6a4ab009e6f213f011320155a7ce96e713c11cf",
"cfffc94518d64cb3c8789395707e4336676e0345",
"f60629c01f99de3f68365833ee115b95a3388699",
"a7cb4f8e29fd2f3d1787df64cd981a6318b65896"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1. Summarisation techniques used in BioASQ 6b for the generation of ideal answers. The evaluation result is the human evaluation of the best run.",
"Fig. 2. Architecture of the neural classification and regression systems. A matrix of pre-trained word embeddings (same pre-trained vectors as in Fig. 1) is used to find the embeddings of the words of the input sentence and the question. Then, LSTM chains are used to generate sentence embeddings — the weights of the LSTM chains of input sentence and question are not shared. Then, the sentence position is concatenated to the sentence embedding and the similarity of sentence and question embeddings, implemented as a product. A final layer predicts the label of the sentence.",
"Table 5. Experiments using Perl and Python versions of ROUGE. The Python version used the average of ROUGE-2 and ROUGE-L, whereas the Perl version used ROUGESU4.",
"Table 6. Correlation analysis of evaluation results",
"Table 7. Runs submitted to BioASQ 7b",
"Fig. 3. Scatterplots of ROUGE SU4 evaluation metrics against the average human evaluations."
],
"file": [
"3-Table1-1.png",
"6-Figure2-1.png",
"8-Table5-1.png",
"9-Table6-1.png",
"10-Table7-1.png",
"11-Figure3-1.png"
]
} | [
"What approaches without reinforcement learning have been tried?"
] | [
[
"1909.00542-Classification vs. Regression Experiments-11",
"1909.00542-Deep Learning Models-0",
"1909.00542-Deep Learning Models-1"
]
] | [
"classification, regression, neural methods"
] | 43 |
1810.06743 | Marrying Universal Dependencies and Universal Morphology | The Universal Dependencies (UD) and Universal Morphology (UniMorph) projects each present schemata for annotating the morphosyntactic details of language. Each project also provides corpora of annotated text in many languages - UD at the token level and UniMorph at the type level. As each corpus is built by different annotators, language-specific decisions hinder the goal of universal schemata. With compatibility of tags, each project's annotations could be used to validate the other's. Additionally, the availability of both type- and token-level resources would be a boon to tasks such as parsing and homograph disambiguation. To ease this interoperability, we present a deterministic mapping from Universal Dependencies v2 features into the UniMorph schema. We validate our approach by lookup in the UniMorph corpora and find a macro-average of 64.13% recall. We also note incompatibilities due to paucity of data on either side. Finally, we present a critical evaluation of the foundations, strengths, and weaknesses of the two annotation projects. | {
"paragraphs": [
[
"The two largest standardized, cross-lingual datasets for morphological annotation are provided by the Universal Dependencies BIBREF1 and Universal Morphology BIBREF2 , BIBREF3 projects. Each project's data are annotated according to its own cross-lingual schema, prescribing how features like gender or case should be marked. The schemata capture largely similar information, so one may want to leverage both UD's token-level treebanks and UniMorph's type-level lookup tables and unify the two resources. This would permit a leveraging of both the token-level UD treebanks and the type-level UniMorph tables of paradigms. Unfortunately, neither resource perfectly realizes its schema. On a dataset-by-dataset basis, they incorporate annotator errors, omissions, and human decisions when the schemata are underspecified; one such example is in fig:disagreement.",
"A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.",
"This tool enables a synergistic use of UniMorph and Universal Dependencies, as well as teasing out the annotation discrepancies within and across projects. When one dataset disobeys its schema or disagrees with a related language, the flaws may not be noticed except by such a methodological dive into the resources. When the maintainers of the resources ameliorate these flaws, the resources move closer to the goal of a universal, cross-lingual inventory of features for morphological annotation.",
"The contributions of this work are:"
],
[
"Morphological inflection is the act of altering the base form of a word (the lemma, represented in fixed-width type) to encode morphosyntactic features. As an example from English, prove takes on the form proved to indicate that the action occurred in the past. (We will represent all surface forms in quotation marks.) The process occurs in the majority of the world's widely-spoken languages, typically through meaningful affixes. The breadth of forms created by inflection creates a challenge of data sparsity for natural language processing: The likelihood of observing a particular word form diminishes.",
"A classic result in psycholinguistics BIBREF4 shows that inflectional morphology is a fully productive process. Indeed, it cannot be that humans simply have the equivalent of a lookup table, where they store the inflected forms for retrieval as the syntactic context requires. Instead, there needs to be a mental process that can generate properly inflected words on demand. BIBREF4 showed this insightfully through the wug-test, an experiment where she forced participants to correctly inflect out-of-vocabulary lemmata, such as the novel noun wug.",
"Certain features of a word do not vary depending on its context: In German or Spanish where nouns are gendered, the word for onion will always be grammatically feminine. Thus, to prepare for later discussion, we divide the morphological features of a word into two categories: the modifiable inflectional features and the fixed lexical features.",
"A part of speech (POS) is a coarse syntactic category (like verb) that begets a word's particular menu of lexical and inflectional features. In English, verbs express no gender, and adjectives do not reflect person or number. The part of speech dictates a set of inflectional slots to be filled by the surface forms. Completing these slots for a given lemma and part of speech gives a paradigm: a mapping from slots to surface forms. Regular English verbs have five slots in their paradigm BIBREF5 , which we illustrate for the verb prove, using simple labels for the forms in tab:ptb.",
"A morphosyntactic schema prescribes how language can be annotated—giving stricter categories than our simple labels for prove—and can vary in the level of detail provided. Part of speech tags are an example of a very coarse schema, ignoring details of person, gender, and number. A slightly finer-grained schema for English is the Penn Treebank tagset BIBREF6 , which includes signals for English morphology. For instance, its VBZ tag pertains to the specially inflected 3rd-person singular, present-tense verb form (e.g. proves in tab:ptb).",
"If the tag in a schema is detailed enough that it exactly specifies a slot in a paradigm, it is called a morphosyntactic description (MSD). These descriptions require varying amounts of detail: While the English verbal paradigm is small enough to fit on a page, the verbal paradigm of the Northeast Caucasian language Archi can have over 1500000 slots BIBREF7 ."
],
[
"Unlike the Penn Treebank tags, the UD and UniMorph schemata are cross-lingual and include a fuller lexicon of attribute-value pairs, such as Person: 1. Each was built according to a different set of principles. UD's schema is constructed bottom-up, adapting to include new features when they're identified in languages. UniMorph, conversely, is top-down: A cross-lingual survey of the literature of morphological phenomena guided its design. UniMorph aims to be linguistically complete, containing all known morphosyntactic attributes. Both schemata share one long-term goal: a total inventory for annotating the possible morphosyntactic features of a word."
],
[
"The Universal Dependencies morphological schema comprises part of speech and 23 additional attributes (also called features in UD) annotating meaning or syntax, as well as language-specific attributes. In order to ensure consistent annotation, attributes are included into the general UD schema if they occur in several corpora. Language-specific attributes are used when only one corpus annotates for a specific feature.",
"The UD schema seeks to balance language-specific and cross-lingual concerns. It annotates for both inflectional features such as case and lexical features such as gender. Additionally, the UD schema annotates for features which can be interpreted as derivational in some languages. For example, the Czech UD guidance uses a Coll value for the Number feature to denote mass nouns (for example, \"lidstvo\" \"humankind\" from the root \"lid\" \"people\").",
"UD represents a confederation of datasets BIBREF8 annotated with dependency relationships (which are not the focus of this work) and morphosyntactic descriptions. Each dataset is an annotated treebank, making it a resource of token-level annotations. The schema is guided by these treebanks, with feature names chosen for relevance to native speakers. (In sec:unimorph, we will contrast this with UniMorph's treatment of morphosyntactic categories.) The UD datasets have been used in the CoNLL shared tasks BIBREF9 ."
],
[
"In the Universal Morphological Feature Schema BIBREF10 , there are at least 212 values, spread across 23 attributes. It identifies some attributes that UD excludes like information structure and deixis, as well as providing more values for certain attributes, like 23 different noun classes endemic to Bantu languages. As it is a schema for marking morphology, its part of speech attribute does not have POS values for punctuation, symbols, or miscellany (Punct, Sym, and X in Universal Dependencies).",
"Like the UD schema, the decomposition of a word into its lemma and MSD is directly comparable across languages. Its features are informed by a distinction between universal categories, which are widespread and psychologically real to speakers; and comparative concepts, only used by linguistic typologists to compare languages BIBREF11 . Additionally, it strives for identity of meaning across languages, not simply similarity of terminology. As a prime example, it does not regularly label a dative case for nouns, for reasons explained in depth by BIBREF11 .",
"The UniMorph resources for a language contain complete paradigms extracted from Wiktionary BIBREF12 , BIBREF13 . Word types are annotated to form a database, mapping a lemma–tag pair to a surface form. The schema is explained in detail in BIBREF10 . It has been used in the SIGMORPHON shared task BIBREF14 and the CoNLL–SIGMORPHON shared tasks BIBREF15 , BIBREF16 . Several components of the UniMorph schema have been adopted by UD."
],
[
"While the two schemata annotate different features, their annotations often look largely similar. Consider the attested annotation of the Spanish word mandaba (I/he/she/it) commanded. tab:annotations shows that these annotations share many attributes.",
"Some conversions are straightforward: VERB to V, Mood=Ind to IND, Number=Sing to SG, and Person=3 to 3. One might also suggest mapping Tense=Imp to IPFV, though this crosses semantic categories: IPFV represents the imperfective aspect, whereas Tense=Imp comes from imperfect, the English name often given to Spanish's pasado continuo form. The imperfect is a verb form which combines both past tense and imperfective aspect. UniMorph chooses to split this into the atoms PST and IPFV, while UD unifies them according to the familiar name of the tense."
],
[
"Prima facie, the alignment task may seem trivial. But we've yet to explore the humans in the loop. This conversion is a hard problem because we're operating on idealized schemata. We're actually annotating human decisions—and human mistakes. If both schemata were perfectly applied, their overlapping attributes could be mapped to each other simply, in a cross-lingual and totally general way. Unfortunately, the resources are imperfect realizations of their schemata. The cross-lingual, cross-resource, and within-resource problems that we'll note mean that we need a tailor-made solution for each language.",
"Showcasing their schemata, the Universal Dependencies and UniMorph projects each present large, annotated datasets. UD's v2.1 release BIBREF1 has 102 treebanks in 60 languages. The large resource, constructed by independent parties, evinces problems in the goal of a universal inventory of annotations. Annotators may choose to omit certain values (like the coerced gender of refrescante in fig:disagreement), and they may disagree on how a linguistic concept is encoded. (See, e.g., BIBREF11 's ( BIBREF11 ) description of the dative case.) Additionally, many of the treebanks were created by fully- or semi-automatic conversion from treebanks with less comprehensive annotation schemata than UD BIBREF0 . For instance, the Spanish word vas you go is incorrectly labeled Gender: Fem|Number: Pl because it ends in a character sequence which is common among feminine plural nouns. (Nevertheless, the part of speech field for vas is correct.)",
"UniMorph's development is more centralized and pipelined. Inflectional paradigms are scraped from Wiktionary, annotators map positions in the scraped data to MSDs, and the mapping is automatically applied to all of the scraped paradigms. Because annotators handle languages they are familiar with (or related ones), realization of the schema is also done on a language-by-language basis. Further, the scraping process does not capture lexical aspects that are not inflected, like noun gender in many languages. The schema permits inclusion of these details; their absence is an artifact of the data collection process. Finally, UniMorph records only exist for nouns, verbs, and adjectives, though the schema is broader than these categories.",
"For these reasons, we treat the corpora as imperfect realizations of the schemata. Moreover, we contend that ambiguity in the schemata leave the door open to allow for such imperfections. With no strict guidance, it's natural that annotators would take different paths. Nevertheless, modulo annotator disagreement, we assume that within a particular corpus, one word form will always be consistently annotated.",
"Three categories of annotation difficulty are missing values, language-specific attributes, and multiword expressions."
],
[
"In our work, the goal is not simply to translate one schema into the other, but to translate one resource (the imperfect manifestation of the schema) to match the other. The differences between the schemata and discrepancies in annotation mean that the transformation of annotations from one schema to the other is not straightforward.",
"Two naive options for the conversion are a lookup table of MSDs and a lookup table of the individual attribute-value pairs which comprise the MSDs. The former is untenable: the table of all UD feature combinations (including null features, excluding language-specific attributes) would have 2.445e17 entries. Of course, most combinations won't exist, but this gives a sense of the table's scale. Also, it doesn't leverage the factorial nature of the annotations: constructing the table would require a massive duplication of effort. On the other hand, attribute-value lookup lacks the flexibility to show how a pair of values interacts. Neither approach would handle language- and annotator-specific tendencies in the corpora.",
"Our approach to converting UD MSDs to UniMorph MSDs begins with the attribute-value lookup, then amends it on a language-specific basis. Alterations informed by the MSD and the word form, like insertion, substitution, and deletion, increase the number of agreeing annotations. They are critical for work that examines the MSD monolithically instead of feature-by-feature BIBREF25 , BIBREF26 : Without exact matches, converting the individual tags becomes hollow.",
"Beginning our process, we relied on documentation of the two schemata to create our initial, language-agnostic mapping of individual values. This mapping has 140 pairs in it. Because the mapping was derived purely from the schemata, it is a useful approximation of how well the schemata match up. We note, however, that the mapping does not handle idiosyncrasies like the many uses of dative or features which are represented in UniMorph by argument templates: possession and ergative–absolutive argument marking. The initial step of our conversion is using this mapping to populate a proposed UniMorph MSD.",
"As shown in sec:results, the initial proposal is often frustratingly deficient. Thus we introduce the post-edits. To concoct these, we looked into UniMorph corpora for these languages, compared these to the conversion outputs, and then sought to bring the conversion outputs closer to the annotations in the actual UniMorph corpora. When a form and its lemma existed in both corpora, we could directly inspect how the annotations differed. Our process of iteratively refining the conversion implies a table which exactly maps any combination of UD MSD and its related values (lemma, form, etc.) to a UniMorph MSD, though we do not store the table explicitly.",
"Some conversion rules we've created must be applied before or after others. These sequential dependencies provide conciseness. Our post-editing procedure operates on the initial MSD hypothesis as follows:"
],
[
"We evaluate our tool on two tasks:",
"To be clear, our scope is limited to the schema conversion. Future work will explore NLP tasks that exploit both the created token-level UniMorph data and the existing type-level UniMorph data."
],
[
"We transform all UD data to the UniMorph. We compare the simple lookup-based transformation to the one with linguistically informed post-edits on all languages with both UD and UniMorph data. We then evaluate the recall of MSDs without partial credit.",
"Because the UniMorph tables only possess annotations for verbs, nouns, adjectives, or some combination, we can only examine performance for these parts of speech. We consider two words to be a match if their form and lemma are present in both resources. Syncretism allows a single surface form to realize multiple MSDs (Spanish mandaba can be first- or third-person), so we define success as the computed MSD matching any of the word's UniMorph MSDs. This gives rise to an equation for recall: of the word–lemma pairs found in both resources, how many of their UniMorph-converted MSDs are present in the UniMorph tables?",
"Our problem here is not a learning problem, so the question is ill-posed. There is no training set, and the two resources for a given language make up a test set. The quality of our model—the conversion tool—comes from how well we encode prior knowledge about the relationship between the UD and UniMorph corpora."
],
[
"If the UniMorph-converted treebanks perform differently on downstream tasks, then they convey different information. This signals a failure of the conversion process. As a downstream task, we choose morphological tagging, a critical step to leveraging morphological information on new text.",
"We evaluate taggers trained on the transformed UD data, choosing eight languages randomly from the intersection of UD and UniMorph resources. We report the macro-averaged F1 score of attribute-value pairs on a held-out test set, with official train/validation/test splits provided in the UD treebanks. As a reference point, we also report tagging accuracy on those languages' untransformed data.",
"We use the state-of-the-art morphological tagger of BIBREF0 . It is a factored conditional random field with potentials for each attribute, attribute pair, and attribute transition. The potentials are computed by neural networks, predicting the values of each attribute jointly but not monolithically. Inference with the potentials is performed approximately by loopy belief propagation. We use the authors' hyperparameters.",
"We note a minor implementation detail for the sake of reproducibility. The tagger exploits explicit guidance about the attribute each value pertains to. The UniMorph schema's values are globally unique, but their attributes are not explicit. For example, the UniMorph Masc denotes a masculine gender. We amend the code of BIBREF0 to incorporate attribute identifiers for each UniMorph value."
],
[
"We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset—which only contains verbs—we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.",
"For the extrinsic task, the performance is reasonably similar whether UniMorph or UD; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information. On the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We recognize that in every case, tagging F1 increased—albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion. UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance."
],
[
"The goal of a tagset-to-tagset mapping of morphological annotations is shared by the Interset project BIBREF28 . Interset decodes features in the source corpus to a tag interlingua, then encodes that into target corpus features. (The idea of an interlingua is drawn from machine translation, where a prevailing early mindset was to convert to a universal representation, then encode that representation's semantics in the target language. Our approach, by contrast, is a direct flight from the source to the target.) Because UniMorph corpora are noisy, the encoding from the interlingua would have to be rewritten for each target. Further, decoding the UD MSD into the interlingua cannot leverage external information like the lemma and form.",
"The creators of HamleDT sought to harmonize dependency annotations among treebanks, similar to our goal of harmonizing across resources BIBREF29 . The treebanks they sought to harmonize used multiple diverse annotation schemes, which the authors unified under a single scheme.",
" BIBREF30 present mappings into a coarse, universal part of speech for 22 languages. Working with POS tags rather than morphological tags (which have far more dimensions), their space of options to harmonize is much smaller than ours.",
"Our extrinsic evaluation is most in line with the paradigm of BIBREF31 (and similar work therein), who compare syntactic parser performance on UD treebanks annotated with two styles of dependency representation. Our problem differs, though, in that the dependency representations express different relationships, while our two schemata vastly overlap. As our conversion is lossy, we do not appraise the learnability of representations as they did.",
"In addition to using the number of extra rules as a proxy for harmony between resources, one could perform cross-lingual projection of morphological tags BIBREF32 , BIBREF33 . Our approach succeeds even without parallel corpora."
],
[
"We created a tool for annotating Universal Dependencies CoNLL-U files with UniMorph annotations. Our tool is ready to use off-the-shelf today, requires no training, and is deterministic. While under-specification necessitates a lossy and imperfect conversion, ours is interpretable. Patterns of mistakes can be identified and ameliorated.",
"The tool allows a bridge between resources annotated in the Universal Dependencies and Universal Morphology (UniMorph) schemata. As the Universal Dependencies project provides a set of treebanks with token-level annotation, while the UniMorph project releases type-level annotated tables, the newfound compatibility opens up new experiments. A prime example of exploiting token- and type-level data is BIBREF34 . That work presents a part-of-speech (POS) dictionary built from Wiktionary, where the POS tagger is also constrained to options available in their type-level POS dictionary, improving performance. Our transformation means that datasets are prepared for similar experiments with morphological tagging. It would also be reasonable to incorporate this tool as a subroutine to UDPipe BIBREF35 and Udapi BIBREF36 . We leave open the task of converting in the opposite direction, turning UniMorph MSDs into Universal Dependencies MSDs.",
"Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation."
],
[
"We thank Hajime Senuma and John Sylak-Glassman for early comments in devising the starting language-independent mapping from Universal Dependencies to UniMorph."
]
],
"section_name": [
"Introduction",
"Background: Morphological Inflection",
"Two Schemata, Two Philosophies",
"Universal Dependencies",
"UniMorph",
"Similarities in the annotation",
"UD treebanks and UniMorph tables",
"A Deterministic Conversion",
"Experiments",
"Intrinsic evaluation",
"Extrinsic evaluation",
"Results",
"Related Work",
"Conclusion and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"020ac14a36ff656cccfafcb0e6e869f98de7a78e"
],
"answer": [
{
"evidence": [
"We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation."
],
"extractive_spans": [
"irremediable annotation discrepancies",
"differences in choice of attributes to annotate",
"The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them",
"the two annotations encode distinct information",
"incorrectly applied UniMorph annotation",
"cross-lingual inconsistency in both resources"
],
"free_form_answer": "",
"highlighted_evidence": [
"irremediable annotation discrepancies",
"Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"1ef9f42e15ec3175a8fe9e36e5fffac30e30986d"
],
"answer": [
{
"evidence": [
"Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"a810b95038cbcc84945b1fd29cc9ec50fee5dc56"
],
"answer": [
{
"evidence": [
"The contributions of this work are:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The contributions of this work are:"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"1d39c43a1873cde6fd7b76dae134a1dc84f55f52",
"253ef0cc299e30dcfceb74e8526bdf3a76e5fb9c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method."
],
"extractive_spans": [],
"free_form_answer": "Ar, Bg, Ca, Cs, Da, De, En, Es, Eu, Fa, Fi, Fr, Ga, He, Hi, Hu, It, La, Lt, Lv, Nb, Nl, Nn, PL, Pt, Ro, Ru, Sl, Sv, Tr, Uk, Ur",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4: Tagging F1 using UD sentences annotated with either original UD MSDs or UniMorph-converted MSDs",
"A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.",
"FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method.",
"There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset—which only contains verbs—we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.",
"We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"For the extrinsic task, the performance is reasonably similar whether UniMorph or UD; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information. On the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We recognize that in every case, tagging F1 increased—albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion. UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance."
],
"extractive_spans": [
"We apply this conversion to the 31 languages",
"Arabic, Hindi, Lithuanian, Persian, and Russian. ",
"Dutch",
"Spanish"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Tagging F1 using UD sentences annotated with either original UD MSDs or UniMorph-converted MSDs",
"We apply this conversion to the 31 languages",
"FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method.",
"Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.",
"Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What are the main sources of recall errors in the mapping?",
"Do they look for inconsistencies between different languages' annotations in UniMorph?",
"Do they look for inconsistencies between different UD treebanks?",
"Which languages do they validate on?"
],
"question_id": [
"642c4704a71fd01b922a0ef003f234dcc7b223cd",
"e477e494fe15a978ff9c0a5f1c88712cdaec0c5c",
"04495845251b387335bf2e77e2c423130f43c7d9",
"564dcaf8d0bcc274ab64c784e4c0f50d7a2c17ee"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"morphology",
"morphology",
"morphology",
"morphology"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Example of annotation disagreement in UD between two languages on translations of one phrase, reproduced from Malaviya et al. (2018). The final word in each, “refrescante”, is not inflected for gender: It has the same surface form whether masculine or feminine. Only in Portuguese, it is annotated as masculine to reflect grammatical concord with the noun it modifies.",
"Table 1: Inflected forms of the English verb prove, along with their Penn Treebank tags",
"Table 2: Attested annotations for the Spanish verb form “mandaba” “I/he/she/it commanded”. Note that UD separates the part of speech from the remainder of the morphosyntactic description. In each schema, order of the values is irrelevant.",
"Figure 2: Transliterated Persian with a gloss and translation from Karimi-Doostan (2011), annotated in a Persianspecific schema. The light verb construction “latme zadan” (“to damage”) has been spread across the sentence. Multiword constructions like this are a challenge for word-level tagging schemata.",
"Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method.",
"Table 4: Tagging F1 using UD sentences annotated with either original UD MSDs or UniMorph-converted MSDs"
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"4-Table2-1.png",
"5-Figure2-1.png",
"8-Table3-1.png",
"8-Table4-1.png"
]
} | [
"Which languages do they validate on?"
] | [
[
"1810.06743-Results-1",
"1810.06743-Introduction-1",
"1810.06743-Results-0",
"1810.06743-Results-2",
"1810.06743-8-Table3-1.png",
"1810.06743-8-Table4-1.png"
]
] | [
"Ar, Bg, Ca, Cs, Da, De, En, Es, Eu, Fa, Fi, Fr, Ga, He, Hi, Hu, It, La, Lt, Lv, Nb, Nl, Nn, PL, Pt, Ro, Ru, Sl, Sv, Tr, Uk, Ur"
] | 44 |
1909.02764 | Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning | The recognition of emotions by humans is a complex process which considers multiple interacting signals such as facial expressions and both prosody and semantic content of utterances. Commonly, research on automatic recognition of emotions is, with few exceptions, limited to one modality. We describe an in-car experiment for emotion recognition from speech interactions for three modalities: the audio signal of a spoken interaction, the visual signal of the driver's face, and the manually transcribed content of utterances of the driver. We use off-the-shelf tools for emotion detection in audio and face and compare that to a neural transfer learning approach for emotion recognition from text which utilizes existing resources from other domains. We see that transfer learning enables models based on out-of-domain corpora to perform well. This method contributes up to 10 percentage points in F1, with up to 76 micro-average F1 across the emotions joy, annoyance and insecurity. Our findings also indicate that off-the-shelf-tools analyzing face and audio are not ready yet for emotion detection in in-car speech interactions without further adjustments. | {
"paragraphs": [
[
"Automatic emotion recognition is commonly understood as the task of assigning an emotion to a predefined instance, for example an utterance (as audio signal), an image (for instance with a depicted face), or a textual unit (e.g., a transcribed utterance, a sentence, or a Tweet). The set of emotions is often following the original definition by Ekman Ekman1992, which includes anger, fear, disgust, sadness, joy, and surprise, or the extension by Plutchik Plutchik1980 who adds trust and anticipation.",
"Most work in emotion detection is limited to one modality. Exceptions include Busso2004 and Sebe2005, who investigate multimodal approaches combining speech with facial information. Emotion recognition in speech can utilize semantic features as well BIBREF0. Note that the term “multimodal” is also used beyond the combination of vision, audio, and text. For example, Soleymani2012 use it to refer to the combination of electroencephalogram, pupillary response and gaze distance.",
"In this paper, we deal with the specific situation of car environments as a testbed for multimodal emotion recognition. This is an interesting environment since it is, to some degree, a controlled environment: Dialogue partners are limited in movement, the degrees of freedom for occurring events are limited, and several sensors which are useful for emotion recognition are already integrated in this setting. More specifically, we focus on emotion recognition from speech events in a dialogue with a human partner and with an intelligent agent.",
"Also from the application point of view, the domain is a relevant choice: Past research has shown that emotional intelligence is beneficial for human computer interaction. Properly processing emotions in interactions increases the engagement of users and can improve performance when a specific task is to be fulfilled BIBREF1, BIBREF2, BIBREF3, BIBREF4. This is mostly based on the aspect that machines communicating with humans appear to be more trustworthy when they show empathy and are perceived as being natural BIBREF3, BIBREF5, BIBREF4.",
"Virtual agents play an increasingly important role in the automotive context and the speech modality is increasingly being used in cars due to its potential to limit distraction. It has been shown that adapting the in-car speech interaction system according to the drivers' emotional state can help to enhance security, performance as well as the overall driving experience BIBREF6, BIBREF7.",
"With this paper, we investigate how each of the three considered modalitites, namely facial expressions, utterances of a driver as an audio signal, and transcribed text contributes to the task of emotion recognition in in-car speech interactions. We focus on the five emotions of joy, insecurity, annoyance, relaxation, and boredom since terms corresponding to so-called fundamental emotions like fear have been shown to be associated to too strong emotional states than being appropriate for the in-car context BIBREF8. Our first contribution is the description of the experimental setup for our data collection. Aiming to provoke specific emotions with situations which can occur in real-world driving scenarios and to induce speech interactions, the study was conducted in a driving simulator. Based on the collected data, we provide baseline predictions with off-the-shelf tools for face and speech emotion recognition and compare them to a neural network-based approach for emotion recognition from text. Our second contribution is the introduction of transfer learning to adapt models trained on established out-of-domain corpora to our use case. We work on German language, therefore the transfer consists of a domain and a language transfer."
],
[
"A common approach to encode emotions for facial expressions is the facial action coding system FACS BIBREF9, BIBREF10, BIBREF11. As the reliability and reproducability of findings with this method have been critically discussed BIBREF12, the trend has increasingly shifted to perform the recognition directly on images and videos, especially with deep learning. For instance, jung2015joint developed a model which considers temporal geometry features and temporal appearance features from image sequences. kim2016hierarchical propose an ensemble of convolutional neural networks which outperforms isolated networks.",
"In the automotive domain, FACS is still popular. Ma2017 use support vector machines to distinguish happy, bothered, confused, and concentrated based on data from a natural driving environment. They found that bothered and confused are difficult to distinguish, while happy and concentrated are well identified. Aiming to reduce computational cost, Tews2011 apply a simple feature extraction using four dots in the face defining three facial areas. They analyze the variance of the three facial areas for the recognition of happy, anger and neutral. Ihme2018 aim at detecting frustration in a simulator environment. They induce the emotion with specific scenarios and a demanding secondary task and are able to associate specific face movements according to FACS. Paschero2012 use OpenCV (https://opencv.org/) to detect the eyes and the mouth region and track facial movements. They simulate different lightning conditions and apply a multilayer perceptron for the classification task of Ekman's set of fundamental emotions.",
"Overall, we found that studies using facial features usually focus on continuous driver monitoring, often in driver-only scenarios. In contrast, our work investigates the potential of emotion recognition during speech interactions."
],
[
"Past research on emotion recognition from acoustics mainly concentrates on either feature selection or the development of appropriate classifiers. rao2013emotion as well as ververidis2004automatic compare local and global features in support vector machines. Next to such discriminative approaches, hidden Markov models are well-studied, however, there is no agreement on which feature-based classifier is most suitable BIBREF13. Similar to the facial expression modality, recent efforts on applying deep learning have been increased for acoustic speech processing. For instance, lee2015high use a recurrent neural network and palaz2015analysis apply a convolutional neural network to the raw speech signal. Neumann2017 as well as Trigeorgis2016 analyze the importance of features in the context of deep learning-based emotion recognition.",
"In the automotive sector, Boril2011 approach the detection of negative emotional states within interactions between driver and co-driver as well as in calls of the driver towards the automated spoken dialogue system. Using real-world driving data, they find that the combination of acoustic features and their respective Gaussian mixture model scores performs best. Schuller2006 collects 2,000 dialog turns directed towards an automotive user interface and investigate the classification of anger, confusion, and neutral. They show that automatic feature generation and feature selection boost the performance of an SVM-based classifier. Further, they analyze the performance under systematically added noise and develop methods to mitigate negative effects. For more details, we refer the reader to the survey by Schuller2018. In this work, we explore the straight-forward application of domain independent software to an in-car scenario without domain-specific adaptations."
],
[
"Previous work on emotion analysis in natural language processing focuses either on resource creation or on emotion classification for a specific task and domain. On the side of resource creation, the early and influential work of Pennebaker2015 is a dictionary of words being associated with different psychologically relevant categories, including a subset of emotions. Another popular resource is the NRC dictionary by Mohammad2012b. It contains more than 10000 words for a set of discrete emotion classes. Other resources include WordNet Affect BIBREF14 which distinguishes particular word classes. Further, annotated corpora have been created for a set of different domains, for instance fairy tales BIBREF15, Blogs BIBREF16, Twitter BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, Facebook BIBREF22, news headlines BIBREF23, dialogues BIBREF24, literature BIBREF25, or self reports on emotion events BIBREF26 (see BIBREF27 for an overview).",
"To automatically assign emotions to textual units, the application of dictionaries has been a popular approach and still is, particularly in domains without annotated corpora. Another approach to overcome the lack of huge amounts of annotated training data in a particular domain or for a specific topic is to exploit distant supervision: use the signal of occurrences of emoticons or specific hashtags or words to automatically label the data. This is sometimes referred to as self-labeling BIBREF21, BIBREF28, BIBREF29, BIBREF30.",
"A variety of classification approaches have been tested, including SNoW BIBREF15, support vector machines BIBREF16, maximum entropy classification, long short-term memory network, and convolutional neural network models BIBREF18. More recently, the state of the art is the use of transfer learning from noisy annotations to more specific predictions BIBREF29. Still, it has been shown that transferring from one domain to another is challenging, as the way emotions are expressed varies between areas BIBREF27. The approach by Felbo2017 is different to our work as they use a huge noisy data set for pretraining the model while we use small high quality data sets instead.",
"Recently, the state of the art has also been pushed forward with a set of shared tasks, in which the participants with top results mostly exploit deep learning methods for prediction based on pretrained structures like embeddings or language models BIBREF21, BIBREF31, BIBREF20.",
"Our work follows this approach and builds up on embeddings with deep learning. Furthermore, we approach the application and adaption of text-based classifiers to the automotive domain with transfer learning."
],
[
"The first contribution of this paper is the construction of the AMMER data set which we describe in the following. We focus on the drivers' interactions with both a virtual agent as well as a co-driver. To collect the data in a safe and controlled environment and to be able to consider a variety of predefined driving situations, the study was conducted in a driving simulator."
],
[
"The study environment consists of a fixed-base driving simulator running Vires's VTD (Virtual Test Drive, v2.2.0) simulation software (https://vires.com/vtd-vires-virtual-test-drive/). The vehicle has an automatic transmission, a steering wheel and gas and brake pedals. We collect data from video, speech and biosignals (Empatica E4 to record heart rate, electrodermal activity, skin temperature, not further used in this paper) and questionnaires. Two RGB cameras are fixed in the vehicle to capture the drivers face, one at the sun shield above the drivers seat and one in the middle of the dashboard. A microphone is placed on the center console. One experimenter sits next to the driver, the other behind the simulator. The virtual agent accompanying the drive is realized as Wizard-of-Oz prototype which enables the experimenter to manually trigger prerecorded voice samples playing trough the in-car speakers and to bring new content to the center screen. Figure FIGREF4 shows the driving simulator.",
"The experimental setting is comparable to an everyday driving task. Participants are told that the goal of the study is to evaluate and to improve an intelligent driving assistant. To increase the probability of emotions to arise, participants are instructed to reach the destination of the route as fast as possible while following traffic rules and speed limits. They are informed that the time needed for the task would be compared to other participants. The route comprises highways, rural roads, and city streets. A navigation system with voice commands and information on the screen keeps the participants on the predefined track.",
"To trigger emotion changes in the participant, we use the following events: (i) a car on the right lane cutting off to the left lane when participants try to overtake followed by trucks blocking both lanes with a slow overtaking maneuver (ii) a skateboarder who appears unexpectedly on the street and (iii) participants are praised for reaching the destination unexpectedly quickly in comparison to previous participants.",
"Based on these events, we trigger three interactions (Table TABREF6 provides examples) with the intelligent agent (Driver-Agent Interactions, D–A). Pretending to be aware of the current situation, e. g., to recognize unusual driving behavior such as strong braking, the agent asks the driver to explain his subjective perception of these events in detail. Additionally, we trigger two more interactions with the intelligent agent at the beginning and at the end of the drive, where participants are asked to describe their mood and thoughts regarding the (upcoming) drive. This results in five interactions between the driver and the virtual agent.",
"Furthermore, the co-driver asks three different questions during sessions with light traffic and low cognitive demand (Driver-Co-Driver Interactions, D–Co). These questions are more general and non-traffic-related and aim at triggering the participants' memory and fantasy. Participants are asked to describe their last vacation, their dream house and their idea of the perfect job. In sum, there are eight interactions per participant (5 D–A, 3 D–Co)."
],
[
"At the beginning of the study, participants were welcomed and the upcoming study procedure was explained. Subsequently, participants signed a consent form and completed a questionnaire to provide demographic information. After that, the co-driving experimenter started with the instruction in the simulator which was followed by a familiarization drive consisting of highway and city driving and covering different driving maneuvers such as tight corners, lane changing and strong braking. Subsequently, participants started with the main driving task. The drive had a duration of 20 minutes containing the eight previously mentioned speech interactions. After the completion of the drive, the actual goal of improving automatic emotional recognition was revealed and a standard emotional intelligence questionnaire, namely the TEIQue-SF BIBREF32, was handed to the participants. Finally, a retrospective interview was conducted, in which participants were played recordings of their in-car interactions and asked to give discrete (annoyance, insecurity, joy, relaxation, boredom, none, following BIBREF8) was well as dimensional (valence, arousal, dominance BIBREF33 on a 11-point scale) emotion ratings for the interactions and the according situations. We only use the discrete class annotations in this paper."
],
[
"Overall, 36 participants aged 18 to 64 years ($\\mu $=28.89, $\\sigma $=12.58) completed the experiment. This leads to 288 interactions, 180 between driver and the agent and 108 between driver and co-driver. The emotion self-ratings from the participants yielded 90 utterances labeled with joy, 26 with annoyance, 49 with insecurity, 9 with boredom, 111 with relaxation and 3 with no emotion. One example interaction per interaction type and emotion is shown in Table TABREF7. For further experiments, we only use joy, annoyance/anger, and insecurity/fear due to the small sample size for boredom and no emotion and under the assumption that relaxation brings little expressivity."
],
[
"We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored."
],
[
"We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance. We consider the outputs for the states of joy, anger, and fear, mapping analogously to our classes as for facial expressions. Low-confidence predictions are interpreted as “no emotion”. We accept the emotion with the highest score as the discrete prediction otherwise."
],
[
"For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a soft max output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (These parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model.",
"We train models on a variety of corpora, namely the common format published by BIBREF27 of the FigureEight (formally known as Crowdflower) data set of social media, the ISEAR data BIBREF40 (self-reported emotional events), and, the Twitter Emotion Corpus (TEC, weakly annotated Tweets with #anger, #disgust, #fear, #happy, #sadness, and #surprise, Mohammad2012). From all corpora, we use instances with labels fear, anger, or joy. These corpora are English, however, we do predictions on German utterances. Therefore, each corpus is preprocessed to German with Google Translate. We remove URLs, user tags (“@Username”), punctuation and hash signs. The distributions of the data sets are shown in Table TABREF12.",
"To adapt models trained on these data, we apply transfer learning as follows: The model is first trained until convergence on one out-of-domain corpus (only on classes fear, joy, anger for compatibility reasons). Then, the parameters of the bi-LSTM layer are frozen and the remaining layers are further trained on AMMER. This procedure is illustrated in Figure FIGREF13"
],
[
"Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=0.31, R=0.35). While the classification results for joy are promising (R=43 %, P=57 %), the distinction of insecurity and annoyance from the other classes appears to be more challenging.",
"Regarding the audio signal, we observe a macro $\\text{F}_1$ score of 29 % (P=42 %, R=22 %). There is a bias towards negative emotions, which results in a small number of detected joy predictions (R=4 %). Insecurity and annoyance are frequently confused."
],
[
"The experimental setting for the evaluation of emotion recognition from text is as follows: We evaluate the BiLSTM model in three different experiments: (1) in-domain, (2) out-of-domain and (3) transfer learning. For all experiments we train on the classes anger/annoyance, fear/insecurity and joy. Table TABREF19 shows all results for the comparison of these experimental settings."
],
[
"We first set a baseline by validating our models on established corpora. We train the baseline model on 60 % of each data set listed in Table TABREF12 and evaluate that model with 40 % of the data from the same domain (results shown in the column “In-Domain” in Table TABREF19). Excluding AMMER, we achieve an average micro $\\text{F}_1$ of 68 %, with best results of F$_1$=73 % on TEC. The model trained on our AMMER corpus achieves an F1 score of 57%. This is most probably due to the small size of this data set and the class bias towards joy, which makes up more than half of the data set. These results are mostly in line with Bostan2018."
],
[
"Now we analyze how well the models trained in Experiment 1 perform when applied to our data set. The results are shown in column “Simple” in Table TABREF19. We observe a clear drop in performance, with an average of F$_1$=48 %. The best performing model is again the one trained on TEC, en par with the one trained on the Figure8 data. The model trained on ISEAR performs second best in Experiment 1, it performs worst in Experiment 2."
],
[
"To adapt models trained on previously existing data sets to our particular application, the AMMER corpus, we apply transfer learning. Here, we perform leave-one-out cross validation. As pre-trained models we use each model from Experiment 1 and further optimize with the training subset of each crossvalidation iteration of AMMER. The results are shown in the column “Transfer L.” in Table TABREF19. The confusion matrix is also depicted in Table TABREF16.",
"With this procedure we achieve an average performance of F$_1$=75 %, being better than the results from the in-domain Experiment 1. The best performance of F$_1$=76 % is achieved with the model pre-trained on each data set, except for ISEAR. All transfer learning models clearly outperform their simple out-of-domain counterpart.",
"To ensure that this performance increase is not only due to the larger data set, we compare these results to training the model without transfer on a corpus consisting of each corpus together with AMMER (again, in leave-one-out crossvalidation). These results are depicted in column “Joint C.”. Thus, both settings, “transfer learning” and “joint corpus” have access to the same information.",
"The results show an increase in performance in contrast to not using AMMER for training, however, the transfer approach based on partial retraining the model shows a clear improvement for all models (by 7pp for Figure8, 10pp for EmoInt, 8pp for TEC, 13pp for ISEAR) compared to the ”Joint” setup."
],
[
"We described the creation of the multimodal AMMER data with emotional speech interactions between a driver and both a virtual agent and a co-driver. We analyzed the modalities of facial expressions, acoustics, and transcribed utterances regarding their potential for emotion recognition during in-car speech interactions. We applied off-the-shelf emotion recognition tools for facial expressions and acoustics. For transcribed text, we developed a neural network-based classifier with transfer learning exploiting existing annotated corpora. We find that analyzing transcribed utterances is most promising for classification of the three emotional states of joy, annoyance and insecurity.",
"Our results for facial expressions indicate that there is potential for the classification of joy, however, the states of annoyance and insecurity are not well recognized. Future work needs to investigate more sophisticated approaches to map frame predictions to sequence predictions. Furthermore, movements of the mouth region during speech interactions might negatively influence the classification from facial expressions. Therefore, the question remains how facial expressions can best contribute to multimodal detection in speech interactions.",
"Regarding the classification from the acoustic signal, the application of off-the-shelf classifiers without further adjustments seems to be challenging. We find a strong bias towards negative emotional states for our experimental setting. For instance, the personalization of the recognition algorithm (e. g., mean and standard deviation normalization) could help to adapt the classification for specific speakers and thus to reduce this bias. Further, the acoustic environment in the vehicle interior has special properties and the recognition software might need further adaptations.",
"Our transfer learning-based text classifier shows considerably better results. This is a substantial result in its own, as only one previous method for transfer learning in emotion recognition has been proposed, in which a sentiment/emotion specific source for labels in pre-training has been used, to the best of our knowledge BIBREF29. Other applications of transfer learning from general language models include BIBREF41, BIBREF42. Our approach is substantially different, not being trained on a huge amount of noisy data, but on smaller out-of-domain sets of higher quality. This result suggests that emotion classification systems which work across domains can be developed with reasonable effort.",
"For a productive application of emotion detection in the context of speech events we conclude that a deployed system might perform best with a speech-to-text module followed by an analysis of the text. Further, in this work, we did not explore an ensemble model or the interaction of different modalities. Thus, future work should investigate the fusion of multiple modalities in a single classifier."
],
[
"We thank Laura-Ana-Maria Bostan for discussions and data set preparations. This research has partially been funded by the German Research Council (DFG), project SEAT (KL 2869/1-1)."
]
],
"section_name": [
"Introduction",
"Related Work ::: Facial Expressions",
"Related Work ::: Acoustic",
"Related Work ::: Text",
"Data set Collection",
"Data set Collection ::: Study Setup and Design",
"Data set Collection ::: Procedure",
"Data set Collection ::: Data Analysis",
"Methods ::: Emotion Recognition from Facial Expressions",
"Methods ::: Emotion Recognition from Audio Signal",
"Methods ::: Emotion Recognition from Transcribed Utterances",
"Results ::: Facial Expressions and Audio",
"Results ::: Text from Transcribed Utterances",
"Results ::: Text from Transcribed Utterances ::: Experiment 1: In-Domain application",
"Results ::: Text from Transcribed Utterances ::: Experiment 2: Simple Out-Of-Domain application",
"Results ::: Text from Transcribed Utterances ::: Experiment 3: Transfer Learning application",
"Summary & Future Work",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"600f0c923d0043277bfac1962a398d487bdca7fa"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9b32c0c17e68ed2a3a61811e6ff7d83bc2caa7d6"
],
"answer": [
{
"evidence": [
"Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=0.31, R=0.35). While the classification results for joy are promising (R=43 %, P=57 %), the distinction of insecurity and annoyance from the other classes appears to be more challenging."
],
"extractive_spans": [
"confusion matrices",
"$\\text{F}_1$ score"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=0.31, R=0.35)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d7c7133b07c598abc8e12d2366753d72e8b02f3c"
],
"answer": [
{
"evidence": [
"For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a soft max output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (These parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model."
],
"extractive_spans": [],
"free_form_answer": "For the emotion recognition from text they use described neural network as baseline.\nFor audio and face there is no baseline.",
"highlighted_evidence": [
"For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a soft max output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (These parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"050ddcbace29bfd6201c7b4813158d89c290c7b5",
"65fa4bf0328b2368ffb3570d974e8232d6b98731"
],
"answer": [
{
"evidence": [
"We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored.",
"We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance. We consider the outputs for the states of joy, anger, and fear, mapping analogously to our classes as for facial expressions. Low-confidence predictions are interpreted as “no emotion”. We accept the emotion with the highest score as the discrete prediction otherwise."
],
"extractive_spans": [
"We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions)"
],
"free_form_answer": "",
"highlighted_evidence": [
" We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\\in [0;100]$) for discrete emotional states of joy, anger and fear.",
"We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored."
],
"extractive_spans": [
"cannot be disclosed due to licensing restrictions"
],
"free_form_answer": "",
"highlighted_evidence": [
"We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Does the paper evaluate any adjustment to improve the predicion accuracy of face and audio features?",
"How is face and audio data analysis evaluated?",
"What is the baseline method for the task?",
"What are the emotion detection tools used for audio and face input?"
],
"question_id": [
"f3d0e6452b8d24b7f9db1fd898d1fbe6cd23f166",
"9b1d789398f1f1a603e4741a5eee63ccaf0d4a4f",
"00bcdffff7e055f99aaf1b05cf41c98e2748e948",
"f92ee3c5fce819db540bded3cfcc191e21799cb1"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"German",
"German",
"German",
"German"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The setup of the driving simulator.",
"Table 1: Examples for triggered interactions with translations to English. (D: Driver, A: Agent, Co: Co-Driver)",
"Table 2: Examples from the collected data set (with translation to English). E: Emotion, IT: interaction type with agent (A) and with Codriver (C). J: Joy, A: Annoyance, I: Insecurity, B: Boredom, R: Relaxation, N: No emotion.",
"Figure8 8,419 1,419 9,179 19,017 EmoInt 2,252 1,701 1,616 5,569 ISEAR 1,095 1,096 1,094 3,285 TEC 2,782 1,534 8,132 12,448 AMMER 49 26 90 165",
"Figure 2: Model for Transfer Learning from Text. Grey boxes contain frozen parameters in the corresponding learning step.",
"Figure8 66 55 59 76 EmoInt 62 48 56 76 TEC 73 55 58 76 ISEAR 70 35 59 72 AMMER 57 — — —",
"Table 4: Confusion Matrix for Face Classification and Audio Classification (on full AMMER data) and for transfer learning from text (training set of EmoInt and test set of AMMER). Insecurity, annoyance and joy are the gold labels. Fear, anger and joy are predictions.",
"Table 5: Performance for classification from vision, audio, and transfer learning from text (training set of EmoInt)."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Figure8,419-1.png",
"6-Figure2-1.png",
"7-Figure66-1.png",
"7-Table4-1.png",
"7-Table5-1.png"
]
} | [
"What is the baseline method for the task?"
] | [
[
"1909.02764-Methods ::: Emotion Recognition from Transcribed Utterances-0"
]
] | [
"For the emotion recognition from text they use described neural network as baseline.\nFor audio and face there is no baseline."
] | 45 |
1905.11901 | Revisiting Low-Resource Neural Machine Translation: A Case Study | It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions, underperforming phrase-based statistical machine translation (PBSMT) and requiring large amounts of auxiliary data to achieve competitive results. In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. We discuss some pitfalls to be aware of when training low-resource NMT systems, and recent techniques that have shown to be especially helpful in low-resource settings, resulting in a set of best practices for low-resource NMT. In our experiments on German--English with different amounts of IWSLT14 training data, we show that, without the use of any auxiliary monolingual or multilingual data, an optimized NMT system can outperform PBSMT with far less data than previously claimed. We also apply these techniques to a low-resource Korean-English dataset, surpassing previously reported results by 4 BLEU. | {
"paragraphs": [
[
"While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3 , BIBREF4 . In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. Our main contributions are as follows:"
],
[
"Figure FIGREF4 reproduces a plot by BIBREF3 which shows that their NMT system only outperforms their PBSMT system when more than 100 million words (approx. 5 million sentences) of parallel training data are available. Results shown by BIBREF4 are similar, showing that unsupervised NMT outperforms supervised systems if few parallel resources are available. In both papers, NMT systems are trained with hyperparameters that are typical for high-resource settings, and the authors did not tune hyperparameters, or change network architectures, to optimize NMT for low-resource conditions."
],
[
"The bulk of research on low-resource NMT has focused on exploiting monolingual data, or parallel data involving other language pairs. Methods to improve NMT with monolingual data range from the integration of a separately trained language model BIBREF5 to the training of parts of the NMT model with additional objectives, including a language modelling objective BIBREF5 , BIBREF6 , BIBREF7 , an autoencoding objective BIBREF8 , BIBREF9 , or a round-trip objective, where the model is trained to predict monolingual (target-side) training data that has been back-translated into the source language BIBREF6 , BIBREF10 , BIBREF11 . As an extreme case, models that rely exclusively on monolingual data have been shown to work BIBREF12 , BIBREF13 , BIBREF14 , BIBREF4 . Similarly, parallel data from other language pairs can be used to pre-train the network or jointly learn representations BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 .",
"While semi-supervised and unsupervised approaches have been shown to be very effective for some language pairs, their effectiveness depends on the availability of large amounts of suitable auxiliary data, and other conditions being met. For example, the effectiveness of unsupervised methods is impaired when languages are morphologically different, or when training domains do not match BIBREF22 ",
"More broadly, this line of research still accepts the premise that NMT models are data-inefficient and require large amounts of auxiliary data to train. In this work, we want to re-visit this point, and will focus on techniques to make more efficient use of small amounts of parallel training data. Low-resource NMT without auxiliary data has received less attention; work in this direction includes BIBREF23 , BIBREF24 ."
],
[
"We consider the hyperparameters used by BIBREF3 to be our baseline. This baseline does not make use of various advances in NMT architectures and training tricks. In contrast to the baseline, we use a BiDeep RNN architecture BIBREF25 , label smoothing BIBREF26 , dropout BIBREF27 , word dropout BIBREF28 , layer normalization BIBREF29 and tied embeddings BIBREF30 ."
],
[
"Subword representations such as BPE BIBREF31 have become a popular choice to achieve open-vocabulary translation. BPE has one hyperparameter, the number of merge operations, which determines the size of the final vocabulary. For high-resource settings, the effect of vocabulary size on translation quality is relatively small; BIBREF32 report mixed results when comparing vocabularies of 30k and 90k subwords.",
"In low-resource settings, large vocabularies result in low-frequency (sub)words being represented as atomic units at training time, and the ability to learn good high-dimensional representations of these is doubtful. BIBREF33 propose a minimum frequency threshold for subword units, and splitting any less frequent subword into smaller units or characters. We expect that such a threshold reduces the need to carefully tune the vocabulary size to the dataset, leading to more aggressive segmentation on smaller datasets."
],
[
"Due to long training times, hyperparameters are hard to optimize by grid search, and are often re-used across experiments. However, best practices differ between high-resource and low-resource settings. While the trend in high-resource settings is towards using larger and deeper models, BIBREF24 use smaller and fewer layers for smaller datasets. Previous work has argued for larger batch sizes in NMT BIBREF35 , BIBREF36 , but we find that using smaller batches is beneficial in low-resource settings. More aggressive dropout, including dropping whole words at random BIBREF37 , is also likely to be more important. We report results on a narrow hyperparameter search guided by previous work and our own intuition."
],
[
"Finally, we implement and test the lexical model by BIBREF24 , which has been shown to be beneficial in low-data conditions. The core idea is to train a simple feed-forward network, the lexical model, jointly with the original attentional NMT model. The input of the lexical model at time step INLINEFORM0 is the weighted average of source embeddings INLINEFORM1 (the attention weights INLINEFORM2 are shared with the main model). After a feedforward layer (with skip connection), the lexical model's output INLINEFORM3 is combined with the original model's hidden state INLINEFORM4 before softmax computation. INLINEFORM5 ",
" Our implementation adds dropout and layer normalization to the lexical model.",
"",
""
],
[
"We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.",
"As a second language pair, we evaluate our systems on a Korean–English dataset with around 90000 parallel sentences of training data, 1000 for development, and 2000 for testing.",
"For both PBSMT and NMT, we apply the same tokenization and truecasing using Moses scripts. For NMT, we also learn BPE subword segmentation with 30000 merge operations, shared between German and English, and independently for Korean INLINEFORM0 English.",
"To simulate different amounts of training resources, we randomly subsample the IWSLT training corpus 5 times, discarding half of the data at each step. Truecaser and BPE segmentation are learned on the full training corpus; as one of our experiments, we set the frequency threshold for subword units to 10 in each subcorpus (see SECREF7 ). Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary.",
"Translation outputs are detruecased, detokenized, and compared against the reference with cased BLEU using sacreBLEU BIBREF40 , BIBREF41 . Like BIBREF39 , we report BLEU on the concatenated dev sets for IWSLT 2014 (tst2010, tst2011, tst2012, dev2010, dev2012)."
],
[
"We use Moses BIBREF42 to train a PBSMT system. We use MGIZA BIBREF43 to train word alignments, and lmplz BIBREF44 for a 5-gram LM. Feature weights are optimized on the dev set to maximize BLEU with batch MIRA BIBREF45 – we perform multiple runs where indicated. Unlike BIBREF3 , we do not use extra data for the LM. Both PBSMT and NMT can benefit from monolingual data, so the availability of monolingual data is no longer an exclusive advantage of PBSMT (see SECREF5 )."
],
[
"We train neural systems with Nematus BIBREF46 . Our baseline mostly follows the settings in BIBREF3 ; we use adam BIBREF47 and perform early stopping based on dev set BLEU. We express our batch size in number of tokens, and set it to 4000 in the baseline (comparable to a batch size of 80 sentences used in previous work).",
"We subsequently add the methods described in section SECREF3 , namely the bideep RNN, label smoothing, dropout, tied embeddings, layer normalization, changes to the BPE vocabulary size, batch size, model depth, regularization parameters and learning rate. Detailed hyperparameters are reported in Appendix SECREF7 ."
],
[
"Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our \"mainstream improvements\" add around 6–7 BLEU in both data conditions.",
"In the ultra-low data condition, reducing the BPE vocabulary size is very effective (+4.9 BLEU). Reducing the batch size to 1000 token results in a BLEU gain of 0.3, and the lexical model yields an additional +0.6 BLEU. However, aggressive (word) dropout (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) has a stronger effect than the lexical model, and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance. Together, the adaptations to the ultra-low data setting yield 9.4 BLEU (7.2 INLINEFORM2 16.6). The model trained on full IWSLT data is less sensitive to our changes (31.9 INLINEFORM3 32.8 BLEU), and optimal hyperparameters differ depending on the data condition. Subsequently, we still apply the hyperparameters that were optimized to the ultra-low data condition (8) to other data conditions, and Korean INLINEFORM4 English, for simplicity.",
"For a comparison with PBSMT, and across different data settings, consider Figure FIGREF19 , which shows the result of PBSMT, our NMT baseline, and our optimized NMT system. Our NMT baseline still performs worse than the PBSMT system for 3.2M words of training data, which is consistent with the results by BIBREF3 . However, our optimized NMT system shows strong improvements, and outperforms the PBSMT system across all data settings. Some sample translations are shown in Appendix SECREF8 .",
"For comparison to previous work, we report lowercased and tokenized results on the full IWSLT 14 training set in Table TABREF20 . Our results far outperform the RNN-based results reported by BIBREF48 , and are on par with the best reported results on this dataset.",
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
],
[
"Our results demonstrate that NMT is in fact a suitable choice in low-data settings, and can outperform PBSMT with far less parallel training data than previously claimed. Recently, the main trend in low-resource MT research has been the better exploitation of monolingual and multilingual resources. Our results show that low-resource NMT is very sensitive to hyperparameters such as BPE vocabulary size, word dropout, and others, and by following a set of best practices, we can train competitive NMT systems without relying on auxiliary resources. This has practical relevance for languages where large amounts of monolingual data, or multilingual data involving related languages, are not available. Even though we focused on only using parallel data, our results are also relevant for work on using auxiliary data to improve low-resource MT. Supervised systems serve as an important baseline to judge the effectiveness of semisupervised or unsupervised approaches, and the quality of supervised systems trained on little data can directly impact semi-supervised workflows, for instance for the back-translation of monolingual data."
],
[
"Rico Sennrich has received funding from the Swiss National Science Foundation in the project CoNTra (grant number 105212_169888). Biao Zhang acknowledges the support of the Baidu Scholarship."
],
[
"Table TABREF23 lists hyperparameters used for the different experiments in the ablation study (Table 2). Hyperparameters were kept constant across different data settings, except for the validation interval and subword vocabulary size (see Table 1)."
],
[
"Table TABREF24 shows some sample translations that represent typical errors of our PBSMT and NMT systems, trained with ultra-low (100k words) and low (3.2M words) amounts of data. For unknown words such as blutbefleckten (`bloodstained') or Spaniern (`Spaniards', `Spanish'), PBSMT systems default to copying, while NMT systems produce translations on a subword-level, with varying success (blue-flect, bleed; spaniers, Spanians). NMT systems learn some syntactic disambiguation even with very little data, for example the translation of das and die as relative pronouns ('that', 'which', 'who'), while PBSMT produces less grammatical translation. On the flip side, the ultra low-resource NMT system ignores some unknown words in favour of a more-or-less fluent, but semantically inadequate translation: erobert ('conquered') is translated into doing, and richtig aufgezeichnet ('registered correctly', `recorded correctly') into really the first thing."
]
],
"section_name": [
"Introduction",
"Low-Resource Translation Quality Compared Across Systems",
"Improving Low-Resource Neural Machine Translation",
"Mainstream Improvements",
"Language Representation",
"Hyperparameter Tuning",
"Lexical Model",
"Data and Preprocessing",
"PBSMT Baseline",
"NMT Systems",
"Results",
"Conclusions",
"Acknowledgments",
"Hyperparameters",
"Sample Translations"
]
} | {
"answers": [
{
"annotation_id": [
"073418dd5dee73e79f085f846b12ab2255d1fba9",
"8ebf6954a9db622ffa0e1a1a578dc757efb66253"
],
"answer": [
{
"evidence": [
"We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.",
"To simulate different amounts of training resources, we randomly subsample the IWSLT training corpus 5 times, discarding half of the data at each step. Truecaser and BPE segmentation are learned on the full training corpus; as one of our experiments, we set the frequency threshold for subword units to 10 in each subcorpus (see SECREF7 ). Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary.",
"FLOAT SELECTED: Table 1: Training corpus size and subword vocabulary size for different subsets of IWSLT14 DE→EN data, and for KO→EN data."
],
"extractive_spans": [],
"free_form_answer": "Training data with 159000, 80000, 40000, 20000, 10000 and 5000 sentences, and 7584 sentences for development",
"highlighted_evidence": [
"We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.",
"Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary.",
"FLOAT SELECTED: Table 1: Training corpus size and subword vocabulary size for different subsets of IWSLT14 DE→EN data, and for KO→EN data."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our \"mainstream improvements\" add around 6–7 BLEU in both data conditions.",
"In the ultra-low data condition, reducing the BPE vocabulary size is very effective (+4.9 BLEU). Reducing the batch size to 1000 token results in a BLEU gain of 0.3, and the lexical model yields an additional +0.6 BLEU. However, aggressive (word) dropout (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) has a stronger effect than the lexical model, and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance. Together, the adaptations to the ultra-low data setting yield 9.4 BLEU (7.2 INLINEFORM2 16.6). The model trained on full IWSLT data is less sensitive to our changes (31.9 INLINEFORM3 32.8 BLEU), and optimal hyperparameters differ depending on the data condition. Subsequently, we still apply the hyperparameters that were optimized to the ultra-low data condition (8) to other data conditions, and Korean INLINEFORM4 English, for simplicity.",
"FLOAT SELECTED: Table 2: German→English IWSLT results for training corpus size of 100k words and 3.2M words (full corpus). Mean and standard deviation of three training runs reported."
],
"extractive_spans": [
"ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our \"mainstream improvements\" add around 6–7 BLEU in both data conditions.\n\nIn the ultra-low data condition, reducing the BPE vocabulary size is very effecti",
"FLOAT SELECTED: Table 2: German→English IWSLT results for training corpus size of 100k words and 3.2M words (full corpus). Mean and standard deviation of three training runs reported."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"b518fdaf97adaadd15159d3125599dd99ca75555"
],
"answer": [
{
"evidence": [
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
],
"extractive_spans": [
"10.37 BLEU"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2482e2af43d793c30436fba78a147768185b2d29"
],
"answer": [
{
"evidence": [
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
],
"extractive_spans": [
"gu-EtAl:2018:EMNLP1"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"ed93260f3f867af4f9275e5615fda86474ea51ee"
],
"answer": [
{
"evidence": [
"While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3 , BIBREF4 . In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. Our main contributions are as follows:"
],
"extractive_spans": [
"highly data-inefficient",
"underperform phrase-based statistical machine translation"
],
"free_form_answer": "",
"highlighted_evidence": [
"While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3 , BIBREF4 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what amounts of size were used on german-english?",
"what were their experimental results in the low-resource dataset?",
"what are the methods they compare with in the korean-english dataset?",
"what pitfalls are mentioned in the paper?"
],
"question_id": [
"4547818a3bbb727c4bb4a76554b5a5a7b5c5fedb",
"07d7652ad4a0ec92e6b44847a17c378b0d9f57f5",
"9f3444c9fb2e144465d63abf58520cddd4165a01",
"2348d68e065443f701d8052018c18daa4ecc120e"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 4: Translations of the first sentence of the test set using NMT system trained on varying amounts of training data. Under low resource conditions, NMT produces fluent output unrelated to the input.",
"Table 1: Training corpus size and subword vocabulary size for different subsets of IWSLT14 DE→EN data, and for KO→EN data.",
"Table 2: German→English IWSLT results for training corpus size of 100k words and 3.2M words (full corpus). Mean and standard deviation of three training runs reported.",
"Figure 2: German→English learning curve, showing BLEU as a function of the amount of parallel training data, for PBSMT and NMT.",
"Table 3: Results on full IWSLT14 German→English data on tokenized and lowercased test set with multi-bleu.perl.",
"Table 4: Korean→English results. Mean and standard deviation of three training runs reported.",
"Table 5: Configurations of NMT systems reported in Table 2. Empty fields indicate that hyperparameter was unchanged compared to previous systems.",
"Table 6: German→English translation examples with phrase-based SMT and NMT systems trained on 100k/3.2M words of parallel data."
],
"file": [
"1-Figure4-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Figure2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"10-Table5-1.png",
"11-Table6-1.png"
]
} | [
"what amounts of size were used on german-english?"
] | [
[
"1905.11901-Data and Preprocessing-3",
"1905.11901-3-Table1-1.png",
"1905.11901-4-Table2-1.png",
"1905.11901-Data and Preprocessing-0",
"1905.11901-Results-1",
"1905.11901-Results-0"
]
] | [
"Training data with 159000, 80000, 40000, 20000, 10000 and 5000 sentences, and 7584 sentences for development"
] | 46 |
1912.13109 | "Hinglish"Language -- Modeling a Messy Code-Mixed Language | With a sharp rise in fluency and users of "Hinglish" in linguistically diverse country, India, it has increasingly become important to analyze social content written in this language in platforms such as Twitter, Reddit, Facebook. This project focuses on using deep learning techniques to tackle a classification problem in categorizing social content written in Hindi-English into Abusive, Hate-Inducing and Not offensive categories. We utilize bi-directional sequence models with easy text augmentation techniques such as synonym replacement, random insertion, random swap, and random deletion to produce a state of the art classifier that outperforms the previous work done on analyzing this dataset. | {
"paragraphs": [
[
"Hinglish is a linguistic blend of Hindi (very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi, but is used in India, with English words blending with Punjabi, Gujarati, Marathi and Hindi. Sometimes, though rarely, Hinglish is used to refer to Hindi written in English script and mixing with English words or phrases. This makes analyzing the language very interesting. Its rampant usage in social media like Twitter, Facebook, Online blogs and reviews has also led to its usage in delivering hate and abuses in similar platforms. We aim to find such content in the social media focusing on the tweets. Hypothetically, if we can classify such tweets, we might be able to detect them and isolate them for further analysis before it reaches public. This will a great application of AI to the social cause and thus is motivating. An example of a simple, non offensive message written in Hinglish could be:",
"\"Why do you waste your time with <redacted content>. Aapna ghar sambhalta nahi(<redacted content>). Chale dusro ko basane..!!\"",
"The second part of the above sentence is written in Hindi while the first part is in English. Second part calls for an action to a person to bring order to his/her home before trying to settle others."
],
[
"From the modeling perspective there are couple of challenges introduced by the language and the labelled dataset. Generally, Hinglish follows largely fuzzy set of rules which evolves and is dependent upon the users preference. It doesn't have any formal definitions and thus the rules of usage are ambiguous. Thus, when used by different users the text produced may differ. Overall the challenges posed by this problem are:",
"Geographical variation: Depending upon the geography of origination, the content may be be highly influenced by the underlying region.",
"Language and phonetics variation: Based on a census in 2001, India has 122 major languages and 1599 other languages. The use of Hindi and English in a code switched setting is highly influenced by these language.",
"No grammar rules: Hinglish has no fixed set of grammar rules. The rules are inspired from both Hindi and English and when mixed with slur and slang produce large variation.",
"Spelling variation: There is no agreement on the spellings of the words which are mixed with English. For example to express love, a code mixed spelling, specially when used social platforms might be pyaar, pyar or pyr.",
"Dataset: Based on some earlier work, only available labelled dataset had 3189 rows of text messages of average length of 116 words and with a range of 1, 1295. Prior work addresses this concern by using Transfer Learning on an architecture learnt on about 14,500 messages with an accuracy of 83.90. We addressed this concern using data augmentation techniques applied on text data."
],
[
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.",
"The approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work."
],
[
"In another localized setting of Vietnamese language, Nguyen et al. in 2017 proposed a Hybrid multi-channel CNN and LSTM model where they build feature maps for Vietnamese language using CNN to capture shorterm dependencies and LSTM to capture long term dependencies and concatenate both these feature sets to learn a unified set of features on the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy rate of 87.3% with this architecture."
],
[
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
],
[
"The obtained data set had many challenges and thus a data preparation task was employed to clean the data and make it ready for the deep learning pipeline. The challenges and processes that were applied are stated below:",
"Messy text messages: The tweets had urls, punctuations, username mentions, hastags, emoticons, numbers and lots of special characters. These were all cleaned up in a preprocessing cycle to clean the data.",
"Stop words: Stop words corpus obtained from NLTK was used to eliminate most unproductive words which provide little information about individual tweets.",
"Transliteration: Followed by above two processes, we translated Hinglish tweets into English words using a two phase process",
"Transliteration: In phase I, we used translation API's provided by Google translation services and exposed via a SDK, to transliteration the Hinglish messages to English messages.",
"Translation: After transliteration, words that were specific to Hinglish were translated to English using an Hinglish-English dictionary. By doing this we converted the Hinglish message to and assortment of isolated words being presented in the message in a sequence that can also be represented using word to vector representation.",
"Data augmentation: Given the data set was very small with a high degree of imbalance in the labelled messages for three different classes, we employed a data augmentation technique to boost the learning of the deep network. Following techniques from the paper by Jason et al. was utilized in this setting that really helped during the training phase.Thsi techniques wasnt used in previous studies. The techniques were:",
"Synonym Replacement (SR):Randomly choose n words from the sentence that are not stop words. Replace each of these words with one of its synonyms chosen at random.",
"Random Insertion (RI):Find a random synonym of a random word in the sentence that is not a stop word. Insert that synonym into a random position in the sentence. Do this n times.",
"Random Swap (RS):Randomly choose two words in the sentence and swap their positions. Do this n times.",
"Random Deletion (RD):For each word in the sentence, randomly remove it with probability p.",
"Word Representation: We used word embedding representations by Glove for creating word embedding layers and to obtain the word sequence vector representations of the processed tweets. The pre-trained embedding dimension were one of the hyperparamaters for model. Further more, we introduced another bit flag hyperparameter that determined if to freeze these learnt embedding.",
"Train-test split: The labelled dataset that was available for this task was very limited in number of examples and thus as noted above few data augmentation techniques were applied to boost the learning of the network. Before applying augmentation, a train-test split of 78%-22% was done from the original, cleansed data set. Thus, 700 tweets/messages were held out for testing. All model evaluation were done in on the test set that got generated by this process. The results presented in this report are based on the performance of the model on the test set. The training set of 2489 messages were however sent to an offline pipeline for augmenting the data. The resulting training dataset was thus 7934 messages. the final distribution of messages for training and test was thus below:"
],
[
"We tested the performance of various model architectures by running our experiment over 100 times on a CPU based compute which later as migrated to GPU based compute to overcome the slow learning progress. Our universal metric for minimizing was the validation loss and we employed various operational techniques for optimizing on the learning process. These processes and its implementation details will be discussed later but they were learning rate decay, early stopping, model checkpointing and reducing learning rate on plateau."
],
[
"For the loss function we chose categorical cross entropy loss in finding the most optimal weights/parameters of the model. Formally this loss function for the model is defined as below:",
"The double sum is over the number of observations and the categories respectively. While the model probability is the probability that the observation i belongs to category c."
],
[
"Among the model architectures we experimented with and without data augmentation were:",
"Fully Connected dense networks: Model hyperparameters were inspired from the previous work done by Vo et al and Mathur et al. This was also used as a baseline model but we did not get appreciable performance on such architecture due to FC networks not being able to capture local and long term dependencies.",
"Convolution based architectures: Architecture and hyperparameter choices were chosen from the past study Deon on the subject. We were able to boost the performance as compared to only FC based network but we noticed better performance from architectures that are suitable to sequences such as text messages or any timeseries data.",
"Sequence models: We used SimpleRNN, LSTM, GRU, Bidirectional LSTM model architecture to capture long term dependencies of the messages in determining the class the message or the tweet belonged to.",
"Based on all the experiments we conducted below model had best performance related to metrics - Recall rate, F1 score and Overall accuracy."
],
[
"Choice of model parameters were in the above models were inspired from previous work done but then were tuned to the best performance of the Test dataset. Following parameters were considered for tuning.",
"Learning rate: Based on grid search the best performance was achieved when learning rate was set to 0.01. This value was arrived by a grid search on lr parameter.",
"Number of Bidirectional LSTM units: A set of 32, 64, 128 hidden activation units were considered for tuning the model. 128 was a choice made by Vo et al in modeling for Vietnamese language but with our experiments and with a small dataset to avoid overfitting to train dataset, a smaller unit sizes were considered.",
"Embedding dimension: 50, 100 and 200 dimension word representation from Glove word embedding were considered and the best results were obtained with 100d representation, consistent with choices made in the previous work.",
"Transfer learning on Embedding; Another bit flag for training the embedding on the train data or freezing the embedding from Glove was used. It was determined that set of pre-trained weights from Glove was best when it was fine tuned with Hinglish data. It provides evidence that a separate word or sentence level embedding when learnt for Hinglish text analysis will be very useful.",
"Number of dense FC layers.",
"Maximum length of the sequence to be considered: The max length of tweets/message in the dataset was 1265 while average was 116. We determined that choosing 200 resulted in the best performance."
],
[
"During our experimentation, it was evident that this is a hard problem especially detecting the hate speech, text in a code- mixed language. The best recall rate of 77 % for hate speech was obtained by a Bidirectional LSTM with 32 units with a recurrent drop out rate of 0.2. Precision wise GRU type of RNN sequence model faired better than other kinds for hate speech detection. On the other hand for detecting offensive and non offensive tweets, fairly satisfactory results were obtained. For offensive tweets, 92 % precision was and recall rate of 88% was obtained with GRU versus BiLSTM based models. Comparatively, Recall of 85 % and precision of 76 % was obtained by again GRU and BiLSTM based models as shown and marked in the results."
],
[
"The results of the experiments are encouraging on detective offensive vs non offensive tweets and messages written in Hinglish in social media. The utilization of data augmentation technique in this classification task was one of the vital contributions which led us to surpass results obtained by previous state of the art Hybrid CNN-LSTM based models. However, the results of the model for predicting hateful tweets on the contrary brings forth some shortcomings of the model. The biggest shortcoming on the model based on error analysis indicates less than generalized examples presented by the dataset. We also note that the embedding learnt from the Hinglish data set may be lacking and require extensive training to have competent word representations of Hinglish text. Given this learning's, we identify that creating word embeddings on much larger Hinglish corpora may have significant results. We also hypothesize that considering alternate methods than translation and transliteration may prove beneficial."
],
[
"[1] Mathur, Puneet and Sawhney, Ramit and Ayyar, Meghna and Shah, Rajiv, Did you offend me? classification of offensive tweets in hinglish language, Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"[2] Mathur, Puneet and Shah, Rajiv and Sawhney, Ramit and Mahata, Debanjan Detecting offensive tweets in hindi-english code-switched language Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media",
"[3] Vo, Quan-Hoang and Nguyen, Huy-Tien and Le, Bac and Nguyen, Minh-Le Multi-channel LSTM-CNN model for Vietnamese sentiment analysis 2017 9th international conference on knowledge and systems engineering (KSE)",
"[4] Hochreiter, Sepp and Schmidhuber, Jürgen Long short-term memory Neural computation 1997",
"[5] Sinha, R Mahesh K and Thakur, Anil Multi-channel LSTM-CNN model for Vietnamese sentiment analysis 2017 9th international conference on knowledge and systems engineering (KSE)",
"[6] Pennington, Jeffrey and Socher, Richard and Manning, Christopher Glove: Global vectors for word representation Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"[7] Zhang, Lei and Wang, Shuai and Liu, Bing Deep learning for sentiment analysis: A survey Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery",
"[8] Caruana, Rich and Lawrence, Steve and Giles, C Lee Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping Advances in neural information processing systems",
"[9] Beale, Mark Hudson and Hagan, Martin T and Demuth, Howard B Neural network toolbox user’s guide The MathWorks Incs",
"[10] Chollet, François and others Keras: The python deep learning library Astrophysics Source Code Library",
"[11] Wei, Jason and Zou, Kai EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)"
]
],
"section_name": [
"Introduction",
"Introduction ::: Modeling challenges",
"Related Work ::: Transfer learning based approaches",
"Related Work ::: Hybrid models",
"Dataset and Features",
"Dataset and Features ::: Challenges",
"Model Architecture",
"Model Architecture ::: Loss function",
"Model Architecture ::: Models",
"Model Architecture ::: Hyper parameters",
"Results",
"Conclusion and Future work",
"References"
]
} | {
"answers": [
{
"annotation_id": [
"7011aa54bc26a8fc6341a2dcdb252137b10afb54"
],
"answer": [
{
"evidence": [
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.",
"The approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work."
],
"extractive_spans": [
"Ternary Trans-CNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.\n\nThe approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"115ade40d6ac911be5ffa8d7d732c22c6e822f35",
"bf1ad37030290082d5397af72edc7a56f648141e"
],
"answer": [
{
"evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
],
"extractive_spans": [
"HEOT ",
"A labelled dataset for a corresponding english tweets"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
],
"extractive_spans": [
"HEOT"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"bf818377320e6257fc663920044efc482d2d8fb3",
"e510717bb49fa0f0e8faff405e1ce3bbbde46c6a"
],
"answer": [
{
"evidence": [
"Dataset: Based on some earlier work, only available labelled dataset had 3189 rows of text messages of average length of 116 words and with a range of 1, 1295. Prior work addresses this concern by using Transfer Learning on an architecture learnt on about 14,500 messages with an accuracy of 83.90. We addressed this concern using data augmentation techniques applied on text data."
],
"extractive_spans": [
"3189 rows of text messages"
],
"free_form_answer": "",
"highlighted_evidence": [
"Dataset: Based on some earlier work, only available labelled dataset had 3189 rows of text messages of average length of 116 words and with a range of 1, 1295."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Train-test split: The labelled dataset that was available for this task was very limited in number of examples and thus as noted above few data augmentation techniques were applied to boost the learning of the network. Before applying augmentation, a train-test split of 78%-22% was done from the original, cleansed data set. Thus, 700 tweets/messages were held out for testing. All model evaluation were done in on the test set that got generated by this process. The results presented in this report are based on the performance of the model on the test set. The training set of 2489 messages were however sent to an offline pipeline for augmenting the data. The resulting training dataset was thus 7934 messages. the final distribution of messages for training and test was thus below:",
"FLOAT SELECTED: Table 3: Train-test split"
],
"extractive_spans": [],
"free_form_answer": "Resulting dataset was 7934 messages for train and 700 messages for test.",
"highlighted_evidence": [
"The resulting training dataset was thus 7934 messages. the final distribution of messages for training and test was thus below:",
"FLOAT SELECTED: Table 3: Train-test split"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"fdb7b5252df7cb221bb9b696fddcc5e070453392"
],
"answer": [
{
"evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
],
"extractive_spans": [
"A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al",
"HEOT obtained from one of the past studies done by Mathur et al"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2f35875b3d410f546700ef96c2c2926092dbb5b0"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"a4a998955b75604e43627ad8b411e15dfa039b88"
],
"answer": [
{
"evidence": [
"Related Work ::: Transfer learning based approaches",
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.",
"The approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work.",
"Related Work ::: Hybrid models",
"In another localized setting of Vietnamese language, Nguyen et al. in 2017 proposed a Hybrid multi-channel CNN and LSTM model where they build feature maps for Vietnamese language using CNN to capture shorterm dependencies and LSTM to capture long term dependencies and concatenate both these feature sets to learn a unified set of features on the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy rate of 87.3% with this architecture."
],
"extractive_spans": [
"Ternary Trans-CNN ",
"Hybrid multi-channel CNN and LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
" Transfer learning based approaches\nMathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.\n\nThe approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work.\n\nRelated Work ::: Hybrid models\nIn another localized setting of Vietnamese language, Nguyen et al. in 2017 proposed a Hybrid multi-channel CNN and LSTM model where they build feature maps for Vietnamese language using CNN to capture shorterm dependencies and LSTM to capture long term dependencies and concatenate both these feature sets to learn a unified set of features on the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy rate of 87.3% with this architecture."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"ec3208718af7624c0ffd8c9ec5d9f4d04217b9ab"
],
"answer": [
{
"evidence": [
"Hinglish is a linguistic blend of Hindi (very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi, but is used in India, with English words blending with Punjabi, Gujarati, Marathi and Hindi. Sometimes, though rarely, Hinglish is used to refer to Hindi written in English script and mixing with English words or phrases. This makes analyzing the language very interesting. Its rampant usage in social media like Twitter, Facebook, Online blogs and reviews has also led to its usage in delivering hate and abuses in similar platforms. We aim to find such content in the social media focusing on the tweets. Hypothetically, if we can classify such tweets, we might be able to detect them and isolate them for further analysis before it reaches public. This will a great application of AI to the social cause and thus is motivating. An example of a simple, non offensive message written in Hinglish could be:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We aim to find such content in the social media focusing on the tweets."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"bd8f9113da801bf11685ae686a6e0ca758f17b83"
],
"answer": [
{
"evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
],
"extractive_spans": [
"HEOT ",
"A labelled dataset for a corresponding english tweets "
],
"free_form_answer": "",
"highlighted_evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is the previous work's model?",
"What dataset is used?",
"How big is the dataset?",
"How is the dataset collected?",
"Was each text augmentation technique experimented individually?",
"What models do previous work use?",
"Does the dataset contain content from various social media platforms?",
"What dataset is used?"
],
"question_id": [
"792d7b579cbf7bfad8fe125b0d66c2059a174cf9",
"44a2a8e187f8adbd7d63a51cd2f9d2d324d0c98d",
"5908d7fb6c48f975c5dfc5b19bb0765581df2b25",
"cca3301f20db16f82b5d65a102436bebc88a2026",
"cfd67b9eeb10e5ad028097d192475d21d0b6845b",
"e1c681280b5667671c7f78b1579d0069cba72b0e",
"58d50567df71fa6c3792a0964160af390556757d",
"07c79edd4c29635dbc1c2c32b8df68193b7701c6"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Annotated Data set",
"Table 2: Examples in the dataset",
"Table 3: Train-test split",
"Figure 1: Deep learning network used for the modeling",
"Figure 2: Results of various experiments"
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"5-Figure1-1.png",
"5-Figure2-1.png"
]
} | [
"How big is the dataset?"
] | [
[
"1912.13109-Dataset and Features ::: Challenges-12",
"1912.13109-Introduction ::: Modeling challenges-5",
"1912.13109-4-Table3-1.png"
]
] | [
"Resulting dataset was 7934 messages for train and 700 messages for test."
] | 48 |
1703.04617 | Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering | The last several years have seen intensive interest in exploring neural-network-based models for machine comprehension (MC) and question answering (QA). In this paper, we approach the problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline. | {
"paragraphs": [
[
"Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest, including but not limited to the efforts as in BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Many specific problems such as machine comprehension and question answering often involve modeling such question-document pairs.",
"The recent availability of relatively large training datasets (see Section \"Related Work\" for more details) has made it more feasible to train and estimate rather complex models in an end-to-end fashion for these problems, in which a whole model is fit directly with given question-answer tuples and the resulting model has shown to be rather effective.",
"In this paper, we take a closer look at modeling questions in such an end-to-end neural network framework, since we regard question understanding is of importance for such problems. We first introduced syntactic information to help encode questions. We then viewed and modelled different types of questions and the information shared among them as an adaptation problem and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results on our competitive baselines."
],
[
"Recent advance on reading comprehension and question answering has been closely associated with the availability of various datasets. BIBREF0 released the MCTest data consisting of 500 short, fictional open-domain stories and 2000 questions. The CNN/Daily Mail dataset BIBREF1 contains news articles for close style machine comprehension, in which only entities are removed and tested for comprehension. Children's Book Test (CBT) BIBREF2 leverages named entities, common nouns, verbs, and prepositions to test reading comprehension. The Stanford Question Answering Dataset (SQuAD) BIBREF3 is more recently released dataset, which consists of more than 100,000 questions for documents taken from Wikipedia across a wide range of topics. The question-answer pairs are annotated through crowdsourcing. Answers are spans of text marked in the original documents. In this paper, we use SQuAD to evaluate our models.",
"Many neural network models have been studied on the SQuAD task. BIBREF6 proposed match LSTM to associate documents and questions and adapted the so-called pointer Network BIBREF7 to determine the positions of the answer text spans. BIBREF8 proposed a dynamic chunk reader to extract and rank a set of answer candidates. BIBREF9 focused on word representation and presented a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on the properties of words. BIBREF10 proposed a multi-perspective context matching (MPCM) model, which matched an encoded document and question from multiple perspectives. BIBREF11 proposed a dynamic decoder and so-called highway maxout network to improve the effectiveness of the decoder. The bi-directional attention flow (BIDAF) BIBREF12 used the bi-directional attention to obtain a question-aware context representation.",
"In this paper, we introduce syntactic information to encode questions with a specific form of recursive neural networks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . More specifically, we explore a tree-structured LSTM BIBREF13 , BIBREF14 which extends the linear-chain long short-term memory (LSTM) BIBREF17 to a recursive structure, which has the potential to capture long-distance interactions over the structures.",
"Different types of questions are often used to seek for different types of information. For example, a \"what\" question could have very different property from that of a \"why\" question, while they may share information and need to be trained together instead of separately. We view this as a \"adaptation\" problem to let different types of questions share a basic model but still discriminate them when needed. Specifically, we are motivated by the ideas \"i-vector\" BIBREF18 in speech recognition, where neural network based adaptation is performed among different (groups) of speakers and we focused instead on different types of questions here."
],
[
"Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details.",
"We concatenate embedding at two levels to represent a word: the character composition and word-level embedding. The character composition feeds all characters of a word into a convolutional neural network (CNN) BIBREF19 to obtain a representation for the word. And we use the pre-trained 300-D GloVe vectors BIBREF20 (see the experiment section for details) to initialize our word-level embedding. Each word is therefore represented as the concatenation of the character-composition vector and word-level embedding. This is performed on both questions and documents, resulting in two matrices: the $\\mathbf {Q}^e \\in \\mathbb {R} ^{N\\times d_w}$ for a question and the $\\mathbf {D}^e \\in \\mathbb {R} ^{M\\times d_w}$ for a document, where $N$ is the question length (number of word tokens), $M$ is the document length, and $d_w$ is the embedding dimensionality.",
"The above word representation focuses on representing individual words, and an input encoder here employs recurrent neural networks to obtain the representation of a word under its context. We use bi-directional GRU (BiGRU) BIBREF21 for both documents and questions.",
"$${\\mathbf {Q}^c_i}&=\\text{BiGRU}(\\mathbf {Q}^e_i,i),\\forall i \\in [1, \\dots , N] \\\\\n{\\mathbf {D}^c_j}&=\\text{BiGRU}(\\mathbf {D}^e_j,j),\\forall j \\in [1, \\dots , M]$$ (Eq. 5) ",
"A BiGRU runs a forward and backward GRU on a sequence starting from the left and the right end, respectively. By concatenating the hidden states of these two GRUs for each word, we obtain the a representation for a question or document: $\\mathbf {Q}^c \\in \\mathbb {R} ^{N\\times d_c}$ for a question and $\\mathbf {D}^c \\in \\mathbb {R} ^{M\\times d_c}$ for a document.",
"Questions and documents interact closely. As in most previous work, our framework use both soft attention over questions and that over documents to capture the interaction between them. More specifically, in this soft-alignment layer, we first feed the contextual representation matrix $\\mathbf {Q}^c$ and $\\mathbf {D}^c$ to obtain alignment matrix $\\mathbf {U} \\in \\mathbb {R} ^{N\\times M}$ : ",
"$$\\mathbf {U}_{ij} =\\mathbf {Q}_i^c \\cdot \\mathbf {D}_j^{c\\mathrm {T}}, \\forall i \\in [1, \\dots , N], \\forall j \\in [1, \\dots , M]$$ (Eq. 7) ",
"Each $\\mathbf {U}_{ij}$ represents the similarity between a question word $\\mathbf {Q}_i^c$ and a document word $\\mathbf {D}_j^c$ .",
"Word-level Q-code Similar as in BIBREF12 , we obtain a word-level Q-code. Specifically, for each document word $w_j$ , we find which words in the question are relevant to it. To this end, $\\mathbf {a}_j\\in \\mathbb {R} ^{N}$ is computed with the following equation and used as a soft attention weight: ",
"$$\\mathbf {a}_j = softmax(\\mathbf {U}_{:j}), \\forall j \\in [1, \\dots , M]$$ (Eq. 8) ",
"With the attention weights computed, we obtain the encoding of the question for each document word $w_j$ as follows, which we call word-level Q-code in this paper: ",
"$$\\mathbf {Q}^w=\\mathbf {a}^{\\mathrm {T}} \\cdot \\mathbf {Q}^{c} \\in \\mathbb {R} ^{M\\times d_c}$$ (Eq. 9) ",
"Question-based filtering To better explore question understanding, we design this question-based filtering layer. As detailed later, different question representation can be easily incorporated to this layer in addition to being used as a filter to find key information in the document based on the question. This layer is expandable with more complicated question modeling.",
"In the basic form of question-based filtering, for each question word $w_i$ , we find which words in the document are associated. Similar to $\\mathbf {a}_j$ discussed above, we can obtain the attention weights on document words for each question word $w_i$ : ",
"$$\\mathbf {b}_i=softmax(\\mathbf {U}_{i:})\\in \\mathbb {R} ^{M}, \\forall i \\in [1, \\dots , N]$$ (Eq. 10) ",
"By pooling $\\mathbf {b}\\in \\mathbb {R} ^{N\\times M}$ , we can obtain a question-based filtering weight $\\mathbf {b}^f$ : ",
"$$\\mathbf {b}^f=norm(pooling(\\mathbf {b})) \\in \\mathbb {R} ^{M}$$ (Eq. 11) ",
"$$norm(\\mathbf {x})=\\frac{\\mathbf {x}}{\\sum _i x_i}$$ (Eq. 12) ",
"where the specific pooling function we used include max-pooling and mean-pooling. Then the document softly filtered based on the corresponding question $\\mathbf {D}^f$ can be calculated by: ",
"$$\\mathbf {D}_j^{f_{max}}=b^{f_{max}}_j \\mathbf {D}_j^{c}, \\forall j \\in [1, \\dots , M]$$ (Eq. 13) ",
"$$\\mathbf {D}_j^{f_{mean}}=b^{f_{mean}}_j \\mathbf {D}_j^{c}, \\forall j \\in [1, \\dots , M]$$ (Eq. 14) ",
"Through concatenating the document representation $\\mathbf {D}^c$ , word-level Q-code $\\mathbf {Q}^w$ and question-filtered document $\\mathbf {D}^f$ , we can finally obtain the alignment layer representation: ",
"$$\\mathbf {I}=[\\mathbf {D}^c, \\mathbf {Q}^w,\\mathbf {D}^c \\circ \\mathbf {Q}^w,\\mathbf {D}^c - \\mathbf {Q}^w, \\mathbf {D}^f, \\mathbf {b}^{f_{max}}, \\mathbf {b}^{f_{mean}}] \\in \\mathbb {R} ^{M \\times (6d_c+2)}$$ (Eq. 16) ",
"where \" $\\circ $ \" stands for element-wise multiplication and \" $-$ \" is simply the vector subtraction.",
"After acquiring the local alignment representation, key information in document and question has been collected, and the aggregation layer is then performed to find answers. We use three BiGRU layers to model the process that aggregates local information to make the global decision to find the answer spans. We found a residual architecture BIBREF22 as described in Figure 2 is very effective in this aggregation process: ",
"$$\\mathbf {I}^1_i=\\text{BiGRU}(\\mathbf {I}_i)$$ (Eq. 18) ",
"$$\\mathbf {I}^2_i=\\mathbf {I}^1_i + \\text{BiGRU}(\\mathbf {I}^1_i)$$ (Eq. 19) ",
"The SQuAD QA task requires a span of text to answer a question. We use a pointer network BIBREF7 to predict the starting and end position of answers as in BIBREF6 . Different from their methods, we use a two-directional prediction to obtain the positions. For one direction, we first predict the starting position of the answer span followed by predicting the end position, which is implemented with the following equations: ",
"$$P(s+)=softmax(W_{s+}\\cdot I^3)$$ (Eq. 23) ",
"$$P(e+)=softmax(W_{e+} \\cdot I^3 + W_{h+} \\cdot h_{s+})$$ (Eq. 24) ",
"where $\\mathbf {I}^3$ is inference layer output, $\\mathbf {h}_{s+}$ is the hidden state of the first step, and all $\\mathbf {W}$ are trainable matrices. We also perform this by predicting the end position first and then the starting position: ",
"$$P(e-)=softmax(W_{e-}\\cdot I^3)$$ (Eq. 25) ",
"$$P(s-)=softmax(W_{s-} \\cdot I^3 + W_{h-} \\cdot h_{e-})$$ (Eq. 26) ",
"We finally identify the span of an answer with the following equation: ",
"$$P(s)=pooling([P(s+), P(s-)])$$ (Eq. 27) ",
"$$P(e)=pooling([P(e+), P(e-)])$$ (Eq. 28) ",
"We use the mean-pooling here as it is more effective on the development set than the alternatives such as the max-pooling."
],
[
"The interplay of syntax and semantics of natural language questions is of interest for question representation. We attempt to incorporate syntactic information in questions representation with TreeLSTM BIBREF13 , BIBREF14 . In general a TreeLSTM could perform semantic composition over given syntactic structures.",
"Unlike the chain-structured LSTM BIBREF17 , the TreeLSTM captures long-distance interaction on a tree. The update of a TreeLSTM node is described at a high level with Equation ( 31 ), and the detailed computation is described in (–). Specifically, the input of a TreeLSTM node is used to configure four gates: the input gate $\\mathbf {i}_t$ , output gate $\\mathbf {o}_t$ , and the two forget gates $\\mathbf {f}_t^L$ for the left child input and $\\mathbf {f}_t^R$ for the right. The memory cell $\\mathbf {c}_t$ considers each child's cell vector, $\\mathbf {c}_{t-1}^L$ and $\\mathbf {c}_{t-1}^R$ , which are gated by the left forget gate $\\mathbf {f}_t^L$ and right forget gate $\\mathbf {f}_t^R$ , respectively.",
"$$\\mathbf {h}_t &= \\text{TreeLSTM}(\\mathbf {x}_t, \\mathbf {h}_{t-1}^L, \\mathbf {h}_{t-1}^R), \\\\\n\n\\mathbf {h}_t &= \\mathbf {o}_t \\circ \\tanh (\\mathbf {c}_{t}),\\\\\n\\mathbf {o}_t &= \\sigma (\\mathbf {W}_o \\mathbf {x}_t + \\mathbf {U}_o^L \\mathbf {h}_{t-1}^L + \\mathbf {U}_o^R \\mathbf {h}_{t-1}^R), \\\\\\mathbf {c}_t &= \\mathbf {f}_t^L \\circ \\mathbf {c}_{t-1}^L + \\mathbf {f}_t^R \\circ \\mathbf {c}_{t-1}^R + \\mathbf {i}_t \\circ \\mathbf {u}_t, \\\\\\mathbf {f}_t^L &= \\sigma (\\mathbf {W}_f \\mathbf {x}_t + \\mathbf {U}_f^{LL} \\mathbf {h}_{t-1}^L + \\mathbf {U}_f^{LR} \\mathbf {h}_{t-1}^R),\\\\\n\\mathbf {f}_t^R &= \\sigma (\\mathbf {W}_f \\mathbf {x}_t + \\mathbf {U}_f^{RL} \\mathbf {h}_{t-1}^L + \\mathbf {U}_f^{RR} \\mathbf {h}_{t-1}^R), \\\\\\mathbf {i}_t &= \\sigma (\\mathbf {W}_i \\mathbf {x}_t + \\mathbf {U}_i^L \\mathbf {h}_{t-1}^L + \\mathbf {U}_i^R \\mathbf {h}_{t-1}^R), \\\\\\mathbf {u}_t &= \\tanh (\\mathbf {W}_c \\mathbf {x}_t + \\mathbf {U}_c^L \\mathbf {h}_{t-1}^L + \\mathbf {U}_c^R \\mathbf {h}_{t-1}^R),$$ (Eq. 31) ",
"where $\\sigma $ is the sigmoid function, $\\circ $ is the element-wise multiplication of two vectors, and all $\\mathbf {W}$ , $\\mathbf {U}$ are trainable matrices.",
"To obtain the parse tree information, we use Stanford CoreNLP (PCFG Parser) BIBREF23 , BIBREF24 to produce a binarized constituency parse for each question and build the TreeLSTM based on the parse tree. The root node of TreeLSTM is used as the representation for the whole question. More specifically, we use it as TreeLSTM Q-code $\\mathbf {Q}^{TL}\\in \\mathbb {R} ^{d_c}$ , by not only simply concatenating it to the alignment layer output but also using it as a question filter, just as we discussed in the question-based filtering section: ",
"$$\\mathbf {Q}^{TL}=\\text{TreeLSTM}(\\mathbf {Q}^e) \\in \\mathbb {R} ^{d_c}$$ (Eq. 32) ",
"$$\\mathbf {b}^{TL}=norm(\\mathbf {Q}^{TL} \\cdot \\mathbf {D}^{c\\mathrm {T}}) \\in \\mathbb {R} ^{M}$$ (Eq. 33) ",
"where $\\mathbf {I}_{new}$ is the new output of alignment layer, and function $repmat$ copies $\\mathbf {Q}^{TL}$ for M times to fit with $\\mathbf {I}$ .",
"Questions by nature are often composed to fulfill different types of information needs. For example, a \"when\" question seeks for different types of information (i.e., temporal information) than those for a \"why\" question. Different types of questions and the corresponding answers could potentially have different distributional regularity.",
"The previous models are often trained for all questions without explicitly discriminating different question types; however, for a target question, both the common features shared by all questions and the specific features for a specific type of question are further considered in this paper, as they could potentially obey different distributions. In this paper we further explicitly model different types of questions in the end-to-end training. We start from a simple way to first analyze the word frequency of all questions, and obtain top-10 most frequent question types: what, how, who, when, which, where, why, be, whose, and whom, in which be stands for the questions beginning with different forms of the word be such as is, am, and are. We explicitly encode question-type information to be an 11-dimensional one-hot vector (the top-10 question types and \"other\" question type). Each question type is with a trainable embedding vector. We call this explicit question type code, $\\mathbf {ET}\\in \\mathbb {R} ^{d_{ET}}$ . Then the vector for each question type is tuned during training, and is added to the system with the following equation: ",
"$$\\mathbf {I}_{new}=[\\mathbf {I}, repmat(\\mathbf {ET})]$$ (Eq. 38) ",
"As discussed, different types of questions and their answers may share common regularity and have separate property at the same time. We also view this as an adaptation problem in order to let different types of questions share a basic model but still discriminate them when needed. Specifically, we borrow ideas from speaker adaptation BIBREF18 in speech recognition, where neural-network-based adaptation is performed among different groups of speakers.",
"Conceptually we regard a type of questions as a group of acoustically similar speakers. Specifically we propose a question discriminative block or simply called a discriminative block (Figure 3 ) below to perform question adaptation. The main idea is described below: ",
"$$\\mathbf {x^\\prime } = f([\\mathbf {x}, \\mathbf {\\bar{x}}^c, \\mathbf {\\delta _x}])$$ (Eq. 40) ",
"For each input question $\\mathbf {x}$ , we can decompose it to two parts: the cluster it belong(i.e., question type) and the diverse in the cluster. The information of the cluster is encoded in a vector $\\mathbf {\\bar{x}}^c$ . In order to keep calculation differentiable, we compute the weight of all the clusters based on the distances of $\\mathbf {x}$ and each cluster center vector, in stead of just choosing the closest cluster. Then the discriminative vector $\\mathbf {\\delta _x}$ with regard to these most relevant clusters are computed. All this information is combined to obtain the discriminative information. In order to keep the full information of input, we also copy the input question $\\mathbf {x}$ , together with the acquired discriminative information, to a feed-forward layer to obtain a new representation $\\mathbf {x^\\prime }$ for the question.",
"More specifically, the adaptation algorithm contains two steps: adapting and updating, which is detailed as follows:",
"Adapting In the adapting step, we first compute the similarity score between an input question vector $\\mathbf {x}\\in \\mathbb {R} ^{h}$ and each centroid vector of $K$ clusters $~\\mathbf {\\bar{x}}\\in \\mathbb {R} ^{K \\times h}$ . Each cluster here models a question type. Unlike the explicit question type modeling discussed above, here we do not specify what question types we are modeling but let the system to learn. Specifically, we only need to pre-specific how many clusters, $K$ , we are modeling. The similarity between an input question and cluster centroid can be used to compute similarity weight $\\mathbf {w}^a$ : ",
"$$w_k^a = softmax(cos\\_sim(\\mathbf {x}, \\mathbf {\\bar{x}}_k), \\alpha ), \\forall k \\in [1, \\dots , K]$$ (Eq. 43) ",
"$$cos\\_sim(\\mathbf {u}, \\mathbf {v}) = \\frac{<\\mathbf {u},\\mathbf {v}>}{||\\mathbf {u}|| \\cdot ||\\mathbf {v}||}$$ (Eq. 44) ",
"We set $\\alpha $ equals 50 to make sure only closest class will have a high weight while maintain differentiable. Then we acquire a soft class-center vector $\\mathbf {\\bar{x}}^c$ : ",
"$$\\mathbf {\\bar{x}}^c = \\sum _k w^a_k \\mathbf {\\bar{x}}_k \\in \\mathbb {R} ^{h}$$ (Eq. 46) ",
"We then compute a discriminative vector $\\mathbf {\\delta _x}$ between the input question with regard to the soft class-center vector: ",
"$$\\mathbf {\\delta _x} = \\mathbf {x} - \\mathbf {\\bar{x}}^c$$ (Eq. 47) ",
"Note that $\\bar{\\mathbf {x}}^c$ here models the cluster information and $\\mathbf {\\delta _x}$ represents the discriminative information in the cluster. By feeding $\\mathbf {x}$ , $\\bar{\\mathbf {x}}^c$ and $\\mathbf {\\delta _x}$ into feedforward layer with Relu, we obtain $\\mathbf {x^{\\prime }}\\in \\mathbb {R} ^{K}$ : ",
"$$\\mathbf {x^{\\prime }} = Relu(\\mathbf {W} \\cdot [\\mathbf {x},\\bar{\\mathbf {x}}^c,\\mathbf {\\delta _x}])$$ (Eq. 48) ",
"With $\\mathbf {x^{\\prime }}$ ready, we can apply Discriminative Block to any question code and obtain its adaptation Q-code. In this paper, we use TreeLSTM Q-code as the input vector $\\mathbf {x}$ , and obtain TreeLSTM adaptation Q-code $\\mathbf {Q}^{TLa}\\in \\mathbb {R} ^{d_c}$ . Similar to TreeLSTM Q-code $\\mathbf {Q}^{TL}$ , we concatenate $\\mathbf {Q}^{TLa}$ to alignment output $\\mathbf {I}$ and also use it as a question filter: ",
"$$\\mathbf {Q}^{TLa} = Relu(\\mathbf {W} \\cdot [\\mathbf {Q}^{TL},\\overline{\\mathbf {Q}^{TL}}^c,\\mathbf {\\delta _{\\mathbf {Q}^{TL}}}])$$ (Eq. 49) ",
"$$\\mathbf {b}^{TLa}=norm(\\mathbf {Q}^{TLa} \\cdot \\mathbf {D}^{c\\mathrm {T}}) \\in \\mathbb {R} ^{M}$$ (Eq. 50) ",
"Updating The updating stage attempts to modify the center vectors of the $K$ clusters in order to fit each cluster to model different types of questions. The updating is performed according to the following formula: ",
"$$\\mathbf {\\bar{x}^{\\prime }}_k = (1-\\beta \\text{w}_k^a)\\mathbf {\\bar{x}}_k+\\beta \\text{w}_k^a\\mathbf {x}, \\forall k \\in [1, \\dots , K]$$ (Eq. 54) ",
"In the equation, $\\beta $ is an updating rate used to control the amount of each updating, and we set it to 0.01. When $\\mathbf {x}$ is far away from $K$ -th cluster center $\\mathbf {\\bar{x}}_k$ , $\\text{w}_k^a$ is close to be value 0 and the $k$ -th cluster center $\\mathbf {\\bar{x}}_k$ tends not to be updated. If $\\mathbf {x}$ is instead close to the $j$ -th cluster center $\\mathbf {\\bar{x}}_j$ , $\\mathbf {x}$0 is close to the value 1 and the centroid of the $\\mathbf {x}$1 -th cluster $\\mathbf {x}$2 will be updated more aggressively using $\\mathbf {x}$3 ."
],
[
"We test our models on Stanford Question Answering Dataset (SQuAD) BIBREF3 . The SQuAD dataset consists of more than 100,000 questions annotated by crowdsourcing workers on a selected set of Wikipedia articles, and the answer to each question is a span of text in the Wikipedia articles. Training data includes 87,599 instances and validation set has 10,570 instances. The test data is hidden and kept by the organizer. The evaluation of SQuAD is Exact Match (EM) and F1 score.",
"We use pre-trained 300-D Glove 840B vectors BIBREF20 to initialize our word embeddings. Out-of-vocabulary (OOV) words are initialized randomly with Gaussian samples. CharCNN filter length is 1,3,5, each is 50 dimensions. All vectors including word embedding are updated during training. The cluster number K in discriminative block is 100. The Adam method BIBREF25 is used for optimization. And the first momentum is set to be 0.9 and the second 0.999. The initial learning rate is 0.0004 and the batch size is 32. We will half learning rate when meet a bad iteration, and the patience is 7. Our early stop evaluation is the EM and F1 score of validation set. All hidden states of GRUs, and TreeLSTMs are 500 dimensions, while word-level embedding $d_w$ is 300 dimensions. We set max length of document to 500, and drop the question-document pairs beyond this on training set. Explicit question-type dimension $d_{ET}$ is 50. We apply dropout to the Encoder layer and aggregation layer with a dropout rate of 0.5."
],
[
"Table 1 shows the official leaderboard on SQuAD test set when we submitted our system. Our model achieves a 68.73% EM score and 77.39% F1 score, which is ranked among the state of the art single models (without model ensembling).",
"Table 2 shows the ablation performances of various Q-code on the development set. Note that since the testset is hidden from us, we can only perform such an analysis on the development set. Our baseline model using no Q-code achieved a 68.00% and 77.36% EM and F1 scores, respectively. When we added the explicit question type T-code into the baseline model, the performance was improved slightly to 68.16%(EM) and 77.58%(F1). We then used TreeLSTM introduce syntactic parses for question representation and understanding (replacing simple question type as question understanding Q-code), which consistently shows further improvement. We further incorporated the soft adaptation. When letting the number of hidden question types ( $K$ ) to be 20, the performance improves to 68.73%/77.74% on EM and F1, respectively, which corresponds to the results of our model reported in Table 1 . Furthermore, after submitted our result, we have experimented with a large value of $K$ and found that when $K=100$ , we can achieve a better performance of 69.10%/78.38% on the development set.",
"Figure UID61 shows the EM/F1 scores of different question types while Figure UID62 is the question type amount distribution on the development set. In Figure UID61 we can see that the average EM/F1 of the \"when\" question is highest and those of the \"why\" question is the lowest. From Figure UID62 we can see the \"what\" question is the major class.",
"Figure 5 shows the composition of F1 score. Take our best model as an example, we observed a 78.38% F1 score on the whole development set, which can be separated into two parts: one is where F1 score equals to 100%, which means an exact match. This part accounts for 69.10% of the entire development set. And the other part accounts for 30.90%, of which the average F1 score is 30.03%. For the latter, we can further divide it into two sub-parts: one is where the F1 score equals to 0%, which means that predict answer is totally wrong. This part occupies 14.89% of the total development set. The other part accounts for 16.01% of the development set, of which average F1 score is 57.96%. From this analysis we can see that reducing the zero F1 score (14.89%) is potentially an important direction to further improve the system."
],
[
"Closely modelling questions could be of importance for question answering and machine reading. In this paper, we introduce syntactic information to help encode questions in neural networks. We view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline."
]
],
"section_name": [
"Introduction",
"Related Work",
"The Baseline Model",
"Question Understanding and Adaptation",
"Set-Up",
"Results",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"3723cd0588687070d28ed836a630db0991b52dd6"
],
"answer": [
{
"evidence": [
"Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest, including but not limited to the efforts as in BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Many specific problems such as machine comprehension and question answering often involve modeling such question-document pairs."
],
"extractive_spans": [],
"free_form_answer": "machine comprehension",
"highlighted_evidence": [
"machine comprehension ",
"Nelufar "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"3f8a5651de2844ab9fc75f8b2d1302e3734fe09e"
]
},
{
"annotation_id": [
"22e620e1d1e5c7127bb207c662d72eeef7dec0b8"
],
"answer": [
{
"evidence": [
"Table 2 shows the ablation performances of various Q-code on the development set. Note that since the testset is hidden from us, we can only perform such an analysis on the development set. Our baseline model using no Q-code achieved a 68.00% and 77.36% EM and F1 scores, respectively. When we added the explicit question type T-code into the baseline model, the performance was improved slightly to 68.16%(EM) and 77.58%(F1). We then used TreeLSTM introduce syntactic parses for question representation and understanding (replacing simple question type as question understanding Q-code), which consistently shows further improvement. We further incorporated the soft adaptation. When letting the number of hidden question types ( $K$ ) to be 20, the performance improves to 68.73%/77.74% on EM and F1, respectively, which corresponds to the results of our model reported in Table 1 . Furthermore, after submitted our result, we have experimented with a large value of $K$ and found that when $K=100$ , we can achieve a better performance of 69.10%/78.38% on the development set."
],
"extractive_spans": [
" 69.10%/78.38%"
],
"free_form_answer": "",
"highlighted_evidence": [
"69.10%/78.38%"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"3f8a5651de2844ab9fc75f8b2d1302e3734fe09e"
]
},
{
"annotation_id": [
"4c643ea11954f316a7a4a134ac5286bb1052fe50",
"e2961f8a8e69dcf43d6fa98994b67ea4ccda3d7d"
],
"answer": [
{
"evidence": [
"Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details."
],
"extractive_spans": [
"word embedding, input encoder, alignment, aggregation, and prediction."
],
"free_form_answer": "",
"highlighted_evidence": [
"word embedding, input encoder, alignment, aggregation, and prediction"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details.",
"FLOAT SELECTED: Figure 1: A high level view of our basic model."
],
"extractive_spans": [
"Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction."
],
"free_form_answer": "",
"highlighted_evidence": [
"Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction.",
"FLOAT SELECTED: Figure 1: A high level view of our basic model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"3f8a5651de2844ab9fc75f8b2d1302e3734fe09e",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"41b32d83e097277737f7518cfc7c86a52c7bb2e6"
],
"answer": [
{
"evidence": [
"Table 1 shows the official leaderboard on SQuAD test set when we submitted our system. Our model achieves a 68.73% EM score and 77.39% F1 score, which is ranked among the state of the art single models (without model ensembling)."
],
"extractive_spans": [
"Our model achieves a 68.73% EM score and 77.39% F1 score"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table 1 shows the official leaderboard on SQuAD test set when we submitted our system. Our model achieves a 68.73% EM score and 77.39% F1 score, which is ranked among the state of the art single models (without model ensembling)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What MC abbreviate for?",
"how much of improvement the adaptation model can get?",
"what is the architecture of the baseline model?",
"What is the exact performance on SQUAD?"
],
"question_id": [
"a891039441e008f1fd0a227dbed003f76c140737",
"73738e42d488b32c9db89ac8adefc75403fa2653",
"6c8bd7fa1cfb1b2bbeb011cc9c712dceac0c8f06",
"fa218b297d9cdcae238cef71096752ce27ca8f4a"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"Question Answering",
"question",
"question",
"question"
],
"topic_background": [
"research",
"research",
"research",
"familiar"
]
} | {
"caption": [
"Figure 1: A high level view of our basic model.",
"Figure 2: The inference layer implemented with a residual network.",
"Figure 3: The discriminative block for question discrimination and adaptation.",
"Table 1: The official leaderboard of single models on SQuAD test set as we submitted our systems (January 20, 2017).",
"Table 2: Performance of various Q-code on the development set.",
"Figure 4: Question Type Analysis",
"Figure 5: F1 Score Analysis."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"6-Figure3-1.png",
"8-Table1-1.png",
"8-Table2-1.png",
"9-Figure4-1.png",
"9-Figure5-1.png"
]
} | [
"What MC abbreviate for?"
] | [
[
"1703.04617-Introduction-0"
]
] | [
"machine comprehension"
] | 53 |
1909.00578 | SUM-QE: a BERT-based Summary Quality Estimation Model | We propose SumQE, a novel Quality Estimation model for summarization based on BERT. The model addresses linguistic quality aspects that are only indirectly captured by content-based approaches to summary evaluation, without involving comparison with human references. SumQE achieves very high correlations with human ratings, outperforming simpler models addressing these linguistic aspects. Predictions of the SumQE model can be used for system development, and to inform users of the quality of automatically produced summaries and other types of generated text. | {
"paragraphs": [
[
"Quality Estimation (QE) is a term used in machine translation (MT) to refer to methods that measure the quality of automatically translated text without relying on human references BIBREF0, BIBREF1. In this study, we address QE for summarization. Our proposed model, Sum-QE, successfully predicts linguistic qualities of summaries that traditional evaluation metrics fail to capture BIBREF2, BIBREF3, BIBREF4, BIBREF5. Sum-QE predictions can be used for system development, to inform users of the quality of automatically produced summaries and other types of generated text, and to select the best among summaries output by multiple systems.",
"Sum-QE relies on the BERT language representation model BIBREF6. We use a pre-trained BERT model adding just a task-specific layer, and fine-tune the entire model on the task of predicting linguistic quality scores manually assigned to summaries. The five criteria addressed are given in Figure FIGREF2. We provide a thorough evaluation on three publicly available summarization datasets from NIST shared tasks, and compare the performance of our model to a wide variety of baseline methods capturing different aspects of linguistic quality. Sum-QE achieves very high correlations with human ratings, showing the ability of BERT to model linguistic qualities that relate to both text content and form."
],
[
"Summarization evaluation metrics like Pyramid BIBREF5 and ROUGE BIBREF3, BIBREF2 are recall-oriented; they basically measure the content from a model (reference) summary that is preserved in peer (system generated) summaries. Pyramid requires substantial human effort, even in its more recent versions that involve the use of word embeddings BIBREF8 and a lightweight crowdsourcing scheme BIBREF9. ROUGE is the most commonly used evaluation metric BIBREF10, BIBREF11, BIBREF12. Inspired by BLEU BIBREF4, it relies on common $n$-grams or subsequences between peer and model summaries. Many ROUGE versions are available, but it remains hard to decide which one to use BIBREF13. Being recall-based, ROUGE correlates well with Pyramid but poorly with linguistic qualities of summaries. BIBREF14 proposed a regression model for measuring summary quality without references. The scores of their model correlate well with Pyramid and Responsiveness, but text quality is only addressed indirectly.",
"Quality Estimation is well established in MT BIBREF15, BIBREF0, BIBREF1, BIBREF16, BIBREF17. QE methods provide a quality indicator for translation output at run-time without relying on human references, typically needed by MT evaluation metrics BIBREF4, BIBREF18. QE models for MT make use of large post-edited datasets, and apply machine learning methods to predict post-editing effort scores and quality (good/bad) labels.",
"We apply QE to summarization, focusing on linguistic qualities that reflect the readability and fluency of the generated texts. Since no post-edited datasets – like the ones used in MT – are available for summarization, we use instead the ratings assigned by human annotators with respect to a set of linguistic quality criteria. Our proposed models achieve high correlation with human judgments, showing that it is possible to estimate summary quality without human references."
],
[
"We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question. DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems).",
"The submitted summaries were manually evaluated in terms of content preservation using the Pyramid score, and according to five linguistic quality criteria ($\\mathcal {Q}1, \\dots , \\mathcal {Q}5$), described in Figure FIGREF2, that do not involve comparison with a model summary. Annotators assigned scores on a five-point scale, with 1 and 5 indicating that the summary is bad or good with respect to a specific $\\mathcal {Q}$. The overall score for a contestant with respect to a specific $\\mathcal {Q}$ is the average of the manual scores assigned to the summaries generated by the contestant. Note that the DUC-04 shared task involved seven $\\mathcal {Q}$s, but some of them were found to be highly overlapping and were grouped into five in subsequent years BIBREF20. We address these five criteria and use DUC data from 2005 onwards in our experiments."
],
[
"In Sum-QE, each peer summary is converted into a sequence of token embeddings, consumed by an encoder $\\mathcal {E}$ to produce a (dense vector) summary representation $h$. Then, a regressor $\\mathcal {R}$ predicts a quality score $S_{\\mathcal {Q}}$ as an affine transformation of $h$:",
"Non-linear regression could also be used, but a linear (affine) $\\mathcal {R}$ already performs well. We use BERT as our main encoder and fine-tune it in three ways, which leads to three versions of Sum-QE."
],
[
"The first version of Sum-QE uses five separate estimators, one per quality score, each having its own encoder $\\mathcal {E}_i$ (a separate BERT instance generating $h_i$) and regressor $\\mathcal {R}_i$ (a separate linear regression layer on top of the corresponding BERT instance):"
],
[
"The second version of Sum-QE uses one estimator to predict all five quality scores at once, from a single encoding $h$ of the summary, produced by a single BERT instance. The intuition is that $\\mathcal {E}$ will learn to create richer representations so that $\\mathcal {R}$ (an affine transformation of $h$ with 5 outputs) will be able to predict all quality scores:",
"where $\\mathcal {R}(h)[i]$ is the $i$-th element of the vector returned by $\\mathcal {R}$."
],
[
"The third version of Sum-QE is similar to BERT-FT-M-1, but we now use five different linear (affine) regressors, one per quality score:",
"Although BERT-FT-M-5 is mathematically equivalent to BERT-FT-M-1, in practice these two versions of Sum-QE produce different results because of implementation details related to how the losses of the regressors (five or one) are combined."
],
[
"This is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5)."
],
[
"This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences."
],
[
"For a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter."
],
[
"BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:",
"where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary."
],
[
"To evaluate our methods for a particular $\\mathcal {Q}$, we calculate the average of the predicted scores for the summaries of each particular contestant, and the average of the corresponding manual scores assigned to the contestant's summaries. We measure the correlation between the two (predicted vs. manual) across all contestants using Spearman's $\\rho $, Kendall's $\\tau $ and Pearson's $r$.",
"We train and test the Sum-QE and BiGRU-ATT versions using a 3-fold procedure. In each fold, we train on two datasets (e.g., DUC-05, DUC-06) and test on the third (e.g., DUC-07). We follow the same procedure with the three BiGRU-based models. Hyper-perameters are tuned on a held out subset from the training set of each fold."
],
[
"Table TABREF23 shows Spearman's $\\rho $, Kendall's $\\tau $ and Pearson's $r$ for all datasets and models. The three fine-tuned BERT versions clearly outperform all other methods. Multi-task versions seem to perform better than single-task ones in most cases. Especially for $\\mathcal {Q}4$ and $\\mathcal {Q}5$, which are highly correlated, the multi-task BERT versions achieve the best overall results. BiGRU-ATT also benefits from multi-task learning.",
"The correlation of Sum-QE with human judgments is high or very high BIBREF23 for all $\\mathcal {Q}$s in all datasets, apart from $\\mathcal {Q}2$ in DUC-05 where it is only moderate. Manual scores for $\\mathcal {Q}2$ in DUC-05 are the highest among all $\\mathcal {Q}$s and years (between 4 and 5) and with the smallest standard deviation, as shown in Table TABREF24. Differences among systems are thus small in this respect, and although Sum-QE predicts scores in this range, it struggles to put them in the correct order, as illustrated in Figure FIGREF26.",
"BEST-ROUGE has a negative correlation with the ground-truth scores for $\\mathcal {Q}$2 since it does not account for repetitions. The BiGRU-based models also reach their lowest performance on $\\mathcal {Q}$2 in DUC-05. A possible reason for the higher relative performance of the BERT-based models, which achieve a moderate positive correlation, is that BiGRU captures long-distance relations less effectively than BERT, which utilizes Transformers BIBREF24 and has a larger receptive field. A possible improvement would be a stacked BiGRU, since the states of higher stack layers have a larger receptive field as well.",
"The BERT multi-task versions perform better with highly correlated qualities like $\\mathcal {Q}4$ and $\\mathcal {Q}5$ (as illustrated in Figures 2 to 4 in the supplementary material). However, there is not a clear winner among them. Mathematical equivalence does not lead to deterministic results, especially when random initialization and stochastic learning algorithms are involved. An in-depth exploration of this point would involve further investigation, which will be part of future work."
],
[
"We propose a novel Quality Estimation model for summarization which does not require human references to estimate the quality of automatically produced summaries. Sum-QE successfully predicts qualitative aspects of summaries that recall-oriented evaluation metrics fail to approximate. Leveraging powerful BERT representations, it achieves high correlations with human scores for most linguistic qualities rated, on three different datasets. Future work involves extending the Sum-QE model to capture content-related aspects, either in combination with existing evaluation metrics (like Pyramid and ROUGE) or, preferably, by identifying important information in the original text and modelling its preservation in the proposed summaries. This would preserve Sum-QE's independence from human references, a property of central importance in real-life usage scenarios and system development settings.",
"The datasets used in our experiments come from the NIST DUC shared tasks which comprise newswire articles. We believe that Sum-QE could be easily applied to other domains. A small amount of annotated data would be needed for fine-tuning – especially in domains with specialized vocabulary (e.g., biomedical) – but the model could also be used out of the box. A concrete estimation of performance in this setting will be part of future work. Also, the model could serve to estimate linguistic qualities other than the ones in the DUC dataset with mininum effort.",
"Finally, Sum-QE could serve to assess the quality of other types of texts, not only summaries. It could thus be applied to other text generation tasks, such as natural language generation and sentence compression."
],
[
"We would like to thank the anonymous reviewers for their helpful feedback on this work. The work has been partly supported by the Research Center of the Athens University of Economics and Business, and by the French National Research Agency under project ANR-16-CE33-0013."
]
],
"section_name": [
"Introduction",
"Related Work",
"Datasets",
"Methods ::: The Sum-QE Model",
"Methods ::: The Sum-QE Model ::: Single-task (BERT-FT-S-1):",
"Methods ::: The Sum-QE Model ::: Multi-task with one regressor (BERT-FT-M-1):",
"Methods ::: The Sum-QE Model ::: Multi-task with 5 regressors (BERT-FT-M-5):",
"Methods ::: Baselines ::: BiGRU s with attention:",
"Methods ::: Baselines ::: ROUGE:",
"Methods ::: Baselines ::: Language model (LM):",
"Methods ::: Baselines ::: Next sentence prediction:",
"Experiments",
"Results",
"Conclusion and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"8498b608303a9387fdac2f1ac707b9a33a37fd3a"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Spearman’s ρ, Kendall’s τ and Pearson’s r correlations on DUC-05, DUC-06 and DUC-07 for Q1–Q5. BEST-ROUGE refers to the version that achieved best correlations and is different across years."
],
"extractive_spans": [],
"free_form_answer": "High correlation results range from 0.472 to 0.936",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Spearman’s ρ, Kendall’s τ and Pearson’s r correlations on DUC-05, DUC-06 and DUC-07 for Q1–Q5. BEST-ROUGE refers to the version that achieved best correlations and is different across years."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2e17f86d69a8f8863a117ba13065509831282ea0"
],
"answer": [
{
"evidence": [
"We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question. DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems)."
],
"extractive_spans": [
"datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question. DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2b5d1eaaa4c82c9192fe7605e823228ecbb0f67b",
"aa8614bc1fc6b2d85516f91f8ae65b4aab7542e1"
],
"answer": [
{
"evidence": [
"Methods ::: Baselines ::: BiGRU s with attention:",
"This is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).",
"Methods ::: Baselines ::: ROUGE:",
"This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.",
"Methods ::: Baselines ::: Language model (LM):",
"For a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.",
"Methods ::: Baselines ::: Next sentence prediction:",
"BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:",
"where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary."
],
"extractive_spans": [
"BiGRU s with attention",
"ROUGE",
"Language model (LM)",
"Next sentence prediction"
],
"free_form_answer": "",
"highlighted_evidence": [
"Methods ::: Baselines ::: BiGRU s with attention:\nThis is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).\n\nMethods ::: Baselines ::: ROUGE:\nThis baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.\n\nMethods ::: Baselines ::: Language model (LM):\nFor a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.\n\nMethods ::: Baselines ::: Next sentence prediction:\nBERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:\n\nwhere $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Methods ::: Baselines ::: BiGRU s with attention:",
"This is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).",
"Methods ::: Baselines ::: ROUGE:",
"This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.",
"Methods ::: Baselines ::: Language model (LM):",
"For a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.",
"Methods ::: Baselines ::: Next sentence prediction:",
"BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:",
"where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary."
],
"extractive_spans": [],
"free_form_answer": "BiGRUs with attention, ROUGE, Language model, and next sentence prediction ",
"highlighted_evidence": [
"Methods ::: Baselines ::: BiGRU s with attention:\nThis is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).\n\nMethods ::: Baselines ::: ROUGE:\nThis baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.\n\nMethods ::: Baselines ::: Language model (LM):\nFor a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.\n\nMethods ::: Baselines ::: Next sentence prediction:\nBERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:\n\nwhere $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"d02df1fd3e9510f8fad08f27dd84562f9eb24662"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1: SUM-QE rates summaries with respect to five linguistic qualities (Dang, 2006a). The datasets we use for tuning and evaluation contain human assigned scores (from 1 to 5) for each of these categories."
],
"extractive_spans": [],
"free_form_answer": "Grammaticality, non-redundancy, referential clarity, focus, structure & coherence",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: SUM-QE rates summaries with respect to five linguistic qualities (Dang, 2006a). The datasets we use for tuning and evaluation contain human assigned scores (from 1 to 5) for each of these categories."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What are their correlation results?",
"What dataset do they use?",
"What simpler models do they look at?",
"What linguistic quality aspects are addressed?"
],
"question_id": [
"ff28d34d1aaa57e7ad553dba09fc924dc21dd728",
"ae8354e67978b7c333094c36bf9d561ca0c2d286",
"02348ab62957cb82067c589769c14d798b1ceec7",
"3748787379b3a7d222c3a6254def3f5bfb93a60e"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: SUM-QE rates summaries with respect to five linguistic qualities (Dang, 2006a). The datasets we use for tuning and evaluation contain human assigned scores (from 1 to 5) for each of these categories.",
"Figure 2: Illustration of different flavors of the investigated neural QE methods. An encoder (E) converts the summary to a dense vector representation h. A regressor Ri predicts a quality score SQi using h. E is either a BiGRU with attention (BiGRU-ATT) or BERT (SUM-QE).R has three flavors, one single-task (a) and two multi-task (b, c).",
"Table 1: Spearman’s ρ, Kendall’s τ and Pearson’s r correlations on DUC-05, DUC-06 and DUC-07 for Q1–Q5. BEST-ROUGE refers to the version that achieved best correlations and is different across years.",
"Table 2: Mean manual scores (± standard deviation) for each Q across datasets. Q2 is the hardest to predict because it has the highest scores and the lowest standard deviation.",
"Figure 3: Comparison of the mean gold scores assigned for Q2 and Q3 to each of the 32 systems in the DUC05 dataset, and the corresponding scores predicted by SUM-QE. Scores range from 1 to 5. The systems are sorted in descending order according to the gold scores. SUM-QE makes more accurate predictions forQ2 than for Q3, but struggles to put the systems in the correct order."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Figure3-1.png"
]
} | [
"What are their correlation results?",
"What simpler models do they look at?",
"What linguistic quality aspects are addressed?"
] | [
[
"1909.00578-4-Table1-1.png"
],
[
"1909.00578-Methods ::: Baselines ::: BiGRU s with attention:-0",
"1909.00578-Methods ::: Baselines ::: Next sentence prediction:-0",
"1909.00578-Methods ::: Baselines ::: Next sentence prediction:-1",
"1909.00578-Methods ::: Baselines ::: Language model (LM):-0",
"1909.00578-Methods ::: Baselines ::: ROUGE:-0"
],
[
"1909.00578-1-Figure1-1.png"
]
] | [
"High correlation results range from 0.472 to 0.936",
"BiGRUs with attention, ROUGE, Language model, and next sentence prediction ",
"Grammaticality, non-redundancy, referential clarity, focus, structure & coherence"
] | 54 |
1910.11471 | Machine Translation from Natural Language to Code using Long-Short Term Memory | Making computer programming languages more understandable and easier for humans is a longstanding problem. From assembly language to present-day object-oriented programming, concepts have been introduced to make programming easier so that a programmer can focus on the logic and the architecture rather than on the code and language itself. To go a step further in this journey of removing the human-computer language barrier, this paper proposes a machine learning approach using a Recurrent Neural Network (RNN) and Long-Short Term Memory (LSTM) to convert human language into programming language code. The programmer writes expressions for code in layman's language, and the machine learning model translates them into the targeted programming language. The proposed approach yields results with 74.40% accuracy. This can be further improved by incorporating additional techniques, which are also discussed in this paper. | {
"paragraphs": [
[
"Removing computer-human language barrier is an inevitable advancement researchers are thriving to achieve for decades. One of the stages of this advancement will be coding through natural human language instead of traditional programming language. On naturalness of computer programming D. Knuth said, “Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.”BIBREF0. Unfortunately, learning programming language is still necessary to instruct it. Researchers and developers are working to overcome this human-machine language barrier. Multiple branches exists to solve this challenge (i.e. inter-conversion of different programming language to have universally connected programming languages). Automatic code generation through natural language is not a new concept in computer science studies. However, it is difficult to create such tool due to these following three reasons–",
"Programming languages are diverse",
"An individual person expresses logical statements differently than other",
"Natural Language Processing (NLP) of programming statements is challenging since both human and programming language evolve over time",
"In this paper, a neural approach to translate pseudo-code or algorithm like human language expression into programming language code is proposed."
],
[
"Code repositories (i.e. Git, SVN) flourished in the last decade producing big data of code allowing data scientists to perform machine learning on these data. In 2017, Allamanis M et al. published a survey in which they presented the state-of-the-art of the research areas where machine learning is changing the way programmers code during software engineering and development process BIBREF1. This paper discusses what are the restricting factors of developing such text-to-code conversion method and what problems need to be solved–"
],
[
"According to the sources, there are more than a thousand actively maintained programming languages, which signifies the diversity of these language . These languages were created to achieve different purpose and use different syntaxes. Low-level languages such as assembly languages are easier to express in human language because of the low or no abstraction at all whereas high-level, or Object-Oriented Programing (OOP) languages are more diversified in syntax and expression, which is challenging to bring into a unified human language structure. Nonetheless, portability and transparency between different programming languages also remains a challenge and an open research area. George D. et al. tried to overcome this problem through XML mapping BIBREF2. They tried to convert codes from C++ to Java using XML mapping as an intermediate language. However, the authors encountered challenges to support different features of both languages."
],
[
"One of the motivations behind this paper is - as long as it is about programming, there is a finite and small set of expression which is used in human vocabulary. For instance, programmers express a for-loop in a very few specific ways BIBREF3. Variable declaration and value assignment expressions are also limited in nature. Although all codes are executable, human representation through text may not due to the semantic brittleness of code. Since high-level languages have a wide range of syntax, programmers use different linguistic expressions to explain those. For instance, small changes like swapping function arguments can significantly change the meaning of the code. Hence the challenge remains in processing human language to understand it properly which brings us to the next problem-"
],
[
"Although there is a finite set of expressions for each programming statements, it is a challenge to extract information from the statements of the code accurately. Semantic analysis of linguistic expression plays an important role in this information extraction. For instance, in case of a loop, what is the initial value? What is the step value? When will the loop terminate?",
"Mihalcea R. et al. has achieved a variable success rate of 70-80% in producing code just from the problem statement expressed in human natural language BIBREF3. They focused solely on the detection of step and loops in their research. Another research group from MIT, Lei et al. use a semantic learning model for text to detect the inputs. The model produces a parser in C++ which can successfully parse more than 70% of the textual description of input BIBREF4. The test dataset and model was initially tested and targeted against ACM-ICPC participantsínputs which contains diverse and sometimes complex input instructions.",
"A recent survey from Allamanis M. et al. presented the state-of-the-art on the area of naturalness of programming BIBREF1. A number of research works have been conducted on text-to-code or code-to-text area in recent years. In 2015, Oda et al. proposed a way to translate each line of Python code into natural language pseudocode using Statistical Machine Learning Technique (SMT) framework BIBREF5 was used. This translation framework was able to - it can successfully translate the code to natural language pseudo coded text in both English and Japanese. In the same year, Chris Q. et al. mapped natural language with simple if-this-then-that logical rules BIBREF6. Tihomir G. and Viktor K. developed an Integrated Development Environment (IDE) integrated code assistant tool anyCode for Java which can search, import and call function just by typing desired functionality through text BIBREF7. They have used model and mapping framework between function signatures and utilized resources like WordNet, Java Corpus, relational mapping to process text online and offline.",
"Recently in 2017, P. Yin and G. Neubig proposed a semantic parser which generates code through its neural model BIBREF8. They formulated a grammatical model which works as a skeleton for neural network training. The grammatical rules are defined based on the various generalized structure of the statements in the programming language."
],
[
"The use of machine learning techniques such as SMT proved to be at most 75% successful in converting human text to executable code. BIBREF9. A programming language is just like a language with less vocabulary compared to a typical human language. For instance, the code vocabulary of the training dataset was 8814 (including variable, function, class names), whereas the English vocabulary to express the same code was 13659 in total. Here, programming language is considered just like another human language and widely used SMT techniques have been applied."
],
[
"SMT techniques are widely used in Natural Language Processing (NLP). SMT plays a significant role in translation from one language to another, especially in lexical and grammatical rule extraction. In SMT, bilingual grammatical structures are automatically formed by statistical approaches instead of explicitly providing a grammatical model. This reduces months and years of work which requires significant collaboration between bi-lingual linguistics. Here, a neural network based machine translation model is used to translate regular text into programming code."
],
[
"SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language."
],
[
"To train the neural model, the texts should be converted to a computational entity. To do that, two separate vocabulary files are created - one for the source texts and another for the code. Vocabulary generation is done by tokenization of words. Afterwards, the words are put into their contextual vector space using the popular word2vec BIBREF10 method to make the words computational."
],
[
"In order to train the translation model between text-to-code an open source Neural Machine Translation (NMT) - OpenNMT implementation is utilized BIBREF11. PyTorch is used as Neural Network coding framework. For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form a LSTM model. LSTM is typically used in seq2seq translation.",
"In Fig. FIGREF13, the neural model architecture is demonstrated. The diagram shows how it takes the source and target text as input and uses it for training. Vector representation of tokenized source and target text are fed into the model. Each token of the source text is passed into an encoder cell. Target text tokens are passed into a decoder cell. Encoder cells are part of the encoder RNN layer and decoder cells are part of the decoder RNN layer. End of the input sequence is marked by a $<$eos$>$ token. Upon getting the $<$eos$>$ token, the final cell state of encoder layer initiate the output layer sequence. At each target cell state, attention is applied with the encoder RNN state and combined with the current hidden state to produce the prediction of next target token. This predictions are then fed back to the target RNN. Attention mechanism helps us to overcome the fixed length restriction of encoder-decoder sequence and allows us to process variable length between input and output sequence. Attention uses encoder state and pass it to the decoder cell to give particular attention to the start of an output layer sequence. The encoder uses an initial state to tell the decoder what it is supposed to generate. Effectively, the decoder learns to generate target tokens, conditioned on the input sequence. Sigmoidal optimization is used to optimize the prediction."
],
[
"Training parallel corpus had 18805 lines of annotated code in it. The training model is executed several times with different training parameters. During the final training process, 500 validation data is used to generate the recurrent neural model, which is 3% of the training data. We run the training with epoch value of 10 with a batch size of 64. After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40% (Fig. FIGREF17).",
"Although the generated code is incoherent and often predict wrong code token, this is expected because of the limited amount of training data. LSTM generally requires a more extensive set of data (100k+ in such scenario) to build a more accurate model. The incoherence can be resolved by incorporating coding syntax tree model in future. For instance–",
"\"define the method tzname with 2 arguments: self and dt.\"",
"is translated into–",
"def __init__ ( self , regex ) :.",
"The translator is successfully generating the whole codeline automatically but missing the noun part (parameter and function name) part of the syntax."
],
[
"The main advantage of translating to a programming language is - it has a concrete and strict lexical and grammatical structure which human languages lack. The aim of this paper was to make the text-to-code framework work for general purpose programming language, primarily Python. In later phase, phrase-based word embedding can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, Abstract Syntax Tree(AST) can be beneficial.",
"The contribution of this research is a machine learning model which can turn the human expression into coding expressions. This paper also discusses available methods which convert natural language to programming language successfully in fixed or tightly bounded linguistic paradigm. Approaching this problem using machine learning will give us the opportunity to explore the possibility of unified programming interface as well in the future."
],
[
"We would like to thank Dr. Khandaker Tabin Hasan, Head of the Depertment of Computer Science, American International University-Bangladesh for his inspiration and encouragement in all of our research works. Also, thanks to Future Technology Conference - 2019 committee for partially supporting us to join the conference and one of our colleague - Faheem Abrar, Software Developer for his thorough review and comments on this research work and supporting us by providing fund."
]
],
"section_name": [
"Introduction",
"Problem Description",
"Problem Description ::: Programming Language Diversity",
"Problem Description ::: Human Language Factor",
"Problem Description ::: NLP of statements",
"Proposed Methodology",
"Proposed Methodology ::: Statistical Machine Translation",
"Proposed Methodology ::: Statistical Machine Translation ::: Data Preparation",
"Proposed Methodology ::: Statistical Machine Translation ::: Vocabulary Generation",
"Proposed Methodology ::: Statistical Machine Translation ::: Neural Model Training",
"Result Analysis",
"Conclusion & Future Works",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"712162ee41fcd33e17f5974b52db5ef08caa28ef",
"ca3b72709cbea8e97d402eef60ef949c8818ae6f"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Although the generated code is incoherent and often predict wrong code token, this is expected because of the limited amount of training data. LSTM generally requires a more extensive set of data (100k+ in such scenario) to build a more accurate model. The incoherence can be resolved by incorporating coding syntax tree model in future. For instance–",
"\"define the method tzname with 2 arguments: self and dt.\"",
"is translated into–",
"def __init__ ( self , regex ) :.",
"The translator is successfully generating the whole codeline automatically but missing the noun part (parameter and function name) part of the syntax."
],
"extractive_spans": [
"incorporating coding syntax tree model"
],
"free_form_answer": "",
"highlighted_evidence": [
"Although the generated code is incoherent and often predict wrong code token, this is expected because of the limited amount of training data. LSTM generally requires a more extensive set of data (100k+ in such scenario) to build a more accurate model. The incoherence can be resolved by incorporating coding syntax tree model in future. For instance–\n\n\"define the method tzname with 2 arguments: self and dt.\"\n\nis translated into–\n\ndef __init__ ( self , regex ) :.\n\nThe translator is successfully generating the whole codeline automatically but missing the noun part (parameter and function name) part of the syntax."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"4fe5615cf767f286711731cd0059c208e82a0974",
"e21d60356450f2765a322002352ee1b8ceb50253"
],
"answer": [
{
"evidence": [
"SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language."
],
"extractive_spans": [],
"free_form_answer": "A parallel corpus where the source is an English expression of code and the target is Python code.",
"highlighted_evidence": [
"SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language."
],
"extractive_spans": [
" text-code parallel corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"f669e556321ae49a72f0b9be6c4b7831e37edf1d"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c499a5ca56894e542c2c4eabe925b81a2ea4618e"
],
"answer": [
{
"evidence": [
"In order to train the translation model between text-to-code an open source Neural Machine Translation (NMT) - OpenNMT implementation is utilized BIBREF11. PyTorch is used as Neural Network coding framework. For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form a LSTM model. LSTM is typically used in seq2seq translation."
],
"extractive_spans": [
"seq2seq translation"
],
"free_form_answer": "",
"highlighted_evidence": [
"For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form a LSTM model. LSTM is typically used in seq2seq translation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"469be6a5ce7968933dd77a4449dd88ee01d3d579"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2ef1c3976eec3f9d17efac630b098f10d86931e4"
],
"answer": [
{
"evidence": [
"The main advantage of translating to a programming language is - it has a concrete and strict lexical and grammatical structure which human languages lack. The aim of this paper was to make the text-to-code framework work for general purpose programming language, primarily Python. In later phase, phrase-based word embedding can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, Abstract Syntax Tree(AST) can be beneficial."
],
"extractive_spans": [
"phrase-based word embedding",
"Abstract Syntax Tree(AST)"
],
"free_form_answer": "",
"highlighted_evidence": [
"In later phase, phrase-based word embedding can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, Abstract Syntax Tree(AST) can be beneficial."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3aa253475c66a97de49bc647af6be28b75a92be4"
],
"answer": [
{
"evidence": [
"SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language."
],
"extractive_spans": [
"Python"
],
"free_form_answer": "",
"highlighted_evidence": [
"In target data, the code is written in Python programming language."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d07da696fb0d6e94d658c0950e239bb87edb1633"
],
"answer": [
{
"evidence": [
"Training parallel corpus had 18805 lines of annotated code in it. The training model is executed several times with different training parameters. During the final training process, 500 validation data is used to generate the recurrent neural model, which is 3% of the training data. We run the training with epoch value of 10 with a batch size of 64. After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40% (Fig. FIGREF17)."
],
"extractive_spans": [
"validation data"
],
"free_form_answer": "",
"highlighted_evidence": [
"During the final training process, 500 validation data is used to generate the recurrent neural model, which is 3% of the training data.",
"After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40%"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What additional techniques are incorporated?",
"What dataset do they use?",
"Do they compare to other models?",
"What is the architecture of the system?",
"How long are expressions in layman's language?",
"What additional techniques could be incorporated to further improve accuracy?",
"What programming language is target language?",
"What dataset is used to measure accuracy?"
],
"question_id": [
"db9021ddd4593f6fadf172710468e2fdcea99674",
"8ea4bd4c1d8a466da386d16e4844ea932c44a412",
"92240eeab107a4f636705b88f00cefc4f0782846",
"4196d329061f5a9d147e1e77aeed6a6bd9b35d18",
"a37e4a21ba98b0259c36deca0d298194fa611d2f",
"321429282557e79061fe2fe02a9467f3d0118cdd",
"891cab2e41d6ba962778bda297592c916b432226",
"1eeabfde99594b8d9c6a007f50b97f7f527b0a17"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"computer vision",
"computer vision",
"computer vision",
"computer vision"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Text-Code bi-lingual corpus",
"Fig. 2. Neural training model architecture of Text-To-Code",
"Fig. 3. Accuracy gain in progress of training the RNN"
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png"
]
} | [
"What dataset do they use?"
] | [
[
"1910.11471-Proposed Methodology ::: Statistical Machine Translation ::: Data Preparation-0"
]
] | [
"A parallel corpus where the source is an English expression of code and the target is Python code."
] | 56 |
1910.09399 | A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis | Text-to-image synthesis refers to computational methods which translate human written textual descriptions, in the form of keywords or sentences, into images with similar semantic meaning to the text. In earlier research, image synthesis relied mainly on word to image correlation analysis combined with supervised methods to find the best alignment of the visual content matching the text. Recent progress in deep learning (DL) has brought a new set of unsupervised deep learning methods, particularly deep generative models which are able to generate realistic visual images using suitably trained neural network models. In this paper, we review the most recent developments in the text-to-image synthesis research domain. Our survey first introduces image synthesis and its challenges, and then reviews key concepts such as generative adversarial networks (GANs) and deep convolutional encoder-decoder neural networks (DCNN). After that, we propose a taxonomy to summarize GAN based text-to-image synthesis into four major categories: Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GANs. We elaborate the main objective of each group, and further review typical GAN architectures in each group. The taxonomy and the review outline the techniques and the evolution of different approaches, and eventually provide a clear roadmap to summarize the list of contemporaneous solutions that utilize GANs and DCNNs to generate enthralling results in categories such as human faces, birds, flowers, room interiors, object reconstruction from edge maps (games) etc. The survey will conclude with a comparison of the proposed solutions, challenges that remain unresolved, and future developments in the text-to-image synthesis domain. | {
"paragraphs": [
[
"“ (GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” (2016)",
"– Yann LeCun",
"A picture is worth a thousand words! While written text provide efficient, effective, and concise ways for communication, visual content, such as images, is a more comprehensive, accurate, and intelligible method of information sharing and understanding. Generation of images from text descriptions, i.e. text-to-image synthesis, is a complex computer vision and machine learning problem that has seen great progress over recent years. Automatic image generation from natural language may allow users to describe visual elements through visually-rich text descriptions. The ability to do so effectively is highly desirable as it could be used in artificial intelligence applications such as computer-aided design, image editing BIBREF0, BIBREF1, game engines for the development of the next generation of video gamesBIBREF2, and pictorial art generation BIBREF3."
],
[
"In the early stages of research, text-to-image synthesis was mainly carried out through a search and supervised learning combined process BIBREF4, as shown in Figure FIGREF4. In order to connect text descriptions to images, one could use correlation between keywords (or keyphrase) & images that identifies informative and “picturable” text units; then, these units would search for the most likely image parts conditioned on the text, eventually optimizing the picture layout conditioned on both the text and the image parts. Such methods often integrated multiple artificial intelligence key components, including natural language processing, computer vision, computer graphics, and machine learning.",
"The major limitation of the traditional learning based text-to-image synthesis approaches is that they lack the ability to generate new image content; they can only change the characteristics of the given/training images. Alternatively, research in generative models has advanced significantly and delivers solutions to learn from training images and produce new visual content. For example, Attribute2Image BIBREF5 models each image as a composite of foreground and background. In addition, a layered generative model with disentangled latent variables is learned, using a variational auto-encoder, to generate visual content. Because the learning is customized/conditioned by given attributes, the generative models of Attribute2Image can generate images with respect to different attributes, such as gender, hair color, age, etc., as shown in Figure FIGREF5."
],
[
"Although generative model based text-to-image synthesis provides much more realistic image synthesis results, the image generation is still conditioned by the limited attributes. In recent years, several papers have been published on the subject of text-to-image synthesis. Most of the contributions from these papers rely on multimodal learning approaches that include generative adversarial networks and deep convolutional decoder networks as their main drivers to generate entrancing images from text BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11.",
"First introduced by Ian Goodfellow et al. BIBREF9, generative adversarial networks (GANs) consist of two neural networks paired with a discriminator and a generator. These two models compete with one another, with the generator attempting to produce synthetic/fake samples that will fool the discriminator and the discriminator attempting to differentiate between real (genuine) and synthetic samples. Because GANs' adversarial training aims to cause generators to produce images similar to the real (training) images, GANs can naturally be used to generate synthetic images (image synthesis), and this process can even be customized further by using text descriptions to specify the types of images to generate, as shown in Figure FIGREF6.",
"Much like text-to-speech and speech-to-text conversion, there exists a wide variety of problems that text-to-image synthesis could solve in the computer vision field specifically BIBREF8, BIBREF12. Nowadays, researchers are attempting to solve a plethora of computer vision problems with the aid of deep convolutional networks, generative adversarial networks, and a combination of multiple methods, often called multimodal learning methods BIBREF8. For simplicity, multiple learning methods will be referred to as multimodal learning hereafter BIBREF13. Researchers often describe multimodal learning as a method that incorporates characteristics from several methods, algorithms, and ideas. This can include ideas from two or more learning approaches in order to create a robust implementation to solve an uncommon problem or improve a solution BIBREF8, BIBREF14, BIBREF15, BIBREF16, BIBREF17.",
"black In this survey, we focus primarily on reviewing recent works that aim to solve the challenge of text-to-image synthesis using generative adversarial networks (GANs). In order to provide a clear roadmap, we propose a taxonomy to summarize reviewed GANs into four major categories. Our review will elaborate the motivations of methods in each category, analyze typical models, their network architectures, and possible drawbacks for further improvement. The visual abstract of the survey and the list of reviewed GAN frameworks is shown in Figure FIGREF8.",
"black The remainder of the survey is organized as follows. Section 2 presents a brief summary of existing works on subjects similar to that of this paper and highlights the key distinctions making ours unique. Section 3 gives a short introduction to GANs and some preliminary concepts related to image generation, as they are the engines that make text-to-image synthesis possible and are essential building blocks to achieve photo-realistic images from text descriptions. Section 4 proposes a taxonomy to summarize GAN based text-to-image synthesis, discusses models and architectures of novel works focused solely on text-to-image synthesis. This section will also draw key contributions from these works in relation to their applications. Section 5 reviews GAN based text-to-image synthesis benchmarks, performance metrics, and comparisons, including a simple review of GANs for other applications. In section 6, we conclude with a brief summary and outline ideas for future interesting developments in the field of text-to-image synthesis."
],
[
"With the growth and success of GANs, deep convolutional decoder networks, and multimodal learning methods, these techniques were some of the first procedures which aimed to solve the challenge of image synthesis. Many engineers and scientists in computer vision and AI have contributed through extensive studies and experiments, with numerous proposals and publications detailing their contributions. Because GANs, introduced by BIBREF9, are emerging research topics, their practical applications to image synthesis are still in their infancy. Recently, many new GAN architectures and designs have been proposed to use GANs for different applications, e.g. using GANs to generate sentimental texts BIBREF18, or using GANs to transform natural images into cartoons BIBREF19.",
"Although GANs are becoming increasingly popular, very few survey papers currently exist to summarize and outline contemporaneous technical innovations and contributions of different GAN architectures BIBREF20, BIBREF21. Survey papers specifically attuned to analyzing different contributions to text-to-image synthesis using GANs are even more scarce. We have thus found two surveys BIBREF6, BIBREF7 on image synthesis using GANs, which are the two most closely related publications to our survey objective. In the following paragraphs, we briefly summarize each of these surveys and point out how our objectives differ from theirs.",
"In BIBREF6, the authors provide an overview of image synthesis using GANs. In this survey, the authors discuss the motivations for research on image synthesis and introduce some background information on the history of GANs, including a section dedicated to core concepts of GANs, namely generators, discriminators, and the min-max game analogy, and some enhancements to the original GAN model, such as conditional GANs, addition of variational auto-encoders, etc.. In this survey, we will carry out a similar review of the background knowledge because the understanding of these preliminary concepts is paramount for the rest of the paper. Three types of approaches for image generation are reviewed, including direct methods (single generator and discriminator), hierarchical methods (two or more generator-discriminator pairs, each with a different goal), and iterative methods (each generator-discriminator pair generates a gradually higher-resolution image). Following the introduction, BIBREF6 discusses methods for text-to-image and image-to-image synthesis, respectively, and also describes several evaluation metrics for synthetic images, including inception scores and Frechet Inception Distance (FID), and explains the significance of the discriminators acting as learned loss functions as opposed to fixed loss functions.",
"Different from the above survey, which has a relatively broad scope in GANs, our objective is heavily focused on text-to-image synthesis. Although this topic, text-to-image synthesis, has indeed been covered in BIBREF6, they did so in a much less detailed fashion, mostly listing the many different works in a time-sequential order. In comparison, we will review several representative methods in the field and outline their models and contributions in detail.",
"Similarly to BIBREF6, the second survey paper BIBREF7 begins with a standard introduction addressing the motivation of image synthesis and the challenges it presents followed by a section dedicated to core concepts of GANs and enhancements to the original GAN model. In addition, the paper covers the review of two types of applications: (1) unconstrained applications of image synthesis such as super-resolution, image inpainting, etc., and (2) constrained image synthesis applications, namely image-to-image, text-to-image, and sketch-to image, and also discusses image and video editing using GANs. Again, the scope of this paper is intrinsically comprehensive, while we focus specifically on text-to-image and go into more detail regarding the contributions of novel state-of-the-art models.",
"Other surveys have been published on related matters, mainly related to the advancements and applications of GANs BIBREF22, BIBREF23, but we have not found any prior works which focus specifically on text-to-image synthesis using GANs. To our knowledge, this is the first paper to do so.",
"black"
],
[
"In this section, we first introduce preliminary knowledge of GANs and one of its commonly used variants, conditional GAN (i.e. cGAN), which is the building block for many GAN based text-to-image synthesis models. After that, we briefly separate GAN based text-to-image synthesis into two types, Simple GAN frameworks vs. Advanced GAN frameworks, and discuss why advanced GAN architecture for image synthesis.",
"black Notice that the simple vs. advanced GAN framework separation is rather too brief, our taxonomy in the next section will propose a taxonomy to summarize advanced GAN frameworks into four categories, based on their objective and designs."
],
[
"Before moving on to a discussion and analysis of works applying GANs for text-to-image synthesis, there are some preliminary concepts, enhancements of GANs, datasets, and evaluation metrics that are present in some of the works described in the next section and are thus worth introducing.",
"As stated previously, GANs were introduced by Ian Goodfellow et al. BIBREF9 in 2014, and consist of two deep neural networks, a generator and a discriminator, which are trained independently with conflicting goals: The generator aims to generate samples closely related to the original data distribution and fool the discriminator, while the discriminator aims to distinguish between samples from the generator model and samples from the true data distribution by calculating the probability of the sample coming from either source. A conceptual view of the generative adversarial network (GAN) architecture is shown in Figure FIGREF11.",
"The training of GANs is an iterative process that, with each iteration, updates the generator and the discriminator with the goal of each defeating the other. leading each model to become increasingly adept at its specific task until a threshold is reached. This is analogous to a min-max game between the two models, according to the following equation:",
"In Eq. (DISPLAY_FORM10), $x$ denotes a multi-dimensional sample, e.g., an image, and $z$ denotes a multi-dimensional latent space vector, e.g., a multidimensional data point following a predefined distribution function such as that of normal distributions. $D_{\\theta _d}()$ denotes a discriminator function, controlled by parameters $\\theta _d$, which aims to classify a sample into a binary space. $G_{\\theta _g}()$ denotes a generator function, controlled by parameters $\\theta _g$, which aims to generate a sample from some latent space vector. For example, $G_{\\theta _g}(z)$ means using a latent vector $z$ to generate a synthetic/fake image, and $D_{\\theta _d}(x)$ means to classify an image $x$ as binary output (i.e. true/false or 1/0). In the GAN setting, the discriminator $D_{\\theta _d}()$ is learned to distinguish a genuine/true image (labeled as 1) from fake images (labeled as 0). Therefore, given a true image $x$, the ideal output from the discriminator $D_{\\theta _d}(x)$ would be 1. Given a fake image generated from the generator $G_{\\theta _g}(z)$, the ideal prediction from the discriminator $D_{\\theta _d}(G_{\\theta _g}(z))$ would be 0, indicating the sample is a fake image.",
"Following the above definition, the $\\min \\max $ objective function in Eq. (DISPLAY_FORM10) aims to learn parameters for the discriminator ($\\theta _d$) and generator ($\\theta _g$) to reach an optimization goal: The discriminator intends to differentiate true vs. fake images with maximum capability $\\max _{\\theta _d}$ whereas the generator intends to minimize the difference between a fake image vs. a true image $\\min _{\\theta _g}$. In other words, the discriminator sets the characteristics and the generator produces elements, often images, iteratively until it meets the attributes set forth by the discriminator. GANs are often used with images and other visual elements and are notoriously efficient in generating compelling and convincing photorealistic images. Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. The following sections go into further detail regarding how the generator and discriminator are trained in GANs.",
"Generator - In image synthesis, the generator network can be thought of as a mapping from one representation space (latent space) to another (actual data) BIBREF21. When it comes to image synthesis, all of the images in the data space fall into some distribution in a very complex and high-dimensional feature space. Sampling from such a complex space is very difficult, so GANs instead train a generator to create synthetic images from a much more simple feature space (usually random noise) called the latent space. The generator network performs up-sampling of the latent space and is usually a deep neural network consisting of several convolutional and/or fully connected layers BIBREF21. The generator is trained using gradient descent to update the weights of the generator network with the aim of producing data (in our case, images) that the discriminator classifies as real.",
"Discriminator - The discriminator network can be thought of as a mapping from image data to the probability of the image coming from the real data space, and is also generally a deep neural network consisting of several convolution and/or fully connected layers. However, the discriminator performs down-sampling as opposed to up-sampling. Like the generator, it is trained using gradient descent but its goal is to update the weights so that it is more likely to correctly classify images as real or fake.",
"In GANs, the ideal outcome is for both the generator's and discriminator's cost functions to converge so that the generator produces photo-realistic images that are indistinguishable from real data, and the discriminator at the same time becomes an expert at differentiating between real and synthetic data. This, however, is not possible since a reduction in cost of one model generally leads to an increase in cost of the other. This phenomenon makes training GANs very difficult, and training them simultaneously (both models performing gradient descent in parallel) often leads to a stable orbit where neither model is able to converge. To combat this, the generator and discriminator are often trained independently. In this case, the GAN remains the same, but there are different training stages. In one stage, the weights of the generator are kept constant and gradient descent updates the weights of the discriminator, and in the other stage the weights of the discriminator are kept constant while gradient descent updates the weights of the generator. This is repeated for some number of epochs until a desired low cost for each model is reached BIBREF25."
],
[
"Conditional Generative Adversarial Networks (cGAN) are an enhancement of GANs proposed by BIBREF26 shortly after the introduction of GANs by BIBREF9. The objective function of the cGAN is defined in Eq. (DISPLAY_FORM13) which is very similar to the GAN objective function in Eq. (DISPLAY_FORM10) except that the inputs to both discriminator and generator are conditioned by a class label $y$.",
"The main technical innovation of cGAN is that it introduces an additional input or inputs to the original GAN model, allowing the model to be trained on information such as class labels or other conditioning variables as well as the samples themselves, concurrently. Whereas the original GAN was trained only with samples from the data distribution, resulting in the generated sample reflecting the general data distribution, cGAN enables directing the model to generate more tailored outputs.",
"In Figure FIGREF14, the condition vector is the class label (text string) \"Red bird\", which is fed to both the generator and discriminator. It is important, however, that the condition vector is related to the real data. If the model in Figure FIGREF14 was trained with the same set of real data (red birds) but the condition text was \"Yellow fish\", the generator would learn to create images of red birds when conditioned with the text \"Yellow fish\".",
"Note that the condition vector in cGAN can come in many forms, such as texts, not just limited to the class label. Such a unique design provides a direct solution to generate images conditioned by predefined specifications. As a result, cGAN has been used in text-to-image synthesis since the very first day of its invention although modern approaches can deliver much better text-to-image synthesis results.",
"black"
],
[
"In order to generate images from text, one simple solution is to employ the conditional GAN (cGAN) designs and add conditions to the training samples, such that the GAN is trained with respect to the underlying conditions. Several pioneer works have followed similar designs for text-to-image synthesis.",
"black An essential disadvantage of using cGAN for text-to-image synthesis is that that it cannot handle complicated textual descriptions for image generation, because cGAN uses labels as conditions to restrict the GAN inputs. If the text inputs have multiple keywords (or long text descriptions) they cannot be used simultaneously to restrict the input. Instead of using text as conditions, another two approaches BIBREF8, BIBREF16 use text as input features, and concatenate such features with other features to train discriminator and generator, as shown in Figure FIGREF15(b) and (c). To ensure text being used as GAN input, a feature embedding or feature representation learning BIBREF29, BIBREF30 function $\\varphi ()$ is often introduced to convert input text as numeric features, which are further concatenated with other features to train GANs.",
"black"
],
[
"Motivated by the GAN and conditional GAN (cGAN) design, many GAN based frameworks have been proposed to generate images, with different designs and architectures, such as using multiple discriminators, using progressively trained discriminators, or using hierarchical discriminators. Figure FIGREF17 outlines several advanced GAN frameworks in the literature. In addition to these frameworks, many news designs are being proposed to advance the field with rather sophisticated designs. For example, a recent work BIBREF37 proposes to use a pyramid generator and three independent discriminators, blackeach focusing on a different aspect of the images, to lead the generator towards creating images that are photo-realistic on multiple levels. Another recent publication BIBREF38 proposes to use discriminator to measure semantic relevance between image and text instead of class prediction (like most discriminator in GANs does), resulting a new GAN structure outperforming text conditioned auxiliary classifier (TAC-GAN) BIBREF16 and generating diverse, realistic, and relevant to the input text regardless of class.",
"black In the following section, we will first propose a taxonomy that summarizes advanced GAN frameworks for text-to-image synthesis, and review most recent proposed solutions to the challenge of generating photo-realistic images conditioned on natural language text descriptions using GANs. The solutions we discuss are selected based on relevance and quality of contributions. Many publications exist on the subject of image-generation using GANs, but in this paper we focus specifically on models for text-to-image synthesis, with the review emphasizing on the “model” and “contributions” for text-to-image synthesis. At the end of this section, we also briefly review methods using GANs for other image-synthesis applications.",
"black"
],
[
"In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24. The taxonomy organizes GAN frameworks into four categories, including Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. Following the proposed taxonomy, each subsection will introduce several typical frameworks and address their techniques of using GANS to solve certain aspects of the text-to-mage synthesis challenges.",
"black"
],
[
"Although the ultimate goal of Text-to-Image synthesis is to generate images closely related to the textual descriptions, the relevance of the images to the texts are often validated from different perspectives, due to the inherent diversity of human perceptions. For example, when generating images matching to the description “rose flowers”, some users many know the exact type of flowers they like and intend to generate rose flowers with similar colors. Other users, may seek to generate high quality rose flowers with a nice background (e.g. garden). The third group of users may be more interested in generating flowers similar to rose but with different colors and visual appearance, e.g. roses, begonia, and peony. The fourth group of users may want to not only generate flower images, but also use them to form a meaningful action, e.g. a video clip showing flower growth, performing a magic show using those flowers, or telling a love story using the flowers.",
"blackFrom the text-to-Image synthesis point of view, the first group of users intend to precisely control the semantic of the generated images, and their goal is to match the texts and images at the semantic level. The second group of users are more focused on the resolutions and the qualify of the images, in addition to the requirement that the images and texts are semantically related. For the third group of users, their goal is to diversify the output images, such that their images carry diversified visual appearances and are also semantically related. The fourth user group adds a new dimension in image synthesis, and aims to generate sequences of images which are coherent in temporal order, i.e. capture the motion information.",
"black Based on the above descriptions, we categorize GAN based Text-to-Image Synthesis into a taxonomy with four major categories, as shown in Fig. FIGREF24.",
"Semantic Enhancement GANs: Semantic enhancement GANs represent pioneer works of GAN frameworks for text-to-image synthesis. The main focus of the GAN frameworks is to ensure that the generated images are semantically related to the input texts. This objective is mainly achieved by using a neural network to encode texts as dense features, which are further fed to a second network to generate images matching to the texts.",
"Resolution Enhancement GANs: Resolution enhancement GANs mainly focus on generating high qualify images which are semantically matched to the texts. This is mainly achieved through a multi-stage GAN framework, where the outputs from earlier stage GANs are fed to the second (or later) stage GAN to generate better qualify images.",
"Diversity Enhancement GANs: Diversity enhancement GANs intend to diversify the output images, such that the generated images are not only semantically related but also have different types and visual appearance. This objective is mainly achieved through an additional component to estimate semantic relevance between generated images and texts, in order to maximize the output diversity.",
"Motion Enhancement GANs: Motion enhancement GANs intend to add a temporal dimension to the output images, such that they can form meaningful actions with respect to the text descriptions. This goal mainly achieved though a two-step process which first generates images matching to the “actions” of the texts, followed by a mapping or alignment procedure to ensure that images are coherent in the temporal order.",
"black In the following, we will introduce how these GAN frameworks evolve for text-to-image synthesis, and will also review some typical methods of each category.",
"black"
],
[
"Semantic relevance is one the of most important criteria of the text-to-image synthesis. For most GNAs discussed in this survey, they are required to generate images semantically related to the text descriptions. However, the semantic relevance is a rather subjective measure, and images are inherently rich in terms of its semantics and interpretations. Therefore, many GANs are further proposed to enhance the text-to-image synthesis from different perspectives. In this subsection, we will review several classical approaches which are commonly served as text-to-image synthesis baseline.",
"black"
],
[
"Deep convolution generative adversarial network (DC-GAN) BIBREF8 represents the pioneer work for text-to-image synthesis using GANs. Its main goal is to train a deep convolutional generative adversarial network (DC-GAN) on text features. During this process these text features are encoded by another neural network. This neural network is a hybrid convolutional recurrent network at the character level. Concurrently, both neural networks have also feed-forward inference in the way they condition text features. Generating realistic images automatically from natural language text is the motivation of several of the works proposed in this computer vision field. However, actual artificial intelligence (AI) systems are far from achieving this task BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Lately, recurrent neural networks led the way to develop frameworks that learn discriminatively on text features. At the same time, generative adversarial networks (GANs) began recently to show some promise on generating compelling images of a whole host of elements including but not limited to faces, birds, flowers, and non-common images such as room interiorsBIBREF8. DC-GAN is a multimodal learning model that attempts to bridge together both of the above mentioned unsupervised machine learning algorithms, the recurrent neural networks (RNN) and generative adversarial networks (GANs), with the sole purpose of speeding the generation of text-to-image synthesis.",
"black Deep learning shed some light to some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, a bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically subdivided as focused research areas. DC-GAN's contributions are mainly driven by these two research areas. In order to generate plausible images from natural language, DC-GAN contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural text to image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets. Each image in these datasets carry five text descriptions. These text descriptions were created by the research team when setting up the evaluation environment. The DC-GANs model is subsequently trained on several subcategories. Subcategories in this research represent the training and testing sub datasets. The performance shown by these experiments display a promising yet effective way to generate images from textual natural language descriptions BIBREF8.",
"black"
],
[
"Following the pioneer DC-GAN framework BIBREF8, many researches propose revised network structures (e.g. different discriminaotrs) in order to improve images with better semantic relevance to the texts. Based on the deep convolutional adversarial network (DC-GAN) network architecture, GAN-CLS with image-text matching discriminator, GAN-INT learned with text manifold interpolation and GAN-INT-CLS which combines both are proposed to find semantic match between text and image. Similar to the DC-GAN architecture, an adaptive loss function (i.e. Perceptual Loss BIBREF48) is proposed for semantic image synthesis which can synthesize a realistic image that not only matches the target text description but also keep the irrelavant features(e.g. background) from source images BIBREF49. Regarding to the Perceptual Losses, three loss functions (i.e. Pixel reconstruction loss, Activation reconstruction loss and Texture reconstruction loss) are proposed in BIBREF50 in which they construct the network architectures based on the DC-GAN, i.e. GAN-INT-CLS-Pixel, GAN-INT-CLS-VGG and GAN-INT-CLS-Gram with respect to three losses. In BIBREF49, a residual transformation unit is added in the network to retain similar structure of the source image.",
"black Following the BIBREF49 and considering the features in early layers address background while foreground is obtained in latter layers in CNN, a pair of discriminators with different architectures (i.e. Paired-D GAN) is proposed to synthesize background and foreground from a source image seperately BIBREF51. Meanwhile, the skip-connection in the generator is employed to more precisely retain background information in the source image.",
"black"
],
[
"When synthesising images, most text-to-image synthesis methods consider each output image as one single unit to characterize its semantic relevance to the texts. This is likely problematic because most images naturally consist of two crucial components: foreground and background. Without properly separating these two components, it's hard to characterize the semantics of an image if the whole image is treated as a single unit without proper separation.",
"black In order to enhance the semantic relevance of the images, a multi-conditional GAN (MC-GAN) BIBREF52 is proposed to synthesize a target image by combining the background of a source image and a text-described foreground object which does not exist in the source image. A unique feature of MC-GAN is that it proposes a synthesis block in which the background feature is extracted from the given image without non-linear function (i.e. only using convolution and batch normalization) and the foreground feature is the feature map from the previous layer.",
"black Because MC-GAN is able to properly model the background and foreground of the generated images, a unique strength of MC-GAN is that users are able to provide a base image and MC-GAN is able to preserve the background information of the base image to generate new images. black"
],
[
"Due to the fact that training GANs will be much difficult when generating high-resolution images, a two stage GAN (i.e. stackGAN) is proposed in which rough images(i.e. low-resolution images) are generated in stage-I and refined in stage-II. To further improve the quality of generated images, the second version of StackGAN (i.e. Stack++) is proposed to use multi-stage GANs to generate multi-scale images. A color-consistency regularization term is also added into the loss to keep the consistency of images in different scales.",
"black While stackGAN and StackGAN++ are both built on the global sentence vector, AttnGAN is proposed to use attention mechanism (i.e. Deep Attentional Multimodal Similarity Model (DAMSM)) to model the multi-level information (i.e. word level and sentence level) into GANs. In the following, StackGAN, StackGAN++ and AttnGAN will be explained in detail.",
"black Recently, Dynamic Memory Generative Adversarial Network (i.e. DM-GAN)BIBREF53 which uses a dynamic memory component is proposed to focus on refiningthe initial generated image which is the key to the success of generating high quality images."
],
[
"In 2017, Zhang et al. proposed a model for generating photo-realistic images from text descriptions called StackGAN (Stacked Generative Adversarial Network) BIBREF33. In their work, they define a two-stage model that uses two cascaded GANs, each corresponding to one of the stages. The stage I GAN takes a text description as input, converts the text description to a text embedding containing several conditioning variables, and generates a low-quality 64x64 image with rough shapes and colors based on the computed conditioning variables. The stage II GAN then takes this low-quality stage I image as well as the same text embedding and uses the conditioning variables to correct and add more detail to the stage I result. The output of stage II is a photorealistic 256$times$256 image that resembles the text description with compelling accuracy.",
"One major contribution of StackGAN is the use of cascaded GANs for text-to-image synthesis through a sketch-refinement process. By conditioning the stage II GAN on the image produced by the stage I GAN and text description, the stage II GAN is able to correct defects in the stage I output, resulting in high-quality 256x256 images. Prior works have utilized “stacked” GANs to separate the image generation process into structure and style BIBREF42, multiple stages each generating lower-level representations from higher-level representations of the previous stage BIBREF35, and multiple stages combined with a laplacian pyramid approach BIBREF54, which was introduced for image compression by P. Burt and E. Adelson in 1983 and uses the differences between consecutive down-samples of an original image to reconstruct the original image from its down-sampled version BIBREF55. However, these works did not use text descriptions to condition their generator models.",
"Conditioning Augmentation is the other major contribution of StackGAN. Prior works transformed the natural language text description into a fixed text embedding containing static conditioning variables which were fed to the generator BIBREF8. StackGAN does this and then creates a Gaussian distribution from the text embedding and randomly selects variables from the Gaussian distribution to add to the set of conditioning variables during training. This encourages robustness by introducing small variations to the original text embedding for a particular training image while keeping the training image that the generated output is compared to the same. The result is that the trained model produces more diverse images in the same distribution when using Conditioning Augmentation than the same model using a fixed text embedding BIBREF33."
],
[
"Proposed by the same users as StackGAN, StackGAN++ is also a stacked GAN model, but organizes the generators and discriminators in a “tree-like” structure BIBREF47 with multiple stages. The first stage combines a noise vector and conditioning variables (with Conditional Augmentation introduced in BIBREF33) for input to the first generator, which generates a low-resolution image, 64$\\times $64 by default (this can be changed depending on the desired number of stages). Each following stage uses the result from the previous stage and the conditioning variables to produce gradually higher-resolution images. These stages do not use the noise vector again, as the creators assume that the randomness it introduces is already preserved in the output of the first stage. The final stage produces a 256$\\times $256 high-quality image.",
"StackGAN++ introduces a joint conditional and unconditional approximation in its design BIBREF47. The discriminators are trained to calculate the loss between the image produced by the generator and the conditioning variables (measuring how accurately the image represents the description) as well as the loss between the image and real images (the probability of the image being real or fake). The generators then aim to minimize the sum of these losses, improving the final result."
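A minimal sketch of the joint conditional and unconditional objective is given below, assuming a discriminator with two heads (one scoring the image alone, one scoring the image together with the text condition); the head architectures here are placeholders, not the paper's exact design.

```python
# Minimal sketch of StackGAN++'s joint conditional and unconditional losses.
# `disc_uncond` and `disc_cond` stand in for the discriminator's two output heads.
import torch
import torch.nn.functional as F

def discriminator_loss(disc_uncond, disc_cond, real_img, fake_img, cond):
    real_labels = torch.ones(real_img.size(0), 1)
    fake_labels = torch.zeros(fake_img.size(0), 1)
    # Unconditional term: is the image real?
    loss = (F.binary_cross_entropy_with_logits(disc_uncond(real_img), real_labels)
            + F.binary_cross_entropy_with_logits(disc_uncond(fake_img), fake_labels))
    # Conditional term: does the image match the text condition?
    loss += (F.binary_cross_entropy_with_logits(disc_cond(real_img, cond), real_labels)
             + F.binary_cross_entropy_with_logits(disc_cond(fake_img, cond), fake_labels))
    return loss

def generator_loss(disc_uncond, disc_cond, fake_img, cond):
    real_labels = torch.ones(fake_img.size(0), 1)
    # The generator tries to fool both heads, so it minimizes the sum of both losses.
    return (F.binary_cross_entropy_with_logits(disc_uncond(fake_img), real_labels)
            + F.binary_cross_entropy_with_logits(disc_cond(fake_img, cond), real_labels))

# Toy heads only for demonstration.
d_u = lambda img: img.flatten(1).mean(dim=1, keepdim=True)
d_c = lambda img, c: img.flatten(1).mean(dim=1, keepdim=True) + c.mean(dim=1, keepdim=True)
print(discriminator_loss(d_u, d_c, torch.randn(2, 3, 8, 8), torch.randn(2, 3, 8, 8), torch.randn(2, 16)))
print(generator_loss(d_u, d_c, torch.randn(2, 3, 8, 8), torch.randn(2, 16)))
```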
],
[
"Attentional Generative Adversarial Network (AttnGAN) BIBREF10 is very similar, in terms of its structure, to StackGAN++ BIBREF47, discussed in the previous section, but some novel components are added. Like previous works BIBREF56, BIBREF8, BIBREF33, BIBREF47, a text encoder generates a text embedding with conditioning variables based on the overall sentence. Additionally, the text encoder generates a separate text embedding with conditioning variables based on individual words. This process is optimized to produce meaningful variables using a bidirectional recurrent neural network (BRNN), more specifically bidirectional Long Short Term Memory (LSTM) BIBREF57, which, for each word in the description, generates conditions based on the previous word as well as the next word (bidirectional). The first stage of AttnGAN generates a low-resolution image based on the sentence-level text embedding and random noise vector. The output is fed along with the word-level text embedding to an “attention model”, which matches the word-level conditioning variables to regions of the stage I image, producing a word-context matrix. This is then fed to the next stage of the model along with the raw previous stage output. Each consecutive stage works in the same manner, but produces gradually higher-resolution images conditioned on the previous stage.",
"Two major contributions were introduced in AttnGAN: the attentional generative network and the Deep Attentional Multimodal Similarity Model (DAMSM) BIBREF47. The attentional generative network matches specific regions of each stage's output image to conditioning variables from the word-level text embedding. This is a very worthy contribution, allowing each consecutive stage to focus on specific regions of the image independently, adding “attentional” details region by region as opposed to the whole image. The DAMSM is also a key feature introduced by AttnGAN, which is used after the result of the final stage to calculate the similarity between the generated image and the text embedding at both the sentence level and the more fine-grained word level. Table TABREF48 shows scores from different metrics for StackGAN, StackGAN++, AttnGAN, and HDGAN on the CUB, Oxford, and COCO datasets. The table shows that AttnGAN outperforms the other models in terms of IS on the CUB dataset by a small amount and greatly outperforms them on the COCO dataset."
],
[
"Hierarchically-nested adversarial network (HDGAN) is a method proposed by BIBREF36, and its main objective is to tackle the difficult problem of dealing with photographic images from semantic text descriptions. These semantic text descriptions are applied on images from diverse datasets. This method introduces adversarial objectives nested inside hierarchically oriented networks BIBREF36. Hierarchical networks helps regularize mid-level manifestations. In addition to regularize mid-level manifestations, it assists the training of the generator in order to capture highly complex still media elements. These elements are captured in statistical order to train the generator based on settings extracted directly from the image. The latter is an ideal scenario. However, this paper aims to incorporate a single-stream architecture. This single-stream architecture functions as the generator that will form an optimum adaptability towards the jointed discriminators. Once jointed discriminators are setup in an optimum manner, the single-stream architecture will then advance generated images to achieve a much higher resolution BIBREF36.",
"The main contributions of the HDGANs include the introduction of a visual-semantic similarity measure BIBREF36. This feature will aid in the evaluation of the consistency of generated images. In addition to checking the consistency of generated images, one of the key objectives of this step is to test the logical consistency of the end product BIBREF36. The end product in this case would be images that are semantically mapped from text-based natural language descriptions to each area on the picture e.g. a wing on a bird or petal on a flower. Deep learning has created a multitude of opportunities and challenges for researchers in the computer vision AI field. Coupled with GAN and multimodal learning architectures, this field has seen tremendous growth BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Based on these advancements, HDGANs attempt to further extend some desirable and less common features when generating images from textual natural language BIBREF36. In other words, it takes sentences and treats them as a hierarchical structure. This has some positive and negative implications in most cases. For starters, it makes it more complex to generate compelling images. However, one of the key benefits of this elaborate process is the realism obtained once all processes are completed. In addition, one common feature added to this process is the ability to identify parts of sentences with bounding boxes. If a sentence includes common characteristics of a bird, it will surround the attributes of such bird with bounding boxes. In practice, this should happen if the desired image have other elements such as human faces (e.g. eyes, hair, etc), flowers (e.g. petal size, color, etc), or any other inanimate object (e.g. a table, a mug, etc). Finally, HDGANs evaluated some of its claims on common ideal text-to-image datasets such as CUB, COCO, and Oxford-102 BIBREF8, BIBREF36, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. These datasets were first utilized on earlier works BIBREF8, and most of them sport modified features such image annotations, labels, or descriptions. The qualitative and quantitative results reported by researchers in this study were far superior of earlier works in this same field of computer vision AI.",
"black"
],
[
"In this subsection, we introduce text-to-image synthesis methods which try to maximize the diversity of the output images, based on the text descriptions.",
"black"
],
[
"Two issues arise in the traditional GANs BIBREF58 for image synthesis: (1) scalabilirty problem: traditional GANs cannot predict a large number of image categories; and (2) diversity problem: images are often subject to one-to-many mapping, so one image could be labeled as different tags or being described using different texts. To address these problems, GAN conditioned on additional information, e.g. cGAN, is an alternative solution. However, although cGAN and many previously introduced approaches are able to generate images with respect to the text descriptions, they often output images with similar types and visual appearance.",
"black Slightly different from the cGAN, auxiliary classifier GANs (AC-GAN) BIBREF27 proposes to improve the diversity of output images by using an auxiliary classifier to control output images. The overall structure of AC-GAN is shown in Fig. FIGREF15(c). In AC-GAN, every generated image is associated with a class label, in addition to the true/fake label which are commonly used in GAN or cGAN. The discriminator of AC-GAN not only outputs a probability distribution over sources (i.e. whether the image is true or fake), it also output a probability distribution over the class label (i.e. predict which class the image belong to).",
"black By using an auxiliary classifier layer to predict the class of the image, AC-GAN is able to use the predicted class labels of the images to ensure that the output consists of images from different classes, resulting in diversified synthesis images. The results show that AC-GAN can generate images with high diversity.",
"black"
],
[
"Building on the AC-GAN, TAC-GAN BIBREF59 is proposed to replace the class information with textual descriptions as the input to perform the task of text to image synthesis. The architecture of TAC-GAN is shown in Fig. FIGREF15(d), which is similar to AC-GAN. Overall, the major difference between TAC-GAN and AC-GAN is that TAC-GAN conditions the generated images on text descriptions instead of on a class label. This design makes TAC-GAN more generic for image synthesis.",
"black For TAC-GAN, it imposes restrictions on generated images in both texts and class labels. The input vector of TAC-GAN's generative network is built based on a noise vector and embedded vector representation of textual descriptions. The discriminator of TAC-GAN is similar to that of the AC-GAN, which not only predicts whether the image is fake or not, but also predicts the label of the images. A minor difference of TAC-GAN's discriminator, compared to that of the AC-GAN, is that it also receives text information as input before performing its classification.",
"black The experiments and validations, on the Oxford-102 flowers dataset, show that the results produced by TAC-GAN are “slightly better” that other approaches, including GAN-INT-CLS and StackGAN.",
"black"
],
[
"In order to improve the diversity of the output images, both AC-GAN and TAC-GAN's discriminators predict class labels of the synthesised images. This process likely enforces the semantic diversity of the images, but class labels are inherently restrictive in describing image semantics, and images described by text can be matched to multiple labels. Therefore, instead of predicting images' class labels, an alternative solution is to directly quantify their semantic relevance.",
"black The architecture of Text-SeGAN is shown in Fig. FIGREF15(e). In order to directly quantify semantic relevance, Text-SeGAN BIBREF28 adds a regression layer to estimate the semantic relevance between the image and text instead of a classifier layer of predicting labels. The estimated semantic reference is a fractional value ranging between 0 and 1, with a higher value reflecting better semantic relevance between the image and text. Due to this unique design, an inherent advantage of Text-SeGAN is that the generated images are not limited to certain classes and are semantically matching to the text input.",
"black Experiments and validations, on Oxford-102 flower dataset, show that Text-SeGAN can generate diverse images that are semantically relevant to the input text. In addition, the results of Text-SeGAN show improved inception score compared to other approaches, including GAN-INT-CLS, StackGAN, TAC-GAN, and HDGAN.",
"black"
],
[
"Due to the inherent complexity of the visual images, and the diversity of text descriptions (i.e. same words could imply different meanings), it is difficulty to precisely match the texts to the visual images at the semantic levels. For most methods we have discussed so far, they employ a direct text to image generation process, but there is no validation about how generated images comply with the text in a reverse fashion.",
"black To ensure the semantic consistency and diversity, MirrorGAN BIBREF60 employs a mirror structure, which reversely learns from generated images to output texts (an image-to-text process) to further validate whether generated are indeed consistent to the input texts. MirrowGAN includes three modules: a semantic text embedding module (STEM), a global-local collaborative attentive module for cascaded image generation (GLAM), and a semantic text regeneration and alignment module (STREAM). The back to back Text-to-Image (T2I) and Image-to-Text (I2T) are combined to progressively enhance the diversity and semantic consistency of the generated images.",
"black In order to enhance the diversity of the output image, Scene Graph GAN BIBREF61 proposes to use visual scene graphs to describe the layout of the objects, allowing users to precisely specific the relationships between objects in the images. In order to convert the visual scene graph as input for GAN to generate images, this method uses graph convolution to process input graphs. It computes a scene layout by predicting bounding boxes and segmentation masks for objects. After that, it converts the computed layout to an image with a cascaded refinement network.",
"black"
],
[
"Instead of focusing on generating static images, another line of text-to-image synthesis research focuses on generating videos (i.e. sequences of images) from texts. In this context, the synthesised videos are often useful resources for automated assistance or story telling.",
"black"
],
[
"One early/interesting work of motion enhancement GANs is to generate spoofed speech and lip-sync videos (or talking face) of Barack Obama (i.e. ObamaNet) based on text input BIBREF62. This framework is consisted of three parts, i.e. text to speech using “Char2Wav”, mouth shape representation synced to the audio using a time-delayed LSTM and “video generation” conditioned on the mouth shape using “U-Net” architecture. Although the results seem promising, ObamaNet only models the mouth region and the videos are not generated from noise which can be regarded as video prediction other than video generation.",
"black Another meaningful trial of using synthesised videos for automated assistance is to translate spoken language (e.g. text) into sign language video sequences (i.e. T2S) BIBREF63. This is often achieved through a two step process: converting texts as meaningful units to generate images, followed by a learning component to arrange images into sequential order for best representation. More specifically, using RNN based machine translation methods, texts are translated into sign language gloss sequences. Then, glosses are mapped to skeletal pose sequences using a lookup-table. To generate videos, a conditional DCGAN with the input of concatenation of latent representation of the image for a base pose and skeletal pose information is built.",
"black"
],
[
"In BIBREF64, a text-to-video model (T2V) is proposed based on the cGAN in which the input is the isometric Gaussian noise with the text-gist vector served as the generator. A key component of generating videos from text is to train a conditional generative model to extract both static and dynamic information from text, followed by a hybrid framework combining a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN).",
"black More specifically, T2V relies on two types of features, static features and dynamic features, to generate videos. Static features, called “gist” are used to sketch text-conditioned background color and object layout structure. Dynamic features, on the other hand, are considered by transforming input text into an image filter which eventually forms the video generator which consists of three entangled neural networks. The text-gist vector is generated by a gist generator which maintains static information (e.g. background) and a text2filter which captures the dynamic information (i.e. actions) in the text to generate videos.",
"black As demonstrated in the paper BIBREF64, the generated videos are semantically related to the texts, but have a rather low quality (e.g. only $64 \\times 64$ resolution).",
"black"
],
[
"Different from T2V which generates videos from a single text, StoryGAN aims to produce dynamic scenes consistent of specified texts (i.e. story written in a multi-sentence paragraph) using a sequential GAN model BIBREF65. Story encoder, context encoder, and discriminators are the main components of this model. By using stochastic sampling, the story encoder intends to learn an low-dimensional embedding vector for the whole story to keep the continuity of the story. The context encoder is proposed to capture contextual information during sequential image generation based on a deep RNN. Two discriminators of StoryGAN are image discriminator which evaluates the generated images and story discriminator which ensures the global consistency.",
"black The experiments and comparisons, on CLEVR dataset and Pororo cartoon dataset which are originally used for visual question answering, show that StoryGAN improves the generated video qualify in terms of Structural Similarity Index (SSIM), visual qualify, consistence, and relevance (the last three measure are based on human evaluation)."
],
[
"Computer vision applications have strong potential for industries including but not limited to the medical, government, military, entertainment, and online social media fields BIBREF7, BIBREF66, BIBREF67, BIBREF68, BIBREF69, BIBREF70. Text-to-image synthesis is one such application in computer vision AI that has become the main focus in recent years due to its potential for providing beneficial properties and opportunities for a wide range of applicable areas.",
"Text-to-image synthesis is an application byproduct of deep convolutional decoder networks in combination with GANs BIBREF7, BIBREF8, BIBREF10. Deep convolutional networks have contributed to several breakthroughs in image, video, speech, and audio processing. This learning method intends, among other possibilities, to help translate sequential text descriptions to images supplemented by one or many additional methods. Algorithms and methods developed in the computer vision field have allowed researchers in recent years to create realistic images from plain sentences. Advances in the computer vision, deep convolutional nets, and semantic units have shined light and redirected focus to this research area of text-to-image synthesis, having as its prime directive: to aid in the generation of compelling images with as much fidelity to text descriptions as possible.",
"To date, models for generating synthetic images from textual natural language in research laboratories at universities and private companies have yielded compelling images of flowers and birds BIBREF8. Though flowers and birds are the most common objects studied thus far, research has been applied to other classes as well. For example, there have been studies focused solely on human faces BIBREF7, BIBREF8, BIBREF71, BIBREF72.",
"It’s a fascinating time for computer vision AI and deep learning researchers and enthusiasts. The consistent advancement in hardware, software, and contemporaneous development of computer vision AI research disrupts multiple industries. These advances in technology allow for the extraction of several data types from a variety of sources. For example, image data captured from a variety of photo-ready devices, such as smart-phones, and online social media services opened the door to the analysis of large amounts of media datasets BIBREF70. The availability of large media datasets allow new frameworks and algorithms to be proposed and tested on real-world data."
],
[
"A summary of some reviewed methods and benchmark datasets used for validation is reported in Table TABREF43. In addition, the performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48.",
"In order to synthesize images from text descriptions, many frameworks have taken a minimalistic approach by creating small and background-less images BIBREF73. In most cases, the experiments were conducted on simple datasets, initially containing images of birds and flowers. BIBREF8 contributed to these data sets by adding corresponding natural language text descriptions to subsets of the CUB, MSCOCO, and Oxford-102 datasets, which facilitated the work on text-to-image synthesis for several papers released more recently.",
"While most deep learning algorithms use MNIST BIBREF74 dataset as the benchmark, there are three main datasets that are commonly used for evaluation of proposed GAN models for text-to-image synthesis: CUB BIBREF75, Oxford BIBREF76, COCO BIBREF77, and CIFAR-10 BIBREF78. CUB BIBREF75 contains 200 birds with matching text descriptions and Oxford BIBREF76 contains 102 categories of flowers with 40-258 images each and matching text descriptions. These datasets contain individual objects, with the text description corresponding to that object, making them relatively simple. COCO BIBREF77 is much more complex, containing 328k images with 91 different object types. CIFAI-10 BIBREF78 dataset consists of 60000 32$times$32 colour images in 10 classes, with 6000 images per class. In contrast to CUB and Oxford, whose images each contain an individual object, COCO’s images may contain multiple objects, each with a label, so there are many labels per image. The total number of labels over the 328k images is 2.5 million BIBREF77."
],
[
"Several evaluation metrics are used for judging the images produced by text-to-image GANs. Proposed by BIBREF25, Inception Scores (IS) calculates the entropy (randomness) of the conditional distribution, obtained by applying the Inception Model introduced in BIBREF79, and marginal distribution of a large set of generated images, which should be low and high, respectively, for meaningful images. Low entropy of conditional distribution means that the evaluator is confident that the images came from the data distribution, and high entropy of the marginal distribution means that the set of generated images is diverse, which are both desired features. The IS score is then computed as the KL-divergence between the two entropies. FCN-scores BIBREF2 are computed in a similar manner, relying on the intuition that realistic images generated by a GAN should be able to be classified correctly by a classifier trained on real images of the same distribution. Therefore, if the FCN classifier classifies a set of synthetic images accurately, the image is probably realistic, and the corresponding GAN gets a high FCN score. Frechet Inception Distance (FID) BIBREF80 is the other commonly used evaluation metric, and takes a different approach, actually comparing the generated images to real images in the distribution. A high FID means there is little relationship between statistics of the synthetic and real images and vice versa, so lower FIDs are better.",
"black The performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48. In addition, Figure FIGREF49 further lists the performance of 14 GANs with respect to their Inception Scores (IS)."
],
[
"While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we unfortunately were unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). The best evaluation we can give for those with missing data is our own opinions by looking at examples of generated images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.",
"blackIn terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.",
"blackIn addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are most recently developed methods in the field (both published in 2019), indicating that research in text to image synthesis is continuously improving the results for better visual perception and interception. Technical wise, DM-GAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks. A memory writing gate is used for DM-GAN to select important text information and generate images based on he selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generation, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on the image details and text semantics for better understanding and perception."
],
[
"It is worth noting that although this survey mainly focuses on text-to-image synthesis, there have been other applications of GANs in broader image synthesis field that we found fascinating and worth dedicating a small section to. For example, BIBREF72 used Sem-Latent GANs to generate images of faces based on facial attributes, producing impressive results that, at a glance, could be mistaken for real faces. BIBREF82, BIBREF70, and BIBREF83 demonstrated great success in generating text descriptions from images (image captioning) with great accuracy, with BIBREF82 using an attention-based model that automatically learns to focus on salient objects and BIBREF83 using deep visual-semantic alignments. Finally, there is a contribution made by StackGAN++ that was not mentioned in the dedicated section due to its relation to unconditional image generation as opposed to conditional, namely a color-regularization term BIBREF47. This additional term aims to keep the samples generated from the same input at different stages more consistent in color, which resulted in significantly better results for the unconditional model."
],
[
"The recent advancement in text-to-image synthesis research opens the door to several compelling methods and architectures. The main objective of text-to-image synthesis initially was to create images from simple labels, and this objective later scaled to natural languages. In this paper, we reviewed novel methods that generate, in our opinion, the most visually-rich and photo-realistic images, from text-based natural language. These generated images often rely on generative adversarial networks (GANs), deep convolutional decoder networks, and multimodal learning methods.",
"blackIn the paper, we first proposed a taxonomy to organize GAN based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and difference of different methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definition and key contributions from some advanced GAN framworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAn, StoryGAN etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch size samples. In other words, beyond the work of BIBREF8 in which images were generated from text in 64$\\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We were also able to allocate some important papers that were as impressive as the papers we finally surveyed. Though, these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension from the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies as well as increasing the size of the output images."
],
[
"The authors declare that there is no conflict of interest regarding the publication of this article."
]
],
"section_name": [
"Introduction",
"Introduction ::: blackTraditional Learning Based Text-to-image Synthesis",
"Introduction ::: GAN Based Text-to-image Synthesis",
"Related Work",
"Preliminaries and Frameworks",
"Preliminaries and Frameworks ::: Generative Adversarial Neural Network",
"Preliminaries and Frameworks ::: cGAN: Conditional GAN",
"Preliminaries and Frameworks ::: Simple GAN Frameworks for Text-to-Image Synthesis",
"Preliminaries and Frameworks ::: Advanced GAN Frameworks for Text-to-Image Synthesis",
"Text-to-Image Synthesis Taxonomy and Categorization",
"Text-to-Image Synthesis Taxonomy and Categorization ::: GAN based Text-to-Image Synthesis Taxonomy",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN Extensions",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: MC-GAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN++",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: AttnGAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: HDGAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: AC-GAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: TAC-GAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: Text-SeGAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: MirrorGAN and Scene Graph GAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: ObamaNet and T2S",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: T2V",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: StoryGAN",
"GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Applications",
"GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Datasets",
"GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Evaluation Metrics",
"GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: GAN Based Text-to-image Synthesis Results Comparison",
"GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Notable Mentions",
"Conclusion",
"conflict of interest"
]
} | {
"answers": [
{
"annotation_id": [
"45a2b7dc749c642c3ed415dd5a44202ad8b6ac61",
"b4fc38fa3c0347286c4cae9d60f5bb527cf6ae85"
],
"answer": [
{
"evidence": [
"Following the above definition, the $\\min \\max $ objective function in Eq. (DISPLAY_FORM10) aims to learn parameters for the discriminator ($\\theta _d$) and generator ($\\theta _g$) to reach an optimization goal: The discriminator intends to differentiate true vs. fake images with maximum capability $\\max _{\\theta _d}$ whereas the generator intends to minimize the difference between a fake image vs. a true image $\\min _{\\theta _g}$. In other words, the discriminator sets the characteristics and the generator produces elements, often images, iteratively until it meets the attributes set forth by the discriminator. GANs are often used with images and other visual elements and are notoriously efficient in generating compelling and convincing photorealistic images. Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. The following sections go into further detail regarding how the generator and discriminator are trained in GANs."
],
"extractive_spans": [
"unsupervised "
],
"free_form_answer": "",
"highlighted_evidence": [
"Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"black Deep learning shed some light to some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, a bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically subdivided as focused research areas. DC-GAN's contributions are mainly driven by these two research areas. In order to generate plausible images from natural language, DC-GAN contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural text to image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets. Each image in these datasets carry five text descriptions. These text descriptions were created by the research team when setting up the evaluation environment. The DC-GANs model is subsequently trained on several subcategories. Subcategories in this research represent the training and testing sub datasets. The performance shown by these experiments display a promising yet effective way to generate images from textual natural language descriptions BIBREF8."
],
"extractive_spans": [
"Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis"
],
"free_form_answer": "",
"highlighted_evidence": [
"Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"31015e42a831e288126a933eac9521a9e04d65d0"
],
"answer": [
{
"evidence": [
"blackIn the paper, we first proposed a taxonomy to organize GAN based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and difference of different methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definition and key contributions from some advanced GAN framworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAn, StoryGAN etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch size samples. In other words, beyond the work of BIBREF8 in which images were generated from text in 64$\\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We were also able to allocate some important papers that were as impressive as the papers we finally surveyed. Though, these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension from the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies as well as increasing the size of the output images."
],
"extractive_spans": [
"give more independence to the several learning methods (e.g. less human intervention) involved in the studies",
"increasing the size of the output images"
],
"free_form_answer": "",
"highlighted_evidence": [
"Looking into the future, an excellent extension from the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies as well as increasing the size of the output images."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ddd78b6aa4dc2e986a9b1ab93331c47e29896f01"
],
"answer": [
{
"evidence": [
"While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we unfortunately were unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). The best evaluation we can give for those with missing data is our own opinions by looking at examples of generated images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.",
"blackIn terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.",
"blackIn addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are most recently developed methods in the field (both published in 2019), indicating that research in text to image synthesis is continuously improving the results for better visual perception and interception. Technical wise, DM-GAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks. A memory writing gate is used for DM-GAN to select important text information and generate images based on he selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generation, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on the image details and text semantics for better understanding and perception."
],
"extractive_spans": [
"HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset",
"In terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor",
"text to image synthesis is continuously improving the results for better visual perception and interception"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset.",
"In terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis.",
"In addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are most recently developed methods in the field (both published in 2019), indicating that research in text to image synthesis is continuously improving the results for better visual perception and interception."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"f9a6b735c8b98ce2874c4eb5e4a122b468b6a66d"
],
"answer": [
{
"evidence": [
"In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24. The taxonomy organizes GAN frameworks into four categories, including Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. Following the proposed taxonomy, each subsection will introduce several typical frameworks and address their techniques of using GANS to solve certain aspects of the text-to-mage synthesis challenges.",
"FLOAT SELECTED: Figure 9. A Taxonomy and categorization of advanced GAN frameworks for Text-to-Image Synthesis. We categorize advanced GAN frameworks into four major categories: Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. The relationship between relevant frameworks and their publication date are also outlined as a reference."
],
"extractive_spans": [],
"free_form_answer": "Semantic Enhancement GANs: DC-GANs, MC-GAN\nResolution Enhancement GANs: StackGANs, AttnGAN, HDGAN\nDiversity Enhancement GANs: AC-GAN, TAC-GAN etc.\nMotion Enhancement GAGs: T2S, T2V, StoryGAN",
"highlighted_evidence": [
"In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24.",
"FLOAT SELECTED: Figure 9. A Taxonomy and categorization of advanced GAN frameworks for Text-to-Image Synthesis. We categorize advanced GAN frameworks into four major categories: Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. The relationship between relevant frameworks and their publication date are also outlined as a reference."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Is text-to-image synthesis trained is suppervized or unsuppervized manner?",
"What challenges remain unresolved?",
"What is the conclusion of comparison of proposed solution?",
"What is typical GAN architecture for each text-to-image synhesis group?"
],
"question_id": [
"e96adf8466e67bd19f345578d5a6dc68fd0279a1",
"c1477a6c86bd1670dd17407590948000c9a6b7c6",
"e020677261d739c35c6f075cde6937d0098ace7f",
"6389d5a152151fb05aae00b53b521c117d7b5e54"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1. Early research on text-to-image synthesis (Zhu et al., 2007). The system uses correlation between keywords (or keyphrase) and images and identifies informative and “picturable” text units, then searches for the most likely image parts conditioned on the text, and eventually optimizes the picture layout conditioned on both the text and image parts.",
"Figure 2. Supervised learning based text-to-image synthesis (Yan et al., 2016a). The supervised learning process aims to learn layered generative models to generate visual content. Because the learning is customized/conditioned by the given attributes, the generative models of Attribute2Image can generative images with respect to different attributes, such as hair color, age, etc.",
"Figure 3. Generative adversarial neural network (GAN) based text-to-image synthesis (Huang et al., 2018). GAN based text-to-image synthesis combines discriminative and generative learning to train neural networks resulting in the generated images semantically resemble to the training samples or tailored to a subset of training images (i.e. conditioned outputs). ϕ() is a feature embedding function, which converts text as feature vector. z is a latent vector following normal distributions with zero mean. x̂ = G(z,ϕ(t) denotes a synthetic image generated from the generator, using latent vector z and the text features ϕ(t) as the input. D(x̂,ϕ(t)) denotes the prediction of the discriminator based on the input x̂ the generated image and ϕ(t) text information of the generated image. The explanations about the generators and discriminators are detailed in Section 3.1.",
"Figure 4. A visual summary of GAN based text-to-image (T2I) synthesis process, and the summary of GAN based frameworks/methods reviewed in the survey.",
"Figure 5. A conceptual view of the GenerativeAdversarial Network (GAN) architecture. The Generator G(z) is trained to generate synthetic/fake resemble to real samples, from a random noise distribution. The fake samples are fed to the Discriminator D(x) along with real samples. The Discriminator is trained to differentiate fake samples from real samples. The iterative training of the generator and the discriminator helps GAN deliver good generator generating samples very close to the underlying training samples.",
"Figure 6. A conceptual view of the conditional GAN architecture. The Generator G(z|y) generates samples from a random noise distribution and some condition vector (in this case text). The fake samples are fed to the Discriminator D(x|y) along with real samples and the same condition vector, and the Discriminator calculates the probability that the fake sample came from the real data distribution.",
"Figure 7. A simple architecture comparisons between five GAN networks for text-to-image synthesis. This figure also explains how texts are fed as input to train GAN to generate images. (a) Conditional GAN (cGAN) (Mirza and Osindero, 2014a) use labels to condition the input to the generator and the discriminator. The final output is discriminator similar to generic GAN; (b) Manifold interpolation matchingaware discriminator GAN (GAN-INT-CLS) (Reed et al., 2016b) feeds text input to both generator and discriminator (texts are preprocessed as embedding features, using function ϕ(), and concatenated with other input, before feeding to both generator and discriminator). The final output is discriminator similar to generic GAN; (c) Auxiliary classifier GAN (AC-GAN) (Odena et al., 2017b) uses an auxiliary classifier layer to predict the class of the image to ensure that the output consists of images from different classes, resulting in diversified synthesis images; (d) text conditioned auxiliary classifier GAN (TACGAN) (Dash et al., 2017a) share similar design as GAN-INT-CLS, whereas the output include both a discriminator and a classifier (which can be used for classification); and (e) text conditioned semantic classifier GAN (Text-SeGAN) (Cha et al., 2019a) uses a regression layer to estimate the semantic relevance between the image, so the generated images are not limited to certain classes and are semantically matching to the text input.",
"Figure 8. A high level comparison of several advanced GANs framework for text-to-image synthesis. All frameworks take text (red triangle) as input and generate output images. From left to right, (A) uses multiple discriminators and one generator (Durugkar et al., 2017; Nguyen et al., 2017), (B) uses multiple stage GANs where the output from one GAN is fed to the next GAN as input (Zhang et al., 2017b; Denton et al., 2015b), (C) progressively trains symmetric discriminators and generators (Huang et al., 2017), and (D) uses a single-stream generator with a hierarchically-nested discriminator trained from end-to-end (Zhang et al., 2018d).",
"Figure 9. A Taxonomy and categorization of advanced GAN frameworks for Text-to-Image Synthesis. We categorize advanced GAN frameworks into four major categories: Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. The relationship between relevant frameworks and their publication date are also outlined as a reference.",
"Table 1. A summary of different GANs and datasets used for validation. AX symbol indicates that the model was evaluated using the corresponding dataset",
"Table 2. A summary of performance of different methods with respect to the three benchmark datasets and four performancemetrics: Inception Score (IS), Frechet Inception Distance (FID), Human Classifier (HC), and SSIM scores. The generative adversarial networks inlcude DCGAN, GAN-INT-CLS, DongGAN, Paired-D-GAN, StackGAN, StackGAN++, AttnGAN, ObjGAN,HDGAN, DM-GAN, TAC-GAN, Text-SeGAN, Scene Graph GAN, and MirrorGAN. The three benchmark datasets include CUB, Oxford, and COCO datasets. A dash indicates that no data was found.",
"Figure 10. Performance comparison between 14 GANs with respect to their Inception Scores (IS).",
"Figure 11. Examples of best images of “birds” generated by GAN-INT-CLS, StackGAN, StackGAN++, AttnGAN, and HDGAN. Images reprinted from Zhang et al. (2017b,b, 2018b); Xu et al. (2017), and Zhang et al. (2018d), respectively.",
"Figure 12. Examples of best images of “a plate of vegetables” generated by GAN-INT-CLS, StackGAN, StackGAN++, AttnGAN, and HDGAN. Images reprinted from Zhang et al. (2017b,b, 2018b); Xu et al. (2017), and Zhang et al. (2018d), respectively."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"7-Figure5-1.png",
"8-Figure6-1.png",
"9-Figure7-1.png",
"10-Figure8-1.png",
"12-Figure9-1.png",
"18-Table1-1.png",
"20-Table2-1.png",
"21-Figure10-1.png",
"21-Figure11-1.png",
"22-Figure12-1.png"
]
} | [
"What is typical GAN architecture for each text-to-image synhesis group?"
] | [
[
"1910.09399-Text-to-Image Synthesis Taxonomy and Categorization-0",
"1910.09399-12-Figure9-1.png"
]
] | [
"Semantic Enhancement GANs: DC-GANs, MC-GAN\nResolution Enhancement GANs: StackGANs, AttnGAN, HDGAN\nDiversity Enhancement GANs: AC-GAN, TAC-GAN etc.\nMotion Enhancement GAGs: T2S, T2V, StoryGAN"
] | 57 |
1807.03367 | Talk the Walk: Navigating New York City through Grounded Dialogue | We introduce"Talk The Walk", the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a"guide"and a"tourist") that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task. | {
"paragraphs": [
[
"0pt0.03.03 *",
"0pt0.030.03 *",
"0pt0.030.03",
"We introduce “Talk The Walk”, the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a “guide” and a “tourist”) that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task."
],
[
"As artificial intelligence plays an ever more prominent role in everyday human lives, it becomes increasingly important to enable machines to communicate via natural language—not only with humans, but also with each other. Learning algorithms for natural language understanding, such as in machine translation and reading comprehension, have progressed at an unprecedented rate in recent years, but still rely on static, large-scale, text-only datasets that lack crucial aspects of how humans understand and produce natural language. Namely, humans develop language capabilities by being embodied in an environment which they can perceive, manipulate and move around in; and by interacting with other humans. Hence, we argue that we should incorporate all three fundamental aspects of human language acquisition—perception, action and interactive communication—and develop a task and dataset to that effect.",
"We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location. The guide has access to a map and knows the target location, but does not know where the tourist is; the tourist has a 360-degree view of the world, but knows neither the target location on the map nor the way to it. The agents need to work together through communication in order to successfully solve the task. An example of the task is given in Figure FIGREF3 .",
"Grounded language learning has (re-)gained traction in the AI community, and much attention is currently devoted to virtual embodiment—the development of multi-agent communication tasks in virtual environments—which has been argued to be a viable strategy for acquiring natural language semantics BIBREF0 . Various related tasks have recently been introduced, but in each case with some limitations. Although visually grounded dialogue tasks BIBREF1 , BIBREF2 comprise perceptual grounding and multi-agent interaction, their agents are passive observers and do not act in the environment. By contrast, instruction-following tasks, such as VNL BIBREF3 , involve action and perception but lack natural language interaction with other agents. Furthermore, some of these works use simulated environments BIBREF4 and/or templated language BIBREF5 , which arguably oversimplifies real perception or natural language, respectively. See Table TABREF15 for a comparison.",
"Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.",
"We argue that for artificial agents to solve this challenging problem, some fundamental architecture designs are missing, and our hope is that this task motivates their innovation. To that end, we focus on the task of localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism. To model the interaction between language and action, this architecture repeatedly conditions the spatial dimensions of a convolution on the communicated message sequence.",
"This work makes the following contributions: 1) We present the first large scale dialogue dataset grounded in action and perception; 2) We introduce the MASC architecture for localization and show it yields improvements for both emergent and natural language; 4) Using localization models, we establish initial baselines on the full task; 5) We show that our best model exceeds human performance under the assumption of “perfect perception” and with a learned emergent communication protocol, and sets a non-trivial baseline with natural language."
],
[
"We create a perceptual environment by manually capturing several neighborhoods of New York City (NYC) with a 360 camera. Most parts of the city are grid-like and uniform, which makes it well-suited for obtaining a 2D grid. For Talk The Walk, we capture parts of Hell's Kitchen, East Village, the Financial District, Williamsburg and the Upper East Side—see Figure FIGREF66 in Appendix SECREF14 for their respective locations within NYC. For each neighborhood, we choose an approximately 5x5 grid and capture a 360 view on all four corners of each intersection, leading to a grid-size of roughly 10x10 per neighborhood.",
"The tourist's location is given as a tuple INLINEFORM0 , where INLINEFORM1 are the coordinates and INLINEFORM2 signifies the orientation (north, east, south or west). The tourist can take three actions: turn left, turn right and go forward. For moving forward, we add INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 to the INLINEFORM7 coordinates for the respective orientations. Upon a turning action, the orientation is updated by INLINEFORM8 where INLINEFORM9 for left and INLINEFORM10 for right. If the tourist moves outside the grid, we issue a warning that they cannot go in that direction and do not update the location. Moreover, tourists are shown different types of transitions: a short transition for actions that bring the tourist to a different corner of the same intersection; and a longer transition for actions that bring them to a new intersection.",
"The guide observes a map that corresponds to the tourist's environment. We exploit the fact that urban areas like NYC are full of local businesses, and overlay the map with these landmarks as localization points for our task. Specifically, we manually annotate each corner of the intersection with a set of landmarks INLINEFORM0 , each coming from one of the following categories:",
" Bar Playfield Bank Hotel Shop Subway Coffee Shop Restaurant Theater ",
"The right-side of Figure FIGREF3 illustrates how the map is presented. Note that within-intersection transitions have a smaller grid distance than transitions to new intersections. To ensure that the localization task is not too easy, we do not include street names in the overhead map and keep the landmark categories coarse. That is, the dialogue is driven by uncertainty in the tourist's current location and the properties of the target location: if the exact location and orientation of the tourist were known, it would suffice to communicate a sequence of actions."
],
[
"For the Talk The Walk task, we randomly choose one of the five neighborhoods, and subsample a 4x4 grid (one block with four complete intersections) from the entire grid. We specify the boundaries of the grid by the top-left and bottom-right corners INLINEFORM0 . Next, we construct the overhead map of the environment, i.e. INLINEFORM1 with INLINEFORM2 and INLINEFORM3 . We subsequently sample a start location and orientation INLINEFORM4 and a target location INLINEFORM5 at random.",
"The shared goal of the two agents is to navigate the tourist to the target location INLINEFORM0 , which is only known to the guide. The tourist perceives a “street view” planar projection INLINEFORM1 of the 360 image at location INLINEFORM2 and can simultaneously chat with the guide and navigate through the environment. The guide's role consists of reading the tourist description of the environment, building a “mental map” of their current position and providing instructions for navigating towards the target location. Whenever the guide believes that the tourist has reached the target location, they instruct the system to evaluate the tourist's location. The task ends when the evaluation is successful—i.e., when INLINEFORM3 —or otherwise continues until a total of three failed attempts. The additional attempts are meant to ease the task for humans, as we found that they otherwise often fail at the task but still end up close to the target location, e.g., at the wrong corner of the correct intersection."
],
[
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs."
],
[
"The Talk The Walk dataset consists of over 10k successful dialogues—see Table FIGREF66 in the appendix for the dataset statistics split by neighborhood. Turkers successfully completed INLINEFORM0 of all finished tasks (we use this statistic as the human success rate). More than six hundred participants successfully completed at least one Talk The Walk HIT. Although the Visual Dialog BIBREF2 and GuessWhat BIBREF1 datasets are larger, the collected Talk The Walk dialogs are significantly longer. On average, Turkers needed more than 62 acts (i.e utterances and actions) before they successfully completed the task, whereas Visual Dialog requires 20 acts. The majority of acts comprise the tourist's actions, with on average more than 44 actions per dialogue. The guide produces roughly 9 utterances per dialogue, slightly more than the tourist's 8 utterances. Turkers use diverse discourse, with a vocabulary size of more than 10K (calculated over all successful dialogues). An example from the dataset is shown in Appendix SECREF14 . The dataset is available at https://github.com/facebookresearch/talkthewalk."
],
[
"We investigate the difficulty of the proposed task by establishing initial baselines. The final Talk The Walk task is challenging and encompasses several important sub-tasks, ranging from landmark recognition to tourist localization and natural language instruction-giving. Arguably the most important sub-task is localization: without such capabilities the guide can not tell whether the tourist reached the target location. In this work, we establish a minimal baseline for Talk The Walk by utilizing agents trained for localization. Specifically, we let trained tourist models undertake random walks, using the following protocol: at each step, the tourist communicates its observations and actions to the guide, who predicts the tourist's location. If the guide predicts that the tourist is at target, we evaluate its location. If successful, the task ends, otherwise we continue until there have been three wrong evaluations. The protocol is given as pseudo-code in Appendix SECREF12 ."
],
[
"The designed navigation protocol relies on a trained localization model that predicts the tourist's location from a communicated message. Before we formalize this localization sub-task in Section UID21 , we further introduce two simplifying assumptions—perfect perception and orientation-agnosticism—so as to overcome some of the difficulties we encountered in preliminary experiments.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Perfect Perception Early experiments revealed that perceptual grounding of landmarks is difficult: we set up a landmark classification problem, on which models with extracted CNN BIBREF7 or text recognition features BIBREF8 barely outperform a random baseline—see Appendix SECREF13 for full details. This finding implies that localization models from image input are limited by their ability to recognize landmarks, and, as a result, would not generalize to unseen environments. To ensure that perception is not the limiting factor when investigating the landmark-grounding and action-grounding capabilities of localization models, we assume “perfect perception”: in lieu of the 360 image view, the tourist is given the landmarks at its current location. More formally, each state observation INLINEFORM0 now equals the set of landmarks at the INLINEFORM1 -location, i.e. INLINEFORM2 . If the INLINEFORM3 -location does not have any visible landmarks, we return a single “empty corner” symbol. We stress that our findings—including a novel architecture for grounding actions into an overhead map, see Section UID28 —should carry over to settings without the perfect perception assumption.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Orientation-agnosticism We opt to ignore the tourist's orientation, which simplifies the set of actions to [Left, Right, Up, Down], corresponding to adding [(-1, 0), (1, 0), (0, 1), (0, -1)] to the current INLINEFORM0 coordinates, respectively. Note that actions are now coupled to an orientation on the map—e.g. up is equal to going north—and this implicitly assumes that the tourist has access to a compass. This also affects perception, since the tourist now has access to views from all orientations: in conjunction with “perfect perception”, implying that only landmarks at the current corner are given, whereas landmarks from different corners (e.g. across the street) are not visible.",
"Even with these simplifications, the localization-based baseline comes with its own set of challenges. As we show in Section SECREF34 , the task requires communication about a short (random) path—i.e., not only a sequence of observations but also actions—in order to achieve high localization accuracy. This means that the guide needs to decode observations from multiple time steps, as well as understand their 2D spatial arrangement as communicated via the sequence of actions. Thus, in order to get to a good understanding of the task, we thoroughly examine whether the agents can learn a communication protocol that simultaneously grounds observations and actions into the guide's map. In doing so, we thoroughly study the role of the communication channel in the localization task, by investigating increasingly constrained forms of communication: from differentiable continuous vectors to emergent discrete symbols to the full complexity of natural language.",
"The full navigation baseline hinges on a localization model from random trajectories. While we can sample random actions in the emergent communication setup, this is not possible for the natural language setup because the messages are coupled to the trajectories of the human annotators. This leads to slightly different problem setups, as described below.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Emergent language A tourist, starting from a random location, takes INLINEFORM0 random actions INLINEFORM1 to reach target location INLINEFORM2 . Every location in the environment has a corresponding set of landmarks INLINEFORM3 for each of the INLINEFORM4 coordinates. As the tourist navigates, the agent perceives INLINEFORM5 state-observations INLINEFORM6 where each observation INLINEFORM7 consists of a set of INLINEFORM8 landmark symbols INLINEFORM9 . Given the observations INLINEFORM10 and actions INLINEFORM11 , the tourist generates a message INLINEFORM12 which is communicated to the other agent. The objective of the guide is to predict the location INLINEFORM13 from the tourist's message INLINEFORM14 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Natural language In contrast to our emergent communication experiments, we do not take random actions but instead extract actions, observations, and messages from the dataset. Specifically, we consider each tourist utterance (i.e. at any point in the dialogue), obtain the current tourist location as target location INLINEFORM0 , the utterance itself as message INLINEFORM1 , and the sequence of observations and actions that took place between the current and previous tourist utterance as INLINEFORM2 and INLINEFORM3 , respectively. Similar to the emergent language setting, the guide's objective is to predict the target location INLINEFORM4 models from the tourist message INLINEFORM5 . We conduct experiments with INLINEFORM6 taken from the dataset and with INLINEFORM7 generated from the extracted observations INLINEFORM8 and actions INLINEFORM9 ."
],
[
"This section outlines the tourist and guide architectures. We first describe how the tourist produces messages for the various communication channels across which the messages are sent. We subsequently describe how these messages are processed by the guide, and introduce the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding into the 2D overhead map in order to predict the tourist's location."
],
[
"For each of the communication channels, we outline the procedure for generating a message INLINEFORM0 . Given a set of state observations INLINEFORM1 , we represent each observation by summing the INLINEFORM2 -dimensional embeddings of the observed landmarks, i.e. for INLINEFORM3 , INLINEFORM4 , where INLINEFORM5 is the landmark embedding lookup table. In addition, we embed action INLINEFORM6 into a INLINEFORM7 -dimensional embedding INLINEFORM8 via a look-up table INLINEFORM9 . We experiment with three types of communication channel.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Continuous vectors The tourist has access to observations of several time steps, whose order is important for accurate localization. Because summing embeddings is order-invariant, we introduce a sum over positionally-gated embeddings, which, conditioned on time step INLINEFORM0 , pushes embedding information into the appropriate dimensions. More specifically, we generate an observation message INLINEFORM1 , where INLINEFORM2 is a learned gating vector for time step INLINEFORM3 . In a similar fashion, we produce action message INLINEFORM4 and send the concatenated vectors INLINEFORM5 as message to the guide. We can interpret continuous vector communication as a single, monolithic model because its architecture is end-to-end differentiable, enabling gradient-based optimization for training.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Discrete symbols Like the continuous vector communication model, with discrete communication the tourist also uses separate channels for observations and actions, as well as a sum over positionally gated embeddings to generate observation embedding INLINEFORM0 . We pass this embedding through a sigmoid and generate a message INLINEFORM1 by sampling from the resulting Bernoulli distributions:",
" INLINEFORM0 ",
"The action message INLINEFORM0 is produced in the same way, and we obtain the final tourist message INLINEFORM1 through concatenating the messages.",
"The communication channel's sampling operation yields the model non-differentiable, so we use policy gradients BIBREF9 , BIBREF10 to train the parameters INLINEFORM0 of the tourist model. That is, we estimate the gradient by INLINEFORM1 ",
" where the reward function INLINEFORM0 is the negative guide's loss (see Section SECREF25 ) and INLINEFORM1 a state-value baseline to reduce variance. We use a linear transformation over the concatenated embeddings as baseline prediction, i.e. INLINEFORM2 , and train it with a mean squared error loss.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Natural Language Because observations and actions are of variable-length, we use an LSTM encoder over the sequence of observations embeddings INLINEFORM0 , and extract its last hidden state INLINEFORM1 . We use a separate LSTM encoder for action embeddings INLINEFORM2 , and concatenate both INLINEFORM3 and INLINEFORM4 to the input of the LSTM decoder at each time step: DISPLAYFORM0 ",
" where INLINEFORM0 a look-up table, taking input tokens INLINEFORM1 . We train with teacher-forcing, i.e. we optimize the cross-entropy loss: INLINEFORM2 . At test time, we explore the following decoding strategies: greedy, sampling and a beam-search. We also fine-tune a trained tourist model (starting from a pre-trained model) with policy gradients in order to minimize the guide's prediction loss."
],
[
"Given a tourist message INLINEFORM0 describing their observations and actions, the objective of the guide is to predict the tourist's location on the map. First, we outline the procedure for extracting observation embedding INLINEFORM1 and action embeddings INLINEFORM2 from the message INLINEFORM3 for each of the types of communication. Next, we discuss the MASC mechanism that takes the observations and actions in order to ground them on the guide's map in order to predict the tourist's location.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Continuous For the continuous communication model, we assign the observation message to the observation embedding, i.e. INLINEFORM0 . To extract the action embedding for time step INLINEFORM1 , we apply a linear layer to the action message, i.e. INLINEFORM2 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Discrete For discrete communication, we obtain observation INLINEFORM0 by applying a linear layer to the observation message, i.e. INLINEFORM1 . Similar to the continuous communication model, we use a linear layer over action message INLINEFORM2 to obtain action embedding INLINEFORM3 for time step INLINEFORM4 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Natural Language The message INLINEFORM0 contains information about observations and actions, so we use a recurrent neural network with attention mechanism to extract the relevant observation and action embeddings. Specifically, we encode the message INLINEFORM1 , consisting of INLINEFORM2 tokens INLINEFORM3 taken from vocabulary INLINEFORM4 , with a bidirectional LSTM: DISPLAYFORM0 ",
" where INLINEFORM0 is the word embedding look-up table. We obtain observation embedding INLINEFORM1 through an attention mechanism over the hidden states INLINEFORM2 : DISPLAYFORM0 ",
"where INLINEFORM0 is a learned control embedding who is updated through a linear transformation of the previous control and observation embedding: INLINEFORM1 . We use the same mechanism to extract the action embedding INLINEFORM2 from the hidden states. For the observation embedding, we obtain the final representation by summing positionally gated embeddings, i.e., INLINEFORM3 .",
"We represent the guide's map as INLINEFORM0 , where in this case INLINEFORM1 , where each INLINEFORM2 -dimensional INLINEFORM3 location embedding INLINEFORM4 is computed as the sum of the guide's landmark embeddings for that location.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Motivation While the guide's map representation contains only local landmark information, the tourist communicates a trajectory of the map (i.e. actions and observations from multiple locations), implying that directly comparing the tourist's message with the individual landmark embeddings is probably suboptimal. Instead, we want to aggregate landmark information from surrounding locations by imputing trajectories over the map to predict locations. We propose a mechanism for translating landmark embeddings according to state transitions (left, right, up, down), which can be expressed as a 2D convolution over the map embeddings. For simplicity, let us assume that the map embedding INLINEFORM0 is 1-dimensional, then a left action can be realized through application of the following INLINEFORM1 kernel: INLINEFORM2 which effectively shifts all values of INLINEFORM3 one position to the left. We propose to learn such state-transitions from the tourist message through a differentiable attention-mask over the spatial dimensions of a 3x3 convolution.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em MASC We linearly project each predicted action embedding INLINEFORM0 to a 9-dimensional vector INLINEFORM1 , normalize it by a softmax and subsequently reshape the vector into a 3x3 mask INLINEFORM2 : DISPLAYFORM0 ",
" We learn a 3x3 convolutional kernel INLINEFORM0 , with INLINEFORM1 features, and apply the mask INLINEFORM2 to the spatial dimensions of the convolution by first broadcasting its values along the feature dimensions, i.e. INLINEFORM3 , and subsequently taking the Hadamard product: INLINEFORM4 . For each action step INLINEFORM5 , we then apply a 2D convolution with masked weight INLINEFORM6 to obtain a new map embedding INLINEFORM7 , where we zero-pad the input to maintain identical spatial dimensions.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Prediction model We repeat the MASC operation INLINEFORM0 times (i.e. once for each action), and then aggregate the map embeddings by a sum over positionally-gated embeddings: INLINEFORM1 . We score locations by taking the dot-product of the observation embedding INLINEFORM2 , which contains information about the sequence of observed landmarks by the tourist, and the map. We compute a distribution over the locations of the map INLINEFORM3 by taking a softmax over the computed scores: DISPLAYFORM0 ",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Predicting T While emergent communication models use a fixed length trasjectory INLINEFORM0 , natural language messages may differ in the number of communicated observations and actions. Hence, we predict INLINEFORM1 from the communicated message. Specifically, we use a softmax regression layer over the last hidden state INLINEFORM2 of the RNN, and subsequently sample INLINEFORM3 from the resulting multinomial distribution: DISPLAYFORM0 ",
"We jointly train the INLINEFORM0 -prediction model via REINFORCE, with the guide's loss as reward function and a mean-reward baseline."
],
[
"To better analyze the performance of the models incorporating MASC, we compare against a no-MASC baseline in our experiments, as well as a prediction upper bound.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em No MASC We compare the proposed MASC model with a model that does not include this mechanism. Whereas MASC predicts a convolution mask from the tourist message, the “No MASC” model uses INLINEFORM0 , the ordinary convolutional kernel to convolve the map embedding INLINEFORM1 to obtain INLINEFORM2 . We also share the weights of this convolution at each time step.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Prediction upper-bound Because we have access to the class-conditional likelihood INLINEFORM0 , we are able to compute the Bayes error rate (or irreducible error). No model (no matter how expressive) with any amount of data can ever obtain better localization accuracy as there are multiple locations consistent with the observations and actions."
],
[
"In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model leads to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work."
],
[
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Task is not too easy The upper-bound on localization performance in Table TABREF32 suggest that communicating a single landmark observation is not sufficient for accurate localization of the tourist ( INLINEFORM0 35% accuracy). This is an important result for the full navigation task because the need for two-way communication disappears if localization is too easy; if the guide knows the exact location of the tourist it suffices to communicate a list of instructions, which is then executed by the tourist. The uncertainty in the tourist's location is what drives the dialogue between the two agents.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Importance of actions We observe that the upperbound for only communicating observations plateaus around 57% (even for INLINEFORM0 actions), whereas it exceeds 90% when we also take actions into account. This implies that, at least for random walks, it is essential to communicate a trajectory, including observations and actions, in order to achieve high localization accuracy."
],
[
"We first report the results for tourist localization with emergent language in Table TABREF32 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em MASC improves performance The MASC architecture significantly improves performance compared to models that do not include this mechanism. For instance, for INLINEFORM0 action, MASC already achieves 56.09 % on the test set and this further increases to 69.85% for INLINEFORM1 . On the other hand, no-MASC models hit a plateau at 43%. In Appendix SECREF11 , we analyze learned MASC values, and show that communicated actions are often mapped to corresponding state-transitions.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Continuous vs discrete We observe similar performance for continuous and discrete emergent communication models, implying that a discrete communication channel is not a limiting factor for localization performance."
],
[
"We report the results of tourist localization with natural language in Table TABREF36 . We compare accuracy of the guide model (with MASC) trained on utterances from (i) humans, (ii) a supervised model with various decoding strategies, and (iii) a policy gradient model optimized with respect to the loss of a frozen, pre-trained guide model on human utterances.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Human utterances Compared to emergent language, localization from human utterances is much harder, achieving only INLINEFORM0 on the test set. Here, we report localization from a single utterance, but in Appendix SECREF45 we show that including up to five dialogue utterances only improves performance to INLINEFORM1 . We also show that MASC outperform no-MASC models for natural language communication.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Generated utterances We also investigate generated tourist utterances from conditional language models. Interestingly, we observe that the supervised model (with greedy and beam-search decoding) as well as the policy gradient model leads to an improvement of more than 10 accuracy points over the human utterances. However, their level of accuracy is slightly below the baseline of communicating a single observation, indicating that these models only learn to ground utterances in a single landmark observation.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Better grounding of generated utterances We analyze natural language samples in Table TABREF38 , and confirm that, unlike human utterances, the generated utterances are talking about the observed landmarks. This observation explains why the generated utterances obtain higher localization accuracy. The current language models are most successful when conditioned on a single landmark observation; We show in Appendix UID43 that performance quickly deteriorates when the model is conditioned on more observations, suggesting that it can not produce natural language utterances about multiple time steps."
],
[
"Table TABREF36 shows results for the best localization models on the full task, evaluated via the random walk protocol defined in Algorithm SECREF12 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Comparison with human annotators Interestingly, our best localization model (continuous communication, with MASC, and INLINEFORM0 ) achieves 88.33% on the test set and thus exceed human performance of 76.74% on the full task. While emergent models appear to be stronger localizers, humans might cope with their localization uncertainty through other mechanisms (e.g. better guidance, bias towards taking particular paths, etc). The simplifying assumption of perfect perception also helps.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Number of actions Unsurprisingly, humans take fewer steps (roughly 15) than our best random walk model (roughly 34). Our human annotators likely used some form of guidance to navigate faster to the target."
],
[
"We introduced the Talk The Walk task and dataset, which consists of crowd-sourced dialogues in which two human annotators collaborate to navigate to target locations in the virtual streets of NYC. For the important localization sub-task, we proposed MASC—a novel grounding mechanism to learn state-transition from the tourist's message—and showed that it improves localization performance for emergent and natural language. We use the localization model to provide baseline numbers on the Talk The Walk task, in order to facilitate future research."
],
[
"The Talk the Walk task and dataset facilitate future research on various important subfields of artificial intelligence, including grounded language learning, goal-oriented dialogue research and situated navigation. Here, we describe related previous work in these areas.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Related tasks There has been a long line of work involving related tasks. Early work on task-oriented dialogue dates back to the early 90s with the introduction of the Map Task BIBREF11 and Maze Game BIBREF25 corpora. Recent efforts have led to larger-scale goal-oriented dialogue datasets, for instance to aid research on visually-grounded dialogue BIBREF2 , BIBREF1 , knowledge-base-grounded discourse BIBREF29 or negotiation tasks BIBREF36 . At the same time, there has been a big push to develop environments for embodied AI, many of which involve agents following natural language instructions with respect to an environment BIBREF13 , BIBREF50 , BIBREF5 , BIBREF39 , BIBREF19 , BIBREF18 , following-up on early work in this area BIBREF38 , BIBREF20 . An early example of navigation using neural networks is BIBREF28 , who propose an online learning approach for robot navigation. Recently, there has been increased interest in using end-to-end trainable neural networks for learning to navigate indoor scenes BIBREF27 , BIBREF26 or large cities BIBREF17 , BIBREF40 , but, unlike our work, without multi-agent communication. Also the task of localization (without multi-agent communication) has recently been studied BIBREF18 , BIBREF48 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Grounded language learning Grounded language learning is motivated by the observation that humans learn language embodied (grounded) in sensorimotor experience of the physical world BIBREF15 , BIBREF45 . On the one hand, work in multi-modal semantics has shown that grounding can lead to practical improvements on various natural language understanding tasks BIBREF14 , BIBREF31 . In robotics, researchers dissatisfied with purely symbolic accounts of meaning attempted to build robotic systems with the aim of grounding meaning in physical experience of the world BIBREF44 , BIBREF46 . Recently, grounding has also been applied to the learning of sentence representations BIBREF32 , image captioning BIBREF37 , BIBREF49 , visual question answering BIBREF12 , BIBREF22 , visual reasoning BIBREF30 , BIBREF42 , and grounded machine translation BIBREF43 , BIBREF23 . Grounding also plays a crucial role in the emergent research of multi-agent communication, where, agents communicate (in natural language or otherwise) in order to solve a task, with respect to their shared environment BIBREF35 , BIBREF21 , BIBREF41 , BIBREF24 , BIBREF36 , BIBREF47 , BIBREF34 ."
],
[
"For the emergent communication models, we use an embedding size INLINEFORM0 . The natural language experiments use 128-dimensional word embeddings and a bidirectional RNN with 256 units. In all experiments, we train the guide with a cross entropy loss using the ADAM optimizer with default hyper-parameters BIBREF33 . We perform early stopping on the validation accuracy, and report the corresponding train, valid and test accuracy. We optimize the localization models with continuous, discrete and natural language communication channels for 200, 200, and 25 epochs, respectively. To facilitate further research on Talk The Walk, we make our code base for reproducing experiments publicly available at https://github.com/facebookresearch/talkthewalk."
],
[
"First, we investigate the sensitivity of tourist generation models to the trajectory length, finding that the model conditioned on a single observation (i.e. INLINEFORM0 ) achieves best performance. In the next subsection, we further analyze localization models from human utterances by investigating MASC and no-MASC models with increasing dialogue context."
],
[
"After training the supervised tourist model (conditioned on observations and action from human expert trajectories), there are two ways to train an accompanying guide model. We can optimize a location prediction model on either (i) extracted human trajectories (as in the localization setup from human utterances) or (ii) on all random paths of length INLINEFORM0 (as in the full task evaluation). Here, we investigate the impact of (1) using either human or random trajectories for training the guide model, and (2) the effect of varying the path length INLINEFORM1 during the full-task evaluation. For random trajectories, guide training uses the same path length INLINEFORM2 as is used during evaluation. We use a pre-trained tourist model with greedy decoding for generating the tourist utterances. Table TABREF40 summarizes the results.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Human vs random trajectories We only observe small improvements for training on random trajectories. Human trajectories are thus diverse enough to generalize to random trajectories.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Effect of path length There is a strong negative correlation between task success and the conditioned trajectory length. We observe that the full task performance quickly deteriorates for both human and random trajectories. This suggests that the tourist generation model can not produce natural language utterances that describe multiple observations and actions. Although it is possible that the guide model can not process such utterances, this is not very likely because the MASC architectures handles such messages successfully for emergent communication.",
"We report localization performance of tourist utterances generated by beam search decoding of varying beam size in Table TABREF40 . We find that performance decreases from 29.05% to 20.87% accuracy on the test set when we increase the beam-size from one to eight."
],
[
"We conduct an ablation study for MASC on natural language with varying dialogue context. Specifically, we compare localization accuracy of MASC and no-MASC models trained on the last [1, 3, 5] utterances of the dialogue (including guide utterances). We report these results in Table TABREF41 . In all cases, MASC outperforms the no-MASC models by several accuracy points. We also observe that mean predicted INLINEFORM0 (over the test set) increases from 1 to 2 when more dialogue context is included."
],
[
"Figure FIGREF46 shows the MASC values for a learned model with emergent discrete communications and INLINEFORM0 actions. Specifically, we look at the predicted MASC values for different action sequences taken by the tourist. We observe that the first action is always mapped to the correct state-transition, but that the second and third MASC values do not always correspond to right state-transitions."
],
[
"We provide pseudo-code for evaluation of localization models on the full task in Algorithm SECREF12 , as well as results for all emergent communication models in Table TABREF55 .",
" INLINEFORM0 INLINEFORM1 ",
" INLINEFORM0 take new action INLINEFORM1 INLINEFORM2 ",
"Performance evaluation of location prediction model on full Talk The Walk setup"
],
[
"While the guide has access to the landmark labels, the tourist needs to recognize these landmarks from raw perceptual information. In this section, we study landmark classification as a supervised learning problem to investigate the difficulty of perceptual grounding in Talk The Walk.",
"The Talk The Walk dataset contains a total of 307 different landmarks divided among nine classes, see Figure FIGREF62 for how they are distributed. The class distribution is fairly imbalanced, with shops and restaurants as the most frequent landmarks and relatively few play fields and theaters. We treat landmark recognition as a multi-label classification problem as there can be multiple landmarks on a corner.",
"For the task of landmark classification, we extract the relevant views of the 360 image from which a landmark is visible. Because landmarks are labeled to be on a specific corner of an intersection, we assume that they are visible from one of the orientations facing away from the intersection. For example, for a landmark on the northwest corner of an intersection, we extract views from both the north and west direction. The orientation-specific views are obtained by a planar projection of the full 360-image with a small field of view (60 degrees) to limit distortions. To cover the full field of view, we extract two images per orientation, with their horizontal focus point 30 degrees apart. Hence, we obtain eight images per 360 image with corresponding orientation INLINEFORM0 .",
"We run the following pre-trained feature extractors over the extracted images:",
"For the text recognition model, we use a learned look-up table INLINEFORM0 to embed the extracted text features INLINEFORM1 , and fuse all embeddings of four images through a bag of embeddings, i.e., INLINEFORM2 . We use a linear layer followed by a sigmoid to predict the probability for each class, i.e. INLINEFORM3 . We also experiment with replacing the look-up embeddings with pre-trained FastText embeddings BIBREF16 . For the ResNet model, we use a bag of embeddings over the four ResNet features, i.e. INLINEFORM4 , before we pass it through a linear layer to predict the class probabilities: INLINEFORM5 . We also conduct experiments where we first apply PCA to the extracted ResNet and FastText features before we feed them to the model.",
"To account for class imbalance, we train all described models with a binary cross entropy loss weighted by the inverted class frequency. We create a 80-20 class-conditional split of the dataset into a training and validation set. We train for 100 epochs and perform early stopping on the validation loss.",
"The F1 scores for the described methods in Table TABREF65 . We compare to an “all positive” baseline that always predicts that the landmark class is visible and observe that all presented models struggle to outperform this baseline. Although 256-dimensional ResNet features achieve slightly better precision on the validation set, it results in much worse recall and a lower F1 score. Our results indicate that perceptual grounding is a difficult task, which easily merits a paper of its own right, and so we leave further improvements (e.g. better text recognizers) for future work."
],
[
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Dataset split We split the full dataset by assigning entire 4x4 grids (independent of the target location) to the train, valid or test set. Specifically, we design the split such that the valid set contains at least one intersection (out of four) is not part of the train set. For the test set, all four intersections are novel. See our source code, available at URL ANONYMIZED, for more details on how this split is realized.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Example",
"Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Guide: Hello, what are you near?",
"Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT",
"Tourist: Hello, in front of me is a Brooks Brothers",
"Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT",
"Guide: Is that a shop or restaurant?",
"Tourist: ACTION:TURNLEFT",
"Tourist: It is a clothing shop.",
"Tourist: ACTION:TURNLEFT",
"Guide: You need to go to the intersection in the northwest corner of the map",
"Tourist: ACTION:TURNLEFT",
"Tourist: There appears to be a bank behind me.",
"Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Guide: Ok, turn left then go straight up that road",
"Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT",
" ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT",
"Guide: There should be shops on two of the corners but you",
" need to go to the corner without a shop.",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT",
"Guide: let me know when you get there.",
"Tourist: on my left is Radio city Music hall",
"Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Tourist: I can't go straight any further.",
"Guide: ok. turn so that the theater is on your right.",
"Guide: then go straight",
"Tourist: That would be going back the way I came",
"Guide: yeah. I was looking at the wrong bank",
"Tourist: I'll notify when I am back at the brooks brothers, and the bank.",
"Tourist: ACTION:TURNRIGHT",
"Guide: make a right when the bank is on your left",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT",
"Tourist: Making the right at the bank.",
"Tourist: ACTION:FORWARD ACTION:FORWARD",
"Tourist: I can't go that way.",
"Tourist: ACTION:TURNLEFT",
"Tourist: Bank is ahead of me on the right",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT",
"Guide: turn around on that intersection",
"Tourist: I can only go to the left or back the way I just came.",
"Tourist: ACTION:TURNLEFT",
"Guide: you're in the right place. do you see shops on the corners?",
"Guide: If you're on the corner with the bank, cross the street",
"Tourist: I'm back where I started by the shop and the bank.",
"Tourist: ACTION:TURNRIGHT",
"Guide: on the same side of the street?",
"Tourist: crossing the street now",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT",
"Tourist: there is an I love new york shop across the street on the left from me now",
"Tourist: ACTION:TURNRIGHT ACTION:FORWARD",
"Guide: ok. I'll see if it's right.",
"Guide: EVALUATE_LOCATION",
"Guide: It's not right.",
"Tourist: What should I be on the look for?",
"Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Guide: There should be shops on two corners but you need to be on one of the corners",
" without the shop.",
"Guide: Try the other corner.",
"Tourist: this intersection has 2 shop corners and a bank corner",
"Guide: yes. that's what I see on the map.",
"Tourist: should I go to the bank corner? or one of the shop corners?",
" or the blank corner (perhaps a hotel)",
"Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Guide: Go to the one near the hotel. The map says the hotel is a little",
" further down but it might be a little off.",
"Tourist: It's a big hotel it's possible.",
"Tourist: ACTION:FORWARD ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT",
"Tourist: I'm on the hotel corner",
"Guide: EVALUATE_LOCATION"
]
],
"section_name": [
null,
"Introduction",
"Talk The Walk",
"Task",
"Data Collection",
"Dataset Statistics",
"Experiments",
"Tourist Localization",
"Model",
"The Tourist",
"The Guide",
"Comparisons",
"Results and Discussion",
"Analysis of Localization Task",
"Emergent Language Localization",
"Natural Language Localization",
"Localization-based Baseline",
"Conclusion",
"Related Work",
"Implementation Details",
"Additional Natural Language Experiments",
"Tourist Generation Models",
"Localization from Human Utterances",
"Visualizing MASC predictions",
"Evaluation on Full Setup",
"Landmark Classification",
"Dataset Details"
]
} | {
"answers": [
{
"annotation_id": [
"2e3c476fd6c267447136656da446e9bb41953f03",
"83b6b215aff8b6d9e9fa3308c962e0a916725a78"
],
"answer": [
{
"evidence": [
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk)."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location. The guide has access to a map and knows the target location, but does not know where the tourist is; the tourist has a 360-degree view of the world, but knows neither the target location on the map nor the way to it. The agents need to work together through communication in order to successfully solve the task. An example of the task is given in Figure FIGREF3 .",
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location.",
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"73af0af52c32977bb9ccbd3aa9fb3294b5883647"
],
"answer": [
{
"evidence": [
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs."
],
"extractive_spans": [
"crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"d214afafe6bd69ae7f9c19125ce11b923ef6e105"
],
"answer": [
{
"evidence": [
"Tourist: I can't go straight any further.",
"Guide: ok. turn so that the theater is on your right.",
"Guide: then go straight",
"Tourist: That would be going back the way I came",
"Guide: yeah. I was looking at the wrong bank",
"Tourist: I'll notify when I am back at the brooks brothers, and the bank.",
"Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Guide: make a right when the bank is on your left",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT",
"Tourist: Making the right at the bank.",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT",
"Tourist: I can't go that way.",
"Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT",
"Tourist: Bank is ahead of me on the right",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT",
"Guide: turn around on that intersection",
"Tourist: I can only go to the left or back the way I just came.",
"Guide: you're in the right place. do you see shops on the corners?",
"Guide: If you're on the corner with the bank, cross the street",
"Tourist: I'm back where I started by the shop and the bank."
],
"extractive_spans": [],
"free_form_answer": "English",
"highlighted_evidence": [
"Tourist: I can't go straight any further.\n\nGuide: ok. turn so that the theater is on your right.\n\nGuide: then go straight\n\nTourist: That would be going back the way I came\n\nGuide: yeah. I was looking at the wrong bank\n\nTourist: I'll notify when I am back at the brooks brothers, and the bank.\n\nTourist: ACTION:TURNRIGHT\n\nGuide: make a right when the bank is on your left\n\nTourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT\n\nTourist: Making the right at the bank.\n\nTourist: ACTION:FORWARD ACTION:FORWARD\n\nTourist: I can't go that way.\n\nTourist: ACTION:TURNLEFT\n\nTourist: Bank is ahead of me on the right\n\nTourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT\n\nGuide: turn around on that intersection\n\nTourist: I can only go to the left or back the way I just came.\n\nTourist: ACTION:TURNLEFT\n\nGuide: you're in the right place. do you see shops on the corners?\n\nGuide: If you're on the corner with the bank, cross the street\n\nTourist: I'm back where I started by the shop and the bank.\n\nTourist: ACTION:TURNRIGHT"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"4acbf4a7c3f8dc02bc259031930c18db54159fa1"
],
"answer": [
{
"evidence": [
"In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model leads to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work."
],
"extractive_spans": [
"localization accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"09a25106160ae412e6a625f9b056e12d2f98ec82"
],
"answer": [
{
"evidence": [
"Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication."
],
"extractive_spans": [
" dataset on Mechanical Turk involving human perception, action and communication"
],
"free_form_answer": "",
"highlighted_evidence": [
" Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Did the authors use crowdsourcing platforms?",
"How was the dataset collected?",
"What language do the agents talk in?",
"What evaluation metrics did the authors look at?",
"What data did they use?"
],
"question_id": [
"0cd0755ac458c3bafbc70e4268c1e37b87b9721b",
"c1ce652085ef9a7f02cb5c363ce2b8757adbe213",
"96be67b1729c3a91ddf0ec7d6a80f2aa75e30a30",
"b85ab5f862221fac819cf2fef239bcb08b9cafc6",
"7e34501255b89d64b9598b409d73f96489aafe45"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Example of the Talk The Walk task: two agents, a “tourist” and a “guide”, interact with each other via natural language in order to have the tourist navigate towards the correct location. The guide has access to a map and knows the target location but not the tourist location, while the tourist does not know the way but can navigate in a 360-degree street view environment.",
"Table 1: Talk The Walk grounds human generated dialogue in (real-life) perception and action.",
"Table 2: Accuracy results for tourist localization with emergent language, showing continuous (Cont.) and discrete (Disc.) communication, along with the prediction upper bound. T denotes the length of the path and a 3 in the “MASC” column indicates that the model is conditioned on the communicated actions.",
"Table 3: Localization accuracy of tourist communicating in natural language.",
"Table 5: Localization given last {1, 3, 5} dialogue utterances (including the guide). We observe that 1) performance increases when more utterances are included; and 2) MASC outperforms no-MASC in all cases; and 3) mean T̂ increases when more dialogue context is included.",
"Table 7: Full task performance of localization models trained on human and random trajectories. There are small benefits for training on random trajectories, but the most important hyperparameter is to condition the tourist utterance on a single observation (i.e. trajectories of size T = 0.)",
"Table 6: Localization performance using pretrained tourist (via imitation learning) with beam search decoding of varying beam size. We find that larger beam-sizes lead to worse localization performance.",
"Table 8: Samples from the tourist models communicating in natural language.",
"Figure 2: We show MASC values of two action sequences for tourist localization via discrete communication with T = 3 actions. In general, we observe that the first action always corresponds to the correct state-transition, whereas the second and third are sometimes mixed. For instance, in the top example, the first two actions are correctly predicted but the third action is not (as the MASC corresponds to a “no action”). In the bottom example, the second action appears as the third MASC.",
"Table 9: Accuracy of localization models on full task, using evaluation protocol defined in Algorithm 1. We report the average over 3 runs.",
"Figure 3: Result of running the text recognizer of [20] on four examples of the Hell’s Kitchen neighborhood. Top row: two positive examples. Bottom row: example of false negative (left) and many false positives (right)",
"Figure 4: Frequency of landmark classes",
"Table 10: Results for landmark classification.",
"Figure 5: Map of New York City with red rectangles indicating the captured neighborhoods of the Talk The Walk dataset.",
"Figure 6: Set of instructions presented to turkers before starting their first task.",
"Figure 7: (cont.) Set of instructions presented to turkers before starting their first task."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"13-Table5-1.png",
"13-Table7-1.png",
"13-Table6-1.png",
"14-Table8-1.png",
"14-Figure2-1.png",
"15-Table9-1.png",
"17-Figure3-1.png",
"18-Figure4-1.png",
"18-Table10-1.png",
"19-Figure5-1.png",
"21-Figure6-1.png",
"22-Figure7-1.png"
]
} | [
"What language do the agents talk in?"
] | [
[
"1807.03367-Dataset Details-40",
"1807.03367-Dataset Details-25",
"1807.03367-Dataset Details-39",
"1807.03367-Dataset Details-2",
"1807.03367-Dataset Details-20",
"1807.03367-Dataset Details-4",
"1807.03367-Dataset Details-38",
"1807.03367-Dataset Details-27",
"1807.03367-Dataset Details-44",
"1807.03367-Dataset Details-31",
"1807.03367-Dataset Details-28",
"1807.03367-Dataset Details-37",
"1807.03367-Dataset Details-43",
"1807.03367-Dataset Details-32",
"1807.03367-Dataset Details-35",
"1807.03367-Dataset Details-33",
"1807.03367-Dataset Details-24",
"1807.03367-Dataset Details-42",
"1807.03367-Dataset Details-26",
"1807.03367-Dataset Details-29"
]
] | [
"English"
] | 62 |
1910.03891 | Learning High-order Structural and Attribute information by Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding | The goal of knowledge graph representation learning is to encode both entities and relations into a low-dimensional embedding space. Many recent works have demonstrated the benefits of knowledge graph embedding on knowledge graph completion tasks, such as relation extraction. However, we observe that: 1) existing methods take only direct relations between entities into consideration and fail to express high-order structural relationships between entities; 2) these methods leverage only the relation triples of KGs while ignoring a large number of attribute triples that encode rich semantic information. To overcome these limitations, this paper proposes a novel knowledge graph embedding method, named KANE, which is inspired by recent developments in graph convolutional networks (GCN). KANE can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework. Empirical results on three datasets show that KANE significantly outperforms seven state-of-the-art methods. Further analysis verifies the efficiency of our method and the benefits brought by the attention mechanism. | {
"paragraphs": [
[
"In the past decade, many large-scale Knowledge Graphs (KGs), such as Freebase BIBREF0, DBpedia BIBREF1 and YAGO BIBREF2 have been built to represent human complex knowledge about the real-world in the machine-readable format. The facts in KGs are usually encoded in the form of triples $(\\textit {head entity}, relation, \\textit {tail entity})$ (denoted $(h, r, t)$ in this study) through the Resource Description Framework, e.g.,$(\\textit {Donald Trump}, Born In, \\textit {New York City})$. Figure FIGREF2 shows the subgraph of knowledge graph about the family of Donald Trump. In many KGs, we can observe that some relations indicate attributes of entities, such as the $\\textit {Born}$ and $\\textit {Abstract}$ in Figure FIGREF2, and others indicates the relations between entities (the head entity and tail entity are real world entity). Hence, the relationship in KG can be divided into relations and attributes, and correspondingly two types of triples, namely relation triples and attribute triples BIBREF3. A relation triples in KGs represents relationship between entities, e.g.,$(\\textit {Donald Trump},Father of, \\textit {Ivanka Trump})$, while attribute triples denote a literal attribute value of an entity, e.g.,$(\\textit {Donald Trump},Born, \\textit {\"June 14, 1946\"})$.",
"Knowledge graphs have became important basis for many artificial intelligence applications, such as recommendation system BIBREF4, question answering BIBREF5 and information retrieval BIBREF6, which is attracting growing interests in both academia and industry communities. A common approach to apply KGs in these artificial intelligence applications is through embedding, which provide a simple method to encode both entities and relations into a continuous low-dimensional embedding spaces. Hence, learning distributional representation of knowledge graph has attracted many research attentions in recent years. TransE BIBREF7 is a seminal work in representation learning low-dimensional vectors for both entities and relations. The basic idea behind TransE is that the embedding $\\textbf {t}$ of tail entity should be close to the head entity's embedding $\\textbf {r}$ plus the relation vector $\\textbf {t}$ if $(h, r, t)$ holds, which indicates $\\textbf {h}+\\textbf {r}\\approx \\textbf {t}$. This model provide a flexible way to improve the ability in completing the KGs, such as predicating the missing items in knowledge graph. Since then, several methods like TransH BIBREF8 and TransR BIBREF9, which represent the relational translation in other effective forms, have been proposed. Recent attempts focused on either incorporating extra information beyond KG triples BIBREF10, BIBREF11, BIBREF12, BIBREF13, or designing more complicated strategies BIBREF14, BIBREF15, BIBREF16.",
"While these methods have achieved promising results in KG completion and link predication, existing knowledge graph embedding methods still have room for improvement. First, TransE and its most extensions only take direct relations between entities into consideration. We argue that the high-order structural relationship between entities also contain rich semantic relationships and incorporating these information can improve model performance. For example the fact $\\textit {Donald Trump}\\stackrel{Father of}{\\longrightarrow }\\textit {Ivanka Trump}\\stackrel{Spouse}{\\longrightarrow }\\textit {Jared Kushner} $ indicates the relationship between entity Donald Trump and entity Jared Kushner. Several path-based methods have attempted to take multiple-step relation paths into consideration for learning high-order structural information of KGs BIBREF17, BIBREF18. But note that huge number of paths posed a critical complexity challenge on these methods. In order to enable efficient path modeling, these methods have to make approximations by sampling or applying path selection algorithm. We argue that making approximations has a large impact on the final performance.",
"Second, to the best of our knowledge, most existing knowledge graph embedding methods just leverage relation triples of KGs while ignoring a large number of attribute triples. Therefore, these methods easily suffer from sparseness and incompleteness of knowledge graph. Even worse, structure information usually cannot distinguish the different meanings of relations and entities in different triples. We believe that these rich information encoded in attribute triples can help explore rich semantic information and further improve the performance of knowledge graph. For example, we can learn date of birth and abstraction from values of Born and Abstract about Donald Trump in Figure FIGREF2. There are a huge number of attribute triples in real KGs, for example the statistical results in BIBREF3 shows attribute triples are three times as many as relationship triples in English DBpedia (2016-04). Recent a few attempts try to incorporate attribute triples BIBREF11, BIBREF12. However, these are two limitations existing in these methods. One is that only a part of attribute triples are used in the existing methods, such as only entity description is used in BIBREF12. The other is some attempts try to jointly model the attribute triples and relation triples in one unified optimization problem. The loss of two kinds triples has to be carefully balanced during optimization. For example, BIBREF3 use hyper-parameters to weight the loss of two kinds triples in their models.",
"Considering limitations of existing knowledge graph embedding methods, we believe it is of critical importance to develop a model that can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner. Towards this end, inspired by the recent developments of graph convolutional networks (GCN) BIBREF19, which have the potential of achieving the goal but have not been explored much for knowledge graph embedding, we propose Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding (KANE). The key ideal of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representations of given entity. Specifically, two carefully designs are equipped in KANE to correspondingly address the above two challenges: 1) recursive embedding propagation based on relation triples, which updates a entity embedding. Through performing such recursively embedding propagation, the high-order structural information of kGs can be successfully captured in a linear time complexity; and 2) multi-head attention-based aggregation. The weight of each attribute triples can be learned through applying the neural attention mechanism BIBREF20.",
"In experiments, we evaluate our model on two KGs tasks including knowledge graph completion and entity classification. Experimental results on three datasets shows that our method can significantly outperforms state-of-arts methods.",
"The main contributions of this study are as follows:",
"1) We highlight the importance of explicitly modeling the high-order structural and attribution information of KGs to provide better knowledge graph embedding.",
"2) We proposed a new method KANE, which achieves can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework.",
"3) We conduct experiments on three datasets, demonstrating the effectiveness of KANE and its interpretability in understanding the importance of high-order relations."
],
[
"In recent years, there are many efforts in Knowledge Graph Embeddings for KGs aiming to encode entities and relations into a continuous low-dimensional embedding spaces. Knowledge Graph Embedding provides a very simply and effective methods to apply KGs in various artificial intelligence applications. Hence, Knowledge Graph Embeddings has attracted many research attentions in recent years. The general methodology is to define a score function for the triples and finally learn the representations of entities and relations by minimizing the loss function $f_r(h,t)$, which implies some types of transformations on $\\textbf {h}$ and $\\textbf {t}$. TransE BIBREF7 is a seminal work in knowledge graph embedding, which assumes the embedding $\\textbf {t}$ of tail entity should be close to the head entity's embedding $\\textbf {r}$ plus the relation vector $\\textbf {t}$ when $(h, r, t)$ holds as mentioned in section “Introduction\". Hence, TransE defines the following loss function:",
"TransE regarding the relation as a translation between head entity and tail entity is inspired by the word2vec BIBREF21, where relationships between words often correspond to translations in latent feature space. This model achieves a good trade-off between computational efficiency and accuracy in KGs with thousands of relations. but this model has flaws in dealing with one-to-many, many-to-one and many-to-many relations.",
"In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representation in different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., entity space and relation spaces, and performs translation from entity spaces to relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrix. Recent attempts can be divided into two categories: (i) those which tries to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relations paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which tries to design more complicated strategies, e.g., deep neural network models BIBREF24.",
"Except for TransE and its extensions, some efforts measure plausibility by matching latent semantics of entities and relations. The basic idea behind these models is that the plausible triples of a KG is assigned low energies. For examples, Distant Model BIBREF25 defines two different projections for head and tail entity in a specific relation, i.e., $\\textbf {M}_{r,1}$ and $\\textbf {M}_{r,2}$. It represents the vectors of head and tail entity can be transformed by these two projections. The loss function is $f_r(h,t)=||\\textbf {M}_{r,1}\\textbf {h}-\\textbf {M}_{r,2}\\textbf {t}||_{1}$.",
"Our KANE is conceptually advantageous to existing methods in that: 1) it directly factors high-order relations into the predictive model in linear time which avoids the labor intensive process of materializing paths, thus is more efficient and convenient to use; 2) it directly encodes all attribute triples in learning representation of entities which can capture rich semantic information and further improve the performance of knowledge graph embedding, and 3) KANE can directly factors high-order relations and attribute information into the predictive model in an efficient, explicit and unified manner, thus all related parameters are tailored for optimizing the embedding objective."
],
[
"In this study, wo consider two kinds of triples existing in KGs: relation triples and attribute triples. Relation triples denote the relation between entities, while attribute triples describe attributes of entities. Both relation and attribute triples denotes important information about entity, we will take both of them into consideration in the task of learning representation of entities. We let $I $ denote the set of IRIs (Internationalized Resource Identifier), $B $ are the set of blank nodes, and $L $ are the set of literals (denoted by quoted strings). The relation triples and attribute triples can be formalized as follows:",
"Definition 1. Relation and Attribute Triples: A set of Relation triples $ T_{R} $ can be represented by $ T_{R} \\subset E \\times R \\times E $, where $E \\subset I \\cup B $ is set of entities, $R \\subset I$ is set of relations between entities. Similarly, $ T_{A} \\subset E \\times R \\times A $ is the set of attribute triples, where $ A \\subset I \\cup B \\cup L $ is the set of attribute values.",
"Definition 2. Knowledge Graph: A KG consists of a combination of relation triples in the form of $ (h, r, t)\\in T_{R} $, and attribute triples in form of $ (h, r, a)\\in T_{A} $. Formally, we represent a KG as $G=(E,R,A,T_{R},T_{A})$, where $E=\\lbrace h,t|(h,r,t)\\in T_{R} \\cup (h,r,a)\\in T_{A}\\rbrace $ is set of entities, $R =\\lbrace r|(h,r,t)\\in T_{R} \\cup (h,r,a)\\in T_{A}\\rbrace $ is set of relations, $A=\\lbrace a|(h,r,a)\\in T_{A}\\rbrace $, respectively.",
"The purpose of this study is try to use embedding-based model which can capture both high-order structural and attribute information of KGs that assigns a continuous representations for each element of triples in the form $ (\\textbf {h}, \\textbf {r}, \\textbf {t})$ and $ (\\textbf {h}, \\textbf {r}, \\textbf {a})$, where Boldfaced $\\textbf {h}\\in \\mathbb {R}^{k}$, $\\textbf {r}\\in \\mathbb {R}^{k}$, $\\textbf {t}\\in \\mathbb {R}^{k}$ and $\\textbf {a}\\in \\mathbb {R}^{k}$ denote the embedding vector of head entity $h$, relation $r$, tail entity $t$ and attribute $a$ respectively.",
"Next, we detail our proposed model which models both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework."
],
[
"In this section, we present the proposed model in detail. We first introduce the overall framework of KANE, then discuss the input embedding of entities, relations and values in KGs, the design of embedding propagation layers based on graph attention network and the loss functions for link predication and entity classification task, respectively."
],
[
"The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, the whole triples of knowledge graph as input. The task of attribute embedding lays is embedding every value in attribute triples into a continuous vector space while preserving the semantic information. To capture both high-order structural information of KGs, we used an attention-based embedding propagation method. This method can recursively propagate the embeddings of entities from an entity's neighbors, and aggregate the neighbors with different weights. The final embedding of entities, relations and values are feed into two different deep neural network for two different tasks including link predication and entity classification."
],
[
"The value in attribute triples usually is sentence or a word. To encode the representation of value from its sentence or word, we need to encode the variable-length sentences to a fixed-length vector. In this study, we adopt two different encoders to model the attribute value.",
"Bag-of-Words Encoder. The representation of attribute value can be generated by a summation of all words embeddings of values. We denote the attribute value $a$ as a word sequence $a = w_{1},...,w_{n}$, where $w_{i}$ is the word at position $i$. The embedding of $\\textbf {a}$ can be defined as follows.",
"where $\\textbf {w}_{i}\\in \\mathbb {R}^{k}$ is the word embedding of $w_{i}$.",
"Bag-of-Words Encoder is a simple and intuitive method, which can capture the relative importance of words. But this method suffers in that two strings that contains the same words with different order will have the same representation.",
"LSTM Encoder. In order to overcome the limitation of Bag-of-Word encoder, we consider using LSTM networks to encoder a sequence of words in attribute value into a single vector. The final hidden state of the LSTM networks is selected as a representation of the attribute value.",
"where $f_{lstm}$ is the LSTM network."
],
[
"Next we describe the details of recursively embedding propagation method building upon the architecture of graph convolution network. Moreover, by exploiting the idea of graph attention network, out method learn to assign varying levels of importance to entity in every entity's neighborhood and can generate attentive weights of cascaded embedding propagation. In this study, embedding propagation layer consists of two mainly components: attentive embedding propagation and embedding aggregation. Here, we start by describing the attentive embedding propagation.",
"Attentive Embedding Propagation: Considering an KG $G$, the input to our layer is a set of entities, relations and attribute values embedding. We use $\\textbf {h}\\in \\mathbb {R}^{k}$ to denote the embedding of entity $h$. The neighborhood of entity $h$ can be described by $\\mathcal {N}_{h} = \\lbrace t,a|(h,r,t)\\in T_{R} \\cup (h,r,a)\\in T_{A}\\rbrace $. The purpose of attentive embedding propagation is encode $\\mathcal {N}_{h}$ and output a vector $\\vec{\\textbf {h}}$ as the new embedding of entity $h$.",
"In order to obtain sufficient expressive power, one learnable linear transformation $\\textbf {W}\\in \\mathbb {R}^{k^{^{\\prime }} \\times k}$ is adopted to transform the input embeddings into higher level feature space. In this study, we take a triple $(h,r,t)$ as example and the output a vector $\\vec{\\textbf {h}}$ can be formulated as follows:",
"where $\\pi (h,r,t)$ is attention coefficients which indicates the importance of entity's $t$ to entities $h$ .",
"In this study, the attention coefficients also control how many information being propagated from its neighborhood through the relation. To make attention coefficients easily comparable between different entities, the attention coefficient $\\pi (h,r,t)$ can be computed using a softmax function over all the triples connected with $h$. The softmax function can be formulated as follows:",
"Hereafter, we implement the attention coefficients $\\pi (h,r,t)$ through a single-layer feedforward neural network, which is formulated as follows:",
"where the leakyRelu is selected as activation function.",
"As shown in Equation DISPLAY_FORM13, the attention coefficient score is depend on the distance head entity $h$ and the tail entity $t$ plus the relation $r$, which follows the idea behind TransE that the embedding $\\textbf {t}$ of head entity should be close to the tail entity's embedding $\\textbf {r}$ plus the relation vector $\\textbf {t}$ if $(h, r, t)$ holds.",
"Embedding Aggregation. To stabilize the learning process of attention, we perform multi-head attention on final layer. Specifically, we use $m$ attention mechanism to execute the transformation of Equation DISPLAY_FORM11. A aggregators is needed to combine all embeddings of multi-head graph attention layer. In this study, we adapt two types of aggregators:",
"Concatenation Aggregator concatenates all embeddings of multi-head graph attention, followed by a nonlinear transformation:",
"where $\\mathop {\\Big |\\Big |}$ represents concatenation, $ \\pi (h,r,t)^{i}$ are normalized attention coefficient computed by the $i$-th attentive embedding propagation, and $\\textbf {W}^{i}$ denotes the linear transformation of input embedding.",
"Averaging Aggregator sums all embeddings of multi-head graph attention and the output embedding in the final is calculated applying averaging:",
"In order to encode the high-order connectivity information in KGs, we use multiple embedding propagation layers to gathering the deep information propagated from the neighbors. More formally, the embedding of entity $h$ in $l$-th layers can be defined as follows:",
"After performing $L$ embedding propagation layers, we can get the final embedding of entities, relations and attribute values, which include both high-order structural and attribute information of KGs. Next, we discuss the loss functions of KANE for two different tasks and introduce the learning and optimization detail."
],
[
"Here, we introduce the learning and optimization details for our method. Two different loss functions are carefully designed fro two different tasks of KG, which include knowledge graph completion and entity classification. Next details of these two loss functions are discussed.",
"knowledge graph completion. This task is a classical task in knowledge graph representation learning community. Specifically, two subtasks are included in knowledge graph completion: entity predication and link predication. Entity predication aims to infer the impossible head/tail entities in testing datasets when one of them is missing, while the link predication focus on complete a triple when relation is missing. In this study, we borrow the idea of translational scoring function from TransE, which the embedding $\\textbf {t}$ of tail entity should be close to the head entity's embedding $\\textbf {r}$ plus the relation vector $\\textbf {t}$ if $(h, r, t)$ holds, which indicates $d(h+r,t)= ||\\textbf {h}+\\textbf {r}- \\textbf {t}||$. Specifically, we train our model using hinge-loss function, given formally as",
"where $\\gamma >0$ is a margin hyper-parameter, $[x ]_{+}$ denotes the positive part of $x$, $T=T_{R} \\cup T_{A}$ is the set of valid triples, and $T^{\\prime }$ is set of corrupted triples which can be formulated as:",
"Entity Classification. For the task of entity classification, we simple uses a fully connected layers and binary cross-entropy loss (BCE) over sigmoid activation on the output of last layer. We minimize the binary cross-entropy on all labeled entities, given formally as:",
"where $E_{D}$ is the set of entities indicates have labels, $C$ is the dimension of the output features, which is equal to the number of classes, $y_{ej}$ is the label indicator of entity $e$ for $j$-th class, and $\\sigma (x)$ is sigmoid function $\\sigma (x) = \\frac{1}{1+e^{-x}}$.",
"We optimize these two loss functions using mini-batch stochastic gradient decent (SGD) over the possible $\\textbf {h}$, $\\textbf {r}$, $\\textbf {t}$, with the chin rule that applying to update all parameters. At each step, we update the parameter $\\textbf {h}^{\\tau +1}\\leftarrow \\textbf {h}^{\\tau }-\\lambda \\nabla _{\\textbf {h}}\\mathcal {L}$, where $\\tau $ labels the iteration step and $\\lambda $ is the learning rate."
],
[
"In this study, we evaluate our model on three real KG including two typical large-scale knowledge graph: Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K. The statistics of datasets are listed in Table TABREF24."
],
[
"In evaluation, we compare our method with three types of models:",
"1) Typical Methods. Three typical knowledge graph embedding methods includes TransE, TransR and TransH are selected as baselines. For TransE, the dissimilarity measure is implemented with L1-norm, and relation as well as entity are replaced during negative sampling. For TransR, we directly use the source codes released in BIBREF9. In order for better performance, the replacement of relation in negative sampling is utilized according to the suggestion of author.",
"2) Path-based Methods. We compare our method with two typical path-based model include PTransE, and ALL-PATHS BIBREF18. PTransE is the first method to model relation path in KG embedding task, and ALL-PATHS improve the PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length.",
"3) Attribute-incorporated Methods. Several state-of-art attribute-incorporated methods including R-GCN BIBREF24 and KR-EAR BIBREF26 are used to compare with our methods on three real datasets.",
"In addition, four variants of KANE which each of which correspondingly defines its specific way of computing the attribute value embedding and embedding aggregation are used as baseline in evaluation. In this study, we name four three variants as KANE (BOW+Concatenation), KANE (BOW+Average), and KANE (LSTM+Concatenation), KANE (LSTM+Average). Our method is learned with mini-batch SGD. As for hyper-parameters, we select batch size among {16, 32, 64, 128}, learning rate $\\lambda $ for SGD among {0.1, 0.01, 0.001}. For a fair comparison, we also set the vector dimensions of all entity and relation to the same $k \\in ${128, 258, 512, 1024}, the same dissimilarity measure $l_{1}$ or $l_{2}$ distance in loss function, and the same number of negative examples $n$ among {1, 10, 20, 40}. The training time on both data sets is limited to at most 400 epochs. The best models are selected by a grid search and early stopping on validation sets."
],
[
"In entity classification, the aim is to predicate the type of entity. For all baseline models, we first get the entity embedding in different datasets through default parameter settings as in their original papers or implementations.Then, Logistic Regression is used as classifier, which regards the entity's embeddings as feature of classifier. In evaluation, we random selected 10% of training set as validation set and accuracy as evaluation metric."
],
[
"Experimental results of entity classification on the test sets of all the datasets is shown in Table TABREF25. The results is clearly demonstrate that our proposed method significantly outperforms state-of-art results on accuracy for three datasets. For more in-depth performance analysis, we note: (1) Among all baselines, Path-based methods and Attribute-incorporated methods outperform three typical methods. This indicates that incorporating extra information can improve the knowledge graph embedding performance; (2) Four variants of KANE always outperform baseline methods. The main reasons why KANE works well are two fold: 1) KANE can capture high-order structural information of KGs in an efficient, explicit manner and passe these information to their neighboring; 2) KANE leverages rich information encoded in attribute triples. These rich semantic information can further improve the performance of knowledge graph; (3) The variant of KANE that use LSTM Encoder and Concatenation aggregator outperform other variants. The main reasons is that LSTM encoder can distinguish the word order and concatenation aggregator combine all embedding of multi-head attention in a higher leaver feature space, which can obtain sufficient expressive power."
],
[
"Figure FIGREF30 shows the test accuracy with increasing epoch on DBP24K and Game30K. We can see that test accuracy first rapidly increased in the first ten iterations, but reaches a stable stages when epoch is larger than 40. Figure FIGREF31 shows test accuracy with different embedding size and training data proportions. We can note that too small embedding size or training data proportions can not generate sufficient global information. In order to further analysis the embeddings learned by our method, we use t-SNE tool BIBREF27 to visualize the learned embedding. Figure FIGREF32 shows the visualization of 256 dimensional entity's embedding on Game30K learned by KANE, R-GCN, PransE and TransE. We observe that our method can learn more discriminative entity's embedding than other other methods."
],
[
"The purpose of knowledge graph completion is to complete a triple $(h, r, t)$ when one of $h, r, t$ is missing, which is used many literature BIBREF7. Two measures are considered as our evaluation metrics: (1) the mean rank of correct entities or relations (Mean Rank); (2) the proportion of correct entities or relations ranked in top1 (Hits@1, for relations) or top 10 (Hits@10, for entities). Following the setting in BIBREF7, we also adopt the two evaluation settings named \"raw\" and \"filter\" in order to avoid misleading behavior.",
"The results of entity and relation predication on FB24K are shown in the Table TABREF33. This results indicates that KANE still outperforms other baselines significantly and consistently. This also verifies the necessity of modeling high-order structural and attribute information of KGs in Knowledge graph embedding models."
],
[
"Many recent works have demonstrated the benefits of knowledge graph embedding in knowledge graph completion, such as relation extraction. However, We argue that knowledge graph embedding method still have room for improvement. First, TransE and its most extensions only take direct relations between entities into consideration. Second, most existing knowledge graph embedding methods just leverage relation triples of KGs while ignoring a large number of attribute triples. In order to overcome these limitation, inspired by the recent developments of graph convolutional networks, we propose a new knowledge graph embedding methods, named KANE. The key ideal of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representations of given entity. Empirical results on three datasets show that KANE significantly outperforms seven state-of-arts methods."
]
],
"section_name": [
"Introduction",
"Related Work",
"Problem Formulation",
"Proposed Model",
"Proposed Model ::: Overall Architecture",
"Proposed Model ::: Attribute Embedding Layer",
"Proposed Model ::: Embedding Propagation Layer",
"Proposed Model ::: Output Layer and Training Details",
"Experiments ::: Date sets",
"Experiments ::: Experiments Setting",
"Experiments ::: Entity Classification ::: Evaluation Protocol.",
"Experiments ::: Entity Classification ::: Test Performance.",
"Experiments ::: Entity Classification ::: Efficiency Evaluation.",
"Experiments ::: Knowledge Graph Completion",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"b27e860ab0d3f3d3c9f7fe0a2f8907d38965d7a2"
],
"answer": [
{
"evidence": [
"Experimental results of entity classification on the test sets of all the datasets is shown in Table TABREF25. The results is clearly demonstrate that our proposed method significantly outperforms state-of-art results on accuracy for three datasets. For more in-depth performance analysis, we note: (1) Among all baselines, Path-based methods and Attribute-incorporated methods outperform three typical methods. This indicates that incorporating extra information can improve the knowledge graph embedding performance; (2) Four variants of KANE always outperform baseline methods. The main reasons why KANE works well are two fold: 1) KANE can capture high-order structural information of KGs in an efficient, explicit manner and passe these information to their neighboring; 2) KANE leverages rich information encoded in attribute triples. These rich semantic information can further improve the performance of knowledge graph; (3) The variant of KANE that use LSTM Encoder and Concatenation aggregator outperform other variants. The main reasons is that LSTM encoder can distinguish the word order and concatenation aggregator combine all embedding of multi-head attention in a higher leaver feature space, which can obtain sufficient expressive power.",
"FLOAT SELECTED: Table 2: Entity classification results in accuracy. We run all models 10 times and report mean ± standard deviation. KANE significantly outperforms baselines on FB24K, DBP24K and Game30K."
],
"extractive_spans": [],
"free_form_answer": "Accuracy of best proposed method KANE (LSTM+Concatenation) are 0.8011, 0.8592, 0.8605 compared to best state-of-the art method R-GCN + LR 0.7721, 0.8193, 0.8229 on three datasets respectively.",
"highlighted_evidence": [
"Experimental results of entity classification on the test sets of all the datasets is shown in Table TABREF25. The results is clearly demonstrate that our proposed method significantly outperforms state-of-art results on accuracy for three datasets.",
"FLOAT SELECTED: Table 2: Entity classification results in accuracy. We run all models 10 times and report mean ± standard deviation. KANE significantly outperforms baselines on FB24K, DBP24K and Game30K."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0015edbc5f0346d09d14eb8118aaf4d850f19556"
],
"answer": [
{
"evidence": [
"Figure FIGREF30 shows the test accuracy with increasing epoch on DBP24K and Game30K. We can see that test accuracy first rapidly increased in the first ten iterations, but reaches a stable stages when epoch is larger than 40. Figure FIGREF31 shows test accuracy with different embedding size and training data proportions. We can note that too small embedding size or training data proportions can not generate sufficient global information. In order to further analysis the embeddings learned by our method, we use t-SNE tool BIBREF27 to visualize the learned embedding. Figure FIGREF32 shows the visualization of 256 dimensional entity's embedding on Game30K learned by KANE, R-GCN, PransE and TransE. We observe that our method can learn more discriminative entity's embedding than other other methods."
],
"extractive_spans": [
"we use t-SNE tool BIBREF27 to visualize the learned embedding"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to further analysis the embeddings learned by our method, we use t-SNE tool BIBREF27 to visualize the learned embedding."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"60863ee85123b18acf4f57b81e292c1ce2f19fc1"
],
"answer": [
{
"evidence": [
"1) Typical Methods. Three typical knowledge graph embedding methods includes TransE, TransR and TransH are selected as baselines. For TransE, the dissimilarity measure is implemented with L1-norm, and relation as well as entity are replaced during negative sampling. For TransR, we directly use the source codes released in BIBREF9. In order for better performance, the replacement of relation in negative sampling is utilized according to the suggestion of author.",
"2) Path-based Methods. We compare our method with two typical path-based model include PTransE, and ALL-PATHS BIBREF18. PTransE is the first method to model relation path in KG embedding task, and ALL-PATHS improve the PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length.",
"3) Attribute-incorporated Methods. Several state-of-art attribute-incorporated methods including R-GCN BIBREF24 and KR-EAR BIBREF26 are used to compare with our methods on three real datasets."
],
"extractive_spans": [
"TransE, TransR and TransH",
"PTransE, and ALL-PATHS",
"R-GCN BIBREF24 and KR-EAR BIBREF26"
],
"free_form_answer": "",
"highlighted_evidence": [
"1) Typical Methods. Three typical knowledge graph embedding methods includes TransE, TransR and TransH are selected as baselines.",
"2) Path-based Methods. We compare our method with two typical path-based model include PTransE, and ALL-PATHS BIBREF18.",
"3) Attribute-incorporated Methods. Several state-of-art attribute-incorporated methods including R-GCN BIBREF24 and KR-EAR BIBREF26 are used to compare with our methods on three real datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7f3bec79e3400d3867b79b98f14c7b312b109ab7",
"c70897a9aaf396da5ce44f08ae000d6f238bfc88"
],
"answer": [
{
"evidence": [
"In this study, we evaluate our model on three real KG including two typical large-scale knowledge graph: Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K. The statistics of datasets are listed in Table TABREF24."
],
"extractive_spans": [
"FB24K",
"DBP24K",
"Game30K"
],
"free_form_answer": "",
"highlighted_evidence": [
"First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this study, we evaluate our model on three real KG including two typical large-scale knowledge graph: Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K. The statistics of datasets are listed in Table TABREF24."
],
"extractive_spans": [
"Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this study, we evaluate our model on three real KG including two typical large-scale knowledge graph: Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K. The statistics of datasets are listed in Table TABREF24."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"c722c3ab454198d9287cddd3713f3785a8ade0ef"
],
"answer": [
{
"evidence": [
"The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, the whole triples of knowledge graph as input. The task of attribute embedding lays is embedding every value in attribute triples into a continuous vector space while preserving the semantic information. To capture both high-order structural information of KGs, we used an attention-based embedding propagation method. This method can recursively propagate the embeddings of entities from an entity's neighbors, and aggregate the neighbors with different weights. The final embedding of entities, relations and values are feed into two different deep neural network for two different tasks including link predication and entity classification."
],
"extractive_spans": [
"To capture both high-order structural information of KGs, we used an attention-based embedding propagation method."
],
"free_form_answer": "",
"highlighted_evidence": [
"The task of attribute embedding lays is embedding every value in attribute triples into a continuous vector space while preserving the semantic information. To capture both high-order structural information of KGs, we used an attention-based embedding propagation method.",
"The final embedding of entities, relations and values are feed into two different deep neural network for two different tasks including link predication and entity classification."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e288360a2009adb48d0b87242ef71a9e1734a82b"
],
"answer": [
{
"evidence": [
"In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representation in different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., entity space and relation spaces, and performs translation from entity spaces to relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrix. Recent attempts can be divided into two categories: (i) those which tries to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relations paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which tries to design more complicated strategies, e.g., deep neural network models BIBREF24."
],
"extractive_spans": [
"entity types or concepts BIBREF13",
"relations paths BIBREF17",
" textual descriptions BIBREF11, BIBREF12",
"logical rules BIBREF23",
"deep neural network models BIBREF24"
],
"free_form_answer": "",
"highlighted_evidence": [
"Recent attempts can be divided into two categories: (i) those which tries to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relations paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which tries to design more complicated strategies, e.g., deep neural network models BIBREF24."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How much better is performance of proposed method than state-of-the-art methods in experiments?",
"What further analysis is done?",
"What seven state-of-the-art methods are used for comparison?",
"What three datasets are used to measure performance?",
"How does KANE capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner?",
"What are recent works on knowedge graph embeddings authors mention?"
],
"question_id": [
"52f7e42fe8f27d800d1189251dfec7446f0e1d3b",
"00e6324ecd454f5d4b2a4b27fcf4104855ff8ee2",
"aa0d67c2a1bc222d1f2d9e5d51824352da5bb6dc",
"cf0085c1d7bd9bc9932424e4aba4e6812d27f727",
"586b7470be91efe246c3507b05e30651ea6b9832",
"31b20a4bab09450267dfa42884227103743e3426"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Subgraph of a knowledge graph contains entities, relations and attributes.",
"Figure 2: Illustration of the KANE architecture.",
"Table 1: The statistics of datasets.",
"Table 2: Entity classification results in accuracy. We run all models 10 times and report mean ± standard deviation. KANE significantly outperforms baselines on FB24K, DBP24K and Game30K.",
"Figure 3: Test accuracy with increasing epoch.",
"Table 3: Results of knowledge graph completion (FB24K)",
"Figure 4: Test accuracy by varying parameter.",
"Figure 5: The t-SNE visualization of entity embeddings in Game30K."
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Figure3-1.png",
"7-Table3-1.png",
"7-Figure4-1.png",
"7-Figure5-1.png"
]
} | [
"How much better is performance of proposed method than state-of-the-art methods in experiments?"
] | [
[
"1910.03891-Experiments ::: Entity Classification ::: Test Performance.-0",
"1910.03891-6-Table2-1.png"
]
] | [
"Accuracy of best proposed method KANE (LSTM+Concatenation) are 0.8011, 0.8592, 0.8605 compared to best state-of-the art method R-GCN + LR 0.7721, 0.8193, 0.8229 on three datasets respectively."
] | 66 |
1610.00879 | A Computational Approach to Automatic Prediction of Drunk Texting | Alcohol abuse may lead to unsociable behavior such as crime, drunk driving, or privacy leaks. We introduce automatic drunk-texting prediction as the task of identifying whether a text was written when under the influence of alcohol. We experiment with tweets labeled using hashtags as distant supervision. Our classifiers use a set of N-gram and stylistic features to detect drunk tweets. Our observations present the first quantitative evidence that text contains signals that can be exploited to detect drunk-texting. | {
"paragraphs": [
[
"The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'.",
"A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction."
],
[
"Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0 , crime BIBREF1 , suicide attempts BIBREF2 , drunk driving BIBREF3 , and risky sexual behaviour BIBREF4 . suicide state that “those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse”. Thus, a drunk-texting prediction system can be used to identify individuals susceptible to these behaviours, or for investigative purposes after an incident.",
"Drunk-texting may also cause regret. Mail Goggles prompts a user to solve math questions before sending an email on weekend evenings. Some Android applications avoid drunk-texting by blocking outgoing texts at the click of a button. However, to the best of our knowledge, these tools require a user command to begin blocking. An ongoing text-based analysis will be more helpful, especially since it offers a more natural setting by monitoring stream of social media text and not explicitly seeking user input. Thus, automatic drunk-texting prediction will improve systems aimed to avoid regrettable drunk-texting. To the best of our knowledge, ours is the first study that does a quantitative analysis, in terms of prediction of the drunk state by using textual clues.",
"Several studies have studied linguistic traits associated with emotion expression and mental health issues, suicidal nature, criminal status, etc. BIBREF5 , BIBREF6 . NLP techniques have been used in the past to address social safety and mental health issues BIBREF7 ."
],
[
"Drunk-texting prediction is the task of classifying a text as drunk or sober. For example, a tweet `Feeling buzzed. Can't remember how the evening went' must be predicted as `drunk', whereas, `Returned from work late today, the traffic was bad' must be predicted as `sober'. The challenges are:"
],
[
"We use hashtag-based supervision to create our datasets, similar to tasks like emotion classification BIBREF8 . The tweets are downloaded using Twitter API (https://dev.twitter.com/). We remove non-Unicode characters, and eliminate tweets that contain hyperlinks and also tweets that are shorter than 6 words in length. Finally, hashtags used to indicate drunk or sober tweets are removed so that they provide labels, but do not act as features. The dataset is available on request. As a result, we create three datasets, each using a different strategy for sober tweets, as follows:",
"The drunk tweets for Datasets 1 and 2 are the same. Figure FIGREF9 shows a word-cloud for these drunk tweets (with stop words and forms of the word `drunk' removed), created using WordItOut. The size of a word indicates its frequency. In addition to topical words such as `bar', `bottle' and `wine', the word-cloud shows sentiment words such as `love' or `damn', along with profane words.",
"Heuristics other than these hashtags could have been used for dataset creation. For example, timestamps were a good option to account for time at which a tweet was posted. However, this could not be used because user's local times was not available, since very few users had geolocation enabled."
],
[
"The complete set of features is shown in Table TABREF7 . There are two sets of features: (a) N-gram features, and (b) Stylistic features. We use unigrams and bigrams as N-gram features- considering both presence and count.",
"Table TABREF7 shows the complete set of stylistic features of our prediction system. POS ratios are a set of features that record the proportion of each POS tag in the dataset (for example, the proportion of nouns/adjectives, etc.). The POS tags and named entity mentions are obtained from NLTK BIBREF9 . Discourse connectors are identified based on a manually created list. Spelling errors are identified using a spell checker by enchant. The repeated characters feature captures a situation in which a word contains a letter that is repeated three or more times, as in the case of happpy. Since drunk-texting is often associated with emotional expression, we also incorporate a set of sentiment-based features. These features include: count/presence of emoticons and sentiment ratio. Sentiment ratio is the proportion of positive and negative words in the tweet. To determine positive and negative words, we use the sentiment lexicon in mpqa. To identify a more refined set of words that correspond to the two classes, we also estimated 20 topics for the dataset by estimating an LDA model BIBREF10 . We then consider top 10 words per topic, for both classes. This results in 400 LDA-specific unigrams that are then used as features."
],
[
"Using the two sets of features, we train SVM classifiers BIBREF11 . We show the five-fold cross-validation performance of our features on Datasets 1 and 2, in Section SECREF17 , and on Dataset H in Section SECREF21 . Section SECREF22 presents an error analysis. Accuracy, positive/negative precision and positive/negative recall are shown as A, PP/NP and PR/NR respectively. `Drunk' forms the positive class, while `Sober' forms the negative class."
],
[
"Table TABREF14 shows the performance for five-fold cross-validation for Datasets 1 and 2. In case of Dataset 1, we observe that N-gram features achieve an accuracy of 85.5%. We see that our stylistic features alone exhibit degraded performance, with an accuracy of 75.6%, in the case of Dataset 1. Table TABREF16 shows top stylistic features, when trained on the two datasets. Spelling errors, POS ratios for nouns (POS_NOUN), length and sentiment ratios appear in both lists, in addition to LDA-based unigrams. However, negative recall reduces to a mere 3.2%. This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle. When both N-gram and stylistic features are used, there is negligible improvement. The accuracy for Dataset 2 increases from 77.9% to 78.1%. Precision/Recall metrics do not change significantly either. The best accuracy of our classifier is 78.1% for all features, and 75.6% for stylistic features. This shows that text-based clues can indeed be used for drunk-texting prediction."
],
[
"Using held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in the Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 is better than which is trained on Dataset 1."
],
[
"Some categories of errors that occur are:",
"Incorrect hashtag supervision: The tweet `Can't believe I lost my bag last night, literally had everything in! Thanks god the bar man found it' was marked with`#Drunk'. However, this tweet is not likely to be a drunk tweet, but describes a drunk episode in retrospective. Our classifier predicts it as sober.",
"Seemingly sober tweets: Human annotators as well as our classifier could not identify whether `Will you take her on a date? But really she does like you' was drunk, although the author of the tweet had marked it so. This example also highlights the difficulty of drunk-texting prediction.",
"Pragmatic difficulty: The tweet `National dress of Ireland is one's one vomit.. my family is lovely' was correctly identified by our human annotators as a drunk tweet. This tweet contains an element of humour and topic change, but our classifier could not capture it."
],
[
"In this paper, we introduce automatic drunk-texting prediction as the task of predicting a tweet as drunk or sober. First, we justify the need for drunk-texting prediction as means of identifying risky social behavior arising out of alcohol abuse, and the need to build tools that avoid privacy leaks due to drunk-texting. We then highlight the challenges of drunk-texting prediction: one of the challenges is selection of negative examples (sober tweets). Using hashtag-based supervision, we create three datasets annotated with drunk or sober labels. We then present SVM-based classifiers which use two sets of features: N-gram and stylistic features. Our drunk prediction system obtains a best accuracy of 78.1%. We observe that our stylistic features add negligible value to N-gram features. We use our heldout dataset to compare how our system performs against human annotators. While human annotators achieve an accuracy of 68.8%, our system reaches reasonably close and performs with a best accuracy of 64%.",
"Our analysis of the task and experimental findings make a case for drunk-texting prediction as a useful and feasible NLP application."
]
],
"section_name": [
"Introduction",
"Motivation",
"Definition and Challenges",
"Dataset Creation",
"Feature Design",
"Evaluation",
"Performance for Datasets 1 and 2",
"Performance for Held-out Dataset H",
"Error Analysis",
"Conclusion & Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"9673c8660ce783e03520c8e10c5ec0167cb2bce2"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1: Word cloud for drunk tweets"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: Word cloud for drunk tweets"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"c3aebfe695d105d331a1b20e57ea7351ff9a6a0a"
],
"answer": [
{
"evidence": [
"A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"f8c23d7f79a2917e681146c5ac96156f70d8052b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5: Performance of human evaluators and our classifiers (trained on all features), for Dataset-H as the test set"
],
"extractive_spans": [],
"free_form_answer": "Human evaluators",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Performance of human evaluators and our classifiers (trained on all features), for Dataset-H as the test set"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"292a984fb6a227b6a54d3c36bde5d550a67b8329",
"7c7c413a0794b49fd5a8ec103b583532c56e4f7c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction"
],
"extractive_spans": [],
"free_form_answer": "LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalisation, Length, Emoticon (Presence/Count ) \n and Sentiment Ratio",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction"
],
"extractive_spans": [],
"free_form_answer": "LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalization, Length, Emoticon (Presence/Count), Sentiment Ratio.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1d9d381cc6f219b819bf4445168e5bc27c65ffff"
],
"answer": [
{
"evidence": [
"Using held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in the Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 is better than which is trained on Dataset 1."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Three annotators, A1-A3, mark each tweet in the Dataset H as drunk or sober."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"2d5e36194e68acf93a75c8e44c93e33fe697ed42"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"19cbce0e0847cb0c02eed760d2bbe3d0eb3caee1"
],
"answer": [
{
"evidence": [
"The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"Do the authors mention any confounds to their study?",
"What baseline model is used?",
"What stylistic features are used to detect drunk texts?",
"Is the data acquired under distant supervision verified by humans at any stage?",
"What hashtags are used for distant supervision?",
"Do the authors equate drunk tweeting with drunk texting? "
],
"question_id": [
"45306b26447ea4b120655d6bb2e3636079d3d6e0",
"0c08af6e4feaf801185f2ec97c4da04c8b767ad6",
"6412e97373e8e9ae3aa20aa17abef8326dc05450",
"957bda6b421ef7d2839c3cec083404ac77721f14",
"368317b4fd049511e00b441c2e9550ded6607c37",
"b3ec918827cd22b16212265fcdd5b3eadee654ae",
"387970ebc7ef99f302f318d047f708274c0e8f21"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Word cloud for drunk tweets",
"Table 1: Our Feature Set for Drunk-texting Prediction",
"Table 2: Performance of our features on Datasets 1 and 2",
"Table 4: Cohen’s Kappa for three annotators (A1A3)",
"Table 3: Top stylistic features for Datasets 1 and 2 obtained using Chi-squared test-based ranking",
"Table 5: Performance of human evaluators and our classifiers (trained on all features), for Dataset-H as the test set"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"3-Table2-1.png",
"4-Table4-1.png",
"4-Table3-1.png",
"4-Table5-1.png"
]
} | [
"What baseline model is used?",
"What stylistic features are used to detect drunk texts?"
] | [
[
"1610.00879-4-Table5-1.png"
],
[
"1610.00879-3-Table1-1.png"
]
] | [
"Human evaluators",
"LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalization, Length, Emoticon (Presence/Count), Sentiment Ratio."
] | 67 |
1704.05572 | Answering Complex Questions Using Open Information Extraction | While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods. We overcome this limitation by presenting a method for reasoning with Open IE knowledge, allowing more complex questions to be handled. Using a recently proposed support graph optimization framework for QA, we develop a new inference model for Open IE, in particular one that can work effectively with multiple short facts, noise, and the relational structure of tuples. Our model significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge. | {
"paragraphs": [
[
"Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 .",
"Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams is one such domain, requiring complex reasoning BIBREF5 . Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora BIBREF6 , BIBREF7 or deeper, structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge.",
"Consider the following question from an Alaska state 4th grade science test:",
"Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon",
"This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet).",
"A candidate system for such reasoning, and which we draw inspiration from, is the TableILP system of BIBREF9 . TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and reliable as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making combining redundant evidence essential. Consequently, a table-knowledge centered inference model isn't the best fit for noisy tuples.",
"To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TupleInf. We demonstrate that TupleInf significantly outperforms TableILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions."
],
[
"We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with large scale KB) and science question-answering (complex reasoning with small KB)."
],
[
"We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple."
],
[
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
],
[
"Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\\lbrace a_i\\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.",
"Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with question tokens $tok(qa).$ . We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question, $q$ as a query and each tuple, $t$ as a document: $\n&\\textit {tf}(x, q)=1\\; \\textmd {if x} \\in q ; \\textit {idf}(x) = log(1 + N/n_x) \\\\\n&\\textit {tf-idf}(t, q)=\\sum _{x \\in t\\cap q} idf(x)\n$ ",
" where $N$ is the number of tuples in the KB and $n_x$ are the number of tuples containing $x$ . We normalize the tf-idf score by the number of tokens in $t$ and $q$ . We finally take the 50 top-scoring tuples $T_{qa}$ .",
"On-the-fly tuples from text: To handle questions from new domains not covered by the training set, we extract additional tuples on the fly from S (similar to BIBREF17 knowlhunting). We perform the same ElasticSearch query described earlier for building T. We ignore sentences that cover none or all answer choices as they are not discriminative. We also ignore long sentences ( $>$ 300 characters) and sentences with negation as they tend to lead to noisy inference. We then run Open IE on these sentences and re-score the resulting tuples using the Jaccard score due to the lossy nature of Open IE, and finally take the 50 top-scoring tuples $T^{\\prime }_{qa}$ ."
],
[
"Similar to TableILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) BIBREF18 , however, we must score alignments between a set $T_{qa} \\cup T^{\\prime }_{qa}$ of structured tuples and a (potentially multi-sentence) multiple-choice question $qa$ .",
"The qterms, answer choices, and tuples fields form the set of possible vertices, $\\mathcal {V}$ , of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\\mathcal {E}$ . The support graph, $G(V, E)$ , is a subgraph of $\\mathcal {G}(\\mathcal {V}, \\mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, resp. We define the desired behavior of an optimal support graph via an ILP model as follows.",
"Similar to TableILP, we score the support graph based on the weight of the active nodes and edges. Each edge $e(t, h)$ is weighted based on a word-overlap score. While TableILP used WordNet BIBREF19 paths to compute the weight, this measure results in unreliable scores when faced with longer phrases found in Open IE tuples.",
"Compared to a curated KB, it is easy to find Open IE tuples that match irrelevant parts of the questions. To mitigate this issue, we improve the scoring of qterms in our ILP objective to focus on important terms. Since the later terms in a question tend to provide the most critical information, we scale qterm coefficients based on their position. Also, qterms that appear in almost all of the selected tuples tend not to be discriminative as any tuple would support such a qterm. Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples.",
"Since Open IE tuples do not come with schema and join rules, we can define a substantially simpler model compared to TableILP. This reduces the reasoning capability but also eliminates the reliance on hand-authored join rules and regular expressions used in TableILP. We discovered (see empirical evaluation) that this simple model can achieve the same score as TableILP on the Regents test (target test set used by TableILP) and generalizes better to different grade levels.",
"We define active vertices and edges using ILP constraints: an active edge must connect two active vertices and an active vertex must have at least one active edge. To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph, we limit the number of active edges from an active tuple, question choice, tuple fields, and qterms (first group of constraints in Table 1 ). Our model is also capable of using multiple tuples to support different parts of the question as illustrated in Figure 1 . To avoid spurious tuples that only connect with the question (or choice) or ignore the relation being expressed in the tuple, we add constraints that require each tuple to connect a qterm with an answer choice (second group of constraints in Table 1 ).",
"We also define new constraints based on the Open IE tuple structure. Since an Open IE tuple expresses a fact about the tuple's subject, we require the subject to be active in the support graph. To avoid issues such as (Planet; orbit; Sun) matching the sample question in the introduction (“Which object $\\ldots $ orbits around a planet”), we also add an ordering constraint (third group in Table 1 ).",
"Its worth mentioning that TupleInf only combines parallel evidence i.e. each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work."
],
[
"Comparing our method with two state-of-the-art systems for 4th and 8th grade science exams, we demonstrate that (a) TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples, and (b) TupleInf's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at $p=0.05$ .",
"We consider two question sets. (1) 4th Grade set (1220 train, 1304 test) is a 10x larger superset of the NY Regents questions BIBREF6 , and includes professionally written licensed questions. (2) 8th Grade set (293 train, 282 test) contains 8th grade questions from various states.",
"We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\\prime }_{qa}$ . Additionally, TableILP uses $\\sim $ 70 Curated tables (C) designed for 4th grade NY Regents exams.",
"We compare TupleInf with two state-of-the-art baselines. IR is a simple yet powerful information-retrieval baseline BIBREF6 that selects the answer option with the best matching sentence in a corpus. TableILP is the state-of-the-art structured inference baseline BIBREF9 developed for science questions."
],
[
"Table 2 shows that TupleInf, with no curated knowledge, outperforms TableILP on both question sets by more than 11%. The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TupleInf results in a statistically significant improvement. Our simple model, TupleInf(C + T), also achieves scores comparable to TableILP on the latter's target Regents questions (61.4% vs TableILP's reported 61.5%) without any specialized rules.",
"Table 3 shows that while TupleInf achieves similar scores as the IR solver, the approaches are complementary (structured lossy knowledge reasoning vs. lossless sentence retrieval). The two solvers, in fact, differ on 47.3% of the training questions. To exploit this complementarity, we train an ensemble system BIBREF6 which, as shown in the table, provides a substantial boost over the individual solvers. Further, IR + TupleInf is consistently better than IR + TableILP. Finally, in combination with IR and the statistical association based PMI solver (that scores 54.1% by itself) of BIBREF6 aristo2016:combining, TupleInf achieves a score of 58.2% as compared to TableILP's ensemble score of 56.7% on the 4th grade set, again attesting to TupleInf's strength."
],
[
"We describe four classes of failures that we observed, and the future work they suggest.",
"Missing Important Words: Which material will spread out to completely fill a larger container? (A)air (B)ice (C)sand (D)water",
"In this question, we have tuples that support water will spread out and fill a larger container but miss the critical word “completely”. An approach capable of detecting salient question words could help avoid that.",
"Lossy IE: Which action is the best method to separate a mixture of salt and water? ...",
"The IR solver correctly answers this question by using the sentence: Separate the salt and water mixture by evaporating the water. However, TupleInf is not able to answer this question as Open IE is unable to extract tuples from this imperative sentence. While the additional structure from Open IE is useful for more robust matching, converting sentences to Open IE tuples may lose important bits of information.",
"Bad Alignment: Which of the following gases is necessary for humans to breathe in order to live?(A) Oxygen(B) Carbon dioxide(C) Helium(D) Water vapor",
"TupleInf returns “Carbon dioxide” as the answer because of the tuple (humans; breathe out; carbon dioxide). The chunk “to breathe” in the question has a high alignment score to the “breathe out” relation in the tuple even though they have completely different meanings. Improving the phrase alignment can mitigate this issue.",
"Out of scope: Deer live in forest for shelter. If the forest was cut down, which situation would most likely happen?...",
"Such questions that require modeling a state presented in the question and reasoning over the state are out of scope of our solver."
],
[
"We presented a new QA system, TupleInf, that can reason over a large, potentially noisy tuple KB to answer complex questions. Our results show that TupleInf is a new state-of-the-art structured solver for elementary-level science that does not rely on curated knowledge and generalizes to higher grades. Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures."
],
[
"To build the ILP model, we first need to get the questions terms (qterm) from the question by chunking the question using an in-house chunker based on the postagger from FACTORIE. "
],
[
"We use the SCIP ILP optimization engine BIBREF21 to optimize our ILP model. To get the score for each answer choice $a_i$ , we force the active variable for that choice $x_{a_i}$ to be one and use the objective function value of the ILP model as the score. For evaluations, we use a 2-core 2.5 GHz Amazon EC2 linux machine with 16 GB RAM. To evaluate TableILP and TupleInf on curated tables and tuples, we converted them into the expected format of each solver as follows."
],
[
"For each question, we select the 7 best matching tables using the tf-idf score of the table w.r.t. the question tokens and top 20 rows from each table using the Jaccard similarity of the row with the question. (same as BIBREF9 tableilp2016). We then convert the table rows into the tuple structure using the relations defined by TableILP. For every pair of cells connected by a relation, we create a tuple with the two cells as the subject and primary object with the relation as the predicate. The other cells of the table are used as additional objects to provide context to the solver. We pick top-scoring 50 tuples using the Jaccard score."
],
[
"We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \\ldots )$ , we add a triple $(S; P; O_i)$ to this table."
]
],
"section_name": [
"Introduction",
"Related Work",
"Tuple Inference Solver",
"Tuple KB",
"Tuple Selection",
"Support Graph Search",
"Experiments",
"Results",
"Error Analysis",
"Conclusion",
"Appendix: ILP Model Details",
"Experiment Details",
"Using curated tables with TupleInf",
"Using Open IE tuples with TableILP"
]
} | {
"answers": [
{
"annotation_id": [
"3dc26c840c9d93a07e7cfd50dae2ec9e454e39e4",
"b66d581a485f807a457f36777a1ab22dbf849998"
],
"answer": [
{
"evidence": [
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T).",
"We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\\prime }_{qa}$ . Additionally, TableILP uses $\\sim $ 70 Curated tables (C) designed for 4th grade NY Regents exams."
],
"extractive_spans": [
"domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. ",
"Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T).",
"The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"a6c4425bc88c8d30a2aa9a7a2a791025314fadef"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: TUPLEINF is significantly better at structured reasoning than TABLEILP.9"
],
"extractive_spans": [],
"free_form_answer": "51.7 and 51.6 on 4th and 8th grade question sets with no curated knowledge. 47.5 and 48.0 on 4th and 8th grade question sets when both solvers are given the same knowledge",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: TUPLEINF is significantly better at structured reasoning than TABLEILP.9"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"ab7b691d0d2b23ca9201a02c67ca98202f0e2067"
],
"answer": [
{
"evidence": [
"Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\\lbrace a_i\\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.",
"Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with question tokens $tok(qa).$ . We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question, $q$ as a query and each tuple, $t$ as a document: $ &\\textit {tf}(x, q)=1\\; \\textmd {if x} \\in q ; \\textit {idf}(x) = log(1 + N/n_x) \\\\ &\\textit {tf-idf}(t, q)=\\sum _{x \\in t\\cap q} idf(x) $"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\\lbrace a_i\\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.",
"Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with question tokens $tok(qa).$ ."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"f840a836eee0180d2c976457f8b3052d8e78050c"
]
},
{
"annotation_id": [
"34d51905bd8bea5030d4b5e1095cac2ab2266afe"
],
"answer": [
{
"evidence": [
"We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \\ldots )$ , we add a triple $(S; P; O_i)$ to this table."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \\ldots )$ , we add a triple $(S; P; O_i)$ to this table."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"6413e76a47bda832eb45a35af9100d6ae8db32cc"
],
"answer": [
{
"evidence": [
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
],
"extractive_spans": [
"for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S",
"take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$"
],
"free_form_answer": "",
"highlighted_evidence": [
"For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"14ed4878b0c2a3d3def83d2973038ed102fbdd63"
],
"answer": [
{
"evidence": [
"Its worth mentioning that TupleInf only combines parallel evidence i.e. each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"fc0aef9fb401b68ee551d7e92fde4f03903c31d9"
],
"answer": [
{
"evidence": [
"We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\\prime }_{qa}$ . Additionally, TableILP uses $\\sim $ 70 Curated tables (C) designed for 4th grade NY Regents exams.",
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
],
"extractive_spans": [
"domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining.",
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. ",
"We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"f3306bb0b0a58fcbca6b4227c4126e8923213e0f"
],
"answer": [
{
"evidence": [
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
],
"extractive_spans": [
"for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S",
"take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$"
],
"free_form_answer": "",
"highlighted_evidence": [
"Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"0014dfeeb1ed23852c5301f81e02d1710a9c8c78"
],
"answer": [
{
"evidence": [
"Its worth mentioning that TupleInf only combines parallel evidence i.e. each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"no",
"no",
"no"
],
"question": [
"What corpus was the source of the OpenIE extractions?",
"What is the accuracy of the proposed technique?",
"Is an entity linking process used?",
"Are the OpenIE extractions all triples?",
"What method was used to generate the OpenIE extractions?",
"Can the method answer multi-hop questions?",
"What was the textual source to which OpenIE was applied?",
"What OpenIE method was used to generate the extractions?",
"Is their method capable of multi-hop reasoning?"
],
"question_id": [
"2fffff59e57b8dbcaefb437a6b3434fc137f813b",
"eb95af36347ed0e0808e19963fe4d058e2ce3c9f",
"cd1792929b9fa5dd5b1df0ae06fc6aece4c97424",
"65d34041ffa4564385361979a08706b10b92ebc7",
"e215fa142102f7f9eeda9c9eb8d2aeff7f2a33ed",
"a8545f145d5ea2202cb321c8f93e75ad26fcf4aa",
"417dabd43d6266044d38ed88dbcb5fdd7a426b22",
"fed230cef7c130f6040fb04304a33bbc17ca3a36",
"7917d44e952b58ea066dc0b485d605c9a1fe3dda"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured",
"information extraction",
"information extraction",
"information extraction"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: An example support graph linking a question (top), two tuples from the KB (colored) and an answer option (nitrogen).",
"Table 2: TUPLEINF is significantly better at structured reasoning than TABLEILP.9",
"Table 1: High-level ILP constraints; we report results for ~w = (2, 4, 4, 4, 2); the model can be improved with more careful parameter selection",
"Table 3: TUPLEINF is complementarity to IR, resulting in a strong ensemble"
],
"file": [
"3-Figure1-1.png",
"4-Table2-1.png",
"4-Table1-1.png",
"5-Table3-1.png"
]
} | [
"What is the accuracy of the proposed technique?"
] | [
[
"1704.05572-4-Table2-1.png"
]
] | [
"51.7 and 51.6 on 4th and 8th grade question sets with no curated knowledge. 47.5 and 48.0 on 4th and 8th grade question sets when both solvers are given the same knowledge"
] | 68 |
1707.03904 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. We also describe a retrieval system for extracting relevant sentences and documents from the corpus given a query, and include these in the release for researchers wishing to only focus on (2). We evaluate several baselines on both datasets, ranging from simple heuristics to powerful neural models, and show that these lag behind human performance by 16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at https://github.com/bdhingra/quasar . | {
"paragraphs": [
[
"Factoid Question Answering (QA) aims to extract answers, from an underlying knowledge source, to information seeking questions posed in natural language. Depending on the knowledge source available there are two main approaches for factoid QA. Structured sources, including Knowledge Bases (KBs) such as Freebase BIBREF1 , are easier to process automatically since the information is organized according to a fixed schema. In this case the question is parsed into a logical form in order to query against the KB. However, even the largest KBs are often incomplete BIBREF2 , BIBREF3 , and hence can only answer a limited subset of all possible factoid questions.",
"For this reason the focus is now shifting towards unstructured sources, such as Wikipedia articles, which hold a vast quantity of information in textual form and, in principle, can be used to answer a much larger collection of questions. Extracting the correct answer from unstructured text is, however, challenging, and typical QA pipelines consist of the following two components: (1) searching for the passages relevant to the given question, and (2) reading the retrieved text in order to select a span of text which best answers the question BIBREF4 , BIBREF5 .",
"Like most other language technologies, the current research focus for both these steps is firmly on machine learning based approaches for which performance improves with the amount of data available. Machine reading performance, in particular, has been significantly boosted in the last few years with the introduction of large-scale reading comprehension datasets such as CNN / DailyMail BIBREF6 and Squad BIBREF7 . State-of-the-art systems for these datasets BIBREF8 , BIBREF9 focus solely on step (2) above, in effect assuming the relevant passage of text is already known.",
"In this paper, we introduce two new datasets for QUestion Answering by Search And Reading – Quasar. The datasets each consist of factoid question-answer pairs and a corresponding large background corpus to facilitate research into the combined problem of retrieval and comprehension. Quasar-S consists of 37,362 cloze-style questions constructed from definitions of software entities available on the popular website Stack Overflow. The answer to each question is restricted to be another software entity, from an output vocabulary of 4874 entities. Quasar-T consists of 43,013 trivia questions collected from various internet sources by a trivia enthusiast. The answers to these questions are free-form spans of text, though most are noun phrases.",
"While production quality QA systems may have access to the entire world wide web as a knowledge source, for Quasar we restrict our search to specific background corpora. This is necessary to avoid uninteresting solutions which directly extract answers from the sources from which the questions were constructed. For Quasar-S we construct the knowledge source by collecting top 50 threads tagged with each entity in the dataset on the Stack Overflow website. For Quasar-T we use ClueWeb09 BIBREF0 , which contains about 1 billion web pages collected between January and February 2009. Figure 1 shows some examples.",
"Unlike existing reading comprehension tasks, the Quasar tasks go beyond the ability to only understand a given passage, and require the ability to answer questions given large corpora. Prior datasets (such as those used in BIBREF4 ) are constructed by first selecting a passage and then constructing questions about that passage. This design (intentionally) ignores some of the subproblems required to answer open-domain questions from corpora, namely searching for passages that may contain candidate answers, and aggregating information/resolving conflicts between candidates from many passages. The purpose of Quasar is to allow research into these subproblems, and in particular whether the search step can benefit from integration and joint training with downstream reading systems.",
"Additionally, Quasar-S has the interesting feature of being a closed-domain dataset about computer programming, and successful approaches to it must develop domain-expertise and a deep understanding of the background corpus. To our knowledge it is one of the largest closed-domain QA datasets available. Quasar-T, on the other hand, consists of open-domain questions based on trivia, which refers to “bits of information, often of little importance\". Unlike previous open-domain systems which rely heavily on the redundancy of information on the web to correctly answer questions, we hypothesize that Quasar-T requires a deeper reading of documents to answer correctly.",
"We evaluate Quasar against human testers, as well as several baselines ranging from naïve heuristics to state-of-the-art machine readers. The best performing baselines achieve $33.6\\%$ and $28.5\\%$ on Quasar-S and Quasar-T, while human performance is $50\\%$ and $60.6\\%$ respectively. For the automatic systems, we see an interesting tension between searching and reading accuracies – retrieving more documents in the search phase leads to a higher coverage of answers, but makes the comprehension task more difficult. We also collect annotations on a subset of the development set questions to allow researchers to analyze the categories in which their system performs well or falls short. We plan to release these annotations along with the datasets, and our retrieved documents for each question."
],
[
"Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution. In this section, we describe how each of these fields was generated for each Quasar variant."
],
[
"The software question set was built from the definitional “excerpt” entry for each tag (entity) on StackOverflow. For example the excerpt for the “java“ tag is, “Java is a general-purpose object-oriented programming language designed to be used in conjunction with the Java Virtual Machine (JVM).” Not every excerpt includes the tag being defined (which we will call the “head tag”), so we prepend the head tag to the front of the string to guarantee relevant results later on in the pipeline. We then completed preprocessing of the software questions by downcasing and tokenizing the string using a custom tokenizer compatible with special characters in software terms (e.g. “.net”, “c++”). Each preprocessed excerpt was then converted to a series of cloze questions using a simple heuristic: first searching the string for mentions of other entities, then repleacing each mention in turn with a placeholder string (Figure 2 ).",
"This heuristic is noisy, since the software domain often overloads existing English words (e.g. “can” may refer to a Controller Area Network bus; “swap” may refer to the temporary storage of inactive pages of memory on disk; “using” may refer to a namespacing keyword). To improve precision we scored each cloze based on the relative incidence of the term in an English corpus versus in our StackOverflow one, and discarded all clozes scoring below a threshold. This means our dataset does not include any cloze questions for terms which are common in English (such as “can” “swap” and “using”, but also “image” “service” and “packet”). A more sophisticated entity recognition system could make recall improvements here.",
"The trivia question set was built from a collection of just under 54,000 trivia questions collected by Reddit user 007craft and released in December 2015. The raw dataset was noisy, having been scraped from multiple sources with variable attention to detail in formatting, spelling, and accuracy. We filtered the raw questions to remove unparseable entries as well as any True/False or multiple choice questions, for a total of 52,000 free-response style questions remaining. The questions range in difficulty, from straightforward (“Who recorded the song `Rocket Man”' “Elton John”) to difficult (“What was Robin Williams paid for Disney's Aladdin in 1982” “Scale $485 day + Picasso Painting”) to debatable (“According to Earth Medicine what's the birth totem for march” “The Falcon”)"
],
[
"The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.",
"Context documents for each query were generated in a two-phase fashion, first collecting a large pool of semirelevant text, then filling a temporary index with short or long pseudodocuments from the pool, and finally selecting a set of $N$ top-ranking pseudodocuments (100 short or 20 long) from the temporary index.",
"For Quasar-S, the pool of text for each question was composed of 50+ question-and-answer threads scraped from http://stackoverflow.com. StackOverflow keeps a running tally of the top-voted questions for each tag in their knowledge base; we used Scrapy to pull the top 50 question posts for each tag, along with any answer-post responses and metadata (tags, authorship, comments). From each thread we pulled all text not marked as code, and split it into sentences using the Stanford NLP sentence segmenter, truncating sentences to 2048 characters. Each sentence was marked with a thread identifier, a post identifier, and the tags for the thread. Long pseudodocuments were either the full post (in the case of question posts), or the full post and its head question (in the case of answer posts), comments included. Short pseudodocuments were individual sentences.",
"To build the context documents for Quasar-S, the pseudodocuments for the entire corpus were loaded into a disk-based lucene index, each annotated with its thread ID and the tags for the thread. This index was queried for each cloze using the following lucene syntax:",
"[noitemsep] ",
"SHOULD(PHRASE(question text))",
"SHOULD(BOOLEAN(question text))",
"MUST(tags:$headtag)",
"where “question text” refers to the sequence of tokens in the cloze question, with the placeholder removed. The first SHOULD term indicates that an exact phrase match to the question text should score highly. The second SHOULD term indicates that any partial match to tokens in the question text should also score highly, roughly in proportion to the number of terms matched. The MUST term indicates that only pseudodocuments annotated with the head tag of the cloze should be considered.",
"The top $100N$ pseudodocuments were retrieved, and the top $N$ unique pseudodocuments were added to the context document along with their lucene retrieval score. Any questions showing zero results for this query were discarded.",
"For Quasar-T, the pool of text for each question was composed of 100 HTML documents retrieved from ClueWeb09. Each question-answer pair was converted to a #combine query in the Indri query language to comply with the ClueWeb09 batch query service, using simple regular expression substitution rules to remove (s/[.(){}<>:*`_]+//g) or replace (s/[,?']+/ /g) illegal characters. Any questions generating syntax errors after this step were discarded. We then extracted the plaintext from each HTML document using Jericho. For long pseudodocuments we used the full page text, truncated to 2048 characters. For short pseudodocuments we used individual sentences as extracted by the Stanford NLP sentence segmenter, truncated to 200 characters.",
"To build the context documents for the trivia set, the pseudodocuments from the pool were collected into an in-memory lucene index and queried using the question text only (the answer text was not included for this step). The structure of the query was identical to the query for Quasar-S, without the head tag filter:",
"[noitemsep] ",
"SHOULD(PHRASE(question text))",
"SHOULD(BOOLEAN(question text))",
"The top $100N$ pseudodocuments were retrieved, and the top $N$ unique pseudodocuments were added to the context document along with their lucene retrieval score. Any questions showing zero results for this query were discarded."
],
[
"The list of candidate solutions provided with each record is guaranteed to contain the correct answer to the question. Quasar-S used a closed vocabulary of 4874 tags as its candidate list. Since the questions in Quasar-T are in free-response format, we constructed a separate list of candidate solutions for each question. Since most of the correct answers were noun phrases, we took each sequence of NN* -tagged tokens in the context document, as identified by the Stanford NLP Maxent POS tagger, as the candidate list for each record. If this list did not include the correct answer, it was added to the list."
],
[
"Once context documents had been built, we extracted the subset of questions where the answer string, excluded from the query for the two-phase search, was nonetheless present in the context document. This subset allows us to evaluate the performance of the reading system independently from the search system, while the full set allows us to evaluate the performance of Quasar as a whole. We also split the full set into training, validation and test sets. The final size of each data subset after all discards is listed in Table 1 ."
],
[
"Evaluation is straightforward on Quasar-S since each answer comes from a fixed output vocabulary of entities, and we report the average accuracy of predictions as the evaluation metric. For Quasar-T, the answers may be free form spans of text, and the same answer may be expressed in different terms, which makes evaluation difficult. Here we pick the two metrics from BIBREF7 , BIBREF19 . In preprocessing the answer we remove punctuation, white-space and definite and indefinite articles from the strings. Then, exact match measures whether the two strings, after preprocessing, are equal or not. For F1 match we first construct a bag of tokens for each string, followed be preprocessing of each token, and measure the F1 score of the overlap between the two bags of tokens. These metrics are far from perfect for Quasar-T; for example, our human testers were penalized for entering “0” as answer instead of “zero”. However, a comparison between systems may still be meaningful."
],
[
"To put the difficulty of the introduced datasets into perspective, we evaluated human performance on answering the questions. For each dataset, we recruited one domain expert (a developer with several years of programming experience for Quasar-S, and an avid trivia enthusiast for Quasar-T) and $1-3$ non-experts. Each volunteer was presented with randomly selected questions from the development set and asked to answer them via an online app. The experts were evaluated in a “closed-book” setting, i.e. they did not have access to any external resources. The non-experts were evaluated in an “open-book” setting, where they had access to a search engine over the short pseudo-documents extracted for each dataset (as described in Section \"Context Retrieval\" ). We decided to use short pseudo-documents for this exercise to reduce the burden of reading on the volunteers, though we note that the long pseudo-documents have greater coverage of answers.",
"We also asked the volunteers to provide annotations to categorize the type of each question they were asked, and a label for whether the question was ambiguous. For Quasar-S the annotators were asked to mark the relation between the head entity (from whose definition the cloze was constructed) and the answer entity. For Quasar-T the annotators were asked to mark the genre of the question (e.g., Arts & Literature) and the entity type of the answer (e.g., Person). When multiple annotators marked the same question differently, we took the majority vote when possible and discarded ties. In total we collected 226 relation annotations for 136 questions in Quasar-S, out of which 27 were discarded due to conflicting ties, leaving a total of 109 annotated questions. For Quasar-T we collected annotations for a total of 144 questions, out of which 12 we marked as ambiguous. In the remaining 132, a total of 214 genres were annotated (a question could be annotated with multiple genres), while 10 questions had conflicting entity-type annotations which we discarded, leaving 122 total entity-type annotations. Figure 3 shows the distribution of these annotations."
],
[
"We evaluate several baselines on Quasar, ranging from simple heuristics to deep neural networks. Some predict a single token / entity as the answer, while others predict a span of tokens.",
"MF-i (Maximum Frequency) counts the number of occurrences of each candidate answer in the retrieved context and returns the one with maximum frequency. MF-e is the same as MF-i except it excludes the candidates present in the query. WD (Word Distance) measures the sum of distances from a candidate to other non-stopword tokens in the passage which are also present in the query. For the cloze-style Quasar-S the distances are measured by first aligning the query placeholder to the candidate in the passage, and then measuring the offsets between other tokens in the query and their mentions in the passage. The maximum distance for any token is capped at a specified threshold, which is tuned on the validation set.",
"For Quasar-T we also test the Sliding Window (SW) and Sliding Window + Distance (SW+D) baselines proposed in BIBREF13 . The scores were computed for the list of candidate solutions described in Section \"Context Retrieval\" .",
"For Quasar-S, since the answers come from a fixed vocabulary of entities, we test language model baselines which predict the most likely entity to appear in a given context. We train three n-gram baselines using the SRILM toolkit BIBREF21 for $n=3,4,5$ on the entire corpus of all Stack Overflow posts. The output predictions are restricted to the output vocabulary of entities.",
"We also train a bidirectional Recurrent Neural Network (RNN) language model (based on GRU units). This model encodes both the left and right context of an entity using forward and backward GRUs, and then concatenates the final states from both to predict the entity through a softmax layer. Training is performed on the entire corpus of Stack Overflow posts, with the loss computed only over mentions of entities in the output vocabulary. This approach benefits from looking at both sides of the cloze in a query to predict the entity, as compared to the single-sided n-gram baselines.",
"Reading comprehension models are trained to extract the answer from the given passage. We test two recent architectures on Quasar using publicly available code from the authors .",
"The GA Reader BIBREF8 is a multi-layer neural network which extracts a single token from the passage to answer a given query. At the time of writing it had state-of-the-art performance on several cloze-style datasets for QA. For Quasar-S we train and test GA on all instances for which the correct answer is found within the retrieved context. For Quasar-T we train and test GA on all instances where the answer is in the context and is a single token.",
"The BiDAF model BIBREF9 is also a multi-layer neural network which predicts a span of text from the passage as the answer to a given query. At the time of writing it had state-of-the-art performance among published models on the Squad dataset. For Quasar-T we train and test BiDAF on all instances where the answer is in the retrieved context."
],
[
"Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy. The performance of the baseline among these instances is referred to as the Reading Accuracy, and the overall performance (which is a product of the two) is referred to as the Overall Accuracy. In Figure 4 we compare how these three vary as the number of context documents is varied. Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents. Hence, simply retrieving more documents is not sufficient – finding the few most relevant ones will allow the reader to work best.",
"In Tables 2 and 3 we compare all baselines when the context size is tuned to maximize the overall accuracy on the validation set. For Quasar-S the best performing baseline is the BiRNN language model, which achieves $33.6\\%$ accuracy. The GA model achieves $48.3\\%$ accuracy on the set of instances for which the answer is in context, however, a search accuracy of only $65\\%$ means its overall performance is lower. This can improve with improved retrieval. For Quasar-T, both the neural models significantly outperform the heuristic models, with BiDAF getting the highest F1 score of $28.5\\%$ .",
"The best performing baselines, however, lag behind human performance by $16.4\\%$ and $32.1\\%$ for Quasar-S and Quasar-T respectively, indicating the strong potential for improvement. Interestingly, for human performance we observe that non-experts are able to match or beat the performance of experts when given access to the background corpus for searching the answers. We also emphasize that the human performance is limited by either the knowledge of the experts, or the usefulness of the search engine for non-experts; it should not be viewed as an upper bound for automatic systems which can potentially use the entire background corpus. Further analysis of the human and baseline performance in each category of annotated questions is provided in Appendix \"Performance Analysis\" ."
],
[
"We have presented the Quasar datasets for promoting research into two related tasks for QA – searching a large corpus of text for relevant passages, and reading the passages to extract answers. We have also described baseline systems for the two tasks which perform reasonably but lag behind human performance. While the searching performance improves as we retrieve more context, the reading performance typically goes down. Hence, future work, in addition to improving these components individually, should also focus on joint approaches to optimizing the two on end-task performance. The datasets, including the documents retrieved by our system and the human annotations, are available at https://github.com/bdhingra/quasar."
],
[
"This work was funded by NSF under grants CCF-1414030 and IIS-1250956 and by grants from Google."
],
[
"Table 4 includes the definition of all the annotated relations for Quasar-S."
],
[
"Figure 5 shows a comparison of the human performance with the best performing baseline for each category of annotated questions. We see consistent differences between the two, except in the following cases. For Quasar-S, Bi-RNN performs comparably to humans for the developed-with and runs-on categories, but much worse in the has-component and is-a categories. For Quasar-T, BiDAF performs comparably to humans in the sports category, but much worse in history & religion and language, or when the answer type is a number or date/time."
]
],
"section_name": [
"Introduction",
"Dataset Construction",
"Question sets",
"Context Retrieval",
"Candidate solutions",
"Postprocessing",
"Metrics",
"Human Evaluation",
"Baseline Systems",
"Results",
"Conclusion",
"Acknowledgments",
"Quasar-S Relation Definitions",
"Performance Analysis"
]
} | {
"answers": [
{
"annotation_id": [
"00112b6bc9f87e8d1943add164637a03ebc74336"
],
"answer": [
{
"evidence": [
"Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution. In this section, we describe how each of these fields was generated for each Quasar variant.",
"The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.",
"Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy. The performance of the baseline among these instances is referred to as the Reading Accuracy, and the overall performance (which is a product of the two) is referred to as the Overall Accuracy. In Figure 4 we compare how these three vary as the number of context documents is varied. Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents. Hence, simply retrieving more documents is not sufficient – finding the few most relevant ones will allow the reader to work best."
],
"extractive_spans": [],
"free_form_answer": "The dataset comes with a ranked set of relevant documents. Hence the baselines do not use a retrieval system.",
"highlighted_evidence": [
"Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution.",
"The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.",
"Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy.",
"Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1d87720d0db14aa36d083b7dc3999984c4489389"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"somewhat"
],
"question": [
"Which retrieval system was used for baselines?"
],
"question_id": [
"dcb18516369c3cf9838e83168357aed6643ae1b8"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: Example short-document instances from QUASAR-S (top) and QUASAR-T (bottom)",
"Figure 2: Cloze generation",
"Table 1: Dataset Statistics. Single-Token refers to the questions whose answer is a single token (for QUASAR-S all answers come from a fixed vocabulary). Answer in Short (Long) indicates whether the answer is present in the retrieved short (long) pseudo-documents.",
"Figure 3: Distribution of manual annotations for QUASAR. Description of the QUASAR-S annotations is in Appendix A.",
"Figure 4: Variation of Search, Read and Overall accuracies as the number of context documents is varied.",
"Table 2: Performance comparison on QUASAR-S. CB: Closed-Book, OB: Open Book. Neural baselines are denoted with †. Optimal context is the number of documents used for answer extraction, which was tuned to maximize the overall accuracy on validation set.",
"Table 3: Performance comparison on QUASAR-T. CB: Closed-Book, OB: Open Book. Neural baselines are denoted with †. Optimal context is the number of documents used for answer extraction, which was tuned to maximize the overall accuracy on validation set.**We were unable to run BiDAF with more than 10 short-documents / 1 long-documents, and GA with more than 10 long-documents due to memory errors.",
"Table 4: Description of the annotated relations between the head entity, from whose definition the cloze is constructed, and the answer entity which fills in the cloze. These are the same as the descriptions shown to the annotators.",
"Figure 5: Performance comparison of humans and the best performing baseline across the categories annotated for the development set."
],
"file": [
"2-Figure1-1.png",
"5-Figure2-1.png",
"6-Table1-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png",
"9-Table2-1.png",
"9-Table3-1.png",
"11-Table4-1.png",
"11-Figure5-1.png"
]
} | [
"Which retrieval system was used for baselines?"
] | [
[
"1707.03904-Context Retrieval-0",
"1707.03904-Dataset Construction-0",
"1707.03904-Results-0"
]
] | [
"The dataset comes with a ranked set of relevant documents. Hence the baselines do not use a retrieval system."
] | 70 |
1911.07228 | Error Analysis for Vietnamese Named Entity Recognition on Deep Neural Network Models | In recent years, Vietnamese Named Entity Recognition (NER) systems have had a great breakthrough when using Deep Neural Network methods. This paper describes the primary errors of the state-of-the-art NER systems on Vietnamese language. After conducting experiments on BLSTM-CNN-CRF and BLSTM-CRF models with different word embeddings on the Vietnamese NER dataset. This dataset is provided by VLSP in 2016 and used to evaluate most of the current Vietnamese NER systems. We noticed that BLSTM-CNN-CRF gives better results, therefore, we analyze the errors on this model in detail. Our error-analysis results provide us thorough insights in order to increase the performance of NER for the Vietnamese language and improve the quality of the corpus in the future works. | {
"paragraphs": [
[
"Named Entity Recognition (NER) is one of information extraction subtasks that is responsible for detecting entity elements from raw text and can determine the category in which the element belongs, these categories include the names of persons, organizations, locations, expressions of times, quantities, monetary values and percentages.",
"The problem of NER is described as follow:",
"Input: A sentence S consists a sequence of $n$ words: $S= w_1,w_2,w_3,…,w_n$ ($w_i$: the $i^{th}$ word)",
"Output: The sequence of $n$ labels $y_1,y_2,y_3,…,y_n$. Each $y_i$ label represents the category which $w_i$ belongs to.",
"For example, given a sentence:",
"Input: vietnamGiám đốc điều hành Tim Cook của Apple vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện Flint Center, Cupertino.",
"(Apple CEO Tim Cook introduces 2 new, larger iPhones, Smart Watch at Cupertino Flint Center event)",
"The algorithm will output:",
"Output: vietnam⟨O⟩Giám đốc điều hành⟨O⟩ ⟨PER⟩Tim Cook⟨PER⟩ ⟨O⟩của⟨O⟩ ⟨ORG⟩Apple⟨ORG⟩ ⟨O⟩vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện⟨O⟩ ⟨ORG⟩Flint Center⟨ORG⟩, ⟨LOC⟩Cupertino⟨LOC⟩.",
"With LOC, PER, ORG is Name of location, person, organization respectively. Note that O means Other (Not a Name entity). We will not denote the O label in the following examples in this article because we only care about name of entities.",
"In this paper, we analyze common errors of the previous state-of-the-art techniques using Deep Neural Network (DNN) on VLSP Corpus. This may contribute to the later researchers the common errors from the results of these state-of-the-art models, then they can rely on to improve the model.",
"Section 2 discusses the related works to this paper. We will present a method for evaluating and analyzing the types of errors in Section 3. The data used for testing and analysis of errors will be introduced in Section 4, we also talk about deep neural network methods and pre-trained word embeddings for experimentation in this section. Section 5 will detail the errors and evaluations. In the end is our contribution to improve the above errors."
],
[
"Previously publicly available NER systems do not use DNN, for example, the MITRE Identification Scrubber Toolkit (MIST) BIBREF0, Stanford NER BIBREF1, BANNER BIBREF2 and NERsuite BIBREF3. NER systems for Vietnamese language processing used traditional machine learning methods such as Maximum Entropy Markov Model (MEMM), Support Vector Machine (SVM) and Conditional Random Field (CRF). In particular, most of the toolkits for NER task attempted to use MEMM BIBREF4, and CRF BIBREF5 to solve this problem.",
"Nowadays, because of the increase in data, DNN methods are used a lot. They have archived great results when it comes to NER tasks, for example, Guillaume Lample et al with BLSTM-CRF in BIBREF6 report 90.94 F1 score, Chiu et al with BLSTM-CNN in BIBREF7 got 91.62 F1 score, Xeuzhe Ma and Eduard Hovy with BLSTM-CNN-CRF in BIBREF8 achieved F1 score of 91.21, Thai-Hoang Pham and Phuong Le-Hong with BLSTM-CNN-CRF in BIBREF9 got 88.59% F1 score. These DNN models are also the state-of-the-art models."
],
[
"The results of our analysis experiments are reported in precision and recall over all labels (name of person, location, organization and miscellaneous). The process of analyzing errors has 2 steps:",
"Step 1: We use two state-of-the-art models including BLSTM-CNN-CRF and BLSTM-CRF to train and test on VLSP’s NER corpus. In our experiments, we implement word embeddings as features to the two systems.",
"Step 2: Based on the best results (BLSTM-CNN-CRF), error analysis is performed based on five types of errors (No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag), in a way similar to BIBREF10, but we analyze on both gold labels and predicted labels (more detail in figure 1 and 2).",
"A token (an entity name maybe contain more than one word) will be extracted as a correct entity by the model if both of the followings are correct:",
"The length of it (range) is correct: The word beginning and the end is the same as gold data (annotator).",
"The label (tag) of it is correct: The label is the same as in gold data.",
"If it is not meet two above requirements, it will be the wrong entity (an error). Therefore, we divide the errors into five different types which are described in detail as follows:",
"No extraction: The error where the model did not extract tokens as a name entity (NE) though the tokens were annotated as a NE.",
"LSTM-CNN-CRF: vietnam Việt_Nam",
"Annotator: vietnam⟨LOC⟩ Việt_Nam ⟨LOC⟩",
"No annotation: The error where the model extracted tokens as an NE though the tokens were not annotated as a NE.",
"LSTM-CNN-CRF: vietnam⟨PER⟩ Châu Âu ⟨PER⟩",
"Annotator: vietnamChâu Âu",
"Wrong range: The error where the model extracted tokens as an NE and only the range was wrong. (The extracted tokens were partially annotated or they were the part of the annotated tokens).",
"LSTM-CNN-CRF: vietnam⟨PER⟩ Ca_sĩ Nguyễn Văn A ⟨PER⟩",
"Annotator:",
"vietnamCa_sĩ ⟨PER⟩ Nguyễn Văn A ⟨PER⟩",
"Wrong tag: The error where the model extracted tokens as an NE and only the tag type was wrong.",
"LSTM-CNN-CRF: vietnamKhám phá ⟨PER⟩ Yangsuri ⟨PER⟩",
"Annotator:",
"vietnamKhám phá ⟨LOC⟩ Yangsuri ⟨LOC⟩",
"Wrong range and tag: The error where the model extracted tokens as an NE and both the range and the tag type were wrong.",
"LSTM-CNN-CRF: vietnam⟨LOC⟩ gian_hàng Apple ⟨LOC⟩",
"Annotator:",
"vietnamgian_hàng ⟨ORG⟩ Apple ⟨ORG⟩",
"We compare the predicted NEs to the gold NEs ($Fig. 1$), if they have the same range, the predicted NE is a correct or Wrong tag. If it has different range with the gold NE, we will see what type of wrong it is. If it does not have any overlap, it is a No extraction. If it has an overlap and the tag is the same at gold NE, it is a Wrong range. Finally, it is a Wrong range and tag if it has an overlap but the tag is different. The steps in Fig. 2 is the same at Fig. 1 and the different only is we compare the gold NE to the predicted NE, and No extraction type will be No annotation."
],
[
"To conduct error analysis of the model, we used the corpus which are provided by VLSP 2016 - Named Entity Recognition. The dataset contains four different types of label: Location (LOC), Person (PER), Organization (ORG) and Miscellaneous - Name of an entity that do not belong to 3 types above (Table TABREF15). Although the corpus has more information about the POS and chunks, but we do not use them as features in our model.",
"There are two folders with 267 text files of training data and 45 text files of test data. They all have their own format. We take 21 first text files and 22 last text files and 22 sentences of the 22th text file and 55 sentences of the 245th text file to be a development data. The remaining files are going to be the training data. The test file is the same at the file VSLP gave. Finally, we have 3 text files only based on the CoNLL 2003 format: train, dev and test."
],
[
"We use the word embeddings for Vietnamese that created by Kyubyong Park and Edouard Grave at al:",
"Kyubyong Park: In his project, he uses two methods including fastText and word2vec to generate word embeddings from wikipedia database backup dumps. His word embedding is the vector of 100 dimension and it has about 10k words.",
"Edouard Grave et al BIBREF11: They use fastText tool to generate word embeddings from Wikipedia. The format is the same at Kyubyong's, but their embedding is the vector of 300 dimension, and they have about 200k words"
],
[
"Based on state-of-the-art methods for NER, BLSTM-CNN-CRF is the end-to-end deep neural network model that achieves the best result on F-score BIBREF9. Therefore, we decide to conduct the experiment on this model and analyze the errors.",
"We run experiment with the Ma and Hovy (2016) model BIBREF8, source code provided by (Motoki Sato) and analysis the errors from this result. Before we decide to analysis on this result, we have run some other methods, but this one with Vietnamese pre-trained word embeddings provided by Kyubyong Park obtains the best result. Other results are shown in the Table 2."
],
[
"Table 2 shows our experiments on two models with and without different pre-trained word embedding – KP means the Kyubyong Park’s pre-trained word embeddings and EG means Edouard Grave’s pre-trained word embeddings.",
"We compare the outputs of BLSTM-CNN-CRF model (predicted) to the annotated data (gold) and analyzed the errors. Table 3 shows perfomance of the BLSTM-CNN-CRF model. In our experiments, we use three evaluation parameters (precision, recall, and F1 score) to access our experimental result. They will be described as follow in Table 3. The \"correctNE\", the number of correct label for entity that the model can found. The \"goldNE\", number of the real label annotated by annotator in the gold data. The \"foundNE\", number of the label the model find out (no matter if they are correct or not).",
"In Table 3 above, we can see that recall score on ORG label is lowest. The reason is almost all the ORG label on test file is name of some brands that do not appear on training data and pre-trained word embedding. On the other side, the characters inside these brand names also inside the other names of person in the training data. The context from both side of the sentence (future- and past-feature) also make the model \"think\" the name entity not as it should be.",
"Table 4 shows that the biggest number of errors is No extraction. The errors were counted by using logical sum (OR) of the gold labels and predicted labels (predicted by the model). The second most frequent error was Wrong tag means the model extract it's a NE but wrong tag."
],
[
"First of all, we will compare the predicted NEs to the gold NEs (Fig. 1). Table 4 shows the summary of errors by types based on the gold labels, the \"correct\" is the number of gold tag that the model predicted correctly, \"error\" is the number of gold tag that the model predicted incorrectly, and \"total\" is sum of them. Four columns next show the number of type errors on each label.",
"Table 5 shows that Person, Location and Organization is the main reason why No extraction and Wrong tag are high.",
"After analyzing based on the gold NEs, we figure out the reason is:",
"Almost all the NEs is wrong, they do not appear on training data and pre-trained embedding. These NEs vector will be initial randomly, therefore, these vectors are poor which means have no semantic aspect.",
"The \"weird\" ORG NE in the sentence appear together with other words have context of PER, so this \"weird\" ORG NE is going to be label at PER.",
"For example:",
"gold data: vietnamVĐV được xem là đầu_tiên ký hợp_đồng quảng_cáo là võ_sĩ ⟨PER⟩ Trần Quang Hạ ⟨PER⟩ sau khi đoạt HCV taekwondo Asiad ⟨LOC⟩ Hiroshima ⟨LOC⟩.",
"(The athlete is considered the first to sign a contract of boxing Tran Quang Ha after winning the gold medal Asiad Hiroshima)",
"predicted data: vietnam…là võ_sĩ ⟨PER⟩Trần Quang Hạ⟨PER⟩ sau khi đoạt HCV taekwondo Asiad ⟨PER⟩Hiroshima⟨PER⟩.",
"Some mistakes of the model are from training set, for example, anonymous person named \"P.\" appears many times in the training set, so when model meets \"P.\" in context of \"P. 3 vietnamQuận 9\" (Ward 3, District 9) – \"P.\" stands for vietnam\"Phường\" (Ward) model will predict \"P.\" as a PER.",
"Training data: vietnamnếu ⟨PER⟩P.⟨PER⟩ có ở đây – (If P. were here) Predicted data: vietnam⟨PER⟩P. 3⟨PER⟩, Gò_vấp – (Ward 3, Go_vap District)"
],
[
"Table 6 shows the summary of errors by types based on the predicted data. After analyzing the errors on predicted and gold data, we noticed that the difference of these errors are mainly in the No anotation and No extraction. Therefore, we only mention the main reasons for the No anotation:",
"Most of the wrong labels that model assigns are brand names (Ex: Charriol, Dream, Jupiter, ...), words are abbreviated vietnam(XKLD – xuất khẩu lao động (labour export)), movie names, … All of these words do not appear in training data and word embedding. Perhaps these reasons are the followings:",
"The vectors of these words are random so the semantic aspect is poor.",
"The hidden states of these words also rely on past feature (forward pass) and future feature (backward pass) of the sentence. Therefore, they are assigned wrongly because of their context.",
"These words are primarily capitalized or all capital letters, so they are assigned as a name entity. This error is caused by the CNN layer extract characters information of the word.",
"Table 7 shows the detail of errors on predicted data where we will see number kind of errors on each label."
],
[
"After considering the training and test data, we realized that this data has many problems need to be fixed in the next run experiments. The annotators are not consistent between the training data and the test data, more details are shown as follow:",
"The organizations are labeled in the train data but not labeled in the test data:",
"Training data: vietnam⟨ORG⟩ Sở Y_tế ⟨ORG⟩ (Department of Health)",
"Test data: vietnamSở Y_tế (Department of Health)",
"Explanation: vietnam\"Sở Y_tế\" in train and test are the same name of organization entity. However the one in test data is not labeled.",
"The entity has the same meaning but is assigned differently between the train data and the test:",
"Training data: vietnam⟨MISC⟩ người Việt ⟨MISC⟩ (Vietnamese people)",
"Test data: vietnamdân ⟨LOC⟩ Việt ⟨LOC⟩ (Vietnamese people)",
"Explanation: vietnamBoth \"người Việt\" in train data and \"dân Việt\" in test data are the same meaning, but they are assigned differently.",
"The range of entities are differently between the train data and the test data:",
"Training data: vietnam⟨LOC⟩ làng Atâu ⟨LOC⟩ (Atâu village)",
"Test data: vietnamlàng ⟨LOC⟩ Hàn_Quốc ⟨LOC⟩ (Korea village)",
"Explanation: The two villages differ only in name, but they are labeled differently in range",
"Capitalization rules are not unified with a token is considered an entity:",
"Training data: vietnam⟨ORG⟩ Công_ty Inmasco ⟨ORG⟩ (Inmasco Company)",
"Training data: vietnamcông_ty con (Subsidiaries)",
"Test data: vietnamcông_ty ⟨ORG⟩ Yeon Young Entertainment ⟨ORG⟩ (Yeon Young Entertainment company)",
"Explanation: If it comes to a company with a specific name, it should be labeled vietnam⟨ORG⟩ Công_ty Yeon Young Entertainment ⟨ORG⟩ with \"C\" in capital letters."
],
[
"In this paper, we have presented a thorough study of distinctive error distributions produced by Bi-LSTM-CNN-CRF for the Vietnamese language. This would be helpful for researchers to create better NER models.",
"Based on the analysis results, we suggest some possible directions for improvement of model and for the improvement of data-driven NER for the Vietnamese language in future:",
"The word at the begin of the sentence is capitalized, so, if the name of person is at this position, model will ignore them (no extraction). To improve this issue, we can use the POS feature together with BIO format (Inside, Outside, Beginning) BIBREF6 at the top layer (CRF).",
"If we can unify the labeling of the annotators between the train, dev and test sets. We will improve data quality and classifier.",
"It is better if there is a pre-trained word embeddings that overlays the data, and segmentation algorithm need to be more accurately."
]
],
"section_name": [
"Introduction",
"Related work",
"Error-analysis method",
"Data and model ::: Data sets",
"Data and model ::: Pre-trained word Embeddings",
"Data and model ::: Model",
"Experiment and Results",
"Experiment and Results ::: Error analysis on gold data",
"Experiment and Results ::: Analysis on predicted data",
"Experiment and Results ::: Errors of annotators",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"20b9bd9b3d0d70cf39bfdd986a5fd5d78f702e0f"
],
"answer": [
{
"evidence": [
"We use the word embeddings for Vietnamese that created by Kyubyong Park and Edouard Grave at al:",
"Kyubyong Park: In his project, he uses two methods including fastText and word2vec to generate word embeddings from wikipedia database backup dumps. His word embedding is the vector of 100 dimension and it has about 10k words.",
"Edouard Grave et al BIBREF11: They use fastText tool to generate word embeddings from Wikipedia. The format is the same at Kyubyong's, but their embedding is the vector of 300 dimension, and they have about 200k words"
],
"extractive_spans": [
"Kyubyong Park",
"Edouard Grave et al BIBREF11"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the word embeddings for Vietnamese that created by Kyubyong Park and Edouard Grave at al:\n\nKyubyong Park: In his project, he uses two methods including fastText and word2vec to generate word embeddings from wikipedia database backup dumps.",
"Edouard Grave et al BIBREF11: They use fastText tool to generate word embeddings from Wikipedia."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"005a24a2b8b811b9cdc7cafd54a4b71a9e9d480f"
],
"answer": [
{
"evidence": [
"Step 2: Based on the best results (BLSTM-CNN-CRF), error analysis is performed based on five types of errors (No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag), in a way similar to BIBREF10, but we analyze on both gold labels and predicted labels (more detail in figure 1 and 2)."
],
"extractive_spans": [
"No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag"
],
"free_form_answer": "",
"highlighted_evidence": [
"Based on the best results (BLSTM-CNN-CRF), error analysis is performed based on five types of errors (No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag), in a way similar to BIBREF10, but we analyze on both gold labels and predicted labels (more detail in figure 1 and 2)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"fa1cc9386772d41918c8d3a69201067dbdbf5dba"
],
"answer": [
{
"evidence": [
"Table 2 shows our experiments on two models with and without different pre-trained word embedding – KP means the Kyubyong Park’s pre-trained word embeddings and EG means Edouard Grave’s pre-trained word embeddings.",
"FLOAT SELECTED: Table 2. F1 score of two models with different pre-trained word embeddings"
],
"extractive_spans": [],
"free_form_answer": "Best BLSTM-CNN-CRF had F1 score 86.87 vs 86.69 of best BLSTM-CRF ",
"highlighted_evidence": [
"Table 2 shows our experiments on two models with and without different pre-trained word embedding – KP means the Kyubyong Park’s pre-trained word embeddings and EG means Edouard Grave’s pre-trained word embeddings.",
"FLOAT SELECTED: Table 2. F1 score of two models with different pre-trained word embeddings"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What word embeddings were used?",
"What type of errors were produced by the BLSTM-CNN-CRF system?",
"How much better was the BLSTM-CNN-CRF than the BLSTM-CRF?"
],
"question_id": [
"f46a907360d75ad566620e7f6bf7746497b6e4a9",
"79d999bdf8a343ce5b2739db3833661a1deab742",
"71d59c36225b5ee80af11d3568bdad7425f17b0c"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. Chart flow to analyze errors based on gold labels",
"Fig. 2. Chart flow to analyze errors based on predicted labels",
"Table 1. Number type of each tags in the corpus",
"Table 2. F1 score of two models with different pre-trained word embeddings",
"Table 3. Performances of LSTM-CNN-CRF on the Vietnamese NER corpus",
"Table 4. Summary of error results on gold data",
"Table 5. Summary of detailed error results on gold data",
"Table 6. Summary of error results on predicted data",
"Table 7. Summary of detailed error results on predicted data"
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"5-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"9-Table6-1.png",
"10-Table7-1.png"
]
} | [
"How much better was the BLSTM-CNN-CRF than the BLSTM-CRF?"
] | [
[
"1911.07228-Experiment and Results-0",
"1911.07228-7-Table2-1.png"
]
] | [
"Best BLSTM-CNN-CRF had F1 score 86.87 vs 86.69 of best BLSTM-CRF "
] | 71 |
1603.07044 | Recurrent Neural Network Encoder with Attention for Community Question Answering | We apply a general recurrent neural network (RNN) encoder framework to community question answering (cQA) tasks. Our approach does not rely on any linguistic processing, and can be applied to different languages or domains. Further improvements are observed when we extend the RNN encoders with a neural attention mechanism that encourages reasoning over entire sequences. To deal with practical issues such as data sparsity and imbalanced labels, we apply various techniques such as transfer learning and multitask learning. Our experiments on the SemEval-2016 cQA task show 10% improvement on a MAP score compared to an information retrieval-based approach, and achieve comparable performance to a strong handcrafted feature-based method. | {
"paragraphs": [
[
"Community question answering (cQA) is a paradigm that provides forums for users to ask or answer questions on any topic with barely any restrictions. In the past decade, these websites have attracted a great number of users, and have accumulated a large collection of question-comment threads generated by these users. However, the low restriction results in a high variation in answer quality, which makes it time-consuming to search for useful information from the existing content. It would therefore be valuable to automate the procedure of ranking related questions and comments for users with a new question, or when looking for solutions from comments of an existing question.",
"Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments.",
"To overcome these issues, most previous work (e.g. SemEval 2015 BIBREF1 ) relied heavily on additional features and reasoning capabilities. In BIBREF2 , a neural attention-based model was proposed for automatically recognizing entailment relations between pairs of natural language sentences. In this study, we first modify this model for all three cQA tasks. We also extend this framework into a jointly trained model when the external resources are available, i.e. selecting an external comment when we know the question that the external comment answers (Task C).",
"Our ultimate objective is to classify relevant questions and comments without complicated handcrafted features. By applying RNN-based encoders, we avoid heavily engineered features and learn the representation automatically. In addition, an attention mechanism augments encoders with the ability to attend to past outputs directly. This becomes helpful when encoding longer sequences, since we no longer need to compress all information into a fixed-length vector representation.",
"In our view, existing annotated cQA corpora are generally too small to properly train an end-to-end neural network. To address this, we investigate transfer learning by pretraining the recurrent systems on other corpora, and also generating additional instances from existing cQA corpus."
],
[
"Earlier work of community question answering relied heavily on feature engineering, linguistic tools, and external resource. BIBREF3 and BIBREF4 utilized rich non-textual features such as answer's profile. BIBREF5 syntactically analyzed the question and extracted name entity features. BIBREF6 demonstrated a textual entailment system can enhance cQA task by casting question answering to logical entailment.",
"More recent work incorporated word vector into their feature extraction system and based on it designed different distance metric for question and answer BIBREF7 BIBREF8 . While these approaches showed effectiveness, it is difficult to generalize them to common cQA tasks since linguistic tools and external resource may be restrictive in other languages and features are highly customized for each cQA task.",
"Very recent work on answer selection also involved the use of neural networks. BIBREF9 used LSTM to construct a joint vector based on both the question and the answer and then converted it into a learning to rank problem. BIBREF10 proposed several convolutional neural network (CNN) architectures for cQA. Our method differs in that RNN encoder is applied here and by adding attention mechanism we jointly learn which words in question to focus and hence available to conduct qualitative analysis. During classification, we feed the extracted vector into a feed-forward neural network directly instead of using mean/max pooling on top of each time steps."
],
[
"In this section, we first discuss long short-term memory (LSTM) units and an associated attention mechanism. Next, we explain how we can encode a pair of sentences into a dense vector for predicting relationships using an LSTM with an attention mechanism. Finally, we apply these models to predict question-question similarity, question-comment similarity, and question-external comment similarity."
],
[
"LSTMs have shown great success in many different fields. An LSTM unit contains a memory cell with self-connections, as well as three multiplicative gates to control information flow. Given input vector $x_t$ , previous hidden outputs $h_{t-1}$ , and previous cell state $c_{t-1}$ , LSTM units operate as follows: ",
"$$X &= \\begin{bmatrix}\nx_t\\\\[0.3em]\nh_{t-1}\\\\[0.3em]\n\\end{bmatrix}\\\\\ni_t &= \\sigma (\\mathbf {W_{iX}}X + \\mathbf {W_{ic}}c_{t-1} + \\mathbf {b_i})\\\\\nf_t &= \\sigma (\\mathbf {W_{fX}}X + \\mathbf {W_{fc}}c_{t-1} + \\mathbf {b_f})\\\\\no_t &= \\sigma (\\mathbf {W_{oX}}X + \\mathbf {W_{oc}}c_{t-1} + \\mathbf {b_o})\\\\\nc_t &= f_t \\odot c_{t-1} + i_t \\odot tanh(\\mathbf {W_{cX}}X + \\mathbf {b_c})\\\\\nh_t &= o_t \\odot tanh(c_t)$$ (Eq. 3) ",
"where $i_t$ , $f_t$ , $o_t$ are input, forget, and output gates, respectively. The sigmoid function $\\sigma ()$ is a soft gate function controlling the amount of information flow. $W$ s and $b$ s are model parameters to learn."
],
[
"A traditional RNN encoder-decoder approach BIBREF11 first encodes an arbitrary length input sequence into a fixed-length dense vector that can be used as input to subsequent classification models, or to initialize the hidden state of a secondary decoder. However, the requirement to compress all necessary information into a single fixed length vector can be problematic. A neural attention model BIBREF12 BIBREF13 has been recently proposed to alleviate this issue by enabling the network to attend to past outputs when decoding. Thus, the encoder no longer needs to represent an entire sequence with one vector; instead, it encodes information into a sequence of vectors, and adaptively chooses a subset of the vectors when decoding."
],
[
"In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs. Parallel LSTMs encode two objects independently, and then concatenate their outputs as an input to a feed-forward neural network (FNN) with a softmax output layer for classification.",
"The representations of the two objects are generated independently in this manner. However, we are more interested in the relationship instead of the object representations themselves. Therefore, we consider a serialized LSTM-encoder model in the right side of Figure 1 that is similar to that in BIBREF2 , but also allows an augmented feature input to the FNN classifier.",
"Figure 2 illustrates our attention framework in more detail. The first LSTM reads one object, and passes information through hidden units to the second LSTM. The second LSTM then reads the other object and generates the representation of this pair after the entire sequence is processed. We build another FNN that takes this representation as input to classify the relationship of this pair.",
"By adding an attention mechanism to the encoder, we allow the second LSTM to attend to the sequence of output vectors from the first LSTM, and hence generate a weighted representation of first object according to both objects. Let $h_N$ be the last output of second LSTM and $M = [h_1, h_2, \\cdots , h_L]$ be the sequence of output vectors of the first object. The weighted representation of the first object is ",
"$$h^{\\prime } = \\sum _{i=1}^{L} \\alpha _i h_i$$ (Eq. 7) ",
"The weight is computed by ",
"$$\\alpha _i = \\dfrac{exp(a(h_i,h_N))}{\\sum _{j=1}^{L}exp(a(h_j,h_N))}$$ (Eq. 8) ",
"where $a()$ is the importance model that produces a higher score for $(h_i, h_N)$ if $h_i$ is useful to determine the object pair's relationship. We parametrize this model using another FNN. Note that in our framework, we also allow other augmented features (e.g., the ranking score from the IR system) to enhance the classifier. So the final input to the classifier will be $h_N$ , $h^{\\prime }$ , as well as augmented features."
],
[
"For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC).",
"Figure 3 shows our framework: the three lower models are separate serialized LSTM-encoders for the three respective object pairs, whereas the upper model is an FNN that takes as input the concatenation of the outputs of three encoders, and predicts the relationships for all three pairs. More specifically, the output layer consists of three softmax layers where each one is intended to predict the relationship of one particular pair.",
"For the overall loss function, we combine three separate loss functions using a heuristic weight vector $\\beta $ that allocates a higher weight to the main task (oriQ-relC relationship prediction) as follows: ",
"$$\\mathcal {L} = \\beta _1 \\mathcal {L}_1 + \\beta _2 \\mathcal {L}_2 + \\beta _3 \\mathcal {L}_3$$ (Eq. 11) ",
"By doing so, we hypothesize that the related tasks can improve the main task by leveraging commonality among all tasks."
],
[
"We evaluate our approach on all three cQA tasks. We use the cQA datasets provided by the Semeval 2016 task . The cQA data is organized as follows: there are 267 original questions, each question has 10 related question, and each related question has 10 comments. Therefore, for task A, there are a total number of 26,700 question-comment pairs. For task B, there are 2,670 question-question pairs. For task C, there are 26,700 question-comment pairs. The test dataset includes 50 questions, 500 related questions and 5,000 comments which do not overlap with the training set. To evaluate the performance, we use mean average precision (MAP) and F1 score."
],
[
"Table 2 shows the initial results using the RNN encoder for different tasks. We observe that the attention model always gets better results than the RNN without attention, especially for task C. However, the RNN model achieves a very low F1 score. For task B, it is even worse than the random baseline. We believe the reason is because for task B, there are only 2,670 pairs for training which is very limited training for a reasonable neural network. For task C, we believe the problem is highly imbalanced data. Since the related comments did not directly comment on the original question, more than $90\\%$ of the comments are labeled as irrelevant to the original question. The low F1 (with high precision and low recall) means our system tends to label most comments as irrelevant. In the following section, we investigate methods to address these issues."
],
[
"One way to improve models trained on limited data is to use external data to pretrain the neural network. We therefore considered two different datasets for this task.",
"Cross-domain: The Stanford natural language inference (SNLI) corpus BIBREF17 has a huge amount of cleaned premise and hypothesis pairs. Unfortunately the pairs are for a different task. The relationship between the premise and hypothesis may be similar to the relation between questions and comments, but may also be different.",
"In-domain: since task A seems has reasonable performance, and the network is also well-trained, we could use it directly to initialize task B.",
"To utilize the data, we first trained the model on each auxiliary data (SNLI or Task A) and then removed the softmax layer. After that, we retrain the network using the target data with a softmax layer that was randomly initialized.",
"For task A, the SNLI cannot improve MAP or F1 scores. Actually it slightly hurts the performance. We surmise that it is probably because the domain is different. Further investigation is needed: for example, we could only use the parameter for embedding layers etc. For task B, the SNLI yields a slight improvement on MAP ( $0.2\\%$ ), and Task A could give ( $1.2\\%$ ) on top of that. No improvement was observed on F1. For task C, pretraining by task A is also better than using SNLI (task A is $1\\%$ better than the baseline, while SNLI is almost the same).",
"In summary, the in-domain pretraining seems better, but overall, the improvement is less than we expected, especially for task B, which only has very limited target data. We will not make a conclusion here since more investigation is needed."
],
[
"As mentioned in Section \"Modeling Question-External Comments\" , we also explored a multitask learning framework that jointly learns to predict the relationships of all three tasks. We set $0.8$ for the main task (task C) and $0.1$ for the other auxiliary tasks. The MAP score did not improve, but F1 increases to $0.1617$ . We believe this is because other tasks have more balanced labels, which improves the shared parameters for task C."
],
[
"There are many sources of external question-answer pairs that could be used in our tasks. For example: WebQuestion (was introduced by the authors of SEMPRE system BIBREF18 ) and The SimpleQuestions dataset . All of them are positive examples for our task and we can easily create negative examples from it. Initial experiments indicate that it is very easy to overfit these obvious negative examples. We believe this is because our negative examples are non-informative for our task and just introduce noise.",
"Since the external data seems to hurt the performance, we try to use the in-domain pairs to enhance task B and task C. For task B, if relative question 1 (rel1) and relative question 2 (rel2) are both relevant to the original question, then we add a positive sample (rel1, rel2, 1). If either rel1 and rel2 is irrelevant and the other is relevant, we add a negative sample (rel1, rel2, 0). After doing this, the samples of task B increase from $2,670$ to $11,810$ . By applying this method, the MAP score increased slightly from $0.5723$ to $0.5789$ but the F1 score improved from $0.4334$ to $0.5860$ .",
"For task C, we used task A's data directly. The results are very similar with a slight improvement on MAP, but large improvement on F1 score from $0.1449$ to $0.2064$ ."
],
[
"To further enhance the system, we incorporate a one hot vector of the original IR ranking as an additional feature into the FNN classifier. Table 3 shows the results. In comparing the models with and without augmented features, we can see large improvement for task B and C. The F1 score for task A degrades slightly but MAP improves. This might be because task A already had a substantial amount of training data."
],
[
"Table 4 gives the final comparison between different models (we only list the MAP score because it is the official score for the challenge). Since the two baseline models did not use any additional data, in this table our system was also restricted to the provided training data. For task A, we can see that if there is enough training data our single system already performs better than a very strong feature-rich based system. For task B, since only limited training data is given, both feature-rich based system and our system are worse than the IR system. For task C, our system also got comparable results with the feature-rich based system. If we do a simple system combination (average the rank score) between our system and the IR system, the combined system will give large gains on tasks B and C. This implies that our system is complimentary with the IR system."
],
[
"In addition to quantitative analysis, it is natural to qualitatively evaluate the performance of the attention mechanism by visualizing the weight distribution of each instance. We randomly picked several instances from the test set in task A, for which the sentence lengths are more moderate for demonstration. These examples are shown in Figure 5 , and categorized into short, long, and noisy sentences for discussion. A darker blue patch refers to a larger weight relative to other words in the same sentence."
],
[
"Figure 5 illustrates two cQA examples whose questions are relatively short. The comments corresponding to these questions are “...snorkeling two days ago off the coast of dukhan...” and “the doha international airport...”. We can observe that our model successfully learns to focus on the most representative part of the question pertaining to classifying the relationship, which is \"place for snorkeling\" for the first example and “place can ... visited in qatar” for the second example."
],
[
"In Figure 5 , we investigate two examples with longer questions, which both contain 63 words. Interestingly, the distribution of weights does not become more uniform; the model still focuses attention on a small number of hot words, for example, “puppy dog for ... mall” and “hectic driving in doha ... car insurance ... quite costly”. Additionally, some words that appear frequently but carry little information for classification are assigned very small weights, such as I/we/my, is/am, like, and to."
],
[
"Due to the open nature of cQA forums, some content is noisy. Figure 5 is an example with excessive usage of question marks. Again, our model exhibits its robustness by allocating very low weights to the noise symbols and therefore excludes the noninformative content."
],
[
"In this paper, we demonstrate that a general RNN encoder framework can be applied to community question answering tasks. By adding a neural attention mechanism, we showed quantitatively and qualitatively that attention can improve the RNN encoder framework. To deal with a more realistic scenario, we expanded the framework to incorporate metadata as augmented inputs to a FNN classifier, and pretrained models on larger datasets, increasing both stability and performance. Our model is consistently better than or comparable to a strong feature-rich baseline system, and is superior to an IR-based system when there is a reasonable amount of training data.",
"Our model is complimentary with an IR-based system that uses vast amounts of external resources but trained for general purposes. By combining the two systems, it exceeds the feature-rich and IR-based system in all three tasks.",
"Moreover, our approach is also language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. The results are competitive with a hand-tuned strong baseline from SemEval-2015.",
"Future work could proceed in two directions: first, we can enrich the existing system by incorporating available metadata and preprocessing data with morphological normalization and out-of-vocabulary mappings; second, we can reinforce our model by carrying out word-by-word and history-aware attention mechanisms instead of attending only when reading the last word."
]
],
"section_name": [
"Introduction",
"Related Work",
"Method",
"LSTM Models",
"Neural Attention",
"Predicting Relationships of Object Pairs with an Attention Model",
"Modeling Question-External Comments",
"Experiments",
"Preliminary Results",
"Robust Parameter Initialization",
"Multitask Learning",
"Augmented data",
"Augmented features",
"Comparison with Other Systems",
"Analysis of Attention Mechanism",
"Short Sentences",
"Long Sentences",
"Noisy Sentence",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"005fda1710dc27880d84605c9bb3971e626fda3b"
],
"answer": [
{
"evidence": [
"Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments.",
"In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs. Parallel LSTMs encode two objects independently, and then concatenate their outputs as an input to a feed-forward neural network (FNN) with a softmax output layer for classification.",
"For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC)."
],
"extractive_spans": [],
"free_form_answer": "Multitask learning is used for the task of predicting relevance of a comment on a different question to a given question, where the supplemental tasks are predicting relevance between the questions, and between the comment and the corresponding question",
"highlighted_evidence": [
"Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C).",
"In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant.",
"For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"99669777a05f235adcbdaa21bb372fb9ecc5a542"
]
},
{
"annotation_id": [
"4c8ee0a6a696fcf32952cf3af380a67a2f13d3dc"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"99669777a05f235adcbdaa21bb372fb9ecc5a542"
]
},
{
"annotation_id": [
"0e10c370139082b10a811c4b9dd46fb990dc2ea7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: Compared with other systems (bold is best)."
],
"extractive_spans": [],
"free_form_answer": "0.007 MAP on Task A, 0.032 MAP on Task B, 0.055 MAP on Task C",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Compared with other systems (bold is best)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"99669777a05f235adcbdaa21bb372fb9ecc5a542"
]
},
{
"annotation_id": [
"fd5c3d425ea41f2498d7231e6f3f86aa27294e59"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"99669777a05f235adcbdaa21bb372fb9ecc5a542"
]
},
{
"annotation_id": [
"92d9c65afd196f9731be8244c24e2fa52f2ff870"
],
"answer": [
{
"evidence": [
"Moreover, our approach is also language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. The results are competitive with a hand-tuned strong baseline from SemEval-2015."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"99669777a05f235adcbdaa21bb372fb9ecc5a542"
]
}
],
"nlp_background": [
"five",
"five",
"two",
"two",
"two"
],
"paper_read": [
"somewhat",
"somewhat",
"no",
"no",
"no"
],
"question": [
"What supplemental tasks are used for multitask learning?",
"Is the improvement actually coming from using an RNN?",
"How much performance gap between their approach and the strong handcrafted method?",
"What is a strong feature-based method?",
"Did they experimnet in other languages?"
],
"question_id": [
"efc65e5032588da4a134d121fe50d49fe8fe5e8c",
"a30958c7123d1ad4723dcfd19d8346ccedb136d5",
"08333e4dd1da7d6b5e9b645d40ec9d502823f5d7",
"bc1bc92920a757d5ec38007a27d0f49cb2dde0d1",
"942eb1f7b243cdcfd47f176bcc71de2ef48a17c4"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question answering",
"question answering",
"Question Answering",
"Question Answering",
"Question Answering"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: RNN encoder for related question/comment selection.",
"Figure 2: Neural attention model for related question/comment selection.",
"Figure 3: Joint learning for external comment selection.",
"Figure 4: IR-based system and feature-rich based system.",
"Table 2: The RNN encoder results for cQA tasks (bold is best).",
"Table 3: cQA task results with augmented features (bold is best).",
"Table 4: Compared with other systems (bold is best).",
"Figure 5: Visualization of attention mechanism on short, long, and noisy sentences."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"5-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Figure5-1.png"
]
} | [
"What supplemental tasks are used for multitask learning?",
"How much performance gap between their approach and the strong handcrafted method?"
] | [
[
"1603.07044-Predicting Relationships of Object Pairs with an Attention Model-0",
"1603.07044-Modeling Question-External Comments-0",
"1603.07044-Introduction-1"
],
[
"1603.07044-7-Table4-1.png"
]
] | [
"Multitask learning is used for the task of predicting relevance of a comment on a different question to a given question, where the supplemental tasks are predicting relevance between the questions, and between the comment and the corresponding question",
"0.007 MAP on Task A, 0.032 MAP on Task B, 0.055 MAP on Task C"
] | 72 |
1902.09314 | Attentional Encoder Network for Targeted Sentiment Classification | Targeted sentiment classification aims at determining the sentimental tendency towards specific targets. Most of the previous approaches model context and target words with RNN and attention. However, RNNs are difficult to parallelize and truncated backpropagation through time brings difficulty in remembering long-term patterns. To address this issue, this paper proposes an Attentional Encoder Network (AEN) which eschews recurrence and employs attention based encoders for the modeling between context and target. We raise the label unreliability issue and introduce label smoothing regularization. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate the effectiveness and lightweight of our model. | {
"paragraphs": [
[
"Targeted sentiment classification is a fine-grained sentiment analysis task, which aims at determining the sentiment polarities (e.g., negative, neutral, or positive) of a sentence over “opinion targets” that explicitly appear in the sentence. For example, given a sentence “I hated their service, but their food was great”, the sentiment polarities for the target “service” and “food” are negative and positive respectively. A target is usually an entity or an entity aspect.",
"In recent years, neural network models are designed to automatically learn useful low-dimensional representations from targets and contexts and obtain promising results BIBREF0 , BIBREF1 . However, these neural network models are still in infancy to deal with the fine-grained targeted sentiment classification task.",
"Attention mechanism, which has been successfully used in machine translation BIBREF2 , is incorporated to enforce the model to pay more attention to context words with closer semantic relations with the target. There are already some studies use attention to generate target-specific sentence representations BIBREF3 , BIBREF4 , BIBREF5 or to transform sentence representations according to target words BIBREF6 . However, these studies depend on complex recurrent neural networks (RNNs) as sequence encoder to compute hidden semantics of texts.",
"The first problem with previous works is that the modeling of text relies on RNNs. RNNs, such as LSTM, are very expressive, but they are hard to parallelize and backpropagation through time (BPTT) requires large amounts of memory and computation. Moreover, essentially every training algorithm of RNN is the truncated BPTT, which affects the model's ability to capture dependencies over longer time scales BIBREF7 . Although LSTM can alleviate the vanishing gradient problem to a certain extent and thus maintain long distance information, this usually requires a large amount of training data. Another problem that previous studies ignore is the label unreliability issue, since neutral sentiment is a fuzzy sentimental state and brings difficulty for model learning. As far as we know, we are the first to raise the label unreliability issue in the targeted sentiment classification task.",
"This paper propose an attention based model to solve the problems above. Specifically, our model eschews recurrence and employs attention as a competitive alternative to draw the introspective and interactive semantics between target and context words. To deal with the label unreliability issue, we employ a label smoothing regularization to encourage the model to be less confident with fuzzy labels. We also apply pre-trained BERT BIBREF8 to this task and show our model enhances the performance of basic BERT model. Experimental results on three benchmark datasets show that the proposed model achieves competitive performance and is a lightweight alternative of the best RNN based models.",
"The main contributions of this work are presented as follows:"
],
[
"The research approach of the targeted sentiment classification task including traditional machine learning methods and neural networks methods.",
"Traditional machine learning methods, including rule-based methods BIBREF9 and statistic-based methods BIBREF10 , mainly focus on extracting a set of features like sentiment lexicons features and bag-of-words features to train a sentiment classifier BIBREF11 . The performance of these methods highly depends on the effectiveness of the feature engineering works, which are labor intensive.",
"In recent years, neural network methods are getting more and more attention as they do not need handcrafted features and can encode sentences with low-dimensional word vectors where rich semantic information stained. In order to incorporate target words into a model, Tang et al. tang2016effective propose TD-LSTM to extend LSTM by using two single-directional LSTM to model the left context and right context of the target word respectively. Tang et al. tang2016aspect design MemNet which consists of a multi-hop attention mechanism with an external memory to capture the importance of each context word concerning the given target. Multiple attention is paid to the memory represented by word embeddings to build higher semantic information. Wang et al. wang2016attention propose ATAE-LSTM which concatenates target embeddings with word representations and let targets participate in computing attention weights. Chen et al. chen2017recurrent propose RAM which adopts multiple-attention mechanism on the memory built with bidirectional LSTM and nonlinearly combines the attention results with gated recurrent units (GRUs). Ma et al. ma2017interactive propose IAN which learns the representations of the target and context with two attention networks interactively."
],
[
"Given a context sequence INLINEFORM0 and a target sequence INLINEFORM1 , where INLINEFORM2 is a sub-sequence of INLINEFORM3 . The goal of this model is to predict the sentiment polarity of the sentence INLINEFORM4 over the target INLINEFORM5 .",
"Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT."
],
[
"Let INLINEFORM0 to be the pre-trained GloVe BIBREF12 embedding matrix, where INLINEFORM1 is the dimension of word vectors and INLINEFORM2 is the vocabulary size. Then we map each word INLINEFORM3 to its corresponding embedding vector INLINEFORM4 , which is a column in the embedding matrix INLINEFORM5 .",
"BERT embedding uses the pre-trained BERT to generate word vectors of sequence. In order to facilitate the training and fine-tuning of BERT model, we transform the given context and target to “[CLS] + context + [SEP]” and “[CLS] + target + [SEP]” respectively."
],
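As a small illustration of the "[CLS] + context + [SEP]" packing described above, the snippet below builds the two input sequences with plain whitespace tokenization; a real implementation would use BERT's WordPiece tokenizer, so this helper and its names are only an assumption for illustration.

```python
def pack_for_bert(text):
    """Wrap a whitespace-tokenized sequence with BERT's special tokens.

    The paper feeds context and target to BERT separately in this format;
    an actual implementation would use WordPiece subword tokenization.
    """
    return ["[CLS]"] + text.lower().split() + ["[SEP]"]

context = pack_for_bert("I hated their service, but their food was great")
target = pack_for_bert("service")
print(context)  # ['[CLS]', 'i', 'hated', ..., '[SEP]']
print(target)   # ['[CLS]', 'service', '[SEP]']
```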
[
"The attentional encoder layer is a parallelizable and interactive alternative of LSTM and is applied to compute the hidden states of the input embeddings. This layer consists of two submodules: the Multi-Head Attention (MHA) and the Point-wise Convolution Transformation (PCT).",
"Multi-Head Attention (MHA) is the attention that can perform multiple attention function in parallel. Different from Transformer BIBREF13 , we use Intra-MHA for introspective context words modeling and Inter-MHA for context-perceptive target words modeling, which is more lightweight and target is modeled according to a given context.",
"An attention function maps a key sequence INLINEFORM0 and a query sequence INLINEFORM1 to an output sequence INLINEFORM2 : DISPLAYFORM0 ",
" where INLINEFORM0 denotes the alignment function which learns the semantic relevance between INLINEFORM1 and INLINEFORM2 : DISPLAYFORM0 ",
" where INLINEFORM0 are learnable weights.",
"MHA can learn n_head different scores in parallel child spaces and is very powerful for alignments. The INLINEFORM0 outputs are concatenated and projected to the specified hidden dimension INLINEFORM1 , namely, DISPLAYFORM0 ",
" where “ INLINEFORM0 ” denotes vector concatenation, INLINEFORM1 , INLINEFORM2 is the output of the INLINEFORM3 -th head attention and INLINEFORM4 .",
"Intra-MHA, or multi-head self-attention, is a special situation for typical attention mechanism that INLINEFORM0 . Given a context embedding INLINEFORM1 , we can get the introspective context representation INLINEFORM2 by: DISPLAYFORM0 ",
" The learned context representation INLINEFORM0 is aware of long-term dependencies.",
"Inter-MHA is the generally used form of attention mechanism that INLINEFORM0 is different from INLINEFORM1 . Given a context embedding INLINEFORM2 and a target embedding INLINEFORM3 , we can get the context-perceptive target representation INLINEFORM4 by: DISPLAYFORM0 ",
"After this interactive procedure, each given target word INLINEFORM0 will have a composed representation selected from context embeddings INLINEFORM1 . Then we get the context-perceptive target words modeling INLINEFORM2 .",
"A Point-wise Convolution T ransformation (PCT) can transform contextual information gathered by the MHA. Point-wise means that the kernel sizes are 1 and the same transformation is applied to every single token belonging to the input. Formally, given a input sequence INLINEFORM0 , PCT is defined as: DISPLAYFORM0 ",
" where INLINEFORM0 stands for the ELU activation, INLINEFORM1 is the convolution operator, INLINEFORM2 and INLINEFORM3 are the learnable weights of the two convolutional kernels, INLINEFORM4 and INLINEFORM5 are biases of the two convolutional kernels.",
"Given INLINEFORM0 and INLINEFORM1 , PCTs are applied to get the output hidden states of the attentional encoder layer INLINEFORM2 and INLINEFORM3 by: DISPLAYFORM0 "
],
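Because the alignment and convolution equations are only referenced as placeholders here (DISPLAYFORM0), the NumPy sketch below shows just the general shape of a multi-head attention step followed by a point-wise (kernel-size-1) convolution with ELU. The dot-product scoring and the random projection matrices are stand-ins for the paper's learnable alignment function and weights, and all dimensions are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def multi_head_attention(k, q, n_head, d_hidden):
    """Attend from query sequence q over key sequence k with n_head heads.

    k: (len_k, d), q: (len_q, d). Heads operate on split subspaces, are
    concatenated, then projected to d_hidden. Dot-product scoring is an
    assumption standing in for the paper's learned alignment function.
    """
    d = k.shape[-1]
    d_head = d // n_head
    outputs = []
    for h in range(n_head):
        k_h = k[:, h * d_head:(h + 1) * d_head]
        q_h = q[:, h * d_head:(h + 1) * d_head]
        scores = softmax(q_h @ k_h.T / np.sqrt(d_head))  # (len_q, len_k)
        outputs.append(scores @ k_h)                      # (len_q, d_head)
    w_o = rng.normal(size=(d, d_hidden)) * 0.1            # toy output projection
    return np.concatenate(outputs, axis=-1) @ w_o         # (len_q, d_hidden)

def point_wise_conv(x, d_hidden):
    """Kernel-size-1 convolution = the same two linear maps applied per token."""
    w1 = rng.normal(size=(x.shape[-1], d_hidden)) * 0.1
    w2 = rng.normal(size=(d_hidden, d_hidden)) * 0.1
    return elu(x @ w1) @ w2

context = rng.normal(size=(20, 300))   # 20 context tokens, toy embeddings
target = rng.normal(size=(3, 300))     # 3 target tokens

h_intra = point_wise_conv(multi_head_attention(context, context, 6, 300), 300)
h_inter = point_wise_conv(multi_head_attention(context, target, 6, 300), 300)
print(h_intra.shape, h_inter.shape)    # (20, 300) (3, 300)
```

Note that the Inter-MHA call uses the context as keys and the target as queries, so each target word ends up with a representation composed from the context, matching the description above.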
[
"After we obtain the introspective context representation INLINEFORM0 and the context-perceptive target representation INLINEFORM1 , we employ another MHA to obtain the target-specific context representation INLINEFORM2 by: DISPLAYFORM0 ",
" The multi-head attention function here also has its independent parameters."
],
[
"We get the final representations of the previous outputs by average pooling, concatenate them as the final comprehensive representation INLINEFORM0 , and use a full connected layer to project the concatenated vector into the space of the targeted INLINEFORM1 classes. DISPLAYFORM0 ",
" where INLINEFORM0 is the predicted sentiment polarity distribution, INLINEFORM1 and INLINEFORM2 are learnable parameters."
],
[
"Since neutral sentiment is a very fuzzy sentimental state, training samples which labeled neutral are unreliable. We employ a Label Smoothing Regularization (LSR) term in the loss function. which penalizes low entropy output distributions BIBREF14 . LSR can reduce overfitting by preventing a network from assigning the full probability to each training example during training, replaces the 0 and 1 targets for a classifier with smoothed values like 0.1 or 0.9.",
"For a training sample INLINEFORM0 with the original ground-truth label distribution INLINEFORM1 , we replace INLINEFORM2 with DISPLAYFORM0 ",
" where INLINEFORM0 is the prior distribution over labels , and INLINEFORM1 is the smoothing parameter. In this paper, we set the prior label distribution to be uniform INLINEFORM2 .",
"LSR is equivalent to the KL divergence between the prior label distribution INLINEFORM0 and the network's predicted distribution INLINEFORM1 . Formally, LSR term is defined as: DISPLAYFORM0 ",
"The objective function (loss function) to be optimized is the cross-entropy loss with INLINEFORM0 and INLINEFORM1 regularization, which is defined as: DISPLAYFORM0 ",
" where INLINEFORM0 is the ground truth represented as a one-hot vector, INLINEFORM1 is the predicted sentiment distribution vector given by the output layer, INLINEFORM2 is the coefficient for INLINEFORM3 regularization term, and INLINEFORM4 is the parameter set."
],
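As a quick numerical illustration of the LSR term above: with a uniform prior over the label set and the smoothing parameter set to 0.2 (the value used in the experiments below), a one-hot target is mixed with the uniform distribution before computing the cross-entropy. This NumPy sketch is illustrative only; variable names are not from the paper.

```python
import numpy as np

def smooth_labels(one_hot, epsilon=0.2):
    """Mix a one-hot target with a uniform prior: (1 - eps) * y + eps * u."""
    num_classes = one_hot.shape[-1]
    uniform = np.full_like(one_hot, 1.0 / num_classes, dtype=float)
    return (1.0 - epsilon) * one_hot + epsilon * uniform

def smoothed_cross_entropy(pred_probs, one_hot, epsilon=0.2):
    """Cross-entropy against the smoothed target distribution."""
    q = smooth_labels(one_hot, epsilon)
    return float(-(q * np.log(pred_probs + 1e-12)).sum())

y = np.array([0.0, 1.0, 0.0])    # ground truth: class 1 of 3
p = np.array([0.05, 0.9, 0.05])  # an over-confident prediction
print(smooth_labels(y))          # [0.0667 0.8667 0.0667]
print(round(smoothed_cross_entropy(p, y), 4))
```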
[
"We conduct experiments on three datasets: SemEval 2014 Task 4 BIBREF15 dataset composed of Restaurant reviews and Laptop reviews, and ACL 14 Twitter dataset gathered by Dong et al. dong2014adaptive. These datasets are labeled with three sentiment polarities: positive, neutral and negative. Table TABREF31 shows the number of training and test instances in each category.",
"Word embeddings in AEN-GloVe do not get updated in the learning process, but we fine-tune pre-trained BERT in AEN-BERT. Embedding dimension INLINEFORM0 is 300 for GloVe and is 768 for pre-trained BERT. Dimension of hidden states INLINEFORM1 is set to 300. The weights of our model are initialized with Glorot initialization BIBREF16 . During training, we set label smoothing parameter INLINEFORM2 to 0.2 BIBREF14 , the coefficient INLINEFORM3 of INLINEFORM4 regularization item is INLINEFORM5 and dropout rate is 0.1. Adam optimizer BIBREF17 is applied to update all the parameters. We adopt the Accuracy and Macro-F1 metrics to evaluate the performance of the model."
],
[
"In order to comprehensively evaluate and analysis the performance of AEN-GloVe, we list 7 baseline models and design 4 ablations of AEN-GloVe. We also design a basic BERT-based model to evaluate the performance of AEN-BERT.",
" ",
"Non-RNN based baselines:",
" INLINEFORM0 Feature-based SVM BIBREF18 is a traditional support vector machine based model with extensive feature engineering.",
" INLINEFORM0 Rec-NN BIBREF0 firstly uses rules to transform the dependency tree and put the opinion target at the root, and then learns the sentence representation toward target via semantic composition using Recursive NNs.",
" INLINEFORM0 MemNet BIBREF19 uses multi-hops of attention layers on the context word embeddings for sentence representation to explicitly captures the importance of each context word.",
" ",
"RNN based baselines:",
" INLINEFORM0 TD-LSTM BIBREF1 extends LSTM by using two LSTM networks to model the left context with target and the right context with target respectively. The left and right target-dependent representations are concatenated for predicting the sentiment polarity of the target.",
" INLINEFORM0 ATAE-LSTM BIBREF3 strengthens the effect of target embeddings, which appends the target embeddings with each word embeddings and use LSTM with attention to get the final representation for classification.",
" INLINEFORM0 IAN BIBREF4 learns the representations of the target and context with two LSTMs and attentions interactively, which generates the representations for targets and contexts with respect to each other.",
" INLINEFORM0 RAM BIBREF5 strengthens MemNet by representing memory with bidirectional LSTM and using a gated recurrent unit network to combine the multiple attention outputs for sentence representation.",
" ",
"AEN-GloVe ablations:",
" INLINEFORM0 AEN-GloVe w/o PCT ablates PCT module.",
" INLINEFORM0 AEN-GloVe w/o MHA ablates MHA module.",
" INLINEFORM0 AEN-GloVe w/o LSR ablates label smoothing regularization.",
" INLINEFORM0 AEN-GloVe-BiLSTM replaces the attentional encoder layer with two bidirectional LSTM.",
" ",
"Basic BERT-based model:",
" INLINEFORM0 BERT-SPC feeds sequence “[CLS] + context + [SEP] + target + [SEP]” into the basic BERT model for sentence pair classification task."
],
[
"Table TABREF34 shows the performance comparison of AEN with other models. BERT-SPC and AEN-BERT obtain substantial accuracy improvements, which shows the power of pre-trained BERT on small-data task. The overall performance of AEN-BERT is better than BERT-SPC, which suggests that it is important to design a downstream network customized to a specific task. As the prior knowledge in the pre-trained BERT is not specific to any particular domain, further fine-tuning on the specific task is necessary for releasing the true power of BERT.",
"The overall performance of TD-LSTM is not good since it only makes a rough treatment of the target words. ATAE-LSTM, IAN and RAM are attention based models, they stably exceed the TD-LSTM method on Restaurant and Laptop datasets. RAM is better than other RNN based models, but it does not perform well on Twitter dataset, which might because bidirectional LSTM is not good at modeling small and ungrammatical text.",
"Feature-based SVM is still a competitive baseline, but relying on manually-designed features. Rec-NN gets the worst performances among all neural network baselines as dependency parsing is not guaranteed to work well on ungrammatical short texts such as tweets and comments. Like AEN, MemNet also eschews recurrence, but its overall performance is not good since it does not model the hidden semantic of embeddings, and the result of the last attention is essentially a linear combination of word embeddings."
],
[
"As shown in Table TABREF34 , the performances of AEN-GloVe ablations are incomparable with AEN-GloVe in both accuracy and macro-F1 measure. This result shows that all of these discarded components are crucial for a good performance. Comparing the results of AEN-GloVe and AEN-GloVe w/o LSR, we observe that the accuracy of AEN-GloVe w/o LSR drops significantly on all three datasets. We could attribute this phenomenon to the unreliability of the training samples with neutral sentiment. The overall performance of AEN-GloVe and AEN-GloVe-BiLSTM is relatively close, AEN-GloVe performs better on the Restaurant dataset. More importantly, AEN-GloVe has fewer parameters and is easier to parallelize.",
"To figure out whether the proposed AEN-GloVe is a lightweight alternative of recurrent models, we study the model size of each model on the Restaurant dataset. Statistical results are reported in Table TABREF37 . We implement all the compared models base on the same source code infrastructure, use the same hyperparameters, and run them on the same GPU .",
"RNN-based and BERT-based models indeed have larger model size. ATAE-LSTM, IAN, RAM, and AEN-GloVe-BiLSTM are all attention based RNN models, memory optimization for these models will be more difficult as the encoded hidden states must be kept simultaneously in memory in order to perform attention mechanisms. MemNet has the lowest model size as it only has one shared attention layer and two linear layers, it does not calculate hidden states of word embeddings. AEN-GloVe's lightweight level ranks second, since it takes some more parameters than MemNet in modeling hidden states of sequences. As a comparison, the model size of AEN-GloVe-BiLSTM is more than twice that of AEN-GloVe, but does not bring any performance improvements."
],
[
"In this work, we propose an attentional encoder network for the targeted sentiment classification task. which employs attention based encoders for the modeling between context and target. We raise the the label unreliability issue add a label smoothing regularization to encourage the model to be less confident with fuzzy labels. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate the effectiveness and lightweight of the proposed model."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed Methodology",
"Embedding Layer",
"Attentional Encoder Layer",
"Target-specific Attention Layer",
"Output Layer",
"Regularization and Model Training",
"Datasets and Experimental Settings",
"Model Comparisons",
"Main Results",
"Model Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0064ff0d9e06a701f36bb4baabb7d086c3311fd6"
],
"answer": [
{
"evidence": [
"The attentional encoder layer is a parallelizable and interactive alternative of LSTM and is applied to compute the hidden states of the input embeddings. This layer consists of two submodules: the Multi-Head Attention (MHA) and the Point-wise Convolution Transformation (PCT)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"This layer consists of two submodules: the Multi-Head Attention (MHA) and the Point-wise Convolution Transformation (PCT)."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"dfb36457161c897a38f62432f6193613b02071e8"
],
"answer": [
{
"evidence": [
"To figure out whether the proposed AEN-GloVe is a lightweight alternative of recurrent models, we study the model size of each model on the Restaurant dataset. Statistical results are reported in Table TABREF37 . We implement all the compared models base on the same source code infrastructure, use the same hyperparameters, and run them on the same GPU .",
"FLOAT SELECTED: Table 3: Model sizes. Memory footprints are evaluated on the Restaurant dataset. Lowest 2 are in bold."
],
"extractive_spans": [],
"free_form_answer": "Proposed model has 1.16 million parameters and 11.04 MB.",
"highlighted_evidence": [
"Statistical results are reported in Table TABREF37 .",
"FLOAT SELECTED: Table 3: Model sizes. Memory footprints are evaluated on the Restaurant dataset. Lowest 2 are in bold."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5cfeb55daf47a1b7845791e8c4a7ed3da8a2ccfd"
],
"answer": [
{
"evidence": [
"Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT."
],
"extractive_spans": [
"overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer."
],
"free_form_answer": "",
"highlighted_evidence": [
"Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they use multi-attention heads?",
"How big is their model?",
"How is their model different from BERT?"
],
"question_id": [
"9bffc9a9c527e938b2a95ba60c483a916dbd1f6b",
"8434974090491a3c00eed4f22a878f0b70970713",
"b67420da975689e47d3ea1c12b601851018c4071"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Overall architecture of the proposed AEN.",
"Table 1: Statistics of the datasets.",
"Table 2: Main results. The results of baseline models are retrieved from published papers. Top 2 scores are in bold.",
"Table 3: Model sizes. Memory footprints are evaluated on the Restaurant dataset. Lowest 2 are in bold."
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png"
]
} | [
"How big is their model?"
] | [
[
"1902.09314-Model Analysis-1",
"1902.09314-7-Table3-1.png"
]
] | [
"Proposed model has 1.16 million parameters and 11.04 MB."
] | 73 |
1910.11769 | DENS: A Dataset for Multi-class Emotion Analysis | We introduce a new dataset for multi-class emotion analysis from long-form narratives in English. The Dataset for Emotions of Narrative Sequences (DENS) was collected from both classic literature available on Project Gutenberg and modern online narratives available on Wattpad, annotated using Amazon Mechanical Turk. A number of statistics and baseline benchmarks are provided for the dataset. Of the tested techniques, we find that the fine-tuning of a pre-trained BERT model achieves the best results, with an average micro-F1 score of 60.4%. Our results show that the dataset provides a novel opportunity in emotion analysis that requires moving beyond existing sentence-level techniques. | {
"paragraphs": [
[
"Humans experience a variety of complex emotions in daily life. These emotions are heavily reflected in our language, in both spoken and written forms.",
"Many recent advances in natural language processing on emotions have focused on product reviews BIBREF0 and tweets BIBREF1, BIBREF2. These datasets are often limited in length (e.g. by the number of words in tweets), purpose (e.g. product reviews), or emotional spectrum (e.g. binary classification).",
"Character dialogues and narratives in storytelling usually carry strong emotions. A memorable story is often one in which the emotional journey of the characters resonates with the reader. Indeed, emotion is one of the most important aspects of narratives. In order to characterize narrative emotions properly, we must move beyond binary constraints (e.g. good or bad, happy or sad).",
"In this paper, we introduce the Dataset for Emotions of Narrative Sequences (DENS) for emotion analysis, consisting of passages from long-form fictional narratives from both classic literature and modern stories in English. The data samples consist of self-contained passages that span several sentences and a variety of subjects. Each sample is annotated by using one of 9 classes and an indicator for annotator agreement."
],
[
"Using the categorical basic emotion model BIBREF3, BIBREF4, BIBREF5 studied creating lexicons from tweets for use in emotion analysis. Recently, BIBREF1, BIBREF6 and BIBREF2 proposed shared-tasks for multi-class emotion analysis based on tweets.",
"Fewer works have been reported on understanding emotions in narratives. Emotional Arc BIBREF7 is one recent advance in this direction. The work used lexicons and unsupervised learning methods based on unlabelled passages from titles in Project Gutenberg.",
"For labelled datasets on narratives, BIBREF8 provided a sentence-level annotated corpus of childrens' stories and BIBREF9 provided phrase-level annotations on selected Project Gutenberg titles.",
"To the best of our knowledge, the dataset in this work is the first to provide multi-class emotion labels on passages, selected from both Project Gutenberg and modern narratives. The dataset is available upon request for non-commercial, research only purposes."
],
[
"In this section, we describe the process used to collect and annotate the dataset."
],
[
"The dataset is annotated based on a modified Plutchik’s wheel of emotions.",
"The original Plutchik’s wheel consists of 8 primary emotions: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Trust, Disgust. In addition, more complex emotions can be formed by combing two basic emotions. For example, Love is defined as a combination of Joy and Trust (Fig. 1).",
"The intensity of an emotion is also captured in Plutchik's wheel. For example, the primary emotion of Anger can vary between Annoyance (mild) and Rage (intense).",
"We conducted an initial survey based on 100 stories with a significant fraction sampled from the romance genre. We asked readers to identify the major emotion exhibited in each story from a choice of the original 8 primary emotions.",
"We found that readers have significant difficulty in identifying Trust as an emotion associated with romantic stories. Hence, we modified our annotation scheme by removing Trust and adding Love. We also added the Neutral category to denote passages that do not exhibit any emotional content.",
"The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral."
],
[
"We selected both classic and modern narratives in English for this dataset. The modern narratives were sampled based on popularity from Wattpad. We parsed selected narratives into passages, where a passage is considered to be eligible for annotation if it contained between 40 and 200 tokens.",
"In long-form narratives, many non-conversational passages are intended for transition or scene introduction, and may not carry any emotion. We divided the eligible passages into two parts, and one part was pruned using selected emotion-rich but ambiguous lexicons such as cry, punch, kiss, etc.. Then we mixed this pruned part with the unpruned part for annotation in order to reduce the number of neutral passages. See Appendix SECREF25 for the lexicons used."
],
[
"MTurk was set up using the standard sentiment template and instructed the crowd annotators to `pick the best/major emotion embodied in the passage'.",
"We further provided instructions to clarify the intensity of an emotion, such as: “Rage/Annoyance is a form of Anger”, “Serenity/Ecstasy is a form of Joy”, and “Love includes Romantic/Family/Friendship”, along with sample passages.",
"We required all annotators have a `master' MTurk qualification. Each passage was labelled by 3 unique annotators. Only passages with a majority agreement between annotators were accepted as valid. This is equivalent to a Fleiss's $\\kappa $ score of greater than $0.4$.",
"For passages without majority agreement between annotators, we consolidated their labels using in-house data annotators who are experts in narrative content. A passage is accepted as valid if the in-house annotator's label matched any one of the MTurk annotators' labels. The remaining passages are discarded. We provide the fraction of annotator agreement for each label in the dataset.",
"Though passages may lose some emotional context when read independently of the complete narrative, we believe annotator agreement on our dataset supports the assertion that small excerpts can still convey coherent emotions.",
"During the annotation process, several annotators had suggested for us to include additional emotions such as confused, pain, and jealousy, which are common to narratives. As they were not part of the original Plutchik’s wheel, we decided to not include them. An interesting future direction is to study the relationship between emotions such as ‘pain versus sadness’ or ‘confused versus surprise’ and improve the emotion model for narratives."
],
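The acceptance rule described above (accept a passage on 2-of-3 crowd agreement, otherwise fall back to an in-house expert whose label must match one crowd label, and discard the rest) can be sketched as follows; the function and its return format are invented for illustration.

```python
from collections import Counter

def consolidate(mturk_labels, expert_label=None):
    """Return (label, agreement_fraction) or None if the passage is discarded.

    mturk_labels: the three crowd labels for one passage.
    expert_label: optional in-house label used only when there is no majority.
    """
    label, count = Counter(mturk_labels).most_common(1)[0]
    if count >= 2:                       # majority (2/3) or consensus (3/3)
        return label, count / len(mturk_labels)
    if expert_label in mturk_labels:     # expert matches one crowd annotator
        return expert_label, 1 / len(mturk_labels)
    return None                          # otherwise the passage is discarded

print(consolidate(["Joy", "Joy", "Love"]))            # ('Joy', 0.666...)
print(consolidate(["Joy", "Fear", "Love"], "Fear"))   # ('Fear', 0.333...)
print(consolidate(["Joy", "Fear", "Love"]))           # None
```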
[
"The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words.",
"The vocabulary size is 28K (when lowercased). It contains over 1600 unique titles across multiple categories, including 88 titles (1520 passages) from Project Gutenberg. All of the modern narratives were written after the year 2000, with notable amount of themes in coming-of-age, strong-female-lead, and LGBTQ+. The genre distribution is listed in Table TABREF8.",
"In the final dataset, 21.0% of the data has consensus between all annotators, 73.5% has majority agreement, and 5.48% has labels assigned after consultation with in-house annotators.",
"The distribution of data points over labels with top lexicons (lower-cased, normalized) is shown in Table TABREF9. Note that the Disgust category is very small and should be discarded. Furthermore, we suspect that the data labelled as Surprise may be noisier than other categories and should be discarded as well.",
"Table TABREF10 shows a few examples labelled data from classic titles. More examples can be found in Table TABREF26 in the Appendix SECREF27."
],
[
"We performed benchmark experiments on the dataset using several different algorithms. In all experiments, we have discarded the data labelled with Surprise and Disgust.",
"We pre-processed the data by using the SpaCy pipeline. We masked out named entities with entity-type specific placeholders to reduce the chance of benchmark models utilizing named entities as a basis for classification.",
"Benchmark results are shown in Table TABREF17. The dataset is approximately balanced after discarding the Surprise and Disgust classes. We report the average micro-F1 scores, with 5-fold cross validation for each technique.",
"We provide a brief overview of each benchmark experiment below. Among all of the benchmarks, Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 achieved the best performance with a 0.604 micro-F1 score.",
"Overall, we observed that deep-learning based techniques performed better than lexical based methods. This suggests that a method which attends to context and themes could do well on the dataset."
],
[
"We computed bag-of-words-based benchmarks using the following methods:",
"Classification with TF-IDF + Linear SVM (TF-IDF + SVM)",
"Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)",
"Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)",
"Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)"
],
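A minimal scikit-learn sketch of the first of these baselines (TF-IDF features with a linear SVM), evaluated with 5-fold cross-validation and micro-F1 as described in the experiments section; all hyperparameters shown are assumptions, since the paper does not report them.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def tfidf_svm_benchmark(passages, labels):
    """passages: list of str; labels: list of emotion classes
    (Surprise and Disgust already discarded, as in the benchmark setup)."""
    model = make_pipeline(
        TfidfVectorizer(lowercase=True, ngram_range=(1, 1)),  # assumed settings
        LinearSVC(C=1.0),                                      # assumed C
    )
    scores = cross_val_score(model, passages, labels,
                             cv=5, scoring="f1_micro")
    return scores.mean()
```

The lexicon-based variants listed above would replace the TF-IDF features with features derived from the Depeche++ or NRC lexicons (or concatenate both, for TF-NRC), though the exact feature construction is not specified here.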
[
"We also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier."
],
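A rough gensim sketch of this Doc2Vec + SVM baseline; the vector size, epoch count, and other settings are assumptions, as they are not reported.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import LinearSVC

def doc2vec_svm(train_texts, train_labels, test_texts):
    # Train Doc2Vec on the (unlabelled) training passages.
    tagged = [TaggedDocument(words=t.lower().split(), tags=[i])
              for i, t in enumerate(train_texts)]
    d2v = Doc2Vec(tagged, vector_size=300, min_count=2, epochs=20)  # assumed

    # Use inferred document vectors as features for a linear SVM.
    x_train = [d2v.infer_vector(t.lower().split()) for t in train_texts]
    clf = LinearSVC().fit(x_train, train_labels)

    x_test = [d2v.infer_vector(t.lower().split()) for t in test_texts]
    return clf.predict(x_test)
```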
[
"For this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.",
"The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training."
],
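A compact tf.keras sketch of this hierarchical setup (a token-level BiLSTM applied per sentence, then a sentence-level BiLSTM over the sentence vectors). The padding sizes, dropout rate, and optimizer are assumptions, and the GloVe lookup is represented by feeding precomputed embeddings directly.

```python
import tensorflow as tf
from tensorflow.keras import layers

MAX_SENTS, MAX_TOKENS, EMB_DIM, NUM_CLASSES = 10, 40, 300, 7  # assumed sizes

# Input: GloVe vectors already looked up; one passage = (sentences, tokens, dim).
inputs = layers.Input(shape=(MAX_SENTS, MAX_TOKENS, EMB_DIM))
# Token-level BiLSTM applied to each sentence independently (last states only).
sent_vecs = layers.TimeDistributed(
    layers.Bidirectional(layers.LSTM(256)))(inputs)
# Sentence-level BiLSTM over the sequence of sentence vectors.
doc_vec = layers.Bidirectional(layers.LSTM(256))(sent_vecs)
x = layers.Dropout(0.5)(layers.Dense(256, activation="relu")(doc_vec))
x = layers.Dropout(0.5)(layers.Dense(256, activation="relu")(x))
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```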
[
"One challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.",
"Self-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.",
"The benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.",
"Note that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments."
],
[
"Deep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.",
"We used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function."
],
[
"Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.",
"We used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\\textrm {{\\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%."
],
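The paper follows the original BERT fine-tuning procedure for uncased BERT-large. The Hugging Face sketch below is one rough, modern way to set up the same kind of fine-tuning and is not the authors' code; the model identifier, sequence length, learning rate, and epoch count are assumptions (7 labels corresponds to the 9 classes minus the discarded Surprise and Disgust).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-large-uncased", num_labels=7)   # 7 classes after dropping 2

def fine_tune(passages, labels, epochs=3, lr=2e-5):   # assumed schedule
    enc = tokenizer(passages, truncation=True, padding=True,
                    max_length=256, return_tensors="pt")
    y = torch.tensor(labels)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        # Full-batch for brevity; a real run would iterate over a DataLoader.
        out = model(**enc, labels=y)      # cross-entropy loss computed inside
        out.loss.backward()
        optim.step()
        optim.zero_grad()
    return model
```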
[
"We introduce DENS, a dataset for multi-class emotion analysis from long-form narratives in English. We provide a number of benchmark results based on models ranging from bag-of-word models to methods based on pre-trained language models (ELMo and BERT).",
"Our benchmark results demonstrate that this dataset provides a novel challenge in emotion analysis. The results also demonstrate that attention-based models could significantly improve performance on classification tasks such as emotion analysis.",
"Interesting future directions for this work include: 1. incorporating common-sense knowledge into emotion analysis to capture semantic context and 2. using few-shot learning to bootstrap and improve performance of underrepresented emotions.",
"Finally, as narrative passages often involve interactions between multiple emotions, one avenue for future datasets could be to focus on the multi-emotion complexities of human language and their contextual interactions."
],
[
"Table TABREF26 shows sample passages from classic titles with corresponding labels."
]
],
"section_name": [
"Introduction",
"Background",
"Dataset",
"Dataset ::: Plutchik’s Wheel of Emotions",
"Dataset ::: Passage Selection",
"Dataset ::: Mechanical Turk (MTurk)",
"Dataset ::: Dataset Statistics",
"Benchmarks",
"Benchmarks ::: Bag-of-Words-based Benchmarks",
"Benchmarks ::: Doc2Vec + SVM",
"Benchmarks ::: Hierarchical RNN",
"Benchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)",
"Benchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)",
"Benchmarks ::: Fine-tuned BERT",
"Conclusion",
"Appendices ::: Sample Data"
]
} | {
"answers": [
{
"annotation_id": [
"42eb0c70a3fc181f2418a7a3d55c836817cc4d8b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: Benchmark results (averaged 5-fold cross validation)",
"We computed bag-of-words-based benchmarks using the following methods:",
"Classification with TF-IDF + Linear SVM (TF-IDF + SVM)",
"Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)",
"Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)",
"Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)"
],
"extractive_spans": [
"Depeche + SVM"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Benchmark results (averaged 5-fold cross validation)",
"We computed bag-of-words-based benchmarks using the following methods:\n\nClassification with TF-IDF + Linear SVM (TF-IDF + SVM)\n\nClassification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)\n\nClassification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)\n\nCombination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"008f3d1972460817cb88951faf690c344574e4af"
],
"answer": [
{
"evidence": [
"The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral."
],
"extractive_spans": [],
"free_form_answer": "9",
"highlighted_evidence": [
"The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"ea3a6a6941f3f9c06074abbb4da37590578ff09c"
],
"answer": [
{
"evidence": [
"We computed bag-of-words-based benchmarks using the following methods:",
"Classification with TF-IDF + Linear SVM (TF-IDF + SVM)",
"Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)",
"Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)",
"Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)",
"Benchmarks ::: Doc2Vec + SVM",
"We also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier.",
"Benchmarks ::: Hierarchical RNN",
"For this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.",
"The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training.",
"Benchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)",
"One challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.",
"Self-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.",
"The benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.",
"Note that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments.",
"Benchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)",
"Deep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.",
"We used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function.",
"Benchmarks ::: Fine-tuned BERT",
"Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.",
"We used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\\textrm {{\\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%."
],
"extractive_spans": [
"TF-IDF + SVM",
"Depeche + SVM",
"NRC + SVM",
"TF-NRC + SVM",
"Doc2Vec + SVM",
" Hierarchical RNN",
"BiRNN + Self-Attention",
"ELMo + BiRNN",
" Fine-tuned BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"We computed bag-of-words-based benchmarks using the following methods:\n\nClassification with TF-IDF + Linear SVM (TF-IDF + SVM)\n\nClassification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)\n\nClassification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)\n\nCombination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)\n\nBenchmarks ::: Doc2Vec + SVM\nWe also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier.\n\nBenchmarks ::: Hierarchical RNN\nFor this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.\n\nThe outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training.\n\nBenchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)\nOne challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.\n\nSelf-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.\n\nThe benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.\n\nNote that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments.\n\nBenchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)\nDeep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.\n\nWe used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function.\n\nBenchmarks ::: Fine-tuned BERT\nBidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.\n\nWe used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\\textrm {{\\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"8789ec900d3da8e32409fff8df9c4bba5f18520e"
],
"answer": [
{
"evidence": [
"The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words."
],
"extractive_spans": [
"9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"1a8a6f5247e266cb460d5555b64674b590003ec2"
],
"answer": [
{
"evidence": [
"We required all annotators have a `master' MTurk qualification. Each passage was labelled by 3 unique annotators. Only passages with a majority agreement between annotators were accepted as valid. This is equivalent to a Fleiss's $\\kappa $ score of greater than $0.4$."
],
"extractive_spans": [
"3 "
],
"free_form_answer": "",
"highlighted_evidence": [
" Each passage was labelled by 3 unique annotators."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Which tested technique was the worst performer?",
"How many emotions do they look at?",
"What are the baseline benchmarks?",
"What is the size of this dataset?",
"How many annotators were there?"
],
"question_id": [
"a4e66e842be1438e5cd8d7cb2a2c589f494aee27",
"cb78e280e3340b786e81636431834b75824568c3",
"2941874356e98eb2832ba22eae9cb08ec8ce0308",
"4e50e9965059899d15d3c3a0c0a2d73e0c5802a0",
"67d8e50ddcc870db71c94ad0ad7f8a59a6c67ca6"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"dataset",
"dataset",
"dataset",
"dataset",
"dataset"
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Plutchik’s wheel of emotions (Wikimedia, 2011)",
"Table 1: Genre distribution of the modern narratives",
"Table 4: Benchmark results (averaged 5-fold cross validation)",
"Table 2: Dataset label distribution"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table4-1.png",
"4-Table2-1.png"
]
} | [
"How many emotions do they look at?"
] | [
[
"1910.11769-Dataset ::: Plutchik’s Wheel of Emotions-5"
]
] | [
"9"
] | 75 |
1909.13375 | Tag-based Multi-Span Extraction in Reading Comprehension | With models reaching human performance on many popular reading comprehension datasets in recent years, a new dataset, DROP, introduced questions that were expected to present a harder challenge for reading comprehension models. Among these new types of questions were "multi-span questions", questions whose answers consist of several spans from either the paragraph or the question itself. Until now, only one model attempted to tackle multi-span questions as a part of its design. In this work, we suggest a new approach for tackling multi-span questions, based on sequence tagging, which differs from previous approaches for answering span questions. We show that our approach leads to an absolute improvement of 29.7 EM and 15.1 F1 compared to existing state-of-the-art results, while not hurting performance on other question types. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataset. | {
"paragraphs": [
[
"The task of reading comprehension, where systems must understand a single passage of text well enough to answer arbitrary questions about it, has seen significant progress in the last few years. With models reaching human performance on the popular SQuAD dataset BIBREF0, and with much of the most popular reading comprehension datasets having been solved BIBREF1, BIBREF2, a new dataset, DROP BIBREF3, was recently published.",
"DROP aimed to present questions that require more complex reasoning in order to answer than that of previous datasets, in a hope to push the field towards a more comprehensive analysis of paragraphs of text. In addition to questions whose answers are a single continuous span from the paragraph text (questions of a type already included in SQuAD), DROP introduced additional types of questions. Among these new types were questions that require simple numerical reasoning, i.e questions whose answer is the result of a simple arithmetic expression containing numbers from the passage, and questions whose answers consist of several spans taken from the paragraph or the question itself, what we will denote as \"multi-span questions\".",
"Of all the existing models that tried to tackle DROP, only one model BIBREF4 directly targeted multi-span questions in a manner that wasn't just a by-product of the model's overall performance. In this paper, we propose a new method for tackling multi-span questions. Our method takes a different path from that of the aforementioned model. It does not try to generalize the existing approach for tackling single-span questions, but instead attempts to attack this issue with a new, tag-based, approach."
],
[
"Numerically-aware QANet (NAQANet) BIBREF3 was the model released with DROP. It uses QANET BIBREF5, at the time the best-performing published model on SQuAD 1.1 BIBREF0 (without data augmentation or pretraining), as the encoder. On top of QANET, NAQANet adds four different output layers, which we refer to as \"heads\". Each of these heads is designed to tackle a specific question type from DROP, where these types where identified by DROP's authors post-creation of the dataset. These four heads are (1) Passage span head, designed for producing answers that consist of a single span from the passage. This head deals with the type of questions already introduced in SQuAD. (2) Question span head, for answers that consist of a single span from the question. (3) Arithmetic head, for answers that require adding or subtracting numbers from the passage. (4) Count head, for answers that require counting and sorting entities from the text. In addition, to determine which head should be used to predict an answer, a 4-way categorical variable, as per the number of heads, is trained. We denote this categorical variable as the \"head predictor\".",
"Numerically-aware BERT (NABERT+) BIBREF6 introduced two main improvements over NAQANET. The first was to replace the QANET encoder with BERT. This change alone resulted in an absolute improvement of more than eight points in both EM and F1 metrics. The second improvement was to the arithmetic head, consisting of the addition of \"standard numbers\" and \"templates\". Standard numbers were predefined numbers which were added as additional inputs to the arithmetic head, regardless of their occurrence in the passage. Templates were an attempt to enrich the head's arithmetic capabilities, by adding the ability of doing simple multiplications and divisions between up to three numbers.",
"MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable.",
"Additionally, MTMSN introduced two new other, non span-related, components. The first was a new \"negation\" head, meant to deal with questions deemed as requiring logical negation (e.g. \"How many percent were not German?\"). The second was improving the arithmetic head by using beam search to re-rank candidate arithmetic expressions."
],
[
"Problem statement. Given a pair $(x^P,x^Q)$ of a passage and a question respectively, both comprised of tokens from a vocabulary $V$, we wish to predict an answer $y$. The answer could be either a collection of spans from the input, or a number, supposedly arrived to by performing arithmetic reasoning on the input. We want to estimate $p(y;x^P,x^Q)$.",
"The basic structure of our model is shared with NABERT+, which in turn is shared with that of NAQANET (the model initially released with DROP). Consequently, meticulously presenting every part of our model would very likely prove redundant. As a reasonable compromise, we will introduce the shared parts with more brevity, and will go into greater detail when presenting our contributions."
],
[
"Assume there are $K$ answer heads in the model and their weights denoted by $\\theta $. For each pair $(x^P,x^Q)$ we assume a latent categorical random variable $z\\in \\left\\lbrace 1,\\ldots \\,K\\right\\rbrace $ such that the probability of an answer $y$ is",
"where each component of the mixture corresponds to an output head such that",
"Note that a head is not always capable of producing the correct answer $y_\\text{gold}$ for each type of question, in which case $p\\left(y_\\text{gold} \\vert z ; x^{P},x^{Q},\\theta \\right)=0$. For example, the arithmetic head, whose output is always a single number, cannot possibly produce a correct answer for a multi-span question.",
"For a multi-span question with an answer composed of $l$ spans, denote $y_{{\\text{gold}}_{\\textit {MS}}}=\\left\\lbrace y_{{\\text{gold}}_1}, \\ldots , y_{{\\text{gold}}_l} \\right\\rbrace $. NAQANET and NABERT+ had no head capable of outputting correct answers for multi-span questions. Instead of ignoring them in training, both models settled on using \"semi-correct answers\": each $y_\\text{gold} \\in y_{{\\text{gold}}_{\\textit {MS}}}$ was considered to be a correct answer (only in training). By deliberately encouraging the model to provide partial answers for multi-span questions, they were able to improve the corresponding F1 score. As our model does have a head with the ability to answer multi-span questions correctly, we didn't provide the aforementioned semi-correct answers to any of the other heads. Otherwise, we would have skewed the predictions of the head predictor and effectively mislead the other heads to believe they could predict correct answers for multi-span questions."
],
[
"Before going over the answer heads, two additional components should be introduced - the summary vectors, and the head predictor.",
"Summary vectors. The summary vectors are two fixed-size learned representations of the question and the passage, which serve as an input for some of the heads. To create the summary vectors, first define $\\mathbf {T}$ as BERT's output on a $(x^{P},x^{Q})$ input. Then, let $\\mathbf {T}^{P}$ and $\\mathbf {T}^{Q}$ be subsequences of T that correspond to $x^P$ and $x^Q$ respectively. Finally, let us also define Bdim as the dimension of the tokens in $\\mathbf {T}$ (e.g 768 for BERTbase), and have $\\mathbf {W}^P \\in \\mathbb {R}^\\texttt {Bdim}$ and $\\mathbf {W}^Q \\in \\mathbb {R}^\\texttt {Bdim}$ as learned linear layers. Then, the summary vectors are computed as:",
"Head predictor. A learned categorical variable with its number of outcomes equal to the number of answer heads in the model. Used to assign probabilities for using each of the heads in prediction.",
"where FFN is a two-layer feed-forward network with RELU activation.",
"Passage span. Define $\\textbf {W}^S \\in \\mathbb {R}^\\texttt {Bdim}$ and $\\textbf {W}^E \\in \\mathbb {R}^\\texttt {Bdim}$ as learned vectors. Then the probabilities of the start and end positions of a passage span are computed as",
"Question span. The probabilities of the start and end positions of a question span are computed as",
"where $\\textbf {e}^{|\\textbf {T}^Q|}\\otimes \\textbf {h}^P$ repeats $\\textbf {h}^P$ for each component of $\\textbf {T}^Q$.",
"Count. Counting is treated as a multi-class prediction problem with the numbers 0-9 as possible labels. The label probabilities are computed as",
"Arithmetic. As in NAQNET, this head obtains all of the numbers from the passage, and assigns a plus, minus or zero (\"ignore\") for each number. As BERT uses wordpiece tokenization, some numbers are broken up into multiple tokens. Following NABERT+, we chose to represent each number by its first wordpiece. That is, if $\\textbf {N}^i$ is the set of tokens corresponding to the $i^\\text{th}$ number, we define a number representation as $\\textbf {h}_i^N = \\textbf {N}^i_0$.",
"The selection of the sign for each number is a multi-class prediction problem with options $\\lbrace 0, +, -\\rbrace $, and the probabilities for the signs are given by",
"As for NABERT+'s two additional arithmetic features, we decided on using only the standard numbers, as the benefits from using templates were deemed inconclusive. Note that unlike the single-span heads, which are related to our introduction of a multi-span head, the arithmetic and count heads were not intended to play a significant role in our work. We didn't aim to improve results on these types of questions, perhaps only as a by-product of improving the general reading comprehension ability of our model."
],
[
"A subset of questions that wasn't directly dealt with by the base models (NAQANET, NABERT+) is questions that have an answer which is composed of multiple non-continuous spans. We suggest a head that will be able to deal with both single-span and multi-span questions.",
"To model an answer which is a collection of spans, the multi-span head uses the $\\mathtt {BIO}$ tagging format BIBREF8: $\\mathtt {B}$ is used to mark the beginning of a span, $\\mathtt {I}$ is used to mark the inside of a span and $\\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans.",
"As words are broken up by the wordpiece tokenization for BERT, we decided on only considering the representation of the first sub-token of the word to tag, following the NER task from BIBREF2.",
"For the $i$-th token of an input, the probability to be assigned a $\\text{tag} \\in \\left\\lbrace {\\mathtt {B},\\mathtt {I},\\mathtt {O}} \\right\\rbrace $ is computed as"
],
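To make the decoding of the multi-span head's output concrete, the following is a minimal Python sketch (not taken from the authors' released code; the token and tag inputs are illustrative) of how a BIO tag sequence is collected into a final answer, i.e. a collection of spans:

```python
def decode_bio(tokens, tags):
    """Collect the spans marked by a BIO tag sequence.

    tokens: list of token strings; tags: list of 'B'/'I'/'O' of the same length.
    Returns the answer as a list of span strings.
    """
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                     # a new span starts here
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:       # continue the currently open span
            current.append(token)
        else:                              # 'O' (or a stray 'I') closes any open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

# e.g. decode_bio(["X", "Y", "Z", "Z"], ["O", "B", "O", "B"]) returns ["Y", "Z"]
```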
[
"To train our model, we try to maximize the log-likelihood of the correct answer $p(y_\\text{gold};x^{P},x^{Q},\\theta )$ as defined in Section SECREF2. If no head is capable of predicting the gold answer, the sample is skipped.",
"We enumerate over every answer head $z\\in \\left\\lbrace \\textit {PS}, \\textit {QS}, \\textit {C}, \\textit {A}, \\textit {MS}\\right\\rbrace $ (Passage Span, Question Span, Count, Arithmetic, Multi-Span) to compute each of the objective's addends:",
"Note that we are in a weakly supervised setup: the answer type is not given, and neither is the correct arithmetic expression required for deriving some answers. Therefore, it is possible that $y_\\text{gold}$ could be derived by more than one way, even from the same head, with no indication of which is the \"correct\" one.",
"We use the weakly supervised training method used in NABERT+ and NAQANET. Based on BIBREF9, for each head we find all the executions that evaluate to the correct answer and maximize their marginal likelihood .",
"For a datapoint $\\left(y, x^{P}, x^{Q} \\right)$ let $\\chi ^z$ be the set of all possible ways to get $y$ for answer head $z\\in \\left\\lbrace \\textit {PS}, \\textit {QS}, \\textit {C}, \\textit {A}, \\textit {MS}\\right\\rbrace $. Then, as in NABERT+, we have",
"Finally, for the arithmetic head, let $\\mu $ be the set of all the standard numbers and the numbers from the passage, and let $\\mathbf {\\chi }^{\\textit {A}}$ be the set of correct sign assignments to these numbers. Then, we have"
],
[
"Denote by ${\\chi }^{\\textit {MS}}$ the set of correct tag sequences. If the concatenation of a question and a passage is $m$ tokens long, then denote a correct tag sequence as $\\left(\\text{tag}_1,\\ldots ,\\text{tag}_m\\right)$.",
"We approximate the likelihood of a tag sequence by assuming independence between the sequence's positions, and multiplying the likelihoods of all the correct tags in the sequence. Then, we have"
],
[
"Since a given multi-span answer is a collection of spans, it is required to obtain its matching tag sequences in order to compute the training objective.",
"In what we consider to be a correct tag sequence, each answer span will be marked at least once. Due to the weakly supervised setup, we consider all the question/passage spans that match the answer spans as being correct. To illustrate, consider the following simple example. Given the text \"X Y Z Z\" and the correct multi-span answer [\"Y\", \"Z\"], there are three correct tag sequences: $\\mathtt {O\\,B\\,B\\,B}$,$\\quad $ $\\mathtt {O\\,B\\,B\\,O}$,$\\quad $ $\\mathtt {O\\,B\\,O\\,B}$."
],
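To make the notion of a correct tag sequence concrete, here is a small self-contained Python sketch (illustrative only, not the paper's implementation) that brute-forces the correct sequences for the "X Y Z Z" example above and recovers exactly the three sequences listed:

```python
from itertools import product

def chunks(tokens, tags):
    """Decode a BIO sequence into the list of tagged chunk strings."""
    out, cur = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":
            if cur:
                out.append(" ".join(cur))
            cur = [tok]
        elif tag == "I":
            cur.append(tok)
        else:
            if cur:
                out.append(" ".join(cur))
            cur = []
    if cur:
        out.append(" ".join(cur))
    return out

def correct_tag_sequences(tokens, answer_spans):
    """Enumerate every valid BIO sequence whose tagged chunks are exactly the
    answer spans (each answer span marked at least once, nothing else marked)."""
    found = []
    for tags in product("BIO", repeat=len(tokens)):
        # enforce the BIO constraint: 'I' may only continue an open span
        if any(t == "I" and (i == 0 or tags[i - 1] == "O") for i, t in enumerate(tags)):
            continue
        if set(chunks(tokens, tags)) == set(answer_spans):
            found.append("".join(tags))
    return found

print(correct_tag_sequences("X Y Z Z".split(), ["Y", "Z"]))
# -> ['OBBB', 'OBBO', 'OBOB']
```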
[
"The number of correct tag sequences can be expressed by",
"where $s$ is the number of spans in the answer and $\\#_i$ is the number of times the $i^\\text{th}$ span appears in the text.",
"For questions with a reasonable amount of correct tag sequences, we generate all of them before the training starts. However, there is a small group of questions for which the amount of such sequences is between 10,000 and 100,000,000 - too many to generate and train on. In such cases, inspired by BIBREF9, instead of just using an arbitrary subset of the correct sequences, we use beam search to generate the top-k predictions of the training model, and then filter out the incorrect sequences. Compared to using an arbitrary subset, using these sequences causes the optimization to be done with respect to answers more compatible with the model. If no correct tag sequences were predicted within the top-k, we use the tag sequence that has all of the answer spans marked."
],
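The counting expression referred to above appears to have been dropped during text extraction; from the definition (each of the $s$ answer spans must be marked in at least one of its $\#_i$ occurrences, with independent choices across spans), it is presumably

$$\prod_{i=1}^{s}\bigl(2^{\#_i}-1\bigr),$$

which gives $(2^{1}-1)\cdot(2^{2}-1)=3$ for the "X Y Z Z" example, in agreement with the three sequences enumerated earlier.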
[
"Based on the outputs $\\textbf {p}_{i}^{{\\text{tag}}_{i}}$ we would like to predict the most likely sequence given the $\\mathtt {BIO}$ constraints. Denote $\\textit {validSeqs}$ as the set of all $\\mathtt {BIO}$ sequences of length $m$ that are valid according to the rules specified in Section SECREF5. The $\\mathtt {BIO}$ tag sequence to predict is then",
"We considered the following approaches:"
],
[
"A natural candidate for getting the most likely sequence is Viterbi decoding, BIBREF10 with transition probabilities learned by a $\\mathtt {BIO}$ constrained Conditional Random Field (CRF) BIBREF11. However, further inspection of our sequence's properties reveals that such a computational effort is probably not necessary, as explained in following paragraphs."
],
[
"Due to our use of $\\mathtt {BIO}$ tags and their constraints, observe that past tag predictions only affect future tag predictions from the last $\\mathtt {B}$ prediction and as long as the best tag to predict is $\\mathtt {I}$. Considering the frequency and length of the correct spans in the question and the passage, effectively there's no effect of past sequence's positions on future ones, other than a very few positions ahead. Together with the fact that at each prediction step there are no more than 3 tags to consider, it means using beam search to get the most likely sequence is very reasonable and even allows near-optimal results with small beam width values."
],
[
"Notice that greedy tagging does not enforce the $\\mathtt {BIO}$ constraints. However, since the multi-span head's training objective adheres to the $\\mathtt {BIO}$ constraints via being given the correct tag sequences, we can expect that even with greedy tagging the predictions will mostly adhere to these constraints as well. In case there are violations, their amendment is required post-prediction. Albeit faster, greedy tagging resulted in a small performance hit, as seen in Table TABREF26."
],
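Greedy tagging with post-hoc amendment is simple enough to sketch directly. The repair rule below (promoting a dangling 'I' to 'B') is one reasonable choice and our own assumption; the text does not specify the exact amendment procedure:

```python
import numpy as np

TAGS = ["B", "I", "O"]

def greedy_tags(tag_probs):
    """Greedy per-token tagging from an (m, 3) array of tag probabilities
    (columns ordered B, I, O), followed by a simple repair of BIO violations."""
    tags = [TAGS[i] for i in np.argmax(tag_probs, axis=1)]
    for i, tag in enumerate(tags):
        if tag == "I" and (i == 0 or tags[i - 1] == "O"):
            tags[i] = "B"                  # an 'I' that does not continue a span
    return tags

# toy example with 4 tokens
probs = np.array([[0.1, 0.2, 0.7],
                  [0.2, 0.6, 0.2],         # greedy picks 'I' right after an 'O'
                  [0.5, 0.3, 0.2],
                  [0.3, 0.1, 0.6]])
print(greedy_tags(probs))                  # ['O', 'B', 'B', 'O']
```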
[
"We tokenize the passage, question, and all answer texts using the BERT uncased wordpiece tokenizer from huggingface. The tokenization resulting from each $(x^P,x^Q)$ input pair is truncated at 512 tokens so it can be fed to BERT as an input. However, before tokenizing the dataset texts, we perform additional preprocessing as listed below."
],
[
"The raw dataset included almost a thousand of HTML entities that did not get parsed properly, e.g \" \" instead of a simple space. In addition, we fixed some quirks that were introduced by the original Wikipedia parsing method. For example, when encountering a reference to an external source that included a specific page from that reference, the original parser ended up introducing a redundant \":<PAGE NUMBER>\" into the parsed text."
],
[
"Although we previously stated that we aren't focusing on improving arithmetic performance, while analyzing the training process we encountered two arithmetic-related issues that could be resolved rather quickly: a precision issue and a number extraction issue. Regarding precision, we noticed that while either generating expressions for the arithmetic head, or using the arithmetic head to predict a numeric answer, the value resulting from an arithmetic operation would not always yield the exact result due to floating point precision limitations. For example, $5.8 + 6.6 = 12.3999...$ instead of $12.4$. This issue has caused a significant performance hit of about 1.5 points for both F1 and EM and was fixed by simply rounding numbers to 5 decimal places, assuming that no answer requires a greater precision. Regarding number extraction, we noticed that some numeric entities, required in order to produce a correct answer, weren't being extracted from the passage. Examples include ordinals (121st, 189th) and some \"per-\" units (1,580.7/km2, 1050.95/month)."
],
[
"The training dataset contains multi-span questions with answers that are clearly incorrect, with examples shown in Table TABREF22. In order to mitigate this, we applied an answer-cleaning technique using a pretrained Named Entity Recognition (NER) model BIBREF12 in the following manner: (1) Pre-define question prefixes whose answer spans are expected to contain only a specific entity type and filter the matching questions. (2) For a given answer of a filtered question, remove any span that does not contain at least one token of the expected type, where the types are determined by applying the NER model on the passage. For example, if a question starts with \"who scored\", we expect that any valid span will include a person entity ($\\mathtt {PER}$). By applying such rules, we discovered that at least 3% of the multi-span questions in the training dataset included incorrect spans. As our analysis of prefixes wasn't exhaustive, we believe that this method could yield further gains. Table TABREF22 shows a few of our cleaning method results, where we perfectly clean the first two questions, and partially clean a third question."
],
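The cleaning rule itself reduces to a span filter. The sketch below only illustrates that rule; the function and variable names (clean_answer, passage_entities) are ours, and in the paper the entity types come from a pretrained NER model (BIBREF12) run over the passage:

```python
def clean_answer(answer_spans, passage_entities, expected_type):
    """Keep only the answer spans that contain at least one token whose
    predicted NER type matches the type expected for this question prefix."""
    kept = []
    for span in answer_spans:
        if any(passage_entities.get(tok) == expected_type for tok in span.split()):
            kept.append(span)
    return kept

# e.g. a "who scored ..." question is expected to yield PER spans
entities = {"Smith": "PER", "Jones": "PER", "touchdown": None}
print(clean_answer(["Smith", "the touchdown", "Jones"], entities, "PER"))
# -> ['Smith', 'Jones']
```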
[
"The starting point for our implementation was the NABERT+ model, which in turn was based on allenai's NAQANET. Our implementation can be found on GitHub. All three models utilize the allennlp framework. The pretrained BERT models were supplied by huggingface. For our base model we used bert-base-uncased. For our large models we used the standard bert-large-uncased-whole-word-masking and the squad fine-tuned bert-large-uncased- whole-word-masking-finetuned-squad.",
"Due to limited computational resources, we did not perform any hyperparameter searching. We preferred to focus our efforts on the ablation studies, in hope to gain further insights on the effect of the components that we ourselves introduced. For ease of performance comparison, we followed NABERT+'s training settings: we used the BERT Adam optimizer from huggingface with default settings and a learning rate of $1e^{-5}$. The only difference was that we used a batch size of 12. We trained our base model for 20 epochs. For the large models we used a batch size of 3 with a learning rate of $5e^{-6}$ and trained for 5 epochs, except for the model without the single-span heads that was trained with a batch size of 2 for 7 epochs. F1 was used as our validation metric. All models were trained on a single GPU with 12-16GB of memory."
],
[
"Table TABREF24 shows the results on DROP's development set. Compared to our base models, our large models exhibit a substantial improvement across all metrics."
],
[
"We can see that our base model surpasses the NABERT+ baseline in every metric. The major improvement in multi-span performance was expected, as our multi-span head was introduced specifically to tackle this type of questions. For the other types, most of the improvement came from better preprocessing. A more detailed discussion could be found in Section (SECREF36)."
],
[
"Notice that different BERTlarge models were used, so the comparison is less direct. Overall, our large models exhibits similar results to those of MTMSNlarge.",
"For multi-span questions we achieve a significantly better performance. While a breakdown of metrics was only available for MTMSNlarge, notice that even when comparing these metrics to our base model, we still achieve a 12.2 absolute improvement in EM, and a 2.3 improvement in F1. All that, while keeping in mind we compare a base model to a large model (for reference, note the 8 point improvement between MTMSNbase and MTMSNlarge in both EM and F1). Our best model, large-squad, exhibits a huge improvement of 29.7 in EM and 15.1 in F1 compared to MTMSNlarge.",
"When comparing single-span performance, our best model exhibits slightly better results, but it should be noted that it retains the single-span heads from NABERT+, while in MTMSN they have one head to predict both single-span and multi-span answers. For a fairer comparison, we trained our model with the single-span heads removed, where our multi-span head remained the only head aimed for handling span questions. With this no-single-span-heads setting, while our multi-span performance even improved a bit, our single-span performance suffered a slight drop, ending up trailing by 0.8 in EM and 0.6 in F1 compared to MTMSN. Therefore, it could prove beneficial to try and analyze the reasons behind each model's (ours and MTMSN) relative advantages, and perhaps try to combine them into a more holistic approach of tackling span questions."
],
[
"Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions."
],
[
"In order to analyze the effect of each of our changes, we conduct ablation studies on the development set, depicted in Table TABREF26.",
"Not using the simple preprocessing from Section SECREF17 resulted in a 2.5 point decrease in both EM and F1. The numeric questions were the most affected, with their performance dropping by 3.5 points. Given that number questions make up about 61% of the dataset, we can deduce that our improved number handling is responsible for about a 2.1 point gain, while the rest could be be attributed to the improved Wikipedia parsing.",
"Although NER span cleaning (Section SECREF23) affected only 3% of the multi-span questions, it provided a solid improvement of 5.4 EM in multi-span questions and 1.5 EM in single-span questions. The single-span improvement is probably due to the combination of better multi-span head learning as a result of fixing multi-span questions and the fact that the multi-span head can answer single-span questions as well.",
"Not using the single-span heads results in a slight drop in multi-span performance, and a noticeable drop in single-span performance. However when performing the same comparison between our large models (see Table TABREF24), this performance gap becomes significantly smaller.",
"As expected, not using the multi-span head causes the multi-span performance to plummet. Note that for this ablation test the single-span heads were permitted to train on multi-span questions.",
"Compared to using greedy decoding in the prediction of multi-span questions, using beam search results in a small improvement. We used a beam with of 5, and didn't perform extensive tuning of the beam width."
],
[
"In this work, we introduced a new approach for tackling multi-span questions in reading comprehension datasets. This approach is based on individually tagging each token with a categorical tag, relying on the tokens' contextual representation to bridge the information gap resulting from the tokens being tagged individually.",
"First, we show that integrating this new approach into an existing model, NABERT+, does not hinder performance on other questions types, while substantially improving the results on multi-span questions. Later, we compare our results to the current state-of-the-art on multi-span questions. We show that our model has a clear advantage in handling multi-span questions, with a 29.7 absolute improvement in EM, and a 15.1 absolute improvement in F1. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataeset. Finally, we present some ablation studies, analyzing the benefit gained from individual components of our model.",
"We believe that combining our tag-based approach for handling multi-span questions with current successful techniques for handling single-span questions could prove beneficial in finding better, more holistic ways, of tackling span questions in general."
],
[
"Currently, For each individual span, we optimize the average likelihood over all its possible tag sequences (see Section SECREF9). A different approach could be not taking each possible tag sequence into account but only the most likely one. This could provide the model more flexibility during training and the ability to focus on the more \"correct\" tag sequences."
],
[
"As mentioned in Section SECREF5, we only considered the representation of the first wordpiece sub-token in our model. It would be interesting to see how different approaches to utilize the other sub-tokens' representations in the tagging task affect the results."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model",
"Model ::: NABERT+",
"Model ::: NABERT+ ::: Heads Shared with NABERT+",
"Model ::: Multi-Span Head",
"Model ::: Objective and Training",
"Model ::: Objective and Training ::: Multi-Span Head Training Objective",
"Model ::: Objective and Training ::: Multi-Span Head Correct Tag Sequences",
"Model ::: Objective and Training ::: Dealing with too Many Correct Tag Sequences",
"Model ::: Tag Sequence Prediction with the Multi-Span Head",
"Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Viterbi Decoding",
"Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Beam Search",
"Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Greedy Tagging",
"Preprocessing",
"Preprocessing ::: Simple Preprocessing ::: Improved Textual Parsing",
"Preprocessing ::: Simple Preprocessing ::: Improved Handling of Numbers",
"Preprocessing ::: Using NER for Cleaning Up Multi-Span Questions",
"Training",
"Results and Discussion ::: Performance on DROP's Development Set",
"Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to the NABERT+ Baseline",
"Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to MTMSN",
"Results and Discussion ::: Performance on DROP's Test Set",
"Results and Discussion ::: Ablation Studies",
"Conclusion",
"Future Work ::: A Different Loss for Multi-span Questions",
"Future Work ::: Explore Utilization of Non-First Wordpiece Sub-Tokens"
]
} | {
"answers": [
{
"annotation_id": [
"eb32830971e006411f8136f81ff218c63213dc22"
],
"answer": [
{
"evidence": [
"MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable."
],
"extractive_spans": [],
"free_form_answer": "Only MTMSM specifically tried to tackle the multi-span questions. Their approach consisted of two parts: first train a dedicated categorical variable to predict the number of spans to extract and the second was to generalize the single-span head method of extracting a span",
"highlighted_evidence": [
"MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"b9cb9e533523d40fc08fe9fe6f00405cae72353d"
],
"answer": [
{
"evidence": [
"To model an answer which is a collection of spans, the multi-span head uses the $\\mathtt {BIO}$ tagging format BIBREF8: $\\mathtt {B}$ is used to mark the beginning of a span, $\\mathtt {I}$ is used to mark the inside of a span and $\\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans."
],
"extractive_spans": [
"To model an answer which is a collection of spans, the multi-span head uses the $\\mathtt {BIO}$ tagging format BIBREF8: $\\mathtt {B}$ is used to mark the beginning of a span, $\\mathtt {I}$ is used to mark the inside of a span and $\\mathtt {O}$ is used to mark tokens not included in a span"
],
"free_form_answer": "",
"highlighted_evidence": [
"To model an answer which is a collection of spans, the multi-span head uses the $\\mathtt {BIO}$ tagging format BIBREF8: $\\mathtt {B}$ is used to mark the beginning of a span, $\\mathtt {I}$ is used to mark the inside of a span and $\\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"e361bbf537c1249359e6d7634f9e6488e688c131"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2. Performance of different models on DROP’s development set in terms of Exact Match (EM) and F1."
],
"extractive_spans": [],
"free_form_answer": "For single-span questions, the proposed LARGE-SQUAD improve performance of the MTMSNlarge baseline for 2.1 EM and 1.55 F1.\nFor number type question, MTMSNlarge baseline have improvement over LARGE-SQUAD for 3,11 EM and 2,98 F1. \nFor date question, LARGE-SQUAD have improvements in 2,02 EM but MTMSNlarge have improvement of 4,39 F1.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2. Performance of different models on DROP’s development set in terms of Exact Match (EM) and F1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"00d59243ba4b523fab5776695ac6ab22f0f5b8d0"
],
"answer": [
{
"evidence": [
"Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.",
"FLOAT SELECTED: Table 3. Comparing test and development set results of models from the official DROP leaderboard"
],
"extractive_spans": [],
"free_form_answer": "The proposed model achieves EM 77,63 and F1 80,73 on the test and EM 76,95 and F1 80,25 on the dev",
"highlighted_evidence": [
"Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.",
"FLOAT SELECTED: Table 3. Comparing test and development set results of models from the official DROP leaderboard"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"3ec8399148afa26c5b69d8d430c68cd413913834"
],
"answer": [
{
"evidence": [
"MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable."
],
"extractive_spans": [
"MTMSN BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What approach did previous models use for multi-span questions?",
"How they use sequence tagging to answer multi-span questions?",
"What is difference in peformance between proposed model and state-of-the art on other question types?",
"What is the performance of proposed model on entire DROP dataset?",
"What is the previous model that attempted to tackle multi-span questions as a part of its design?"
],
"question_id": [
"9ab43f941c11a4b09a0e4aea61b4a5b4612e7933",
"5a02a3dd26485a4e4a77411b50b902d2bda3731b",
"579941de2838502027716bae88e33e79e69997a6",
"9a65cfff4d99e4f9546c72dece2520cae6231810",
"a9def7958eac7b9a780403d4f136927f756bab83"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1. Examples of faulty answers for multi-span questions in the training dataset, with their perfect clean answers, and answers generated by our cleaning method",
"Table 2. Performance of different models on DROP’s development set in terms of Exact Match (EM) and F1.",
"Table 3. Comparing test and development set results of models from the official DROP leaderboard",
"Table 4. Ablation tests results summary on DROP’s development set."
],
"file": [
"6-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png"
]
} | [
"What approach did previous models use for multi-span questions?",
"What is difference in peformance between proposed model and state-of-the art on other question types?",
"What is the performance of proposed model on entire DROP dataset?"
] | [
[
"1909.13375-Related Work-2"
],
[
"1909.13375-6-Table2-1.png"
],
[
"1909.13375-Results and Discussion ::: Performance on DROP's Test Set-0",
"1909.13375-6-Table3-1.png"
]
] | [
"Only MTMSM specifically tried to tackle the multi-span questions. Their approach consisted of two parts: first train a dedicated categorical variable to predict the number of spans to extract and the second was to generalize the single-span head method of extracting a span",
"For single-span questions, the proposed LARGE-SQUAD improve performance of the MTMSNlarge baseline for 2.1 EM and 1.55 F1.\nFor number type question, MTMSNlarge baseline have improvement over LARGE-SQUAD for 3,11 EM and 2,98 F1. \nFor date question, LARGE-SQUAD have improvements in 2,02 EM but MTMSNlarge have improvement of 4,39 F1.",
"The proposed model achieves EM 77,63 and F1 80,73 on the test and EM 76,95 and F1 80,25 on the dev"
] | 79 |
1909.00430 | Transfer Learning Between Related Tasks Using Expected Label Proportions | Deep learning systems thrive on abundance of labeled training data but such data is not always available, calling for alternative methods of supervision. One such method is expectation regularization (XR) (Mann and McCallum, 2007), where models are trained based on expected label proportions. We propose a novel application of the XR framework for transfer learning between related tasks, where knowing the labels of task A provides an estimation of the label proportion of task B. We then use a model trained for A to label a large corpus, and use this corpus with an XR loss to train a model for task B. To make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure. We demonstrate the approach on the task of Aspect-based Sentiment classification, where we effectively use a sentence-level sentiment predictor to train accurate aspect-based predictor. The method improves upon fully supervised neural system trained on aspect-level data, and is also cumulative with LM-based pretraining, as we demonstrate by improving a BERT-based Aspect-based Sentiment model. | {
"paragraphs": [
[
"Data annotation is a key bottleneck in many data driven algorithms. Specifically, deep learning models, which became a prominent tool in many data driven tasks in recent years, require large datasets to work well. However, many tasks require manual annotations which are relatively hard to obtain at scale. An attractive alternative is lightly supervised learning BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. For example, in label regularization BIBREF0 the model is trained to fit the true label proportions of an unlabeled dataset. Label regularization is special case of expectation regularization (XR) BIBREF0 , in which the model is trained to fit the conditional probabilities of labels given features.",
"In this work we consider the case of correlated tasks, in the sense that knowing the labels for task A provides information on the expected label composition of task B. We demonstrate the approach using sentence-level and aspect-level sentiment analysis, which we use as a running example: knowing that a sentence has positive sentiment label (task A), we can expect that most aspects within this sentence (task B) will also have positive label. While this expectation may be noisy on the individual example level, it holds well in aggregate: given a set of positively-labeled sentences, we can robustly estimate the proportion of positively-labeled aspects within this set. For example, in a random set of positive sentences, we expect to find 90% positive aspects, while in a set of negative sentences, we expect to find 70% negative aspects. These proportions can be easily either guessed or estimated from a small set.",
"We propose a novel application of the XR framework for transfer learning in this setup. We present an algorithm (Sec SECREF12 ) that, given a corpus labeled for task A (sentence-level sentiment), learns a classifier for performing task B (aspect-level sentiment) instead, without a direct supervision signal for task B. We note that the label information for task A is only used at training time. Furthermore, due to the stochastic nature of the estimation, the task A labels need not be fully accurate, allowing us to make use of noisy predictions which are assigned by an automatic classifier (Sections SECREF12 and SECREF4 ). In other words, given a medium-sized sentiment corpus with sentence-level labels, and a large collection of un-annotated text from the same distribution, we can train an accurate aspect-level sentiment classifier.",
"The XR loss allows us to use task A labels for training task B predictors. This ability seamlessly integrates into other semi-supervised schemes: we can use the XR loss on top of a pre-trained model to fine-tune the pre-trained representation to the target task, and we can also take the model trained using XR loss and plentiful data and fine-tune it to the target task using the available small-scale annotated data. In Section SECREF56 we explore these options and show that our XR framework improves the results also when applied on top of a pre-trained Bert-based model BIBREF9 .",
"Finally, to make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure (Section SECREF19 ). Source code is available at https://github.com/MatanBN/XRTransfer."
],
[
"An effective way to supplement small annotated datasets is to use lightly supervised learning, in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. Previous work in lightly-supervised learning focused on training classifiers by using prior knowledge of label proportions BIBREF2 , BIBREF3 , BIBREF10 , BIBREF0 , BIBREF11 , BIBREF12 , BIBREF7 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF8 or prior knowledge of features label associations BIBREF1 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . In the context of NLP, BIBREF17 suggested to use distributional similarities of words to train sequence models for part-of-speech tagging and a classified ads information extraction task. BIBREF19 used background lexical information in terms of word-class associations to train a sentiment classifier. BIBREF21 , BIBREF22 suggested to exploit the bilingual correlations between a resource rich language and a resource poor language to train a classifier for the resource poor language in a lightly supervised manner."
],
[
"Expectation Regularization (XR) BIBREF0 is a lightly supervised learning method, in which the model is trained to fit the conditional probabilities of labels given features. In the context of NLP, XR was used by BIBREF20 to train twitter-user attribute prediction using hundreds of noisy distributional expectations based on census demographics. Here, we suggest using XR to train a target task (aspect-level sentiment) based on the output of a related source-task classifier (sentence-level sentiment).",
"The main idea of XR is moving from a fully supervised situation in which each data-point INLINEFORM0 has an associated label INLINEFORM1 , to a setup in which sets of data points INLINEFORM2 are associated with corresponding label proportions INLINEFORM3 over that set.",
"Formally, let INLINEFORM0 be a set of data points, INLINEFORM1 be a set of INLINEFORM2 class labels, INLINEFORM3 be a set of sets where INLINEFORM4 for every INLINEFORM5 , and let INLINEFORM6 be the label distribution of set INLINEFORM7 . For example, INLINEFORM8 would indicate that 70% of data points in INLINEFORM9 are expected to have class 0, 20% are expected to have class 1 and 10% are expected to have class 2. Let INLINEFORM10 be a parameterized function with parameters INLINEFORM11 from INLINEFORM12 to a vector of conditional probabilities over labels in INLINEFORM13 . We write INLINEFORM14 to denote the probability assigned to the INLINEFORM15 th event (the conditional probability of INLINEFORM16 given INLINEFORM17 ).",
"A typically objective when training on fully labeled data of INLINEFORM0 pairs is to maximize likelihood of labeled data using the cross entropy loss, INLINEFORM1 ",
"Instead, in XR our data comes in the form of pairs INLINEFORM0 of sets and their corresponding expected label proportions, and we aim to optimize INLINEFORM1 to fit the label distribution INLINEFORM2 over INLINEFORM3 , for all INLINEFORM4 .",
"As counting the number of predicted class labels over a set INLINEFORM0 leads to a non-differentiable objective, BIBREF0 suggest to relax it and use instead the model's posterior distribution INLINEFORM1 over the set: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 indicates the INLINEFORM1 th entry in INLINEFORM2 . Then, we would like to set INLINEFORM3 such that INLINEFORM4 and INLINEFORM5 are close. BIBREF0 suggest to use KL-divergence for this. KL-divergence is composed of two parts: INLINEFORM6 INLINEFORM7 ",
"Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 , therefore the loss function becomes: DISPLAYFORM0 ",
"Notice that computing INLINEFORM0 requires summation over INLINEFORM1 for the entire set INLINEFORM2 , which can be prohibitive. We present batched approximation (Section SECREF19 ) to overcome this.",
" BIBREF0 find that XR might find a degenerate solution. For example, in a three class classification task, where INLINEFORM0 , it might find a solution such that INLINEFORM1 for every instance, as a result, every instance will be classified the same. To avoid this, BIBREF0 suggest to penalize flat distributions by using a temperature coefficient T likewise: DISPLAYFORM0 ",
"Where z is a feature vector and W and b are the linear classifier parameters."
],
[
"In the aspect-based sentiment classification (ABSC) task, we are given a sentence and an aspect, and need to determine the sentiment that is expressed towards the aspect. For example the sentence “Excellent food, although the interior could use some help.“ has two aspects: food and interior, a positive sentiment is expressed about the food, but a negative sentiment is expressed about the interior. A sentence INLINEFORM0 , may contain 0 or more aspects INLINEFORM1 , where each aspect corresponds to a sub-sequence of the original sentence, and has an associated sentiment label (Neg, Pos, or Neu). Concretely, we follow the task definition in the SemEval-2015 and SemEval-2016 shared tasks BIBREF23 , BIBREF24 , in which the relevant aspects are given and the task focuses on finding the sentiment label of the aspects.",
"While sentence-level sentiment labels are relatively easy to obtain, aspect-level annotation are much more scarce, as demonstrated in the small datasets of the SemEval shared tasks."
],
[
"[t!] Inputs: A dataset INLINEFORM0 , batch size INLINEFORM1 , differentiable classifier INLINEFORM2 [H] not converged INLINEFORM3 random( INLINEFORM4 ) INLINEFORM5 random-choice( INLINEFORM6 , INLINEFORM7 ) INLINEFORM8 INLINEFORM9 INLINEFORM10 INLINEFORM11 Compute loss INLINEFORM12 (eq (4)) Compute gradients and update INLINEFORM13 INLINEFORM14 Stochastic Batched XR",
"Consider two classification tasks over a shared input space, a source task INLINEFORM0 from INLINEFORM1 to INLINEFORM2 and a target task INLINEFORM3 from INLINEFORM4 to INLINEFORM5 , which are related through a conditional distribution INLINEFORM6 . In other words, a labeling decision for task INLINEFORM7 induces an expected label distribution over the task INLINEFORM8 . For a set of datapoints INLINEFORM9 that share a source label INLINEFORM10 , we expect to see a target label distribution of INLINEFORM11 .",
"Given a large unlabeled dataset INLINEFORM0 , a small labeled dataset for the target task INLINEFORM1 , classifier INLINEFORM2 (or sufficient training data to train one) for the source task, we wish to use INLINEFORM3 and INLINEFORM4 to train a good classifier INLINEFORM5 for the target task. This can be achieved using the following procedure.",
"Apply INLINEFORM0 to INLINEFORM1 , resulting in a noisy source-side labels INLINEFORM2 for the target task.",
"Estimate the conditional probability INLINEFORM0 table using MLE estimates over INLINEFORM1 INLINEFORM2 ",
"where INLINEFORM0 is a counting function over INLINEFORM1 .",
"Apply INLINEFORM0 to the unlabeled data INLINEFORM1 resulting in labels INLINEFORM2 . Split INLINEFORM3 into INLINEFORM4 sets INLINEFORM5 according to the labeling induced by INLINEFORM6 : INLINEFORM7 ",
"Use Algorithm SECREF12 to train a classifier for the target task using input pairs INLINEFORM0 and the XR loss.",
"In words, by using XR training, we use the expected label proportions over the target task given predicted labels of the source task, to train a target-class classifier."
],
[
" BIBREF0 and following work take the base classifier INLINEFORM0 to be a logistic regression classifier, for which they manually derive gradients for the XR loss and train with LBFGs BIBREF25 . However, nothing precludes us from using an arbitrary neural network instead, as long as it culminates in a softmax layer.",
"One complicating factor is that the computation of INLINEFORM0 in equation ( EQREF5 ) requires a summation over INLINEFORM1 for the entire set INLINEFORM2 , which in our setup may contain hundreds of thousands of examples, making gradient computation and optimization impractical. We instead proposed a stochastic batched approximation in which, instead of requiring that the full constraint set INLINEFORM3 will match the expected label posterior distribution, we require that sufficiently large random subsets of it will match the distribution. At each training step we compute the loss and update the gradient with respect to a different random subset. Specifically, in each training step we sample a random pair INLINEFORM4 , sample a random subset INLINEFORM5 of INLINEFORM6 of size INLINEFORM7 , and compute the local XR loss of set INLINEFORM8 : DISPLAYFORM0 ",
"where INLINEFORM0 is computed by summing over the elements of INLINEFORM1 rather than of INLINEFORM2 in equations ( EQREF5 –2). The stochastic batched XR training algorithm is given in Algorithm SECREF12 . For large enough INLINEFORM3 , the expected label distribution of the subset is the same as that of the complete set."
],
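As a framework-agnostic illustration of the batched approximation, the NumPy sketch below computes the XR loss for a single sampled subset (the cross-entropy part of the KL divergence, i.e. the quantity minimized in eq. (4)). In a real implementation the probabilities would be the differentiable softmax outputs of the classifier and gradients would flow through them; the label proportions in the toy usage are illustrative:

```python
import numpy as np

def batched_xr_loss(subset_probs, target_proportions, eps=1e-12):
    """XR loss for one sampled subset.

    subset_probs: (k, L) array, row i is the model's predicted label
        distribution for the i-th sampled example in the subset.
    target_proportions: (L,) expected label proportions for this subset.
    """
    q = subset_probs.mean(axis=0)          # model's average distribution over the subset
    return -np.sum(target_proportions * np.log(q + eps))

# toy usage: 4 fragments drawn from sentences predicted as positive,
# with (Pos, Neg, Neu) proportions assumed to be (0.9, 0.07, 0.03)
subset_probs = np.array([[0.7, 0.2, 0.1],
                         [0.9, 0.05, 0.05],
                         [0.6, 0.3, 0.1],
                         [0.8, 0.1, 0.1]])
loss = batched_xr_loss(subset_probs, np.array([0.9, 0.07, 0.03]))
```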
[
"We demonstrate the procedure given above by training Aspect-based Sentiment Classifier (ABSC) using sentence-level sentiment signals."
],
[
"We observe that while the sentence-level sentiment does not determine the sentiment of individual aspects (a positive sentence may contain negative remarks about some aspects), it is very predictive of the proportion of sentiment labels of the fragments within a sentence. Positively labeled sentences are likely to have more positive aspects and fewer negative ones, and vice-versa for negatively-labeled sentences. While these proportions may vary on the individual sentence level, we expect them to be stable when aggregating fragments from several sentences: when considering a large enough sample of fragments that all come from positively labeled sentences, we expect the different samples to have roughly similar label proportions to each other. This situation is idealy suited for performing XR training, as described in section SECREF12 .",
"The application to ABSC is almost straightforward, but is complicated a bit by the decomposition of sentences into fragments: each sentence level decision now corresponds to multiple fragment-level decisions. Thus, we apply the sentence-level (task A) classifier INLINEFORM0 on the aspect-level corpus INLINEFORM1 by applying it on the sentence level and then associating the predicted sentence labels with each of the fragments, resulting in fragment-level labeling. Similarly, when we apply INLINEFORM2 to the unlabeled data INLINEFORM3 we again do it at the sentence level, but the sets INLINEFORM4 are composed of fragments, not sentences: INLINEFORM5 ",
"We then apply algorithm SECREF12 as is: at each step of training we sample a source label INLINEFORM0 Pos,Neg,Neu INLINEFORM1 , sample INLINEFORM2 fragments from INLINEFORM3 , and use the XR loss to fit the expected fragment-label proportions over these INLINEFORM4 fragments to INLINEFORM5 . Figure FIGREF21 illustrates the procedure."
],
[
"We model the ABSC problem by associating each (sentence,aspect) pair with a sentence-fragment, and constructing a neural classifier from fragments to sentiment labels. We heuristically decompose a sentence into fragments. We use the same BiLSTM based neural architecture for both sentence classification and fragment classification.",
"We now describe the procedure we use to associate a sentence fragment with each (sentence,aspect) pairs. The shared tasks data associates each aspect with a pivot-phrase INLINEFORM0 , where pivot phrase INLINEFORM1 is defined as a pre-determined sequence of words that is contained within the sentence. For a sentence INLINEFORM2 , a set of pivot phrases INLINEFORM3 and a specific pivot phrase INLINEFORM4 , we consult the constituency parse tree of INLINEFORM5 and look for tree nodes that satisfy the following conditions:",
"The node governs the desired pivot phrase INLINEFORM0 .",
"The node governs either a verb (VB, VBD, VBN, VBG, VBP, VBZ) or an adjective (JJ, JJR, JJS), which is different than any INLINEFORM0 .",
"The node governs a minimal number of pivot phrases from INLINEFORM0 , ideally only INLINEFORM1 .",
"We then select the highest node in the tree that satisfies all conditions. The span governed by this node is taken as the fragment associated with aspect INLINEFORM0 . The decomposition procedure is demonstrated in Figure FIGREF22 .",
"When aspect-level information is given, we take the pivot-phrases to be the requested aspects. When aspect-level information is not available, we take each noun in the sentence to be a pivot-phrase.",
"Our classification model is a simple 1-layer BiLSTM encoder (a concatenation of the last states of a forward and a backward running LSTMs) followed by a linear-predictor. The encoder is fed either a complete sentence or a sentence fragment."
],
[
"Table TABREF44 compares these baselines to three XR conditions.",
"The first condition, BiLSTM-XR-Dev, performs XR training on the automatically-labeled sentence-level dataset. The only access it has to aspect-level annotation is for estimating the proportions of labels for each sentence-level label, which is done based on the validation set of SemEval-2015 (i.e., 20% of the train set). The XR setting is very effective: without using any in-task data, this model already surpasses all other models, both supervised and semi-supervised, except for the BIBREF35 , BIBREF34 models which achieve higher F1 scores. We note that in contrast to XR, the competing models have complete access to the supervised aspect-based labels. The second condition, BiLSTM-XR, is similar but now the model is allowed to estimate the conditional label proportions based on the entire aspect-based training set (the classifier still does not have direct access to the labels beyond the aggregate proportion information). This improves results further, showing the importance of accurately estimating the proportions. Finally, in BiLSTM-XR+Finetuning, we follow the XR training with fully supervised fine-tuning on the small labeled dataset, using the attention-based model of BIBREF35 . This achieves the best results, and surpasses also the semi-supervised BIBREF35 baseline on accuracy, and matching it on F1.",
"We report significance tests for the robustness of the method under random parameter initialization. Our reported numbers are averaged over five random initialization. Since the datasets are unbalanced w.r.t the label distribution, we report both accuracy and macro-F1.",
"The XR training is also more stable than the other semi-supervised baselines, achieving substantially lower standard deviations across different runs."
],
[
"In each experiment in this section we estimate the proportions using the SemEval-2015 train set.",
"How does the XR training scale with the amount of unlabeled data? Figure FIGREF54 a shows the macro-F1 scores on the entire SemEval-2016 dataset, with different unlabeled corpus sizes (measured in number of sentences). An unannotated corpus of INLINEFORM0 sentences is sufficient to surpass the results of the INLINEFORM1 sentence-level trained classifier, and more unannotated data further improves the results.",
"Our method requires a sentence level classifier INLINEFORM0 to label both the target-task corpus and the unlabeled corpus. How does the quality of this classifier affect the overall XR training? We vary the amount of supervision used to train INLINEFORM1 from 0 sentences (assigning the same label to all sentences), to 100, 1000, 5000 and 10000 sentences. We again measure macro-F1 on the entire SemEval 2016 corpus.",
"The results in Figure FIGREF54 b show that when using the prior distributions of aspects (0), the model struggles to learn from this signal, it learns mostly to predict the majority class, and hence reaches very low F1 scores of 35.28. The more data given to the sentence level classifier, the better the potential results will be when training with our method using the classifier labels, with a classifiers trained on 100,1000,5000 and 10000 labeled sentences, we get a F1 scores of 53.81, 58.84, 61.81, 65.58 respectively. Improvements in the source task classifier's quality clearly contribute to the target task accuracy.",
"The Stochastic Batched XR algorithm (Algorithm SECREF12 ) samples a batch of INLINEFORM0 examples at each step to estimate the posterior label distribution used in the loss computation. How does the size of INLINEFORM1 affect the results? We use INLINEFORM2 fragments in our main experiments, but smaller values of INLINEFORM3 reduce GPU memory load and may train better in practice. We tested our method with varying values of INLINEFORM4 on a sample of INLINEFORM5 , using batches that are composed of fragments of 5, 25, 100, 450, 1000 and 4500 sentences. The results are shown in Figure FIGREF54 c. Setting INLINEFORM6 result in low scores. Setting INLINEFORM7 yields better F1 score but with high variance across runs. For INLINEFORM8 fragments the results begin to stabilize, we also see a slight decrease in F1-scores with larger batch sizes. We attribute this drop despite having better estimation of the gradients to the general trend of larger batch sizes being harder to train with stochastic gradient methods."
],
[
"The XR training can be performed also over pre-trained representations. We experiment with two pre-training methods: (1) pre-training by training the BiLSTM model to predict the noisy sentence-level predictions. (2) Using the pre-trained Bert representation BIBREF9 . For (1), we compare the effect of pre-train on unlabeled corpora of sizes of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 sentences. Results in Figure FIGREF54 d show that this form of pre-training is effective for smaller unlabeled corpora but evens out for larger ones.",
"For the Bert experiments, we experiment with the Bert-base model with INLINEFORM1 sets, 30 epochs for XR training or sentence level fine-tuning and 15 epochs for aspect based fine-tuning, on each training method we evaluated the model on the dev set after each epoch and the best model was chosen. We compare the following setups:",
"-Bert INLINEFORM0 Aspect Based Finetuning: pretrained bert model finetuned to the aspect based task.",
"-Bert INLINEFORM0 : A pretrained bert model finetuned to the sentence level task on the INLINEFORM1 sentences, and tested by predicting fragment-level sentiment.",
"-Bert INLINEFORM0 INLINEFORM1 INLINEFORM2 Aspect Based Finetuning: pretrained bert model finetuned to the sentence level task, and finetuned again to the aspect based one.",
"-Bert INLINEFORM0 XR: pretrained bert model followed by XR training using our method.",
"-Bert INLINEFORM0 XR INLINEFORM1 Aspect Based Finetuning: pretrained bert followed by XR training and then fine-tuned to the aspect level task.",
"The results are presented in Table TABREF55 . As before, aspect-based fine-tuning is beneficial for both SemEval-16 and SemEval-15. Training a BiLSTM with XR surpasses pre-trained bert models and using XR training on top of the pre-trained Bert models substantially increases the results even further."
],
[
"We presented a transfer learning method based on expectation regularization (XR), and demonstrated its effectiveness for training aspect-based sentiment classifiers using sentence-level supervision. The method achieves state-of-the-art results for the task, and is also effective for improving on top of a strong pre-trained Bert model. The proposed method provides an additional data-efficient tool in the modeling arsenal, which can be applied on its own or together with another training method, in situations where there is a conditional relations between the labels of a source task for which we have supervision, and a target task for which we don't.",
"While we demonstrated the approach on the sentiment domain, the required conditional dependence between task labels is present in many situations. Other possible application of the method includes training language identification of tweets given geo-location supervision (knowing the geographical region gives a prior on languages spoken), training predictors for renal failure from textual medical records given classifier for diabetes (there is a strong correlation between the two conditions), training a political affiliation classifier from social media tweets based on age-group classifiers, zip-code information, or social-status classifiers (there are known correlations between all of these to political affiliation), training hate-speech detection based on emotion detection, and so on."
],
[
"The work was supported in part by The Israeli Science Foundation (grant number 1555/15)."
]
],
"section_name": [
"Introduction",
"Lightly Supervised Learning",
"Expectation Regularization (XR)",
"Aspect-based Sentiment Classification",
"Transfer-training between related tasks with XR",
"Stochastic Batched Training for Deep XR",
"Application to Aspect-based Sentiment",
"Relating the classification tasks",
"Classification Architecture",
"Main Results",
"Further experiments",
"Pre-training, Bert",
"Discussion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"8f217f179202ac3fbdd22ceb878a60b4ca2b14c8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"c4972dbb4595bf72a99bc4fc9e530d5cc07683ff"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"caedefe56dedd1f6fa029b6f8ee71fab6a65f1c5"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)."
],
"extractive_spans": [],
"free_form_answer": "BiLSTM-XR-Dev Estimation accuracy is 83.31 for SemEval-15 and 87.68 for SemEval-16.\nBiLSTM-XR accuracy is 83.31 for SemEval-15 and 88.12 for SemEval-16.\n",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"0109c97a8e3ec8291b6dadbba5e09ce4a13b13be"
],
"answer": [
{
"evidence": [
"Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 , therefore the loss function becomes: DISPLAYFORM0"
],
"extractive_spans": [
"DISPLAYFORM0"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 , therefore the loss function becomes: DISPLAYFORM0"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How much more data does the model trained using XR loss have access to, compared to the fully supervised model?",
"Does the system trained only using XR loss outperform the fully supervised neural system?",
"How accurate is the aspect based sentiment classifier trained only using the XR loss?",
"How is the expectation regularization loss defined?"
],
"question_id": [
"547be35cff38028648d199ad39fb48236cfb99ee",
"47a30eb4d0d6f5f2ff4cdf6487265a25c1b18fd8",
"e42fbf6c183abf1c6c2321957359c7683122b48e",
"e574f0f733fb98ecef3c64044004aa7a320439be"
],
"question_writer": [
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Illustration of the algorithm. Cs is applied to Du resulting in ỹ for each sentence, Uj is built according with the fragments of the same labelled sentences, the probabilities for each fragment in Uj are summed and normalized, the XR loss in equation (4) is calculated and the network is updated.",
"Figure 2: Illustration of the decomposition procedure, when given a1=“duck confit“ and a2= “foie gras terrine with figs“ as the pivot phrases.",
"Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b).",
"Figure 3: Macro-F1 scores for the entire SemEval-2016 dataset of the different analyses. (a) the contribution of unlabeled data. (b) the effect of sentence classifier quality. (c) the effect of k. (d) the effect of sentence-level pretraining vs. corpus size.",
"Table 2: BERT pre-training: average accuracies and Macro-F1 scores from five runs and their stdev. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all non XR baseline methods, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively."
],
"file": [
"5-Figure1-1.png",
"5-Figure2-1.png",
"7-Table1-1.png",
"9-Figure3-1.png",
"9-Table2-1.png"
]
} | [
"How accurate is the aspect based sentiment classifier trained only using the XR loss?"
] | [
[
"1909.00430-7-Table1-1.png"
]
] | [
"BiLSTM-XR-Dev Estimation accuracy is 83.31 for SemEval-15 and 87.68 for SemEval-16.\nBiLSTM-XR accuracy is 83.31 for SemEval-15 and 88.12 for SemEval-16.\n"
] | 80 |
1910.11493 | The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection | The SIGMORPHON 2019 shared task on cross-lingual transfer and contextual analysis in morphology examined transfer learning of inflection between 100 language pairs, as well as contextual lemmatization and morphosyntactic description in 66 languages. The first task evolves past years' inflection tasks by examining transfer of morphological inflection knowledge from a high-resource language to a low-resource language. This year also presents a new second challenge on lemmatization and morphological feature analysis in context. All submissions featured a neural component and built on either this year's strong baselines or highly ranked systems from previous years' shared tasks. Every participating team improved in accuracy over the baselines for the inflection task (though not Levenshtein distance), and every team in the contextual analysis task improved on both state-of-the-art neural and non-neural baselines. | {
"paragraphs": [
[
"While producing a sentence, humans combine various types of knowledge to produce fluent output—various shades of meaning are expressed through word selection and tone, while the language is made to conform to underlying structural rules via syntax and morphology. Native speakers are often quick to identify disfluency, even if the meaning of a sentence is mostly clear.",
"Automatic systems must also consider these constraints when constructing or processing language. Strong enough language models can often reconstruct common syntactic structures, but are insufficient to properly model morphology. Many languages implement large inflectional paradigms that mark both function and content words with a varying levels of morphosyntactic information. For instance, Romanian verb forms inflect for person, number, tense, mood, and voice; meanwhile, Archi verbs can take on thousands of forms BIBREF0. Such complex paradigms produce large inventories of words, all of which must be producible by a realistic system, even though a large percentage of them will never be observed over billions of lines of linguistic input. Compounding the issue, good inflectional systems often require large amounts of supervised training data, which is infeasible in many of the world's languages.",
"This year's shared task is concentrated on encouraging the construction of strong morphological systems that perform two related but different inflectional tasks. The first task asks participants to create morphological inflectors for a large number of under-resourced languages, encouraging systems that use highly-resourced, related languages as a cross-lingual training signal. The second task welcomes submissions that invert this operation in light of contextual information: Given an unannotated sentence, lemmatize each word, and tag them with a morphosyntactic description. Both of these tasks extend upon previous morphological competitions, and the best submitted systems now represent the state of the art in their respective tasks."
],
[
"Annotated resources for the world's languages are not distributed equally—some languages simply have more as they have more native speakers willing and able to annotate more data. We explore how to transfer knowledge from high-resource languages that are genetically related to low-resource languages.",
"The first task iterates on last year's main task: morphological inflection BIBREF1. Instead of giving some number of training examples in the language of interest, we provided only a limited number in that language. To accompany it, we provided a larger number of examples in either a related or unrelated language. Each test example asked participants to produce some other inflected form when given a lemma and a bundle of morphosyntactic features as input. The goal, thus, is to perform morphological inflection in the low-resource language, having hopefully exploited some similarity to the high-resource language. Models which perform well here can aid downstream tasks like machine translation in low-resource settings. All datasets were resampled from UniMorph, which makes them distinct from past years.",
"The mode of the task is inspired by BIBREF2, who fine-tune a model pre-trained on a high-resource language to perform well on a low-resource language. We do not, though, require that models be trained by fine-tuning. Joint modeling or any number of methods may be explored instead."
],
[
"The model will have access to type-level data in a low-resource target language, plus a high-resource source language. We give an example here of Asturian as the target language with Spanish as the source language.",
""
],
[
"We score the output of each system in terms of its predictions' exact-match accuracy and the average Levenshtein distance between the predictions and their corresponding true forms."
],
[
"Although inflection of words in a context-agnostic manner is a useful evaluation of the morphological quality of a system, people do not learn morphology in isolation.",
"In 2018, the second task of the CoNLL–SIGMORPHON Shared Task BIBREF1 required submitting systems to complete an inflectional cloze task BIBREF3 given only the sentential context and the desired lemma – an example of the problem is given in the following lines: A successful system would predict the plural form “dogs”. Likewise, a Spanish word form ayuda may be a feminine noun or a third-person verb form, which must be disambiguated by context.",
"",
"This year's task extends the second task from last year. Rather than inflect a single word in context, the task is to provide a complete morphological tagging of a sentence: for each word, a successful system will need to lemmatize and tag it with a morphsyntactic description (MSD).",
"width=",
"Context is critical—depending on the sentence, identical word forms realize a large number of potential inflectional categories, which will in turn influence lemmatization decisions. If the sentence were instead “The barking dogs kept us up all night”, “barking” is now an adjective, and its lemma is also “barking”."
],
[
"We presented data in 100 language pairs spanning 79 unique languages. Data for all but four languages (Basque, Kurmanji, Murrinhpatha, and Sorani) are extracted from English Wiktionary, a large multi-lingual crowd-sourced dictionary with morphological paradigms for many lemmata. 20 of the 100 language pairs are either distantly related or unrelated; this allows speculation into the relative importance of data quantity and linguistic relatedness."
],
[
"For each language, the basic data consists of triples of the form (lemma, feature bundle, inflected form), as in tab:sub1data. The first feature in the bundle always specifies the core part of speech (e.g., verb). For each language pair, separate files contain the high- and low-resource training examples.",
"All features in the bundle are coded according to the UniMorph Schema, a cross-linguistically consistent universal morphological feature set BIBREF8, BIBREF9."
],
[
"For each of the Wiktionary languages, Wiktionary provides a number of tables, each of which specifies the full inflectional paradigm for a particular lemma. As in the previous iteration, tables were extracted using a template annotation procedure described in BIBREF10."
],
[
"From each language's collection of paradigms, we sampled the training, development, and test sets as in 2018. Crucially, while the data were sampled in the same fashion, the datasets are distinct from those used for the 2018 shared task.",
"Our first step was to construct probability distributions over the (lemma, feature bundle, inflected form) triples in our full dataset. For each triple, we counted how many tokens the inflected form has in the February 2017 dump of Wikipedia for that language. To distribute the counts of an observed form over all the triples that have this token as its form, we follow the method used in the previous shared task BIBREF1, training a neural network on unambiguous forms to estimate the distribution over all, even ambiguous, forms. We then sampled 12,000 triples without replacement from this distribution. The first 100 were taken as training data for low-resource settings. The first 10,000 were used as high-resource training sets. As these sets are nested, the highest-count triples tend to appear in the smaller training sets.",
"The final 2000 triples were randomly shuffled and then split in half to obtain development and test sets of 1000 forms each. The final shuffling was performed to ensure that the development set is similar to the test set. By contrast, the development and test sets tend to contain lower-count triples than the training set."
],
[
"We further adopted some changes to increase compatibility. Namely, we corrected some annotation errors created while scraping Wiktionary for the 2018 task, and we standardized Romanian t-cedilla and t-comma to t-comma. (The same was done with s-cedilla and s-comma.)"
],
[
"Our data for task 2 come from the Universal Dependencies treebanks BIBREF11, which provides pre-defined training, development, and test splits and annotations in a unified annotation schema for morphosyntax and dependency relationships. Unlike the 2018 cloze task which used UD data, we require no manual data preparation and are able to leverage all 107 monolingual treebanks. As is typical, data are presented in CoNLL-U format, although we modify the morphological feature and lemma fields."
],
[
"The morphological annotations for the 2019 shared task were converted to the UniMorph schema BIBREF10 according to BIBREF12, who provide a deterministic mapping that increases agreement across languages. This also moves the part of speech into the bundle of morphological features. We do not attempt to individually correct any errors in the UD source material. Further, some languages received additional pre-processing. In the Finnish data, we removed morpheme boundaries that were present in the lemmata (e.g., puhe#kieli $\\mapsto $ puhekieli `spoken+language'). Russian lemmata in the GSD treebank were presented in all uppercase; to match the 2018 shared task, we lowercased these. In development and test data, all fields except for form and index within the sentence were struck."
],
[
"We include four neural sequence-to-sequence models mapping lemma into inflected word forms: soft attention BIBREF13, non-monotonic hard attention BIBREF14, monotonic hard attention and a variant with offset-based transition distribution BIBREF15. Neural sequence-to-sequence models with soft attention BIBREF13 have dominated previous SIGMORPHON shared tasks BIBREF16. BIBREF14 instead models the alignment between characters in the lemma and the inflected word form explicitly with hard attention and learns this alignment and transduction jointly. BIBREF15 shows that enforcing strict monotonicity with hard attention is beneficial in tasks such as morphological inflection where the transduction is mostly monotonic. The encoder is a biLSTM while the decoder is a left-to-right LSTM. All models use multiplicative attention and have roughly the same number of parameters. In the model, a morphological tag is fed to the decoder along with target character embeddings to guide the decoding. During the training of the hard attention model, dynamic programming is applied to marginalize all latent alignments exactly."
],
[
"BIBREF17: The Lemming model is a log-linear model that performs joint morphological tagging and lemmatization. The model is globally normalized with the use of a second order linear-chain CRF. To efficiently calculate the partition function, the choice of lemmata are pruned with the use of pre-extracted edit trees."
],
[
"BIBREF18: This is a state-of-the-art neural model that also performs joint morphological tagging and lemmatization, but also accounts for the exposure bias with the application of maximum likelihood (MLE). The model stitches the tagger and lemmatizer together with the use of jackknifing BIBREF19 to expose the lemmatizer to the errors made by the tagger model during training. The morphological tagger is based on a character-level biLSTM embedder that produces the embedding for a word, and a word-level biLSTM tagger that predicts a morphological tag sequence for each word in the sentence. The lemmatizer is a neural sequence-to-sequence model BIBREF15 that uses the decoded morphological tag sequence from the tagger as an additional attribute. The model uses hard monotonic attention instead of standard soft attention, along with a dynamic programming based training scheme."
],
[
"The SIGMORPHON 2019 shared task received 30 submissions—14 for task 1 and 16 for task 2—from 23 teams. In addition, the organizers' baseline systems were evaluated."
],
[
"Five teams participated in the first Task, with a variety of methods aimed at leveraging the cross-lingual data to improve system performance.",
"The University of Alberta (UAlberta) performed a focused investigation on four language pairs, training cognate-projection systems from external cognate lists. Two methods were considered: one which trained a high-resource neural encoder-decoder, and projected the test data into the HRL, and one that projected the HRL data into the LRL, and trained a combined system. Results demonstrated that certain language pairs may be amenable to such methods.",
"The Tuebingen University submission (Tuebingen) aligned source and target to learn a set of edit-actions with both linear and neural classifiers that independently learned to predict action sequences for each morphological category. Adding in the cross-lingual data only led to modest gains.",
"AX-Semantics combined the low- and high-resource data to train an encoder-decoder seq2seq model; optionally also implementing domain adaptation methods to focus later epochs on the target language.",
"The CMU submission first attends over a decoupled representation of the desired morphological sequence before using the updated decoder state to attend over the character sequence of the lemma. Secondly, in order to reduce the bias of the decoder's language model, they hallucinate two types of data that encourage common affixes and character copying. Simply allowing the model to learn to copy characters for several epochs significantly out-performs the task baseline, while further improvements are obtained through fine-tuning. Making use of an adversarial language discriminator, cross lingual gains are highly-correlated to linguistic similarity, while augmenting the data with hallucinated forms and multiple related target language further improves the model.",
"The system from IT-IST also attends separately to tags and lemmas, using a gating mechanism to interpolate the importance of the individual attentions. By combining the gated dual-head attention with a SparseMax activation function, they are able to jointly learn stem and affix modifications, improving significantly over the baseline system.",
"The relative system performance is described in tab:sub2team, which shows the average per-language accuracy of each system. The table reflects the fact that some teams submitted more than one system (e.g. Tuebingen-1 & Tuebingen-2 in the table)."
],
[
"Nine teams submitted system papers for Task 2, with several interesting modifications to either the baseline or other prior work that led to modest improvements.",
"Charles-Saarland achieved the highest overall tagging accuracy by leveraging multi-lingual BERT embeddings fine-tuned on a concatenation of all available languages, effectively transporting the cross-lingual objective of Task 1 into Task 2. Lemmas and tags are decoded separately (with a joint encoder and separate attention); Lemmas are a sequence of edit-actions, while tags are calculated jointly. (There is no splitting of tags into features; tags are atomic.)",
"CBNU instead lemmatize using a transformer network, while performing tagging with a multilayer perceptron with biaffine attention. Input words are first lemmatized, and then pipelined to the tagger, which produces atomic tag sequences (i.e., no splitting of features).",
"The team from Istanbul Technical University (ITU) jointly produces lemmatic edit-actions and morphological tags via a two level encoder (first word embeddings, and then context embeddings) and separate decoders. Their system slightly improves over the baseline lemmatization, but significantly improves tagging accuracy.",
"The team from the University of Groningen (RUG) also uses separate decoders for lemmatization and tagging, but uses ELMo to initialize the contextual embeddings, leading to large gains in performance. Furthermore, joint training on related languages further improves results.",
"CMU approaches tagging differently than the multi-task decoding we've seen so far (baseline is used for lemmatization). Making use of a hierarchical CRF that first predicts POS (that is subsequently looped back into the encoder), they then seek to predict each feature separately. In particular, predicting POS separately greatly improves results. An attempt to leverage gold typological information led to little gain in the results; experiments suggest that the system is already learning the pertinent information.",
"The team from Ohio State University (OHIOSTATE) concentrates on predicting tags; the baseline lemmatizer is used for lemmatization. To that end, they make use of a dual decoder that first predicts features given only the word embedding as input; the predictions are fed to a GRU seq2seq, which then predicts the sequence of tags.",
"The UNT HiLT+Ling team investigates a low-resource setting of the tagging, by using parallel Bible data to learn a translation matrix between English and the target language, learning morphological tags through analogy with English.",
"The UFAL-Prague team extends their submission from the UD shared task (multi-layer LSTM), replacing the pretrained embeddings with BERT, to great success (first in lemmatization, 2nd in tagging). Although they predict complete tags, they use the individual features to regularize the decoder. Small gains are also obtained from joining multi-lingual corpora and ensembling.",
"CUNI–Malta performs lemmatization as operations over edit actions with LSTM and ReLU. Tagging is a bidirectional LSTM augmented by the edit actions (i.e., two-stage decoding), predicting features separately.",
"The Edinburgh system is a character-based LSTM encoder-decoder with attention, implemented in OpenNMT. It can be seen as an extension of the contextual lemmatization system Lematus BIBREF20 to include morphological tagging, or alternatively as an adaptation of the morphological re-inflection system MED BIBREF21 to incorporate context and perform analysis rather than re-inflection. Like these systems it uses a completely generic encoder-decoder architecture with no specific adaptation to the morphological processing task other than the form of the input. In the submitted version of the system, the input is split into short chunks corresponding to the target word plus one word of context on either side, and the system is trained to output the corresponding lemmas and tags for each three-word chunk.",
"Several teams relied on external resources to improve their lemmatization and feature analysis. Several teams made use of pre-trained embeddings. CHARLES-SAARLAND-2 and UFALPRAGUE-1 used pretrained contextual embeddings (BERT) provided by Google BIBREF22. CBNU-1 used a mix of pre-trained embeddings from the CoNLL 2017 shared task and fastText. Further, some teams trained their own embeddings to aid performance."
],
[
"In general, the application of typology to natural language processing BIBREF23, BIBREF24 provides an interesting avenue for multilinguality. Further, our shared task was designed to only leverage a single helper language, though many may exist with lexical or morphological overlap with the target language. Techniques like those of BIBREF25 may aid in designing universal inflection architectures. Neither task this year included unannotated monolingual corpora. Using such data is well-motivated from an L1-learning point of view, and may affect the performance of low-resource data settings.",
"In the case of inflection an interesting future topic could involve departing from orthographic representation and using more IPA-like representations, i.e. transductions over pronunciations. Different languages, in particular those with idiosyncratic orthographies, may offer new challenges in this respect.",
"Only one team tried to learn inflection in a multilingual setting—i.e. to use all training data to train one model. Such transfer learning is an interesting avenue of future research, but evaluation could be difficult. Whether any cross-language transfer is actually being learned vs. whether having more data better biases the networks to copy strings is an evaluation step to disentangle.",
"Creating new data sets that accurately reflect learner exposure (whether L1 or L2) is also an important consideration in the design of future shared tasks. One pertinent facet of this is information about inflectional categories—often the inflectional information is insufficiently prescribed by the lemma, as with the Romanian verbal inflection classes or nominal gender in German.",
"As we move toward multilingual models for morphology, it becomes important to understand which representations are critical or irrelevant for adapting to new languages; this may be probed in the style of BIBREF27, and it can be used as a first step toward designing systems that avoid catastrophic forgetting as they learn to inflect new languages BIBREF28.",
"Future directions for Task 2 include exploring cross-lingual analysis—in stride with both Task 1 and BIBREF29—and leveraging these analyses in downstream tasks."
],
[
"The SIGMORPHON 2019 shared task provided a type-level evaluation on 100 language pairs in 79 languages and a token-level evaluation on 107 treebanks in 66 languages, of systems for inflection and analysis. On task 1 (low-resource inflection with cross-lingual transfer), 14 systems were submitted, while on task 2 (lemmatization and morphological feature analysis), 16 systems were submitted. All used neural network models, completing a trend in past years' shared tasks and other recent work on morphology.",
"In task 1, gains from cross-lingual training were generally modest, with gains positively correlating with the linguistic similarity of the two languages.",
"In the second task, several methods were implemented by multiple groups, with the most successful systems implementing variations of multi-headed attention, multi-level encoding, multiple decoders, and ELMo and BERT contextual embeddings.",
"We have released the training, development, and test sets, and expect these datasets to provide a useful benchmark for future research into learning of inflectional morphology and string-to-string transduction."
],
[
"MS has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771113)."
]
],
"section_name": [
"Introduction",
"Tasks and Evaluation ::: Task 1: Cross-lingual transfer for morphological inflection",
"Tasks and Evaluation ::: Task 1: Cross-lingual transfer for morphological inflection ::: Example",
"Tasks and Evaluation ::: Task 1: Cross-lingual transfer for morphological inflection ::: Evaluation",
"Tasks and Evaluation ::: Task 2: Morphological analysis in context",
"Data ::: Data for Task 1 ::: Language pairs",
"Data ::: Data for Task 1 ::: Data format",
"Data ::: Data for Task 1 ::: Extraction from Wiktionary",
"Data ::: Data for Task 1 ::: Sampling data splits",
"Data ::: Data for Task 1 ::: Other modifications",
"Data ::: Data for Task 2",
"Data ::: Data for Task 2 ::: Data conversion",
"Baselines ::: Task 1 Baseline",
"Baselines ::: Task 2 Baselines ::: Non-neural",
"Baselines ::: Task 2 Baselines ::: Neural",
"Results",
"Results ::: Task 1 Results",
"Results ::: Task 2 Results",
"Future Directions",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"012a77e1bbdaa410ad83a28a87526db74bd1e353"
],
"answer": [
{
"evidence": [
"BIBREF17: The Lemming model is a log-linear model that performs joint morphological tagging and lemmatization. The model is globally normalized with the use of a second order linear-chain CRF. To efficiently calculate the partition function, the choice of lemmata are pruned with the use of pre-extracted edit trees."
],
"extractive_spans": [],
"free_form_answer": "The Lemming model in BIBREF17",
"highlighted_evidence": [
"BIBREF17: The Lemming model is a log-linear model that performs joint morphological tagging and lemmatization. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"What were the non-neural baselines used for the task?"
],
"question_id": [
"b65b1c366c8bcf544f1be5710ae1efc6d2b1e2f1"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"morphology"
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Table 1: Sample language pair and data format for Task 1",
"Table 2: Task 1 Team Scores, averaged across all Languages; * indicates submissions were only applied to a subset of languages, making scores incomparable. † indicates that additional resources were used for training.",
"Table 3: Task 1 Accuracy scores",
"Table 4: Task 1 Levenshtein scores",
"Table 5: Task 2 Team Scores, averaged across all treebanks; * indicates submissions were only applied to a subset of languages, making scores incomparable. † indicates that additional external resources were used for training, and ‡ indicates that training data were shared across languages or treebanks.",
"Table 6: Task 2 Lemma Accuracy scores",
"Table 7: Task 2 Lemma Levenshtein scores"
],
"file": [
"2-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"8-Table5-1.png",
"9-Table6-1.png",
"10-Table7-1.png"
]
} | [
"What were the non-neural baselines used for the task?"
] | [
[
"1910.11493-Baselines ::: Task 2 Baselines ::: Non-neural-0"
]
] | [
"The Lemming model in BIBREF17"
] | 81 |
1908.10449 | Interactive Machine Comprehension with Information Seeking Agents | Existing machine reading comprehension (MRC) models do not scale effectively to real-world applications like web-level information retrieval and question answering (QA). We argue that this stems from the nature of MRC datasets: most of these are static environments wherein the supporting documents and all necessary information are fully observed. In this paper, we propose a simple method that reframes existing MRC datasets as interactive, partially observable environments. Specifically, we "occlude" the majority of a document's text and add context-sensitive commands that reveal "glimpses" of the hidden text to a model. We repurpose SQuAD and NewsQA as an initial case study, and then show how the interactive corpora can be used to train a model that seeks relevant information through sequential decision making. We believe that this setting can contribute in scaling models to web-level QA scenarios. | {
"paragraphs": [
[
"Many machine reading comprehension (MRC) datasets have been released in recent years BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 to benchmark a system's ability to understand and reason over natural language. Typically, these datasets require an MRC model to read through a document to answer a question about information contained therein.",
"The supporting document is, more often than not, static and fully observable. This raises concerns, since models may find answers simply through shallow pattern matching; e.g., syntactic similarity between the words in questions and documents. As pointed out by BIBREF5, for questions starting with when, models tend to predict the only date/time answer in the supporting document. Such behavior limits the generality and usefulness of MRC models, and suggests that they do not learn a proper `understanding' of the intended task. In this paper, to address this problem, we shift the focus of MRC data away from `spoon-feeding' models with sufficient information in fully observable, static documents. Instead, we propose interactive versions of existing MRC tasks, whereby the information needed to answer a question must be gathered sequentially.",
"The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL).",
"As an initial case study, we repurpose two well known, related corpora with different difficulty levels for our interactive MRC task: SQuAD and NewsQA. Table TABREF2 shows some examples of a model performing interactive MRC on these datasets. Naturally, our reframing makes the MRC problem harder; however, we believe the added demands of iMRC more closely match web-level QA and may lead to deeper comprehension of documents' content.",
"The main contributions of this work are as follows:",
"We describe a method to make MRC datasets interactive and formulate the new task as an RL problem.",
"We develop a baseline agent that combines a top performing MRC model and a state-of-the-art RL optimization algorithm and test it on our iMRC tasks.",
"We conduct experiments on several variants of iMRC and discuss the significant challenges posed by our setting."
],
[
"Skip-reading BIBREF6, BIBREF7, BIBREF8 is an existing setting in which MRC models read partial documents. Concretely, these methods assume that not all tokens in the input sequence are useful, and therefore learn to skip irrelevant tokens based on the current input and their internal memory. Since skipping decisions are discrete, the models are often optimized by the REINFORCE algorithm BIBREF9. For example, the structural-jump-LSTM proposed in BIBREF10 learns to skip and jump over chunks of text. In a similar vein, BIBREF11 designed a QA task where the model reads streaming data unidirectionally, without knowing when the question will be provided. Skip-reading approaches are limited in that they only consider jumping over a few consecutive tokens and the skipping operations are usually unidirectional. Based on the assumption that a single pass of reading may not provide sufficient information, multi-pass reading methods have also been studied BIBREF12, BIBREF13.",
"Compared to skip-reading and multi-turn reading, our work enables an agent to jump through a document in a more dynamic manner, in some sense combining aspects of skip-reading and re-reading. For example, it can jump forward, backward, or to an arbitrary position, depending on the query. This also distinguishes the model we develop in this work from ReasoNet BIBREF13, where an agent decides when to stop unidirectional reading.",
"Recently, BIBREF14 propose DocQN, which is a DQN-based agent that leverages the (tree) structure of documents and navigates across sentences and paragraphs. The proposed method has been shown to outperform vanilla DQN and IR baselines on TriviaQA dataset. The main differences between our work and DocQA include: iMRC does not depend on extra meta information of documents (e.g., title, paragraph title) for building document trees as in DocQN; our proposed environment is partially-observable, and thus an agent is required to explore and memorize the environment via interaction; the action space in our setting (especially for the Ctrl+F command as defined in later section) is arguably larger than the tree sampling action space in DocQN.",
"Closely related to iMRC is work by BIBREF15, in which the authors introduce a collection of synthetic tasks to train and test information-seeking capabilities in neural models. We extend that work by developing a realistic and challenging text-based task.",
"Broadly speaking, our approach is also linked to the optimal stopping problem in the literature Markov decision processes (MDP) BIBREF16, where at each time-step the agent either continues or stops and accumulates reward. Here, we reformulate conventional QA tasks through the lens of optimal stopping, in hopes of improving over the shallow matching behaviors exhibited by many MRC systems."
],
[
"We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1. Both original datasets share similar properties. Specifically, every data-point consists of a tuple, $\\lbrace p, q, a\\rbrace $, where $p$ represents a paragraph, $q$ a question, and $a$ is the answer. The answer is a word span defined by head and tail positions in $p$. NewsQA is more difficult than SQuAD because it has a larger vocabulary, more difficult questions, and longer source documents.",
"We first split every paragraph $p$ into a list of sentences $\\mathcal {S} = \\lbrace s_1, s_2, ..., s_n\\rbrace $, where $n$ stands for number of sentences in $p$. Given a question $q$, rather than showing the entire paragraph $p$, we only show an agent the first sentence $s_1$ and withhold the rest. The agent must issue commands to reveal the hidden sentences progressively and thereby gather the information needed to answer question $q$.",
"An agent decides when to stop interacting and output an answer, but the number of interaction steps is limited. Once an agent has exhausted its step budget, it is forced to answer the question."
],
[
"As described in the previous section, we convert MRC tasks into sequential decision-making problems (which we will refer to as games). These can be described naturally within the reinforcement learning (RL) framework. Formally, tasks in iMRC are partially observable Markov decision processes (POMDP) BIBREF17. An iMRC data-point is a discrete-time POMDP defined by $(S, T, A, \\Omega , O, R, \\gamma )$, where $\\gamma \\in [0, 1]$ is the discount factor and the other elements are described in detail below.",
"Environment States ($S$): The environment state at turn $t$ in the game is $s_t \\in S$. It contains the complete internal information of the game, much of which is hidden from the agent. When an agent issues an action $a_t$, the environment transitions to state $s_{t+1}$ with probability $T(s_{t+1} | s_t, a_t)$). In this work, transition probabilities are either 0 or 1 (i.e., deterministic environment).",
"Actions ($A$): At each game turn $t$, the agent issues an action $a_t \\in A$. We will elaborate on the action space of iMRC in the action space section.",
"Observations ($\\Omega $): The text information perceived by the agent at a given game turn $t$ is the agent's observation, $o_t \\in \\Omega $, which depends on the environment state and the previous action with probability $O(o_t|s_t)$. In this work, observation probabilities are either 0 or 1 (i.e., noiseless observation). Reward Function ($R$): Based on its actions, the agent receives rewards $r_t = R(s_t, a_t)$. Its objective is to maximize the expected discounted sum of rewards $E \\left[\\sum _t \\gamma ^t r_t \\right]$."
],
[
"To better describe the action space of iMRC, we split an agent's actions into two phases: information gathering and question answering. During the information gathering phase, the agent interacts with the environment to collect knowledge. It answers questions with its accumulated knowledge in the question answering phase.",
"Information Gathering: At step $t$ of the information gathering phase, the agent can issue one of the following four actions to interact with the paragraph $p$, where $p$ consists of $n$ sentences and where the current observation corresponds to sentence $s_k,~1 \\le k \\le n$:",
"previous: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_n & \\text{if $k = 1$,}\\\\ s_{k-1} & \\text{otherwise;} \\end{array}\\right.} $",
"next: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_1 & \\text{if $k = n$,}\\\\ s_{k+1} & \\text{otherwise;} \\end{array}\\right.} $",
"Ctrl+F $<$query$>$: jump to the sentence that contains the next occurrence of “query”;",
"stop: terminate information gathering phase.",
"Question Answering: We follow the output format of both SQuAD and NewsQA, where an agent is required to point to the head and tail positions of an answer span within $p$. Assume that at step $t$ the agent stops interacting and the observation $o_t$ is $s_k$. The agent points to a head-tail position pair in $s_k$."
],
[
"Given the question “When is the deadline of AAAI?”, as a human, one might try searching “AAAI” on a search engine, follow the link to the official AAAI website, then search for keywords “deadline” or “due date” on the website to jump to a specific paragraph. Humans have a deep understanding of questions because of their significant background knowledge. As a result, the keywords they use to search are not limited to what appears in the question.",
"Inspired by this observation, we study 3 query types for the Ctrl+F $<$query$>$ command.",
"One token from the question: the setting with smallest action space. Because iMRC deals with Ctrl+F commands by exact string matching, there is no guarantee that all sentences are accessible from question tokens only.",
"One token from the union of the question and the current observation: an intermediate level where the action space is larger.",
"One token from the dataset vocabulary: the action space is huge (see Table TABREF16 for statistics of SQuAD and NewsQA). It is guaranteed that all sentences in all documents are accessible through these tokens."
],
[
"Since iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use $\\text{F}_1$ score to compare predicted answers against ground-truth, as in previous works. When there exist multiple ground-truth answers, we report the max $\\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks where we report the agent's test performance corresponding to its best validation performance ."
],
[
"As a baseline, we propose QA-DQN, an agent that adopts components from QANet BIBREF18 and adds an extra command generation module inspired by LSTM-DQN BIBREF19.",
"As illustrated in Figure FIGREF6, the agent consists of three components: an encoder, an action generator, and a question answerer. More precisely, at a game step $t$, the encoder reads observation string $o_t$ and question string $q$ to generate attention aggregated hidden representations $M_t$. Using $M_t$, the action generator outputs commands (defined in previous sections) to interact with iMRC. If the generated command is stop or the agent is forced to stop, the question answerer takes the current information at game step $t$ to generate head and tail pointers for answering the question; otherwise, the information gathering procedure continues.",
"In this section, we describe the high-level model structure and training strategies of QA-DQN. We refer readers to BIBREF18 for detailed information. We will release datasets and code in the near future."
],
[
"In this section, we use game step $t$ to denote one round of interaction between an agent with the iMRC environment. We use $o_t$ to denote text observation at game step $t$ and $q$ to denote question text. We use $L$ to refer to a linear transformation. $[\\cdot ;\\cdot ]$ denotes vector concatenation."
],
[
"The encoder consists of an embedding layer, two stacks of transformer blocks (denoted as encoder transformer blocks and aggregation transformer blocks), and an attention layer.",
"In the embedding layer, we aggregate both word- and character-level embeddings. Word embeddings are initialized by the 300-dimension fastText BIBREF20 vectors trained on Common Crawl (600B tokens), and are fixed during training. Character embeddings are initialized by 200-dimension random vectors. A convolutional layer with 96 kernels of size 5 is used to aggregate the sequence of characters. We use a max pooling layer on the character dimension, then a multi-layer perceptron (MLP) of size 96 is used to aggregate the concatenation of word- and character-level representations. A highway network BIBREF21 is used on top of this MLP. The resulting vectors are used as input to the encoding transformer blocks.",
"Each encoding transformer block consists of four convolutional layers (with shared weights), a self-attention layer, and an MLP. Each convolutional layer has 96 filters, each kernel's size is 7. In the self-attention layer, we use a block hidden size of 96 and a single head attention mechanism. Layer normalization and dropout are applied after each component inside the block. We add positional encoding into each block's input. We use one layer of such an encoding block.",
"At a game step $t$, the encoder processes text observation $o_t$ and question $q$ to generate context-aware encodings $h_{o_t} \\in \\mathbb {R}^{L^{o_t} \\times H_1}$ and $h_q \\in \\mathbb {R}^{L^{q} \\times H_1}$, where $L^{o_t}$ and $L^{q}$ denote length of $o_t$ and $q$ respectively, $H_1$ is 96.",
"Following BIBREF18, we use a context-query attention layer to aggregate the two representations $h_{o_t}$ and $h_q$. Specifically, the attention layer first uses two MLPs to map $h_{o_t}$ and $h_q$ into the same space, with the resulting representations denoted as $h_{o_t}^{\\prime } \\in \\mathbb {R}^{L^{o_t} \\times H_2}$ and $h_q^{\\prime } \\in \\mathbb {R}^{L^{q} \\times H_2}$, in which, $H_2$ is 96.",
"Then, a tri-linear similarity function is used to compute the similarities between each pair of $h_{o_t}^{\\prime }$ and $h_q^{\\prime }$ items:",
"where $\\odot $ indicates element-wise multiplication and $w$ is trainable parameter vector of size 96.",
"We apply softmax to the resulting similarity matrix $S$ along both dimensions, producing $S^A$ and $S^B$. Information in the two representations are then aggregated as",
"where $h_{oq}$ is aggregated observation representation.",
"On top of the attention layer, a stack of aggregation transformer blocks is used to further map the observation representations to action representations and answer representations. The configuration parameters are the same as the encoder transformer blocks, except there are two convolution layers (with shared weights), and the number of blocks is 7.",
"Let $M_t \\in \\mathbb {R}^{L^{o_t} \\times H_3}$ denote the output of the stack of aggregation transformer blocks, in which $H_3$ is 96."
],
[
"The action generator takes $M_t$ as input and estimates Q-values for all possible actions. As described in previous section, when an action is a Ctrl+F command, it is composed of two tokens (the token “Ctrl+F” and the query token). Therefore, the action generator consists of three MLPs:",
"Here, the size of $L_{shared} \\in \\mathbb {R}^{95 \\times 150}$; $L_{action}$ has an output size of 4 or 2 depending on the number of actions available; the size of $L_{ctrlf}$ is the same as the size of a dataset's vocabulary size (depending on different query type settings, we mask out words in the vocabulary that are not query candidates). The overall Q-value is simply the sum of the two components:"
],
[
"Following BIBREF18, we append two extra stacks of aggregation transformer blocks on top of the encoder to compute head and tail positions:",
"Here, $M_{head}$ and $M_{tail}$ are outputs of the two extra transformer stacks, $L_0$, $L_1$, $L_2$ and $L_3$ are trainable parameters with output size 150, 150, 1 and 1, respectively."
],
[
"In iMRC, some questions may not be easily answerable based only on observation of a single sentence. To overcome this limitation, we provide an explicit memory mechanism to QA-DQN. Specifically, we use a queue to store strings that have been observed recently. The queue has a limited size of slots (we use queues of size [1, 3, 5] in this work). This prevents the agent from issuing next commands until the environment has been observed fully, in which case our task would degenerate to the standard MRC setting. The memory slots are reset episodically."
],
[
"Because the question answerer in QA-DQN is a pointing model, its performance relies heavily on whether the agent can find and stop at the sentence that contains the answer. We design a heuristic reward to encourage and guide this behavior. In particular, we assign a reward if the agent halts at game step $k$ and the answer is a sub-string of $o_k$ (if larger memory slots are used, we assign this reward if the answer is a sub-string of the memory at game step $k$). We denote this reward as the sufficient information reward, since, if an agent sees the answer, it should have a good chance of having gathered sufficient information for the question (although this is not guaranteed).",
"Note this sufficient information reward is part of the design of QA-DQN, whereas the question answering score is the only metric used to evaluate an agent's performance on the iMRC task."
],
[
"As mentioned above, an agent might bypass Ctrl+F actions and explore an iMRC game only via next commands. We study this possibility in an ablation study, where we limit the agent to the Ctrl+F and stop commands. In this setting, an agent is forced to explore by means of search a queries."
],
[
"In this section, we describe our training strategy. We split the training pipeline into two parts for easy comprehension. We use Adam BIBREF22 as the step rule for optimization in both parts, with the learning rate set to 0.00025."
],
[
"iMRC games are interactive environments. We use an RL training algorithm to train the interactive information-gathering behavior of QA-DQN. We adopt the Rainbow algorithm proposed by BIBREF23, which integrates several extensions to the original Deep Q-Learning algorithm BIBREF24. Rainbox exhibits state-of-the-art performance on several RL benchmark tasks (e.g., Atari games).",
"During game playing, we use a mini-batch of size 10 and push all transitions (observation string, question string, generated command, reward) into a replay buffer of size 500,000. We do not compute losses directly using these transitions. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer, compute loss, and update the network.",
"Detailed hyper-parameter settings for action generation are shown in Table TABREF38."
],
[
"Similarly, we use another replay buffer to store question answering transitions (observation string when interaction stops, question string, ground-truth answer).",
"Because both iSQuAD and iNewsQA are converted from datasets that provide ground-truth answer positions, we can leverage this information and train the question answerer with supervised learning. Specifically, we only push question answering transitions when the ground-truth answer is in the observation string. For each transition, we convert the ground-truth answer head- and tail-positions from the SQuAD and NewsQA datasets to positions in the current observation string. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer and train the question answerer using the Negative Log-Likelihood (NLL) loss. We use a dropout rate of 0.1."
],
[
"In this study, we focus on three factors and their effects on iMRC and the performance of the QA-DQN agent:",
"different Ctrl+F strategies, as described in the action space section;",
"enabled vs. disabled next and previous actions;",
"different memory slot sizes.",
"Below we report the baseline agent's training performance followed by its generalization performance on test data."
],
[
"It remains difficult for RL agents to master multiple games at the same time. In our case, each document-question pair can be considered a unique game, and there are hundred of thousands of them. Therefore, as is common practice in the RL literature, we study an agent's training curves.",
"Due to the space limitations, we select several representative settings to discuss in this section and provide QA-DQN's training and evaluation curves for all experimental settings in the Appendix. We provide the agent's sufficient information rewards (i.e., if the agent stopped at a state where the observation contains the answer) during training in Appendix as well.",
"Figure FIGREF36 shows QA-DQN's training performance ($\\text{F}_1$ score) when next and previous actions are available. Figure FIGREF40 shows QA-DQN's training performance ($\\text{F}_1$ score) when next and previous actions are disabled. Note that all training curves are averaged over 3 runs with different random seeds and all evaluation curves show the one run with max validation performance among the three.",
"From Figure FIGREF36, we can see that the three Ctrl+F strategies show similar difficulty levels when next and previous are available, although QA-DQN works slightly better when selecting a word from the question as query (especially on iNewsQA). However, from Figure FIGREF40 we observe that when next and previous are disabled, QA-DQN shows significant advantage when selecting a word from the question as query. This may due to the fact that when an agent must use Ctrl+F to navigate within documents, the set of question words is a much smaller action space in contrast to the other two settings. In the 4-action setting, an agent can rely on issuing next and previous actions to reach any sentence in a document.",
"The effect of action space size on model performance is particularly clear when using a datasets' entire vocabulary as query candidates in the 2-action setting. From Figure FIGREF40 (and figures with sufficient information rewards in the Appendix) we see QA-DQN has a hard time learning in this setting. As shown in Table TABREF16, both datasets have a vocabulary size of more than 100k. This is much larger than in the other two settings, where on average the length of questions is around 10. This suggests that the methods with better sample efficiency are needed to act in more realistic problem settings with huge action spaces.",
"Experiments also show that a larger memory slot size always helps. Intuitively, with a memory mechanism (either implicit or explicit), an agent could make the environment closer to fully observed by exploring and memorizing observations. Presumably, a larger memory may further improve QA-DQN's performance, but considering the average number of sentences in each iSQuAD game is 5, a memory with more than 5 slots will defeat the purpose of our study of partially observable text environments.",
"Not surprisingly, QA-DQN performs worse in general on iNewsQA, in all experiments. As shown in Table TABREF16, the average number of sentences per document in iNewsQA is about 6 times more than in iSQuAD. This is analogous to games with larger maps in the RL literature, where the environment is partially observable. A better exploration (in our case, jumping) strategy may help QA-DQN to master such harder games."
],
[
"To study QA-DQN's ability to generalize, we select the best performing agent in each experimental setting on the validation set and report their performance on the test set. The agent's test performance is reported in Table TABREF41. In addition, to support our claim that the challenging part of iMRC tasks is information seeking rather than answering questions given sufficient information, we also report the $\\text{F}_1$ score of an agent when it has reached the piece of text that contains the answer, which we denote as $\\text{F}_{1\\text{info}}$.",
"From Table TABREF41 (and validation curves provided in appendix) we can observe that QA-DQN's performance during evaluation matches its training performance in most settings. $\\text{F}_{1\\text{info}}$ scores are consistently higher than the overall $\\text{F}_1$ scores, and they have much less variance across different settings. This supports our hypothesis that information seeking play an important role in solving iMRC tasks, whereas question answering given necessary information is relatively straightforward. This also suggests that an interactive agent that can better navigate to important sentences is very likely to achieve better performance on iMRC tasks."
],
[
"In this work, we propose and explore the direction of converting MRC datasets into interactive environments. We believe interactive, information-seeking behavior is desirable for neural MRC systems when knowledge sources are partially observable and/or too large to encode in their entirety — for instance, when searching for information on the internet, where knowledge is by design easily accessible to humans through interaction.",
"Despite being restricted, our proposed task presents major challenges to existing techniques. iMRC lies at the intersection of NLP and RL, which is arguably less studied in existing literature. We hope to encourage researchers from both NLP and RL communities to work toward solving this task.",
"For our baseline, we adopted an off-the-shelf, top-performing MRC model and RL method. Either component can be replaced straightforwardly with other methods (e.g., to utilize a large-scale pretrained language model).",
"Our proposed setup and baseline agent presently use only a single word with the query command. However, a host of other options should be considered in future work. For example, multi-word queries with fuzzy matching are more realistic. It would also be interesting for an agent to generate a vector representation of the query in some latent space. This vector could then be compared with precomputed document representations (e.g., in an open domain QA dataset) to determine what text to observe next, with such behavior tantamount to learning to do IR.",
"As mentioned, our idea for reformulating existing MRC datasets as partially observable and interactive environments is straightforward and general. Almost all MRC datasets can be used to study interactive, information-seeking behavior through similar modifications. We hypothesize that such behavior can, in turn, help in solving real-world MRC problems involving search."
]
],
"section_name": [
"Introduction",
"Related Works",
"iMRC: Making MRC Interactive",
"iMRC: Making MRC Interactive ::: Interactive MRC as a POMDP",
"iMRC: Making MRC Interactive ::: Action Space",
"iMRC: Making MRC Interactive ::: Query Types",
"iMRC: Making MRC Interactive ::: Evaluation Metric",
"Baseline Agent",
"Baseline Agent ::: Model Structure",
"Baseline Agent ::: Model Structure ::: Encoder",
"Baseline Agent ::: Model Structure ::: Action Generator",
"Baseline Agent ::: Model Structure ::: Question Answerer",
"Baseline Agent ::: Memory and Reward Shaping ::: Memory",
"Baseline Agent ::: Memory and Reward Shaping ::: Reward Shaping",
"Baseline Agent ::: Memory and Reward Shaping ::: Ctrl+F Only Mode",
"Baseline Agent ::: Training Strategy",
"Baseline Agent ::: Training Strategy ::: Action Generation",
"Baseline Agent ::: Training Strategy ::: Question Answering",
"Experimental Results",
"Experimental Results ::: Mastering Training Games",
"Experimental Results ::: Generalizing to Test Set",
"Discussion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"6704ca0608ed345578616637b277f39d9fff4c98"
],
"answer": [
{
"evidence": [
"The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"01735ec7a3f9a56955a8d3c9badc04bbd753771f"
],
"answer": [
{
"evidence": [
"We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1. Both original datasets share similar properties. Specifically, every data-point consists of a tuple, $\\lbrace p, q, a\\rbrace $, where $p$ represents a paragraph, $q$ a question, and $a$ is the answer. The answer is a word span defined by head and tail positions in $p$. NewsQA is more difficult than SQuAD because it has a larger vocabulary, more difficult questions, and longer source documents.",
"iMRC: Making MRC Interactive ::: Evaluation Metric",
"Since iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use $\\text{F}_1$ score to compare predicted answers against ground-truth, as in previous works. When there exist multiple ground-truth answers, we report the max $\\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks where we report the agent's test performance corresponding to its best validation performance ."
],
"extractive_spans": [],
"free_form_answer": "They evaluate F1 score and agent's test performance on their own built interactive datasets (iSQuAD and iNewsQA)",
"highlighted_evidence": [
"We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1.",
"iMRC: Making MRC Interactive ::: Evaluation Metric\nSince iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use $\\text{F}_1$ score to compare predicted answers against ground-truth, as in previous works. When there exist multiple ground-truth answers, we report the max $\\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks where we report the agent's test performance corresponding to its best validation performance ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"33b2d26d064251c196238b0c3c455b208680f5fc"
],
"answer": [
{
"evidence": [
"The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)."
],
"extractive_spans": [
"Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)."
],
"free_form_answer": "",
"highlighted_evidence": [
"The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7f1096ea26f2374fc33f07acc67d80aeb7004dc2"
],
"answer": [
{
"evidence": [
"Information Gathering: At step $t$ of the information gathering phase, the agent can issue one of the following four actions to interact with the paragraph $p$, where $p$ consists of $n$ sentences and where the current observation corresponds to sentence $s_k,~1 \\le k \\le n$:",
"previous: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_n & \\text{if $k = 1$,}\\\\ s_{k-1} & \\text{otherwise;} \\end{array}\\right.} $",
"next: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_1 & \\text{if $k = n$,}\\\\ s_{k+1} & \\text{otherwise;} \\end{array}\\right.} $",
"Ctrl+F $<$query$>$: jump to the sentence that contains the next occurrence of “query”;",
"stop: terminate information gathering phase."
],
"extractive_spans": [
"previous",
"next",
"Ctrl+F $<$query$>$",
"stop"
],
"free_form_answer": "",
"highlighted_evidence": [
"Information Gathering: At step $t$ of the information gathering phase, the agent can issue one of the following four actions to interact with the paragraph $p$, where $p$ consists of $n$ sentences and where the current observation corresponds to sentence $s_k,~1 \\le k \\le n$:\n\nprevious: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_n & \\text{if $k = 1$,}\\\\ s_{k-1} & \\text{otherwise;} \\end{array}\\right.} $\n\nnext: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_1 & \\text{if $k = n$,}\\\\ s_{k+1} & \\text{otherwise;} \\end{array}\\right.} $\n\nCtrl+F $<$query$>$: jump to the sentence that contains the next occurrence of “query”;\n\nstop: terminate information gathering phase."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"Do they provide decision sequences as supervision while training models?",
"What are the models evaluated on?",
"How do they train models in this setup?",
"What commands does their setup provide to models seeking information?"
],
"question_id": [
"1ef5fc4473105f1c72b4d35cf93d312736833d3d",
"5f9bd99a598a4bbeb9d2ac46082bd3302e961a0f",
"b2fab9ffbcf1d6ec6d18a05aeb6e3ab9a4dbf2ae",
"e9cf1b91f06baec79eb6ddfd91fc5d434889f652"
],
"question_writer": [
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668"
],
"search_query": [
"information seeking",
"information seeking",
"information seeking",
"information seeking"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Examples of interactive machine reading comprehension behavior. In the upper example, the agent has no memory of past observations, and thus it answers questions only with observation string at current step. In the lower example, the agent is able to use its memory to find answers.",
"Figure 1: A demonstration of the proposed iMRC pipeline, in which the QA-DQN agent is illustrated in shaddow. At a game step t, QA-DQN encodes the question and text observation into hidden representations Mt. An action generator takes Mt as input to generate commands to interact with the environment. If the agent generates stop at this game step, Mt is used to answer question by a question answerer. Otherwise, the iMRC environment will provide new text observation in response of the generated action.",
"Table 2: Statistics of iSQuAD and iNewsQA.",
"Figure 2: 4-action setting: QA-DQN’s F1 scores during training on iSQuAD and iNewsQA datasets with different Ctrl+F strategies and cache sizes. next and previous commands are available.",
"Table 3: Hyper-parameter setup for action generation.",
"Table 4: Experimental results on test set. #Action 4 denotes the settings as described in the action space section, #Action 2 indicates the setting where only Ctrl+F and stop are available. F1info indicates an agent’s F1 score iff sufficient information is in its observation.",
"Figure 3: 2-action setting: QA-DQN’s F1 scores during training on iSQuAD and iNewsQA datasets when using different Ctrl+F strategies and cache sizes. Note that next and previous are disabled.",
"Figure 4: Performance on iSQuAD training set. next and previous actions are available.",
"Figure 6: Performance on iSQuAD training set. next and previous actions are unavailable.",
"Figure 5: Performance on iSQuAD validation set. next and previous actions are available.",
"Figure 7: Performance on iSQuAD validation set. next and previous actions are unavailable.",
"Figure 8: Performance on iNewsQA training set. next and previous actions are available.",
"Figure 10: Performance on iNewsQA training set. next and previous actions are unavailable.",
"Figure 9: Performance on iNewsQA validation set. next and previous actions are available.",
"Figure 11: Performance on iNewsQA validation set. next and previous actions are unavailable."
],
"file": [
"1-Table1-1.png",
"2-Figure1-1.png",
"3-Table2-1.png",
"5-Figure2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Figure3-1.png",
"9-Figure4-1.png",
"9-Figure6-1.png",
"9-Figure5-1.png",
"9-Figure7-1.png",
"10-Figure8-1.png",
"10-Figure10-1.png",
"10-Figure9-1.png",
"10-Figure11-1.png"
]
} | [
"What are the models evaluated on?"
] | [
[
"1908.10449-iMRC: Making MRC Interactive-0",
"1908.10449-iMRC: Making MRC Interactive ::: Evaluation Metric-0"
]
] | [
"They evaluate F1 score and agent's test performance on their own built interactive datasets (iSQuAD and iNewsQA)"
] | 83 |
1910.03814 | Exploring Hate Speech Detection in Multimodal Publications | In this work we target the problem of hate speech detection in multimodal publications formed by a text and an image. We gather and annotate a large scale dataset from Twitter, MMHS150K, and propose different models that jointly analyze textual and visual information for hate speech detection, comparing them with unimodal detection. We provide quantitative and qualitative results and analyze the challenges of the proposed task. We find that, even though images are useful for the hate speech detection task, current multimodal models cannot outperform models analyzing only text. We discuss why and open the field and the dataset for further research. | {
"paragraphs": [
[
"Social Media platforms such as Facebook, Twitter or Reddit have empowered individuals' voices and facilitated freedom of expression. However they have also been a breeding ground for hate speech and other types of online harassment. Hate speech is defined in legal literature as speech (or any form of expression) that expresses (or seeks to promote, or has the capacity to increase) hatred against a person or a group of people because of a characteristic they share, or a group to which they belong BIBREF0. Twitter develops this definition in its hateful conduct policy as violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.",
"In this work we focus on hate speech detection. Due to the inherent complexity of this task, it is important to distinguish hate speech from other types of online harassment. In particular, although it might be offensive to many people, the sole presence of insulting terms does not itself signify or convey hate speech. And, the other way around, hate speech may denigrate or threaten an individual or a group of people without the use of any profanities. People from the african-american community, for example, often use the term nigga online, in everyday language, without malicious intentions to refer to folks within their community, and the word cunt is often used in non hate speech publications and without any sexist purpose. The goal of this work is not to discuss if racial slur, such as nigga, should be pursued. The goal is to distinguish between publications using offensive terms and publications attacking communities, which we call hate speech.",
"Modern social media content usually include images and text. Some of these multimodal publications are only hate speech because of the combination of the text with a certain image. That is because, as we have stated, the presence of offensive terms does not itself signify hate speech, and the presence of hate speech is often determined by the context of a publication. Moreover, users authoring hate speech tend to intentionally construct publications where the text is not enough to determine they are hate speech. This happens especially in Twitter, where multimodal tweets are formed by an image and a short text, which in many cases is not enough to judge them. In those cases, the image might give extra context to make a proper judgement. Fig. FIGREF5 shows some of such examples in MMHS150K.",
"The contributions of this work are as follows:",
"[noitemsep,leftmargin=*]",
"We propose the novel task of hate speech detection in multimodal publications, collect, annotate and publish a large scale dataset.",
"We evaluate state of the art multimodal models on this specific task and compare their performance with unimodal detection. Even though images are proved to be useful for hate speech detection, the proposed multimodal models do not outperform unimodal textual models.",
"We study the challenges of the proposed task, and open the field for future research."
],
[
"The literature on detecting hate speech on online textual publications is extensive. Schmidt and Wiegand BIBREF1 recently provided a good survey of it, where they review the terminology used over time, the features used, the existing datasets and the different approaches. However, the field lacks a consistent dataset and evaluation protocol to compare proposed methods. Saleem et al. BIBREF2 compare different classification methods detecting hate speech in Reddit and other forums. Wassem and Hovy BIBREF3 worked on hate speech detection on twitter, published a manually annotated dataset and studied its hate distribution. Later Wassem BIBREF4 extended the previous published dataset and compared amateur and expert annotations, concluding that amateur annotators are more likely than expert annotators to label items as hate speech. Park and Fung BIBREF5 worked on Wassem datasets and proposed a classification method using a CNN over Word2Vec BIBREF6 word embeddings, showing also classification results on racism and sexism hate sub-classes. Davidson et al. BIBREF7 also worked on hate speech detection on twitter, publishing another manually annotated dataset. They test different classifiers such as SVMs and decision trees and provide a performance comparison. Malmasi and Zampieri BIBREF8 worked on Davidson's dataset improving his results using more elaborated features. ElSherief et al. BIBREF9 studied hate speech on twitter and selected the most frequent terms in hate tweets based on Hatebase, a hate expression repository. They propose a big hate dataset but it lacks manual annotations, and all the tweets containing certain hate expressions are considered hate speech. Zhang et al. BIBREF10 recently proposed a more sophisticated approach for hate speech detection, using a CNN and a GRU BIBREF11 over Word2Vec BIBREF6 word embeddings. They show experiments in different datasets outperforming previous methods. Next, we summarize existing hate speech datasets:",
"[noitemsep,leftmargin=*]",
"RM BIBREF10: Formed by $2,435$ tweets discussing Refugees and Muslims, annotated as hate or non-hate.",
"DT BIBREF7: Formed by $24,783$ tweets annotated as hate, offensive language or neither. In our work, offensive language tweets are considered as non-hate.",
"WZ-LS BIBREF5: A combination of Wassem datasets BIBREF4, BIBREF3 labeled as racism, sexism, neither or both that make a total of $18,624$ tweets.",
"Semi-Supervised BIBREF9: Contains $27,330$ general hate speech Twitter tweets crawled in a semi-supervised manner.",
"Although often modern social media publications include images, not too many contributions exist that exploit visual information. Zhong et al. BIBREF12 worked on classifying Instagram images as potential cyberbullying targets, exploiting both the image content, the image caption and the comments. However, their visual information processing is limited to the use of features extracted by a pre-trained CNN, the use of which does not achieve any improvement. Hosseinmardi et al. BIBREF13 also address the problem of detecting cyberbullying incidents on Instagram exploiting both textual and image content. But, again, their visual information processing is limited to use the features of a pre-trained CNN, and the improvement when using visual features on cyberbullying classification is only of 0.01%."
],
[
"A typical task in multimodal visual and textual analysis is to learn an alignment between feature spaces. To do that, usually a CNN and a RNN are trained jointly to learn a joint embedding space from aligned multimodal data. This approach is applied in tasks such as image captioning BIBREF14, BIBREF15 and multimodal image retrieval BIBREF16, BIBREF17. On the other hand, instead of explicitly learning an alignment between two spaces, the goal of Visual Question Answering (VQA) is to merge both data modalities in order to decide which answer is correct. This problem requires modeling very precise correlations between the image and the question representations. The VQA task requirements are similar to our hate speech detection problem in multimodal publications, where we have a visual and a textual input and we need to combine both sources of information to understand the global context and make a decision. We thus take inspiration from the VQA literature for the tested models. Early VQA methods BIBREF18 fuse textual and visual information by feature concatenation. Later methods, such as Multimodal Compact Bilinear pooling BIBREF19, utilize bilinear pooling to learn multimodal features. An important limitation of these methods is that the multimodal features are fused in the latter model stage, so the textual and visual relationships are modeled only in the last layers. Another limitation is that the visual features are obtained by representing the output of the CNN as a one dimensional vector, which losses the spatial information of the input images. In a recent work, Gao et al. BIBREF20 propose a feature fusion scheme to overcome these limitations. They learn convolution kernels from the textual information –which they call question-guided kernels– and convolve them with the visual information in an earlier stage to get the multimodal features. Margffoy-Tuay et al. BIBREF21 use a similar approach to combine visual and textual information, but they address a different task: instance segmentation guided by natural language queries. We inspire in these latest feature fusion works to build the models for hate speech detection."
],
[
"Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exists. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, an important amount of their hate tweets is no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and made it available online . In this section, we explain the dataset creation steps."
],
[
"We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9. We filtered out retweets, tweets containing less than three words and tweets containing porn related terms. From that selection, we kept the ones that included images and downloaded them. Twitter applies hate speech filters and other kinds of content control based on its policy, although the supervision is based on users' reports. Therefore, as we are gathering tweets from real-time posting, the content we get has not yet passed any filter."
],
[
"We aim to create a multimodal hate speech database where all the instances contain visual and textual information that we can later process to determine if a tweet is hate speech or not. But a considerable amount of the images of the selected tweets contain only textual information, such as screenshots of other tweets. To ensure that all the dataset instances contain both visual and textual information, we remove those tweets. To do that, we use TextFCN BIBREF22, BIBREF23 , a Fully Convolutional Network that produces a pixel wise text probability map of an image. We set empirical thresholds to discard images that have a substantial total text probability, filtering out $23\\%$ of the collected tweets."
],
[
"We annotate the gathered tweets using the crowdsourcing platform Amazon Mechanical Turk. There, we give the workers the definition of hate speech and show some examples to make the task clearer. We then show the tweet text and image and we ask them to classify it in one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion based attacks or attacks to other communities. Each one of the $150,000$ tweets is labeled by 3 different workers to palliate discrepancies among workers.",
"We received a lot of valuable feedback from the annotators. Most of them had understood the task correctly, but they were worried because of its subjectivity. This is indeed a subjective task, highly dependent on the annotator convictions and sensitivity. However, we expect to get cleaner annotations the more strong the attack is, which are the publications we are more interested on detecting. We also detected that several users annotate tweets for hate speech just by spotting slur. As already said previously, just the use of particular words can be offensive to many people, but this is not the task we aim to solve. We have not included in our experiments those hits that were made in less than 3 seconds, understanding that it takes more time to grasp the multimodal context and make a decision.",
"We do a majority voting between the three annotations to get the tweets category. At the end, we obtain $112,845$ not hate tweets and $36,978$ hate tweets. The latest are divided in $11,925$ racist, $3,495$ sexist, $3,870$ homophobic, 163 religion-based hate and $5,811$ other hate tweets (Fig. FIGREF17). In this work, we do not use hate sub-categories, and stick to the hate / not hate split. We separate balanced validation ($5,000$) and test ($10,000$) sets. The remaining tweets are used for training.",
"We also experimented using hate scores for each tweet computed given the different votes by the three annotators instead of binary labels. The results did not present significant differences to those shown in the experimental part of this work, but the raw annotations will be published nonetheless for further research.",
"As far as we know, this dataset is the biggest hate speech dataset to date, and the first multimodal hate speech dataset. One of its challenges is to distinguish between tweets using the same key offensive words that constitute or not an attack to a community (hate speech). Fig. FIGREF18 shows the percentage of hate and not hate tweets of the top keywords."
],
[
"All images are resized such that their shortest size has 500 pixels. During training, online data augmentation is applied as random cropping of $299\\times 299$ patches and mirroring. We use a CNN as the image features extractor which is an Imagenet BIBREF24 pre-trained Google Inception v3 architecture BIBREF25. The fine-tuning process of the Inception v3 layers aims to modify its weights to extract the features that, combined with the textual information, are optimal for hate speech detection."
],
[
"We train a single layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations. Since our dataset is not big enough to train a GloVe word embedding model, we used a pre-trained model that has been trained in two billion tweets. This ensures that the model will be able to produce word embeddings for slang and other words typically used in Twitter. To process the tweets text before generating the word embeddings, we use the same pipeline as the model authors, which includes generating symbols to encode Twitter special interactions such as user mentions (@user) or hashtags (#hashtag). To encode the tweet text and input it later to multimodal models, we use the LSTM hidden state after processing the last tweet word. Since the LSTM has been trained for hate speech classification, it extracts the most useful information for this task from the text, which is encoded in the hidden state after inputting the last tweet word."
],
[
"The text in the image can also contain important information to decide if a publication is hate speech or not, so we extract it and also input it to our model. To do so, we use Google Vision API Text Detection module BIBREF27. We input the tweet text and the text from the image separately to the multimodal models, so it might learn different relations between them and between them and the image. For instance, the model could learn to relate the image text with the area in the image where the text appears, so it could learn to interpret the text in a different way depending on the location where it is written in the image. The image text is also encoded by the LSTM as the hidden state after processing its last word."
],
[
"The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any)."
],
[
"The image is fed to the Inception v3 architecture and the 2048 dimensional feature vector after the last average pooling layer is used as the visual representation. This vector is then concatenated with the 150 dimension vectors of the LSTM last word hidden states of the image text and the tweet text, resulting in a 2348 feature vector. This vector is then processed by three fully connected layers of decreasing dimensionality $(2348, 1024, 512)$ with following corresponding batch normalization and ReLu layers until the dimensions are reduced to two, the number of classes, in the last classification layer. The FCM architecture is illustrated in Fig. FIGREF26."
],
[
"Instead of using the latest feature vector before classification of the Inception v3 as the visual representation, in the SCM we use the $8\\times 8\\times 2048$ feature map after the last Inception module. Then we concatenate the 150 dimension vectors encoding the tweet text and the tweet image text at each spatial location of that feature map. The resulting multimodal feature map is processed by two Inception-E blocks BIBREF28. After that, dropout and average pooling are applied and, as in the FCM model, three fully connected layers are used to reduce the dimensionality until the classification layer."
],
[
"The TKM design, inspired by BIBREF20 and BIBREF21, aims to capture interactions between the two modalities more expressively than concatenation models. As in SCM we use the $8\\times 8\\times 2048$ feature map after the last Inception module as the visual representation. From the 150 dimension vector encoding the tweet text, we learn $K_t$ text dependent kernels using independent fully connected layers that are trained together with the rest of the model. The resulting $K_t$ text dependent kernels will have dimensionality of $1\\times 1\\times 2048$. We do the same with the feature vector encoding the image text, learning $K_{it}$ kernels. The textual kernels are convolved with the visual feature map in the channel dimension at each spatial location, resulting in a $8\\times 8\\times (K_i+K_{it})$ multimodal feature map, and batch normalization is applied. Then, as in the SCM, the 150 dimension vectors encoding the tweet text and the tweet image text are concatenated at each spatial dimension. The rest of the architecture is the same as in SCM: two Inception-E blocks, dropout, average pooling and three fully connected layers until the classification layer. The number of tweet textual kernels $K_t$ and tweet image textual kernels $K_it$ is set to $K_t = 10$ and $K_it = 5$. The TKM architecture is illustrated in Fig. FIGREF29."
],
[
"We train the multimodal models with a Cross-Entropy loss with Softmax activations and an ADAM optimizer with an initial learning rate of $1e-4$. Our dataset suffers from a high class imbalance, so we weight the contribution to the loss of the samples to totally compensate for it. One of the goals of this work is to explore how every one of the inputs contributes to the classification and to prove that the proposed model can learn concurrences between visual and textual data useful to improve the hate speech classification results on multimodal data. To do that we train different models where all or only some inputs are available. When an input is not available, we set it to zeros, and we do the same when an image has no text."
],
[
"Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davison method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models.",
"First, notice that given the subjectivity of the task and the discrepancies between annotators, getting optimal scores in the evaluation metrics is virtually impossible. However, a system with relatively low metric scores can still be very useful for hate speech detection in a real application: it will fire on publications for which most annotators agree they are hate, which are often the stronger attacks. The proposed LSTM to detect hate speech when only text is available, gets similar results as the method presented in BIBREF7, which we trained with MMHS150K and the same splits. However, more than substantially advancing the state of the art on hate speech detection in textual publications, our key purpose in this work is to introduce and work on its detection on multimodal publications. We use LSTM because it provides a strong representation of the tweet texts.",
"The FCM trained only with images gets decent results, considering that in many publications the images might not give any useful information for the task. Fig. FIGREF33 shows some representative examples of the top hate and not hate scored images of this model. Many hate tweets are accompanied by demeaning nudity images, being sexist or homophobic. Other racist tweets are accompanied by images caricaturing black people. Finally, MEMES are also typically used in hate speech publications. The top scored images for not hate are portraits of people belonging to minorities. This is due to the use of slur inside these communities without an offensive intention, such as the word nigga inside the afro-american community or the word dyke inside the lesbian community. These results show that images can be effectively used to discriminate between offensive and non-offensive uses of those words.",
"Despite the model trained only with images proves that they are useful for hate speech detection, the proposed multimodal models are not able to improve the detection compared to the textual models. Besides the different architectures, we have tried different training strategies, such as initializing the CNN weights with a model already trained solely with MMHS150K images or using dropout to force the multimodal models to use the visual information. Eventually, though, these models end up using almost only the text input for the prediction and producing very similar results to those of the textual models. The proposed multimodal models, such as TKM, have shown good performance in other tasks, such as VQA. Next, we analyze why they do not perform well in this task and with this data:",
"[noitemsep,leftmargin=*]",
"Noisy data. A major challenge of this task is the discrepancy between annotations due to subjective judgement. Although this affects also detection using only text, its repercussion is bigger in more complex tasks, such as detection using images or multimodal detection.",
"Complexity and diversity of multimodal relations. Hate speech multimodal publications employ a lot of background knowledge which makes the relations between visual and textual elements they use very complex and diverse, and therefore difficult to learn by a neural network.",
"Small set of multimodal examples. Fig. FIGREF5 shows some of the challenging multimodal hate examples that we aimed to detect. But although we have collected a big dataset of $150K$ tweets, the subset of multimodal hate there is still too small to learn the complex multimodal relations needed to identify multimodal hate."
],
[
"In this work we have explored the task of hate speech detection on multimodal publications. We have created MMHS150K, to our knowledge the biggest available hate speech dataset, and the first one composed of multimodal data, namely tweets formed by image and text. We have trained different textual, visual and multimodal models with that data, and found out that, despite the fact that images are useful for hate speech detection, the multimodal models do not outperform the textual models. Finally, we have analyzed the challenges of the proposed task and dataset. Given that most of the content in Social Media nowadays is multimodal, we truly believe on the importance of pushing forward this research. The code used in this work is available in ."
]
],
"section_name": [
"Introduction",
"Related Work ::: Hate Speech Detection",
"Related Work ::: Visual and Textual Data Fusion",
"The MMHS150K dataset",
"The MMHS150K dataset ::: Tweets Gathering",
"The MMHS150K dataset ::: Textual Image Filtering",
"The MMHS150K dataset ::: Annotation",
"Methodology ::: Unimodal Treatment ::: Images.",
"Methodology ::: Unimodal Treatment ::: Tweet Text.",
"Methodology ::: Unimodal Treatment ::: Image Text.",
"Methodology ::: Multimodal Architectures",
"Methodology ::: Multimodal Architectures ::: Feature Concatenation Model (FCM)",
"Methodology ::: Multimodal Architectures ::: Spatial Concatenation Model (SCM)",
"Methodology ::: Multimodal Architectures ::: Textual Kernels Model (TKM)",
"Methodology ::: Multimodal Architectures ::: Training",
"Results",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"e759a4245a5ac52632d3fbc424192e9e72b16350"
],
"answer": [
{
"evidence": [
"The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any)."
],
"extractive_spans": [
"Feature Concatenation Model (FCM)",
"Spatial Concatenation Model (SCM)",
"Textual Kernels Model (TKM)"
],
"free_form_answer": "",
"highlighted_evidence": [
"To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"594bca16b30968bbe0e3b0f68318f1788f732491"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"374b00290fe0a9a6f8f123d6dc04c1c2cb7ce619"
],
"answer": [
{
"evidence": [
"Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exists. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, an important amount of their hate tweets is no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and made it available online . In this section, we explain the dataset creation steps."
],
"extractive_spans": [
" $150,000$ tweets"
],
"free_form_answer": "",
"highlighted_evidence": [
"We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"01747abc86fa3933552919b030e74fc9d6515178"
],
"answer": [
{
"evidence": [
"Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davison method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models.",
"FLOAT SELECTED: Table 1. Performance of the proposed models, the LSTM and random scores. The Inputs column indicate which inputs are available at training and testing time."
],
"extractive_spans": [],
"free_form_answer": "Unimodal LSTM vs Best Multimodal (FCM)\n- F score: 0.703 vs 0.704\n- AUC: 0.732 vs 0.734 \n- Mean Accuracy: 68.3 vs 68.4 ",
"highlighted_evidence": [
"Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available.",
"FLOAT SELECTED: Table 1. Performance of the proposed models, the LSTM and random scores. The Inputs column indicate which inputs are available at training and testing time."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"06bfc3c0173c2bf9e8f0e7a34d8857be185f1310"
],
"answer": [
{
"evidence": [
"Despite the model trained only with images proves that they are useful for hate speech detection, the proposed multimodal models are not able to improve the detection compared to the textual models. Besides the different architectures, we have tried different training strategies, such as initializing the CNN weights with a model already trained solely with MMHS150K images or using dropout to force the multimodal models to use the visual information. Eventually, though, these models end up using almost only the text input for the prediction and producing very similar results to those of the textual models. The proposed multimodal models, such as TKM, have shown good performance in other tasks, such as VQA. Next, we analyze why they do not perform well in this task and with this data:",
"[noitemsep,leftmargin=*]",
"Noisy data. A major challenge of this task is the discrepancy between annotations due to subjective judgement. Although this affects also detection using only text, its repercussion is bigger in more complex tasks, such as detection using images or multimodal detection.",
"Complexity and diversity of multimodal relations. Hate speech multimodal publications employ a lot of background knowledge which makes the relations between visual and textual elements they use very complex and diverse, and therefore difficult to learn by a neural network.",
"Small set of multimodal examples. Fig. FIGREF5 shows some of the challenging multimodal hate examples that we aimed to detect. But although we have collected a big dataset of $150K$ tweets, the subset of multimodal hate there is still too small to learn the complex multimodal relations needed to identify multimodal hate."
],
"extractive_spans": [
"Noisy data",
"Complexity and diversity of multimodal relations",
"Small set of multimodal examples"
],
"free_form_answer": "",
"highlighted_evidence": [
"Next, we analyze why they do not perform well in this task and with this data:\n\n[noitemsep,leftmargin=*]\n\nNoisy data. A major challenge of this task is the discrepancy between annotations due to subjective judgement. Although this affects also detection using only text, its repercussion is bigger in more complex tasks, such as detection using images or multimodal detection.\n\nComplexity and diversity of multimodal relations. Hate speech multimodal publications employ a lot of background knowledge which makes the relations between visual and textual elements they use very complex and diverse, and therefore difficult to learn by a neural network.\n\nSmall set of multimodal examples. Fig. FIGREF5 shows some of the challenging multimodal hate examples that we aimed to detect. But although we have collected a big dataset of $150K$ tweets, the subset of multimodal hate there is still too small to learn the complex multimodal relations needed to identify multimodal hate."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e054bc12188dfd93e3491fde76dc37247f91051d"
],
"answer": [
{
"evidence": [
"Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davison method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models."
],
"extractive_spans": [
"F-score",
"Area Under the ROC Curve (AUC)",
"mean accuracy (ACC)",
"Precision vs Recall plot",
"ROC curve (which plots the True Positive Rate vs the False Positive Rate)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available.",
"Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e2962aa33290adc42fdac994cdf8f77b90532666"
],
"answer": [
{
"evidence": [
"We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9. We filtered out retweets, tweets containing less than three words and tweets containing porn related terms. From that selection, we kept the ones that included images and downloaded them. Twitter applies hate speech filters and other kinds of content control based on its policy, although the supervision is based on users' reports. Therefore, as we are gathering tweets from real-time posting, the content we get has not yet passed any filter."
],
"extractive_spans": [
"Twitter API"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8b3d8e719caa03403c1779308c410d875d34f065"
],
"answer": [
{
"evidence": [
"Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exists. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, an important amount of their hate tweets is no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and made it available online . In this section, we explain the dataset creation steps."
],
"extractive_spans": [
"$150,000$ tweets"
],
"free_form_answer": "",
"highlighted_evidence": [
"We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and made it available online . In this section, we explain the dataset creation steps."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"fef9d96af320e166ea80854dd890bffc92143437"
],
"answer": [
{
"evidence": [
"We train a single layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations. Since our dataset is not big enough to train a GloVe word embedding model, we used a pre-trained model that has been trained in two billion tweets. This ensures that the model will be able to produce word embeddings for slang and other words typically used in Twitter. To process the tweets text before generating the word embeddings, we use the same pipeline as the model authors, which includes generating symbols to encode Twitter special interactions such as user mentions (@user) or hashtags (#hashtag). To encode the tweet text and input it later to multimodal models, we use the LSTM hidden state after processing the last tweet word. Since the LSTM has been trained for hate speech classification, it extracts the most useful information for this task from the text, which is encoded in the hidden state after inputting the last tweet word."
],
"extractive_spans": [
" single layer LSTM with a 150-dimensional hidden state for hate / not hate classification"
],
"free_form_answer": "",
"highlighted_evidence": [
"We train a single layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8c2edb685d8f82b80bc60d335a6b53a86b855bd1"
],
"answer": [
{
"evidence": [
"The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any)."
],
"extractive_spans": [
"Feature Concatenation Model (FCM)",
"Spatial Concatenation Model (SCM)",
"Textual Kernels Model (TKM)"
],
"free_form_answer": "",
"highlighted_evidence": [
"To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c528a0f56b7aa65eeafa53dcc5747d171f526879"
],
"answer": [
{
"evidence": [
"We annotate the gathered tweets using the crowdsourcing platform Amazon Mechanical Turk. There, we give the workers the definition of hate speech and show some examples to make the task clearer. We then show the tweet text and image and we ask them to classify it in one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion based attacks or attacks to other communities. Each one of the $150,000$ tweets is labeled by 3 different workers to palliate discrepancies among workers."
],
"extractive_spans": [
"No attacks to any community",
" racist",
"sexist",
"homophobic",
"religion based attacks",
"attacks to other communities"
],
"free_form_answer": "",
"highlighted_evidence": [
"We then show the tweet text and image and we ask them to classify it in one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion based attacks or attacks to other communities."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What models do they propose?",
"Are all tweets in English?",
"How large is the dataset?",
"What is the results of multimodal compared to unimodal models?",
"What is author's opinion on why current multimodal models cannot outperform models analyzing only text?",
"What metrics are used to benchmark the results?",
"How is data collected, manual collection or Twitter api?",
"How many tweats does MMHS150k contains, 150000?",
"What unimodal detection models were used?",
"What different models for multimodal detection were proposed?",
"What annotations are available in the dataset - tweat used hate speach or not?"
],
"question_id": [
"6976296126e4a5c518e6b57de70f8dc8d8fde292",
"53640834d68cf3b86cf735ca31f1c70aa0006b72",
"b2b0321b0aaf58c3aa9050906ade6ef35874c5c1",
"4e9684fd68a242cb354fa6961b0e3b5c35aae4b6",
"2e632eb5ad611bbd16174824de0ae5efe4892daf",
"d1ff6cba8c37e25ac6b261a25ea804d8e58e09c0",
"24c0f3d6170623385283dfda7f2b6ca2c7169238",
"21a9f1cddd7cb65d5d48ec4f33fe2221b2a8f62e",
"a0ef0633d8b4040bf7cdc5e254d8adf82c8eed5e",
"b0799e26152197aeb3aa3b11687a6cc9f6c31011",
"4ce4db7f277a06595014db181342f8cb5cb94626"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1. Tweets from MMHS150K where the visual information adds relevant context for the hate speech detection task.",
"Figure 2. Percentage of tweets per class in MMHS150K.",
"Figure 3. Percentage of hate and not hate tweets for top keywords of MMHS150K.",
"Figure 4. FCM architecture. Image and text representations are concatenated and processed by a set of fully connected layers.",
"Figure 5. TKM architecture. Textual kernels are learnt from the text representations, and convolved with the image representation.",
"Table 1. Performance of the proposed models, the LSTM and random scores. The Inputs column indicate which inputs are available at training and testing time.",
"Figure 7. Top scored examples for hate (top) and for not hate (bottom) for the FCM model trained only with images.",
"Figure 6. Precision vs Recall (left) and ROC curve (True Positive Rate vs False Positive Rate) (right) plots of the proposed models trained with the different inputs, the LSTM and random scores."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"6-Figure4-1.png",
"6-Figure5-1.png",
"7-Table1-1.png",
"7-Figure7-1.png",
"7-Figure6-1.png"
]
} | [
"What is the results of multimodal compared to unimodal models?"
] | [
[
"1910.03814-Results-0",
"1910.03814-7-Table1-1.png"
]
] | [
"Unimodal LSTM vs Best Multimodal (FCM)\n- F score: 0.703 vs 0.704\n- AUC: 0.732 vs 0.734 \n- Mean Accuracy: 68.3 vs 68.4 "
] | 84 |
1701.00185 | Self-Taught Convolutional Neural Networks for Short Text Clustering | Short text clustering is a challenging problem due to its sparseness of text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC^2), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representation in an unsupervised manner. In our framework, the original raw text features are firstly embedded into compact binary codes by using one existing unsupervised dimensionality reduction methods. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, meanwhile the output units are used to fit the pre-trained binary codes in the training process. Finally, we get the optimal clusters by employing K-means to cluster the learned representations. Extensive experimental results demonstrate that the proposed framework is effective, flexible and outperform several popular clustering methods when tested on three public short text datasets. | {
"paragraphs": [
[
"Short text clustering is of great importance due to its various applications, such as user profiling BIBREF0 and recommendation BIBREF1 , for nowaday's social media dataset emerged day by day. However, short text clustering has the data sparsity problem and most words only occur once in each short text BIBREF2 . As a result, the Term Frequency-Inverse Document Frequency (TF-IDF) measure cannot work well in short text setting. In order to address this problem, some researchers work on expanding and enriching the context of data from Wikipedia BIBREF3 or an ontology BIBREF4 . However, these methods involve solid Natural Language Processing (NLP) knowledge and still use high-dimensional representation which may result in a waste of both memory and computation time. Another way to overcome these issues is to explore some sophisticated models to cluster short texts. For example, Yin and Wang BIBREF5 proposed a Dirichlet multinomial mixture model-based approach for short text clustering. Yet how to design an effective model is an open question, and most of these methods directly trained based on Bag-of-Words (BoW) are shallow structures which cannot preserve the accurate semantic similarities.",
"Recently, with the help of word embedding, neural networks demonstrate their great performance in terms of constructing text representation, such as Recursive Neural Network (RecNN) BIBREF6 , BIBREF7 and Recurrent Neural Network (RNN) BIBREF8 . However, RecNN exhibits high time complexity to construct the textual tree, and RNN, using the hidden layer computed at the last word to represent the text, is a biased model where later words are more dominant than earlier words BIBREF9 . Whereas for the non-biased models, the learned representation of one text can be extracted from all the words in the text with non-dominant learned weights. More recently, Convolution Neural Network (CNN), as the most popular non-biased model and applying convolutional filters to capture local features, has achieved a better performance in many NLP applications, such as sentence modeling BIBREF10 , relation classification BIBREF11 , and other traditional NLP tasks BIBREF12 . Most of the previous works focus CNN on solving supervised NLP tasks, while in this paper we aim to explore the power of CNN on one unsupervised NLP task, short text clustering.",
"We systematically introduce a simple yet surprisingly powerful Self-Taught Convolutional neural network framework for Short Text Clustering, called STC INLINEFORM0 . An overall architecture of our proposed approach is illustrated in Figure FIGREF5 . We, inspired by BIBREF13 , BIBREF14 , utilize a self-taught learning framework into our task. In particular, the original raw text features are first embedded into compact binary codes INLINEFORM1 with the help of one traditional unsupervised dimensionality reduction function. Then text matrix INLINEFORM2 projected from word embeddings are fed into CNN model to learn the deep feature representation INLINEFORM3 and the output units are used to fit the pre-trained binary codes INLINEFORM4 . After obtaining the learned features, K-means algorithm is employed on them to cluster texts into clusters INLINEFORM5 . Obviously, we call our approach “self-taught” because the CNN model is learnt from the pseudo labels generated from the previous stage, which is quite different from the term “self-taught” in BIBREF15 . Our main contributions can be summarized as follows:",
"This work is an extension of our conference paper BIBREF16 , and they differ in the following aspects. First, we put forward a general a self-taught CNN framework in this paper which can flexibly couple various semantic features, whereas the conference version can be seen as a specific example of this work. Second, in this paper we use a new short text dataset, Biomedical, in the experiment to verify the effectiveness of our approach. Third, we put much effort on studying the influence of various different semantic features integrated in our self-taught CNN framework, which is not involved in the conference paper.",
"For the purpose of reproducibility, we make the datasets and software used in our experiments publicly available at the website.",
"The remainder of this paper is organized as follows: In Section SECREF2 , we first briefly survey several related works. In Section SECREF3 , we describe the proposed approach STC INLINEFORM0 and implementation details. Experimental results and analyses are presented in Section SECREF4 . Finally, conclusions are given in the last Section."
],
[
"In this section, we review the related work from the following two perspectives: short text clustering and deep neural networks."
],
[
"There have been several studies that attempted to overcome the sparseness of short text representation. One way is to expand and enrich the context of data. For example, Banerjee et al. BIBREF3 proposed a method of improving the accuracy of short text clustering by enriching their representation with additional features from Wikipedia, and Fodeh et al. BIBREF4 incorporate semantic knowledge from an ontology into text clustering. However, these works need solid NLP knowledge and still use high-dimensional representation which may result in a waste of both memory and computation time. Another direction is to map the original features into reduced space, such as Latent Semantic Analysis (LSA) BIBREF17 , Laplacian Eigenmaps (LE) BIBREF18 , and Locality Preserving Indexing (LPI) BIBREF19 . Even some researchers explored some sophisticated models to cluster short texts. For example, Yin and Wang BIBREF5 proposed a Dirichlet multinomial mixture model-based approach for short text clustering. Moreover, some studies even focus the above both two streams. For example, Tang et al. BIBREF20 proposed a novel framework which enrich the text features by employing machine translation and reduce the original features simultaneously through matrix factorization techniques.",
"Despite the above clustering methods can alleviate sparseness of short text representation to some extent, most of them ignore word order in the text and belong to shallow structures which can not fully capture accurate semantic similarities."
],
[
"Recently, there is a revival of interest in DNN and many researchers have concentrated on using Deep Learning to learn features. Hinton and Salakhutdinov BIBREF21 use DAE to learn text representation. During the fine-tuning procedure, they use backpropagation to find codes that are good at reconstructing the word-count vector.",
"More recently, researchers propose to use external corpus to learn a distributed representation for each word, called word embedding BIBREF22 , to improve DNN performance on NLP tasks. The Skip-gram and continuous bag-of-words models of Word2vec BIBREF23 propose a simple single-layer architecture based on the inner product between two word vectors, and Pennington et al. BIBREF24 introduce a new model for word representation, called GloVe, which captures the global corpus statistics.",
"In order to learn the compact representation vectors of sentences, Le and Mikolov BIBREF25 directly extend the previous Word2vec BIBREF23 by predicting words in the sentence, which is named Paragraph Vector (Para2vec). Para2vec is still a shallow window-based method and need a larger corpus to yield better performance. More neural networks utilize word embedding to capture true meaningful syntactic and semantic regularities, such as RecNN BIBREF6 , BIBREF7 and RNN BIBREF8 . However, RecNN exhibits high time complexity to construct the textual tree, and RNN, using the layer computed at the last word to represent the text, is a biased model. Recently, Long Short-Term Memory (LSTM) BIBREF26 and Gated Recurrent Unit (GRU) BIBREF27 , as sophisticated recurrent hidden units of RNN, has presented its advantages in many sequence generation problem, such as machine translation BIBREF28 , speech recognition BIBREF29 , and text conversation BIBREF30 . While, CNN is better to learn non-biased implicit features which has been successfully exploited for many supervised NLP learning tasks as described in Section SECREF1 , and various CNN based variants are proposed in the recent works, such as Dynamic Convolutional Neural Network (DCNN) BIBREF10 , Gated Recursive Convolutional Neural Network (grConv) BIBREF31 and Self-Adaptive Hierarchical Sentence model (AdaSent) BIBREF32 .",
"In the past few days, Visin et al. BIBREF33 have attempted to replace convolutional layer in CNN to learn non-biased features for object recognition with four RNNs, called ReNet, that sweep over lower-layer features in different directions: (1) bottom to top, (2) top to bottom, (3) left to right and (4) right to left. However, ReNet does not outperform state-of-the-art convolutional neural networks on any of the three benchmark datasets, and it is also a supervised learning model for classification. Inspired by Skip-gram of word2vec BIBREF34 , BIBREF23 , Skip-thought model BIBREF35 describe an approach for unsupervised learning of a generic, distributed sentence encoder. Similar as Skip-gram model, Skip-thought model trains an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded sentence and released an off-the-shelf encoder to extract sentence representation. Even some researchers introduce continuous Skip-gram and negative sampling to CNN for learning visual representation in an unsupervised manner BIBREF36 . This paper, from a new perspective, puts forward a general self-taught CNN framework which can flexibly couple various semantic features and achieve a good performance on one unsupervised learning task, short text clustering."
],
[
"Assume that we are given a dataset of INLINEFORM0 training texts denoted as: INLINEFORM1 , where INLINEFORM2 is the dimensionality of the original BoW representation. Denote its tag set as INLINEFORM3 and the pre-trained word embedding set as INLINEFORM4 , where INLINEFORM5 is the dimensionality of word vectors and INLINEFORM6 is the vocabulary size. In order to learn the INLINEFORM7 -dimensional deep feature representation INLINEFORM8 from CNN in an unsupervised manner, some unsupervised dimensionality reduction methods INLINEFORM9 are employed to guide the learning of CNN model. Our goal is to cluster these texts INLINEFORM10 into clusters INLINEFORM11 based on the learned deep feature representation while preserving the semantic consistency.",
"As depicted in Figure FIGREF5 , the proposed framework consist of three components, deep convolutional neural network (CNN), unsupervised dimensionality reduction function and K-means module. In the rest sections, we first present the first two components respectively, and then give the trainable parameters and the objective function to learn the deep feature representation. Finally, the last section describe how to perform clustering on the learned features."
],
[
"In this section, we briefly review one popular deep convolutional neural network, Dynamic Convolutional Neural Network (DCNN) BIBREF10 as an instance of CNN in the following sections, which as the foundation of our proposed method has been successfully proposed for the completely supervised learning task, text classification.",
"Taking a neural network with two convolutional layers in Figure FIGREF9 as an example, the network transforms raw input text to a powerful representation. Particularly, each raw text vector INLINEFORM0 is projected into a matrix representation INLINEFORM1 by looking up a word embedding INLINEFORM2 , where INLINEFORM3 is the length of one text. We also let INLINEFORM4 and INLINEFORM5 denote the weights of the neural networks. The network defines a transformation INLINEFORM6 INLINEFORM7 which transforms an input raw text INLINEFORM8 to a INLINEFORM9 -dimensional deep representation INLINEFORM10 . There are three basic operations described as follows:",
"Wide one-dimensional convolution This operation INLINEFORM0 is applied to an individual row of the sentence matrix INLINEFORM1 , and yields a resulting matrix INLINEFORM2 , where INLINEFORM3 is the width of convolutional filter.",
"Folding In this operation, every two rows in a feature map are simply summed component-wisely. For a map of INLINEFORM0 rows, folding returns a map of INLINEFORM1 rows, thus halving the size of the representation and yielding a matrix feature INLINEFORM2 . Note that folding operation does not introduce any additional parameters.",
"Dynamic INLINEFORM0 -max pooling Assuming the pooling parameter as INLINEFORM1 , INLINEFORM2 -max pooling selects the sub-matrix INLINEFORM3 of the INLINEFORM4 highest values in each row of the matrix INLINEFORM5 . For dynamic INLINEFORM6 -max pooling, the pooling parameter INLINEFORM7 is dynamically selected in order to allow for a smooth extraction of higher-order and longer-range features BIBREF10 . Given a fixed pooling parameter INLINEFORM8 for the topmost convolutional layer, the parameter INLINEFORM9 of INLINEFORM10 -max pooling in the INLINEFORM11 -th convolutional layer can be computed as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is the total number of convolutional layers in the network."
],
[
"As described in Figure FIGREF5 , the dimensionality reduction function is defined as follows: DISPLAYFORM0 ",
"where, INLINEFORM0 are the INLINEFORM1 -dimensional reduced latent space representations. Here, we take four popular dimensionality reduction methods as examples in our framework.",
"Average Embedding (AE): This method directly averages the word embeddings which are respectively weighted with TF and TF-IDF. Huang et al. BIBREF37 used this strategy as the global context in their task, and Socher et al. BIBREF7 and Lai et al. BIBREF9 used this method for text classification. The weighted average of all word vectors in one text can be computed as follows: DISPLAYFORM0 ",
"where INLINEFORM0 can be any weighting function that captures the importance of word INLINEFORM1 in the text INLINEFORM2 .",
"Latent Semantic Analysis (LSA): LSA BIBREF17 is the most popular global matrix factorization method, which applies a dimension reducing linear projection, Singular Value Decomposition (SVD), of the corresponding term/document matrix. Suppose the rank of INLINEFORM0 is INLINEFORM1 , LSA decompose INLINEFORM2 into the product of three other matrices: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are the singular values of INLINEFORM2 , INLINEFORM3 is a set of left singular vectors and INLINEFORM4 is a set of right singular vectors. LSA uses the top INLINEFORM5 vectors in INLINEFORM6 as the transformation matrix to embed the original text features into a INLINEFORM7 -dimensional subspace INLINEFORM8 BIBREF17 .",
"Laplacian Eigenmaps (LE): The top eigenvectors of graph Laplacian, defined on the similarity matrix of texts, are used in the method, which can discover the manifold structure of the text space BIBREF18 . In order to avoid storing the dense similarity matrix, many approximation techniques are proposed to reduce the memory usage and computational complexity for LE. There are two representative approximation methods, sparse similarity matrix and Nystr INLINEFORM0 m approximation. Following previous studies BIBREF38 , BIBREF13 , we select the former technique to construct the INLINEFORM1 local similarity matrix INLINEFORM2 by using heat kernel as follows: DISPLAYFORM0 ",
"where, INLINEFORM0 is a tuning parameter (default is 1) and INLINEFORM1 represents the set of INLINEFORM2 -nearest-neighbors of INLINEFORM3 . By introducing a diagonal INLINEFORM4 matrix INLINEFORM5 whose entries are given by INLINEFORM6 , the graph Laplacian INLINEFORM7 can be computed by ( INLINEFORM8 ). The optimal INLINEFORM9 real-valued matrix INLINEFORM10 can be obtained by solving the following objective function: DISPLAYFORM0 ",
"where INLINEFORM0 is the trace function, INLINEFORM1 requires the different dimensions to be uncorrelated, and INLINEFORM2 requires each dimension to achieve equal probability as positive or negative).",
"Locality Preserving Indexing (LPI): This method extends LE to deal with unseen texts by approximating the linear function INLINEFORM0 BIBREF13 , and the subspace vectors are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the Riemannian manifold BIBREF19 . Similar as LE, we first construct the local similarity matrix INLINEFORM1 , then the graph Laplacian INLINEFORM2 can be computed by ( INLINEFORM3 ), where INLINEFORM4 measures the local density around INLINEFORM5 and is equal to INLINEFORM6 . Compute the eigenvectors INLINEFORM7 and eigenvalues INLINEFORM8 of the following generalized eigen-problem: DISPLAYFORM0 ",
"The mapping function INLINEFORM0 can be obtained and applied to the unseen data BIBREF38 .",
"All of the above methods claim a better performance in capturing semantic similarity between texts in the reduced latent space representation INLINEFORM0 than in the original representation INLINEFORM1 , while the performance of short text clustering can be further enhanced with the help of our framework, self-taught CNN."
],
[
"The last layer of CNN is an output layer as follows: DISPLAYFORM0 ",
"where, INLINEFORM0 is the deep feature representation, INLINEFORM1 is the output vector and INLINEFORM2 is weight matrix.",
"In order to incorporate the latent semantic features INLINEFORM0 , we first binary the real-valued vectors INLINEFORM1 to the binary codes INLINEFORM2 by setting the threshold to be the media vector INLINEFORM3 . Then, the output vector INLINEFORM4 is used to fit the binary codes INLINEFORM5 via INLINEFORM6 logistic operations as follows: DISPLAYFORM0 ",
"All parameters to be trained are defined as INLINEFORM0 . DISPLAYFORM0 ",
"Given the training text collection INLINEFORM0 , and the pre-trained binary codes INLINEFORM1 , the log likelihood of the parameters can be written down as follows: DISPLAYFORM0 ",
"Following the previous work BIBREF10 , we train the network with mini-batches by back-propagation and perform the gradient-based optimization using the Adagrad update rule BIBREF39 . For regularization, we employ dropout with 50% rate to the penultimate layer BIBREF10 , BIBREF40 ."
],
[
"With the given short texts, we first utilize the trained deep neural network to obtain the semantic representations INLINEFORM0 , and then employ traditional K-means algorithm to perform clustering."
],
[
"We test our proposed approach on three public short text datasets. The summary statistics and semantic topics of these datasets are described in Table TABREF24 and Table TABREF25 .",
"SearchSnippets. This dataset was selected from the results of web search transaction using predefined phrases of 8 different domains by Phan et al. BIBREF41 .",
"StackOverflow. We use the challenge data published in Kaggle.com. The raw dataset consists 3,370,528 samples through July 31st, 2012 to August 14, 2012. In our experiments, we randomly select 20,000 question titles from 20 different tags as in Table TABREF25 .",
"Biomedical. We use the challenge data published in BioASQ's official website. In our experiments, we randomly select 20, 000 paper titles from 20 different MeSH major topics as in Table TABREF25 . As described in Table TABREF24 , the max length of selected paper titles is 53.",
"For these datasets, we randomly select 10% of data as the development set. Since SearchSnippets has been pre-processed by Phan et al. BIBREF41 , we do not further process this dataset. In StackOverflow, texts contain lots of computer terminology, and symbols and capital letters are meaningful, thus we do not do any pre-processed procedures. For Biomedical, we remove the symbols and convert letters into lower case."
],
[
"We use the publicly available word2vec tool to train word embeddings, and the most parameters are set as same as Mikolov et al. BIBREF23 to train word vectors on Google News setting, except of vector dimensionality using 48 and minimize count using 5. For SearchSnippets, we train word vectors on Wikipedia dumps. For StackOverflow, we train word vectors on the whole corpus of the StackOverflow dataset described above which includes the question titles and post contents. For Biomedical, we train word vectors on all titles and abstracts of 2014 training articles. The coverage of these learned vectors on three datasets are listed in Table TABREF32 , and the words not present in the set of pre-trained words are initialized randomly."
],
[
"In our experiment, some widely used text clustering methods are compared with our approach. Besides K-means, Skip-thought Vectors, Recursive Neural Network and Paragraph Vector based clustering methods, four baseline clustering methods are directly based on the popular unsupervised dimensionality reduction methods as described in Section SECREF11 . We further compare our approach with some other non-biased neural networks, such as bidirectional RNN. More details are listed as follows:",
"K-means K-means BIBREF42 on original keyword features which are respectively weighted with term frequency (TF) and term frequency-inverse document frequency (TF-IDF).",
"Skip-thought Vectors (SkipVec) This baseline BIBREF35 gives an off-the-shelf encoder to produce highly generic sentence representations. The encoder is trained using a large collection of novels and provides three encoder modes, that are unidirectional encoder (SkipVec (Uni)) with 2,400 dimensions, bidirectional encoder (SkipVec (Bi)) with 2,400 dimensions and combined encoder (SkipVec (Combine)) with SkipVec (Uni) and SkipVec (Bi) of 2,400 dimensions each. K-means is employed on the these vector representations respectively.",
"Recursive Neural Network (RecNN) In BIBREF6 , the tree structure is firstly greedy approximated via unsupervised recursive autoencoder. Then, semi-supervised recursive autoencoders are used to capture the semantics of texts based on the predicted structure. In order to make this recursive-based method completely unsupervised, we remove the cross-entropy error in the second phrase to learn vector representation and subsequently employ K-means on the learned vectors of the top tree node and the average of all vectors in the tree.",
"Paragraph Vector (Para2vec) K-means on the fixed size feature vectors generated by Paragraph Vector (Para2vec) BIBREF25 which is an unsupervised method to learn distributed representation of words and paragraphs. In our experiments, we use the open source software released by Mesnil et al. BIBREF43 .",
"Average Embedding (AE) K-means on the weighted average vectors of the word embeddings which are respectively weighted with TF and TF-IDF. The dimension of average vectors is equal to and decided by the dimension of word vectors used in our experiments.",
"Latent Semantic Analysis (LSA) K-means on the reduced subspace vectors generated by Singular Value Decomposition (SVD) method. The dimension of subspace is default set to the number of clusters, we also iterate the dimensions ranging from 10:10:200 to get the best performance, that is 10 on SearchSnippets, 20 on StackOverflow and 20 on Biomedical in our experiments.",
"Laplacian Eigenmaps (LE) This baseline, using Laplacian Eigenmaps and subsequently employing K-means algorithm, is well known as spectral clustering BIBREF44 . The dimension of subspace is default set to the number of clusters BIBREF18 , BIBREF38 , we also iterate the dimensions ranging from 10:10:200 to get the best performance, that is 20 on SearchSnippets, 70 on StackOverflow and 30 on Biomedical in our experiments.",
"Locality Preserving Indexing (LPI) This baseline, projecting the texts into a lower dimensional semantic space, can discover both the geometric and discriminating structures of the original feature space BIBREF38 . The dimension of subspace is default set to the number of clusters BIBREF38 , we also iterate the dimensions ranging from 10:10:200 to get the best performance, that is 20 on SearchSnippets, 80 on StackOverflow and 30 on Biomedical in our experiments.",
"bidirectional RNN (bi-RNN) We replace the CNN model in our framework as in Figure FIGREF5 with some bi-RNN models. Particularly, LSTM and GRU units are used in the experiments. In order to generate the fixed-length document representation from the variable-length vector sequences, for both bi-LSTM and bi-GRU based clustering methods, we further utilize three pooling methods: last pooling (using the last hidden state), mean pooling and element-wise max pooling. These pooling methods are respectively used in the previous works BIBREF45 , BIBREF27 , BIBREF46 and BIBREF9 . For regularization, the training gradients of all parameters with an INLINEFORM0 2 norm larger than 40 are clipped to 40, as the previous work BIBREF47 ."
],
[
"The clustering performance is evaluated by comparing the clustering results of texts with the tags/labels provided by the text corpus. Two metrics, the accuracy (ACC) and the normalized mutual information metric (NMI), are used to measure the clustering performance BIBREF38 , BIBREF48 . Given a text INLINEFORM0 , let INLINEFORM1 and INLINEFORM2 be the obtained cluster label and the label provided by the corpus, respectively. Accuracy is defined as: DISPLAYFORM0 ",
"where, INLINEFORM0 is the total number of texts, INLINEFORM1 is the indicator function that equals one if INLINEFORM2 and equals zero otherwise, and INLINEFORM3 is the permutation mapping function that maps each cluster label INLINEFORM4 to the equivalent label from the text data by Hungarian algorithm BIBREF49 .",
"Normalized mutual information BIBREF50 between tag/label set INLINEFORM0 and cluster set INLINEFORM1 is a popular metric used for evaluating clustering tasks. It is defined as follows: DISPLAYFORM0 ",
"where, INLINEFORM0 is the mutual information between INLINEFORM1 and INLINEFORM2 , INLINEFORM3 is entropy and the denominator INLINEFORM4 is used for normalizing the mutual information to be in the range of [0, 1]."
],
[
"The most of parameters are set uniformly for these datasets. Following previous study BIBREF38 , the number of nearest neighbors in Eqn. ( EQREF15 ) is fixed to 15 when constructing the graph structures for LE and LPI. For CNN model, the networks has two convolutional layers. The widths of the convolutional filters are both 3. The value of INLINEFORM0 for the top INLINEFORM1 -max pooling in Eqn. ( EQREF10 ) is 5. The number of feature maps at the first convolutional layer is 12, and 8 feature maps at the second convolutional layer. Both those two convolutional layers are followed by a folding layer. We further set the dimension of word embeddings INLINEFORM2 as 48. Finally, the dimension of the deep feature representation INLINEFORM3 is fixed to 480. Moreover, we set the learning rate INLINEFORM4 as 0.01 and the mini-batch training size as 200. The output size INLINEFORM5 in Eqn. ( EQREF19 ) is set same as the best dimensions of subspace in the baseline method, as described in Section SECREF37 .",
"For initial centroids have significant impact on clustering results when utilizing the K-means algorithms, we repeat K-means for multiple times with random initial centroids (specifically, 100 times for statistical significance) as Huang BIBREF48 . The all subspace vectors are normalized to 1 before applying K-means and the final results reported are the average of 5 trials with all clustering methods on three text datasets."
],
[
"In Table TABREF43 and Table TABREF44 , we report the ACC and NMI performance of our proposed approaches and four baseline methods, K-means, SkipVec, RecNN and Para2vec based clustering methods. Intuitively, we get a general observation that (1) BoW based approaches, including K-means (TF) and K-means (TF-IDF), and SkipVec based approaches perform not well; (2) RecNN based approaches, both RecNN (Ave.) and RecNN (Top+Ave.), do better; (3) Para2vec makes a comparable performance with the most baselines; and (4) the evaluation clearly demonstrate the superiority of our proposed methods STC INLINEFORM0 . It is an expected results. For SkipVec based approaches, the off-the-shelf encoders are trained on the BookCorpus datasets BIBREF51 , and then applied to our datasets to extract the sentence representations. The SkipVec encoders can produce generic sentence representations but may not perform well for specific datasets, in our experiments, StackOverflow and Biomedical datasets consist of many computer terms and medical terms, such as “ASP.NET”, “XML”, “C#”, “serum” and “glycolytic”. When we take a more careful look, we find that RecNN (Top) does poorly, even worse than K-means (TF-IDF). The reason maybe that although recursive neural models introduce tree structure to capture compositional semantics, the vector of the top node mainly captures a biased semantic while the average of all vectors in the tree nodes, such as RecNN (Ave.), can be better to represent sentence level semantic. And we also get another observation that, although our proposed STC INLINEFORM1 -LE and STC INLINEFORM2 -LPI outperform both BoW based and RecNN based approaches across all three datasets, STC INLINEFORM3 -AE and STC INLINEFORM4 -LSA do just exhibit some similar performances as RecNN (Ave.) and RecNN (Top+Ave.) do in the datasets of StackOverflow and Biomedical.",
"We further replace the CNN model in our framework as in Figure FIGREF5 with some other non-biased models, such as bi-LSTM and bi-GRU, and report the results in Table TABREF46 and Table TABREF47 . As an instance, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models. From the results, we can see that bi-GRU and bi-LSTM based clustering methods do equally well, no clear winner, and both achieve great enhancements compared with LPI (best). Compared with these bi-LSTM/bi-GRU based models, the evaluation results still demonstrate the superiority of our approach methods, CNN based clustering model, in the most cases. As the results reported by Visin et al. BIBREF33 , despite bi-directional or multi-directional RNN models perform a good non-biased feature extraction, they yet do not outperform state-of-the-art CNN on some tasks.",
"In order to make clear what factors make our proposed method work, we report the bar chart results of ACC and MNI of our proposed methods and the corresponding baseline methods in Figure FIGREF49 and Figure FIGREF53 . It is clear that, although AE and LSA does well or even better than LE and LPI, especially in dataset of both StackOverflow and Biomedical, STC INLINEFORM0 -LE and STC INLINEFORM1 -LPI achieve a much larger performance enhancements than STC INLINEFORM2 -AE and STC INLINEFORM3 -LSA do. The possible reason is that the information the pseudo supervision used to guide the learning of CNN model that make difference. Especially, for AE case, the input features fed into CNN model and the pseudo supervision employed to guide the learning of CNN model are all come from word embeddings. There are no different semantic features to be used into our proposed method, thus the performance enhancements are limited in STC INLINEFORM4 -AE. For LSA case, as we known, LSA is to make matrix factorization to find the best subspace approximation of the original feature space to minimize the global reconstruction error. And as BIBREF24 , BIBREF52 recently point out that word embeddings trained with word2vec or some variances, is essentially to do an operation of matrix factorization. Therefore, the information between input and the pseudo supervision in CNN is not departed very largely from each other, and the performance enhancements of STC INLINEFORM5 -AE is also not quite satisfactory. For LE and LPI case, as we known that LE extracts the manifold structure of the original feature space, and LPI extracts both geometric and discriminating structure of the original feature space BIBREF38 . We guess that our approach STC INLINEFORM6 -LE and STC INLINEFORM7 -LPI achieve enhancements compared with both LE and LPI by a large margin, because both of LE and LPI get useful semantic features, and these features are also different from word embeddings used as input of CNN. From this view, we say that our proposed STC has potential to behave more effective when the pseudo supervision is able to get semantic meaningful features, which is different enough from the input of CNN.",
"Furthermore, from the results of K-means and AE in Table TABREF43 - TABREF44 and Figure FIGREF49 - FIGREF53 , we note that TF-IDF weighting gives a more remarkable improvement for K-means, while TF weighting works better than TF-IDF weighting for Average Embedding. Maybe the reason is that pre-trained word embeddings encode some useful information from external corpus and are able to get even better results without TF-IDF weighting. Meanwhile, we find that LE get quite unusual good performance than LPI, LSA and AE in SearchSnippets dataset, which is not found in the other two datasets. To get clear about this, and also to make a much better demonstration about our proposed approaches and other baselines, we further report 2-dimensional text embeddings on SearchSnippets in Figure FIGREF58 , using t-SNE BIBREF53 to get distributed stochastic neighbor embedding of the feature representations used in the clustering methods. We can see that the results of from AE and LSA seem to be fairly good or even better than the ones from LE and LPI, which is not the same as the results from ACC and NMI in Figure FIGREF49 - FIGREF53 . Meanwhile, RecNN (Ave.) performs better than BoW (both TF and TF-IDF) while RecNN (Top) does not, which is the same as the results from ACC and NMI in Table TABREF43 and Table TABREF44 . Then we guess that both ”the same as” and ”not the same as” above, is just a good example to illustrate that visualization tool, such as t-SNE, get some useful information for measuring results, which is different from the ones of ACC and NMI. Moreover, from this complementary view of t-SNE, we can see that our STC INLINEFORM0 -AE, STC INLINEFORM1 -LSA, STC INLINEFORM2 -LE, and STC INLINEFORM3 -LPI show more clear-cut margins among different semantic topics (that is, tags/labels), compared with AE, LSA, LE and LPI, respectively, as well as compared with both baselines, BoW and RecNN based ones.",
"From all these results, with three measures of ACC, NMI and t-SNE under three datasets, we can get a solid conclusion that our proposed approaches is an effective approaches to get useful semantic features for short text clustering."
],
[
"With the emergence of social media, short text clustering has become an increasing important task. This paper explores a new perspective to cluster short texts based on deep feature representation learned from the proposed self-taught convolutional neural networks. Our framework can be successfully accomplished without using any external tags/labels and complicated NLP pre-processing, and and our approach is a flexible framework, in which the traditional dimension reduction approaches could be used to get performance enhancement. Our extensive experimental study on three short text datasets shows that our approach can achieve a significantly better performance. In the future, how to select and incorporate more effective semantic features into the proposed framework would call for more research."
],
[
"We would like to thank reviewers for their comments, and acknowledge Kaggle and BioASQ for making the datasets available. This work is supported by the National Natural Science Foundation of China (No. 61602479, No. 61303172, No. 61403385) and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB02070005)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Short Text Clustering",
"Deep Neural Networks",
"Methodology",
"Deep Convolutional Neural Networks",
"Unsupervised Dimensionality Reduction",
"Learning",
"K-means for Clustering",
"Datasets",
"Pre-trained Word Vectors",
"Comparisons",
"Evaluation Metrics",
"Hyperparameter Settings",
"Results and Analysis",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"ce1b6507ec3bde25d3bf800bb829aae3b20f8e02"
],
"answer": [
{
"evidence": [
"The clustering performance is evaluated by comparing the clustering results of texts with the tags/labels provided by the text corpus. Two metrics, the accuracy (ACC) and the normalized mutual information metric (NMI), are used to measure the clustering performance BIBREF38 , BIBREF48 . Given a text INLINEFORM0 , let INLINEFORM1 and INLINEFORM2 be the obtained cluster label and the label provided by the corpus, respectively. Accuracy is defined as: DISPLAYFORM0"
],
"extractive_spans": [
"accuracy",
"normalized mutual information"
],
"free_form_answer": "",
"highlighted_evidence": [
"Two metrics, the accuracy (ACC) and the normalized mutual information metric (NMI), are used to measure the clustering performance BIBREF38 , BIBREF48 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0a50b0b01688b81afa0e69e67c0d17fb4a0115bd"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models."
],
"extractive_spans": [],
"free_form_answer": "On SearchSnippets dataset ACC 77.01%, NMI 62.94%, on StackOverflow dataset ACC 51.14%, NMI 49.08%, on Biomedical dataset ACC 43.00%, NMI 38.18%",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"fd3954e5af3582cee36835e85c7a5efd5e121874"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models."
],
"extractive_spans": [],
"free_form_answer": "on SearchSnippets dataset by 6.72% in ACC, by 6.94% in NMI; on Biomedical dataset by 5.77% in ACC, 3.91% in NMI",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"019aab7aedefee06681de16eae65bd3031125b84"
],
"answer": [
{
"evidence": [
"In our experiment, some widely used text clustering methods are compared with our approach. Besides K-means, Skip-thought Vectors, Recursive Neural Network and Paragraph Vector based clustering methods, four baseline clustering methods are directly based on the popular unsupervised dimensionality reduction methods as described in Section SECREF11 . We further compare our approach with some other non-biased neural networks, such as bidirectional RNN. More details are listed as follows:"
],
"extractive_spans": [
"K-means, Skip-thought Vectors, Recursive Neural Network and Paragraph Vector based clustering methods"
],
"free_form_answer": "",
"highlighted_evidence": [
"In our experiment, some widely used text clustering methods are compared with our approach. Besides K-means, Skip-thought Vectors, Recursive Neural Network and Paragraph Vector based clustering methods, four baseline clustering methods are directly based on the popular unsupervised dimensionality reduction methods as described in Section SECREF11 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0c80e649e7d54bf39704d39397af73f3b4847199"
],
"answer": [
{
"evidence": [
"We test our proposed approach on three public short text datasets. The summary statistics and semantic topics of these datasets are described in Table TABREF24 and Table TABREF25 .",
"SearchSnippets. This dataset was selected from the results of web search transaction using predefined phrases of 8 different domains by Phan et al. BIBREF41 .",
"StackOverflow. We use the challenge data published in Kaggle.com. The raw dataset consists 3,370,528 samples through July 31st, 2012 to August 14, 2012. In our experiments, we randomly select 20,000 question titles from 20 different tags as in Table TABREF25 .",
"Biomedical. We use the challenge data published in BioASQ's official website. In our experiments, we randomly select 20, 000 paper titles from 20 different MeSH major topics as in Table TABREF25 . As described in Table TABREF24 , the max length of selected paper titles is 53."
],
"extractive_spans": [
"SearchSnippets",
"StackOverflow",
"Biomedical"
],
"free_form_answer": "",
"highlighted_evidence": [
"We test our proposed approach on three public short text datasets. ",
"SearchSnippets. This dataset was selected from the results of web search transaction using predefined phrases of 8 different domains by Phan et al. BIBREF41 .",
"StackOverflow. We use the challenge data published in Kaggle.com. ",
"Biomedical. We use the challenge data published in BioASQ's official website. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"What were the evaluation metrics used?",
"What were their performance results?",
"By how much did they outperform the other methods?",
"Which popular clustering methods did they experiment with?",
"What datasets did they use?"
],
"question_id": [
"62a6382157d5f9c1dce6e6c24ac5994442053002",
"9e04730907ad728d62049f49ac828acb4e0a1a2a",
"5a0841cc0628e872fe473874694f4ab9411a1d10",
"a5dd569e6d641efa86d2c2b2e970ce5871e0963f",
"785c054f6ea04701f4ab260d064af7d124260ccc"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: The architecture of our proposed STC2 framework for short text clustering. Solid and hollow arrows represent forward and backward propagation directions of features and gradients respectively. The STC2 framework consist of deep convolutional neural network (CNN), unsupervised dimensionality reduction function and K-means module on the deep feature representation from the top hidden layers of CNN.",
"Figure 2: The architecture of dynamic convolutional neural network [11]. An input text is first projected to a matrix feature by looking up word embeddings, and then goes through wide convolutional layers, folding layers and k-max pooling layers, which provides a deep feature representation before the output layer.",
"Table 1: Statistics for the text datasets. C: the number of classes; Num: the dataset size; Len.: the mean/max length of texts and |V |: the vocabulary size.",
"Table 3: Coverage of word embeddings on three datasets. |V | is the vocabulary size and |T | is the number of tokens.",
"Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"Figure 3: ACC results on three short text datasets using our proposed STC2 based on AE, LSA, LE and LPI.",
"Figure 4: NMI results on three short text datasets using our proposed STC2 based on AE, LSA, LE and LPI.",
"Figure 5: A 2-dimensional embedding of original keyword features weighted with (a) TF and (b) TF-IDF, (c) vectors of the top tree node in RecNN, (d) average vectors of all tree node in RecNN, (e) average embeddings weighted with TF, subspace features based on (f) LSA, (g) LE and (h) LPI, deep learned features from (i) STC2-AE, (j) STC2-LSA, (k) STC2-LE and (l) STC2-LPI. All above features are respectively used in K-means (TF), K-means (TF-IDF), RecNN (Top), RecNN (Ave.), AE (TF), LSA(best), LE (best), LPI (best), and our proposed STC2-AE, STC2-LSA, STC2-LE and STC2-LPI on SearchSnippets. (Best viewed in color)"
],
"file": [
"5-Figure1-1.png",
"9-Figure2-1.png",
"14-Table1-1.png",
"16-Table3-1.png",
"22-Table6-1.png",
"23-Table7-1.png",
"24-Figure3-1.png",
"25-Figure4-1.png",
"27-Figure5-1.png"
]
} | [
"What were their performance results?",
"By how much did they outperform the other methods?"
] | [
[
"1701.00185-23-Table7-1.png",
"1701.00185-22-Table6-1.png"
],
[
"1701.00185-23-Table7-1.png",
"1701.00185-22-Table6-1.png"
]
] | [
"On SearchSnippets dataset ACC 77.01%, NMI 62.94%, on StackOverflow dataset ACC 51.14%, NMI 49.08%, on Biomedical dataset ACC 43.00%, NMI 38.18%",
"on SearchSnippets dataset by 6.72% in ACC, by 6.94% in NMI; on Biomedical dataset by 5.77% in ACC, 3.91% in NMI"
] | 85 |
1911.03894 | CamemBERT: a Tasty French Language Model | Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models—in all languages except English—very limited. Aiming to address this issue for French, we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). We measure the performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and downstream applications for French NLP. | {
"paragraphs": [
[
"Pretrained word representations have a long history in Natural Language Processing (NLP), from non-neural methods BIBREF0, BIBREF1, BIBREF2 to neural word embeddings BIBREF3, BIBREF4 and to contextualised representations BIBREF5, BIBREF6. Approaches shifted more recently from using these representations as an input to task-specific architectures to replacing these architectures with large pretrained language models. These models are then fine-tuned to the task at hand with large improvements in performance over a wide range of tasks BIBREF7, BIBREF8, BIBREF9, BIBREF10.",
"These transfer learning methods exhibit clear advantages over more traditional task-specific approaches, probably the most important being that they can be trained in an unsupervised manner. They nevertheless come with implementation challenges, namely the amount of data and computational resources needed for pretraining that can reach hundreds of gigabytes of uncompressed text and require hundreds of GPUs BIBREF11, BIBREF9. The latest transformer architecture has gone uses as much as 750GB of plain text and 1024 TPU v3 for pretraining BIBREF10. This has limited the availability of these state-of-the-art models to the English language, at least in the monolingual setting. Even though multilingual models give remarkable results, they are often larger and their results still lag behind their monolingual counterparts BIBREF12. This is particularly inconvenient as it hinders their practical use in NLP systems as well as the investigation of their language modeling capacity, something that remains to be investigated in the case of, for instance, morphologically rich languages.",
"We take advantage of the newly available multilingual corpus OSCAR BIBREF13 and train a monolingual language model for French using the RoBERTa architecture. We pretrain the model - which we dub CamemBERT- and evaluate it in four different downstream tasks for French: part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER) and natural language inference (NLI). CamemBERT improves the state of the art for most tasks over previous monolingual and multilingual approaches, which confirms the effectiveness of large pretrained language models for French.",
"We summarise our contributions as follows:",
"We train a monolingual BERT model on the French language using recent large-scale corpora.",
"We evaluate our model on four downstream tasks (POS tagging, dependency parsing, NER and natural language inference (NLI)), achieving state-of-the-art results in most tasks, confirming the effectiveness of large BERT-based models for French.",
"We release our model in a user-friendly format for popular open-source libraries so that it can serve as a strong baseline for future research and be useful for French NLP practitioners."
],
[
"The first neural word vector representations were non-contextualised word embeddings, most notably word2vec BIBREF3, GloVe BIBREF4 and fastText BIBREF14, which were designed to be used as input to task-specific neural architectures. Contextualised word representations such as ELMo BIBREF5 and flair BIBREF6, improved the expressivity of word embeddings by taking context into account. They improved the performance of downstream tasks when they replaced traditional word representations. This paved the way towards larger contextualised models that replaced downstream architectures in most tasks. These approaches, trained with language modeling objectives, range from LSTM-based architectures such as ULMFiT BIBREF15 to the successful transformer-based architectures such as GPT2 BIBREF8, BERT BIBREF7, RoBERTa BIBREF9 and more recently ALBERT BIBREF16 and T5 BIBREF10."
],
[
"Since the introduction of word2vec BIBREF3, many attempts have been made to create monolingual models for a wide range of languages. For non-contextual word embeddings, the first two attempts were by BIBREF17 and BIBREF18 who created word embeddings for a large number of languages using Wikipedia. Later BIBREF19 trained fastText word embeddings for 157 languages using Common Crawl and showed that using crawled data significantly increased the performance of the embeddings relatively to those trained only on Wikipedia."
],
[
"Following the success of large pretrained language models, they were extended to the multilingual setting with multilingual BERT , a single multilingual model for 104 different languages trained on Wikipedia data, and later XLM BIBREF12, which greatly improved unsupervised machine translation. A few monolingual models have been released: ELMo models for Japanese, Portuguese, German and Basque and BERT for Simplified and Traditional Chinese and German.",
"However, to the best of our knowledge, no particular effort has been made toward training models for languages other than English, at a scale similar to the latest English models (e.g. RoBERTa trained on more than 100GB of data)."
],
[
"Our approach is based on RoBERTa BIBREF9, which replicates and improves the initial BERT by identifying key hyper-parameters for more robust performance.",
"In this section, we describe the architecture, training objective, optimisation setup and pretraining data that was used for CamemBERT.",
"CamemBERT differs from RoBERTa mainly with the addition of whole-word masking and the usage of SentencePiece tokenisation BIBREF20."
],
[
"Similar to RoBERTa and BERT, CamemBERT is a multi-layer bidirectional Transformer BIBREF21. Given the widespread usage of Transformers, we do not describe them in detail here and refer the reader to BIBREF21. CamemBERT uses the original BERT $_{\\small \\textsc {BASE}}$ configuration: 12 layers, 768 hidden dimensions, 12 attention heads, which amounts to 110M parameters."
],
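As a rough, back-of-the-envelope illustration of where the 110M parameters of this BERT_BASE-style configuration come from, the Python sketch below tallies the main weight matrices. The 32k vocabulary size is taken from the tokenisation description later in the paper; exact totals depend on implementation details such as bias terms and tied embeddings, so this is an approximation rather than the authors' accounting.

```python
# Rough parameter tally for a BERT_BASE-like encoder (approximate, for intuition only).
vocab_size, max_pos, hidden, layers, ffn = 32_000, 512, 768, 12, 3072

embeddings = vocab_size * hidden + max_pos * hidden            # token + position embeddings
per_layer_attention = 4 * (hidden * hidden + hidden)           # Q, K, V and output projections
per_layer_ffn = hidden * ffn + ffn + ffn * hidden + hidden     # two feed-forward projections
per_layer_norms = 4 * hidden                                   # two LayerNorms (scale + bias)
encoder = layers * (per_layer_attention + per_layer_ffn + per_layer_norms)

total = embeddings + encoder
print(f"~{total / 1e6:.0f}M parameters")                       # prints roughly 110M
```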
[
"We train our model on the Masked Language Modeling (MLM) task. Given an input text sequence composed of $N$ tokens $x_1, ..., x_N$, we select $15\\%$ of tokens for possible replacement. Among those selected tokens, 80% are replaced with the special $<$mask$>$ token, 10% are left unchanged and 10% are replaced by a random token. The model is then trained to predict the initial masked tokens using cross-entropy loss.",
"Following RoBERTa we dynamically mask tokens instead of fixing them statically for the whole dataset during preprocessing. This improves variability and makes the model more robust when training for multiple epochs.",
"Since we segment the input sentence into subwords using SentencePiece, the input tokens to the models can be subwords. An upgraded version of BERT and BIBREF22 have shown that masking whole words instead of individual subwords leads to improved performance. Whole-word masking (WWM) makes the training task more difficult because the model has to predict a whole word instead of predicting only part of the word given the rest. As a result, we used WWM for CamemBERT by first randomly sampling 15% of the words in the sequence and then considering all subword tokens in each of these 15% words for candidate replacement. This amounts to a proportion of selected tokens that is close to the original 15%. These tokens are then either replaced by $<$mask$>$ tokens (80%), left unchanged (10%) or replaced by a random token.",
"Subsequent work has shown that the next sentence prediction task (NSP) originally used in BERT does not improve downstream task performance BIBREF12, BIBREF9, we do not use NSP as a consequence."
],
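A minimal sketch of the whole-word masking procedure described above, assuming the input is already segmented into subwords together with flags marking which subwords start a new word. The function name, the way word boundaries are passed in, and the handling of the 80/10/10 split follow the description in the text, but the details are illustrative rather than the CamemBERT implementation.

```python
import random

MASK = "<mask>"

def whole_word_mask(tokens, word_starts, vocab, p_word=0.15):
    """tokens: list of subword strings; word_starts: booleans marking subwords that
    begin a new word; vocab: list of tokens to sample random replacements from."""
    # Group subword indices into words.
    words, current = [], []
    for i, start in enumerate(word_starts):
        if start and current:
            words.append(current)
            current = []
        current.append(i)
    if current:
        words.append(current)

    masked = list(tokens)
    labels = [None] * len(tokens)        # None = not selected, otherwise the original token
    for word in words:
        if random.random() >= p_word:    # sample roughly 15% of the *words*
            continue
        for i in word:                   # every subword of a selected word becomes a target
            labels[i] = tokens[i]
            r = random.random()
            if r < 0.8:
                masked[i] = MASK                      # 80%: replace with <mask>
            elif r < 0.9:
                masked[i] = random.choice(vocab)      # 10%: replace with a random token
            # remaining 10%: keep the original subword unchanged
    return masked, labels
```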
[
"Following BIBREF9, we optimise the model using Adam BIBREF23 ($\\beta _1 = 0.9$, $\\beta _2 = 0.98$) for 100k steps. We use large batch sizes of 8192 sequences. Each sequence contains at most 512 tokens. We enforce each sequence to only contain complete sentences. Additionally, we used the DOC-SENTENCES scenario from BIBREF9, consisting of not mixing multiple documents in the same sequence, which showed slightly better results."
],
[
"We segment the input text into subword units using SentencePiece BIBREF20. SentencePiece is an extension of Byte-Pair encoding (BPE) BIBREF24 and WordPiece BIBREF25 that does not require pre-tokenisation (at the word or token level), thus removing the need for language-specific tokenisers. We use a vocabulary size of 32k subword tokens. These are learned on $10^7$ sentences sampled from the pretraining dataset. We do not use subword regularisation (i.e. sampling from multiple possible segmentations) in our implementation for simplicity."
],
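For illustration, training a 32k-subword SentencePiece model on a sample of sentences might look like the sketch below, using the sentencepiece Python package. The file names and the character-coverage setting are assumptions, since the paper only specifies the vocabulary size and the number of sampled sentences.

```python
import sentencepiece as spm

# Train on a file with the sampled raw sentences, one per line (path is hypothetical).
spm.SentencePieceTrainer.train(
    input="french_sample_10M.txt",
    model_prefix="camembert_sp",     # writes camembert_sp.model / camembert_sp.vocab
    vocab_size=32000,
    character_coverage=0.9995,       # assumption: a typical setting for Latin-script text
)

sp = spm.SentencePieceProcessor(model_file="camembert_sp.model")
print(sp.encode("Le camembert est délicieux.", out_type=str))
```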
[
"Pretrained language models can be significantly improved by using more data BIBREF9, BIBREF10. Therefore we used French text extracted from Common Crawl, in particular, we use OSCAR BIBREF13 a pre-classified and pre-filtered version of the November 2018 Common Craw snapshot.",
"OSCAR is a set of monolingual corpora extracted from Common Crawl, specifically from the plain text WET format distributed by Common Crawl, which removes all HTML tags and converts all text encodings to UTF-8. OSCAR follows the same approach as BIBREF19 by using a language classification model based on the fastText linear classifier BIBREF26, BIBREF27 pretrained on Wikipedia, Tatoeba and SETimes, which supports 176 different languages.",
"OSCAR performs a deduplication step after language classification and without introducing a specialised filtering scheme, other than only keeping paragraphs containing 100 or more UTF-8 encoded characters, making OSCAR quite close to the original Crawled data.",
"We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens."
],
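A hedged sketch of the kind of light filtering described above: keep only paragraphs of at least 100 UTF-8 encoded characters and drop exact duplicates. This mimics the behaviour described in the text; it is not the actual OSCAR pipeline.

```python
def filter_and_deduplicate(paragraphs, min_chars=100):
    """Yield paragraphs with at least `min_chars` characters, dropping exact duplicates."""
    seen = set()
    for p in paragraphs:
        if len(p) < min_chars:
            continue
        key = hash(p)          # exact-duplicate detection only
        if key in seen:
            continue
        seen.add(key)
        yield p
```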
[
"We fist evaluate CamemBERT on the two downstream tasks of part-of-speech (POS) tagging and dependency parsing. POS tagging is a low-level syntactic task, which consists in assigning to each word its corresponding grammatical category. Dependency parsing consists in predicting the labeled syntactic tree capturing the syntactic relations between words.",
"We run our experiments using the Universal Dependencies (UD) paradigm and its corresponding UD POS tag set BIBREF28 and UD treebank collection version 2.2 BIBREF29, which was used for the CoNLL 2018 shared task. We perform our work on the four freely available French UD treebanks in UD v2.2: GSD, Sequoia, Spoken, and ParTUT.",
"GSD BIBREF30 is the second-largest treebank available for French after the FTB (described in subsection SECREF25), it contains data from blogs, news articles, reviews, and Wikipedia. The Sequoia treebank BIBREF31, BIBREF32 comprises more than 3000 sentences, from the French Europarl, the regional newspaper L’Est Républicain, the French Wikipedia and documents from the European Medicines Agency. Spoken is a corpus converted automatically from the Rhapsodie treebank BIBREF33, BIBREF34 with manual corrections. It consists of 57 sound samples of spoken French with orthographic transcription and phonetic transcription aligned with sound (word boundaries, syllables, and phonemes), syntactic and prosodic annotations. Finally, ParTUT is a conversion of a multilingual parallel treebank developed at the University of Turin, and consisting of a variety of text genres, including talks, legal texts, and Wikipedia articles, among others; ParTUT data is derived from the already-existing parallel treebank Par(allel)TUT BIBREF35 . Table TABREF23 contains a summary comparing the sizes of the treebanks.",
"We evaluate the performance of our models using the standard UPOS accuracy for POS tagging, and Unlabeled Attachment Score (UAS) and Labeled Attachment Score (LAS) for dependency parsing. We assume gold tokenisation and gold word segmentation as provided in the UD treebanks."
],
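Under gold tokenisation and word segmentation, the three metrics above reduce to simple per-word ratios. The helper below is an illustrative computation, not the official CoNLL 2018 evaluation script.

```python
def upos_uas_las(gold, pred):
    """gold, pred: lists of (upos, head, deprel) tuples, one per word, aligned 1:1."""
    assert len(gold) == len(pred) and gold, "sequences must be non-empty and aligned"
    n = len(gold)
    upos = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n                   # tag accuracy
    uas = sum(g[1] == p[1] for g, p in zip(gold, pred)) / n                    # correct head
    las = sum(g[1] == p[1] and g[2] == p[2] for g, p in zip(gold, pred)) / n   # head + label
    return upos, uas, las
```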
[
"To demonstrate the value of building a dedicated version of BERT for French, we first compare CamemBERT to the multilingual cased version of BERT (designated as mBERT). We then compare our models to UDify BIBREF36. UDify is a multitask and multilingual model based on mBERT that is near state-of-the-art on all UD languages including French for both POS tagging and dependency parsing.",
"It is relevant to compare CamemBERT to UDify on those tasks because UDify is the work that pushed the furthest the performance in fine-tuning end-to-end a BERT-based model on downstream POS tagging and dependency parsing. Finally, we compare our model to UDPipe Future BIBREF37, a model ranked 3rd in dependency parsing and 6th in POS tagging during the CoNLL 2018 shared task BIBREF38. UDPipe Future provides us a strong baseline that does not make use of any pretrained contextual embedding.",
"We will compare to the more recent cross-lingual language model XLM BIBREF12, as well as the state-of-the-art CoNLL 2018 shared task results with predicted tokenisation and segmentation in an updated version of the paper."
],
[
"Named Entity Recognition (NER) is a sequence labeling task that consists in predicting which words refer to real-world objects, such as people, locations, artifacts and organisations. We use the French Treebank (FTB) BIBREF39 in its 2008 version introduced by cc-clustering:09short and with NER annotations by sagot2012annotation. The NER-annotated FTB contains more than 12k sentences and more than 350k tokens extracted from articles of the newspaper Le Monde published between 1989 and 1995. In total, it contains 11,636 entity mentions distributed among 7 different types of entities, namely: 2025 mentions of “Person”, 3761 of “Location”, 2382 of “Organisation”, 3357 of “Company”, 67 of “Product”, 15 of “POI” (Point of Interest) and 29 of “Fictional Character”.",
"A large proportion of the entity mentions in the treebank are multi-word entities. For NER we therefore report the 3 metrics that are commonly used to evaluate models: precision, recall, and F1 score. Here precision measures the percentage of entities found by the system that are correctly tagged, recall measures the percentage of named entities present in the corpus that are found and the F1 score combines both precision and recall measures giving a general idea of a model's performance."
],
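Because many FTB entities span several words, precision, recall and F1 are computed over complete entity mentions (span plus type) rather than over individual tokens. A minimal sketch of this mention-level evaluation, with illustrative data structures:

```python
def ner_prf(gold_mentions, pred_mentions):
    """Each argument is a set of (start, end, entity_type) tuples over the whole corpus."""
    tp = len(gold_mentions & pred_mentions)                   # exact span + type matches
    precision = tp / len(pred_mentions) if pred_mentions else 0.0
    recall = tp / len(gold_mentions) if gold_mentions else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1
```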
[
"Most of the advances in NER haven been achieved on English, particularly focusing on the CoNLL 2003 BIBREF40 and the Ontonotes v5 BIBREF41, BIBREF42 English corpora. NER is a task that was traditionally tackled using Conditional Random Fields (CRF) BIBREF43 which are quite suited for NER; CRFs were later used as decoding layers for Bi-LSTM architectures BIBREF44, BIBREF45 showing considerable improvements over CRFs alone. These Bi-LSTM-CRF architectures were later enhanced with contextualised word embeddings which yet again brought major improvements to the task BIBREF5, BIBREF6. Finally, large pretrained architectures settled the current state of the art showing a small yet important improvement over previous NER-specific architectures BIBREF7, BIBREF46.",
"In non-English NER the CoNLL 2002 shared task included NER corpora for Spanish and Dutch corpora BIBREF47 while the CoNLL 2003 included a German corpus BIBREF40. Here the recent efforts of BIBREF48 settled the state of the art for Spanish and Dutch, while BIBREF6 did it for German.",
"In French, no extensive work has been done due to the limited availability of NER corpora. We compare our model with the strong baselines settled by BIBREF49, who trained both CRF and BiLSTM-CRF architectures on the FTB and enhanced them using heuristics and pretrained word embeddings."
],
[
"We also evaluate our model on the Natural Language Inference (NLI) task, using the French part of the XNLI dataset BIBREF50. NLI consists in predicting whether a hypothesis sentence is entailed, neutral or contradicts a premise sentence.",
"The XNLI dataset is the extension of the Multi-Genre NLI (MultiNLI) corpus BIBREF51 to 15 languages by translating the validation and test sets manually into each of those languages. The English training set is also machine translated for all languages. The dataset is composed of 122k train, 2490 valid and 5010 test examples. As usual, NLI performance is evaluated using accuracy.",
"To evaluate a model on a language other than English (such as French), we consider the two following settings:",
"TRANSLATE-TEST: The French test set is machine translated into English, and then used with an English classification model. This setting provides a reasonable, although imperfect, way to circumvent the fact that no such data set exists for French, and results in very strong baseline scores.",
"TRANSLATE-TRAIN: The French model is fine-tuned on the machine-translated English training set and then evaluated on the French test set. This is the setting that we used for CamemBERT."
],
[
"For the TRANSLATE-TEST setting, we report results of the English RoBERTa to act as a reference.",
"In the TRANSLATE-TRAIN setting, we report the best scores from previous literature along with ours. BiLSTM-max is the best model in the original XNLI paper, mBERT which has been reported in French in BIBREF52 and XLM (MLM+TLM) is the best-presented model from BIBREF50."
],
[
"In this section, we measure the performance of CamemBERT by evaluating it on the four aforementioned tasks: POS tagging, dependency parsing, NER and NLI."
],
[
"We use the RoBERTa implementation in the fairseq library BIBREF53. Our learning rate is warmed up for 10k steps up to a peak value of $0.0007$ instead of the original $0.0001$ given our large batch size (8192). The learning rate fades to zero with polynomial decay. We pretrain our model on 256 Nvidia V100 GPUs (32GB each) for 100k steps during 17h."
],
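The learning-rate schedule described above (linear warm-up to a peak of 0.0007 over 10k steps, then polynomial decay to zero over the 100k updates) can be written as a small function. The linear form of the decay (power = 1) is an assumption, since the paper does not state the polynomial degree.

```python
def learning_rate(step, peak=7e-4, warmup_steps=10_000, total_steps=100_000, power=1.0):
    """Warm up linearly to `peak`, then decay polynomially to zero."""
    if step < warmup_steps:
        return peak * step / warmup_steps
    remaining = (total_steps - step) / (total_steps - warmup_steps)
    return peak * max(remaining, 0.0) ** power
```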
[
"For each task, we append the relevant predictive layer on top of CamemBERT's Transformer architecture. Following the work done on BERT BIBREF7, for sequence tagging and sequence labeling we append a linear layer respectively to the $<$s$>$ special token and to the first subword token of each word. For dependency parsing, we plug a bi-affine graph predictor head as inspired by BIBREF54 following the work done on multilingual parsing with BERT by BIBREF36. We refer the reader to these two articles for more details on this module.",
"We fine-tune independently CamemBERT for each task and each dataset. We optimise the model using the Adam optimiser BIBREF23 with a fixed learning rate. We run a grid search on a combination of learning rates and batch sizes. We select the best model on the validation set out of the 30 first epochs.",
"Although this might push the performances even further, for all tasks except NLI, we don't apply any regularisation techniques such as weight decay, learning rate warm-up or discriminative fine-tuning. We show that fine-tuning CamemBERT in a straight-forward manner leads to state-of-the-art results on most tasks and outperforms the existing BERT-based models in most cases.",
"The POS tagging, dependency parsing, and NER experiments are run using hugging face's Transformer library extended to support CamemBERT and dependency parsing BIBREF55. The NLI experiments use the fairseq library following the RoBERTa implementation."
],
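A minimal PyTorch sketch of the sequence-labelling head described above: a linear classifier applied to the encoder output at the first subword of each word. The encoder is treated as a black box, and the gather-by-first-subword indexing is one possible implementation, not necessarily the authors' code.

```python
import torch
import torch.nn as nn

class FirstSubwordTagger(nn.Module):
    """Linear tagging head on top of a pretrained encoder (e.g. a CamemBERT-like model)."""

    def __init__(self, encoder, hidden_size, num_labels):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, input_ids, attention_mask, first_subword_index):
        # hidden: (batch, seq_len, hidden_size) contextual representations
        hidden = self.encoder(input_ids, attention_mask=attention_mask)[0]
        # Select the representation of the first subword of every word.
        batch_idx = torch.arange(hidden.size(0), device=hidden.device).unsqueeze(-1)
        word_repr = hidden[batch_idx, first_subword_index]   # (batch, num_words, hidden_size)
        return self.classifier(word_repr)                    # (batch, num_words, num_labels)
```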
[
"For POS tagging and dependency parsing, we compare CamemBERT to three other near state-of-the-art models in Table TABREF32. CamemBERT outperforms UDPipe Future by a large margin for all treebanks and all metrics. Despite a much simpler optimisation process, CamemBERT beats UDify performances on all the available French treebanks.",
"CamemBERT also demonstrates higher performances than mBERT on those tasks. We observe a larger error reduction for parsing than for tagging. For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT. For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT."
],
[
"On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. It should be noted that CamemBERT uses far fewer parameters than RoBERTa (110M vs. 355M parameters)."
],
[
"For named entity recognition, our experiments show that CamemBERT achieves a slightly better precision than the traditional CRF-based SEM architectures described above in Section SECREF25 (CRF and Bi-LSTM+CRF), but shows a dramatic improvement in finding entity mentions, raising the recall score by 3.5 points. Both improvements result in a 2.36 point increase in the F1 score with respect to the best SEM architecture (BiLSTM-CRF), giving CamemBERT the state of the art for NER on the FTB. One other important finding is the results obtained by mBERT. Previous work with this model showed increased performance in NER for German, Dutch and Spanish when mBERT is used as contextualised word embedding for an NER-specific model BIBREF48, but our results suggest that the multilingual setting in which mBERT was trained is simply not enough to use it alone and fine-tune it for French NER, as it shows worse performance than even simple CRF models, suggesting that monolingual models could be better at NER."
],
[
"CamemBERT displays improved performance compared to prior work for the 4 downstream tasks considered. This confirms the hypothesis that pretrained language models can be effectively fine-tuned for various downstream tasks, as observed for English in previous work. Moreover, our results also show that dedicated monolingual models still outperform multilingual ones. We explain this point in two ways. First, the scale of data is possibly essential to the performance of CamemBERT. Indeed, we use 138GB of uncompressed text vs. 57GB for mBERT. Second, with more data comes more diversity in the pretraining distribution. Reaching state-of-the-art performances on 4 different tasks and 6 different datasets requires robust pretrained models. Our results suggest that the variability in the downstream tasks and datasets considered is handled more efficiently by a general language model than by Wikipedia-pretrained models such as mBERT."
],
[
"CamemBERT improves the state of the art for multiple downstream tasks in French. It is also lighter than other BERT-based approaches such as mBERT or XLM. By releasing our model, we hope that it can serve as a strong baseline for future research in French NLP, and expect our experiments to be reproduced in many other languages. We will publish an updated version in the near future where we will explore and release models trained for longer, with additional downstream tasks, baselines (e.g. XLM) and analysis, we will also train additional models with potentially cleaner corpora such as CCNet BIBREF56 for more accurate performance evaluation and more complete ablation."
],
[
"This work was partly funded by three French National grants from the Agence Nationale de la Recherche, namely projects PARSITI (ANR-16-CE33-0021), SoSweet (ANR-15-CE38-0011) and BASNUM (ANR-18-CE38-0003), as well as by the last author's chair in the PRAIRIE institute."
],
[
"We analyze the addition of whole-word masking on the downstream performance of CamemBERT. As reported for English on other downstream tasks, whole word masking improves downstream performances for all tasks but NER as seen in Table TABREF46. NER is highly sensitive to capitalisation, prefixes, suffixes and other subword features that could be used by a model to correctly identify entity mentions. Thus the added information by learning the masking at a subword level rather than at whole-word level seems to have a detrimental effect on downstream NER results."
]
],
"section_name": [
"Introduction",
"Related Work ::: From non-contextual to contextual word embeddings",
"Related Work ::: Non-contextual word embeddings for languages other than English",
"Related Work ::: Contextualised models for languages other than English",
"CamemBERT",
"CamemBERT ::: Architecture",
"CamemBERT ::: Pretraining objective",
"CamemBERT ::: Optimisation",
"CamemBERT ::: Segmentation into subword units",
"CamemBERT ::: Pretraining data",
"Evaluation ::: Part-of-speech tagging and dependency parsing",
"Evaluation ::: Part-of-speech tagging and dependency parsing ::: Baselines",
"Evaluation ::: Named Entity Recognition",
"Evaluation ::: Named Entity Recognition ::: Baselines",
"Evaluation ::: Natural Language Inference",
"Evaluation ::: Natural Language Inference ::: Baselines",
"Experiments",
"Experiments ::: Experimental Setup ::: Pretraining",
"Experiments ::: Experimental Setup ::: Fine-tuning",
"Experiments ::: Results ::: Part-of-Speech tagging and dependency parsing",
"Experiments ::: Results ::: Natural Language Inference: XNLI",
"Experiments ::: Results ::: Named-Entity Recognition",
"Experiments ::: Discussion",
"Conclusion",
"Acknowledgments",
"Appendix ::: Impact of Whole-Word Masking"
]
} | {
"answers": [
{
"annotation_id": [
"e9e1b87a031a0b9b9f2f47eede9097c58a6b500f"
],
"answer": [
{
"evidence": [
"We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens."
],
"extractive_spans": [
"unshuffled version of the French OSCAR corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"15bd8457ee6ef5ee00c78810010f9b9613730b86"
],
"answer": [
{
"evidence": [
"Experiments ::: Results ::: Natural Language Inference: XNLI",
"On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. It should be noted that CamemBERT uses far fewer parameters than RoBERTa (110M vs. 355M parameters)."
],
"extractive_spans": [
"its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa"
],
"free_form_answer": "",
"highlighted_evidence": [
"Experiments ::: Results ::: Natural Language Inference: XNLI\nOn the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. It should be noted that CamemBERT uses far fewer parameters than RoBERTa (110M vs. 355M parameters)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"23b1324a33d14f2aac985bc2fca7d204607225ed"
],
"answer": [
{
"evidence": [
"We will compare to the more recent cross-lingual language model XLM BIBREF12, as well as the state-of-the-art CoNLL 2018 shared task results with predicted tokenisation and segmentation in an updated version of the paper.",
"In French, no extensive work has been done due to the limited availability of NER corpora. We compare our model with the strong baselines settled by BIBREF49, who trained both CRF and BiLSTM-CRF architectures on the FTB and enhanced them using heuristics and pretrained word embeddings.",
"In the TRANSLATE-TRAIN setting, we report the best scores from previous literature along with ours. BiLSTM-max is the best model in the original XNLI paper, mBERT which has been reported in French in BIBREF52 and XLM (MLM+TLM) is the best-presented model from BIBREF50."
],
"extractive_spans": [],
"free_form_answer": "POS and DP task: CONLL 2018\nNER task: (no extensive work) Strong baselines CRF and BiLSTM-CRF\nNLI task: mBERT or XLM (not clear from text)",
"highlighted_evidence": [
"We will compare to the more recent cross-lingual language model XLM BIBREF12, as well as the state-of-the-art CoNLL 2018 shared task results with predicted tokenisation and segmentation in an updated version of the paper.",
"In French, no extensive work has been done due to the limited availability of NER corpora. We compare our model with the strong baselines settled by BIBREF49, who trained both CRF and BiLSTM-CRF architectures on the FTB and enhanced them using heuristics and pretrained word embeddings.",
"In the TRANSLATE-TRAIN setting, we report the best scores from previous literature along with ours. BiLSTM-max is the best model in the original XNLI paper, mBERT which has been reported in French in BIBREF52 and XLM (MLM+TLM) is the best-presented model from BIBREF50."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"742351bb0ed07c34bbc4badb7fdd255761bd664a"
],
"answer": [
{
"evidence": [
"CamemBERT also demonstrates higher performances than mBERT on those tasks. We observe a larger error reduction for parsing than for tagging. For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT. For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT.",
"On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. It should be noted that CamemBERT uses far fewer parameters than RoBERTa (110M vs. 355M parameters).",
"For named entity recognition, our experiments show that CamemBERT achieves a slightly better precision than the traditional CRF-based SEM architectures described above in Section SECREF25 (CRF and Bi-LSTM+CRF), but shows a dramatic improvement in finding entity mentions, raising the recall score by 3.5 points. Both improvements result in a 2.36 point increase in the F1 score with respect to the best SEM architecture (BiLSTM-CRF), giving CamemBERT the state of the art for NER on the FTB. One other important finding is the results obtained by mBERT. Previous work with this model showed increased performance in NER for German, Dutch and Spanish when mBERT is used as contextualised word embedding for an NER-specific model BIBREF48, but our results suggest that the multilingual setting in which mBERT was trained is simply not enough to use it alone and fine-tune it for French NER, as it shows worse performance than even simple CRF models, suggesting that monolingual models could be better at NER."
],
"extractive_spans": [
"2.36 point increase in the F1 score with respect to the best SEM architecture",
"on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM)",
"lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa",
"For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT",
"For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT"
],
"free_form_answer": "",
"highlighted_evidence": [
"For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT. For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT.",
"On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M).",
"However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa.",
"Both improvements result in a 2.36 point increase in the F1 score with respect to the best SEM architecture (BiLSTM-CRF), giving CamemBERT the state of the art for NER on the FTB."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"92ee0c954a5d2197aa496d78771ac58396ee8035"
],
"answer": [
{
"evidence": [
"To demonstrate the value of building a dedicated version of BERT for French, we first compare CamemBERT to the multilingual cased version of BERT (designated as mBERT). We then compare our models to UDify BIBREF36. UDify is a multitask and multilingual model based on mBERT that is near state-of-the-art on all UD languages including French for both POS tagging and dependency parsing."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To demonstrate the value of building a dedicated version of BERT for French, we first compare CamemBERT to the multilingual cased version of BERT (designated as mBERT)."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"825945c12f43ef2d07ba436f460fa58d3829dde3"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"02150b6860e8e3097f4f1cb1c60d42af03952c54"
],
"answer": [
{
"evidence": [
"We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens."
],
"extractive_spans": [
"unshuffled version of the French OSCAR corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is CamemBERT trained on?",
"Which tasks does CamemBERT not improve on?",
"What is the state of the art?",
"How much better was results of CamemBERT than previous results on these tasks?",
"Was CamemBERT compared against multilingual BERT on these tasks?",
"How long was CamemBERT trained?",
"What data is used for training CamemBERT?"
],
"question_id": [
"71f2b368228a748fd348f1abf540236568a61b07",
"d3d4eef047aa01391e3e5d613a0f1f786ae7cfc7",
"63723c6b398100bba5dc21754451f503cb91c9b8",
"5471766ca7c995dd7f0f449407902b32ac9db269",
"dc49746fc98647445599da9d17bc004bafdc4579",
"8720c096c8b990c7b19f956ee4930d5f2c019e2b",
"b573b36936ffdf1d70e66f9b5567511c989b46b2"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Sizes in Number of tokens, words and phrases of the 4 treebanks used in the evaluations of POS-tagging and dependency parsing.",
"Table 2: Final POS and dependency parsing scores of CamemBERT and mBERT (fine-tuned in the exact same conditions as CamemBERT), UDify as reported in the original paper on 4 French treebanks (French GSD, Spoken, Sequoia and ParTUT), reported on test sets (4 averaged runs) assuming gold tokenisation. Best scores in bold, second to best underlined.",
"Table 3: Accuracy of models for French on the XNLI test set. Best scores in bold, second to best underlined.",
"Table 4: Results for NER on the FTB. Best scores in bold, second to best underlined.",
"Table 5: Comparing subword and whole-word masking procedures on the validation sets of each task. Each score is an average of 4 runs with different random seeds. For POS tagging and Dependency parsing, we average the scores on the 4 treebanks.)"
],
"file": [
"3-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"10-Table5-1.png"
]
} | [
"What is the state of the art?"
] | [
[
"1911.03894-Evaluation ::: Named Entity Recognition ::: Baselines-2",
"1911.03894-Evaluation ::: Part-of-speech tagging and dependency parsing ::: Baselines-2",
"1911.03894-Evaluation ::: Natural Language Inference ::: Baselines-1"
]
] | [
"POS and DP task: CONLL 2018\nNER task: (no extensive work) Strong baselines CRF and BiLSTM-CRF\nNLI task: mBERT or XLM (not clear from text)"
] | 89 |
1710.01492 | Semantic Sentiment Analysis of Twitter Data | Internet and the proliferation of smart mobile devices have changed the way information is created, shared, and spreads, e.g., microblogs such as Twitter, weblogs such as LiveJournal, social networks such as Facebook, and instant messengers such as Skype and WhatsApp are now commonly used to share thoughts and opinions about anything in the surrounding world. This has resulted in the proliferation of social media content, thus creating new opportunities to study public opinion at a scale that was never possible before. Naturally, this abundance of data has quickly attracted business and research interest from various fields including marketing, political science, and social studies, among many others, which are interested in questions like these: Do people like the new Apple Watch? Do Americans support ObamaCare? How do Scottish feel about the Brexit? Answering these questions requires studying the sentiment of opinions people express in social media, which has given rise to the fast growth of the field of sentiment analysis in social media, with Twitter being especially popular for research due to its scale, representativeness, variety of topics discussed, as well as ease of public access to its messages. Here we present an overview of work on sentiment analysis on Twitter. | {
"paragraphs": [
[
"Microblog sentiment analysis; Twitter opinion mining"
],
[
"Sentiment Analysis: This is text analysis aiming to determine the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a piece of text."
],
[
"Sentiment analysis on Twitter is the use of natural language processing techniques to identify and categorize opinions expressed in a tweet, in order to determine the author's attitude toward a particular topic or in general. Typically, discrete labels such as positive, negative, neutral, and objective are used for this purpose, but it is also possible to use labels on an ordinal scale, or even continuous numerical values."
],
[
"Internet and the proliferation of smart mobile devices have changed the way information is created, shared, and spreads, e.g., microblogs such as Twitter, weblogs such as LiveJournal, social networks such as Facebook, and instant messengers such as Skype and WhatsApp are now commonly used to share thoughts and opinions about anything in the surrounding world. This has resulted in the proliferation of social media content, thus creating new opportunities to study public opinion at a scale that was never possible before.",
"Naturally, this abundance of data has quickly attracted business and research interest from various fields including marketing, political science, and social studies, among many others, which are interested in questions like these: Do people like the new Apple Watch? What do they hate about iPhone6? Do Americans support ObamaCare? What do Europeans think of Pope's visit to Palestine? How do we recognize the emergence of health problems such as depression? Do Germans like how Angela Merkel is handling the refugee crisis in Europe? What do republican voters in USA like/hate about Donald Trump? How do Scottish feel about the Brexit?",
"Answering these questions requires studying the sentiment of opinions people express in social media, which has given rise to the fast growth of the field of sentiment analysis in social media, with Twitter being especially popular for research due to its scale, representativeness, variety of topics discussed, as well as ease of public access to its messages BIBREF0 , BIBREF1 .",
"Despite all these opportunities, the rise of social media has also presented new challenges for natural language processing (NLP) applications, which had largely relied on NLP tools tuned for formal text genres such as newswire, and thus were not readily applicable to the informal language and style of social media. That language proved to be quite challenging with its use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, e.g., RT for re-tweet and #hashtags. In addition to the genre difference, there is also a difference in length: social media messages are generally short, often length-limited by design as in Twitter, i.e., a sentence or a headline rather than a full document. How to handle such challenges has only recently been the subject of thorough research BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 ."
],
[
"Sentiment analysis has a wide number of applications in areas such as market research, political and social sciences, and for studying public opinion in general, and Twitter is one of the most commonly-used platforms for this. This is due to its streaming nature, which allows for real-time analysis, to its social aspect, which encourages people to share opinions, and to the short size of the tweets, which simplifies linguistic analysis.",
"There are several formulations of the task of Sentiment Analysis on Twitter that look at different sizes of the target (e.g., at the level of words vs. phrases vs. tweets vs. sets of tweets), at different types of semantic targets (e.g., aspect vs. topic vs. overall tweet), at the explicitness of the target (e.g., sentiment vs. stance detection), at the scale of the expected label (2-point vs. 3-point vs. ordinal), etc. All these are explored at SemEval, the International Workshop on Semantic Evaluation, which has created a number of benchmark datasets and has enabled direct comparison between different systems and approaches, both as part of the competition and beyond.",
"Traditionally, the task has been addressed using supervised and semi-supervised methods, as well as using distant supervision, with the most important resource being sentiment polarity lexicons, and with feature-rich approaches as the dominant research direction for years. With the recent rise of deep learning, which in many cases eliminates the need for any explicit feature modeling, the importance of both lexicons and features diminishes, while at the same time attention is shifting towards learning from large unlabeled data, which is needed to train the high number of parameters of such complex models. Finally, as methods for sentiment analysis mature, more attention is also being paid to linguistic structure and to multi-linguality and cross-linguality."
],
[
"Sentiment analysis emerged as a popular research direction in the early 2000s. Initially, it was regarded as standard document classification into topics such as business, sport, and politics BIBREF10 . However, researchers soon realized that it was quite different from standard document classification BIBREF11 , and that it crucially needed external knowledge in the form of sentiment polarity lexicons.",
"Around the same time, other researchers realized the importance of external sentiment lexicons, e.g., Turney BIBREF12 proposed an unsupervised approach to learn the sentiment orientation of words/phrases: positive vs. negative. Later work studied the linguistic aspects of expressing opinions, evaluations, and speculations BIBREF13 , the role of context in determining the sentiment orientation BIBREF14 , of deeper linguistic processing such as negation handling BIBREF15 , of finer-grained sentiment distinctions BIBREF16 , of positional information BIBREF17 , etc. Moreover, it was recognized that in many cases, it is crucial to know not just the polarity of the sentiment but also the topic toward which this sentiment is expressed BIBREF18 .",
"Until the rise of social media, research on opinion mining and sentiment analysis had focused primarily on learning about the language of sentiment in general, meaning that it was either genre-agnostic BIBREF19 or focused on newswire texts BIBREF20 and customer reviews (e.g., from web forums), most notably about movies BIBREF10 and restaurants BIBREF21 but also about hotels, digital cameras, cell phones, MP3 and DVD players BIBREF22 , laptops BIBREF21 , etc. This has given rise to several resources, mostly word and phrase polarity lexicons, which have proven to be very valuable for their respective domains and types of texts, but less useful for short social media messages.",
"Later, with the emergence of social media, sentiment analysis in Twitter became a hot research topic. Unfortunately, research in that direction was hindered by the unavailability of suitable datasets and lexicons for system training, development, and testing. While some Twitter-specific resources were developed, initially they were either small and proprietary, such as the i-sieve corpus BIBREF6 , were created only for Spanish like the TASS corpus BIBREF23 , or relied on noisy labels obtained automatically, e.g., based on emoticons and hashtags BIBREF24 , BIBREF25 , BIBREF10 .",
"This situation changed with the shared task on Sentiment Analysis on Twitter, which was organized at SemEval, the International Workshop on Semantic Evaluation, a semantic evaluation forum previously known as SensEval. The task ran in 2013, 2014, 2015, and 2016, attracting over 40 participating teams in all four editions. While the focus was on general tweets, the task also featured out-of-domain testing on SMS messages, LiveJournal messages, as well as on sarcastic tweets.",
"SemEval-2013 Task 2 BIBREF26 and SemEval-2014 Task 9 BIBREF27 focused on expression-level and message-level polarity. SemEval-2015 Task 10 BIBREF28 , BIBREF29 featured topic-based message polarity classification on detecting trends toward a topic and on determining the out-of-context (a priori) strength of association of Twitter terms with positive sentiment. SemEval-2016 Task 4 BIBREF30 introduced a 5-point scale, which is used for human review ratings on popular websites such as Amazon, TripAdvisor, Yelp, etc.; from a research perspective, this meant moving from classification to ordinal regression. Moreover, it focused on quantification, i.e., determining what proportion of a set of tweets on a given topic are positive/negative about it. It also featured a 5-point scale ordinal quantification subtask BIBREF31 .",
"Other related tasks have explored aspect-based sentiment analysis BIBREF32 , BIBREF33 , BIBREF21 , sentiment analysis of figurative language on Twitter BIBREF34 , implicit event polarity BIBREF35 , stance in tweets BIBREF36 , out-of-context sentiment intensity of phrases BIBREF37 , and emotion detection BIBREF38 . Some of these tasks featured languages other than English."
],
[
"Tweet-level sentiment. The simplest and also the most popular task of sentiment analysis on Twitter is to determine the overall sentiment expressed by the author of a tweet BIBREF30 , BIBREF28 , BIBREF26 , BIBREF29 , BIBREF27 . Typically, this means choosing one of the following three classes to describe the sentiment: Positive, Negative, and Neutral. Here are some examples:",
"Positive: @nokia lumia620 cute and small and pocket-size, and available in the brigh test colours of day! #lumiacaption",
"Negative: I hate tweeting on my iPhone 5 it's so small :(",
"Neutral: If you work as a security in a samsung store...Does that make you guardian of the galaxy??",
"Sentiment polarity lexicons. Naturally, the overall sentiment in a tweet can be determined based on the sentiment-bearing words and phrases it contains as well as based on emoticons such as ;) and:(. For this purpose, researchers have been using lexicons of sentiment-bearing words. For example, cute is a positive word, while hate is a negative one, and the occurrence of these words in (1) and (2) can help determine the overall polarity of the respective tweet. We will discuss these lexicons in more detail below.",
"Prior sentiment polarity of multi-word phrases. Unfortunately, many sentiment-bearing words are not universally good or universally bad. For example, the polarity of an adjective could depend on the noun it modifies, e.g., hot coffee and unpredictable story express positive sentiment, while hot beer and unpredictable steering are negative. Thus, determining the out-of-context (a priori) strength of association of Twitter terms, especially multi-word terms, with positive/negative sentiment is an active research direction BIBREF28 , BIBREF29 .",
"Phrase-level polarity in context. Even when the target noun is the same, the polarity of the modifying adjective could be different in different tweets, e.g., small is positive in (1) but negative in (2), even though they both refer to a phone. Thus, there has been research in determining the sentiment polarity of a term in the context of a tweet BIBREF26 , BIBREF29 , BIBREF27 .",
"Sarcasm. Going back to tweet-level sentiment analysis, we should mention sarcastic tweets, which are particularly challenging as the sentiment they express is often the opposite of what the words they contain suggest BIBREF4 , BIBREF29 , BIBREF27 . For example, (4) and (5) express a negative sentiment even though they contain positive words and phrases such as thanks, love, and boosts my morale.",
"Negative: Thanks manager for putting me on the schedule for Sunday",
"Negative: I just love missing my train every single day. Really boosts my morale.",
"Sentiment toward a topic. Even though tweets are short, as they are limited to 140 characters by design (even though this was relaxed a bit as of September 19, 2016, and now media attachments such as images, videos, polls, etc., and quoted tweets no longer reduce the character count), they are still long enough to allow the tweet's author to mention several topics and to express potentially different sentiment toward each of them. A topic can be anything that people express opinions about, e.g., a product (e.g., iPhone6), a political candidate (e.g., Donald Trump), a policy (e.g., Obamacare), an event (e.g., Brexit), etc. For example, in (6) the author is positive about Donald Trump but negative about Hillary Clinton. A political analyzer would not be interested so much in the overall sentiment expressed in the tweet (even though one could argue that here it is positive overall), but rather in the sentiment with respect to a topic of his/her interest of study.",
"As a democrat I couldnt ethically support Hillary no matter who was running against her. Just so glad that its Trump, just love the guy!",
"(topic: Hillary INLINEFORM0 Negative)",
"(topic: Trump INLINEFORM0 Positive)",
"Aspect-based sentiment analysis. Looking again at (1) and (2), we can say that the sentiment is not about the phone (lumia620 and iPhone 5, respectively), but rather about some specific aspect thereof, namely, size. Similarly, in (7) instead of sentiment toward the topic lasagna, we can see sentiment toward two aspects thereof: quality (Positive sentiment) and quantity (Negative sentiment). Aspect-based sentiment analysis is an active research area BIBREF32 , BIBREF33 , BIBREF21 .",
"The lasagna is delicious but do not come here on an empty stomach.",
"Stance detection. A task related to, but arguably different in some respect from sentiment analysis, is that of stance detection. The goal here is to determine whether the author of a piece of text is in favor of, against, or neutral toward a proposition or a target BIBREF36 . For example, in (8) the author has a negative stance toward the proposition women have the right to abortion, even though the target is not mentioned at all. Similarly, in (9§) the author expresses a negative sentiment toward Mitt Romney, from which one can imply that s/he has a positive stance toward the target Barack Obama.",
"A foetus has rights too! Make your voice heard.",
"(Target: women have the right to abortion INLINEFORM0 Against)",
"All Mitt Romney cares about is making money for the rich.",
"(Target: Barack Obama INLINEFORM0 InFavor)",
"Ordinal regression. The above tasks were offered in different granularities, e.g., 2-way (Positive, Negative), 3-way (Positive, Neutral, Negative), 4-way (Positive, Neutral, Negative, Objective), 5-way (HighlyPositive, Positive, Neutral, Negative, HighlyNegative), and sometimes even 11-way BIBREF34 . It is important to note that the 5-way and the 11-way scales are ordinal, i.e., the classes can be associated with numbers, e.g., INLINEFORM0 2, INLINEFORM1 1, 0, 1, and 2 for the 5-point scale. This changes the machine learning task as not all mistakes are equal anymore BIBREF16 . For example, misclassifying a HighlyNegative example as HighlyPositive is a bigger mistake than misclassifying it as Negative or as Neutral. From a machine learning perspective, this means moving from classification to ordinal regression. This also requires different evaluation measures BIBREF30 .",
"Quantification. Practical applications are hardly ever interested in the sentiment expressed in a specific tweet. Rather, they look at estimating the prevalence of positive and negative tweets about a given topic in a set of tweets from some time interval. Most (if not all) tweet sentiment classification studies conducted within political science BIBREF39 , BIBREF40 , BIBREF41 , economics BIBREF42 , BIBREF7 , social science BIBREF43 , and market research BIBREF44 , BIBREF45 use Twitter with an interest in aggregate data and not in individual classifications. Thus, some tasks, such as SemEval-2016 Task 4 BIBREF30 , replace classification with class prevalence estimation, which is also known as quantification in data mining and related fields. Note that quantification is not a mere byproduct of classification, since a good classifier is not necessarily a good quantifier, and vice versa BIBREF46 . Finally, in case of multiple labels on an ordinal scale, we have yet another machine learning problem: ordinal quantification. Both versions of quantification require specific evaluation measures and machine learning algorithms."
],
[
"Pre-processing. Tweets are subject to standard preprocessing steps for text such as tokenization, stemming, lemmatization, stop-word removal, and part-of-speech tagging. Moreover, due to their noisy nature, they are also processed using some Twitter-specific techniques such as substitution/removal of URLs, of user mentions, of hashtags, and of emoticons, spelling correction, elongation normalization, abbreviation lookup, punctuation removal, detection of amplifiers and diminishers, negation scope detection, etc. For this, one typically uses Twitter-specific NLP tools such as part-of-speech and named entity taggers, syntactic parsers, etc. BIBREF47 , BIBREF48 , BIBREF49 .",
"Negation handling. Special handling is also done for negation. The most popular approach to negation handling is to transform any word that appeared in a negation context by adding a suffix _NEG to it, e.g., good would become good_NEG BIBREF50 , BIBREF10 . A negated context is typically defined as a text span between a negation word, e.g., no, not, shouldn't, and a punctuation mark or the end of the message. Alternatively, one could flip the polarity of sentiment words, e.g., the positive word good would become negative when negated. It has also been argued BIBREF51 that negation affects different words differently, and thus it was also proposed to build and use special sentiment polarity lexicons for words in negation contexts BIBREF52 .",
"Features. Traditionally, systems for Sentiment Analysis on Twitter have relied on handcrafted features derived from word-level (e.g., great, freshly roasted coffee, becoming president) and character-level INLINEFORM0 -grams (e.g., bec, beco, comin, oming), stems (e.g., becom), lemmata (e.g., become, roast), punctuation (e.g., exclamation and question marks), part-of-speech tags (e.g., adjectives, adverbs, verbs, nouns), word clusters (e.g., probably, probly, and maybe could be collapsed to the same word cluster), and Twitter-specific encodings such as emoticons (e.g., ;), :D), hashtags (#Brexit), user tags (e.g., @allenai_org), abbreviations (e.g., RT, BTW, F2F, OMG), elongated words (e.g., soooo, yaayyy), use of capitalization (e.g., proportion of ALL CAPS words), URLs, etc. Finally, the most important features are those based on the presence of words and phrases in sentiment polarity lexicons with positive/negative scores; examples of such features include number of positive terms, number of negative terms, ratio of the number of positive terms to the number of positive+negative terms, ratio of the number of negative terms to the number of positive+negative terms, sum of all positive scores, sum of all negative scores, sum of all scores, etc.",
"Supervised learning. Traditionally, the above features were fed into classifiers such as Maximum Entropy (MaxEnt) and Support Vector Machines (SVM) with various kernels. However, observation over the SemEval Twitter sentiment task in recent years shows growing interest in, and by now clear dominance of methods based on deep learning. In particular, the best-performing systems at SemEval-2015 and SemEval-2016 used deep convolutional networks BIBREF53 , BIBREF54 . Conversely, kernel machines seem to be less frequently used than in the past, and the use of learning methods other than the ones mentioned above is at this point scarce. All these models are examples of supervised learning as they need labeled training data.",
"Semi-supervised learning. We should note two things about the use of deep neural networks. First they can often do quite well without the need for explicit feature modeling, as they can learn the relevant features in their hidden layers starting from the raw text. Second, they have too many parameters, and thus they require a lot of training data, orders of magnitude more than it is realistic to have manually annotated. A popular way to solve this latter problem is to use self training, a form of semi-supervised learning, where first a system is trained on the available training data only, then this system is applied to make predictions on a large unannotated set of tweets, and finally it is trained for a few more iterations on its own predictions. This works because parts of the network, e.g., with convolution or with LSTMs BIBREF55 , BIBREF54 , BIBREF56 , need to learn something like a language model, i.e., which word is likely to follow which one. Training these parts needs no labels. While these parts can be also pre-trained, it is easier, and often better, to use self training.",
"Distantly-supervised learning. Another way to make use of large unannotated datasets is to rely on distant supervision BIBREF41 . For example, one can annotate tweets for sentiment polarity based on whether they contain a positive or a negative emoticon. This results in noisy labels, which can be used to train a system BIBREF54 , to induce sentiment-specific word embeddings BIBREF57 , sentiment-polarity lexicons BIBREF25 , etc.",
"Unsupervised learning. Fully unsupervised learning is not a popular method for addressing sentiment analysis tasks. Yet, some features used in sentiment analysis have been learned in an unsupervised way, e.g., Brown clusters to generalize over words BIBREF58 . Similarly, word embeddings are typically trained from raw tweets that have no annotation for sentiment (even though there is also work on sentiment-specific word embeddings BIBREF57 , which uses distant supervision)."
],
[
"Despite the wide variety of knowledge sources explored so far in the literature, sentiment polarity lexicons remain the most commonly used resource for the task of sentiment analysis.",
"Until recently, such sentiment polarity lexicons were manually crafted and were thus of small to moderate size, e.g., LIWC BIBREF59 has 2,300 words, the General Inquirer BIBREF60 contains 4,206 words, Bing Liu's lexicon BIBREF22 includes 6,786 words, and MPQA BIBREF14 has about 8,000 words.",
"Early efforts toward building sentiment polarity lexicons automatically yielded lexicons of moderate sizes such as the SentiWordNet BIBREF19 , BIBREF61 . However, recent results have shown that automatically extracted large-scale lexicons (e.g., up to a million words and phrases) offer important performance advantages, as confirmed at shared tasks on Sentiment Analysis on Twitter at SemEval 2013-2016 BIBREF30 , BIBREF26 , BIBREF29 , BIBREF27 . Using such large-scale lexicons was crucial for the performance of the top-ranked systems. Similar observations were made in the related Aspect-Based Sentiment Analysis task at SemEval 2014 BIBREF21 . In both tasks, the winning systems benefitted from building and using massive sentiment polarity lexicons BIBREF25 , BIBREF62 .",
"The two most popular large-scale lexicons were the Hashtag Sentiment Lexicon and the Sentiment140 lexicon, which were developed by the team of NRC Canada for their participation in the SemEval-2013 shared task on sentiment analysis on Twitter. Similar automatically induced lexicons proved useful for other SemEval tasks, e.g., for SemEval-2016 Task 3 on Community Question Answering BIBREF63 , BIBREF30 .",
"The importance of building sentiment polarity lexicons has resulted in a special subtask BIBREF29 at SemEval-2015 (part of Task 4) and an entire task BIBREF37 at SemEval-2016 (namely, Task 7), on predicting the out-of-context sentiment intensity of words and phrases. Yet, we should note though that the utility of using sentiment polarity lexicons for sentiment analysis probably needs to be revisited, as the best system at SemEval-2016 Task 4 could win without using any lexicons BIBREF53 ; it relied on semi-supervised learning using a deep neural network.",
"Various approaches have been proposed in the literature for bootstrapping sentiment polarity lexicons starting from a small set of seeds: positive and negative terms (words and phrases). The dominant approach is that of Turney BIBREF12 , who uses pointwise mutual information and bootstrapping to build a large lexicon and to estimate the semantic orientation of each word in that lexicon. He starts with a small set of seed positive (e.g., excellent) and negative words (e.g., bad), and then uses these words to induce sentiment polarity orientation for new words in a large unannotated set of texts (in his case, product reviews). The idea is that words that co-occur in the same text with positive seed words are likely to be positive, while those that tend to co-occur with negative words are likely to be negative. To quantify this intuition, Turney defines the notion of sentiment orientation (SO) for a term INLINEFORM0 as follows:",
" INLINEFORM0 ",
"where PMI is the pointwise mutual information, INLINEFORM0 and INLINEFORM1 are placeholders standing for any of the seed positive and negative terms, respectively, and INLINEFORM2 is a target word/phrase from the large unannotated set of texts (here tweets).",
"A positive/negative value for INLINEFORM0 indicates positive/negative polarity for the word INLINEFORM1 , and its magnitude shows the corresponding sentiment strength. In turn, INLINEFORM2 , where INLINEFORM3 is the probability to see INLINEFORM4 with any of the seed positive words in the same tweet, INLINEFORM5 is the probability to see INLINEFORM6 in any tweet, and INLINEFORM7 is the probability to see any of the seed positive words in a tweet; INLINEFORM8 is defined similarly.",
"The pointwise mutual information is a notion from information theory: given two random variables INLINEFORM0 and INLINEFORM1 , the mutual information of INLINEFORM2 and INLINEFORM3 is the “amount of information” (in units such as bits) obtained about the random variable INLINEFORM4 , through the random variable INLINEFORM5 BIBREF64 .",
"Let INLINEFORM0 and INLINEFORM1 be two values from the sample space of INLINEFORM2 and INLINEFORM3 , respectively. The pointwise mutual information between INLINEFORM4 and INLINEFORM5 is defined as follows: DISPLAYFORM0 ",
" INLINEFORM0 takes values between INLINEFORM1 , which happens when INLINEFORM2 = 0, and INLINEFORM3 if INLINEFORM4 .",
"In his experiments, Turney BIBREF12 used five positive and five negative words as seeds. His PMI-based approach further served as the basis for the creation of the two above-mentioned large-scale automatic lexicons for sentiment analysis in Twitter for English, initially developed by NRC for their participation in SemEval-2013 BIBREF25 . The Hashtag Sentiment Lexicon uses as seeds hashtags containing 32 positive and 36 negative words, e.g., #happy and #sad. Similarly, the Sentiment140 lexicon uses smileys as seed indicators for positive and negative sentiment, e.g., :), :-), and :)) as positive seeds, and :( and :-( as negative ones.",
"An alternative approach to lexicon induction has been proposed BIBREF65 , which, instead of using PMI, assigns positive/negative labels to the unlabeled tweets (based on the seeds), and then trains an SVM classifier on them, using word INLINEFORM0 -grams as features. These INLINEFORM1 -grams are then used as lexicon entries (words and phrases) with the learned classifier weights as polarity scores. Finally, it has been shown that sizable further performance gains can be obtained by starting with mid-sized seeds, i.e., hundreds of words and phrases BIBREF66 ."
],
[
"Sentiment analysis on Twitter has applications in a number of areas, including political science BIBREF39 , BIBREF40 , BIBREF41 , economics BIBREF42 , BIBREF7 , social science BIBREF43 , and market research BIBREF44 , BIBREF45 . It is used to study company reputation online BIBREF45 , to measure customer satisfaction, to identify detractors and promoters, to forecast market growth BIBREF42 , to predict the future income from newly-released movies, to forecast the outcome of upcoming elections BIBREF41 , BIBREF7 , to study political polarization BIBREF39 , BIBREF9 , etc."
],
[
"We expect the quest for more interesting formulations of the general sentiment analysis task to continue. We see competitions such as those at SemEval as the engine of this innovation, as they not only perform head-to-head comparisons, but also create databases and tools that enable follow-up research for many years afterward.",
"In terms of methods, we believe that deep learning BIBREF55 , BIBREF54 , BIBREF56 , together with semi-supervised and distantly-supervised methods BIBREF67 , BIBREF57 , will be the main focus of future research. We also expect more attention to be paid to linguistic structure and sentiment compositionality BIBREF68 , BIBREF69 . Moreover, we forecast more interest for languages other than English, and for cross-lingual methods BIBREF40 , BIBREF70 , BIBREF71 , which will allow leveraging on the rich resources that are already available for English. Last, but not least, the increase in opinion spam on Twitter will make it important to study astroturfing BIBREF72 and troll detection BIBREF73 , BIBREF74 , BIBREF75 ."
],
[
"Microblog Sentiment Analysis 100590",
"Multi-classifier System for Sentiment Analysis and Opinion Mining 351",
"Sentiment Analysis in Social Media 120",
"Sentiment Analysis of Microblogging Data 110168",
"Sentiment Analysis of Reviews 110169",
"Sentiment Analysis, Basics of 110159",
"Sentiment Quantification of User-Generated Content 110170",
"Social Media Analysis for Monitoring Political Sentiment 110172",
"Twitter Microblog Sentiment Analysis 265",
"User Sentiment and Opinion Analysis 192"
],
[
"For general research on sentiment analysis, we recommend the following surveys: BIBREF76 and BIBREF15 . For sentiment analysis on Twitter, we recommend the overview article on Sentiment Analysis on Twitter about the SemEval task BIBREF28 as well as the task description papers for different editions of the task BIBREF30 , BIBREF26 , BIBREF29 , BIBREF27 ."
]
],
"section_name": [
"Synonyms",
"Glossary",
"Definition",
"Introduction",
"Key Points",
"Historical Background",
"Variants of the Task at SemEval",
"Features and Learning",
"Sentiment Polarity Lexicons",
"Key Applications",
"Future Directions",
"Cross-References",
"Recommended Reading"
]
} | {
"answers": [
{
"annotation_id": [
"eaa2871ebfa0e132a84ca316dee33a4e45c9aba9"
],
"answer": [
{
"evidence": [
"Supervised learning. Traditionally, the above features were fed into classifiers such as Maximum Entropy (MaxEnt) and Support Vector Machines (SVM) with various kernels. However, observation over the SemEval Twitter sentiment task in recent years shows growing interest in, and by now clear dominance of methods based on deep learning. In particular, the best-performing systems at SemEval-2015 and SemEval-2016 used deep convolutional networks BIBREF53 , BIBREF54 . Conversely, kernel machines seem to be less frequently used than in the past, and the use of learning methods other than the ones mentioned above is at this point scarce. All these models are examples of supervised learning as they need labeled training data."
],
"extractive_spans": [
"deep convolutional networks BIBREF53 , BIBREF54"
],
"free_form_answer": "",
"highlighted_evidence": [
" In particular, the best-performing systems at SemEval-2015 and SemEval-2016 used deep convolutional networks BIBREF53 , BIBREF54 "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"71753531f52e1fc8ce0c1059d14979d0e723fff8"
],
"answer": [
{
"evidence": [
"Pre-processing. Tweets are subject to standard preprocessing steps for text such as tokenization, stemming, lemmatization, stop-word removal, and part-of-speech tagging. Moreover, due to their noisy nature, they are also processed using some Twitter-specific techniques such as substitution/removal of URLs, of user mentions, of hashtags, and of emoticons, spelling correction, elongation normalization, abbreviation lookup, punctuation removal, detection of amplifiers and diminishers, negation scope detection, etc. For this, one typically uses Twitter-specific NLP tools such as part-of-speech and named entity taggers, syntactic parsers, etc. BIBREF47 , BIBREF48 , BIBREF49 .",
"Despite all these opportunities, the rise of social media has also presented new challenges for natural language processing (NLP) applications, which had largely relied on NLP tools tuned for formal text genres such as newswire, and thus were not readily applicable to the informal language and style of social media. That language proved to be quite challenging with its use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, e.g., RT for re-tweet and #hashtags. In addition to the genre difference, there is also a difference in length: social media messages are generally short, often length-limited by design as in Twitter, i.e., a sentence or a headline rather than a full document. How to handle such challenges has only recently been the subject of thorough research BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 ."
],
"extractive_spans": [],
"free_form_answer": "Tweets noisy nature, use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, short (length limited) text",
"highlighted_evidence": [
" Moreover, due to their noisy nature, they are also processed using some Twitter-specific techniques such as substitution/removal of URLs, of user mentions, of hashtags, and of emoticons, spelling correction, elongation normalization, abbreviation lookup, punctuation removal, detection of amplifiers and diminishers, negation scope detection, etc.",
"That language proved to be quite challenging with its use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, e.g., RT for re-tweet and #hashtags. In addition to the genre difference, there is also a difference in length: social media messages are generally short, often length-limited by design as in Twitter, i.e., a sentence or a headline rather than a full document"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"021b8796d9378d1be927a2a74d587f9f64b7082e"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What is the current SOTA for sentiment analysis on Twitter at the time of writing?",
"What difficulties does sentiment analysis on Twitter have, compared to sentiment analysis in other domains?",
"What are the metrics to evaluate sentiment analysis on Twitter?"
],
"question_id": [
"fa3663567c48c27703e09c42930e51bacfa54905",
"7997b9971f864a504014110a708f215c84815941",
"0d1408744651c3847469c4a005e4a9dccbd89cf1"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"irony",
"irony",
"irony"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [],
"file": []
} | [
"What difficulties does sentiment analysis on Twitter have, compared to sentiment analysis in other domains?"
] | [
[
"1710.01492-Introduction-3",
"1710.01492-Features and Learning-0"
]
] | [
"Tweets noisy nature, use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, short (length limited) text"
] | 91 |
1912.01673 | COSTRA 1.0: A Dataset of Complex Sentence Transformations | COSTRA 1.0 is a dataset of Czech complex sentence transformations. The dataset is intended for the study of sentence-level embeddings beyond simple word alternations or standard paraphrasing. ::: The dataset consist of 4,262 unique sentences with average length of 10 words, illustrating 15 types of modifications such as simplification, generalization, or formal and informal language variation. ::: The hope is that with this dataset, we should be able to test semantic properties of sentence embeddings and perhaps even to find some topologically interesting “skeleton” in the sentence embedding space. | {
"paragraphs": [
[
"Vector representations are becoming truly essential in majority of natural language processing tasks. Word embeddings became widely popular with the introduction of word2vec BIBREF0 and GloVe BIBREF1 and their properties have been analyzed in length from various aspects.",
"Studies of word embeddings range from word similarity BIBREF2, BIBREF3, over the ability to capture derivational relations BIBREF4, linear superposition of multiple senses BIBREF5, the ability to predict semantic hierarchies BIBREF6 or POS tags BIBREF7 up to data efficiency BIBREF8.",
"Several studies BIBREF9, BIBREF10, BIBREF11, BIBREF12 show that word vector representations are capable of capturing meaningful syntactic and semantic regularities. These include, for example, male/female relation demonstrated by the pairs “man:woman”, “king:queen” and the country/capital relation (“Russia:Moscow”, “Japan:Tokyo”). These regularities correspond to simple arithmetic operations in the vector space.",
"Sentence embeddings are becoming equally ubiquitous in NLP, with novel representations appearing almost every other week. With an overwhelming number of methods to compute sentence vector representations, the study of their general properties becomes difficult. Furthermore, it is not so clear in which way the embeddings should be evaluated.",
"In an attempt to bring together more traditional representations of sentence meanings and the emerging vector representations, bojar:etal:jnle:representations:2019 introduce a number of aspects or desirable properties of sentence embeddings. One of them is denoted as “relatability”, which highlights the correspondence between meaningful differences between sentences and geometrical relations between their respective embeddings in the highly dimensional continuous vector space. If such a correspondence could be found, we could use geometrical operations in the space to induce meaningful changes in sentences.",
"In this work, we present COSTRA, a new dataset of COmplex Sentence TRAnsformations. In its first version, the dataset is limited to sample sentences in Czech. The goal is to support studies of semantic and syntactic relations between sentences in the continuous space. Our dataset is the prerequisite for one of possible ways of exploring sentence meaning relatability: we envision that the continuous space of sentences induced by an ideal embedding method would exhibit topological similarity to the graph of sentence variations. For instance, one could argue that a subset of sentences could be organized along a linear scale reflecting the formalness of the language used. Another set of sentences could form a partially ordered set of gradually less and less concrete statements. And yet another set, intersecting both of the previous ones in multiple sentences could be partially or linearly ordered according to the strength of the speakers confidence in the claim.",
"Our long term goal is to search for an embedding method which exhibits this behaviour, i.e. that the topological map of the embedding space corresponds to meaningful operations or changes in the set of sentences of a language (or more languages at once). We prefer this behaviour to emerge, as it happened for word vector operations, but regardless if the behaviour is emergent or trained, we need a dataset of sentences illustrating these patterns. If large enough, such a dataset could serve for training. If it will be smaller, it will provide a test set. In either case, these sentences could provide a “skeleton” to the continuous space of sentence embeddings.",
"The paper is structured as follows: related summarizes existing methods of sentence embeddings evaluation and related work. annotation describes our methodology for constructing our dataset. data details the obtained dataset and some first observations. We conclude and provide the link to the dataset in conclusion"
],
[
"As hinted above, there are many methods of converting a sequence of words into a vector in a highly dimensional space. To name a few: BiLSTM with the max-pooling trained for natural language inference BIBREF13, masked language modeling and next sentence prediction using bidirectional Transformer BIBREF14, max-pooling last states of neural machine translation among many languages BIBREF15 or the encoder final state in attentionless neural machine translation BIBREF16.",
"The most common way of evaluating methods of sentence embeddings is extrinsic, using so called `transfer tasks', i.e. comparing embeddings via the performance in downstream tasks such as paraphrasing, entailment, sentence sentiment analysis, natural language inference and other assignments. However, even simple bag-of-words (BOW) approaches achieve often competitive results on such tasks BIBREF17.",
"Adi16 introduce intrinsic evaluation by measuring the ability of models to encode basic linguistic properties of a sentence such as its length, word order, and word occurrences. These so called `probing tasks' are further extended by a depth of the syntactic tree, top constituent or verb tense by DBLP:journals/corr/abs-1805-01070.",
"Both transfer and probing tasks are integrated in SentEval BIBREF18 framework for sentence vector representations. Later, Perone2018 applied SentEval to eleven different encoding methods revealing that there is no consistently well performing method across all tasks. SentEval was further criticized for pitfalls such as comparing different embedding sizes or correlation between tasks BIBREF19, BIBREF20.",
"shi-etal-2016-string show that NMT encoder is able to capture syntactic information about the source sentence. DBLP:journals/corr/BelinkovDDSG17 examine the ability of NMT to learn morphology through POS and morphological tagging.",
"Still, very little is known about semantic properties of sentence embeddings. Interestingly, cifka:bojar:meanings:2018 observe that the better self-attention embeddings serve in NMT, the worse they perform in most of SentEval tasks.",
"zhu-etal-2018-exploring generate automatically sentence variations such as:",
"Original sentence: A rooster pecked grain.",
"Synonym Substitution: A cock pecked grain.",
"Not-Negation: A rooster didn't peck grain.",
"Quantifier-Negation: There was no rooster pecking grain.",
"and compare their triplets by examining distances between their embeddings, i.e. distance between (1) and (2) should be smaller than distances between (1) and (3), (2) and (3), similarly, (3) and (4) should be closer together than (1)–(3) or (1)–(4).",
"In our previous study BIBREF21, we examined the effect of small sentence alternations in sentence vector spaces. We used sentence pairs automatically extracted from datasets for natural language inference BIBREF22, BIBREF23 and observed, that the simple vector difference, familiar from word embeddings, serves reasonably well also in sentence embedding spaces. The examined relations were however very simple: a change of gender, number, addition of an adjective, etc. The structure of the sentence and its wording remained almost identical.",
"We would like to move to more interesting non-trivial sentence comparison, beyond those in zhu-etal-2018-exploring or BaBo2019, such as change of style of a sentence, the introduction of a small modification that drastically changes the meaning of a sentence or reshuffling of words in a sentence that alters its meaning.",
"Unfortunately, such a dataset cannot be generated automatically and it is not available to our best knowledge. We try to start filling this gap with COSTRA 1.0."
],
[
"We acquired the data in two rounds of annotation. In the first one, we were looking for original and uncommon sentence change suggestions. In the second one, we collected sentence alternations using ideas from the first round. The first and second rounds of annotation could be broadly called as collecting ideas and collecting data, respectively."
],
[
"We manually selected 15 newspaper headlines. Eleven annotators were asked to modify each headline up to 20 times and describe the modification with a short name. They were given an example sentence and several of its possible alternations, see tab:firstroundexamples.",
"Unfortunately, these examples turned out to be highly influential on the annotators' decisions and they correspond to almost two thirds of all of modifications gathered in the first round. Other very common transformations include change of a word order or transformation into a interrogative/imperative sentence.",
"Other interesting modification were also proposed such as change into a fairy-tale style, excessive use of diminutives/vulgarisms or dadaism—a swap of roles in the sentence so that the resulting sentence is grammatically correct but nonsensical in our world. Of these suggestions, we selected only the dadaistic swap of roles for the current exploration (see nonsense in Table TABREF7).",
"In total, we collected 984 sentences with 269 described unique changes. We use them as an inspiration for second round of annotation."
],
[
"We selected 15 modifications types to collect COSTRA 1.0. They are presented in annotationinstructions.",
"We asked for two distinct paraphrases of each sentence because we believe that a good sentence embedding should put paraphrases close together in vector space.",
"Several modification types were specifically selected to constitute a thorough test of embeddings. In different meaning, the annotators should create a sentence with some other meaning using the same words as the original sentence. Other transformations which should be difficult for embeddings include minimal change, in which the sentence meaning should be significantly changed by using only very small modification, or nonsense, in which words of the source sentence should be shuffled so that it is grammatically correct, but without any sense."
],
[
"The source sentences for annotations were selected from Czech data of Global Voices BIBREF24 and OpenSubtitles BIBREF25. We used two sources in order to have different styles of seed sentences, both journalistic and common spoken language. We considered only sentences with more than 5 and less than 15 words and we manually selected 150 of them for further annotation. This step was necessary to remove sentences that are:",
"too unreal, out of this world, such as:",
"Jedno fotonový torpédo a je z tebe vesmírná topinka.",
"“One photon torpedo and you're a space toast.”",
"photo captions (i.e. incomplete sentences), e.g.:",
"Zvláštní ekvádorský případ Correa vs. Crudo",
"“Specific Ecuadorian case Correa vs. Crudo”",
"too vague, overly dependent on the context:",
"Běž tam a mluv na ni.",
"“Go there and speak to her.”",
"Many of the intended sentence transformations would be impossible to apply to such sentences and annotators' time would be wasted. Even after such filtering, it was still quite possible that a desired sentence modification could not be achieved for a sentence. For such a case, we gave the annotators the option to enter the keyword IMPOSSIBLE instead of the particular (impossible) modification.",
"This option allowed to explicitly state that no such transformation is possible. At the same time most of the transformations are likely to lead to a large number possible outcomes. As documented in scratching2013, Czech sentence might have hundreds of thousand of paraphrases. To support some minimal exploration of this possible diversity, most of sentences were assigned to several annotators."
],
[
"The annotation is a challenging task and the annotators naturally make mistakes. Unfortunately, a single typo can significantly influence the resulting embedding BIBREF26. After collecting all the sentence variations, we applied the statistical spellchecker and grammar checker Korektor BIBREF27 in order to minimize influence of typos to performance of embedding methods. We manually inspected 519 errors identified by Korektor and fixed 129, which were identified correctly."
],
[
"In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tab:statistics.",
"The time needed to carry out one piece of annotation (i.e. to provide one seed sentence with all 15 transformations) was on average almost 20 minutes but some annotators easily needed even half an hour. Out of the 4262 distinct sentences, only 188 was recorded more than once. In other words, the chance of two annotators producing the same output string is quite low. The most repeated transformations are by far past, future and ban. The least repeated is paraphrase with only single one repeated.",
"multiple-annots documents this in another way. The 293 annotations are split into groups depending on how many annotators saw the same input sentence: 30 annotations were annotated by one person only, 30 annotations by two different persons etc. The last column shows the number of unique outputs obtained in that group. Across all cases, 96.8% of produced strings were unique.",
"In line with instructions, the annotators were using the IMPOSSIBLE option scarcely (95 times, i.e. only 2%). It was also a case of 7 annotators only; the remaining 5 annotators were capable of producing all requested transformations. The top three transformations considered unfeasible were different meaning (using the same set of words), past (esp. for sentences already in the past tense) and simple sentence."
],
[
"We embedded COSTRA sentences with LASER BIBREF15, the method that performed very well in revealing linear relations in BaBo2019. Having browsed a number of 2D visualizations (PCA and t-SNE) of the space, we have to conclude that visually, LASER space does not seem to exhibit any of the desired topological properties discussed above, see fig:pca for one example.",
"The lack of semantic relations in the LASER space is also reflected in vector similarities, summarized in similarities. The minimal change operation substantially changed the meaning of the sentence, and yet the embedding of the transformation lies very closely to the original sentence (average similarity of 0.930). Tense changes and some form of negation or banning also keep the vectors very similar.",
"The lowest average similarity was observed for generalization (0.739) and simplification (0.781), which is not any bad sign. However the fact that paraphrases have much smaller similarity (0.826) than opposite meaning (0.902) documents that the vector space lacks in terms of “relatability”."
],
[
"We presented COSTRA 1.0, a small corpus of complex transformations of Czech sentences.",
"We plan to use this corpus to analyze a wide spectrum sentence embeddings methods to see to what extent the continuous space they induce reflects semantic relations between sentences in our corpus. The very first analysis using LASER embeddings indicates lack of “meaning relatability”, i.e. the ability to move along a trajectory in the space in order to reach desired sentence transformations. Actually, not even paraphrases are found in close neighbourhoods of embedded sentences. More “semantic” sentence embeddings methods are thus to be sought for.",
"The corpus is freely available at the following link:",
"http://hdl.handle.net/11234/1-3123",
"Aside from extending the corpus in Czech and adding other language variants, we are also considering to wrap COSTRA 1.0 into an API such as SentEval, so that it is very easy for researchers to evaluate their sentence embeddings in terms of “relatability”."
]
],
"section_name": [
"Introduction",
"Background",
"Annotation",
"Annotation ::: First Round: Collecting Ideas",
"Annotation ::: Second Round: Collecting Data ::: Sentence Transformations",
"Annotation ::: Second Round: Collecting Data ::: Seed Data",
"Annotation ::: Second Round: Collecting Data ::: Spell-Checking",
"Dataset Description",
"Dataset Description ::: First Observations",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"0259888535c15dba7d2d5de40c53adb8dee11971"
],
"answer": [
{
"evidence": [
"In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tab:statistics."
],
"extractive_spans": [],
"free_form_answer": "27.41 transformation on average of single seed sentence is available in dataset.",
"highlighted_evidence": [
"After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ccd5497747bba7fc7db7b20a4f6e4b3bdd72e410"
],
"answer": [
{
"evidence": [
"We asked for two distinct paraphrases of each sentence because we believe that a good sentence embedding should put paraphrases close together in vector space.",
"Several modification types were specifically selected to constitute a thorough test of embeddings. In different meaning, the annotators should create a sentence with some other meaning using the same words as the original sentence. Other transformations which should be difficult for embeddings include minimal change, in which the sentence meaning should be significantly changed by using only very small modification, or nonsense, in which words of the source sentence should be shuffled so that it is grammatically correct, but without any sense."
],
"extractive_spans": [],
"free_form_answer": "For each source sentence, transformation sentences that are transformed according to some criteria (paraphrase, minimal change etc.)",
"highlighted_evidence": [
"We asked for two distinct paraphrases of each sentence because we believe that a good sentence embedding should put paraphrases close together in vector space.\n\nSeveral modification types were specifically selected to constitute a thorough test of embeddings. In different meaning, the annotators should create a sentence with some other meaning using the same words as the original sentence. Other transformations which should be difficult for embeddings include minimal change, in which the sentence meaning should be significantly changed by using only very small modification, or nonsense, in which words of the source sentence should be shuffled so that it is grammatically correct, but without any sense."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2d057fce8922ab961ff70f7564f6b6d9a96c93e8"
],
"answer": [
{
"evidence": [
"In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tab:statistics."
],
"extractive_spans": [],
"free_form_answer": "Yes, as new sentences.",
"highlighted_evidence": [
"In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"656c8738231070b03ee6902ad1d3370b9baf283c"
],
"answer": [
{
"evidence": [
"We selected 15 modifications types to collect COSTRA 1.0. They are presented in annotationinstructions.",
"FLOAT SELECTED: Table 2: Sentences transformations requested in the second round of annotation with the instructions to the annotators. The annotators were given no examples (with the exception of nonsense) not to be influenced as much as in the first round."
],
"extractive_spans": [],
"free_form_answer": "- paraphrase 1\n- paraphrase 2\n- different meaning\n- opposite meaning\n- nonsense\n- minimal change\n- generalization\n- gossip\n- formal sentence\n- non-standard sentence\n- simple sentence\n- possibility\n- ban\n- future\n- past",
"highlighted_evidence": [
"We selected 15 modifications types to collect COSTRA 1.0. They are presented in annotationinstructions.",
"FLOAT SELECTED: Table 2: Sentences transformations requested in the second round of annotation with the instructions to the annotators. The annotators were given no examples (with the exception of nonsense) not to be influenced as much as in the first round."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"684548df1af075ac0ccea74e6955d72d24f5f553"
],
"answer": [
{
"evidence": [
"The corpus is freely available at the following link:",
"http://hdl.handle.net/11234/1-3123"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus is freely available at the following link:\n\nhttp://hdl.handle.net/11234/1-3123"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"81c2cdb6b03a7dca137cea7d19912636c332c2b3"
],
"answer": [
{
"evidence": [
"We embedded COSTRA sentences with LASER BIBREF15, the method that performed very well in revealing linear relations in BaBo2019. Having browsed a number of 2D visualizations (PCA and t-SNE) of the space, we have to conclude that visually, LASER space does not seem to exhibit any of the desired topological properties discussed above, see fig:pca for one example."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We embedded COSTRA sentences with LASER BIBREF15, the method that performed very well in revealing linear relations in BaBo2019."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ad359795e78244cb903c71c375f97649e496bea1"
],
"answer": [
{
"evidence": [
"The lack of semantic relations in the LASER space is also reflected in vector similarities, summarized in similarities. The minimal change operation substantially changed the meaning of the sentence, and yet the embedding of the transformation lies very closely to the original sentence (average similarity of 0.930). Tense changes and some form of negation or banning also keep the vectors very similar."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The minimal change operation substantially changed the meaning of the sentence, and yet the embedding of the transformation lies very closely to the original sentence (average similarity of 0.930)."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"384b2c6628c987547369f4c442bf19c759b7631c"
],
"answer": [
{
"evidence": [
"We acquired the data in two rounds of annotation. In the first one, we were looking for original and uncommon sentence change suggestions. In the second one, we collected sentence alternations using ideas from the first round. The first and second rounds of annotation could be broadly called as collecting ideas and collecting data, respectively."
],
"extractive_spans": [
" we were looking for original and uncommon sentence change suggestions"
],
"free_form_answer": "",
"highlighted_evidence": [
"We acquired the data in two rounds of annotation. In the first one, we were looking for original and uncommon sentence change suggestions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e7b99e8d5fb7b4623f4c43da91e6ce3cbfa550ff"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How many sentence transformations on average are available per unique sentence in dataset?",
"What annotations are available in the dataset?",
"How are possible sentence transformations represented in dataset, as new sentences?",
"What are all 15 types of modifications ilustrated in the dataset?",
"Is this dataset publicly available?",
"Are some baseline models trained on this dataset?",
"Do they do any analysis of of how the modifications changed the starting set of sentences?",
"How do they introduce language variation?",
"Do they use external resources to make modifications to sentences?"
],
"question_id": [
"a3d83c2a1b98060d609e7ff63e00112d36ce2607",
"aeda22ae760de7f5c0212dad048e4984cd613162",
"d5fa26a2b7506733f3fa0973e2fe3fc1bbd1a12d",
"2d536961c6e1aec9f8491e41e383dc0aac700e0a",
"18482658e0756d69e39a77f8fcb5912545a72b9b",
"9d336c4c725e390b6eba8bb8fe148997135ee981",
"016b59daa84269a93ce821070f4f5c1a71752a8a",
"771b373d09e6eb50a74fffbf72d059ad44e73ab0",
"efb52bda7366d2b96545cf927f38de27de3b5b77"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Examples of transformations given to annotators for the source sentence Several hunters slept on a clearing. The third column shows how many of all the transformation suggestions collected in the first round closely mimic the particular example. The number is approximate as annotators typically call one transformation by several names, e.g. less formally, formality diminished, decrease of formality, not formal expressions, non-formal, less formal, formality decreased, ...",
"Table 2: Sentences transformations requested in the second round of annotation with the instructions to the annotators. The annotators were given no examples (with the exception of nonsense) not to be influenced as much as in the first round.",
"Table 3: Statistics for individual annotators (anonymized as armadillo, . . . , capybara).",
"Table 4: The number of people annotating the same sentence. Most of the sentences have at least three different annotators. Unfortunately, 24 sentences were left without a single annotation.",
"Table 5: Average cosine similarity between the seed sentence and its transformation.",
"Figure 1: 2D visualization using PCA of a single annotation. Best viewed in colors. Every color corresponds to one type of transformation, the large dot represents the source sentence."
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"5-Table5-1.png",
"5-Figure1-1.png"
]
} | [
"How many sentence transformations on average are available per unique sentence in dataset?",
"What annotations are available in the dataset?",
"How are possible sentence transformations represented in dataset, as new sentences?",
"What are all 15 types of modifications ilustrated in the dataset?"
] | [
[
"1912.01673-Dataset Description-0"
],
[
"1912.01673-Annotation ::: Second Round: Collecting Data ::: Sentence Transformations-2",
"1912.01673-Annotation ::: Second Round: Collecting Data ::: Sentence Transformations-1"
],
[
"1912.01673-Dataset Description-0"
],
[
"1912.01673-3-Table2-1.png",
"1912.01673-Annotation ::: Second Round: Collecting Data ::: Sentence Transformations-0"
]
] | [
"27.41 transformation on average of single seed sentence is available in dataset.",
"For each source sentence, transformation sentences that are transformed according to some criteria (paraphrase, minimal change etc.)",
"Yes, as new sentences.",
"- paraphrase 1\n- paraphrase 2\n- different meaning\n- opposite meaning\n- nonsense\n- minimal change\n- generalization\n- gossip\n- formal sentence\n- non-standard sentence\n- simple sentence\n- possibility\n- ban\n- future\n- past"
] | 92 |
1706.08032 | A Deep Neural Architecture for Sentence-level Sentiment Classification in Twitter Social Networking | This paper introduces a novel deep learning framework including a lexicon-based approach for sentence-level prediction of sentiment label distribution. We propose to first apply semantic rules and then use a Deep Convolutional Neural Network (DeepCNN) for character-level embeddings in order to increase information for word-level embedding. After that, a Bidirectional Long Short-Term Memory Network (Bi-LSTM) produces a sentence-wide feature representation from the word-level embedding. We evaluate our approach on three Twitter sentiment classification datasets. Experimental results show that our model can improve the classification accuracy of sentence-level sentiment analysis in Twitter social networking. | {
"paragraphs": [
[
"Twitter sentiment classification have intensively researched in recent years BIBREF0 BIBREF1 . Different approaches were developed for Twitter sentiment classification by using machine learning such as Support Vector Machine (SVM) with rule-based features BIBREF2 and the combination of SVMs and Naive Bayes (NB) BIBREF3 . In addition, hybrid approaches combining lexicon-based and machine learning methods also achieved high performance described in BIBREF4 . However, a problem of traditional machine learning is how to define a feature extractor for a specific domain in order to extract important features.",
"Deep learning models are different from traditional machine learning methods in that a deep learning model does not depend on feature extractors because features are extracted during training progress. The use of deep learning methods becomes to achieve remarkable results for sentiment analysis BIBREF5 BIBREF6 BIBREF7 . Some researchers used Convolutional Neural Network (CNN) for sentiment classification. CNN models have been shown to be effective for NLP. For example, BIBREF6 proposed various kinds of CNN to learn sentiment-bearing sentence vectors, BIBREF5 adopted two CNNs in character-level to sentence-level representation for sentiment analysis. BIBREF7 constructs experiments on a character-level CNN for several large-scale datasets. In addition, Long Short-Term Memory (LSTM) is another state-of-the-art semantic composition model for sentiment classification with many variants described in BIBREF8 . The studies reveal that using a CNN is useful in extracting information and finding feature detectors from texts. In addition, a LSTM can be good in maintaining word order and the context of words. However, in some important aspects, the use of CNN or LSTM separately may not capture enough information.",
"Inspired by the models above, the goal of this research is using a Deep Convolutional Neural Network (DeepCNN) to exploit the information of characters of words in order to support word-level embedding. A Bi-LSTM produces a sentence-wide feature representation based on these embeddings. The Bi-LSTM is a version of BIBREF9 with Full Gradient described in BIBREF10 . In addition, the rules-based approach also effects classification accuracy by focusing on important sub-sentences expressing the main sentiment of a tweet while removing unnecessary parts of a tweet. The paper makes the following contributions:",
"The organization of the present paper is as follows: In section 2, we describe the model architecture which introduces the structure of the model. We explain the basic idea of model and the way of constructing the model. Section 3 show results and analysis and section 4 summarize this paper."
],
[
"Our proposed model consists of a deep learning classifier and a tweet processor. The deep learning classifier is a combination of DeepCNN and Bi-LSTM. The tweet processor standardizes tweets and then applies semantic rules on datasets. We construct a framework that treats the deep learning classifier and the tweet processor as two distinct components. We believe that standardizing data is an important step to achieve high accuracy. To formulate our problem in increasing the accuracy of the classifier, we illustrate our model in Figure. FIGREF4 as follows:",
"Tweets are firstly considered via a processor based on preprocessing steps BIBREF0 and the semantic rules-based method BIBREF11 in order to standardize tweets and capture only important information containing the main sentiment of a tweet.",
"We use DeepCNN with Wide convolution for character-level embeddings. A wide convolution can learn to recognize specific n-grams at every position in a word that allows features to be extracted independently of these positions in the word. These features maintain the order and relative positions of characters. A DeepCNN is constructed by two wide convolution layers and the need of multiple wide convolution layers is widely accepted that a model constructing by multiple processing layers have the ability to learn representations of data with higher levels of abstraction BIBREF12 . Therefore, we use DeepCNN for character-level embeddings to support morphological and shape information for a word. The DeepCNN produces INLINEFORM0 global fixed-sized feature vectors for INLINEFORM1 words.",
"A combination of the global fixed-size feature vectors and word-level embedding is fed into Bi-LSTM. The Bi-LSTM produces a sentence-level representation by maintaining the order of words.",
"Our work is philosophically similar to BIBREF5 . However, our model is distinguished with their approaches in two aspects:",
"Using DeepCNN with two wide convolution layers to increase representation with multiple levels of abstraction.",
"Integrating global character fixed-sized feature vectors with word-level embedding to extract a sentence-wide feature set via Bi-LSTM. This deals with three main problems: (i) Sentences have any different size; (ii) The semantic and the syntactic of words in a sentence are captured in order to increase information for a word; (iii) Important information of characters that can appear at any position in a word are extracted.",
"In sub-section B, we introduce various kinds of dataset. The modules of our model are constructed in other sub-sections."
],
[
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .",
"Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.",
"Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ."
],
[
"We firstly take unique properties of Twitter in order to reduce the feature space such as Username, Usage of links, None, URLs and Repeated Letters. We then process retweets, stop words, links, URLs, mentions, punctuation and accentuation. For emoticons, BIBREF0 revealed that the training process makes the use of emoticons as noisy labels and they stripped the emoticons out from their training dataset because BIBREF0 believed that if we consider the emoticons, there is a negative impact on the accuracies of classifiers. In addition, removing emoticons makes the classifiers learns from other features (e.g. unigrams and bi-grams) presented in tweets and the classifiers only use these non-emoticon features to predict the sentiment of tweets. However, there is a problem is that if the test set contains emoticons, they do not influence the classifiers because emoticon features do not contain in its training data. This is a limitation of BIBREF0 , because the emoticon features would be useful when classifying test data. Therefore, we keep emoticon features in the datasets because deep learning models can capture more information from emoticon features for increasing classification accuracy."
],
[
"In Twitter social networking, people express their opinions containing sub-sentences. These sub-sentences using specific PoS particles (Conjunction and Conjunctive adverbs), like \"but, while, however, despite, however\" have different polarities. However, the overall sentiment of tweets often focus on certain sub-sentences. For example:",
"@lonedog bwahahah...you are amazing! However, it was quite the letdown.",
"@kirstiealley my dentist is great but she's expensive...=(",
"In two tweets above, the overall sentiment is negative. However, the main sentiment is only in the sub-sentences following but and however. This inspires a processing step to remove unessential parts in a tweet. Rule-based approach can assists these problems in handling negation and dealing with specific PoS particles led to effectively affect the final output of classification BIBREF11 BIBREF16 . BIBREF11 summarized a full presentation of their semantic rules approach and devised ten semantic rules in their hybrid approach based on the presentation of BIBREF16 . We use five rules in the semantic rules set because other five rules are only used to compute polarity of words after POS tagging or Parsing steps. We follow the same naming convention for rules utilized by BIBREF11 to represent the rules utilized in our proposed method. The rules utilized in the proposed method are displayed in Table TABREF15 in which is included examples from STS Corpus and output after using the rules. Table TABREF16 illustrates the number of processed sentences on each dataset."
],
[
"To construct embedding inputs for our model, we use a fixed-sized word vocabulary INLINEFORM0 and a fixed-sized character vocabulary INLINEFORM1 . Given a word INLINEFORM2 is composed from characters INLINEFORM3 , the character-level embeddings are encoded by column vectors INLINEFORM4 in the embedding matrix INLINEFORM5 , where INLINEFORM6 is the size of the character vocabulary. For word-level embedding INLINEFORM7 , we use a pre-trained word-level embedding with dimension 200 or 300. A pre-trained word-level embedding can capture the syntactic and semantic information of words BIBREF17 . We build every word INLINEFORM8 into an embedding INLINEFORM9 which is constructed by two sub-vectors: the word-level embedding INLINEFORM10 and the character fixed-size feature vector INLINEFORM11 of INLINEFORM12 where INLINEFORM13 is the length of the filter of wide convolutions. We have INLINEFORM14 character fixed-size feature vectors corresponding to word-level embedding in a sentence."
],
[
"DeepCNN in the deep learning module is illustrated in Figure. FIGREF22 . The DeepCNN has two wide convolution layers. The first layer extract local features around each character windows of the given word and using a max pooling over character windows to produce a global fixed-sized feature vector for the word. The second layer retrieves important context characters and transforms the representation at previous level into a representation at higher abstract level. We have INLINEFORM0 global character fixed-sized feature vectors for INLINEFORM1 words.",
"In the next step of Figure. FIGREF4 , we construct the vector INLINEFORM0 by concatenating the word-level embedding with the global character fixed-size feature vectors. The input of Bi-LSTM is a sequence of embeddings INLINEFORM1 . The use of the global character fixed-size feature vectors increases the relationship of words in the word-level embedding. The purpose of this Bi-LSTM is to capture the context of words in a sentence and maintain the order of words toward to extract sentence-level representation. The top of the model is a softmax function to predict sentiment label. We describe in detail the kinds of CNN and LSTM that we use in next sub-part 1 and 2.",
"The one-dimensional convolution called time-delay neural net has a filter vector INLINEFORM0 and take the dot product of filter INLINEFORM1 with each m-grams in the sequence of characters INLINEFORM2 of a word in order to obtain a sequence INLINEFORM3 : DISPLAYFORM0 ",
"Based on Equation 1, we have two types of convolutions that depend on the range of the index INLINEFORM0 . The narrow type of convolution requires that INLINEFORM1 and produce a sequence INLINEFORM2 . The wide type of convolution does not require on INLINEFORM3 or INLINEFORM4 and produce a sequence INLINEFORM5 . Out-of-range input values INLINEFORM6 where INLINEFORM7 or INLINEFORM8 are taken to be zero. We use wide convolution for our model.",
"Given a word INLINEFORM0 composed of INLINEFORM1 characters INLINEFORM2 , we take a character embedding INLINEFORM3 for each character INLINEFORM4 and construct a character matrix INLINEFORM5 as following Equation. 2: DISPLAYFORM0 ",
"The values of the embeddings INLINEFORM0 are parameters that are optimized during training. The trained weights in the filter INLINEFORM1 correspond to a feature detector which learns to recognize a specific class of n-grams. The n-grams have size INLINEFORM2 . The use of a wide convolution has some advantages more than a narrow convolution because a wide convolution ensures that all weights of filter reach the whole characters of a word at the margins. The resulting matrix has dimension INLINEFORM3 .",
"Long Short-Term Memory networks usually called LSTMs are a improved version of RNN. The core idea behind LSTMs is the cell state which can maintain its state over time, and non-linear gating units which regulate the information flow into and out of the cell. The LSTM architecture that we used in our proposed model is described in BIBREF9 . A single LSTM memory cell is implemented by the following composite function: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 is the logistic sigmoid function, INLINEFORM1 and INLINEFORM2 are the input gate, forget gate, output gate, cell and cell input activation vectors respectively. All of them have a same size as the hidden vector INLINEFORM3 . INLINEFORM4 is the hidden-input gate matrix, INLINEFORM5 is the input-output gate matrix. The bias terms which are added to INLINEFORM6 and INLINEFORM7 have been omitted for clarity. In addition, we also use the full gradient for calculating with full backpropagation through time (BPTT) described in BIBREF10 . A LSTM gradients using finite differences could be checked and making practical implementations more reliable."
],
[
"For regularization, we use a constraint on INLINEFORM0 of the weight vectors BIBREF18 ."
],
[
"For the Stanford Twitter Sentiment Corpus, we use the number of samples as BIBREF5 . The training data is selected 80K tweets for a training data and 16K tweets for the development set randomly from the training data of BIBREF0 . We conduct a binary prediction for STS Corpus.",
"For Sander dataset, we use standard 10-fold cross validation as BIBREF14 . We construct the development set by selecting 10% randomly from 9-fold training data.",
"In Health Care Reform Corpus, we also select 10% randomly for the development set in a training set and construct as BIBREF14 for comparison. We describe the summary of datasets in Table III.",
"for all datasets, the filter window size ( INLINEFORM0 ) is 7 with 6 feature maps each for the first wide convolution layer, the second wide convolution layer has a filter window size of 5 with 14 feature maps each. Dropout rate ( INLINEFORM1 ) is 0.5, INLINEFORM2 constraint, learning rate is 0.1 and momentum of 0.9. Mini-batch size for STS Corpus is 100 and others are 4. In addition, training is done through stochastic gradient descent over shuffled mini-batches with Adadelta update rule BIBREF19 .",
"we use the publicly available Word2Vec trained from 100 billion words from Google and TwitterGlove of Stanford is performed on aggregated global word-word co-occurrence statistics from a corpus. Word2Vec has dimensionality of 300 and Twitter Glove have dimensionality of 200. Words that do not present in the set of pre-train words are initialized randomly."
],
[
"Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN. As can be seen, 86.63 is the best prediction accuracy of our model so far for the STS Corpus.",
"For Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. Our models outperform the model of BIBREF14 for the Sanders dataset and HCR dataset."
],
[
"As can be seen, the models with SR outperforms the model with no SR. Semantic rules is effective in order to increase classification accuracy. We evaluate the efficiency of SR for the model in Table V of our full paper . We also conduct two experiments on two separate models: DeepCNN and Bi-LSTM in order to show the effectiveness of combination of DeepCNN and Bi-LSTM. In addition, the model using TwitterGlove outperform the model using GoogleW2V because TwitterGlove captures more information in Twitter than GoogleW2V. These results show that the character-level information and SR have a great impact on Twitter Data. The pre-train word vectors are good, universal feature extractors. The difference between our model and other approaches is the ability of our model to capture important features by using SR and combine these features at high benefit. The use of DeepCNN can learn a representation of words in higher abstract level. The combination of global character fixed-sized feature vectors and a word embedding helps the model to find important detectors for particles such as 'not' that negate sentiment and potentiate sentiment such as 'too', 'so' standing beside expected features. The model not only learns to recognize single n-grams, but also patterns in n-grams lead to form a structure significance of a sentence."
],
[
"In the present work, we have pointed out that the use of character embeddings through a DeepCNN to enhance information for word embeddings built on top of Word2Vec or TwitterGlove improves classification accuracy in Tweet sentiment classification. Our results add to the well-establish evidence that character vectors are an important ingredient for word-level in deep learning for NLP. In addition, semantic rules contribute handling non-essential sub-tweets in order to improve classification accuracy."
]
],
"section_name": [
"Introduction",
"Basic idea",
"Data Preparation",
"Preprocessing",
"Semantic Rules (SR)",
"Representation Levels",
"Deep Learning Module",
"Regularization",
" Experimental setups",
"Experimental results",
"Analysis",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"0282506d82926af9792f42326478042758bdc913"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table IV ACCURACY OF DIFFERENT MODELS FOR BINARY CLASSIFICATION"
],
"extractive_spans": [],
"free_form_answer": "accuracy of 86.63 on STS, 85.14 on Sanders and 80.9 on HCR",
"highlighted_evidence": [
"FLOAT SELECTED: Table IV ACCURACY OF DIFFERENT MODELS FOR BINARY CLASSIFICATION"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"740a7d8f2b75e1985ebefff16360d9b704eec6b3"
],
"answer": [
{
"evidence": [
"Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN. As can be seen, 86.63 is the best prediction accuracy of our model so far for the STS Corpus.",
"For Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. Our models outperform the model of BIBREF14 for the Sanders dataset and HCR dataset."
],
"extractive_spans": [
"We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN.",
"we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. "
],
"free_form_answer": "",
"highlighted_evidence": [
"Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN. As can be seen, 86.63 is the best prediction accuracy of our model so far for the STS Corpus.\n\nFor Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. Our models outperform the model of BIBREF14 for the Sanders dataset and HCR dataset.\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"ecc705477bc9fc15949d2a0ca55fd5f2e129acfb"
],
"answer": [
{
"evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .",
"Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.",
"Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ."
],
"extractive_spans": [
"Stanford - Twitter Sentiment Corpus (STS Corpus)",
"Sanders - Twitter Sentiment Corpus",
"Health Care Reform (HCR)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .\n\nSanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.\n\nHealth Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"710ac11299a9dce0201ababcbffafc1dce9f905b"
],
"answer": [
{
"evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .",
"Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.",
"Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 .",
"Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN. As can be seen, 86.63 is the best prediction accuracy of our model so far for the STS Corpus.",
"For Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. Our models outperform the model of BIBREF14 for the Sanders dataset and HCR dataset."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .\n\nSanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.\n\nHealth Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 .",
"Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. ",
"For Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"de891b9e0b026bcc3d3fb336aceffb8a7228dbbd"
],
"answer": [
{
"evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .",
"Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.",
"Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ."
],
"extractive_spans": [
"Stanford - Twitter Sentiment Corpus (STS Corpus)",
"Sanders - Twitter Sentiment Corpus",
"Health Care Reform (HCR)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .\n\nSanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.\n\nHealth Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"c59556729d9eaaff1c3e24854a7d78ff2255399d"
],
"answer": [
{
"evidence": [
"In Twitter social networking, people express their opinions containing sub-sentences. These sub-sentences using specific PoS particles (Conjunction and Conjunctive adverbs), like \"but, while, however, despite, however\" have different polarities. However, the overall sentiment of tweets often focus on certain sub-sentences. For example:",
"@lonedog bwahahah...you are amazing! However, it was quite the letdown.",
"@kirstiealley my dentist is great but she's expensive...=(",
"In two tweets above, the overall sentiment is negative. However, the main sentiment is only in the sub-sentences following but and however. This inspires a processing step to remove unessential parts in a tweet. Rule-based approach can assists these problems in handling negation and dealing with specific PoS particles led to effectively affect the final output of classification BIBREF11 BIBREF16 . BIBREF11 summarized a full presentation of their semantic rules approach and devised ten semantic rules in their hybrid approach based on the presentation of BIBREF16 . We use five rules in the semantic rules set because other five rules are only used to compute polarity of words after POS tagging or Parsing steps. We follow the same naming convention for rules utilized by BIBREF11 to represent the rules utilized in our proposed method. The rules utilized in the proposed method are displayed in Table TABREF15 in which is included examples from STS Corpus and output after using the rules. Table TABREF16 illustrates the number of processed sentences on each dataset.",
"FLOAT SELECTED: Table I SEMANTIC RULES [12]"
],
"extractive_spans": [],
"free_form_answer": "rules that compute polarity of words after POS tagging or parsing steps",
"highlighted_evidence": [
"In Twitter social networking, people express their opinions containing sub-sentences. These sub-sentences using specific PoS particles (Conjunction and Conjunctive adverbs), like \"but, while, however, despite, however\" have different polarities. However, the overall sentiment of tweets often focus on certain sub-sentences. For example:\n\n@lonedog bwahahah...you are amazing! However, it was quite the letdown.\n\n@kirstiealley my dentist is great but she's expensive...=(\n\nIn two tweets above, the overall sentiment is negative. However, the main sentiment is only in the sub-sentences following but and however. This inspires a processing step to remove unessential parts in a tweet. Rule-based approach can assists these problems in handling negation and dealing with specific PoS particles led to effectively affect the final output of classification BIBREF11 BIBREF16 . BIBREF11 summarized a full presentation of their semantic rules approach and devised ten semantic rules in their hybrid approach based on the presentation of BIBREF16 . We use five rules in the semantic rules set because other five rules are only used to compute polarity of words after POS tagging or Parsing steps. We follow the same naming convention for rules utilized by BIBREF11 to represent the rules utilized in our proposed method. The rules utilized in the proposed method are displayed in Table TABREF15 in which is included examples from STS Corpus and output after using the rules. Table TABREF16 illustrates the number of processed sentences on each dataset.",
"FLOAT SELECTED: Table I SEMANTIC RULES [12]"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
"",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"no",
"no",
"no"
],
"question": [
"What were their results on the three datasets?",
"What was the baseline?",
"Which datasets did they use?",
"Are results reported only on English datasets?",
"Which three Twitter sentiment classification datasets are used for experiments?",
"What semantic rules are proposed?"
],
"question_id": [
"efb3a87845460655c53bd7365bcb8393c99358ec",
"0619fc797730a3e59ac146a5a4575c81517cc618",
"846a1992d66d955fa1747bca9a139141c19908e8",
"1ef8d1cb1199e1504b6b0daea52f2e4bd2ef7023",
"12d77ac09c659d2e04b5e3955a283101c3ad1058",
"d60a3887a0d434abc0861637bbcd9ad0c596caf4"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"",
"",
"",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1. The overview of a deep learning system.",
"Table II THE NUMBER OF TWEETS ARE PROCESSED BY USING SEMANTIC RULES",
"Table I SEMANTIC RULES [12]",
"Figure 2. Deep Convolutional Neural Network (DeepCNN) for the sequence of character embeddings of a word. For example with 1 region size is 2 and 4 feature maps in the first convolution and 1 region size is 3 with 3 feature maps in the second convolution.",
"Table IV ACCURACY OF DIFFERENT MODELS FOR BINARY CLASSIFICATION",
"Table III SUMMARY STATISTICS FOR THE DATASETS AFTER USING SEMANTIC RULES. c: THE NUMBER OF CLASSES. N : THE NUMBER OF TWEETS. lw : MAXIMUM SENTENCE LENGTH. lc : MAXIMUM CHARACTER LENGTH. |Vw|: WORD ALPHABET SIZE. |Vc|: CHARACTER ALPHABET SIZE."
],
"file": [
"3-Figure1-1.png",
"3-TableII-1.png",
"3-TableI-1.png",
"4-Figure2-1.png",
"5-TableIV-1.png",
"5-TableIII-1.png"
]
} | [
"What were their results on the three datasets?",
"What semantic rules are proposed?"
] | [
[
"1706.08032-5-TableIV-1.png"
],
[
"1706.08032-3-TableI-1.png",
"1706.08032-Semantic Rules (SR)-0",
"1706.08032-Semantic Rules (SR)-1",
"1706.08032-Semantic Rules (SR)-2",
"1706.08032-Semantic Rules (SR)-3"
]
] | [
"accuracy of 86.63 on STS, 85.14 on Sanders and 80.9 on HCR",
"rules that compute polarity of words after POS tagging or parsing steps"
] | 94 |
1909.00124 | Learning with Noisy Labels for Sentence-level Sentiment Classification | Deep neural networks (DNNs) can fit (or even over-fit) the training data very well. If a DNN model is trained using data with noisy labels and tested on data with clean labels, the model may perform poorly. This paper studies the problem of learning with noisy labels for sentence-level sentiment classification. We propose a novel DNN model called NetAb (as shorthand for convolutional neural Networks with Ab-networks) to handle noisy labels during training. NetAb consists of two convolutional neural networks, one with a noise transition layer for dealing with the input noisy labels and the other for predicting 'clean' labels. We train the two networks using their respective loss functions in a mutual reinforcement manner. Experimental results demonstrate the effectiveness of the proposed model. | {
"paragraphs": [
[
"It is well known that sentiment annotation or labeling is subjective BIBREF0. Annotators often have many disagreements. This is especially so for crowd-workers who are not well trained. That is why one always feels that there are many errors in an annotated dataset. In this paper, we study whether it is possible to build accurate sentiment classifiers even with noisy-labeled training data. Sentiment classification aims to classify a piece of text according to the polarity of the sentiment expressed in the text, e.g., positive or negative BIBREF1, BIBREF0, BIBREF2. In this work, we focus on sentence-level sentiment classification (SSC) with labeling errors.",
"As we will see in the experiment section, noisy labels in the training data can be highly damaging, especially for DNNs because they easily fit the training data and memorize their labels even when training data are corrupted with noisy labels BIBREF3. Collecting datasets annotated with clean labels is costly and time-consuming as DNN based models usually require a large number of training examples. Researchers and practitioners typically have to resort to crowdsourcing. However, as mentioned above, the crowdsourced annotations can be quite noisy. Research on learning with noisy labels dates back to 1980s BIBREF4. It is still vibrant today BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 as it is highly challenging. We will discuss the related work in the next section.",
"This paper studies the problem of learning with noisy labels for SSC. Formally, we study the following problem.",
"Problem Definition: Given noisy labeled training sentences $S=\\lbrace (x_1,y_1),...,(x_n,y_n)\\rbrace $, where $x_i|_{i=1}^n$ is the $i$-th sentence and $y_i\\in \\lbrace 1,...,c\\rbrace $ is the sentiment label of this sentence, the noisy labeled sentences are used to train a DNN model for a SSC task. The trained model is then used to classify sentences with clean labels to one of the $c$ sentiment labels.",
"In this paper, we propose a convolutional neural Network with Ab-networks (NetAb) to deal with noisy labels during training, as shown in Figure FIGREF2. We will introduce the details in the subsequent sections. Basically, NetAb consists of two convolutional neural networks (CNNs) (see Figure FIGREF2), one for learning sentiment scores to predict `clean' labels and the other for learning a noise transition matrix to handle input noisy labels. We call the two CNNs A-network and Ab-network, respectively. The fundamental here is that (1) DNNs memorize easy instances first and gradually adapt to hard instances as training epochs increase BIBREF3, BIBREF13; and (2) noisy labels are theoretically flipped from the clean/true labels by a noise transition matrix BIBREF14, BIBREF15, BIBREF16, BIBREF17. We motivate and propose a CNN model with a transition layer to estimate the noise transition matrix for the input noisy labels, while exploiting another CNN to predict `clean' labels for the input training (and test) sentences. In training, we pre-train A-network in early epochs and then train Ab-network and A-network with their own loss functions in an alternating manner. To our knowledge, this is the first work that addresses the noisy label problem in sentence-level sentiment analysis. Our experimental results show that the proposed model outperforms the state-of-the-art methods."
],
[
"Our work is related to sentence sentiment classification (SSC). SSC has been studied extensively BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. None of them can handle noisy labels. Since many social media datasets are noisy, researchers have tried to build robust models BIBREF29, BIBREF30, BIBREF31. However, they treat noisy data as additional information and don't specifically handle noisy labels. A noise-aware classification model in BIBREF12 trains using data annotated with multiple labels. BIBREF32 exploited the connection of users and noisy labels of sentiments in social networks. Since the two works use multiple-labeled data or users' information (we only use single-labeled data, and we do not use any additional information), they have different settings than ours.",
"Our work is closely related to DNNs based approaches to learning with noisy labels. DNNs based approaches explored three main directions: (1) training DNNs on selected samples BIBREF33, BIBREF34, BIBREF35, BIBREF17, (2) modifying the loss function of DNNs with regularization biases BIBREF5, BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF40, and (3) plugging an extra layer into DNNs BIBREF14, BIBREF41, BIBREF15, BIBREF16. All these approaches were proposed for image classification where training images were corrupted with noisy labels. Some of them require noise rate to be known a priori in order to tune their models during training BIBREF37, BIBREF17. Our approach combines direction (1) and direction (3), and trains two networks jointly without knowing the noise rate. We have used five latest existing methods in our experiments for SSC. The experimental results show that they are inferior to our proposed method. In addition, BIBREF42, BIBREF43, BIBREF44, BIBREF45, BIBREF46, and BIBREF47 studied weakly-supervised DNNs or semi-supervised DNNs. But they still need some clean-labeled training data. We use no clean-labeled data."
],
[
"Our model builds on CNN BIBREF25. The key idea is to train two CNNs alternately, one for addressing the input noisy labels and the other for predicting `clean' labels. The overall architecture of the proposed model is given in Figure FIGREF2. Before going further, we first introduce a proposition, a property, and an assumption below.",
"Proposition 1 Noisy labels are flipped from clean labels by an unknown noise transition matrix.",
"Proposition UNKREF3 is reformulated from BIBREF16 and has been investigated in BIBREF14, BIBREF15, BIBREF41. This proposition shows that if we know the noise transition matrix, we can use it to recover the clean labels. In other words, we can put noise transition matrix on clean labels to deal with noisy labels. Given these, we ask the following question: How to estimate such an unknown noise transition matrix?",
"Below we give a solution to this question based on the following property of DNNs.",
"Property 1 DNNs tend to prioritize memorization of simple instances first and then gradually memorize hard instances BIBREF3.",
"BIBREF13 further investigated this property of DNNs. Our setting is that simple instances are sentences of clean labels and hard instances are those with noisy labels. We also have the following assumption.",
"Assumption 1 The noise rate of the training data is less than $50\\%$.",
"This assumption is usually satisfied in practice because without it, it is hard to tackle the input noisy labels during training.",
"Based on the above preliminaries, we need to estimate the noisy transition matrix $Q\\in \\mathbb {R}^{c\\times c}$ ($c=2$ in our case, i.e., positive and negative), and train two classifiers $\\ddot{y}\\sim P(\\ddot{y}|x,\\theta )$ and $\\widehat{y}\\sim \\ P(\\widehat{y}|x,\\vartheta )$, where $x$ is an input sentence, $\\ddot{y}$ is its noisy label, $\\widehat{y}$ is its `clean' label, $\\theta $ and $\\vartheta $ are the parameters of two classifiers. Note that both $\\ddot{y}$ and $\\widehat{y}$ here are the prediction results from our model, not the input labels. We propose to formulate the probability of the sentence $x$ labeled as $j$ with",
"where $P(\\ddot{y}=j|\\widehat{y}=i)$ is an item (the $ji$-th item) in the noisy transition matrix $Q$. We can see that the noisy transition matrix $Q$ is exploited on the `clean' scores $P(\\widehat{y}|x,\\vartheta )$ to tackle noisy labels.",
"We now present our model NetAb and introduce how NetAb performs Eq. (DISPLAY_FORM6). As shown in Figure FIGREF2, NetAb consists of two CNNs. The intuition here is that we use one CNN to perform $P(\\widehat{y}=i|x,\\vartheta )$ and use another CNN to perform $P(\\ddot{y}=j|x,\\theta )$. Meanwhile, the CNN performing $P(\\ddot{y}=j|x,\\theta )$ estimates the noise transition matrix $Q$ to deal with noisy labels. Thus we add a transition layer into this CNN.",
"More precisely, in Figure FIGREF2, the CNN with a clean loss performs $P(\\widehat{y}=i|x,\\vartheta )$. We call this CNN the A-network. The other CNN with a noisy loss performs $P(\\ddot{y}=j|x,\\theta )$. We call this CNN the Ab-network. Ab-network shares all the parameters of A-network except the parameters from the Gate unit and the clean loss. In addition, Ab-network has a transition layer to estimate the noisy transition matrix $Q$. In such a way, A-network predict `clean' labels, and Ab-network handles the input noisy labels.",
"We use cross-entropy with the predicted labels $\\ddot{y}$ and the input labels $y$ (given in the dataset) to compute the noisy loss, formulated as below",
"where $\\mathbb {I}$ is the indicator function (if $y\\!==\\!i$, $\\mathbb {I}\\!=\\!1$; otherwise, $\\mathbb {I}\\!=\\!0$), and $|\\ddot{S}|$ is the number of sentences to train Ab-network in each batch.",
"Similarly, we use cross-entropy with the predicted labels $\\widehat{y}$ and the input labels $y$ to compute the clean loss, formulated as",
"where $|\\widehat{S}|$ is the number of sentences to train A-network in each batch.",
"Next we introduce how our model learns the parameters ($\\vartheta $, $\\theta $ and $Q$). An embedding matrix $v$ is produced for each sentence $x$ by looking up a pre-trained word embedding database (e.g., GloVe.840B BIBREF48). Then an encoding vector $h\\!=\\!CNN(v)$ (and $u\\!=\\!CNN(v)$) is produced for each embedding matrix $v$ in A-network (and Ab-network). A sofmax classifier gives us $P(\\hat{y}\\!=\\!i|x,\\vartheta )$ (i.e., `clean' sentiment scores) on the learned encoding vector $h$. As the noise transition matrix $Q$ indicates the transition values from clean labels to noisy labels, we compute $Q$ as follows",
"where $W_i$ is a trainable parameter matrix, $b_i$ and $f_i$ are two trainable parameter vectors. They are trained in the Ab-network. Finally, $P(\\ddot{y}=j|x,\\theta )$ is computed by Eq. (DISPLAY_FORM6).",
"In training, NetAb is trained end-to-end. Based on Proposition UNKREF3 and Property UNKREF4, we pre-train A-network in early epochs (e.g., 5 epochs). Then we train Ab-network and A-network in an alternating manner. The two networks are trained using their respective cross-entropy loss. Given a batch of sentences, we first train Ab-network. Then we use the scores predicted from A-network to select some possibly clean sentences from this batch and train A-network on the selected sentences. Specifically speaking, we use the predicted scores to compute sentiment labels by $\\arg \\max _i \\lbrace \\ddot{y}=i|\\ddot{y}\\sim P(\\ddot{y}|x,\\theta )\\rbrace $. Then we select the sentences whose resulting sentiment label equals to the input label. The selection process is marked by a Gate unit in Figure FIGREF2. When testing a sentence, we use A-network to produce the final classification result."
],
[
"In this section, we evaluate the performance of the proposed NetAb model. we conduct two types of experiments. (1) We corrupt clean-labeled datasets to produce noisy-labeled datasets to show the impact of noises on sentiment classification accuracy. (2) We collect some real noisy data and use them to train models to evaluate the performance of NetAb.",
"Clean-labeled Datasets. We use three clean labeled datasets. The first one is the movie sentence polarity dataset from BIBREF19. The other two datasets are laptop and restaurant datasets collected from SemEval-2016 . The former consists of laptop review sentences and the latter consists of restaurant review sentences. The original datasets (i.e., Laptop and Restaurant) were annotated with aspect polarity in each sentence. We used all sentences with only one polarity (positive or negative) for their aspects. That is, we only used sentences with aspects having the same sentiment label in each sentence. Thus, the sentiment of each aspect gives the ground-truth as the sentiments of all aspects are the same.",
"For each clean-labeled dataset, the sentences are randomly partitioned into training set and test set with $80\\%$ and $20\\%$, respectively. Following BIBREF25, We also randomly select $10\\%$ of the test data for validation to check the model during training. Summary statistics of the training, validation, and test data are shown in Table TABREF9.",
"Noisy-labeled Training Datasets. For the above three domains (movie, laptop, and restaurant), we collected 2,000 reviews for each domain from the same review source. We extracted sentences from each review and assigned review's label to its sentences. Like previous work, we treat 4 or 5 stars as positive and 1 or 2 stars as negative. The data is noisy because a positive (negative) review can contain negative (positive) sentences, and there are also neutral sentences. This gives us three noisy-labeled training datasets. We still use the same test sets as those for the clean-labeled datasets. Summary statistics of all the datasets are shown in Table TABREF9.",
"Experiment 1: Here we use the clean-labeled data (i.e., the last three columns in Table TABREF9). We corrupt the clean training data by switching the labels of some random instances based on a noise rate parameter. Then we use the corrupted data to train NetAb and CNN BIBREF25.",
"The test accuracy curves with the noise rates [0, $0.1$, $0.2$, $0.3$, $0.4$, $0.5$] are shown in Figure FIGREF13. From the figure, we can see that the test accuracy drops from around 0.8 to 0.5 when the noise rate increases from 0 to 0.5, but our NetAb outperforms CNN. The results clearly show that the performance of the CNN drops quite a lot with the noise rate increasing.",
"Experiment 2: Here we use the real noisy-labeled training data to train our model and the baselines, and then test on the test data in Table TABREF9. Our goal is two fold. First, we want to evaluate NetAb using real noisy data. Second, we want to see whether sentences with review level labels can be used to build effective SSC models.",
"Baselines. We use one strong non-DNN baseline, NBSVM (with unigrams or bigrams features) BIBREF23 and six DNN baselines. The first DNN baseline is CNN BIBREF25, which does not handle noisy labels. The other five were designed to handle noisy labels.",
"The comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop. The results demonstrate the superiority of NetAb. (2) NetAb outperforms the baselines designed for learning with noisy labels. These baselines are inferior to ours as they were tailored for image classification. Note that we found no existing method to deal with noisy labels for SSC. Training Details. We use the publicly available pre-trained embedding GloVe.840B BIBREF48 to initialize the word vectors and the embedding dimension is 300.",
"For each baseline, we obtain the system from its author and use its default parameters. As the DNN baselines (except CNN) were proposed for image classification, we change the input channels from 3 to 1. For our NetAb, we follow BIBREF25 to use window sizes of 3, 4 and 5 words with 100 feature maps per window size, resulting in 300-dimensional encoding vectors. The input length of sentence is set to 40. The network parameters are updated using the Adam optimizer BIBREF49 with a learning rate of 0.001. The learning rate is clipped gradually using a norm of 0.96 in performing the Adam optimization. The dropout rate is 0.5 in the input layer. The number of epochs is 200 and batch size is 50."
],
[
"This paper proposed a novel CNN based model for sentence-level sentiment classification learning for data with noisy labels. The proposed model learns to handle noisy labels during training by training two networks alternately. The learned noisy transition matrices are used to tackle noisy labels. Experimental results showed that the proposed model outperforms a wide range of baselines markedly. We believe that learning with noisy labels is a promising direction as it is often easy to collect noisy-labeled training data."
],
[
"Hao Wang and Yan Yang's work was partially supported by a grant from the National Natural Science Foundation of China (No. 61572407)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed Model",
"Experiments",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"36e4022e631bb303ba899a7b340d8024b3c5e19b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3f1a0f52b0d7249dab4b40e956e286785376f17f"
],
"answer": [
{
"evidence": [
"Clean-labeled Datasets. We use three clean labeled datasets. The first one is the movie sentence polarity dataset from BIBREF19. The other two datasets are laptop and restaurant datasets collected from SemEval-2016 . The former consists of laptop review sentences and the latter consists of restaurant review sentences. The original datasets (i.e., Laptop and Restaurant) were annotated with aspect polarity in each sentence. We used all sentences with only one polarity (positive or negative) for their aspects. That is, we only used sentences with aspects having the same sentiment label in each sentence. Thus, the sentiment of each aspect gives the ground-truth as the sentiments of all aspects are the same.",
"Noisy-labeled Training Datasets. For the above three domains (movie, laptop, and restaurant), we collected 2,000 reviews for each domain from the same review source. We extracted sentences from each review and assigned review's label to its sentences. Like previous work, we treat 4 or 5 stars as positive and 1 or 2 stars as negative. The data is noisy because a positive (negative) review can contain negative (positive) sentences, and there are also neutral sentences. This gives us three noisy-labeled training datasets. We still use the same test sets as those for the clean-labeled datasets. Summary statistics of all the datasets are shown in Table TABREF9."
],
"extractive_spans": [
" movie sentence polarity dataset from BIBREF19",
"laptop and restaurant datasets collected from SemEval-201",
"we collected 2,000 reviews for each domain from the same review source"
],
"free_form_answer": "",
"highlighted_evidence": [
"Clean-labeled Datasets. We use three clean labeled datasets. The first one is the movie sentence polarity dataset from BIBREF19. The other two datasets are laptop and restaurant datasets collected from SemEval-2016 .",
"Noisy-labeled Training Datasets. For the above three domains (movie, laptop, and restaurant), we collected 2,000 reviews for each domain from the same review source."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"fbe5540e5e8051f9fbdadfcdf4b3c2f2fd62cfb6"
],
"answer": [
{
"evidence": [
"The test accuracy curves with the noise rates [0, $0.1$, $0.2$, $0.3$, $0.4$, $0.5$] are shown in Figure FIGREF13. From the figure, we can see that the test accuracy drops from around 0.8 to 0.5 when the noise rate increases from 0 to 0.5, but our NetAb outperforms CNN. The results clearly show that the performance of the CNN drops quite a lot with the noise rate increasing.",
"The comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop. The results demonstrate the superiority of NetAb. (2) NetAb outperforms the baselines designed for learning with noisy labels. These baselines are inferior to ours as they were tailored for image classification. Note that we found no existing method to deal with noisy labels for SSC. Training Details. We use the publicly available pre-trained embedding GloVe.840B BIBREF48 to initialize the word vectors and the embedding dimension is 300.",
"FLOAT SELECTED: Table 2: Accuracy (ACC) of both classes, F1 (F1 pos) of positive class and F1 (F1 neg) of negative class on clean test data/sentences. Training data are real noisy-labeled sentences.",
"FLOAT SELECTED: Figure 2: Accuracy (ACC) on clean test data. For training, the labels of clean data are flipped with the noise rates [0, 0.1, 0.2, 0.3, 0.4, 0.5]. For example, 0.1means that 10% of the labels are flipped. (Color online)"
],
"extractive_spans": [],
"free_form_answer": "Experiment 1: ACC around 0.5 with 50% noise rate in worst case - clearly higher than baselines for all noise rates\nExperiment 2: ACC on real noisy datasets: 0.7 on Movie, 0.79 on Laptop, 0.86 on Restaurant (clearly higher than baselines in almost all cases)",
"highlighted_evidence": [
"The test accuracy curves with the noise rates [0, $0.1$, $0.2$, $0.3$, $0.4$, $0.5$] are shown in Figure FIGREF13. From the figure, we can see that the test accuracy drops from around 0.8 to 0.5 when the noise rate increases from 0 to 0.5, but our NetAb outperforms CNN",
"The comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop. The results demonstrate the superiority of NetAb. (2) NetAb outperforms the baselines designed for learning with noisy labels.",
"FLOAT SELECTED: Table 2: Accuracy (ACC) of both classes, F1 (F1 pos) of positive class and F1 (F1 neg) of negative class on clean test data/sentences. Training data are real noisy-labeled sentences.",
"FLOAT SELECTED: Figure 2: Accuracy (ACC) on clean test data. For training, the labels of clean data are flipped with the noise rates [0, 0.1, 0.2, 0.3, 0.4, 0.5]. For example, 0.1means that 10% of the labels are flipped. (Color online)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4a3b781469f48ce226c4af01c0e6f31e0c906298"
],
"answer": [
{
"evidence": [
"Baselines. We use one strong non-DNN baseline, NBSVM (with unigrams or bigrams features) BIBREF23 and six DNN baselines. The first DNN baseline is CNN BIBREF25, which does not handle noisy labels. The other five were designed to handle noisy labels.",
"The comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop. The results demonstrate the superiority of NetAb. (2) NetAb outperforms the baselines designed for learning with noisy labels. These baselines are inferior to ours as they were tailored for image classification. Note that we found no existing method to deal with noisy labels for SSC. Training Details. We use the publicly available pre-trained embedding GloVe.840B BIBREF48 to initialize the word vectors and the embedding dimension is 300."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Baselines. We use one strong non-DNN baseline, NBSVM (with unigrams or bigrams features) BIBREF23 and six DNN baselines. The first DNN baseline is CNN BIBREF25, which does not handle noisy labels. The other five were designed to handle noisy labels.\n\nThe comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How does the model differ from Generative Adversarial Networks?",
"What is the dataset used to train the model?",
"What is the performance of the model?",
"Is the model evaluated against a CNN baseline?"
],
"question_id": [
"045dbdbda5d96a672e5c69442e30dbf21917a1ee",
"c20b012ad31da46642c553ce462bc0aad56912db",
"13e87f6d68f7217fd14f4f9a008a65dd2a0ba91c",
"89b9a2389166b992c42ca19939d750d88c5fa79b"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"sentiment ",
"sentiment ",
"sentiment ",
"sentiment "
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The proposed NETAB model (left) and its training method (right). Components in light gray color denote that these components are deactivated during training in that stage. (Color online)",
"Table 1: Summary statistics of the datasets. Number of positive (P) and negative (N) sentences in (noisy and clean) training data, validation data, and test data. The second column shows the statistics of sentences extracted from the 2,000 reviews of each dataset. The last three columns show the statistics of the sentences in three clean-labeled datasets, see “Clean-labeled Datasets”.",
"Table 2: Accuracy (ACC) of both classes, F1 (F1 pos) of positive class and F1 (F1 neg) of negative class on clean test data/sentences. Training data are real noisy-labeled sentences.",
"Figure 2: Accuracy (ACC) on clean test data. For training, the labels of clean data are flipped with the noise rates [0, 0.1, 0.2, 0.3, 0.4, 0.5]. For example, 0.1means that 10% of the labels are flipped. (Color online)"
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Figure2-1.png"
]
} | [
"What is the performance of the model?"
] | [
[
"1909.00124-5-Figure2-1.png",
"1909.00124-Experiments-5",
"1909.00124-5-Table2-1.png",
"1909.00124-Experiments-8"
]
] | [
"Experiment 1: ACC around 0.5 with 50% noise rate in worst case - clearly higher than baselines for all noise rates\nExperiment 2: ACC on real noisy datasets: 0.7 on Movie, 0.79 on Laptop, 0.86 on Restaurant (clearly higher than baselines in almost all cases)"
] | 96 |
1911.01799 | CN-CELEB: a challenging Chinese speaker recognition dataset | Recently, researchers set an ambitious goal of conducting speaker recognition in unconstrained conditions where the variations on ambient, channel and emotion could be arbitrary. However, most publicly available datasets are collected under constrained environments, i.e., with little noise and limited channel variation. These datasets tend to deliver over optimistic performance and do not meet the request of research on speaker recognition in unconstrained conditions. In this paper, we present CN-Celeb, a large-scale speaker recognition dataset collected `in the wild'. This dataset contains more than 130,000 utterances from 1,000 Chinese celebrities, and covers 11 different genres in real world. Experiments conducted with two state-of-the-art speaker recognition approaches (i-vector and x-vector) show that the performance on CN-Celeb is far inferior to the one obtained on VoxCeleb, a widely used speaker recognition dataset. This result demonstrates that in real-life conditions, the performance of existing techniques might be much worse than it was thought. Our database is free for researchers and can be downloaded from this http URL. | {
"paragraphs": [
[
"Speaker recognition including identification and verification, aims to recognize claimed identities of speakers. After decades of research, performance of speaker recognition systems has been vastly improved, and the technique has been deployed to a wide range of practical applications. Nevertheless, the present speaker recognition approaches are still far from reliable in unconstrained conditions where uncertainties within the speech recordings could be arbitrary. These uncertainties might be caused by multiple factors, including free text, multiple channels, environmental noises, speaking styles, and physiological status. These uncertainties make the speaker recognition task highly challenging BIBREF0, BIBREF1.",
"Researchers have devoted much effort to address the difficulties in unconstrained conditions. Early methods are based on probabilistic models that treat these uncertainties as an additive Gaussian noise. JFA BIBREF2, BIBREF3 and PLDA BIBREF4 are the most famous among such models. These models, however, are shallow and linear, and therefore cannot deal with the complexity of real-life applications. Recent advance in deep learning methods offers a new opportunity BIBREF5, BIBREF6, BIBREF7, BIBREF8. Resorting to the power of deep neural networks (DNNs) in representation learning, these methods can remove unwanted uncertainties by propagating speech signals through the DNN layer by layer and retain speaker-relevant features only BIBREF9. Significant improvement in robustness has been achieved by the DNN-based approach BIBREF10, which makes it more suitable for applications in unconstrained conditions.",
"The success of DNN-based methods, however, largely relies on a large amount of data, in particular data that involve the true complexity in unconstrained conditions. Unfortunately, most existing datasets for speaker recognition are collected in constrained conditions, where the acoustic environment, channel and speaking style do not change significantly for each speaker BIBREF11, BIBREF12, BIBREF13. These datasets tend to deliver over optimistic performance and do not meet the request of research on speaker recognition in unconstrained conditions.",
"To address this shortage in datasets, researchers have started to collect data `in the wild'. The most successful `wild' dataset may be VoxCeleb BIBREF14, BIBREF15, which contains millions of utterances from over thousands of speakers. The utterances were collected from open-source media using a fully automated pipeline based on computer vision techniques, in particular face detection, tracking and recognition, plus video-audio synchronization. The automated pipeline is almost costless, and thus greatly improves the efficiency of data collection.",
"In this paper, we re-implement the automated pipeline of VoxCeleb and collect a new large-scale speaker dataset, named CN-Celeb. Compared with VoxCeleb, CN-Celeb has three distinct features:",
"CN-Celeb specially focuses on Chinese celebrities, and contains more than $130,000$ utterances from $1,000$ persons.",
"CN-Celeb covers more genres of speech. We intentionally collected data from 11 genres, including entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement. The speech of a particular speaker may be in more than 5 genres. As a comparison, most of the utterances in VoxCeleb were extracted from interview videos. The diversity in genres makes our database more representative for the true scenarios in unconstrained conditions, but also more challenging.",
"CN-Celeb is not fully automated, but involves human check. We found that more complex the genre is, more errors the automated pipeline tends to produce. Ironically, the error-pron segments could be highly valuable as they tend to be boundary samples. We therefore choose a two-stage strategy that employs the automated pipeline to perform pre-selection, and then perform human check.",
"The rest of the paper is organized as follows. Section SECREF2 presents a detailed description for CN-Celeb, and Section SECREF3 presents more quantitative comparisons between CN-Celeb and VoxCeleb on the speaker recognition task. Section SECREF4 concludes the entire paper."
],
[
"The original purpose of the CN-Celeb dataset is to investigate the true difficulties of speaker recognition techniques in unconstrained conditions, and provide a resource for researchers to build prototype systems and evaluate the performance. Ideally, it can be used as a standalone data source, and can be also used with other datasets together, in particular VoxCeleb which is free and large. For this reason, CN-Celeb tries to be distinguished from but also complementary to VoxCeleb from the beginning of the design. This leads to three features that we have discussed in the previous section: Chinese focused, complex genres, and quality guarantee by human check.",
"In summary, CN-Celeb contains over $130,000$ utterances from $1,000$ Chinese celebrities. It covers 11 genres and the total amount of speech waveforms is 274 hours. Table TABREF5 gives the data distribution over the genres, and Table TABREF6 presents the data distribution over the length of utterances."
],
[
"Table TABREF13 summarizes the main difference between CN-Celeb and VoxCeleb. Compared to VoxCeleb, CN-Celeb is a more complex dataset and more challenging for speaker recognition research. More details of these challenges are as follows.",
"Most of the utterances involve real-world noise, including ambient noise, background babbling, music, cheers and laugh.",
"A certain amount of utterances involve strong and overlapped background speakers, especially in the dram and movie genres.",
"Most of speakers have different genres of utterances, which results in significant variation in speaking styles.",
"The utterances of the same speaker may be recorded at different time and with different devices, leading to serious cross-time and cross-channel problems.",
"Most of the utterances are short, which meets the scenarios of most real applications but leads to unreliable decision."
],
[
"CN-Celeb was collected following a two-stage strategy: firstly we used an automated pipeline to extract potential segments of the Person of Interest (POI), and then applied a human check to remove incorrect segments. This process is much faster than purely human-based segmentation, and reduces errors caused by a purely automated process.",
"Briefly, the automated pipeline we used is similar to the one used to collect VoxCeleb1 BIBREF14 and VoxCeleb2 BIBREF15, though we made some modification to increase efficiency and precision. Especially, we introduced a new face-speaker double check step that fused the information from both the image and speech signals to increase the recall rate while maintaining the precision.",
"The detailed steps of the collection process are summarized as follows.",
"STEP 1. POI list design. We manually selected $1,000$ Chinese celebrities as our target speakers. These speakers were mostly from the entertainment sector, such as singers, drama actors/actrees, news reporters, interviewers. Region diversity was also taken into account so that variation in accent was covered.",
"STEP 2. Pictures and videos download. Pictures and videos of the $1,000$ POIs were downloaded from the data source (https://www.bilibili.com/) by searching for the names of the persons. In order to specify that we were searching for POI names, the word `human' was added in the search queries. The downloaded videos were manually examined and were categorized into the 11 genres.",
"STEP 3. Face detection and tracking. For each POI, we first obtained the portrait of the person. This was achieved by detecting and clipping the face images from all pictures of that person. The RetinaFace algorithm was used to perform the detection and clipping BIBREF16. Afterwards, video segments that contain the target person were extracted. This was achieved by three steps: (1) For each frame, detect all the faces appearing in the frame using RetinaFace; (2) Determine if the target person appears by comparing the POI portrait and the faces detected in the frame. We used the ArcFace face recognition system BIBREF17 to perform the comparison; (3) Apply the MOSSE face tracking system BIBREF18 to produce face streams.",
"STEP 4. Active speaker verification. As in BIBREF14, an active speaker verification system was employed to verify if the speech was really spoken by the target person. This is necessary as it is possible that the target person appears in the video but the speech is from other persons. We used the SyncNet model BIBREF19 as in BIBREF14 to perform the task. This model was trained to detect if a stream of mouth movement and a stream of speech are synchronized. In our implementation, the stream of mouth movement was derived from the face stream produced by the MOSSE system.",
"STEP 5. Double check by speaker recognition.",
"Although SyncNet worked well for videos in simple genres, it failed for videos of complex genres such as movie and vlog. A possible reason is that the video content of these genres may change dramatically in time, which leads to unreliable estimation for the stream of the mouth movement, hence unreliable synchronization detection. In order to improve the robustness of the active speaker verification in complex genres, we introduced a double check procedure based on speaker recognition. The idea is simple: whenever the speaker recognition system states a very low confidence for the target speaker, the segment will be discarded even if the confidence from SyncNet is high; vice versa, if the speaker recognition system states a very high confidence, the segment will be retained. We used an off-the-shelf speaker recognition system BIBREF20 to perform this double check. In our study, this double check improved the recall rate by 30% absolutely.",
"STEP 6. Human check.",
"The segments produced by the above automated pipeline were finally checked by human. According to our experience, this human check is rather efficient: one could check 1 hour of speech in 1 hour. As a comparison, if we do not apply the automated pre-selection, checking 1 hour of speech requires 4 hours."
],
[
"In this section, we present a series of experiments on speaker recognition using VoxCeleb and CN-Celeb, to compare the complexity of the two datasets."
],
[
"VoxCeleb: The entire dataset involves two parts: VoxCeleb1 and VoxCeleb2. We used SITW BIBREF21, a subset of VoxCeleb1 as the evaluation set. The rest of VoxCeleb1 was merged with VoxCeleb2 to form the training set (simply denoted by VoxCeleb). The training set involves $1,236,567$ utterances from $7,185$ speakers, and the evaluation set involves $6,445$ utterances from 299 speakers (precisely, this is the Eval. Core set within SITW).",
"CN-Celeb: The entire dataset was split into two parts: the first part CN-Celeb(T) involves $111,260$ utterances from 800 speakers and was used as the training set; the second part CN-Celeb(E) involves $18,849$ utterances from 200 speakers and was used as the evaluation set."
],
[
"Two state-of-the-art baseline systems were built following the Kaldi SITW recipe BIBREF22: an i-vector system BIBREF3 and an x-vector system BIBREF10.",
"For the i-vector system, the acoustic feature involved 24-dimensional MFCCs plus the log energy, augmented by the first- and second-order derivatives. We also applied the cepstral mean normalization (CMN) and the energy-based voice active detection (VAD). The universal background model (UBM) consisted of $2,048$ Gaussian components, and the dimensionality of the i-vector space was 400. LDA was applied to reduce the dimensionality of the i-vectors to 150. The PLDA model was used for scoring BIBREF4.",
"For the x-vector system, the feature-learning component was a 5-layer time-delay neural network (TDNN). The slicing parameters for the five time-delay layers were: {$t$-2, $t$-1, $t$, $t$+1, $t$+2}, {$t$-2, $t$, $t$+2}, {$t$-3, $t$, $t$+3}, {$t$}, {$t$}. The statistic pooling layer computed the mean and standard deviation of the frame-level features from a speech segment. The size of the output layer was consistent with the number of speakers in the training set. Once trained, the activations of the penultimate hidden layer were read out as x-vectors. In our experiments, the dimension of the x-vectors trained on VoxCeleb was set to 512, while for CN-Celeb, it was set to 256, considering the less number of speakers in the training set. Afterwards, the x-vectors were projected to 150-dimensional vectors by LDA, and finally the PLDA model was employed to score the trials. Refer to BIBREF10 for more details."
],
[
"We first present the basic results evaluated on SITW and CN-Celeb(E). Both the front-end (i-vector or x-vector models) and back-end (LDA-PLDA) models were trained with the VoxCeleb training set. Note that for SITW, the averaged length of the utterances is more than 80 seconds, while this number is about 8 seconds for CN-Celeb(E). For a better comparison, we resegmented the data of SITW and created a new dataset denoted by SITW(S), where the averaged lengths of the enrollment and test utterances are 28 and 8 seconds, respectively. These numbers are similar to the statistics of CN-Celeb(E).",
"The results in terms of the equal error rate (EER) are reported in Table TABREF24. It can be observed that for both the i-vector system and the x-vector system, the performance on CN-Celeb(E) is much worse than the performance on SITW and SITW(S). This indicates that there is big difference between these two datasets. From another perspective, it demonstrates that the model trained with VoxCeleb does not generalize well, although it has achieved reasonable performance on data from a similar source (SITW)."
],
[
"To further compare CN-Celeb and VoxCeleb in a quantitative way, we built systems based on CN-Celeb and VoxCeleb, respectively. For a fair comparison, we randomly sampled 800 speakers from VoxCeleb and built a new dataset VoxCeleb(L) whose size is comparable to CN-Celeb(T). This data set was used for back-end (LDA-PLDA) training.",
"The experimental results are shown in Table TABREF26. Note that the performance of all the comparative experiments show the same trend with the i-vector system and the x-vector system, we therefore only analyze the i-vector results.",
"Firstly, it can be seen that the system trained purely on VoxCeleb obtained good performance on SITW(S) (1st row). This is understandable as VoxCeleb and SITW(S) were collected from the same source. For the pure CN-Celeb system (2nd row), although CN-Celeb(T) and CN-Celeb(E) are from the same source, the performance is still poor (14.24%). More importantly, with re-training the back-end model with VoxCeleb(L) (4th row), the performance on SITW becomes better than the same-source result on CN-Celeb(E) (11.34% vs 14.24%). All these results reconfirmed the significant difference between the two datasets, and indicates that CN-Celeb is more challenging than VoxCeleb."
],
[
"We introduced a free dataset CN-Celeb for speaker recognition research. The dataset contains more than $130k$ utterances from $1,000$ Chinese celebrities, and covers 11 different genres in real world. We compared CN-Celeb and VoxCeleb, a widely used dataset in speaker recognition, by setting up a series of experiments based on two state-of-the-art speaker recognition models. Experimental results demonstrated that CN-Celeb is significantly different from VoxCeleb, and it is more challenging for speaker recognition research. The EER performance we obtained in this paper suggests that in unconstrained conditions, the performance of the current speaker recognition techniques might be much worse than it was thought."
]
],
"section_name": [
"Introduction",
"The CN-Celeb dataset ::: Data description",
"The CN-Celeb dataset ::: Challenges with CN-Celeb",
"The CN-Celeb dataset ::: Collection pipeline",
"Experiments on speaker recognition",
"Experiments on speaker recognition ::: Data",
"Experiments on speaker recognition ::: Settings",
"Experiments on speaker recognition ::: Basic results",
"Experiments on speaker recognition ::: Further comparison",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"45270b732239f93ee0e569f36984323d0dde8fd6"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
],
"extractive_spans": [],
"free_form_answer": "ERR of 19.05 with i-vectors and 15.52 with x-vectors",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"d52158f81f0a690c7747aea82ced7b57c7f48c2b"
],
"answer": [
{
"evidence": [
"CN-Celeb specially focuses on Chinese celebrities, and contains more than $130,000$ utterances from $1,000$ persons.",
"CN-Celeb covers more genres of speech. We intentionally collected data from 11 genres, including entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement. The speech of a particular speaker may be in more than 5 genres. As a comparison, most of the utterances in VoxCeleb were extracted from interview videos. The diversity in genres makes our database more representative for the true scenarios in unconstrained conditions, but also more challenging."
],
"extractive_spans": [
"entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement"
],
"free_form_answer": "",
"highlighted_evidence": [
"CN-Celeb specially focuses on Chinese celebrities, and contains more than $130,000$ utterances from $1,000$ persons.\n\nCN-Celeb covers more genres of speech. We intentionally collected data from 11 genres, including entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement. The speech of a particular speaker may be in more than 5 genres. As a comparison, most of the utterances in VoxCeleb were extracted from interview videos. The diversity in genres makes our database more representative for the true scenarios in unconstrained conditions, but also more challenging."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2e6fa762aa2a37f00c418a565e35068d2f14dd6a"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1. The distribution over genres."
],
"extractive_spans": [],
"free_form_answer": "genre, entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1. The distribution over genres."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"02fce27e075bf24c3867c3c0a4449bac4ef5b925"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"28915bb2904719dec4e6f3fcc4426d758d76dde1"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
],
"extractive_spans": [],
"free_form_answer": "x-vector",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"dcde763fd85294ed182df9966c9fdb8dca3ec7eb"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
],
"extractive_spans": [],
"free_form_answer": "For i-vector system, performances are 11.75% inferior to voxceleb. For x-vector system, performances are 10.74% inferior to voxceleb",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"",
"",
"",
"no",
"no",
"no"
],
"question": [
"What was the performance of both approaches on their dataset?",
"What kind of settings do the utterances come from?",
"What genres are covered?",
"Do they experiment with cross-genre setups?",
"Which of the two speech recognition models works better overall on CN-Celeb?",
"By how much is performance on CN-Celeb inferior to performance on VoxCeleb?"
],
"question_id": [
"8c0a0747a970f6ea607ff9b18cfeb738502d9a95",
"529dabe7b4a8a01b20ee099701834b60fb0c43b0",
"a2be2bd84e5ae85de2ab9968147b3d49c84dfb7f",
"5699996a7a2bb62c68c1e62e730cabf1e3186eef",
"944d5dbe0cfc64bf41ea36c11b1d378c408d40b8",
"327e6c6609fbd4c6ae76284ca639951f03eb4a4c"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"dataset",
"dataset",
"dataset",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 2. The distribution over utterance length.",
"Table 1. The distribution over genres.",
"Table 3. Comparison between CN-Celeb and VoxCeleb.",
"Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets.",
"Table 5. EER(%) results with different data settings."
],
"file": [
"2-Table2-1.png",
"2-Table1-1.png",
"2-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png"
]
} | [
"What was the performance of both approaches on their dataset?",
"What genres are covered?",
"Which of the two speech recognition models works better overall on CN-Celeb?",
"By how much is performance on CN-Celeb inferior to performance on VoxCeleb?"
] | [
[
"1911.01799-4-Table4-1.png"
],
[
"1911.01799-2-Table1-1.png"
],
[
"1911.01799-4-Table4-1.png"
],
[
"1911.01799-4-Table4-1.png"
]
] | [
"ERR of 19.05 with i-vectors and 15.52 with x-vectors",
"genre, entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement",
"x-vector",
"For i-vector system, performances are 11.75% inferior to voxceleb. For x-vector system, performances are 10.74% inferior to voxceleb"
] | 98 |
1812.06705 | Conditional BERT Contextual Augmentation | We propose a novel data augmentation method for labeled sentences called conditional BERT contextual augmentation. Data augmentation methods are often applied to prevent overfitting and improve generalization of deep neural network models. Recently proposed contextual augmentation augments labeled sentences by randomly replacing words with more varied substitutions predicted by language model. BERT demonstrates that a deep bidirectional language model is more powerful than either an unidirectional language model or the shallow concatenation of a forward and backward model. We retrofit BERT to conditional BERT by introducing a new conditional masked language model\footnote{The term"conditional masked language model"appeared once in original BERT paper, which indicates context-conditional, is equivalent to term"masked language model". In our paper,"conditional masked language model"indicates we apply extra label-conditional constraint to the"masked language model".} task. The well trained conditional BERT can be applied to enhance contextual augmentation. Experiments on six various different text classification tasks show that our method can be easily applied to both convolutional or recurrent neural networks classifier to obtain obvious improvement. | {
"paragraphs": [
[
"Deep neural network-based models are easy to overfit and result in losing their generalization due to limited size of training data. In order to address the issue, data augmentation methods are often applied to generate more training samples. Recent years have witnessed great success in applying data augmentation in the field of speech area BIBREF0 , BIBREF1 and computer vision BIBREF2 , BIBREF3 , BIBREF4 . Data augmentation in these areas can be easily performed by transformations like resizing, mirroring, random cropping, and color shifting. However, applying these universal transformations to texts is largely randomized and uncontrollable, which makes it impossible to ensure the semantic invariance and label correctness. For example, given a movie review “The actors is good\", by mirroring we get “doog si srotca ehT\", or by random cropping we get “actors is\", both of which are meaningless.",
"Existing data augmentation methods for text are often loss of generality, which are developed with handcrafted rules or pipelines for specific domains. A general approach for text data augmentation is replacement-based method, which generates new sentences by replacing the words in the sentences with relevant words (e.g. synonyms). However, words with synonyms from a handcrafted lexical database likes WordNet BIBREF5 are very limited , and the replacement-based augmentation with synonyms can only produce limited diverse patterns from the original texts. To address the limitation of replacement-based methods, Kobayashi BIBREF6 proposed contextual augmentation for labeled sentences by offering a wide range of substitute words, which are predicted by a label-conditional bidirectional language model according to the context. But contextual augmentation suffers from two shortages: the bidirectional language model is simply shallow concatenation of a forward and backward model, and the usage of LSTM models restricts their prediction ability to a short range.",
"BERT, which stands for Bidirectional Encoder Representations from Transformers, pre-trained deep bidirectional representations by jointly conditioning on both left and right context in all layers. BERT addressed the unidirectional constraint by proposing a “masked language model\" (MLM) objective by masking some percentage of the input tokens at random, and predicting the masked words based on its context. This is very similar to how contextual augmentation predict the replacement words. But BERT was proposed to pre-train text representations, so MLM task is performed in an unsupervised way, taking no label variance into consideration.",
"This paper focuses on the replacement-based methods, by proposing a novel data augmentation method called conditional BERT contextual augmentation. The method applies contextual augmentation by conditional BERT, which is fine-tuned on BERT. We adopt BERT as our pre-trained language model with two reasons. First, BERT is based on Transformer. Transformer provides us with a more structured memory for handling long-term dependencies in text. Second, BERT, as a deep bidirectional model, is strictly more powerful than the shallow concatenation of a left-to-right and right-to left model. So we apply BERT to contextual augmentation for labeled sentences, by offering a wider range of substitute words predicted by the masked language model task. However, the masked language model predicts the masked word based only on its context, so the predicted word maybe incompatible with the annotated labels of the original sentences. In order to address this issue, we introduce a new fine-tuning objective: the \"conditional masked language model\"(C-MLM). The conditional masked language model randomly masks some of the tokens from an input, and the objective is to predict a label-compatible word based on both its context and sentence label. Unlike Kobayashi's work, the C-MLM objective allows a deep bidirectional representations by jointly conditioning on both left and right context in all layers. In order to evaluate how well our augmentation method improves performance of deep neural network models, following Kobayashi BIBREF6 , we experiment it on two most common neural network structures, LSTM-RNN and CNN, on text classification tasks. Through the experiments on six various different text classification tasks, we demonstrate that the proposed conditional BERT model augments sentence better than baselines, and conditional BERT contextual augmentation method can be easily applied to both convolutional or recurrent neural networks classifier. We further explore our conditional MLM task’s connection with style transfer task and demonstrate that our conditional BERT can also be applied to style transfer too.",
"Our contributions are concluded as follows:",
"To our best knowledge, this is the first attempt to alter BERT to a conditional BERT or apply BERT on text generation tasks."
],
[
"Language model pre-training has attracted wide attention and fine-tuning on pre-trained language model has shown to be effective for improving many downstream natural language processing tasks. Dai BIBREF7 pre-trained unlabeled data to improve Sequence Learning with recurrent networks. Howard BIBREF8 proposed a general transfer learning method, Universal Language Model Fine-tuning (ULMFiT), with the key techniques for fine-tuning a language model. Radford BIBREF9 proposed that by generative pre-training of a language model on a diverse corpus of unlabeled text, large gains on a diverse range of tasks could be realized. Radford BIBREF9 achieved large improvements on many sentence-level tasks from the GLUE benchmark BIBREF10 . BERT BIBREF11 obtained new state-of-the-art results on a broad range of diverse tasks. BERT pre-trained deep bidirectional representations which jointly conditioned on both left and right context in all layers, following by discriminative fine-tuning on each specific task. Unlike previous works fine-tuning pre-trained language model to perform discriminative tasks, we aim to apply pre-trained BERT on generative tasks by perform the masked language model(MLM) task. To generate sentences that are compatible with given labels, we retrofit BERT to conditional BERT, by introducing a conditional masked language model task and fine-tuning BERT on the task."
],
[
"Text data augmentation has been extensively studied in natural language processing. Sample-based methods includes downsampling from the majority classes and oversampling from the minority class, both of which perform weakly in practice. Generation-based methods employ deep generative models such as GANs BIBREF12 or VAEs BIBREF13 , BIBREF14 , trying to generate sentences from a continuous space with desired attributes of sentiment and tense. However, sentences generated in these methods are very hard to guarantee the quality both in label compatibility and sentence readability. In some specific areas BIBREF15 , BIBREF16 , BIBREF17 . word replacement augmentation was applied. Wang BIBREF18 proposed the use of neighboring words in continuous representations to create new instances for every word in a tweet to augment the training dataset. Zhang BIBREF19 extracted all replaceable words from the given text and randomly choose $r$ of them to be replaced, then substituted the replaceable words with synonyms from WordNet BIBREF5 . Kolomiyets BIBREF20 replaced only the headwords under a task-specific assumption that temporal trigger words usually occur as headwords. Kolomiyets BIBREF20 selected substitute words with top- $K$ scores given by the Latent Words LM BIBREF21 , which is a LM based on fixed length contexts. Fadaee BIBREF22 focused on the rare word problem in machine translation, replacing words in a source sentence with only rare words. A word in the translated sentence is also replaced using a word alignment method and a rightward LM. The work most similar to our research is Kobayashi BIBREF6 . Kobayashi used a fill-in-the-blank context for data augmentation by replacing every words in the sentence with language model. In order to prevent the generated words from reversing the information related to the labels of the sentences, Kobayashi BIBREF6 introduced a conditional constraint to control the replacement of words. Unlike previous works, we adopt a deep bidirectional language model to apply replacement, and the attention mechanism within our model allows a more structured memory for handling long-term dependencies in text, which resulting in more general and robust improvement on various downstream tasks."
],
[
"In general, the language model(LM) models the probability of generating natural language sentences or documents. Given a sequence $\\textbf {\\textit {S}}$ of N tokens, $<t_1,t_2,...,t_N>$ , a forward language model allows us to predict the probability of the sequence as: ",
"$$p(t_1,t_2,...,t_N) = \\prod _{i=1}^{N}p(t_i|t_1, t_2,..., t_{i-1}).$$ (Eq. 8) ",
"Similarly, a backward language model allows us to predict the probability of the sentence as: ",
"$$p(t_1,t_2,...,t_N) = \\prod _{i=1}^{N}p(t_i|t_{i+1}, t_{i+2},..., t_N).$$ (Eq. 9) ",
"Traditionally, a bidirectional language model a shallow concatenation of independently trained forward and backward LMs.",
"In order to train a deep bidirectional language model, BERT proposed Masked Language Model (MLM) task, which was also referred to Cloze Task BIBREF23 . MLM task randomly masks some percentage of the input tokens, and then predicts only those masked tokens according to their context. Given a masked token ${t_i}$ , the context is the tokens surrounding token ${t_i}$ in the sequence $\\textbf {\\textit {S}}$ , i.e. cloze sentence ${\\textbf {\\textit {S}}\\backslash \\lbrace t_i \\rbrace }$ . The final hidden vectors corresponding to the mask tokens are fed into an output softmax over the vocabulary to produce words with a probability distribution ${p(\\cdot |\\textbf {\\textit {S}}\\backslash \\lbrace t_i \\rbrace )}$ . MLM task only predicts the masked words rather than reconstructing the entire input, which suggests that more pre-training steps are required for the model to converge. Pre-trained BERT can augment sentences through MLM task, by predicting new words in masked positions according to their context."
],
[
"As shown in Fig 1 , our conditional BERT shares the same model architecture with the original BERT. The differences are the input representation and training procedure.",
"The input embeddings of BERT are the sum of the token embeddings, the segmentation embeddings and the position embeddings. For the segmentation embeddings in BERT, a learned sentence A embedding is added to every token of the first sentence, and if a second sentence exists, a sentence B embedding will be added to every token of the second sentence. However, the segmentation embeddings has no connection to the actual annotated labels of a sentence, like sense, sentiment or subjectivity, so predicted word is not always compatible with annotated labels. For example, given a positive movie remark “this actor is good\", we have the word “good\" masked. Through the Masked Language Model task by BERT, the predicted word in the masked position has potential to be negative word likes \"bad\" or \"boring\". Such new generated sentences by substituting masked words are implausible with respect to their original labels, which will be harmful if added to the corpus to apply augmentation. In order to address this issue, we propose a new task: “conditional masked language model\".",
"The conditional masked language model randomly masks some of the tokens from the labeled sentence, and the objective is to predict the original vocabulary index of the masked word based on both its context and its label. Given a masked token ${t_i}$ , the context ${\\textbf {\\textit {S}}\\backslash \\lbrace t_i \\rbrace }$ and label ${y}$ are both considered, aiming to calculate ${p(\\cdot |y,\\textbf {\\textit {S}}\\backslash \\lbrace t_i \\rbrace )}$ , instead of calculating ${p(\\cdot |\\textbf {\\textit {S}}\\backslash \\lbrace t_i \\rbrace )}$ . Unlike MLM pre-training, the conditional MLM objective allows the representation to fuse the context information and the label information, which allows us to further train a label-conditional deep bidirectional representations.",
"To perform conditional MLM task, we fine-tune on pre-trained BERT. We alter the segmentation embeddings to label embeddings, which are learned corresponding to their annotated labels on labeled datasets. Note that the BERT are designed with segmentation embedding being embedding A or embedding B, so when a downstream task dataset with more than two labels, we have to adapt the size of embedding to label size compatible. We train conditional BERT using conditional MLM task on labeled dataset. After the model has converged, it is expected to be able to predict words in masked position both considering the context and the label."
],
[
"After the conditional BERT is well-trained, we utilize it to augment sentences. Given a labeled sentence from the corpus, we randomly mask a few words in the sentence. Through conditional BERT, various words compatibly with the label of the sentence are predicted by conditional BERT. After substituting the masked words with predicted words, a new sentences is generated, which shares similar context and same label with original sentence. Then new sentences are added to original corpus. We elaborate the entire process in algorithm \"Conditional BERT Contextual Augmentation\" .",
"Conditional BERT contextual augmentation algorithm. Fine-tuning on the pre-trained BERT , we retrofit BERT to conditional BERT using conditional MLM task on labeled dataset. After the model converged, we utilize it to augment sentences. New sentences are added into dataset to augment the dataset. [1] Alter the segmentation embeddings to label embeddings Fine-tune the pre-trained BERT using conditional MLM task on labeled dataset D until convergence each iteration i=1,2,...,M Sample a sentence $s$ from D Randomly mask $k$ words Using fine-tuned conditional BERT to predict label-compatible words on masked positions to generate a new sentence $S^{\\prime }$ Add new sentences into dataset $D$ to get augmented dataset $D^{\\prime }$ Perform downstream task on augmented dataset $D^{\\prime }$ "
],
[
"In this section, we present conditional BERT parameter settings and, following Kobayashi BIBREF6 , we apply different augmentation methods on two types of neural models through six text classification tasks. The pre-trained BERT model we used in our experiment is BERT $_{BASE}$ , with number of layers (i.e., Transformer blocks) $L = 12$ , the hidden size $ H = 768$ , and the number of self-attention heads $A = 12$ , total parameters $= 110M$ . Detailed pre-train parameters setting can be found in original paper BIBREF11 . For each task, we perform the following steps independently. First, we evaluate the augmentation ability of original BERT model pre-trained on MLM task. We use pre-trained BERT to augment dataset, by predicted masked words only condition on context for each sentence. Second, we fine-tune the original BERT model to a conditional BERT. Well-trained conditional BERT augments each sentence in dataset by predicted masked words condition on both context and label. Third, we compare the performance of the two methods with Kobayashi's BIBREF6 contextual augmentation results. Note that the original BERT’s segmentation embeddings layer is compatible with two-label dataset. When the task-specific dataset is with more than two different labels, we should re-train a label size compatible label embeddings layer instead of directly fine-tuning the pre-trained one."
],
[
"Six benchmark classification datasets are listed in table 1 . Following Kim BIBREF24 , for a dataset without validation data, we use 10% of its training set for the validation set. Summary statistics of six classification datasets are shown in table 1.",
"SST BIBREF25 SST (Stanford Sentiment Treebank) is a dataset for sentiment classification on movie reviews, which are annotated with five labels (SST5: very positive, positive, neutral, negative, or very negative) or two labels (SST2: positive or negative).",
"Subj BIBREF26 Subj (Subjectivity dataset) is annotated with whether a sentence is subjective or objective.",
"MPQA BIBREF27 MPQA Opinion Corpus is an opinion polarity detection dataset of short phrases rather than sentences, which contains news articles from a wide variety of news sources manually annotated for opinions and other private states (i.e., beliefs, emotions, sentiments, speculations, etc.).",
"RT BIBREF28 RT is another movie review sentiment dataset contains a collection of short review excerpts from Rotten Tomatoes collected by Bo Pang and Lillian Lee.",
"TREC BIBREF29 TREC is a dataset for classification of the six question types (whether the question is about person, location, numeric information, etc.)."
],
[
"We evaluate the performance improvement brought by conditional BERT contextual augmentation on sentence classification tasks, so we need to prepare two common sentence classifiers beforehand. For comparison, following Kobayashi BIBREF6 , we adopt two typical classifier architectures: CNN or LSTM-RNN. The CNN-based classifier BIBREF24 has convolutional filters of size 3, 4, 5 and word embeddings. All outputs of each filter are concatenated before applied with a max-pooling over time, then fed into a two-layer feed-forward network with ReLU, followed by the softmax function. An RNN-based classifier has a single layer LSTM and word embeddings, whose output is fed into an output affine layer with the softmax function. For both the architectures, dropout BIBREF30 and Adam optimization BIBREF31 are applied during training. The train process is finish by early stopping with validation at each epoch.",
"Sentence classifier hyper-parameters including learning rate, embedding dimension, unit or filter size, and dropout ratio, are selected using grid-search for each task-specific dataset. We refer to Kobayashi's implementation in the released code. For BERT, all hyper-parameters are kept the same as Devlin BIBREF11 , codes in Tensorflow and PyTorch are all available on github and pre-trained BERT model can also be downloaded. The number of conditional BERT training epochs ranges in [1-50] and number of masked words ranges in [1-2].",
"We compare the performance improvements obtained by our proposed method with the following baseline methods, “w/\" means “with\":",
"w/synonym: Words are randomly replaced with synonyms from WordNet BIBREF5 .",
"w/context: Proposed by Kobayashi BIBREF6 , which used a bidirectional language model to apply contextual augmentation, each word was replaced with a probability.",
"w/context+label: Kobayashi’s contextual augmentation method BIBREF6 in a label-conditional LM architecture.",
"Table 2 lists the accuracies of the all methods on two classifier architectures. The results show that, for various datasets on different classifier architectures, our conditional BERT contextual augmentation improves the model performances most. BERT can also augments sentences to some extent, but not as much as conditional BERT does. For we masked words randomly, the masked words may be label-sensitive or label-insensitive. If label-insensitive words are masked, words predicted through BERT may not be compatible with original labels. The improvement over all benchmark datasets also shows that conditional BERT is a general augmentation method for multi-labels sentence classification tasks.",
"We also explore the effect of number of training steps to the performance of conditional BERT data augmentation. The fine-tuning epoch setting ranges in [1-50], we list the fine-tuning epoch of conditional BERT to outperform BERT for various benchmarks in table 3 . The results show that our conditional BERT contextual augmentation can achieve obvious performance improvement after only a few fine-tuning epochs, which is very convenient to apply to downstream tasks."
],
[
"In this section, we further deep into the connection to style transfer and apply our well trained conditional BERT to style transfer task. Style transfer is defined as the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context BIBREF32 . Our conditional MLM task changes words in the text condition on given label without changing the context. View from this point, the two tasks are very close. So in order to apply conditional BERT to style transfer task, given a specific stylistic sentence, we break it into two steps: first, we find the words relevant to the style; second, we mask the style-relevant words, then use conditional BERT to predict new substitutes with sentence context and target style property. In order to find style-relevant words in a sentence, we refer to Xu BIBREF33 , which proposed an attention-based method to extract the contribution of each word to the sentence sentimental label. For example, given a positive movie remark “This movie is funny and interesting\", we filter out the words contributes largely to the label and mask them. Then through our conditional BERT contextual augmentation method, we fill in the masked position by predicting words conditioning on opposite label and sentence context, resulting in “This movie is boring and dull\". The words “boring\" and “dull\" contribute to the new sentence being labeled as negative style. We sample some sentences from dataset SST2, transferring them to the opposite label, as listed in table 4 ."
],
[
"In this paper, we fine-tune BERT to conditional BERT by introducing a novel conditional MLM task. After being well trained, the conditional BERT can be applied to data augmentation for sentence classification tasks. Experiment results show that our model outperforms several baseline methods obviously. Furthermore, we demonstrate that our conditional BERT can also be applied to style transfer task. In the future, (1)We will explore how to perform text data augmentation on imbalanced datasets with pre-trained language model, (2) we believe the idea of conditional BERT contextual augmentation is universal and will be applied to paragraph or document level data augmentation."
]
],
"section_name": [
"Introduction",
"Fine-tuning on Pre-trained Language Model",
"Text Data Augmentation",
"Preliminary: Masked Language Model Task",
"Conditional BERT",
"Conditional BERT Contextual Augmentation",
"Experiment",
"Datasets",
"Text classification",
"Connection to Style Transfer",
"Conclusions and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"da6a68609a4ef853fbdc85494dbb628978a9d63d"
],
"answer": [
{
"evidence": [
"SST BIBREF25 SST (Stanford Sentiment Treebank) is a dataset for sentiment classification on movie reviews, which are annotated with five labels (SST5: very positive, positive, neutral, negative, or very negative) or two labels (SST2: positive or negative).",
"Subj BIBREF26 Subj (Subjectivity dataset) is annotated with whether a sentence is subjective or objective.",
"MPQA BIBREF27 MPQA Opinion Corpus is an opinion polarity detection dataset of short phrases rather than sentences, which contains news articles from a wide variety of news sources manually annotated for opinions and other private states (i.e., beliefs, emotions, sentiments, speculations, etc.).",
"RT BIBREF28 RT is another movie review sentiment dataset contains a collection of short review excerpts from Rotten Tomatoes collected by Bo Pang and Lillian Lee.",
"TREC BIBREF29 TREC is a dataset for classification of the six question types (whether the question is about person, location, numeric information, etc.).",
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
],
"extractive_spans": [
"SST (Stanford Sentiment Treebank)",
"Subj (Subjectivity dataset)",
"MPQA Opinion Corpus",
"RT is another movie review sentiment dataset",
"TREC is a dataset for classification of the six question types"
],
"free_form_answer": "",
"highlighted_evidence": [
"SST BIBREF25 SST (Stanford Sentiment Treebank) is a dataset for sentiment classification on movie reviews, which are annotated with five labels (SST5: very positive, positive, neutral, negative, or very negative) or two labels (SST2: positive or negative).\n\nSubj BIBREF26 Subj (Subjectivity dataset) is annotated with whether a sentence is subjective or objective.\n\nMPQA BIBREF27 MPQA Opinion Corpus is an opinion polarity detection dataset of short phrases rather than sentences, which contains news articles from a wide variety of news sources manually annotated for opinions and other private states (i.e., beliefs, emotions, sentiments, speculations, etc.).\n\nRT BIBREF28 RT is another movie review sentiment dataset contains a collection of short review excerpts from Rotten Tomatoes collected by Bo Pang and Lillian Lee.\n\nTREC BIBREF29 TREC is a dataset for classification of the six question types (whether the question is about person, location, numeric information, etc.).",
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"3d4d56e4c3dcfc684bf56a1af8d6c3d0e94ab405"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
],
"extractive_spans": [],
"free_form_answer": "Accuracy across six datasets",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"5dc1d75b5817b4b29cadcfe5da1b8796e3482fe5"
],
"answer": [
{
"evidence": [
"Table 2 lists the accuracies of the all methods on two classifier architectures. The results show that, for various datasets on different classifier architectures, our conditional BERT contextual augmentation improves the model performances most. BERT can also augments sentences to some extent, but not as much as conditional BERT does. For we masked words randomly, the masked words may be label-sensitive or label-insensitive. If label-insensitive words are masked, words predicted through BERT may not be compatible with original labels. The improvement over all benchmark datasets also shows that conditional BERT is a general augmentation method for multi-labels sentence classification tasks."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The improvement over all benchmark datasets also shows that conditional BERT is a general augmentation method for multi-labels sentence classification tasks."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"09963269da86b53287634c76b47ecf335c9ce1d1"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"033ab0c50e8d68b359f9fb259227becc14b5e942"
],
"answer": [
{
"evidence": [
"Table 2 lists the accuracies of the all methods on two classifier architectures. The results show that, for various datasets on different classifier architectures, our conditional BERT contextual augmentation improves the model performances most. BERT can also augments sentences to some extent, but not as much as conditional BERT does. For we masked words randomly, the masked words may be label-sensitive or label-insensitive. If label-insensitive words are masked, words predicted through BERT may not be compatible with original labels. The improvement over all benchmark datasets also shows that conditional BERT is a general augmentation method for multi-labels sentence classification tasks."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Table 2 lists the accuracies of the all methods on two classifier architectures. The results show that, for various datasets on different classifier architectures, our conditional BERT contextual augmentation improves the model performances most. BERT can also augments sentences to some extent, but not as much as conditional BERT does."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"On what datasets is the new model evaluated on?",
"How do the authors measure performance?",
"Does the new objective perform better than the original objective bert is trained on?",
"Are other pretrained language models also evaluated for contextual augmentation? ",
"Do the authors report performance of conditional bert on tasks without data augmentation?"
],
"question_id": [
"df8cc1f395486a12db98df805248eb37c087458b",
"6e97c06f998f09256be752fa75c24ba853b0db24",
"de2d33760dc05f9d28e9dabc13bab2b3264cadb7",
"63bb39fd098786a510147f8ebc02408de350cb7c",
"6333845facb22f862ffc684293eccc03002a4830"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"BERT",
"BERT",
"BERT",
"BERT",
"BERT"
],
"topic_background": [
"research",
"research",
"research",
"research",
"familiar"
]
} | {
"caption": [
"Figure 1: Model architecture of conditional BERT. The label embeddings in conditional BERT corresponding to segmentation embeddings in BERT, but their functions are different.",
"Table 1: Summary statistics for the datasets after tokenization. c: Number of target classes. l: Average sentence length. N : Dataset size. |V |: Vocabulary size. Test: Test set size (CV means there was no standard train/test split and thus 10-fold cross-validation was used).",
"Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018).",
"Table 3: Fine-tuning epochs of conditional BERT to outperform BERT for various benchmarks",
"Table 4: Examples generated by conditional BERT on the SST2 dataset. To perform style transfer, we reverse the original label of a sentence, and conditional BERT output a new label compatible sentence."
],
"file": [
"5-Figure1-1.png",
"5-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png"
]
} | [
"How do the authors measure performance?"
] | [
[
"1812.06705-7-Table2-1.png"
]
] | [
"Accuracy across six datasets"
] | 99 |
1905.08949 | Recent Advances in Neural Question Generation | Emerging research in Neural Question Generation (NQG) has started to integrate a larger variety of inputs, and generating questions requiring higher levels of cognition. These trends point to NQG as a bellwether for NLP, about how human intelligence embodies the skills of curiosity and integration. We present a comprehensive survey of neural question generation, examining the corpora, methodologies, and evaluation methods. From this, we elaborate on what we see as emerging on NQG's trend: in terms of the learning paradigms, input modalities, and cognitive levels considered by NQG. We end by pointing out the potential directions ahead. | {
"paragraphs": [
[
"Question Generation (QG) concerns the task of “automatically generating questions from various inputs such as raw text, database, or semantic representation\" BIBREF0 . People have the ability to ask rich, creative, and revealing questions BIBREF1 ; e.g., asking Why did Gollum betray his master Frodo Baggins? after reading the fantasy novel The Lord of the Rings. How can machines be endowed with the ability to ask relevant and to-the-point questions, given various inputs? This is a challenging, complementary task to Question Answering (QA). Both QA and QG require an in-depth understanding of the input source and the ability to reason over relevant contexts. But beyond understanding, QG additionally integrates the challenges of Natural Language Generation (NLG), i.e., generating grammatically and semantically correct questions.",
"QG is of practical importance: in education, forming good questions are crucial for evaluating students’ knowledge and stimulating self-learning. QG can generate assessments for course materials BIBREF2 or be used as a component in adaptive, intelligent tutoring systems BIBREF3 . In dialog systems, fluent QG is an important skill for chatbots, e.g., in initiating conversations or obtaining specific information from human users. QA and reading comprehension also benefit from QG, by reducing the needed human labor for creating large-scale datasets. We can say that traditional QG mainly focused on generating factoid questions from a single sentence or a paragraph, spurred by a series of workshops during 2008–2012 BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 .",
"Recently, driven by advances in deep learning, QG research has also begun to utilize “neural” techniques, to develop end-to-end neural models to generate deeper questions BIBREF8 and to pursue broader applications BIBREF9 , BIBREF10 .",
"While there have been considerable advances made in NQG, the area lacks a comprehensive survey. This paper fills this gap by presenting a systematic survey on recent development of NQG, focusing on three emergent trends that deep learning has brought in QG: (1) the change of learning paradigm, (2) the broadening of the input spectrum, and (3) the generation of deep questions."
],
[
"For the sake of clean exposition, we first provide a broad overview of QG by conceptualizing the problem from the perspective of the three introduced aspects: (1) its learning paradigm, (2) its input modalities, and (3) the cognitive level it involves. This combines past research with recent trends, providing insights on how NQG connects to traditional QG research."
],
[
"QG research traditionally considers two fundamental aspects in question asking: “What to ask” and “How to ask”. A typical QG task considers the identification of the important aspects to ask about (“what to ask”), and learning to realize such identified aspects as natural language (“how to ask”). Deciding what to ask is a form of machine understanding: a machine needs to capture important information dependent on the target application, akin to automatic summarization. Learning how to ask, however, focuses on aspects of the language quality such as grammatical correctness, semantically preciseness and language flexibility.",
"Past research took a reductionist approach, separately considering these two problems of “what” and “how” via content selection and question construction. Given a sentence or a paragraph as input, content selection selects a particular salient topic worthwhile to ask about and determines the question type (What, When, Who, etc.). Approaches either take a syntactic BIBREF11 , BIBREF12 , BIBREF13 or semantic BIBREF14 , BIBREF3 , BIBREF15 , BIBREF16 tack, both starting by applying syntactic or semantic parsing, respectively, to obtain intermediate symbolic representations. Question construction then converts intermediate representations to a natural language question, taking either a tranformation- or template-based approach. The former BIBREF17 , BIBREF18 , BIBREF13 rearranges the surface form of the input sentence to produce the question; the latter BIBREF19 , BIBREF20 , BIBREF21 generates questions from pre-defined question templates. Unfortunately, such QG architectures are limiting, as their representation is confined to the variety of intermediate representations, transformation rules or templates.",
"In contrast, neural models motivate an end-to-end architectures. Deep learned frameworks contrast with the reductionist approach, admitting approaches that jointly optimize for both the “what” and “how” in an unified framework. The majority of current NQG models follow the sequence-to-sequence (Seq2Seq) framework that use a unified representation and joint learning of content selection (via the encoder) and question construction (via the decoder). In this framework, traditional parsing-based content selection has been replaced by more flexible approaches such as attention BIBREF22 and copying mechanism BIBREF23 . Question construction has become completely data-driven, requiring far less labor compared to transformation rules, enabling better language flexibility compared to question templates.",
"However, unlike other Seq2Seq learning NLG tasks, such as Machine Translation, Image Captioning, and Abstractive Summarization, which can be loosely regarded as learning a one-to-one mapping, generated questions can differ significantly when the intent of asking differs (e.g., the target answer, the target aspect to ask about, and the question's depth). In Section \"Methodology\" , we summarize different NQG methodologies based on Seq2Seq framework, investigating how some of these QG-specific factors are integrated with neural models, and discussing what could be further explored. The change of learning paradigm in NQG era is also represented by multi-task learning with other NLP tasks, for which we discuss in Section \"Multi-task Learning\" ."
],
[
"Question generation is an NLG task for which the input has a wealth of possibilities depending on applications. While a host of input modalities have been considered in other NLG tasks, such as text summarization BIBREF24 , image captioning BIBREF25 and table-to-text generation BIBREF26 , traditional QG mainly focused on textual inputs, especially declarative sentences, explained by the original application domains of question answering and education, which also typically featured textual inputs.",
"Recently, with the growth of various QA applications such as Knowledge Base Question Answering (KBQA) BIBREF27 and Visual Question Answering (VQA) BIBREF28 , NQG research has also widened the spectrum of sources to include knowledge bases BIBREF29 and images BIBREF10 . This trend is also spurred by the remarkable success of neural models in feature representation, especially on image features BIBREF30 and knowledge representations BIBREF31 . We discuss adapting NQG models to other input modalities in Section \"Wider Input Modalities\" ."
],
[
"Finally, we consider the required cognitive process behind question asking, a distinguishing factor for questions BIBREF32 . A typical framework that attempts to categorize the cognitive levels involved in question asking comes from Bloom's taxonomy BIBREF33 , which has undergone several revisions and currently has six cognitive levels: Remembering, Understanding, Applying, Analyzing, Evaluating and Creating BIBREF32 .",
"Traditional QG focuses on shallow levels of Bloom's taxonomy: typical QG research is on generating sentence-based factoid questions (e.g., Who, What, Where questions), whose answers are simple constituents in the input sentence BIBREF2 , BIBREF13 . However, a QG system achieving human cognitive level should be able to generate meaningful questions that cater to higher levels of Bloom's taxonomy BIBREF34 , such as Why, What-if, and How questions. Traditionally, those “deep” questions are generated through shallow methods such as handcrafted templates BIBREF20 , BIBREF21 ; however, these methods lack a real understanding and reasoning over the input.",
"Although asking deep questions is complex, NQG's ability to generalize over voluminous data has enabled recent research to explore the comprehension and reasoning aspects of QG BIBREF35 , BIBREF1 , BIBREF8 , BIBREF34 . We investigate this trend in Section \"Generation of Deep Questions\" , examining the limitations of current Seq2Seq model in generating deep questions, and the efforts made by existing works, indicating further directions ahead.",
"The rest of this paper provides a systematic survey of NQG, covering corpus and evaluation metrics before examining specific neural models."
],
[
"As QG can be regarded as a dual task of QA, in principle any QA dataset can be used for QG as well. However, there are at least two corpus-related factors that affect the difficulty of question generation. The first is the required cognitive level to answer the question, as we discussed in the previous section. Current NQG has achieved promising results on datasets consisting mainly of shallow factoid questions, such as SQuAD BIBREF36 and MS MARCO BIBREF38 . However, the performance drops significantly on deep question datasets, such as LearningQ BIBREF8 , shown in Section \"Generation of Deep Questions\" . The second factor is the answer type, i.e., the expected form of the answer, typically having four settings: (1) the answer is a text span in the passage, which is usually the case for factoid questions, (2) human-generated, abstractive answer that may not appear in the passage, usually the case for deep questions, (3) multiple choice question where question and its distractors should be jointly generated, and (4) no given answer, which requires the model to automatically learn what is worthy to ask. The design of NQG system differs accordingly.",
"Table 1 presents a listing of the NQG corpora grouped by their cognitive level and answer type, along with their statistics. Among them, SQuAD was used by most groups as the benchmark to evaluate their NQG models. This provides a fair comparison between different techniques. However, it raises the issue that most NQG models work on factoid questions with answer as text span, leaving other types of QG problems less investigated, such as generating deep multi-choice questions. To overcome this, a wider variety of corpora should be benchmarked against in future NQG research."
],
[
"Although the datasets are commonly shared between QG and QA, it is not the case for evaluation: it is challenging to define a gold standard of proper questions to ask. Meaningful, syntactically correct, semantically sound and natural are all useful criteria, yet they are hard to quantify. Most QG systems involve human evaluation, commonly by randomly sampling a few hundred generated questions, and asking human annotators to rate them on a 5-point Likert scale. The average rank or the percentage of best-ranked questions are reported and used for quality marks.",
"As human evaluation is time-consuming, common automatic evaluation metrics for NLG, such as BLEU BIBREF41 , METEOR BIBREF42 , and ROUGE BIBREF43 , are also widely used. However, some studies BIBREF44 , BIBREF45 have shown that these metrics do not correlate well with fluency, adequacy, coherence, as they essentially compute the $n$ -gram similarity between the source sentence and the generated question. To overcome this, BIBREF46 proposed a new metric to evaluate the “answerability” of a question by calculating the scores for several question-specific factors, including question type, content words, function words, and named entities. However, as it is newly proposed, it has not been applied to evaluate any NQG system yet.",
"To accurately measure what makes a good question, especially deep questions, improved evaluation schemes are required to specifically investigate the mechanism of question asking."
],
[
"Many current NQG models follow the Seq2Seq architecture. Under this framework, given a passage (usually a sentence) $X = (x_1, \\cdots , x_n)$ and (possibly) a target answer $A$ (a text span in the passage) as input, an NQG model aims to generate a question $Y = (y_1, \\cdots , y_m)$ asking about the target answer $A$ in the passage $X$ , which is defined as finding the best question $\\bar{Y}$ that maximizes the conditional likelihood given the passage $X$ and the answer $A$ :",
"$$\\bar{Y} & = \\arg \\max _Y P(Y \\vert X, A) \\\\\n\\vspace{-14.22636pt}\n& = \\arg \\max _Y \\sum _{t=1}^m P(y_t \\vert X, A, y_{< t})$$ (Eq. 5) ",
" BIBREF47 pioneered the first NQG model using an attention Seq2Seq model BIBREF22 , which feeds a sentence into an RNN-based encoder, and generate a question about the sentence through a decoder. The attention mechanism is applied to help decoder pay attention to the most relevant parts of the input sentence while generating a question. Note that this base model does not take the target answer as input. Subsequently, neural models have adopted attention mechanism as a default BIBREF48 , BIBREF49 , BIBREF50 .",
"Although these NQG models all share the Seq2Seq framework, they differ in the consideration of — (1) QG-specific factors (e.g., answer encoding, question word generation, and paragraph-level contexts), and (2) common NLG techniques (e.g., copying mechanism, linguistic features, and reinforcement learning) — discussed next."
],
[
"The most commonly considered factor by current NQG systems is the target answer, which is typically taken as an additional input to guide the model in deciding which information to focus on when generating; otherwise, the NQG model tend to generate questions without specific target (e.g., “What is mentioned?\"). Models have solved this by either treating the answer's position as an extra input feature BIBREF48 , BIBREF51 , or by encoding the answer with a separate RNN BIBREF49 , BIBREF52 .",
"The first type of method augments each input word vector with an extra answer indicator feature, indicating whether this word is within the answer span. BIBREF48 implement this feature using the BIO tagging scheme, while BIBREF50 directly use a binary indicator. In addition to the target answer, BIBREF53 argued that the context words closer to the answer also deserve more attention from the model, since they are usually more relevant. To this end, they incorporate trainable position embeddings $(d_{p_1}, d_{p_2}, \\cdots , d_{p_n})$ into the computation of attention distribution, where $p_i$ is the relative distance between the $i$ -th word and the answer, and $d_{p_i}$ is the embedding of $p_i$ . This achieved an extra BLEU-4 gain of $0.89$ on SQuAD.",
"To generate answer-related questions, extra answer indicators explicitly emphasize the importance of answer; however, it also increases the tendency that generated questions include words from the answer, resulting in useless questions, as observed by BIBREF52 . For example, given the input “John Francis O’Hara was elected president of Notre Dame in 1934.\", an improperly generated question would be “Who was elected John Francis?\", which exposes some words in the answer. To address this, they propose to replace the answer into a special token for passage encoding, and a separate RNN is used to encode the answer. The outputs from two encoders are concatenated as inputs to the decoder. BIBREF54 adopted a similar idea that separately encodes passage and answer, but they instead use the multi-perspective matching between two encodings as an extra input to the decoder.",
"We forecast treating the passage and the target answer separately as a future trend, as it results in a more flexible model, which generalizes to the abstractive case when the answer is not a text span in the input passage. However, this inevitably increases the model complexity and difficulty in training."
],
[
"Question words (e.g., “when”, “how”, and “why”) also play a vital role in QG; BIBREF53 observed that the mismatch between generated question words and answer type is common for current NQG systems. For example, a when-question should be triggered for answer “the end of the Mexican War\" while a why-question is generated by the model. A few works BIBREF49 , BIBREF53 considered question word generation separately in model design.",
" BIBREF49 proposed to first generate a question template that contains question word (e.g., “how to #\", where # is the placeholder), before generating the rest of the question. To this end, they train two Seq2Seq models; the former learns to generate question templates for a given text , while the latter learns to fill the blank of template to form a complete question. Instead of a two-stage framework, BIBREF53 proposed a more flexible model by introducing an additional decoding mode that generates the question word. When entering this mode, the decoder produces a question word distribution based on a restricted set of vocabulary using the answer embedding, the decoder state, and the context vector. The switch between different modes is controlled by a discrete variable produced by a learnable module of the model in each decoding step.",
"Determining the appropriate question word harks back to question type identification, which is correlated with the question intention, as different intents may yield different questions, even when presented with the same (passage, answer) input pair. This points to the direction of exploring question pragmatics, where external contextual information (such as intent) can inform and influence how questions should optimally be generated."
],
[
"Leveraging rich paragraph-level contexts around the input text is another natural consideration to produce better questions. According to BIBREF47 , around 20% of questions in SQuAD require paragraph-level information to be answered. However, as input texts get longer, Seq2Seq models have a tougher time effectively utilizing relevant contexts, while avoiding irrelevant information.",
"To address this challenge, BIBREF51 proposed a gated self-attention encoder to refine the encoded context by fusing important information with the context's self-representation properly, which has achieved state-of-the-art results on SQuAD. The long passage consisting of input texts and its context is first embedded via LSTM with answer position as an extra feature. The encoded representation is then fed through a gated self-matching network BIBREF55 to aggregate information from the entire passage and embed intra-passage dependencies. Finally, a feature fusion gate BIBREF56 chooses relevant information between the original and self-matching enhanced representations.",
"Instead of leveraging the whole context, BIBREF57 performed a pre-filtering by running a coreference resolution system on the context passage to obtain coreference clusters for both the input sentence and the answer. The co-referred sentences are then fed into a gating network, from which the outputs serve as extra features to be concatenated with the original input vectors."
],
[
"The aforementioned models require the target answer as an input, in which the answer essentially serves as the focus of asking. However, in the case that only the input passage is given, a QG system should automatically identify question-worthy parts within the passage. This task is synonymous with content selection in traditional QG. To date, only two works BIBREF58 , BIBREF59 have worked in this setting. They both follow the traditional decomposition of QG into content selection and question construction but implement each task using neural networks. For content selection, BIBREF58 learn a sentence selection task to identify question-worthy sentences from the input paragraph using a neural sequence tagging model. BIBREF59 train a neural keyphrase extractor to predict keyphrases of the passage. For question construction, they both employed the Seq2Seq model, for which the input is either the selected sentence or the input passage with keyphrases as target answer.",
"However, learning what aspect to ask about is quite challenging when the question requires reasoning over multiple pieces of information within the passage; cf the Gollum question from the introduction. Beyond retrieving question-worthy information, we believe that studying how different reasoning patterns (e.g., inductive, deductive, causal and analogical) affects the generation process will be an aspect for future study."
],
[
"Common techniques of NLG have also been considered in NQG model, summarized as 3 tactics:",
"1. Copying Mechanism. Most NQG models BIBREF48 , BIBREF60 , BIBREF61 , BIBREF50 , BIBREF62 employ the copying mechanism of BIBREF23 , which directly copies relevant words from the source sentence to the question during decoding. This idea is widely accepted as it is common to refer back to phrases and entities appearing in the text when formulating factoid questions, and difficult for a RNN decoder to generate such rare words on its own.",
"2. Linguistic Features. Approaches also seek to leverage additional linguistic features that complements word embeddings, including word case, POS and NER tags BIBREF48 , BIBREF61 as well as coreference BIBREF50 and dependency information BIBREF62 . These categorical features are vectorized and concatenated with word embeddings. The feature vectors can be either one-hot or trainable and serve as input to the encoder.",
"3. Policy Gradient. Optimizing for just ground-truth log likelihood ignores the many equivalent ways of asking a question. Relevant QG work BIBREF60 , BIBREF63 have adopted policy gradient methods to add task-specific rewards (such as BLEU or ROUGE) to the original objective. This helps to diversify the questions generated, as the model learns to distribute probability mass among equivalent expressions rather than the single ground truth question."
],
[
"In Table 2 , we summarize existing NQG models with their employed techniques and their best-reported performance on SQuAD. These methods achieve comparable results; as of this writing, BIBREF51 is the state-of-the-art.",
"Two points deserve mention. First, while the copying mechanism has shown marked improvements, there exist shortcomings. BIBREF52 observed many invalid answer-revealing questions attributed to the use of the copying mechanism; cf the John Francis example in Section \"Emerging Trends\" . They abandoned copying but still achieved a performance rivaling other systems. In parallel application areas such as machine translation, the copy mechanism has been to a large extent replaced with self-attention BIBREF64 or transformer BIBREF65 . The future prospect of the copying mechanism requires further investigation. Second, recent approaches that employ paragraph-level contexts have shown promising results: not only boosting performance, but also constituting a step towards deep question generation, which requires reasoning over rich contexts."
],
[
"We discuss three trends that we wish to call practitioners' attention to as NQG evolves to take the center stage in QG: Multi-task Learning, Wider Input Modalities and Deep Question Generation."
],
[
"As QG has become more mature, work has started to investigate how QG can assist in other NLP tasks, and vice versa. Some NLP tasks benefit from enriching training samples by QG to alleviate the data shortage problem. This idea has been successfully applied to semantic parsing BIBREF66 and QA BIBREF67 . In the semantic parsing task that maps a natural language question to a SQL query, BIBREF66 achieved a 3 $\\%$ performance gain with an enlarged training set that contains pseudo-labeled $(SQL, question)$ pairs generated by a Seq2Seq QG model. In QA, BIBREF67 employed the idea of self-training BIBREF68 to jointly learn QA and QG. The QA and QG models are first trained on a labeled corpus. Then, the QG model is used to create more questions from an unlabeled text corpus and the QA model is used to answer these newly-created questions. The newly-generated question–answer pairs form an enlarged dataset to iteratively retrain the two models. The process is repeated while performance of both models improve.",
"Investigating the core aspect of QG, we say that a well-trained QG system should have the ability to: (1) find the most salient information in the passage to ask questions about, and (2) given this salient information as target answer, to generate an answer related question. BIBREF69 leveraged the first characteristic to improve text summarization by performing multi-task learning of summarization with QG, as both these two tasks require the ability to search for salient information in the passage. BIBREF49 applied the second characteristic to improve QA. For an input question $q$ and a candidate answer $\\hat{a}$ , they generate a question $\\hat{q}$ for $\\hat{a}$ by way of QG system. Since the generated question $\\hat{q}$ is closely related to $\\hat{a}$ , the similarity between $q$ and $\\hat{q}$ helps to evaluate whether $\\hat{a}$ is the correct answer.",
"Other works focus on jointly training to combine QG and QA. BIBREF70 simultaneously train the QG and QA models in the same Seq2Seq model by alternating input data between QA and QG examples. BIBREF71 proposed a training algorithm that generalizes Generative Adversarial Network (GANs) BIBREF72 under the question answering scenario. The model improves QG by incorporating an additional QA-specific loss, and improving QA performance by adding artificially generated training instances from QG. However, while joint training has shown some effectiveness, due to the mixed objectives, its performance on QG are lower than the state-of-the-art results, which leaves room for future exploration."
],
[
"QG work now has incorporated input from knowledge bases (KBQG) and images (VQG).",
"Inspired by the use of SQuAD as a question benchmark, BIBREF9 created a 30M large-scale dataset of (KB triple, question) pairs to spur KBQG work. They baselined an attention seq2seq model to generate the target factoid question. Due to KB sparsity, many entities and predicates are unseen or rarely seen at training time. BIBREF73 address these few-/zero-shot issues by applying the copying mechanism and incorporating textual contexts to enrich the information for rare entities and relations. Since a single KB triple provides only limited information, KB-generated questions also overgeneralize — a model asks “Who was born in New York?\" when given the triple (Donald_Trump, Place_of_birth, New_York). To solve this, BIBREF29 enrich the input with a sequence of keywords collected from its related triples.",
"Visual Question Generation (VQG) is another emerging topic which aims to ask questions given an image. We categorize VQG into grounded- and open-ended VQG by the level of cognition. Grounded VQG generates visually grounded questions, i.e., all relevant information for the answer can be found in the input image BIBREF74 . A key purpose of grounded VQG is to support the dataset construction for VQA. To ensure the questions are grounded, existing systems rely on image captions to varying degrees. BIBREF75 and BIBREF76 simply convert image captions into questions using rule-based methods with textual patterns. BIBREF74 proposed a neural model that can generate questions with diverse types for a single image, using separate networks to construct dense image captions and to select question types.",
"In contrast to grounded QG, humans ask higher cognitive level questions about what can be inferred rather than what can be seen from an image. Motivated by this, BIBREF10 proposed open-ended VQG that aims to generate natural and engaging questions about an image. These are deep questions that require high cognition such as analyzing and creation. With significant progress in deep generative models, marked by variational auto-encoders (VAEs) and GANs, such models are also used in open-ended VQG to bring “creativity” into generated questions BIBREF77 , BIBREF78 , showing promising results. This also brings hope to address deep QG from text, as applied in NLG: e.g., SeqGAN BIBREF79 and LeakGAN BIBREF80 ."
],
[
"Endowing a QG system with the ability to ask deep questions will help us build curious machines that can interact with humans in a better manner. However, BIBREF81 pointed out that asking high-quality deep questions is difficult, even for humans. Citing the study from BIBREF82 to show that students in college asked only about 6 deep-reasoning questions per hour in a question–encouraging tutoring session. These deep questions are often about events, evaluation, opinions, syntheses or reasons, corresponding to higher-order cognitive levels.",
"To verify the effectiveness of existing NQG models in generating deep questions, BIBREF8 conducted an empirical study that applies the attention Seq2Seq model on LearningQ, a deep-question centric dataset containing over 60 $\\%$ questions that require reasoning over multiple sentences or external knowledge to answer. However, the results were poor; the model achieved miniscule BLEU-4 scores of $< 4$ and METEOR scores of $< 9$ , compared with $> 12$ (BLEU-4) and $> 16$ (METEOR) on SQuAD. Despite further in-depth analysis are needed to explore the reasons behind, we believe there are two plausible explanations: (1) Seq2Seq models handle long inputs ineffectively, and (2) Seq2Seq models lack the ability to reason over multiple pieces of information.",
"Despite still having a long way to go, some works have set out a path forward. A few early QG works attempted to solve this through building deep semantic representations of the entire text, using concept maps over keywords BIBREF83 or minimal recursion semantics BIBREF84 to reason over concepts in the text. BIBREF35 proposed a crowdsourcing-based workflow that involves building an intermediate ontology for the input text, soliciting question templates through crowdsourcing, and generating deep questions based on template retrieval and ranking. Although this process is semi-automatic, it provides a practical and efficient way towards deep QG. In a separate line of work, BIBREF1 proposed a framework that simulates how people ask deep questions by treating questions as formal programs that execute on the state of the world, outputting an answer.",
"Based on our survey, we believe the roadmap towards deep NGQ points towards research that will (1) enhance the NGQ model with the ability to consider relationships among multiple source sentences, (2) explicitly model typical reasoning patterns, and (3) understand and simulate the mechanism behind human question asking."
],
[
"We have presented a comprehensive survey of NQG, categorizing current NQG models based on different QG-specific and common technical variations, and summarizing three emerging trends in NQG: multi-task learning, wider input modalities, and deep question generation.",
"What's next for NGQ? We end with future potential directions by applying past insights to current NQG models; the “unknown unknown\", promising directions yet explored.",
"When to Ask: Besides learning what and how to ask, in many real-world applications that question plays an important role, such as automated tutoring and conversational systems, learning when to ask become an important issue. In contrast to general dialog management BIBREF85 , no research has explored when machine should ask an engaging question in dialog. Modeling question asking as an interactive and dynamic process may become an interesting topic ahead.",
"Personalized QG: Question asking is quite personalized: people with different characters and knowledge background ask different questions. However, integrating QG with user modeling in dialog management or recommendation system has not yet been explored. Explicitly modeling user state and awareness leads us towards personalized QG, which dovetails deep, end-to-end QG with deep user modeling and pairs the dual of generation–comprehension much in the same vein as in the vision–image generation area."
]
],
"section_name": [
"Introduction",
"Fundamental Aspects of NQG",
"Learning Paradigm",
"Input Modality",
"Cognitive Levels",
"Corpora",
"Evaluation Metrics",
"Methodology",
"Encoding Answers",
"Question Word Generation",
"Paragraph-level Contexts",
"Answer-unaware QG",
"Technical Considerations",
"The State of the Art",
"Emerging Trends",
"Multi-task Learning",
"Wider Input Modalities",
"Generation of Deep Questions",
"Conclusion – What's the Outlook?"
]
} | {
"answers": [
{
"annotation_id": [
"f0dca97a210535659f8db4ad400dd5871135086f"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
},
{
"annotation_id": [
"033cfb982d9533ed483a2d149ef6b901908303c1"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Existing NQG models with their best-reported performance on SQuAD. Legend: QW: question word generation, PC: paragraph-level context, CP: copying mechanism, LF: linguistic features, PG: policy gradient."
],
"extractive_spans": [],
"free_form_answer": "Kim et al. (2019)",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Existing NQG models with their best-reported performance on SQuAD. Legend: QW: question word generation, PC: paragraph-level context, CP: copying mechanism, LF: linguistic features, PG: policy gradient."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
},
{
"annotation_id": [
"39d19fc7612e27072ed9e84eda6fa43ba201a0bb"
],
"answer": [
{
"evidence": [
"Visual Question Generation (VQG) is another emerging topic which aims to ask questions given an image. We categorize VQG into grounded- and open-ended VQG by the level of cognition. Grounded VQG generates visually grounded questions, i.e., all relevant information for the answer can be found in the input image BIBREF74 . A key purpose of grounded VQG is to support the dataset construction for VQA. To ensure the questions are grounded, existing systems rely on image captions to varying degrees. BIBREF75 and BIBREF76 simply convert image captions into questions using rule-based methods with textual patterns. BIBREF74 proposed a neural model that can generate questions with diverse types for a single image, using separate networks to construct dense image captions and to select question types.",
"In contrast to grounded QG, humans ask higher cognitive level questions about what can be inferred rather than what can be seen from an image. Motivated by this, BIBREF10 proposed open-ended VQG that aims to generate natural and engaging questions about an image. These are deep questions that require high cognition such as analyzing and creation. With significant progress in deep generative models, marked by variational auto-encoders (VAEs) and GANs, such models are also used in open-ended VQG to bring “creativity” into generated questions BIBREF77 , BIBREF78 , showing promising results. This also brings hope to address deep QG from text, as applied in NLG: e.g., SeqGAN BIBREF79 and LeakGAN BIBREF80 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Visual Question Generation (VQG) is another emerging topic which aims to ask questions given an image.",
"Motivated by this, BIBREF10 proposed open-ended VQG that aims to generate natural and engaging questions about an image."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
},
{
"annotation_id": [
"2cfe5b5774f9893b33adef8c99a236f8bfa1183c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
},
{
"annotation_id": [
"33fe23afb062027041fcc9b9dc9eaac9d38258e1"
],
"answer": [
{
"evidence": [
"Past research took a reductionist approach, separately considering these two problems of “what” and “how” via content selection and question construction. Given a sentence or a paragraph as input, content selection selects a particular salient topic worthwhile to ask about and determines the question type (What, When, Who, etc.). Approaches either take a syntactic BIBREF11 , BIBREF12 , BIBREF13 or semantic BIBREF14 , BIBREF3 , BIBREF15 , BIBREF16 tack, both starting by applying syntactic or semantic parsing, respectively, to obtain intermediate symbolic representations. Question construction then converts intermediate representations to a natural language question, taking either a tranformation- or template-based approach. The former BIBREF17 , BIBREF18 , BIBREF13 rearranges the surface form of the input sentence to produce the question; the latter BIBREF19 , BIBREF20 , BIBREF21 generates questions from pre-defined question templates. Unfortunately, such QG architectures are limiting, as their representation is confined to the variety of intermediate representations, transformation rules or templates.",
"In contrast, neural models motivate an end-to-end architectures. Deep learned frameworks contrast with the reductionist approach, admitting approaches that jointly optimize for both the “what” and “how” in an unified framework. The majority of current NQG models follow the sequence-to-sequence (Seq2Seq) framework that use a unified representation and joint learning of content selection (via the encoder) and question construction (via the decoder). In this framework, traditional parsing-based content selection has been replaced by more flexible approaches such as attention BIBREF22 and copying mechanism BIBREF23 . Question construction has become completely data-driven, requiring far less labor compared to transformation rules, enabling better language flexibility compared to question templates."
],
"extractive_spans": [],
"free_form_answer": "Considering \"What\" and \"How\" separately versus jointly optimizing for both.",
"highlighted_evidence": [
"Past research took a reductionist approach, separately considering these two problems of “what” and “how” via content selection and question construction. ",
"In contrast, neural models motivate an end-to-end architectures. Deep learned frameworks contrast with the reductionist approach, admitting approaches that jointly optimize for both the “what” and “how” in an unified framework. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
},
{
"annotation_id": [
"8bffad2892b897cc62faaa4e8b63c452cb530ccf"
],
"answer": [
{
"evidence": [
"Question generation is an NLG task for which the input has a wealth of possibilities depending on applications. While a host of input modalities have been considered in other NLG tasks, such as text summarization BIBREF24 , image captioning BIBREF25 and table-to-text generation BIBREF26 , traditional QG mainly focused on textual inputs, especially declarative sentences, explained by the original application domains of question answering and education, which also typically featured textual inputs.",
"Recently, with the growth of various QA applications such as Knowledge Base Question Answering (KBQA) BIBREF27 and Visual Question Answering (VQA) BIBREF28 , NQG research has also widened the spectrum of sources to include knowledge bases BIBREF29 and images BIBREF10 . This trend is also spurred by the remarkable success of neural models in feature representation, especially on image features BIBREF30 and knowledge representations BIBREF31 . We discuss adapting NQG models to other input modalities in Section \"Wider Input Modalities\" ."
],
"extractive_spans": [],
"free_form_answer": "Textual inputs, knowledge bases, and images.",
"highlighted_evidence": [
"While a host of input modalities have been considered in other NLG tasks, such as text summarization BIBREF24 , image captioning BIBREF25 and table-to-text generation BIBREF26 , traditional QG mainly focused on textual inputs, especially declarative sentences, explained by the original application domains of question answering and education, which also typically featured textual inputs.\n\nRecently, with the growth of various QA applications such as Knowledge Base Question Answering (KBQA) BIBREF27 and Visual Question Answering (VQA) BIBREF28 , NQG research has also widened the spectrum of sources to include knowledge bases BIBREF29 and images BIBREF10 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
},
{
"annotation_id": [
"7f2e8aadea59b20f3df567dc0140fedb23f4a347"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they cover data augmentation papers?",
"What is the latest paper covered by this survey?",
"Do they survey visual question generation work?",
"Do they survey multilingual aspects?",
"What learning paradigms do they cover in this survey?",
"What are all the input modalities considered in prior work in question generation?",
"Do they survey non-neural methods for question generation?"
],
"question_id": [
"a12a08099e8193ff2833f79ecf70acf132eda646",
"999b20dc14cb3d389d9e3ba5466bc3869d2d6190",
"ca4b66ffa4581f9491442dcec78ca556253c8146",
"b3ff166bd480048e099d09ba4a96e2e32b42422b",
"3703433d434f1913307ceb6a8cfb9a07842667dd",
"f7c34b128f8919e658ba4d5f1f3fc604fb7ff793",
"d42031893fd4ba5721c7d37e1acb1c8d229ffc21"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question generation",
"question generation",
"question generation",
"question generation",
"question generation",
"question generation",
"question generation"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: NQG datasets grouped by their cognitive level and answer type, where the number of documents, the number of questions, and the average number of questions per document (Q./Doc) for each corpus are listed.",
"Table 2: Existing NQG models with their best-reported performance on SQuAD. Legend: QW: question word generation, PC: paragraph-level context, CP: copying mechanism, LF: linguistic features, PG: policy gradient."
],
"file": [
"4-Table1-1.png",
"7-Table2-1.png"
]
} | [
"What is the latest paper covered by this survey?",
"What learning paradigms do they cover in this survey?",
"What are all the input modalities considered in prior work in question generation?"
] | [
[
"1905.08949-7-Table2-1.png"
],
[
"1905.08949-Learning Paradigm-2",
"1905.08949-Learning Paradigm-1"
],
[
"1905.08949-Input Modality-1",
"1905.08949-Input Modality-0"
]
] | [
"Kim et al. (2019)",
"Considering \"What\" and \"How\" separately versus jointly optimizing for both.",
"Textual inputs, knowledge bases, and images."
] | 100 |
1902.06843 | Fusing Visual, Textual and Connectivity Clues for Studying Mental Health | With ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike the most existing efforts which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions. | {
"paragraphs": [
[
"0pt*0*0",
"0pt*0*0",
"0pt*0*0 0.95",
"1]Amir Hossein Yazdavar 1]Mohammad Saeid Mahdavinejad 2]Goonmeet Bajaj",
" 3]William Romine 1]Amirhassan Monadjemi 1]Krishnaprasad Thirunarayan",
" 1]Amit Sheth 4]Jyotishman Pathak [1]Department of Computer Science & Engineering, Wright State University, OH, USA [2]Ohio State University, Columbus, OH, USA [3]Department of Biological Science, Wright State University, OH, USA [4] Division of Health Informatics, Weill Cornell University, New York, NY, USA",
"[1] yazdavar.2@wright.edu",
"With ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike the most existing efforts which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions."
],
[
"Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% (i.e., about 16 million) Americans each year . According to the World Mental Health Survey conducted in 17 countries, on average, about 5% of people reported having an episode of depression in 2011 BIBREF0 . Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.",
"Global efforts to curb clinical depression involve identifying depression through survey-based methods employing phone or online questionnaires. These approaches suffer from under-representation as well as sampling bias (with very small group of respondents.) In contrast, the widespread adoption of social media where people voluntarily and publicly express their thoughts, moods, emotions, and feelings, and even share their daily struggles with mental health problems has not been adequately tapped into studying mental illnesses, such as depression. The visual and textual content shared on different social media platforms like Twitter offer new opportunities for a deeper understanding of self-expressed depression both at an individual as well as community-level. Previous research efforts have suggested that language style, sentiment, users' activities, and engagement expressed in social media posts can predict the likelihood of depression BIBREF1 , BIBREF2 . However, except for a few attempts BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , these investigations have seldom studied extraction of emotional state from visual content of images in posted/profile images. Visual content can express users' emotions more vividly, and psychologists noted that imagery is an effective medium for communicating difficult emotions.",
"According to eMarketer, photos accounted for 75% of content posted on Facebook worldwide and they are the most engaging type of content on Facebook (87%). Indeed, \"a picture is worth a thousand words\" and now \"photos are worth a million likes.\" Similarly, on Twitter, the tweets with image links get twice as much attention as those without , and video-linked tweets drive up engagement . The ease and naturalness of expression through visual imagery can serve to glean depression-indicators in vulnerable individuals who often seek social support through social media BIBREF7 . Further, as psychologist Carl Rogers highlights, we often pursue and promote our Ideal-Self . In this regard, the choice of profile image can be a proxy for the online persona BIBREF8 , providing a window into an individual's mental health status. For instance, choosing emaciated legs of girls covered with several cuts as profile image portrays negative self-view BIBREF9 .",
"Inferring demographic information like gender and age can be crucial for stratifying our understanding of population-level epidemiology of mental health disorders. Relying on electronic health records data, previous studies explored gender differences in depressive behavior from different angles including prevalence, age at onset, comorbidities, as well as biological and psychosocial factors. For instance, women have been diagnosed with depression twice as often as men BIBREF10 and national psychiatric morbidity survey in Britain has shown higher risk of depression in women BIBREF11 . On the other hand, suicide rates for men are three to five times higher compared to that of the women BIBREF12 .",
"Although depression can affect anyone at any age, signs and triggers of depression vary for different age groups . Depression triggers for children include parental depression, domestic violence, and loss of a pet, friend or family member. For teenagers (ages 12-18), depression may arise from hormonal imbalance, sexuality concerns and rejection by peers. Young adults (ages 19-29) may develop depression due to life transitions, poverty, trauma, and work issues. Adult (ages 30-60) depression triggers include caring simultaneously for children and aging parents, financial burden, work and relationship issues. Senior adults develop depression from common late-life issues, social isolation, major life loses such as the death of a spouse, financial stress and other chronic health problems (e.g., cardiac disease, dementia). Therefore, inferring demographic information while studying depressive behavior from passively sensed social data, can shed better light on the population-level epidemiology of depression.",
"The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.",
"We address and derive answers to the following research questions: 1) How well do the content of posted images (colors, aesthetic and facial presentation) reflect depressive behavior? 2) Does the choice of profile picture show any psychological traits of depressed online persona? Are they reliable enough to represent the demographic information such as age and gender? 3) Are there any underlying common themes among depressed individuals generated using multimodal content that can be used to detect depression reliably?"
],
[
"Mental Health Analysis using Social Media:",
"Several efforts have attempted to automatically detect depression from social media content utilizing machine/deep learning and natural language processing approaches. Conducting a retrospective study over tweets, BIBREF14 characterizes depression based on factors such as language, emotion, style, ego-network, and user engagement. They built a classifier to predict the likelihood of depression in a post BIBREF14 , BIBREF15 or in an individual BIBREF1 , BIBREF16 , BIBREF17 , BIBREF18 . Moreover, there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop (CLP 2015). A corpus of nearly 1,800 Twitter users was built for evaluation, and the best models employed topic modeling BIBREF20 , Linguistic Inquiry and Word Count (LIWC) features, and other metadata BIBREF21 . More recently, a neural network architecture introduced by BIBREF22 combined posts into a representation of user's activities for detecting depressed users. Another active line of research has focused on capturing suicide and self-harm signals BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF2 , BIBREF27 . Moreover, the CLP 2016 BIBREF28 defined a shared task on detecting the severity of the mental health from forum posts. All of these studies derive discriminative features to classify depression in user-generated content at message-level, individual-level or community-level. Recent emergence of photo-sharing platforms such as Instagram, has attracted researchers attention to study people's behavior from their visual narratives – ranging from mining their emotions BIBREF29 , and happiness trend BIBREF30 , to studying medical concerns BIBREF31 . Researchers show that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4 . The role of visual imagery as a mechanism of self-disclosure by relating visual attributes to mental health disclosures on Instagram was highlighted by BIBREF3 , BIBREF5 where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression. The importance of data modality to understand user behavior on social media was highlighted by BIBREF32 . More recently, a deep neural network sequence modeling approach that marries audio and text data modalities to analyze question-answer style interviews between an individual and an agent has been developed to study mental health BIBREF32 . Similarly, a multimodal depressive dictionary learning was proposed to detect depressed users on Twitter BIBREF33 . They provide a sparse user representations by defining a feature set consisting of social network features, user profile features, visual features, emotional features BIBREF34 , topic-level features, and domain-specific features. Particularly, our choice of multi-model prediction framework is intended to improve upon the prior works involving use of images in multimodal depression analysis BIBREF33 and prior works on studying Instagram photos BIBREF6 , BIBREF35 .",
"Demographic information inference on Social Media: ",
"There is a growing interest in understanding online user's demographic information due to its numerous applications in healthcare BIBREF36 , BIBREF37 . A supervised model developed by BIBREF38 for determining users' gender by employing features such as screen-name, full-name, profile description and content on external resources (e.g., personal blog). Employing features including emoticons, acronyms, slangs, punctuations, capitalization, sentence length and included links/images, along with online behaviors such as number of friends, post time, and commenting activity, a supervised model was built for predicting user's age group BIBREF39 . Utilizing users life stage information such as secondary school student, college student, and employee, BIBREF40 builds age inference model for Dutch Twitter users. Similarly, relying on profile descriptions while devising a set of rules and patterns, a novel model introduced for extracting age for Twitter users BIBREF41 . They also parse description for occupation by consulting the SOC2010 list of occupations and validating it through social surveys. A novel age inference model was developed while relying on homophily interaction information and content for predicting age of Twitter users BIBREF42 . The limitations of textual content for predicting age and gender was highlighted by BIBREF43 . They distinguish language use based on social gender, age identity, biological sex and chronological age by collecting crowdsourced signals using a game in which players (crowd) guess the biological sex and age of a user based only on their tweets. Their findings indicate how linguistic markers can misguide (e.g., a heart represented as <3 can be misinterpreted as feminine when the writer is male.) Estimating age and gender from facial images by training a convolutional neural networks (CNN) for face recognition is an active line of research BIBREF44 , BIBREF13 , BIBREF45 ."
],
[
"Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies e.g., for predicting demographics BIBREF36 , BIBREF41 , and user's depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., \"16 years old suicidal girl\"(see Figure FIGREF15 ). We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url.",
"Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as \"17 years old, self-harm, anxiety, depression\") BIBREF41 . We compile \"age prefixes\" and \"age suffixes\", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a \"date\" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51 ",
"Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter."
],
[
"We now provide an in-depth analysis of visual and textual content of vulnerable users.",
"Visual Content Analysis: We show that the visual content in images from posts as well as profiles provide valuable psychological cues for understanding a user's depression status. Profile/posted images can surface self-stigmatization BIBREF53 . Additionally, as opposed to typical computer vision framework for object recognition that often relies on thousands of predetermined low-level features, what matters more for assessing user's online behavior is the emotions reflected in facial expressions BIBREF54 , attributes contributing to the computational aesthetics BIBREF55 , and sentimental quotes they may subscribe to (Figure FIGREF15 ) BIBREF8 .",
"Facial Presence: ",
"For capturing facial presence, we rely on BIBREF56 's approach that uses multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images . Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0 . With control class showing significantly higher in both profile and media (8%, 9% respectively) compared to that for the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.",
"Facial Expression:",
"Following BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0 , we process profile/shared images for both the depressed and the control groups with at least one face from the shared images (Table TABREF23 ). For the photos that contain multiple faces, we measure the average emotion.",
"Figure FIGREF27 illustrates the inter-correlation of these features. Additionally, we observe that emotions gleaned from facial expressions correlated with emotional signals captured from textual content utilizing LIWC. This indicates visual imagery can be harnessed as a complementary channel for measuring online emotional signals.",
"General Image Features:",
"The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58 , BIBREF59 . We measured the normalized red, green, blue and the mean of original colors, and brightness and contrast relative to variations of luminance. We represent images in Hue-Saturation-Value color space that seems intuitive for humans, and measure mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not interpretable, high saturation indicates vividness and chromatic purity which are more appealing to the human eye BIBREF8 . Colorfulness is measured as a difference against gray background BIBREF60 . Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 . In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract text and their sentiment score. As illustrated in Table TABREF26 , vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15 ). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ independent t-test, while adopting Bonferroni Correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2 , ** INLINEFORM3 ).",
"** alpha= 0.05, *** alpha = 0.05/223",
"Demographics Inference & Language Cues: LIWC has been used extensively for examining the latent dimensions of self-expression for analyzing personality BIBREF61 , depressive behavior, demographic differences BIBREF43 , BIBREF40 , etc. Several studies highlight that females employ more first-person singular pronouns BIBREF62 , and deictic language BIBREF63 , while males tend to use more articles BIBREF64 which characterizes concrete thinking, and formal, informational and affirmation words BIBREF65 . For age analysis, the salient findings include older individuals using more future tense verbs BIBREF62 triggering a shift in focus while aging. They also show positive emotions BIBREF66 and employ fewer self-references (i.e. 'I', 'me') with greater first person plural BIBREF62 . Depressed users employ first person pronouns more frequently BIBREF67 , repeatedly use negative emotions and anger words. We analyzed psycholinguistic cues and language style to study the association between depressive behavior as well as demographics. Particularly, we adopt Levinson's adult development grouping that partitions users in INLINEFORM0 into 5 age groups: (14,19],(19,23], (23,34],(34,46], and (46,60]. Then, we apply LIWC for characterizing linguistic styles for each age group for users in INLINEFORM1 .",
"Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)",
"Thinking Style:",
"Measuring people's natural ways of trying to analyze, and organize complex events have strong association with analytical thinking. LIWC relates higher analytic thinking to more formal and logical reasoning whereas a lower value indicates focus on narratives. Also, cognitive processing measures problem solving in mind. Words such as \"think,\" \"realize,\" and \"know\" indicates the degree of \"certainty\" in communications. Critical thinking ability relates to education BIBREF68 , and is impacted by different stages of cognitive development at different ages . It has been shown that older people communicate with greater cognitive complexity while comprehending nuances and subtle differences BIBREF62 . We observe a similar pattern in our data (Table TABREF40 .) A recent study highlights how depression affects brain and thinking at molecular level using a rat model BIBREF69 . Depression can promote cognitive dysfunction including difficulty in concentrating and making decisions. We observed a notable differences in the ability to think analytically in depressed and control users in different age groups (see Figure FIGREF39 - A, F and Table TABREF40 ). Overall, vulnerable younger users are not logical thinkers based on their relative analytical score and cognitive processing ability.",
"Authenticity:",
"Authenticity measures the degree of honesty. Authenticity is often assessed by measuring present tense verbs, 1st person singular pronouns (I, me, my), and by examining the linguistic manifestations of false stories BIBREF70 . Liars use fewer self-references and fewer complex words. Psychologists often see a child's first successfull lie as a mental growth. There is a decreasing trend of the Authenticity with aging (see Figure FIGREF39 -B.) Authenticity for depressed youngsters is strikingly higher than their control peers. It decreases with age (Figure FIGREF39 -B.)",
"Clout:",
"People with high clout speak more confidently and with certainty, employing more social words with fewer negations (e.g., no, not) and swear words. In general, midlife is relatively stable w.r.t. relationships and work. A recent study shows that age 60 to be best for self-esteem BIBREF71 as people take on managerial roles at work and maintain a satisfying relationship with their spouse. We see the same pattern in our data (see Figure FIGREF39 -C and Table TABREF40 ). Unsurprisingly, lack of confidence (the 6th PHQ-9 symptom) is a distinguishable characteristic of vulnerable users, leading to their lower clout scores, especially among depressed users before middle age (34 years old).",
"Self-references:",
"First person singular words are often seen as indicating interpersonal involvement and their high usage is associated with negative affective states implying nervousness and depression BIBREF66 . Consistent with prior studies, frequency of first person singular for depressed people is significantly higher compared to that of control class. Similarly to BIBREF66 , youngsters tend to use more first-person (e.g. I) and second person singular (e.g. you) pronouns (Figure FIGREF39 -G).",
"Informal Language Markers; Swear, Netspeak:",
"Several studies highlighted the use of profanity by young adults has significantly increased over the last decade BIBREF72 . We observed the same pattern in both the depressed and the control classes (Table TABREF40 ), although it's rate is higher for depressed users BIBREF1 . Psychologists have also shown that swearing can indicate that an individual is not a fragmented member of a society. Depressed youngsters, showing higher rate of interpersonal involvement and relationships, have a higher rate of cursing (Figure FIGREF39 -E). Also, Netspeak lexicon measures the frequency of terms such as lol and thx.",
"Sexual, Body: ",
"Sexual lexicon contains terms like \"horny\", \"love\" and \"incest\", and body terms like \"ache\", \"heart\", and \"cough\". Both start with a higher rate for depressed users while decreasing gradually while growing up, possibly due to changes in sexual desire as we age (Figure FIGREF39 -H,I and Table TABREF40 .)",
"Quantitative Language Analysis:",
"We employ one-way ANOVA to compare the impact of various factors and validate our findings above. Table TABREF40 illustrates our findings, with a degree of freedom (df) of 1055. The null hypothesis is that the sample means' for each age group are similar for each of the LIWC features.",
"*** alpha = 0.001, ** alpha = 0.01, * alpha = 0.05"
],
[
"We leverage both the visual and textual content for predicting age and gender.",
"Prediction with Textual Content:",
"We employ BIBREF73 's weighted lexicon of terms that uses the dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of this lexica was evaluated on Twitter, blog, and Facebook, showing promising results BIBREF73 . Utilizing these two weighted lexicon of terms, we are predicting the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using following equation: INLINEFORM2 ",
"where INLINEFORM0 is the lexicon weight of the term, and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 , and INLINEFORM3 measures total word count in INLINEFORM4 . As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all the users above 23 -416 users), and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]. The average accuracy of this model is 0.63 for depressed users and 0.64 for control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on INLINEFORM5 ground-truth dataset.",
"Prediction with Visual Imagery:",
"Inspired by BIBREF56 's approach for facial landmark localization, we use their pretrained CNN consisting of convolutional layers, including unshared and fully-connected layers, to predict gender and age from both the profile and shared images. We evaluate the performance for gender and age prediction task on INLINEFORM0 and INLINEFORM1 respectively as shown in Table TABREF42 and Table TABREF44 .",
"Demographic Prediction Analysis:",
"We delve deeper into the benefits and drawbacks of each data modality for demographic information prediction. This is crucial as the differences between language cues between age groups above age 35 tend to become smaller (see Figure FIGREF39 -A,B,C) and making the prediction harder for older people BIBREF74 . In this case, the other data modality (e.g., visual content) can play integral role as a complementary source for age inference. For gender prediction (see Table TABREF44 ), on average, the profile image-based predictor provides a more accurate prediction for both the depressed and control class (0.92 and 0.90) compared to content-based predictor (0.82). For age prediction (see Table TABREF42 ), textual content-based predictor (on average 0.60) outperforms both of the visual-based predictors (on average profile:0.51, Media:0.53).",
"However, not every user provides facial identity on his account (see Table TABREF21 ). We studied facial presentation for each age-group to examine any association between age-group, facial presentation and depressive behavior (see Table TABREF43 ). We can see youngsters in both depressed and control class are not likely to present their face on profile image. Less than 3% of vulnerable users between 11-19 years reveal their facial identity. Although content-based gender predictor was not as accurate as image-based one, it is adequate for population-level analysis."
],
[
"We use the above findings for predicting depressive behavior. Our model exploits early fusion BIBREF32 technique in feature space and requires modeling each user INLINEFORM0 in INLINEFORM1 as vector concatenation of individual modality features. As opposed to computationally expensive late fusion scheme where each modality requires a separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and all relevant ensemble learning models. It adds randomness to the data by creating shuffled copies of all features (shadow feature), and then trains Random Forest classifier on the extended data. Iteratively, it checks whether the actual feature has a higher Z-score than its shadow feature (See Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 .",
"Main each Feature INLINEFORM0 INLINEFORM1 ",
"RndForrest( INLINEFORM0 ) Calculate Imp INLINEFORM1 INLINEFORM2 Generate next hypothesis , INLINEFORM3 Once all hypothesis generated Perform Statistical Test INLINEFORM4 //Binomial Distribution INLINEFORM5 Feature is important Feature is important",
" Ensemble Feature Selection",
"Next, we adopt an ensemble learning method that integrates the predictive power of multiple learners with two main advantages; its interpretability with respect to the contributions of each feature and its high predictive power. For prediction we have INLINEFORM0 where INLINEFORM1 is a weak learner and INLINEFORM2 denotes the final prediction.",
"In particular, we optimize the loss function: INLINEFORM0 where INLINEFORM1 incorporates INLINEFORM2 and INLINEFORM3 regularization. In each iteration, the new INLINEFORM4 is obtained by fitting weak learner to the negative gradient of loss function. Particularly, by estimating the loss function with Taylor expansion : INLINEFORM5 where its first expression is constant, the second and the third expressions are first ( INLINEFORM6 ) and second order derivatives ( INLINEFORM7 ) of the loss. INLINEFORM8 ",
"For exploring the weak learners, assume INLINEFORM0 has k leaf nodes, INLINEFORM1 be subset of users from INLINEFORM2 belongs to the node INLINEFORM3 , and INLINEFORM4 denotes the prediction for node INLINEFORM5 . Then, for each user INLINEFORM6 belonging to INLINEFORM7 , INLINEFORM8 and INLINEFORM9 INLINEFORM10 ",
"Next, for each leaf node INLINEFORM0 , deriving w.r.t INLINEFORM1 : INLINEFORM2 ",
"and by substituting weights: INLINEFORM0 ",
"which represents the loss for fixed weak learners with INLINEFORM0 nodes. The trees are built sequentially such that each subsequent tree aims to reduce the errors of its predecessor tree. Although, the weak learners have high bias, the ensemble model produces a strong learner that effectively integrate the weak learners by reducing bias and variance (the ultimate goal of supervised models) BIBREF77 . Table TABREF48 illustrates our multimodal framework outperform the baselines for identifying depressed users in terms of average specificity, sensitivity, F-Measure, and accuracy in 10-fold cross-validation setting on INLINEFORM1 dataset. Figure FIGREF47 shows how the likelihood of being classified into the depressed class varies with each feature addition to the model for a sample user in the dataset. The prediction bar (the black bar) shows that the log-odds of prediction is 0.31, that is, the likelihood of this person being a depressed user is 57% (1 / (1 + exp(-0.3))). The figure also sheds light on the impact of each contributing feature. The waterfall charts represent how the probability of being depressed changes with the addition of each feature variable. For instance, the \"Analytic thinking\" of this user is considered high 48.43 (Median:36.95, Mean: 40.18) and this decreases the chance of this person being classified into the depressed group by the log-odds of -1.41. Depressed users have significantly lower \"Analytic thinking\" score compared to control class. Moreover, the 40.46 \"Clout\" score is a low value (Median: 62.22, Mean: 57.17) and it decreases the chance of being classified as depressed. With respect to the visual features, for instance, the mean and the median of 'shared_colorfulness' is 112.03 and 113 respectively. The value of 136.71 would be high; thus, it decreases the chance of being depressed for this specific user by log-odds of -0.54. Moreover, the 'profile_naturalness' of 0.46 is considered high compared to 0.36 as the mean for the depressed class which justifies pull down of the log-odds by INLINEFORM2 . For network features, for instance, 'two_hop_neighborhood' for depressed users (Mean : 84) are less than that of control users (Mean: 154), and is reflected in pulling down the log-odds by -0.27.",
"Baselines:",
"To test the efficacy of our multi-modal framework for detecting depressed users, we compare it against existing content, content-network, and image-based models (based on the aforementioned general image feature, facial presence, and facial expressions.)"
]
],
"section_name": [
null,
"Introduction",
"Related Work",
"Dataset",
"Data Modality Analysis",
"Demographic Prediction",
"Multi-modal Prediction Framework"
]
} | {
"answers": [
{
"annotation_id": [
"9069ef5e523b402dc27ab4c3defb1b547af8c8f2"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"03c66dab424666d2bf7457daa5023bb03bbbc691"
],
"answer": [
{
"evidence": [
"Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as \"17 years old, self-harm, anxiety, depression\") BIBREF41 . We compile \"age prefixes\" and \"age suffixes\", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a \"date\" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51",
"Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter."
],
"extractive_spans": [
"either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age",
"more women than men were given a diagnosis of depression"
],
"free_form_answer": "",
"highlighted_evidence": [
"The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.)",
"Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"6f84296097eea6526dcfb59e23889bc1f5d592da"
],
"answer": [
{
"evidence": [
"We use the above findings for predicting depressive behavior. Our model exploits early fusion BIBREF32 technique in feature space and requires modeling each user INLINEFORM0 in INLINEFORM1 as vector concatenation of individual modality features. As opposed to computationally expensive late fusion scheme where each modality requires a separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and all relevant ensemble learning models. It adds randomness to the data by creating shuffled copies of all features (shadow feature), and then trains Random Forest classifier on the extended data. Iteratively, it checks whether the actual feature has a higher Z-score than its shadow feature (See Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 ."
],
"extractive_spans": [
"Random Forest classifier"
],
"free_form_answer": "",
"highlighted_evidence": [
"To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and all relevant ensemble learning models. It adds randomness to the data by creating shuffled copies of all features (shadow feature), and then trains Random Forest classifier on the extended data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"ea594b61eb07e9789c7d05668b77afa1a5f339b6"
],
"answer": [
{
"evidence": [
"We employ BIBREF73 's weighted lexicon of terms that uses the dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of this lexica was evaluated on Twitter, blog, and Facebook, showing promising results BIBREF73 . Utilizing these two weighted lexicon of terms, we are predicting the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using following equation: INLINEFORM2",
"where INLINEFORM0 is the lexicon weight of the term, and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 , and INLINEFORM3 measures total word count in INLINEFORM4 . As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all the users above 23 -416 users), and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]. The average accuracy of this model is 0.63 for depressed users and 0.64 for control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on INLINEFORM5 ground-truth dataset."
],
"extractive_spans": [],
"free_form_answer": "Demographic information is predicted using weighted lexicon of terms.",
"highlighted_evidence": [
"We employ BIBREF73 's weighted lexicon of terms that uses the dataset of 75,394 Facebook users who shared their status, age and gender.",
"Utilizing these two weighted lexicon of terms, we are predicting the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using following equation: INLINEFORM2\n\nwhere INLINEFORM0 is the lexicon weight of the term, and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 , and INLINEFORM3 measures total word count in INLINEFORM4 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1f209244d8f3c63649ee96ec3d4a58e2314a81b2"
],
"answer": [
{
"evidence": [
"For capturing facial presence, we rely on BIBREF56 's approach that uses multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images . Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0 . With control class showing significantly higher in both profile and media (8%, 9% respectively) compared to that for the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.",
"Facial Expression:",
"Following BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0 , we process profile/shared images for both the depressed and the control groups with at least one face from the shared images (Table TABREF23 ). For the photos that contain multiple faces, we measure the average emotion.",
"General Image Features:",
"The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58 , BIBREF59 . We measured the normalized red, green, blue and the mean of original colors, and brightness and contrast relative to variations of luminance. We represent images in Hue-Saturation-Value color space that seems intuitive for humans, and measure mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not interpretable, high saturation indicates vividness and chromatic purity which are more appealing to the human eye BIBREF8 . Colorfulness is measured as a difference against gray background BIBREF60 . Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 . In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract text and their sentiment score. As illustrated in Table TABREF26 , vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15 ). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ independent t-test, while adopting Bonferroni Correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2 , ** INLINEFORM3 ).",
"Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)"
],
"extractive_spans": [
"facial presence",
"Facial Expression",
"General Image Features",
" textual content",
"analytical thinking",
"clout",
"authenticity",
"emotional tone",
"Sixltr",
" informal language markers",
"1st person singular pronouns"
],
"free_form_answer": "",
"highlighted_evidence": [
"For capturing facial presence, we rely on BIBREF56 's approach that uses multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization.",
"Facial Expression:\n\nFollowing BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images.",
"General Image Features:\n\nThe importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . ",
"Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. ",
"It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"e277b34d09834dc7c33e8096d7b560b7fe686f52"
],
"answer": [
{
"evidence": [
"Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies e.g., for predicting demographics BIBREF36 , BIBREF41 , and user's depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., \"16 years old suicidal girl\"(see Figure FIGREF15 ). We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url."
],
"extractive_spans": [],
"free_form_answer": "The data are self-reported by Twitter users and then verified by two human experts.",
"highlighted_evidence": [
"We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"c4695e795080ba25f33c4becee24aea803ee068c"
],
"answer": [
{
"evidence": [
"Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as \"17 years old, self-harm, anxiety, depression\") BIBREF41 . We compile \"age prefixes\" and \"age suffixes\", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a \"date\" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51",
"Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter."
],
"extractive_spans": [],
"free_form_answer": "From Twitter profile descriptions of the users.",
"highlighted_evidence": [
"We extract user's age by applying regular expression patterns to profile descriptions (such as \"17 years old, self-harm, anxiety, depression\") BIBREF41 . We compile \"age prefixes\" and \"age suffixes\", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a \"date\" or age (e.g., 1994).",
"We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"deedf2e223758db6f59cc8eeb41e7f258749e794"
],
"answer": [
{
"evidence": [
"The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users."
],
"extractive_spans": [],
"free_form_answer": "Sociability from ego-network on Twitter",
"highlighted_evidence": [
"We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"e8c7a7ff219abef43c0444bb270cf20d3bfcb5f6"
],
"answer": [
{
"evidence": [
"The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users."
],
"extractive_spans": [],
"free_form_answer": "Users' tweets",
"highlighted_evidence": [
"We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"06fbe4ab4db9860966cc6a49627d3554a01ee590"
],
"answer": [
{
"evidence": [
"The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users."
],
"extractive_spans": [],
"free_form_answer": "Profile pictures from the Twitter users' profiles.",
"highlighted_evidence": [
"We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"What insights into the relationship between demographics and mental health are provided?",
"What model is used to achieve 5% improvement on F1 for identifying depressed individuals on Twitter?",
"How do this framework facilitate demographic inference from social media?",
"What types of features are used from each data type?",
"How is the data annotated?",
"Where does the information on individual-level demographics come from?",
"What is the source of the user interaction data? ",
"What is the source of the textual data? ",
"What is the source of the visual data? "
],
"question_id": [
"5d70c32137e82943526911ebdf78694899b3c28a",
"97dac7092cf8082a6238aaa35f4b185343b914af",
"195611926760d1ceec00bd043dfdc8eba2df5ad1",
"445e792ce7e699e960e2cb4fe217aeacdd88d392",
"a3b1520e3da29d64af2b6e22ff15d330026d0b36",
"2cf8825639164a842c3172af039ff079a8448592",
"36b25021464a9574bf449e52ae50810c4ac7b642",
"98515bd97e4fae6bfce2d164659cd75e87a9fc89",
"53bf6238baa29a10f4ff91656c470609c16320e1",
"b27f7993b1fe7804c5660d1a33655e424cea8d10"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Self-disclosure on Twitter from likely depressed users discovered by matching depressiveindicative terms",
"Figure 2: The age distribution for depressed and control users in ground-truth dataset",
"Figure 3: Gender and Depressive Behavior Association (Chi-square test: color-code: (blue:association), (red: repulsion), size: amount of each cell’s contribution)",
"Table 3: Statistical significance (t-statistic) of the mean of salient features for depressed and control classes 20",
"Figure 4: The Pearson correlation between the average emotions derived from facial expressions through the shared images and emotions from textual content for depressed-(a) and control users-(b). Pairs without statistically significant correlation are crossed (p-value <0.05)",
"Figure 5: Characterizing Linguistic Patterns in two aspects: Depressive-behavior and Age Distribution",
"Table 4: Statistical Significance Test of Linguistic Patterns/Visual Attributes for Different Age Groups with one-way ANOVA 31",
"Figure 6: Ranking Features obtained from Different Modalities with an Ensemble Algorithm",
"Table 7: Gender Prediction Performance through Visual and Textual Content",
"Figure 7: The explanation of the log-odds prediction of outcome (0.31) for a sample user (y-axis shows the outcome probability (depressed or control), the bar labels indicate the log-odds impact of each feature)",
"Table 8: Model’s Performance for Depressed User Identification from Twitter using different data modalities"
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"6-Table3-1.png",
"7-Figure4-1.png",
"7-Figure5-1.png",
"8-Table4-1.png",
"9-Figure6-1.png",
"10-Table7-1.png",
"10-Figure7-1.png",
"11-Table8-1.png"
]
} | [
"How do this framework facilitate demographic inference from social media?",
"How is the data annotated?",
"Where does the information on individual-level demographics come from?",
"What is the source of the user interaction data? ",
"What is the source of the textual data? ",
"What is the source of the visual data? "
] | [
[
"1902.06843-Demographic Prediction-3"
],
[
"1902.06843-Dataset-0"
],
[
"1902.06843-Dataset-2"
],
[
"1902.06843-Introduction-5"
],
[
"1902.06843-Introduction-5"
],
[
"1902.06843-Introduction-5"
]
] | [
"Demographic information is predicted using weighted lexicon of terms.",
"The data are self-reported by Twitter users and then verified by two human experts.",
"From Twitter profile descriptions of the users.",
"Sociability from ego-network on Twitter",
"Users' tweets",
"Profile pictures from the Twitter users' profiles."
] | 105 |
1910.02789 | Natural Language State Representation for Reinforcement Learning | Recent advances in Reinforcement Learning have highlighted the difficulties in learning within complex high dimensional domains. We argue that one of the main reasons that current approaches do not perform well, is that the information is represented sub-optimally. A natural way to describe what we observe, is through natural language. In this paper, we implement a natural language state representation to learn and complete tasks. Our experiments suggest that natural language based agents are more robust, converge faster and perform better than vision based agents, showing the benefit of using natural language representations for Reinforcement Learning. | {
"paragraphs": [
[
"“The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations.\"",
"(Edward Sapir, Language: An Introduction to the Study of Speech, 1921)",
"Deep Learning based algorithms use neural networks in order to learn feature representations that are good for solving high dimensional Machine Learning (ML) tasks. Reinforcement Learning (RL) is a subfield of ML that has been greatly affected by the use of deep neural networks as universal function approximators BIBREF0, BIBREF1. These deep neural networks are used in RL to estimate value functions, state-action value functions, policy mappings, next-state predictions, rewards, and more BIBREF2, BIBREF3, BIBREF4, thus combating the “curse of dimensionality\".",
"The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5.",
"The ability to associate states with natural language sentences that describe them is a hallmark of understanding representations for reinforcement learning. Humans use rich natural language to describe and communicate their visual perceptions, feelings, beliefs, strategies, and more. The semantics inherent to natural language carry knowledge and cues of complex types of content, including: events, spatial relations, temporal relations, semantic roles, logical structures, support for inference and entailment, as well as predicates and arguments BIBREF6. The expressive nature of language can thus act as an alternative semantic state representation.",
"Over the past few years, Natural Language Processing (NLP) has shown an acceleration in progress on a wide range of downstream applications ranging from Question Answering BIBREF7, BIBREF8, to Natural Language Inference BIBREF9, BIBREF10, BIBREF11 through Syntactic Parsing BIBREF12, BIBREF13, BIBREF14. Recent work has shown the ability to learn flexible, hierarchical, contextualized representations, obtaining state-of-the-art results on various natural language processing tasks BIBREF15. A basic observation of our work is that natural language representations are also beneficial for solving problems in which natural language is not the underlying source of input. Moreover, our results indicate that natural language is a strong alternative to current complementary methods for semantic representations of a state.",
"In this work we assume a state can be described using natural language sentences. We use distributional embedding methods in order to represent sentences, processed with a standard Convolutional Neural Network for feature extraction. In Section SECREF2 we describe the basic frameworks we rely on. We discuss possible semantic representations in Section SECREF3, namely, raw visual inputs, semantic segmentation, feature vectors, and natural language representations. Then, in Section SECREF4 we compare NLP representations with their alternatives. Our results suggest that representation of the state using natural language can achieve better performance, even on difficult tasks, or tasks in which the description of the state is saturated with task-nuisances BIBREF17. Moreover, we observe that NLP representations are more robust to transfer and changes in the environment. We conclude the paper with a short discussion and related work."
],
[
"In Reinforcement Learning the goal is to learn a policy $\\pi (s)$, which is a mapping from state $s$ to a probability distribution over actions $\\mathcal {A}$, with the objective to maximize a reward $r(s)$ that is provided by the environment. This is often solved by formulating the problem as a Markov Decision Process (MDP) BIBREF19. Two common quantities used to estimate the performance in MDPs are the value $v (s)$ and action-value $Q (s, a)$ functions, which are defined as follows: ${v(s) = \\mathbb {E}^{\\pi } [\\sum _t \\gamma ^t r_t | s_0 = s ]}$ and ${Q(s, a) = \\mathbb {E}^{\\pi } [\\sum _t \\gamma ^t r_t | s_0 = s, a_0 = a ]}$. Two prominent algorithms for solving RL tasks, which we use in this paper, are the value-based DQN BIBREF2 and the policy-based PPO BIBREF3.",
"Deep Q Networks (DQN): The DQN algorithm is an extension of the classical Q-learning approach, to a deep learning regime. Q-learning learns the optimal policy by directly learning the value function, i.e., the action-value function. A neural network is used to estimate the $Q$-values and is trained to minimize the Bellman error, namely",
"Proximal Policy Optimization (PPO): While the DQN learns the optimal behavioral policy using a dynamic programming approach, PPO takes a different route. PPO builds upon the policy gradient theorem, which optimizes the policy directly, with an addition of a trust-region update rule. The policy gradient theorem updates the policy by"
],
[
"A word embedding is a mapping from a word $w$ to a vector $\\mathbf {w} \\in \\mathbb {R}^d$. A simple form of word embedding is the Bag of Words (BoW), a vector $\\mathbf {w} \\in \\mathbb {N}^{|D|}$ ($|D|$ is the dictionary size), in which each word receives a unique 1-hot vector representation. Recently, more efficient methods have been proposed, in which the embedding vector is smaller than the dictionary size, $d \\ll |D|$. These methods are also known as distributional embeddings.",
"The distributional hypothesis in linguistics is derived from the semantic theory of language usage (i.e. words that are used and occur in the same contexts tend to have similar meanings). Distributional word representations are a fundamental building block for representing natural language sentences. Word embeddings such as Word2vec BIBREF20 and GloVe BIBREF21 build upon the distributional hypothesis, improving efficiency of state-of-the-art language models.",
"Convolutional Neural Networks (CNNs), originally invented for computer vision, have been shown to achieve strong performance on text classification tasks BIBREF22, BIBREF23, as well as other traditional NLP tasks BIBREF24. In this paper we consider a common architecture BIBREF25, in which each word in a sentence is represented as an embedding vector, a single convolutional layer with $m$ filters is applied, producing an $m$-dimensional vector for each $n$-gram. The vectors are combined using max-pooling followed by a ReLU activation. The result is then passed through multiple hidden linear layers with ReLU activation, eventually generating the final output."
],
[
"Contemporary methods for semantic representation of states currently follow one of three approaches: (1) raw visual inputs BIBREF2, BIBREF26, in which raw sensory values of pixels are used from one or multiple sources, (2) feature vectors BIBREF27, BIBREF28, in which general features of the problem are chosen, with no specific structure, and (3) semantic segmentation maps BIBREF29, BIBREF30, in which discrete or logical values are used in one or many channels to represent the general features of the state.",
"The common approach is to derive decisions (e.g., classification, action, etc.) based on information in its raw form. In RL, the raw form is often the pixels representing an image – however the image is only one form of a semantic representation. In Semantic Segmentation, the image is converted from a 3-channel (RGB) matrix into an $N$-channel matrix, where $N$ is the number of classes. In this case, each channel represents a class, and a binary value at each coordinate denotes whether or not this class is present in the image at this location. For instance, fig: semantic segmentation example considers an autonomous vehicle task. The raw image and segmentation maps are both sufficient for the task (i.e., both contain a sufficient semantic representation). Nevertheless, the semantic segmentation maps contain less task-nuisances BIBREF17, which are random variables that affect the observed data, but are not informative to the task we are trying to solve.",
"In this paper we propose a forth method for representing a state, namely using natural language descriptions. One method to achieve such a representation is through Image Captioning BIBREF31, BIBREF32. Natural language is both rich as well as flexible. This flexibility enables the algorithm designer to represent the information present in the state as efficiently and compactly as possible. As an example, the top image in fig: semantic segmentation example can be represented using natural language as “There is a car in your lane two meters in front of you, a bicycle rider on your far left in the negative lane, a car in your direction in the opposite lane which is twenty meters away, and trees and pedestrians on the side walk.” or compactly by “There is a car two meters in front of you a pedestrian on the sidewalk to your right and a car inbound in the negative lane which is far away.”. Language also allows us to efficiently compress information. As an example, the segmentation map in the bottom image of fig: semantic segmentation example can be compactly described by “There are 13 pedestrians crossing the road in front of you”. In the next section we will demonstrate the benefits of using natural-language-based semantic state representation in a first person shooter enviornment."
],
[
"In this section we compare the different types of semantic representations for representing states in the ViZDoom environment BIBREF26, as described in the previous section. More specifically, we use a semantic natural language parser in order to describe a state, over numerous instances of levels varying in difficulty, task-nuisances, and objectives. Our results show that, though semantic segmentation and feature vector representation techniques express a similar statistic of the state, natural language representation offers better performance, faster convergence, more robust solutions, as well as better transfer.",
"The ViZDoom environment involves a 3D world that is significantly more real-world-like than Atari 2600 games, with a relatively realistic physics model. An agent in the ViZDoom environment must effectively perceive, interpret, and learn the 3D world in order to make tactical and strategic decisions of where to go and how to act. There are three types of state representations that are provided by the environment. The first, which is also most commonly used, is raw visual inputs, in which the state is represented by an image from a first person view of the agent. A feature vector representation is an additional state representation provided by the environment. The feature vector representation includes positions as well as labels of all objects and creatures in the vicinity of the agent. Lastly, the environment provides a semantic segmentation map based on the aforementioned feature vector. An example of the visual representations in VizDoom is shown in fig: representations in vizdoom.",
"In order to incorporate natural language representation to the VizDoom environment we've constructed a semantic parser of the semantic segmentation maps provided by the environment. Each state of the environment was converted into a natural language sentence based on positions and labels of objects in the frame. To implement this, the screen was divided into several vertical and horizontal patches, as depicted in fig: patches. These patches describe relational aspects of the state, such as distance of objects and their direction with respect to the agent's point of view. In each patch, objects were counted, and a natural language description of the patch was constructed. This technique was repeated for all patches to form the final state representation. fig: nlp state rep depicts examples of natural language sentences of different states in the enviornment."
],
[
"We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent.",
"More specifically, in the basic scenario, a single monster is spawned in front of the agent. The purpose of this scenario is to teach the agent to aim at the enemy and shoot at it. In the health gathering scenario, the floor of the room is covered in toxin, causing the agent to gradually lose health. Medipacks are spawned randomly in the room and the agent's objective is to keep itself alive by collecting them. In the take cover scenario, multiple fireball shooting monsters are spawned in front of the agent. The goal of the agent is to stay alive as long as possible, dodging inbound fireballs. The difficulty of the task increases over time, as additional monsters are spawned. In the defend the center scenario, melee attacking monsters are randomly spawned in the room, and charge towards the agent. As opposed to other scenarios, the agent is incapable of moving, aside from turning left and right and shooting. In the defend the line scenario, both melee and fireball shooting monsters are spawned near the opposing wall. The agent can only step right, left or shoot. Finally, in the “super\" scenario both melee and fireball shooting monsters are repeatably spawned all over the room. the room contains various items the agent can pick up and use, such as medipacks, shotguns, ammunition and armor. Furthermore, the room is filled with unusable objects, various types of trees, pillars and other decorations. The agent can freely move and turn in any direction, as well as shoot. This scenario combines elements from all of the previous scenarios.",
"Our agent was implemented using a Convolutional Neural Network as described in Section SECREF4. We converted the parsed state into embedded representations of fixed length. We tested both a DQN and a PPO based agent, and compared the natural language representation to the other representation techniques, namely the raw image, feature vector, and semantic segmentation representations.",
"In order to effectively compare the performance of the different representation methods, we conducted our experiments under similar conditions for all agents. The same hyper-parameters were used under all tested representations. Moreover, to rule out effects of architectural expressiveness, the number of weights in all neural networks was approximately matched, regardless of the input type. Finally, we ensured the “super\" scenario was positively biased toward image-based representations. This was done by adding a large amount items to the game level, thereby filling the state with nuisances (these tests are denoted by `nuisance' in the scenario name). This was especially evident in the NLP representations, as sentences became extensively longer (average of over 250 words). This is contrary to image-based representations, which did not change in dimension.",
"Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representations methods. It can be seen that the NLP representation outperforms the other methods. This is contrary to the fact that it contains the same information as the semantic segmentation maps. More interestingly, comparing the vision-based and feature-based representations render inconsistent conclusions with respect to their relative performance. NLP representations remain robust to changes in the environment as well as task-nuisances in the state. As depicted in fig: nuisance scenarios, inflating the state space with task-nuisances impairs the performance of all representations. There, a large amount of unnecessary objects were spawned in the level, increasing the state's description length to over 250 words, whilst retaining the same amount of useful information. Nevertheless, the NLP representation outperformed the vision and feature based representations, with high robustness to the applied noise.",
"In order to verify the performance of the natural language representation was not due to extensive discretization of patches, we've conducted experiments increasing the number of horizontal patches - ranging from 3 to 31 patches in the extreme case. Our results, as depicted in fig: patch count, indicate that the amount of discretization of patches did not affect the performance of the NLP agent, remaining a superior representation compared to the rest.",
"To conclude, our experiments suggest that NLP representations, though they describe the same raw information of the semantic segmentation maps, are more robust to task-nuisances, allow for better transfer, and achieve higher performance in complex tasks, even when their description is long and convoluted. While we've only presented results for DQN agents, we include plots for a PPO agent in the Appendix, showing similar trends and conclusions. We thus deduce that NLP-based semantic state representations are a preferable choice for training VizDoom agents."
],
[
"Work on representation learning is concerned with finding an appropriate representation of data in order to perform a machine learning task BIBREF33. In particular, deep learning exploits this concept by its very nature BIBREF2. Work on representation learning include Predictive State Representations (PSR) BIBREF34, BIBREF35, which capture the state as a vector of predictions of future outcomes, and a Heuristic Embedding of Markov Processes (HEMP) BIBREF36, which learns to embed transition probabilities using an energy-based optimization problem.",
"There has been extensive work attempting to use natural language in RL. Efforts that integrate language in RL develop tools, approaches, and insights that are valuable for improving the generalization and sample efficiency of learning agents. Previous work on language-conditioned RL has considered the use of natural language in the observation and action space. Environments such as Zork and TextWorld BIBREF37 have been the standard benchmarks for testing text-based games. Nevertheless, these environments do not search for semantic state representations, in which an RL algorithm can be better evaluated and controlled.",
"BIBREF38 use high-level semantic abstractions of documents in a representation to facilitate relational learning using Inductive Logic Programming and a generative language model. BIBREF39 use high-level guidance expressed in text to enrich a stochastic agent, playing against the built-in AI of Civilization II. They train an agent with the Monte-Carlo search framework in order to jointly learn to identify text that is relevant to a given game state as well as game strategies based only on environment feedback. BIBREF40 utilize natural language in a model-based approach to describe the dynamics and rewards of an environment, showing these can facilitate transfer between different domains.",
"More recently, the structure and compositionality of natural language has been used for representing policies in hierarchical RL. In a paper by BIBREF41, instructions given in natural language were used in order to break down complex problems into high-level plans and lower-level actions. Their suggested framework leverages the structure inherent to natural language, allowing for transfer to unfamiliar tasks and situations. This use of semantic structure has also been leveraged by BIBREF42, where abstract actions (not necessarily words) were recognized as symbols of a natural and expressive language, improving performance and transfer of RL agents.",
"Outside the context of RL, previous work has also shown that high-quality linguistic representations can assist in cross-modal transfer, such as using semantic relationships between labels for zero-shot transfer in image classification BIBREF43, BIBREF44."
],
[
"Our results indicate that natural language can outperform, and sometime even replace, vision-based representations. Nevertheless, natural language representations can also have disadvantages in various scenarios. For one, they require the designer to be able to describe the state exactly, whether by a rule-based or learned parser. Second, they abstract notions of the state space that the designer may not realize are necessary for solving the problem. As such, semantic representations should be carefully chosen, similar to the process of reward shaping or choosing a training algorithm. Here, we enumerate three instances in which we believe natural language representations are beneficial:",
"Natural use-case: Information contained in both generic and task-specific textual corpora may be highly valuable for decision making. This case assumes the state can either be easily described using natural language or is already in a natural language state. This includes examples such as user-based domains, in which user profiles and comments are part of the state, or the stock market, in which stocks are described by analysts and other readily available text. 3D physical environments such as VizDoom also fall into this category, as semantic segmentation maps can be easily described using natural language.",
"Subjective information: Subjectivity refers to aspects used to express opinions, evaluations, and speculations. These may include strategies for a game, the way a doctor feels about her patient, the mood of a driver, and more.",
"Unstructured information: In these cases, features might be measured by different units, with an arbitrary position in the state's feature vector, rendering them sensitive to permutations. Such state representations are thus hard to process using neural networks. As an example, the medical domain may contain numerous features describing the vitals of a patient. These raw features, when observed by an expert, can be efficiently described using natural language. Moreover, they allow an expert to efficiently add subjective information.",
"An orthogonal line of research considers automating the process of image annotation. The noise added from the supervised or unsupervised process serves as a great challenge for natural language representation. We suspect the noise accumulated by this procedure would require additional information to be added to the state (e.g., past information). Nevertheless, as we have shown in this paper, such information can be compressed using natural language. In addition, while we have only considered spatial features of the state, information such as movement directions and transient features can be efficiently encoded as well.",
"Natural language representations help abstract information and interpret the state of an agent, improving its overall performance. Nevertheless, it is imperative to choose a representation that best fits the domain at hand. Designers of RL algorithms should consider searching for a semantic representation that fits their needs. While this work only takes a first step toward finding better semantic state representations, we believe the structure inherent in natural language can be considered a favorable candidate for achieving this goal."
],
[
"VizDoom is a \"Doom\" based research environment that was developed at the Poznań University of Technology. It is based on \"ZDoom\" game executable, and includes a Python based API. The API offers the user the ability to run game instances, query the game state, and execute actions. The original purpose of VizDoom is to provide a research platform for vision based reinforcement learning. Thus, a natural language representation for the game was needed to be implemented. ViZDoom emulates the \"Doom\" game and enables us to access data within a certain frame using Python dictionaries. This makes it possible to extract valuable data including player health, ammo, enemy locations etc. Each game frame contains \"labels\", which contain data on visible objects in the game (the player, enemies, medkits, etc). We used \"Doom Builder\" in order to edit some of the scenarios and design a new one. Enviroment rewards are presented in doom-scenarios-table."
],
[
"A semantic representation using natural language should contain information which can be deduced by a human playing the game. For example, even though a human does not know the exact distance between objects, it can classify them as \"close\" or \"far\". However, objects that are outside the player's field of vision can not be a part of the state. Furthermore, a human would most likely refer to an object's location relative to itself, using directions such as \"right\" or \"left\"."
],
[
"To convert each frame to a natural language representation state, the list of available labels is iterated, and a string is built accordingly. The main idea of our implementation is to divide the screen into multiple vertical patches, count the amount of different objects inside by their types, and parse it as a sentence. The decision as to whether an object is close or far can be determined by calculating the distance from it to the player, and using two threshold levels. Object descriptions can be concise or detailed, as needed. We experimented with the following mechanics:",
"the screen can be divided between patches equally, or by determined ratios. Here, our main guideline was to keep the \"front\" patch narrow enough so it can be used as \"sights\".",
"our initial experiment was with 3 patches, and later we added 2 more patches classified as \"outer left\" and \"outer right\". In our experiments we have tested up to 51 patches, referred to as left or right patch with corresponding numbers.",
"we used 2 thresholds, which allowed us to classify the distance of an object from the player as \"close\",\"mid\", and \"far. Depending on the task, the value of the threshold can be changed, as well as adding more thresholds.",
"different states might generate sentence with different size. A maximum sentence length is another parameter that was tested. sentences-length-table presents some data regarding the average word count in some of the game sceanrios.",
"After the sentence describing the state is generated, it is transformed to an embedding vector. Words that were not found in the vocabulary were replaced with an “OOV\" vector. All words were then concatenated to a NxDx1 matrix, representing the state. We experimented with both Word2Vec and GloVe pretrained embedding vectors. Eventually, we used the latter, as it consumes less memory and speeds up the training process. The length of the state sentence is one of the hyperparameters of the agents; shorter sentences are zero padded, where longer ones are trimmed."
],
[
"All of our models were implemented using PyTorch. The DQN agents used a single network that outputs the Q-Values of the available actions. The PPO agents used an Actor-Critic model with two networks; the first outputs the policy distribution for the input state, and the second network outputs it's value. As mentioned earlier, we used three common neural network architectures:",
"used for the raw image and semantic segmentation based agents. VizDoom's raw output image resolution is 640X480X3 RGB image. We experimented with both the original image and its down-sampled version. The semantic segmentation image was of resolution 640X480X1, where the pixel value represents the object's class, generated using the VizDoom label API. the network consisted of two convolutional layers, two hidden linear layers and an output layer. The first convolutional layer has 8 6X6 filters with stride 3 and ReLU activation. The second convolutional layer has 16 3X3 filters with stride 2 and ReLU activation. The fully connected layers has 32 and 16 units, both of them are followed by ReLU activation. The output layer's size is the amount of actions the agent has available in the trained scenario.",
"Used in the feature vector based agent. Naturally, some discretization is needed in order to build a feature vector, so some of the state data is lost. the feature vector was made using features we extracted from the VizDoom API, and its dimensions was 90 X 1. The network is made up of two fully connected layers, each of them followed by a ReLU activation. The first layer has 32 units, and the second one one has 16 units. The output layer's size was the amount of actions available to the agent.",
"Used in the natural language based agent. As previously mentioned, each word in the natural language state is transformed into a 200X50X1 matrix. The first layers of the TextCNN are convolutional layers with 8 filter which are designed to scan input sentence, and return convolution outputs of sequences of varying lengths. The filters vary in width, such that each of them learns to identify different lengths of sequences in words. Longer filters have higher capability of extracting features from longer word sequences. The filters we have chosen have the following dimensions: 3X50X1, 4X50X1, 5X50X1, 8X50X1,11X50X1. Following the convolution layer there is a ReLU activation and a max pool layer. Finally, there are two fully connected layers; The first layer has 32 units, and second one has 16 units. Both of them are followed by ReLU activation.",
"All architectures have the same output, regardless of the input type. The DQN network is a regression network, with its output size the number of available actions. The PPO agent has 2 networks; actor and critic. The actor network has a Softmax activation with size equal to the available amount of actions. The critic network is a regression model with a single output representing the state's value. Reward plots for the PPO agent can be found in Figure FIGREF47."
]
],
"section_name": [
"Introduction",
"Preliminaries ::: Reinforcement Learning",
"Preliminaries ::: Deep Learning for NLP",
"Semantic Representation Methods",
"Semantic State Representations in the Doom Environment",
"Semantic State Representations in the Doom Environment ::: Experiments",
"Related Work",
"Discussion and Future Work",
"Appendix ::: VizDoom",
"Appendix ::: Natural language State Space",
"Appendix ::: Language model implementation",
"Appendix ::: Model implementation"
]
} | {
"answers": [
{
"annotation_id": [
"040faf49fbe5c02af982b966eec96f2efaef2243"
],
"answer": [
{
"evidence": [
"Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representations methods. It can be seen that the NLP representation outperforms the other methods. This is contrary to the fact that it contains the same information as the semantic segmentation maps. More interestingly, comparing the vision-based and feature-based representations render inconsistent conclusions with respect to their relative performance. NLP representations remain robust to changes in the environment as well as task-nuisances in the state. As depicted in fig: nuisance scenarios, inflating the state space with task-nuisances impairs the performance of all representations. There, a large amount of unnecessary objects were spawned in the level, increasing the state's description length to over 250 words, whilst retaining the same amount of useful information. Nevertheless, the NLP representation outperformed the vision and feature based representations, with high robustness to the applied noise."
],
"extractive_spans": [],
"free_form_answer": "Average reward across 5 seeds show that NLP representations are robust to changes in the environment as well task-nuisances",
"highlighted_evidence": [
"Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representations methods. It can be seen that the NLP representation outperforms the other methods. ",
"NLP representations remain robust to changes in the environment as well as task-nuisances in the state. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1247a16fee4fd801faca9eb81331034412d89054"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"ce47dbd8c234f9ef99f4c96c5e2e0271910589eb"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"fc219faad4cbdc4a0d17a5c4e30b187b5b08fd05"
],
"answer": [
{
"evidence": [
"We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent."
],
"extractive_spans": [
"a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios"
],
"free_form_answer": "",
"highlighted_evidence": [
"We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty.",
"The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"47aee4bb630643e14ceaa348b2fd1762fd4d43b1"
],
"answer": [
{
"evidence": [
"The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5."
],
"extractive_spans": [
" represent the state using natural language"
],
"free_form_answer": "",
"highlighted_evidence": [
". In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What result from experiments suggest that natural language based agents are more robust?",
"How better is performance of natural language based agents in experiments?",
"How much faster natural language agents converge in performed experiments?",
"What experiments authors perform?",
"How is state to learn and complete tasks represented via natural language?"
],
"question_id": [
"d79d897f94e666d5a6fcda3b0c7e807c8fad109e",
"599d9ca21bbe2dbe95b08cf44dfc7537bde06f98",
"827464c79f33e69959de619958ade2df6f65fdee",
"8e857e44e4233193c7b2d538e520d37be3ae1552",
"084fb7c80a24b341093d4bf968120e3aff56f693"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"computer vision",
"computer vision",
"computer vision",
"computer vision",
"computer vision"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Example of Semantic Segmentation [Kundu et al., 2016].",
"Figure 2: Left: Raw visual inputs and their corresponding semantic segmentation in the VizDoom enviornment. Right: Our suggested NLP-based semantic state representation framework.",
"Figure 3: Frame division used for describing the state in natural language.",
"Figure 4: Natural language state representation for a simple state (top) and complex state (bottom). The corresponding embedded representations and shown on the right.",
"Figure 5: Comparison of representation methods on the different VizDoom scenarios using a DQN agent. X and Y axes represent the number of iterations and cumulative reward, respectively. Last three graphs (bottom) depict nuisance-augmented scenarios.",
"Figure 6: Robustness of each representation type with respect to amount of nuisance.",
"Figure 7: Average rewards of NLP based agent as a function of the number of patches in the language model.",
"Figure 8: PPO - state representation and their average rewards, various degrees of nuisance",
"Table 1: statistics of words per state as function of patches.",
"Table 2: Doom scenarios"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png",
"6-Figure5-1.png",
"7-Figure6-1.png",
"7-Figure7-1.png",
"13-Figure8-1.png",
"14-Table1-1.png",
"14-Table2-1.png"
]
} | [
"What result from experiments suggest that natural language based agents are more robust?"
] | [
[
"1910.02789-Semantic State Representations in the Doom Environment ::: Experiments-4"
]
] | [
"Average reward across 5 seeds show that NLP representations are robust to changes in the environment as well task-nuisances"
] | 108 |
2001.07209 | Text-based inference of moral sentiment change | We present a text-based framework for investigating moral sentiment change of the public via longitudinal corpora. Our framework is based on the premise that language use can inform people's moral perception toward right or wrong, and we build our methodology by exploring moral biases learned from diachronic word embeddings. We demonstrate how a parameter-free model supports inference of historical shifts in moral sentiment toward concepts such as slavery and democracy over centuries at three incremental levels: moral relevance, moral polarity, and fine-grained moral dimensions. We apply this methodology to visualizing moral time courses of individual concepts and analyzing the relations between psycholinguistic variables and rates of moral sentiment change at scale. Our work offers opportunities for applying natural language processing toward characterizing moral sentiment change in society. | {
"paragraphs": [
[
"People's moral sentiment—our feelings toward right or wrong—can change over time. For instance, the public's views toward slavery have shifted substantially over the past centuries BIBREF0. How society's moral views evolve has been a long-standing issue and a constant source of controversy subject to interpretations from social scientists, historians, philosophers, among others. Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale, involving minimal human labour or intervention.",
"The topic of moral sentiment has been thus far considered a traditional inquiry in philosophy BIBREF1, BIBREF2, BIBREF3, with contemporary development of this topic represented in social psychology BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, cognitive linguistics BIBREF9, and more recently, the advent of Moral Foundations Theory BIBREF10, BIBREF11, BIBREF12. Despite the fundamental importance and interdisciplinarity of this topic, large-scale formal treatment of moral sentiment, particularly its evolution, is still in infancy from the natural language processing (NLP) community (see overview in Section SECREF2).",
"We believe that there is a tremendous potential to bring NLP methodologies to bear on the problem of moral sentiment change. We build on extensive recent work showing that word embeddings reveal implicit human biases BIBREF13, BIBREF14 and social stereotypes BIBREF15. Differing from this existing work, we demonstrate that moral sentiment change can be revealed by moral biases implicitly learned from diachronic text corpora. Accordingly, we present to our knowledge the first text-based framework for probing moral sentiment change at a large scale with support for different levels of analysis concerning moral relevance, moral polarity, and fine-grained moral dimensions. As such, for any query item such as slavery, our goal is to automatically infer its moral trajectories from sentiments at each of these levels over a long period of time.",
"Our approach is based on the premise that people's moral sentiments are reflected in natural language, and more specifically, in text BIBREF16. In particular, we know that books are highly effective tools for conveying moral views to the public. For example, Uncle Tom's Cabin BIBREF17 was central to the anti-slavery movement in the United States. The framework that we develop builds on this premise to explore changes in moral sentiment reflected in longitudinal or historical text.",
"Figure FIGREF1 offers a preview of our framework by visualizing the evolution trajectories of the public's moral sentiment toward concepts signified by the probe words slavery, democracy, and gay. Each of these concepts illustrates a piece of “moral history” tracked through a period of 200 years (1800 to 2000), and our framework is able to capture nuanced moral changes. For instance, slavery initially lies at the border of moral virtue (positive sentiment) and vice (negative sentiment) in the 1800s yet gradually moves toward the center of moral vice over the 200-year period; in contrast, democracy considered morally negative (e.g., subversion and anti-authority under monarchy) in the 1800s is now perceived as morally positive, as a mechanism for fairness; gay, which came to denote homosexuality only in the 1930s BIBREF18, is inferred to be morally irrelevant until the modern day. We will describe systematic evaluations and applications of our framework that extend beyond these anecdotal cases of moral sentiment change.",
"The general text-based framework that we propose consists of a parameter-free approach that facilitates the prediction of public moral sentiment toward individual concepts, automated retrieval of morally changing concepts, and broad-scale psycholinguistic analyses of historical rates of moral sentiment change. We provide a description of the probabilistic models and data used, followed by comprehensive evaluations of our methodology."
],
[
"An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16.",
"While there is emerging awareness of ethical issues in NLP BIBREF24, BIBREF25, work exploiting NLP techniques to study principles of moral sentiment change is scarce. Moreover, since morality is variable across cultures and time BIBREF12, BIBREF16, developing systems that capture the diachronic nature of moral sentiment will be a pivotal research direction. Our work leverages and complements existing research that finds implicit human biases from word embeddings BIBREF13, BIBREF14, BIBREF19 by developing a novel perspective on using NLP methodology to discover principles of moral sentiment change in human society."
],
[
"Our framework treats the moral sentiment toward a concept at three incremental levels, as illustrated in Figure FIGREF3. First, we consider moral relevance, distinguishing between morally irrelevant and morally relevant concepts. At the second tier, moral polarity, we further split morally relevant concepts into those that are positively or negatively perceived in the moral domain. Finally, a third tier classifies these concepts into fine-grained categories of human morality.",
"We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories."
],
[
"To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.",
"To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words."
],
[
"We propose and evaluate a set of probabilistic models to classify concepts in the three tiers of morality specified above. Our models exploit the semantic structure of word embeddings BIBREF29 to perform tiered moral classification of query concepts. In each tier, the model receives a query word embedding vector $\\mathbf {q}$ and a set of seed words for each class in that tier, and infers the posterior probabilities over the set of classes $c$ to which the query concept is associated with.",
"The seed words function as “labelled examples” that guide the moral classification of novel concepts, and are organized per classification tier as follows. In moral relevance classification, sets $\\mathbf {S}_0$ and $\\mathbf {S}_1$ contain the morally irrelevant and morally relevant seed words, respectively; for moral polarity, $\\mathbf {S}_+$ and $\\mathbf {S}_-$ contain the positive and negative seed words; and for fine-grained moral categories, $\\mathbf {S}_1, \\ldots , \\mathbf {S}_{10}$ contain the seed words for the 10 categories of MFT. Then our general problem is to estimate $p(c\\,|\\,\\mathbf {q})$, where $\\mathbf {q}$ is a query vector and $c$ is a moral category in the desired tier.",
"We evaluate the following four models:",
"A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;",
"A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;",
"A $k$-Nearest Neighbors ($k$NN) model exploits local density estimation and classifies concepts according to the majority vote of the $k$ seed words closest to the query vector;",
"A Kernel Density Estimation (KDE) model performs density estimation at a broader scale by considering the contribution of each seed word toward the total likelihood of each class, regulated by a bandwidth parameter $h$ that controls the sensitivity of the model to distance in embedding space.",
"Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$."
],
[
"To apply our models diachronically, we require a word embedding space that captures the meanings of words at different points in time and reflects changes pertaining to a particular word as diachronic shifts in a common embedding space.",
"Following BIBREF30, we combine skip-gram word embeddings BIBREF29 trained on longitudinal corpora of English with rotational alignments of embedding spaces to obtain diachronic word embeddings that are aligned through time.",
"We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:",
"Google N-grams BIBREF31: a corpus of $8.5 \\times 10^{11}$ tokens collected from the English literature (Google Books, all-genres) spanning the period 1800–1999.",
"COHA BIBREF32: a smaller corpus of $4.1 \\times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009."
],
[
"We evaluated our models in two ways: classification of moral seed words on all three tiers (moral relevance, polarity, and fine-grained categories), and correlation of model predictions with human judgments."
],
[
"In this evaluation, we assessed the ability of our models to classify the seed words that compose our moral environment in a leave-one-out classification task. We performed the evaluation for all three classification tiers: 1) moral relevance, where seed words are split into morally relevant and morally irrelevant; 2) moral polarity, where moral seed words are split into positive and negative; 3) fine-grained categories, where moral seed words are split into the 10 MFT categories. In each test, we removed one seed word from the training set at a time to obtain cross-validated model predictions.",
"Table TABREF14 shows classification accuracy for all models and corpora on each tier for the 1990–1999 period. We observe that all models perform substantially better than chance, confirming the efficacy of our methodology in capturing moral dimensions of words. We also observe that models using word embeddings trained on Google N-grams perform better than those trained on COHA, which could be expected given the larger corpus size of the former.",
"In the remaining analyses, we employ the Centroid model, which offers competitive accuracy and a simple, parameter-free specification."
],
[
"We evaluated the approximate agreement between our methodology and human judgments using valence ratings, i.e., the degree of pleasantness or unpleasantness of a stimulus. Our assumption is that the valence of a concept should correlate with its perceived moral polarity, e.g., morally repulsive ideas should evoke an unpleasant feeling. However, we do not expect this correspondence to be perfect; for example, the concept of dessert evokes a pleasant reaction without being morally relevant.",
"In this analysis, we took the valence ratings for the nearly 14,000 English nouns collected by BIBREF28 and, for each query word $q$, we generated a corresponding prediction of positive moral polarity from our model, $P(c_+\\,|\\,\\mathbf {q})$. Table TABREF16 shows the correlations between human valence ratings and predictions of positive moral polarity generated by models trained on each of our corpora. We observe that the correlations are significant, suggesting the ability of our methodology to capture relevant features of moral sentiment from text.",
"In the remaining applications, we use the diachronic embeddings trained on the Google N-grams corpus, which enabled superior model performance throughout our evaluations."
],
[
"We applied our framework in three ways: 1) evaluation of selected concepts in historical time courses and prediction of human judgments; 2) automatic detection of moral sentiment change; and 3) broad-scale study of the relations between psycholinguistic variables and historical change of moral sentiment toward concepts."
],
[
"We applied our models diachronically to predict time courses of moral relevance, moral polarity, and fine-grained moral categories toward two historically relevant topics: slavery and democracy. By grounding our model in word embeddings for each decade and querying concepts at the three tiers of classification, we obtained the time courses shown in Figure FIGREF21.",
"We note that these trajectories illustrate actual historical trends. Predictions for democracy show a trend toward morally positive sentiment, consistent with the adoption of democratic regimes in Western societies. On the other hand, predictions for slavery trend down and suggest a drop around the 1860s, coinciding with the American Civil War. We also observe changes in the dominant fine-grained moral categories, such as the perception of democracy as a fair concept, suggesting potential mechanisms behind the polarity changes and providing further insight into the public sentiment toward these concepts as evidenced by text."
],
[
"We explored the predictive potential of our framework by comparing model predictions with human judgments of moral relevance and acceptability. We used data from the Pew Research Center's 2013 Global Attitudes survey BIBREF33, in which participants from 40 countries judged 8 topics such as abortion and homosexuality as one of “acceptable\", “unacceptable\", and “not a moral issue\".",
"We compared human ratings with model predictions at two tiers: for moral relevance, we paired the proportion of “not a moral issue” human responses with irrelevance predictions $p(c_0\\,|\\,\\mathbf {q})$ for each topic, and for moral acceptability, we paired the proportion of “acceptable” responses with positive predictions $p(c_+\\,|\\,\\mathbf {q})$. We used 1990s word embeddings, and obtained predictions for two-word topics by querying the model with their averaged embeddings.",
"Figure FIGREF23 shows plots of relevance and polarity predictions against survey proportions, and we observe a visible correspondence between model predictions and human judgments despite the difficulty of this task and limited number of topics."
],
[
"Beyond analyzing selected concepts, we applied our framework predictively on a large repertoire of words to automatically discover the concepts that have exhibited the greatest change in moral sentiment at two tiers, moral relevance and moral polarity.",
"We selected the 10,000 nouns with highest total frequency in the 1800–1999 period according to data from BIBREF30, restricted to words labelled as nouns in WordNet BIBREF34 for validation. For each such word $\\mathbf {q}$, we computed diachronic moral relevance scores $R_i = p(c_1\\,|\\,\\mathbf {q}), i=1,\\ldots ,20$ for the 20 decades in our time span. Then, we performed a linear regression of $R$ on $T = 1,\\ldots ,n$ and took the fitted slope as a measure of moral relevance change. We repeated the same procedure for moral polarity. Finally, we removed words with average relevance score below $0.5$ to focus on morally relevant retrievals.",
"Table TABREF17 shows the words with steepest predicted change toward moral relevance, along with their predicted fine-grained moral categories in modern times (i.e., 1900–1999). Table TABREF18 shows the words with steepest predicted change toward the positive and negative moral poles. To further investigate the moral sentiment that may have led to such polarity shifts, we also show the predicted fine-grained moral categories of each word at its earliest time of predicted moral relevance and in modern times. Although we do not have access to ground truth for this application, these results offer initial insight into the historical moral landscape of the English language at scale."
],
[
"In this application, we investigated the hypothesis that concept concreteness is inversely related to change in moral relevance, i.e., that concepts considered more abstract might become morally relevant at a higher rate than concepts considered more concrete. To test this hypothesis, we performed a multiple linear regression analysis on rate of change toward moral relevance of a large repertoire of words against concept concreteness ratings, word frequency BIBREF35, and word length BIBREF36.",
"We obtained norms of concreteness ratings from BIBREF28. We collected the same set of high-frequency nouns as in the previous analysis, along with their fitted slopes of moral relevance change. Since we were interested in moral relevance change within this large set of words, we restricted our analysis to those words whose model predictions indicate change in moral relevance, in either direction, from the 1800s to the 1990s.",
"We performed a multiple linear regression under the following model:",
"Here $\\rho (w)$ is the slope of moral relevance change for word $w$; $f(w$) is its average frequency; $l(w)$ is its character length; $c(w)$ is its concreteness rating; $\\beta _f$, $\\beta _l$, $\\beta _c$, and $\\beta _0$ are the corresponding factor weights and intercept, respectively; and $\\epsilon \\sim \\mathcal {N}(0, \\sigma )$ is the regression error term.",
"Table TABREF27 shows the results of multiple linear regression. We observe that concreteness is a significant negative predictor of change toward moral relevance, suggesting that abstract concepts are more strongly associated with increasing moral relevance over time than concrete concepts. This significance persists under partial correlation test against the control factors ($p < 0.01$).",
"We further verified the diachronic component of this effect in a random permutation analysis. We generated 1,000 control time courses by randomly shuffling the 20 decades in our data, and repeated the regression analysis to obtain a control distribution for each regression coefficient. All effects became non-significant under the shuffled condition, suggesting the relevance of concept concreteness for diachronic change in moral sentiment (see Supplementary Material)."
],
[
"We presented a text-based framework for exploring the socio-scientific problem of moral sentiment change. Our methodology uses minimal parameters and exploits implicit moral biases learned from diachronic word embeddings to reveal the public's moral perception toward a large concept repertoire over a long historical period.",
"Differing from existing work in NLP that treats moral sentiment as a flat classification problem BIBREF19, BIBREF20, our framework probes moral sentiment change at multiple levels and captures moral dynamics concerning relevance, polarity, and fine-grained categories informed by Moral Foundations Theory BIBREF12. We applied our methodology to the automated analyses of moral change both in individual concepts and at a broad scale, thus providing insights into psycholinguistic variables that associate with rates of moral change in the public.",
"Our current work focuses on exploring moral sentiment change in English-speaking cultures. Future research should evaluate the appropriateness of the framework to probing moral change from a diverse range of cultures and linguistic backgrounds, and the extent to which moral sentiment change interacts and crisscrosses with linguistic meaning change and lexical coinage. Our work creates opportunities for applying natural language processing toward characterizing moral sentiment change in society."
],
[
"We would like to thank Nina Wang, Nicola Lacerata, Dan Jurafsky, Paul Bloom, Dzmitry Bahdanau, and the Computational Linguistics Group at the University of Toronto for helpful discussion. We would also like to thank Ben Prystawski for his feedback on the manuscript. JX is supported by an NSERC USRA Fellowship and YX is funded through a SSHRC Insight Grant, an NSERC Discovery Grant, and a Connaught New Researcher Award."
]
],
"section_name": [
"Moral sentiment change and language",
"Emerging NLP research on morality",
"A three-tier modelling framework",
"A three-tier modelling framework ::: Lexical data for moral sentiment",
"A three-tier modelling framework ::: Models",
"Historical corpus data",
"Model evaluations",
"Model evaluations ::: Moral sentiment inference of seed words",
"Model evaluations ::: Alignment with human valence ratings",
"Applications to diachronic morality",
"Applications to diachronic morality ::: Moral change in individual concepts ::: Historical time courses.",
"Applications to diachronic morality ::: Moral change in individual concepts ::: Prediction of human judgments.",
"Applications to diachronic morality ::: Retrieval of morally changing concepts",
"Applications to diachronic morality ::: Broad-scale investigation of moral change",
"Discussion and conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"047ca89bb05cf86c1747c79e310917a8225aebf3"
],
"answer": [
{
"evidence": [
"An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"f17a2c6afd767ff5278c07164927c3c3a166ee40"
],
"answer": [
{
"evidence": [
"To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.",
"To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.",
"We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:",
"Google N-grams BIBREF31: a corpus of $8.5 \\times 10^{11}$ tokens collected from the English literature (Google Books, all-genres) spanning the period 1800–1999.",
"COHA BIBREF32: a smaller corpus of $4.1 \\times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009."
],
"extractive_spans": [],
"free_form_answer": "Google N-grams\nCOHA\nMoral Foundations Dictionary (MFD)\n",
"highlighted_evidence": [
"To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text.",
"To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.",
"We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:\n\nGoogle N-grams BIBREF31: a corpus of $8.5 \\times 10^{11}$ tokens collected from the English literature (Google Books, all-genres) spanning the period 1800–1999.\n\nCOHA BIBREF32: a smaller corpus of $4.1 \\times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"25a58a9ba9472e5de77ec1ddeba0ef18e0238b02"
],
"answer": [
{
"evidence": [
"Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.",
"A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;",
"A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;"
],
"extractive_spans": [
"A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;",
"A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;"
],
"free_form_answer": "",
"highlighted_evidence": [
" Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.",
"A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;",
"A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"e3c7a80666fff31b038cdb13330b9fa7a8b6c8d0"
],
"answer": [
{
"evidence": [
"To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words."
],
"extractive_spans": [],
"free_form_answer": "By complementing morally relevant seed words with a set of morally irrelevant seed words based on the notion of valence",
"highlighted_evidence": [
"To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0c7b39838a3715c9f96f44796512eb886463cfe9"
],
"answer": [
{
"evidence": [
"We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories."
],
"extractive_spans": [
"Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation"
],
"free_form_answer": "",
"highlighted_evidence": [
"We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"b1ca28830abd09b4dea845015b4b37b90b141847"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"no",
"no",
"no"
],
"question": [
"Does the paper discuss previous models which have been applied to the same task?",
"Which datasets are used in the paper?",
"How does the parameter-free model work?",
"How do they quantify moral relevance?",
"Which fine-grained moral dimension examples do they showcase?",
"Which dataset sources to they use to demonstrate moral sentiment through history?"
],
"question_id": [
"31ee92e521be110b6a5a8d08cc9e6f90a3a97aae",
"737397f66751624bcf4ef891a10b29cfc46b0520",
"87cb19e453cf7e248f24b5f7d1ff9f02d87fc261",
"5fb6a21d10adf4e81482bb5c1ec1787dc9de260d",
"542a87f856cb2c934072bacaa495f3c2645f93be",
"4fcc668eb3a042f60c4ce2e7d008e7923b25b4fc"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"sentiment ",
"sentiment ",
"sentiment ",
"Inference",
"Inference",
"Inference"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Illustration of moral sentiment change over the past two centuries. Moral sentiment trajectories of three probe concepts, slavery, democracy, and gay, are shown in moral sentiment embedding space through 2D projection from Fisher’s discriminant analysis with respect to seed words from the classes of moral virtue, moral vice, and moral irrelevance. Parenthesized items represent moral categories predicted to be most strongly associated with the probe concepts. Gray markers represent the fine-grained centroids (or anchors) of these moral classes.",
"Figure 2: Illustration of the three-tier framework that supports moral sentiment inference at different levels.",
"Table 1: Summary of models for moral sentiment classification. Each model infers moral sentiment of a query word vector q based on moral classes c (at any of the three levels) represented by moral seed words Sc. E [Sc] is the mean vector of Sc; E [Sc, j] ,Var [Sc, j] refer to the mean and variance of Sc along the j-th dimension in embedding space. d is the number of embedding dimensions; and fN , fMN refer to the density functions of univariate and multivariate normal distributions, respectively.",
"Table 2: Classification accuracy of moral seed words for moral relevance, moral polarity, and fine-grained moral categories based on 1990–1999 word embeddings for two independent corpora, Google N-grams and COHA.",
"Table 3: Pearson correlations between model predicted moral sentiment polarities and human valence ratings.",
"Table 4: Top 10 changing words towards moral relevance during 1800–2000, with model-inferred moral category and switching period. *, **, and *** denote p < 0.05, p < 0.001, and p < 0.0001, all Bonferroni-corrected.",
"Table 5: Top 10 changing words towards moral positive (upper panel) and negative (lower panel) polarities, with model-inferred most representative moral categories during historical and modern periods and the switching periods. *, **, and *** denote p < 0.05, p < 0.001, and p < 0.0001, all Bonferroni-corrected for multiple tests.",
"Figure 3: Moral sentiment time courses of slavery (left) and democracy (right) at each of the three levels, inferred by the Centroid model. Time courses at the moral relevance and polarity levels are in log odds ratios, and those for the fine-grained moral categories are represented by circles with sizes proportional to category probabilities.",
"Figure 4: Model predictions against percentage of Pew respondents who selected “Not a moral concern” (left) or “Acceptable” (right), with lines of best fit and Pearson correlation coefficients r shown in the background.",
"Table 6: Results from multiple regression that regresses rate of change in moral relevance against the factors of word frequency, length, and concreteness (n = 606)."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Figure3-1.png",
"7-Figure4-1.png",
"8-Table6-1.png"
]
} | [
"Which datasets are used in the paper?",
"How do they quantify moral relevance?"
] | [
[
"2001.07209-A three-tier modelling framework ::: Lexical data for moral sentiment-1",
"2001.07209-Historical corpus data-4",
"2001.07209-Historical corpus data-3",
"2001.07209-Historical corpus data-2",
"2001.07209-A three-tier modelling framework ::: Lexical data for moral sentiment-0"
],
[
"2001.07209-A three-tier modelling framework ::: Lexical data for moral sentiment-1"
]
] | [
"Google N-grams\nCOHA\nMoral Foundations Dictionary (MFD)\n",
"By complementing morally relevant seed words with a set of morally irrelevant seed words based on the notion of valence"
] | 110 |
1909.00279 | Generating Classical Chinese Poems from Vernacular Chinese | Classical Chinese poetry is a jewel in the treasure house of Chinese culture. Previous poem generation models only allow users to employ keywords to interfere with the meaning of generated poems, leaving the dominion of generation to the model. In this paper, we propose a novel task of generating classical Chinese poems from vernacular, which allows users to have more control over the semantics of generated poems. We adapt the approach of unsupervised machine translation (UMT) to our task. We use segmentation-based padding and reinforcement learning to address under-translation and over-translation, respectively. According to experiments, our approach significantly improves perplexity and BLEU compared with typical UMT models. Furthermore, we explored guidelines on how to write the input vernacular to generate better poems. Human evaluation showed our approach can generate high-quality poems which are comparable to amateur poems. | {
"paragraphs": [
[
"During thousands of years, millions of classical Chinese poems have been written. They contain ancient poets' emotions such as their appreciation for nature, desiring for freedom and concerns for their countries. Among various types of classical poetry, quatrain poems stand out. On the one hand, their aestheticism and terseness exhibit unique elegance. On the other hand, composing such poems is extremely challenging due to their phonological, tonal and structural restrictions.",
"Most previous models for generating classical Chinese poems BIBREF0, BIBREF1 are based on limited keywords or characters at fixed positions (e.g., acrostic poems). Since users could only interfere with the semantic of generated poems using a few input words, models control the procedure of poem generation. In this paper, we proposed a novel model for classical Chinese poem generation. As illustrated in Figure FIGREF1, our model generates a classical Chinese poem based on a vernacular Chinese paragraph. Our objective is not only to make the model generate aesthetic and terse poems, but also keep rich semantic of the original vernacular paragraph. Therefore, our model gives users more control power over the semantic of generated poems by carefully writing the vernacular paragraph.",
"Although a great number of classical poems and vernacular paragraphs are easily available, there exist only limited human-annotated pairs of poems and their corresponding vernacular translations. Thus, it is unlikely to train such poem generation model using supervised approaches. Inspired by unsupervised machine translation (UMT) BIBREF2, we treated our task as a translation problem, namely translating vernacular paragraphs to classical poems.",
"However, our work is not just a straight-forward application of UMT. In a training example for UMT, the length difference of source and target languages are usually not large, but this is not true in our task. Classical poems tend to be more concise and abstract, while vernacular text tends to be detailed and lengthy. Based on our observation on gold-standard annotations, vernacular paragraphs usually contain more than twice as many Chinese characters as their corresponding classical poems. Therefore, such discrepancy leads to two main problems during our preliminary experiments: (1) Under-translation: when summarizing vernacular paragraphs to poems, some vernacular sentences are not translated and ignored by our model. Take the last two vernacular sentences in Figure FIGREF1 as examples, they are not covered in the generated poem. (2) Over-translation: when expanding poems to vernacular paragraphs, certain words are unnecessarily translated for multiple times. For example, the last sentence in the generated poem of Figure FIGREF1, as green as sapphire, is back-translated as as green as as as sapphire.",
"Inspired by the phrase segmentation schema in classical poems BIBREF3, we proposed the method of phrase-segmentation-based padding to handle with under-translation. By padding poems based on the phrase segmentation custom of classical poems, our model better aligns poems with their corresponding vernacular paragraphs and meanwhile lowers the risk of under-translation. Inspired by Paulus2018ADR, we designed a reinforcement learning policy to penalize the model if it generates vernacular paragraphs with too many repeated words. Experiments show our method can effectively decrease the possibility of over-translation.",
"The contributions of our work are threefold:",
"(1) We proposed a novel task for unsupervised Chinese poem generation from vernacular text.",
"(2) We proposed using phrase-segmentation-based padding and reinforcement learning to address two important problems in this task, namely under-translation and over-translation.",
"(3) Through extensive experiments, we proved the effectiveness of our models and explored how to write the input vernacular to inspire better poems. Human evaluation shows our models are able to generate high quality poems, which are comparable to amateur poems."
],
[
"Classical Chinese Poem Generation Most previous works in classical Chinese poem generation focus on improving the semantic coherence of generated poems. Based on LSTM, Zhang and Lapata Zhang2014ChinesePG purposed generating poem lines incrementally by taking into account the history of what has been generated so far. Yan Yan2016iPA proposed a polishing generation schema, each poem line is generated incrementally and iteratively by refining each line one-by-one. Wang et al. Wang2016ChinesePG and Yi et al. Yi2018ChinesePG proposed models to keep the generated poems coherent and semantically consistent with the user's intent. There are also researches that focus on other aspects of poem generation. (Yang et al. Yang2018StylisticCP explored increasing the diversity of generated poems using an unsupervised approach. Xu et al. Xu2018HowII explored generating Chinese poems from images. While most previous works generate poems based on topic words, our work targets at a novel task: generating poems from vernacular Chinese paragraphs.",
"Unsupervised Machine Translation Compared with supervised machine translation approaches BIBREF4, BIBREF5, unsupervised machine translation BIBREF6, BIBREF2 does not rely on human-labeled parallel corpora for training. This technique is proved to greatly improve the performance of low-resource languages translation systems. (e.g. English-Urdu translation). The unsupervised machine translation framework is also applied to various other tasks, e.g. image captioning BIBREF7, text style transfer BIBREF8, speech to text translation BIBREF9 and clinical text simplification BIBREF10. The UMT framework makes it possible to apply neural models to tasks where limited human labeled data is available. However, in previous tasks that adopt the UMT framework, the abstraction levels of source and target language are the same. This is not the case for our task.",
"Under-Translation & Over-Translation Both are troublesome problems for neural sequence-to-sequence models. Most previous related researches adopt the coverage mechanism BIBREF11, BIBREF12, BIBREF13. However, as far as we know, there were no successful attempt applying coverage mechanism to transformer-based models BIBREF14."
],
[
"We transform our poem generation task as an unsupervised machine translation problem. As illustrated in Figure FIGREF1, based on the recently proposed UMT framework BIBREF2, our model is composed of the following components:",
"Encoder $\\textbf {E}_s$ and decoder $\\textbf {D}_s$ for vernacular paragraph processing",
"Encoder $\\textbf {E}_t$ and decoder $\\textbf {D}_t$ for classical poem processing",
"where $\\textbf {E}_s$ (or $\\textbf {E}_t$) takes in a vernacular paragraph (or a classical poem) and converts it into a hidden representation, and $\\textbf {D}_s$ (or $\\textbf {D}_t$) takes in the hidden representation and converts it into a vernacular paragraph (or a poem). Our model relies on a vernacular texts corpus $\\textbf {\\emph {S}}$ and a poem corpus $\\textbf {\\emph {T}}$. We denote $S$ and $T$ as instances in $\\textbf {\\emph {S}}$ and $\\textbf {\\emph {T}}$ respectively.",
"The training of our model relies on three procedures, namely parameter initialization, language modeling and back-translation. We will give detailed introduction to each procedure.",
"Parameter initialization As both vernacular and classical poem use Chinese characters, we initialize the character embedding of both languages in one common space, the same character in two languages shares the same embedding. This initialization helps associate characters with their plausible translations in the other language.",
"Language modeling It helps the model generate texts that conform to a certain language. A well-trained language model is able to detect and correct minor lexical and syntactic errors. We train the language models for both vernacular and classical poem by minimizing the following loss:",
"where $S_N$ (or $T_N$) is generated by adding noise (drop, swap or blank a few words) in $S$ (or $T$).",
"Back-translation Based on a vernacular paragraph $S$, we generate a poem $T_S$ using $\\textbf {E}_s$ and $\\textbf {D}_t$, we then translate $T_S$ back into a vernacular paragraph $S_{T_S} = \\textbf {D}_s(\\textbf {E}_t(T_S))$. Here, $S$ could be used as gold standard for the back-translated paragraph $S_{T_s}$. In this way, we could turn the unsupervised translation into a supervised task by maximizing the similarity between $S$ and $S_{T_S}$. The same also applies to using poem $T$ as gold standard for its corresponding back-translation $T_{S_T}$. We define the following loss:",
"Note that $\\mathcal {L}^{bt}$ does not back propagate through the generation of $T_S$ and $S_T$ as we observe no improvement in doing so. When training the model, we minimize the composite loss:",
"where $\\alpha _1$ and $\\alpha _2$ are scaling factors."
],
[
"During our early experiments, we realize that the naive UMT framework is not readily applied to our task. Classical Chinese poems are featured for its terseness and abstractness. They usually focus on depicting broad poetic images rather than details. We collected a dataset of classical Chinese poems and their corresponding vernacular translations, the average length of the poems is $32.0$ characters, while for vernacular translations, it is $73.3$. The huge gap in sequence length between source and target language would induce over-translation and under-translation when training UMT models. In the following sections, we explain the two problems and introduce our improvements."
],
[
"By nature, classical poems are more concise and abstract while vernaculars are more detailed and lengthy, to express the same meaning, a vernacular paragraph usually contains more characters than a classical poem. As a result, when summarizing a vernacular paragraph $S$ to a poem $T_S$, $T_S$ may not cover all information in $S$ due to its length limit. In real practice, we notice the generated poems usually only cover the information in the front part of the vernacular paragraph, while the latter part is unmentioned.",
"To alleviate under-translation, we propose phrase segmentation-based padding. Specifically, we first segment each line in a classical poem into several sub-sequences, we then join these sub-sequences with the special padding tokens <p>. During training, the padded lines are used instead of the original poem lines. As illustrated in Figure FIGREF10, padding would create better alignments between a vernacular paragraph and a prolonged poem, making it more likely for the latter part of the vernacular paragraph to be covered in the poem. As we mentioned before, the length of the vernacular translation is about twice the length of its corresponding classical poem, so we pad each segmented line to twice its original length.",
"According to Ye jia:1984, to present a stronger sense of rhythm, each type of poem has its unique phrase segmentation schema, for example, most seven-character quatrain poems adopt the 2-2-3 schema, i.e. each quatrain line contains 3 phrases, the first, second and third phrase contains 2, 2, 3 characters respectively. Inspired by this law, we segment lines in a poem according to the corresponding phrase segmentation schema. In this way, we could avoid characters within the scope of a phrase to be cut apart, thus best preserve the semantic of each phrase.BIBREF15"
],
[
"In NMT, when decoding is complete, the decoder would generate an <EOS>token, indicating it has reached the end of the output sequence. However, when expending a poem $T$ into a vernacular Chinese paragraph $S_T$, due to the conciseness nature of poems, after finishing translating every source character in $T$, the output sequence $S_T$ may still be much shorter than the expected length of a poem‘s vernacular translation. As a result, the decoder would believe it has not finished decoding. Instead of generating the <EOS>token, the decoder would continue to generate new output characters from previously translated source characters. This would cause the decoder to repetitively output a piece of text many times.",
"To remedy this issue, in addition to minimizing the original loss function $\\mathcal {L}$, we propose to minimize a specific discrete metric, which is made possible with reinforcement learning.",
"We define repetition ratio $RR(S)$ of a paragraph $S$ as:",
"where $vocab(S)$ refers to the number of distinctive characters in $S$, $len(S)$ refers the number of all characters in $S$. Obviously, if a generated sequence contains many repeated characters, it would have high repetition ratio. Following the self-critical policy gradient training BIBREF16, we define the following loss function:",
"where $\\tau $ is a manually set threshold. Intuitively, minimizing $\\mathcal {L}^{rl}$ is equivalent to maximizing the conditional likelihood of the sequence $S$ given $S_{T_S}$ if its repetition ratio is lower than the threshold $\\tau $. Following BIBREF17, we revise the composite loss as:",
"where $\\alpha _1, \\alpha _2, \\alpha _3$ are scaling factors."
],
[
"The objectives of our experiment are to explore the following questions: (1) How much do our models improve the generated poems? (Section SECREF23) (2) What are characteristics of the input vernacular paragraph that lead to a good generated poem? (Section SECREF26) (3) What are weaknesses of generated poems compared to human poems? (Section SECREF27) To this end, we built a dataset as described in Section SECREF18. Evaluation metrics and baselines are described in Section SECREF21 and SECREF22. For the implementation details of building the dataset and models, please refer to supplementary materials."
],
[
"Training and Validation Sets We collected a corpus of poems and a corpus of vernacular literature from online resources. The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems, the vernacular literature corpus contains 337K short paragraphs from 281 famous books, the corpus covers various literary forms including prose, fiction and essay. Note that our poem corpus and a vernacular corpus are not aligned. We further split the two corpora into a training set and a validation set.",
"Test Set From online resources, we collected 487 seven-character quatrain poems from Tang Poems and Song Poems, as well as their corresponding high quality vernacular translations. These poems could be used as gold standards for poems generated from their corresponding vernacular translations. Table TABREF11 shows the statistics of our training, validation and test set."
],
[
"Perplexity Perplexity reflects the probability a model generates a certain poem. Intuitively, a better model would yield higher probability (lower perplexity) on the gold poem.",
"BLEU As a standard evaluation metric for machine translation, BLEU BIBREF18 measures the intersection of n-grams between the generated poem and the gold poem. A better generated poem usually achieves higher BLEU score, as it shares more n-gram with the gold poem.",
"Human evaluation While perplexity and BLEU are objective metrics that could be applied to large-volume test set, evaluating Chinese poems is after all a subjective task. We invited 30 human evaluators to join our human evaluation. The human evaluators were divided into two groups. The expert group contains 15 people who hold a bachelor degree in Chinese literature, and the amateur group contains 15 people who holds a bachelor degree in other fields. All 30 human evaluators are native Chinese speakers.",
"We ask evaluators to grade each generated poem from four perspectives: 1) Fluency: Is the generated poem grammatically and rhythmically well formed, 2) Semantic coherence: Is the generated poem itself semantic coherent and meaningful, 3) Semantic preservability: Does the generated poem preserve the semantic of the modern Chinese translation, 4) Poeticness: Does the generated poem display the characteristic of a poem and does the poem build good poetic image. The grading scale for each perspective is from 1 to 5."
],
[
"We compare the performance of the following models: (1) LSTM BIBREF19; (2)Naive transformer BIBREF14; (3)Transformer + Anti OT (RL loss); (4)Transformer + Anti UT (phrase segmentation-based padding); (5)Transformer + Anti OT&UT."
],
[
"As illustrated in Table TABREF12 (ID 1). Given the vernacular translation of each gold poem in test set, we generate five poems using our models. Intuitively, the more the generated poem resembles the gold poem, the better the model is. We report mean perplexity and BLEU scores in Table TABREF19 (Where +Anti OT refers to adding the reinforcement loss to mitigate over-fitting and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), human evaluation results in Table TABREF20.",
"According to experiment results, perplexity, BLEU scores and total scores in human evaluation are consistent with each other. We observe all BLEU scores are fairly low, we believe it is reasonable as there could be multiple ways to compose a poem given a vernacular paragraph. Among transformer-based models, both +Anti OT and +Anti UT outperforms the naive transformer, while Anti OT&UT shows the best performance, this demonstrates alleviating under-translation and over-translation both helps generate better poems. Specifically, +Anti UT shows bigger improvement than +Anti OT. According to human evaluation, among the four perspectives, our Anti OT&UT brought most score improvement in Semantic preservability, this proves our improvement on semantic preservability was most obvious to human evaluators. All transformer-based models outperform LSTM. Note that the average length of the vernacular translation is over 70 characters, comparing with transformer-based models, LSTM may only keep the information in the beginning and end of the vernacular. We anticipated some score inconsistency between expert group and amateur group. However, after analyzing human evaluation results, we did not observed big divergence between two groups."
],
[
"Chinese literature is not only featured for classical poems, but also various other literature forms. Song lyricUTF8gbsn(宋词), or ci also gained tremendous popularity in its palmy days, standing out in classical Chinese literature. Modern prose, modern poems and pop song lyrics have won extensive praise among Chinese people in modern days. The goal of this experiment is to transfer texts of other literature forms into quatrain poems. We expect the generated poems to not only keep the semantic of the original text, but also demonstrate terseness, rhythm and other characteristics of ancient poems. Specifically, we chose 20 famous fragments from four types of Chinese literature (5 fragments for each of modern prose, modern poems, pop song lyrics and Song lyrics). We try to As no ground truth is available, we resorted to human evaluation with the same grading standard in Section SECREF23.",
"Comparing the scores of different literature forms, we observe Song lyric achieves higher scores than the other three forms of modern literature. It is not surprising as both Song lyric and quatrain poems are written in classical Chinese, while the other three literature forms are all in vernacular.",
"Comparing the scores within the same literature form, we observe the scores of poems generated from different paragraphs tends to vary. After carefully studying the generated poems as well as their scores, we have the following observation:",
"1) In classical Chinese poems, poetic images UTF8gbsn(意象) were widely used to express emotions and to build artistic conception. A certain poetic image usually has some fixed implications. For example, autumn is usually used to imply sadness and loneliness. However, with the change of time, poetic images and their implications have also changed. According to our observation, if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves higher score. As illustrated in Table TABREF12, both paragraph 2 and 3 are generated from pop song lyrics, paragraph 2 uses many poetic images from classical literature (e.g. pear flowers, makeup), while paragraph 3 uses modern poetic images (e.g. sparrows on the utility pole). Obviously, compared with poem 2, sentences in poem 3 seems more confusing, as the poetic images in modern times may not fit well into the language model of classical poems.",
"2) We also observed that poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs. For example, in Table TABREF12, both paragraph 4 (more descriptive) and paragraph 5 (more philosophical) were selected from famous modern prose. However, compared with poem 4, poem 5 seems semantically more confusing. We offer two explanations to the above phenomenon: i. Limited by the 28-character restriction, it is hard for quatrain poems to cover complex logical or philosophical explanation. ii. As vernacular paragraphs are more detailed and lengthy, some information in a vernacular paragraph may be lost when it is summarized into a classical poem. While losing some information may not change the general meaning of a descriptive paragraph, it could make a big difference in a logical or philosophical paragraph."
],
[
"We manually select 25 generated poems from vernacular Chinese translations and pair each one with its corresponding human written poem. We then present the 25 pairs to human evaluators and ask them to differentiate which poem is generated by human poet.",
"As demonstrated in Table TABREF29, although the general meanings in human poems and generated poems seem to be the same, the wordings they employ are quite different. This explains the low BLEU scores in Section 4.3. According to the test results in Table TABREF30, human evaluators only achieved 65.8% in mean accuracy. This indicates the best generated poems are somewhat comparable to poems written by amateur poets.",
"We interviewed evaluators who achieved higher than 80% accuracy on their differentiation strategies. Most interviewed evaluators state they realize the sentences in a human written poem are usually well organized to highlight a theme or to build a poetic image, while the correlation between sentences in a generated poem does not seem strong. As demonstrated in Table TABREF29, the last two sentences in both human poems (marked as red) echo each other well, while the sentences in machine-generated poems seem more independent. This gives us hints on the weakness of generated poems: While neural models may generate poems that resemble human poems lexically and syntactically, it's still hard for them to compete with human beings in building up good structures."
],
[
"Addressing Under-Translation In this part, we wish to explore the effect of different phrase segmentation schemas on our phrase segmentation-based padding. According to Ye jia:1984, most seven-character quatrain poems adopt the 2-2-3 segmentation schema. As shown in examples in Figure FIGREF31, we compare our phrase segmentation-based padding (2-2-3 schema) to two less common schemas (i.e. 2-3-2 and 3-2-2 schema) we report our experiment results in Table TABREF32.",
"The results show our 2-2-3 segmentation-schema greatly outperforms 2-3-2 and 3-2-2 schema in both perplexity and BLEU scores. Note that the BLEU scores of 2-3-2 and 3-2-2 schema remains almost the same as our naive baseline (Without padding). According to the observation, we have the following conclusions: 1) Although padding better aligns the vernacular paragraph to the poem, it may not improve the quality of the generated poem. 2) The padding tokens should be placed according to the phrase segmentation schema of the poem as it preserves the semantic within the scope of each phrase.",
"Addressing Over-Translation To explore the effect of our reinforcement learning policy on alleviating over-translation, we calculate the repetition ratio of vernacular paragraphs generated from classical poems in our validation set. We found naive transformer achieves $40.8\\%$ in repetition ratio, while our +Anti OT achieves $34.9\\%$. Given the repetition ratio of vernacular paragraphs (written by human beings) in our validation set is $30.1\\%$, the experiment results demonstrated our RL loss effectively alleviate over-translation, which in turn leads to better generated poems."
],
[
"In this paper, we proposed a novel task of generating classical Chinese poems from vernacular paragraphs. We adapted the unsupervised machine translation model to our task and meanwhile proposed two novel approaches to address the under-translation and over-translation problems. Experiments show that our task can give users more controllability in generating poems. In addition, our approaches are very effective to solve the problems when the UMT model is directly used in this task. In the future, we plan to explore: (1) Applying the UMT model in the tasks where the abstraction levels of source and target languages are different (e.g., unsupervised automatic summarization); (2) Improving the quality of generated poems via better structure organization approaches."
]
],
"section_name": [
"Introduction",
"Related Works",
"Model ::: Main Architecture",
"Model ::: Addressing Under-Translation and Over-Translation",
"Model ::: Addressing Under-Translation and Over-Translation ::: Under-Translation",
"Model ::: Addressing Under-Translation and Over-Translation ::: Over-Translation",
"Experiment",
"Experiment ::: Datasets",
"Experiment ::: Evaluation Metrics",
"Experiment ::: Baselines",
"Experiment ::: Reborn Poems: Generating Poems from Vernacular Translations",
"Experiment ::: Interpoetry: Generating Poems from Various Literature Forms",
"Experiment ::: Human Discrimination Test",
"Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"04c432ed960ff69bb335b3eac687be8fe4ecf97a"
],
"answer": [
{
"evidence": [
"1) In classical Chinese poems, poetic images UTF8gbsn(意象) were widely used to express emotions and to build artistic conception. A certain poetic image usually has some fixed implications. For example, autumn is usually used to imply sadness and loneliness. However, with the change of time, poetic images and their implications have also changed. According to our observation, if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves higher score. As illustrated in Table TABREF12, both paragraph 2 and 3 are generated from pop song lyrics, paragraph 2 uses many poetic images from classical literature (e.g. pear flowers, makeup), while paragraph 3 uses modern poetic images (e.g. sparrows on the utility pole). Obviously, compared with poem 2, sentences in poem 3 seems more confusing, as the poetic images in modern times may not fit well into the language model of classical poems.",
"2) We also observed that poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs. For example, in Table TABREF12, both paragraph 4 (more descriptive) and paragraph 5 (more philosophical) were selected from famous modern prose. However, compared with poem 4, poem 5 seems semantically more confusing. We offer two explanations to the above phenomenon: i. Limited by the 28-character restriction, it is hard for quatrain poems to cover complex logical or philosophical explanation. ii. As vernacular paragraphs are more detailed and lengthy, some information in a vernacular paragraph may be lost when it is summarized into a classical poem. While losing some information may not change the general meaning of a descriptive paragraph, it could make a big difference in a logical or philosophical paragraph."
],
"extractive_spans": [
" if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves higher score",
"poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs"
],
"free_form_answer": "",
"highlighted_evidence": [
"According to our observation, if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves higher score.",
"We also observed that poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e025375e4b5390c1b05ad8d0b226d6f05b5faa4c"
],
"answer": [
{
"evidence": [
"As illustrated in Table TABREF12 (ID 1). Given the vernacular translation of each gold poem in test set, we generate five poems using our models. Intuitively, the more the generated poem resembles the gold poem, the better the model is. We report mean perplexity and BLEU scores in Table TABREF19 (Where +Anti OT refers to adding the reinforcement loss to mitigate over-fitting and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), human evaluation results in Table TABREF20.",
"FLOAT SELECTED: Table 3: Perplexity and BLEU scores of generating poems from vernacular translations. Since perplexity and BLEU scores on the test set fluctuates from epoch to epoch, we report the mean perplexity and BLEU scores over 5 consecutive epochs after convergence."
],
"extractive_spans": [],
"free_form_answer": "Perplexity of the best model is 65.58 compared to best baseline 105.79.\nBleu of the best model is 6.57 compared to best baseline 5.50.",
"highlighted_evidence": [
"We report mean perplexity and BLEU scores in Table TABREF19 (Where +Anti OT refers to adding the reinforcement loss to mitigate over-fitting and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), human evaluation results in Table TABREF20.",
"FLOAT SELECTED: Table 3: Perplexity and BLEU scores of generating poems from vernacular translations. Since perplexity and BLEU scores on the test set fluctuates from epoch to epoch, we report the mean perplexity and BLEU scores over 5 consecutive epochs after convergence."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2a6d7e0c7dfd73525cb559488b4c967b42f06831"
],
"answer": [
{
"evidence": [
"Training and Validation Sets We collected a corpus of poems and a corpus of vernacular literature from online resources. The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems, the vernacular literature corpus contains 337K short paragraphs from 281 famous books, the corpus covers various literary forms including prose, fiction and essay. Note that our poem corpus and a vernacular corpus are not aligned. We further split the two corpora into a training set and a validation set."
],
"extractive_spans": [
"We collected a corpus of poems and a corpus of vernacular literature from online resources"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collected a corpus of poems and a corpus of vernacular literature from online resources. The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems, the vernacular literature corpus contains 337K short paragraphs from 281 famous books, the corpus covers various literary forms including prose, fiction and essay. Note that our poem corpus and a vernacular corpus are not aligned. We further split the two corpora into a training set and a validation set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What are some guidelines in writing input vernacular so model can generate ",
"How much is proposed model better in perplexity and BLEU score than typical UMT models?",
"What dataset is used for training?"
],
"question_id": [
"6b9310b577c6232e3614a1612cbbbb17067b3886",
"d484a71e23d128f146182dccc30001df35cdf93f",
"5787ac3e80840fe4cf7bfae7e8983fa6644d6220"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: An example of the training procedures of our model. Here we depict two procedures, namely back translation and language modeling. Back translation has two paths, namely ES → DT → ET → DS and DT → ES → DS → ET . Language modeling also has two paths, namely ET → DT and ES → DS . Figure 1 shows only the former one for each training procedure.",
"Figure 2: A real example to show the effectiveness of our phrase-segmentation-based padding. Without padding, the vernacular paragraph could not be aligned well with the poem. Therefore, the text in South Yangtze ends but the grass and trees have not withered in red is not covered in the poem. By contrast, they are covered well after using our padding method.",
"Table 1: Statistics of our dataset",
"Table 2: A few poems generated by our model from their corresponding vernacular paragraphs.",
"Table 3: Perplexity and BLEU scores of generating poems from vernacular translations. Since perplexity and BLEU scores on the test set fluctuates from epoch to epoch, we report the mean perplexity and BLEU scores over 5 consecutive epochs after convergence.",
"Table 4: Human evaluation results of generating poems from vernacular translations. We report the mean scores for each evaluation metric and total scores of four metrics.",
"Table 5: Human evaluation results for generating poems from various literature forms. We show the results obtained from our best model (Transformer+Anti OT&UT).",
"Table 6: Examples of generated poems and their corresponding gold poems used in human discrimination test.",
"Table 7: The performance of human discrimination test.",
"Table 8: Perplexity and BLEU scores of different padding schemas."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"9-Table7-1.png",
"9-Table8-1.png"
]
} | [
"How much is proposed model better in perplexity and BLEU score than typical UMT models?"
] | [
[
"1909.00279-7-Table3-1.png",
"1909.00279-Experiment ::: Reborn Poems: Generating Poems from Vernacular Translations-0"
]
] | [
"Perplexity of the best model is 65.58 compared to best baseline 105.79.\nBleu of the best model is 6.57 compared to best baseline 5.50."
] | 112 |
1812.07023 | From FiLM to Video: Multi-turn Question Answering with Multi-modal Context | Understanding audio-visual content and the ability to have an informative conversation about it have both been challenging areas for intelligent systems. The Audio Visual Scene-aware Dialog (AVSD) challenge, organized as a track of the Dialog System Technology Challenge 7 (DSTC7), proposes a combined task, where a system has to answer questions pertaining to a video given a dialogue with previous question-answer pairs and the video itself. We propose for this task a hierarchical encoder-decoder model which computes a multi-modal embedding of the dialogue context. It first embeds the dialogue history using two LSTMs. We extract video and audio frames at regular intervals and compute semantic features using pre-trained I3D and VGGish models, respectively. Before summarizing both modalities into fixed-length vectors using LSTMs, we use FiLM blocks to condition them on the embeddings of the current question, which allows us to reduce the dimensionality considerably. Finally, we use an LSTM decoder that we train with scheduled sampling and evaluate using beam search. Compared to the modality-fusing baseline model released by the AVSD challenge organizers, our model achieves a relative improvements of more than 16%, scoring 0.36 BLEU-4 and more than 33%, scoring 0.997 CIDEr. | {
"paragraphs": [
[
"Deep neural networks have been successfully applied to several computer vision tasks such as image classification BIBREF0 , object detection BIBREF1 , video action classification BIBREF2 , etc. They have also been successfully applied to natural language processing tasks such as machine translation BIBREF3 , machine reading comprehension BIBREF4 , etc. There has also been an explosion of interest in tasks which combine multiple modalities such as audio, vision, and language together. Some popular multi-modal tasks combining these three modalities, and their differences are highlighted in Table TABREF1 .",
"Given an image and a question related to the image, the vqa challenge BIBREF5 tasked users with selecting an answer to the question. BIBREF6 identified several sources of bias in the vqa dataset, which led to deep neural models answering several questions superficially. They found that in several instances, deep architectures exploited the statistics of the dataset to select answers ignoring the provided image. This prompted the release of vqa 2.0 BIBREF7 which attempts to balance the original dataset. In it, each question is paired to two similar images which have different answers. Due to the complexity of vqa, understanding the failures of deep neural architectures for this task has been a challenge. It is not easy to interpret whether the system failed in understanding the question or in understanding the image or in reasoning over it. The CLEVR dataset BIBREF8 was hence proposed as a useful benchmark to evaluate such systems on the task of visual reasoning. Extending question answering over images to videos, BIBREF9 have proposed MovieQA, where the task is to select the correct answer to a provided question given the movie clip on which it is based.",
"Intelligent systems that can interact with human users for a useful purpose are highly valuable. To this end, there has been a recent push towards moving from single-turn qa to multi-turn dialogue, which is a natural and intuitive setting for humans. Among multi-modal dialogue tasks, visdial BIBREF10 provides an image and dialogue where each turn is a qa pair. The task is to train a model to answer these questions within the dialogue. The avsd challenge extends the visdial task from images to the audio-visual domain.",
"We present our modelname model for the avsd task. modelname combines a hred for encoding and generating qa-dialogue with a novel FiLM-based audio-visual feature extractor for videos and an auxiliary multi-task learning-based decoder for decoding a summary of the video. It outperforms the baseline results for the avsd dataset BIBREF11 and was ranked 2nd overall among the dstc7 avsd challenge participants.",
"In Section SECREF2 , we discuss existing literature on end-to-end dialogue systems with a special focus on multi-modal dialogue systems. Section SECREF3 describes the avsd dataset. In Section SECREF4 , we present the architecture of our modelname model. We describe our evaluation and experimental setup in Section SECREF5 and then conclude in Section SECREF6 ."
],
[
"With the availability of large conversational corpora from sources like Reddit and Twitter, there has been a lot of recent work on end-to-end modelling of dialogue for open domains. BIBREF12 treated dialogue as a machine translation problem where they translate from the stimulus to the response. They observed this to be more challenging than machine translation tasks due the larger diversity of possible responses. Among approaches that just use the previous utterance to generate the current response, BIBREF13 proposed a response generation model based on the encoder decoder framework. BIBREF14 also proposed an encoder-decoder based neural network architecture that uses the previous two utterances to generate the current response. Among discriminative methods (i.e. methods that produce a score for utterances from a set and then rank them), BIBREF15 proposed a neural architecture to select the best next response from a list of responses by measuring their similarity to the dialogue context. BIBREF16 extended prior work on encoder-decoder-based models to multi-turn conversations. They trained a hierarchical model called hred for generating dialogue utterances where a recurrent neural network encoder encodes each utterance. A higher-level recurrent neural network maintains the dialogue state by further encoding the individual utterance encodings. This dialogue state is then decoded by another recurrent decoder to generate the response at that point in time. In followup work, BIBREF17 used a latent stochastic variable to condition the generation process which aided their model in producing longer coherent outputs that better retain the context.",
"Datasets and tasks BIBREF10 , BIBREF18 , BIBREF19 have also been released recently to study visual-input based conversations. BIBREF10 train several generative and discriminative deep neural models for the visdial task. They observe that on this task, discriminative models outperform generative models and that models making better use of the dialogue history do better than models that do not use dialogue history at all. Unexpectedly, the performance between models that use the image features and models that do no use these features is not significantly different. As we discussed in Section SECREF1 , this is similar to the issues vqa models faced initially due to the imbalanced nature of the dataset, which leads us to believe that language is a strong prior on the visdial dataset too. BIBREF20 train two separate agents to play a cooperative game where one agent has to answer the other agent's questions, which in turn has to predict the fc7 features of the Image obtained from VGGNet. Both agents are based on hred models and they show that agents fine-tuned with rl outperform agents trained solely with supervised learning. BIBREF18 train both generative and discriminative deep neural models on the igc dataset, where the task is to generate questions and answers to carry on a meaningful conversation. BIBREF19 train hred-based models on GuessWhat?! dataset in which agents have to play a guessing game where one player has to find an object in the picture which the other player knows about and can answer questions about them.",
"Moving from image-based dialogue to video-based dialogue adds further complexity and challenges. Limited availability of such data is one of the challenges. Apart from the avsd dataset, there does not exist a video dialogue dataset to the best of our knowledge and the avsd data itself is fairly limited in size. Extracting relevant features from videos also contains the inherent complexity of extracting features from individual frames and additionally requires understanding their temporal interaction. The temporal nature of videos also makes it important to be able to focus on a varying-length subset of video frames as the action which is being asked about might be happening within them. There is also the need to encode the additional modality of audio which would be required for answering questions that rely on the audio track. With limited size of publicly available datasets based on the visual modality, learning useful features from high dimensional visual data has been a challenge even for the visdial dataset, and we anticipate this to be an even more significant challenge on the avsd dataset as it involves videos.",
"On the avsd task, BIBREF11 train an attention-based audio-visual scene-aware dialogue model which we use as the baseline model for this paper. They divide each video into multiple equal-duration segments and, from each of them, extract video features using an I3D BIBREF21 model, and audio features using a VGGish BIBREF22 model. The I3D model was pre-trained on Kinetics BIBREF23 dataset and the VGGish model was pre-trained on Audio Set BIBREF24 . The baseline encodes the current utterance's question with a lstm BIBREF25 and uses the encoding to attend to the audio and video features from all the video segments and to fuse them together. The dialogue history is modelled with a hierarchical recurrent lstm encoder where the input to the lower level encoder is a concatenation of question-answer pairs. The fused feature representation is concatenated with the question encoding and the dialogue history encoding and the resulting vector is used to decode the current answer using an lstm decoder. Similar to the visdial models, the performance difference between the best model that uses text and the best model that uses both text and video features is small. This indicates that the language is a stronger prior here and the baseline model is unable to make good use of the highly relevant video.",
"Automated evaluation of both task-oriented and non-task-oriented dialogue systems has been a challenge BIBREF26 , BIBREF27 too. Most such dialogue systems are evaluated using per-turn evaluation metrics since there is no suitable per-dialogue metric as conversations do not need to happen in a deterministic ordering of turns. These per-turn evaluation metrics are mostly word-overlap-based metrics such as BLEU, METEOR, ROUGE, and CIDEr, borrowed from the machine translation literature. Due to the diverse nature of possible responses, world-overlap metrics are not highly suitable for evaluating these tasks. Human evaluation of generated responses is considered the most reliable metric for such tasks but it is cost prohibitive and hence the dialogue system literature continues to rely widely on word-overlap-based metrics."
],
[
"The avsd dataset BIBREF28 consists of dialogues collected via amt. Each dialogue is associated with a video from the Charades BIBREF29 dataset and has conversations between two amt workers related to the video. The Charades dataset has multi-action short videos and it provides text descriptions for these videos, which the avsd challenge also distributes as the caption. The avsd dataset has been collected using similar methodology as the visdial dataset. In avsd, each dialogue turn consists of a question and answer pair. One of the amt workers assumes the role of questioner while the other amt worker assumes the role of answerer. The questioner sees three static frames from the video and has to ask questions. The answerer sees the video and answers the questions asked by the questioner. After 10 such qa turns, the questioner wraps up by writing a summary of the video based on the conversation.",
"Dataset statistics such as the number of dialogues, turns, and words for the avsd dataset are presented in Table TABREF5 . For the initially released prototype dataset, the training set of the avsd dataset corresponds to videos taken from the training set of the Charades dataset while the validation and test sets of the avsd dataset correspond to videos taken from the validation set of the Charades dataset. For the official dataset, training, validation and test sets are drawn from the corresponding Charades sets.",
"The Charades dataset also provides additional annotations for the videos such as action, scene, and object annotations, which are considered to be external data sources by the avsd challenge, for which there is a special sub-task in the challenge. The action annotations also include the start and end time of the action in the video."
],
[
"Our modelname model is based on the hred framework for modelling dialogue systems. In our model, an utterance-level recurrent lstm encoder encodes utterances and a dialogue-level recurrent lstm encoder encodes the final hidden states of the utterance-level encoders, thus maintaining the dialogue state and dialogue coherence. We use the final hidden states of the utterance-level encoders in the attention mechanism that is applied to the outputs of the description, video, and audio encoders. The attended features from these encoders are fused with the dialogue-level encoder's hidden states. An utterance-level decoder decodes the response for each such dialogue state following a question. We also add an auxiliary decoding module which is similar to the response decoder except that it tries to generate the caption and/or the summary of the video. We present our model in Figure FIGREF2 and describe the individual components in detail below."
],
[
"The utterance-level encoder is a recurrent neural network consisting of a single layer of lstm cells. The input to the lstm are word embeddings for each word in the utterance. The utterance is concatenated with a special symbol <eos> marking the end of the sequence. We initialize our word embeddings using 300-dimensional GloVe BIBREF30 and then fine-tune them during training. For words not present in the GloVe vocabulary, we initialize their word embeddings from a random uniform distribution."
],
[
"Similar to the utterance-level encoder, the description encoder is also a single-layer lstm recurrent neural network. Its word embeddings are also initialized with GloVe and then fine-tuned during training. For the description, we use the caption and/or the summary for the video provided with the dataset. The description encoder also has access to the last hidden state of the utterance-level encoder, which it uses to generate an attention map over the hidden states of its lstm. The final output of this module is the attention-weighted sum of the lstm hidden states."
],
[
"For the video encoder, we use an I3D model pre-trained on the Kinetics dataset BIBREF23 and extract the output of its Mixed_7c layer for INLINEFORM0 (30 for our models) equi-distant segments of the video. Over these features, we add INLINEFORM1 (2 for our models) FiLM BIBREF31 blocks which have been highly successful in visual reasoning problems. Each FiLM block applies a conditional (on the utterance encoding) feature-wise affine transformation on the features input to it, ultimately leading to the extraction of more relevant features. The FiLM blocks are followed by fully connected layers which are further encoded by a single layer recurrent lstm network. The last hidden state of the utterance-level encoder then generates an attention map over the hidden states of its lstm, which is multiplied by the hidden states to provide the output of this module. We also experimented with using convolutional Mixed_5c features to capture spatial information but on the limited avsd dataset they did not yield any improvement. When not using the FiLM blocks, we use the final layer I3D features (provided by the avsd organizers) and encode them with the lstm directly, followed by the attention step. We present the video encoder in Figure FIGREF3 ."
],
[
"The audio encoder is structurally similar to the video encoder. We use the VGGish features provided by the avsd challenge organizers. Also similar to the video encoder, when not using the FiLM blocks, we use the VGGish features and encode them with the lstm directly, followed by the attention step. The audio encoder is depicted in Figure FIGREF4 ."
],
[
"The outputs of the encoders for past utterances, descriptions, video, and audio together form the dialogue context INLINEFORM0 which is the input of the decoder. We first combine past utterances using a dialogue-level encoder which is a single-layer lstm recurrent neural network. The input to this encoder are the final hidden states of the utterance-level lstm. To combine the hidden states of these diverse modalities, we found concatenation to perform better on the validation set than averaging or the Hadamard product."
],
[
"The answer decoder consists of a single-layer recurrent lstm network and generates the answer to the last question utterance. At each time-step, it is provided with the dialogue-level state and produces a softmax over a vector corresponding to vocabulary words and stops when 30 words were produced or an end of sentence token is encountered.",
"The auxiliary decoder is functionally similar to the answer decoder. The decoded sentence is the caption and/or description of the video. We use the Video Encoder state instead of the Dialogue-level Encoder state as input since with this module we want to learn a better video representation capable of decoding the description."
],
[
"For a given context embedding INLINEFORM0 at dialogue turn INLINEFORM1 , we minimize the negative log-likelihood of the answer word INLINEFORM2 (vocabulary size), normalized by the number of words INLINEFORM3 in the ground truth response INLINEFORM4 , L(Ct, r) = -1Mm=1MiV( [rt,m=i] INLINEFORM5 ) , where the probabilities INLINEFORM6 are given by the decoder LSTM output, r*t,m-1 ={ll rt,m-1 ; s>0.2, sU(0, 1)",
"v INLINEFORM0 ; else . is given by scheduled sampling BIBREF32 , and INLINEFORM1 is a symbol denoting the start of a sequence. We optimize the model using the AMSGrad algorithm BIBREF33 and use a per-condition random search to determine hyperparameters. We train the model using the BLEU-4 score on the validation set as our stopping citerion."
],
[
"The avsd challenge tasks we address here are:",
"We train our modelname model for Task 1.a and Task 2.a of the challenge and we present the results in Table TABREF9 . Our model outperforms the baseline model released by BIBREF11 on all of these tasks. The scores for the winning team have been released to challenge participants and are also included. Their approach, however, is not public as of yet. We observe the following for our models:",
"Since the official test set has not been released publicly, results reported on the official test set have been provided by the challenge organizers. For the prototype test set and for the ablation study presented in Table TABREF24 , we use the same code for evaluation metrics as used by BIBREF11 for fairness and comparability. We attribute the significant performance gain of our model over the baseline to a combination of several factors as described below:",
"Our primary architectural differences over the baseline model are: not concatenating the question, answer pairs before encoding them, the auxiliary decoder module, and using the Time-Extended FiLM module for feature extraction. These, combined with using scheduled sampling and running hyperparameter optimization over the validation set to select hyperparameters, give us the observed performance boost.",
"We observe that our models generate fairly relevant responses to questions in the dialogues, and models with audio-visual inputs respond to audio-visual questions (e.g. “is there any voices or music ?”) correctly more often.",
"We conduct an ablation study on the effectiveness of different components (eg., text, video and audio) and present it in Table TABREF24 . Our experiments show that:"
],
[
"We presented modelname, a state-of-the-art dialogue model for conversations about videos. We evaluated the model on the official AVSD test set, where it achieves a relative improvement of more than 16% over the baseline model on BLEU-4 and more than 33% on CIDEr. The challenging aspect of multi-modal dialogue is fusing modalities with varying information density. On AVSD, it is easiest to learn from the input text, while video features remain largely opaque to the decoder. modelname uses a generalization of FiLM to video that conditions video feature extraction on a question. However, similar to related work, absolute improvements of incorporating video features into dialogue are consistent but small. Thus, while our results indicate the suitability of our FiLM generalization, they also highlight that applications at the intersection between language and video are currently constrained by the quality of video features, and emphasizes the need for larger datasets."
]
],
"section_name": [
"Introduction",
"Related Work",
"The avsd dataset and challenge",
"Models",
"Utterance-level Encoder",
"Description Encoder",
"Video Encoder with Time-Extended FiLM",
"Audio Encoder",
"Fusing Modalities for Dialogue Context",
"Decoders",
"Loss Function",
"Experiments",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"ee2861105f2d63096676c4b63554fe0593a9c6a0"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"04f7cd52b0492dc423550fd5e96c757cec3066cc"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The utterance is concatenated with a special symbol marking the end of the sequence. We initialize our word embeddings using 300-dimensional GloVe BIBREF30 and then fine-tune them during training."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"88bf278c9f23fbbb3cee3410c62d8760350ddb7d"
],
"answer": [
{
"evidence": [
"Since the official test set has not been released publicly, results reported on the official test set have been provided by the challenge organizers. For the prototype test set and for the ablation study presented in Table TABREF24 , we use the same code for evaluation metrics as used by BIBREF11 for fairness and comparability. We attribute the significant performance gain of our model over the baseline to a combination of several factors as described below:"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (list missing) \nScheduled sampling: In our experiments, we found that models trained with scheduled sampling performed better (about 0.004 BLEU-4 on validation set) than the ones trained using teacher-forcing for the AVSD dataset. Hence, we use scheduled sampling for all the results we report in this paper.\n\nYes.",
"highlighted_evidence": [
"We attribute the significant performance gain of our model over the baseline to a combination of several factors as described below:"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"At which interval do they extract video and audio frames?",
"Do they use pretrained word vectors for dialogue context embedding?",
"Do they train a different training method except from scheduled sampling?"
],
"question_id": [
"05e3b831e4c02bbd64a6e35f6c52f0922a41539a",
"bd74452f8ea0d1d82bbd6911fbacea1bf6e08cab",
"6472f9d0a385be81e0970be91795b1b97aa5a9cf"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Tasks with audio, visual and text modalities",
"Figure 1: FA-HRED uses the last question’s encoding to attend to video description, audio, and video features. These features along with the dialogue state enable the model to generate the answer to the current question. The ground truth answer is encoded into the dialogue history for the next turn.",
"Figure 2: Video Encoder Module: FiLM for video features. Question encoding of the current question is used here.",
"Figure 3: Audio Encoder Module: FiLM for audio features. Question encoding of the current question is used here.",
"Table 2: AVSD: Dataset Statistics. Top: official dataset. Bottom half: prototype dataset released earlier.",
"Table 3: Scores achieved by our model on different tasks of the AVSD challenge test set. Task 1 model configurations use both video and text features while Task 2 model configurations only use text features. First section: train on official, test on official. Second section: train on prototype, test on official. Third section: train on prototype, test on prototype.",
"Table 4: Model ablation Study comparing BLEU-4 on the validation set: The best model makes use of all modalities and the video summary. Applying FiLM to audio and video features consistently outperforms unconditioned feature extraction. Video features (I3D) are more important than audio (VGGish). Combining all multi-modal components (e.g., text, audio and video) helps improve performance only when using FiLM blocks."
],
"file": [
"1-Table1-1.png",
"3-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png"
]
} | [
"Do they train a different training method except from scheduled sampling?"
] | [
[
"1812.07023-Experiments-2"
]
] | [
"Answer with content missing: (list missing) \nScheduled sampling: In our experiments, we found that models trained with scheduled sampling performed better (about 0.004 BLEU-4 on validation set) than the ones trained using teacher-forcing for the AVSD dataset. Hence, we use scheduled sampling for all the results we report in this paper.\n\nYes."
] | 114 |
1906.06448 | Can neural networks understand monotonicity reasoning? | Monotonicity reasoning is one of the important reasoning skills for any intelligent natural language inference (NLI) model in that it requires the ability to capture the interaction between lexical and syntactic structures. Since no test set has been developed for monotonicity reasoning with wide coverage, it is still unclear whether neural models can perform monotonicity reasoning in a proper way. To investigate this issue, we introduce the Monotonicity Entailment Dataset (MED). Performance by state-of-the-art NLI models on the new test set is substantially worse, under 55%, especially on downward reasoning. In addition, analysis using a monotonicity-driven data augmentation method showed that these models might be limited in their generalization ability in upward and downward reasoning. | {
"paragraphs": [
[
"Natural language inference (NLI), also known as recognizing textual entailment (RTE), has been proposed as a benchmark task for natural language understanding. Given a premise $P$ and a hypothesis $H$ , the task is to determine whether the premise semantically entails the hypothesis BIBREF0 . A number of recent works attempt to test and analyze what type of inferences an NLI model may be performing, focusing on various types of lexical inferences BIBREF1 , BIBREF2 , BIBREF3 and logical inferences BIBREF4 , BIBREF5 .",
"Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures. Consider examples in ( \"Introduction\" ) and ( \"Introduction\" ).",
"All [ workers $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ] [joined for a French dinner $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] All workers joined for a dinner All new workers joined for a French dinner Not all [new workers $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] joined for a dinner Not all workers joined for a dinner ",
"A context is upward entailing (shown by [... $\\leavevmode {\\color {red!80!black}\\uparrow }$ ]) that allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where French dinner is replaced by a more general concept dinner. On the other hand, a downward entailing context (shown by [... $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ]) allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where workers is replaced by a more specific concept new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in ( \"Introduction\" )), as witness the fact that ( \"Introduction\" ) entails ( \"Introduction\" ). To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure.",
"For previous datasets containing monotonicity inference problems, FraCaS BIBREF8 and the GLUE diagnostic dataset BIBREF9 are manually-curated datasets for testing a wide range of linguistic phenomena. However, monotonicity problems are limited to very small sizes (FraCaS: 37/346 examples and GLUE: 93/1650 examples). The limited syntactic patterns and vocabularies in previous test sets are obstacles in accurately evaluating NLI models on monotonicity reasoning.",
"To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section \"Dataset\" ). Compared with manual or automatic construction, we can collect naturally-occurring examples by crowdsourcing and well-designed ones from linguistics publications. To enable the evaluation of skills required for monotonicity reasoning, we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning.",
"We measure the performance of state-of-the-art NLI models on monotonicity reasoning and investigate their generalization ability in upward and downward reasoning (Section \"Results and Discussion\" ). The results show that all models trained with SNLI BIBREF4 and MultiNLI BIBREF10 perform worse on downward inferences than on upward inferences.",
"In addition, we analyzed the performance of models trained with an automatically created monotonicity dataset, HELP BIBREF11 . The analysis with monotonicity data augmentation shows that models tend to perform better in the same direction of monotonicity with the training set, while they perform worse in the opposite direction. This indicates that the accuracy on monotonicity reasoning depends solely on the majority direction in the training set, and models might lack the ability to capture the structural relations between monotonicity operators and their arguments."
],
[
"As an example of a monotonicity inference, consider the example with the determiner every in ( \"Monotonicity\" ); here the premise $P$ entails the hypothesis $H$ .",
" $P$ : Every [ $_{\\scriptsize \\mathsf {NP}}$ person $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ] [ $_{\\scriptsize \\mathsf {VP}}$ bought a movie ticket $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] $H$ : Every young person bought a ticket ",
"Every is downward entailing in the first argument ( $\\mathsf {NP}$ ) and upward entailing in the second argument ( $\\mathsf {VP}$ ), and thus the term person can be more specific by adding modifiers (person $\\sqsupseteq $ young person), replacing it with its hyponym (person $\\sqsupseteq $ spectator), or adding conjunction (person $\\sqsupseteq $ person and alien). On the other hand, the term buy a ticket can be more general by removing modifiers (bought a movie ticket $\\sqsubseteq $ bought a ticket), replacing it with its hypernym (bought a movie ticket $\\sqsubseteq $ bought a show ticket), or adding disjunction (bought a movie ticket $\\sqsubseteq $ bought or sold a movie ticket). Table 1 shows determiners modeled as binary operators and their polarities with respect to the first and second arguments.",
"There are various types of downward operators, not limited to determiners (see Table 2 ). As shown in ( \"Monotonicity\" ), if a propositional object is embedded in a downward monotonic context (e.g., when), the polarity of words over its scope can be reversed.",
" $P$ : When [every [ $_{\\scriptsize \\mathsf {NP}}$ young person $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] [ $_{\\scriptsize \\mathsf {VP}}$ bought a ticket $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ]], [that shop was open] $H$ : When [every [ $_{\\scriptsize \\mathsf {NP}}$ person] [ $_{\\scriptsize \\mathsf {VP}}$ bought a movie ticket]], [that shop was open] ",
"Thus, the polarity ( $\\leavevmode {\\color {red!80!black}\\uparrow }$ and $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ), where the replacement with more general (specific) phrases licenses entailment, needs to be determined by the interaction of monotonicity properties and syntactic structures; polarity of each constituent is calculated based on a monotonicity operator of functional expressions (e.g., every, when) and their function-term relations."
],
[
"To create monotonicity inference problems, we should satisfy three requirements: (a) detect the monotonicity operators and their arguments; (b) based on the syntactic structure, induce the polarity of the argument positions; and (c) replace the phrase in the argument position with a more general or specific phrase in natural and various ways (e.g., by using lexical knowledge or logical connectives). For (a) and (b), we first conduct polarity computation on a syntactic structure for each sentence, and then select premises involving upward/downward expressions.",
"For (c), we use crowdsourcing to narrow or broaden the arguments. The motivation for using crowdsourcing is to collect naturally alike monotonicity inference problems that include various expressions. One problem here is that it is unclear how to instruct workers to create monotonicity inference problems without knowledge of natural language syntax and semantics. We must make tasks simple for workers to comprehend and provide sound judgements. Moreover, recent studies BIBREF12 , BIBREF3 , BIBREF13 point out that previous crowdsourced datasets, such as SNLI BIBREF14 and MultiNLI BIBREF10 , include hidden biases. As these previous datasets are motivated by approximated entailments, workers are asked to freely write hypotheses given a premise, which does not strictly restrict them to creating logically complex inferences.",
"Taking these concerns into consideration, we designed two-step tasks to be performed via crowdsourcing for creating a monotonicity test set; (i) a hypothesis creation task and (ii) a validation task. The task (i) is to create a hypothesis by making some polarized part of an original sentence more specific. Instead of writing a complete sentence from scratch, workers are asked to rewrite only a relatively short sentence. By restricting workers to rewrite only a polarized part, we can effectively collect monotonicity inference examples. The task (ii) is to annotate an entailment label for the premise-hypothesis pair generated in (i). Figure 1 summarizes the overview of our human-oriented dataset creation. We used the crowdsourcing platform Figure Eight for both tasks.",
"As a resource, we use declarative sentences with more than five tokens from the Parallel Meaning Bank (PMB) BIBREF15 . The PMB contains syntactically correct sentences annotated with its syntactic category in Combinatory Categorial Grammar (CCG; BIBREF16 , BIBREF16 ) format, which is suitable for our purpose. To get a whole CCG derivation tree, we parse each sentence by the state-of-the-art CCG parser, depccg BIBREF17 . Then, we add a polarity to every constituent of the CCG tree by the polarity computation system ccg2mono BIBREF18 and make the polarized part a blank field.",
"We ran a trial rephrasing task on 500 examples and detected 17 expressions that were too general and thus difficult to rephrase them in a natural way (e.g., every one, no time). We removed examples involving such expressions. To collect more downward inference examples, we select examples involving determiners in Table 1 and downward operators in Table 2 . As a result, we selected 1,485 examples involving expressions having arguments with upward monotonicity and 1,982 examples involving expressions having arguments with downward monotonicity.",
"We present crowdworkers with a sentence whose polarized part is underlined, and ask them to replace the underlined part with more specific phrases in three different ways. In the instructions, we showed examples rephrased in various ways: by adding modifiers, by adding conjunction phrases, and by replacing a word with its hyponyms.",
"Workers were paid US$0.05 for each set of substitutions, and each set was assigned to three workers. To remove low-quality examples, we set the minimum time it should take to complete each set to 200 seconds. The entry in our task was restricted to workers from native speaking English countries. 128 workers contributed to the task, and we created 15,339 hypotheses (7,179 upward examples and 8,160 downward examples).",
"The gold label of each premise-hypothesis pair created in the previous task is automatically determined by monotonicity calculus. That is, a downward inference pair is labeled as entailment, while an upward inference pair is labeled as non-entailment.",
"However, workers sometimes provided some ungrammatical or unnatural sentences such as the case where a rephrased phrase does not satisfy the selectional restrictions (e.g., original: Tom doesn't live in Boston, rephrased: Tom doesn't live in yes), making it difficult to judge their entailment relations. Thus, we performed an annotation task to ensure accurate labeling of gold labels. We asked workers about the entailment relation of each premise-hypothesis pair as well as how natural it is.",
"Worker comprehension of an entailment relation directly affects the quality of inference problems. To avoid worker misunderstandings, we showed workers the following definitions of labels and five examples for each label:",
"entailment: the case where the hypothesis is true under any situation that the premise describes.",
"non-entailment: the case where the hypothesis is not always true under a situation that the premise describes.",
"unnatural: the case where either the premise and/or the hypothesis is ungrammatical or does not make sense.",
"Workers were paid US$0.04 for each question, and each question was assigned to three workers. To collect high-quality annotation results, we imposed ten test questions on each worker, and removed workers who gave more than three wrong answers. We also set the minimum time it should take to complete each question to 200 seconds. 1,237 workers contributed to this task, and we annotated gold labels of 15,339 premise-hypothesis pairs.",
"Table 3 shows the numbers of cases where answers matched gold labels automatically determined by monotonicity calculus. This table shows that there exist inference pairs whose labels are difficult even for humans to determine; there are 3,354 premise-hypothesis pairs whose gold labels as annotated by polarity computations match with those answered by all workers. We selected these naturalistic monotonicity inference pairs for the candidates of the final test set.",
"To make the distribution of gold labels symmetric, we checked these pairs to determine if we can swap the premise and the hypothesis, reverse their gold labels, and create another monotonicity inference pair. In some cases, shown below, the gold label cannot be reversed if we swap the premise and the hypothesis.",
"In ( UID15 ), child and kid are not hyponyms but synonyms, and the premise $P$ and the hypothesis $H$ are paraphrases.",
" $P$ : Tom is no longer a child $H$ : Tom is no longer a kid ",
"These cases are not strict downward inference problems, in the sense that a phrase is not replaced by its hyponym/hypernym.",
"Consider the example ( UID16 ).",
" $P$ : The moon has no atmosphere $H$ : The moon has no atmosphere, and the gravity force is too low ",
"The hypothesis $H$ was created by asking workers to make atmosphere in the premise $P$ more specific. However, the additional phrase and the gravity force is too low does not form constituents with atmosphere. Thus, such examples are not strict downward monotone inferences.",
"In such cases as (a) and (b), we do not swap the premise and the hypothesis. In the end, we collected 4,068 examples from crowdsourced datasets."
],
[
"We also collect monotonicity inference problems from previous manually curated datasets and linguistics publications. The motivation is that previous linguistics publications related to monotonicity reasoning are expected to contain well-designed inference problems, which might be challenging problems for NLI models.",
"We collected 1,184 examples from 11 linguistics publications BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . Regarding previous manually-curated datasets, we collected 93 examples for monotonicity reasoning from the GLUE diagnostic dataset, and 37 single-premise problems from FraCaS.",
"Both the GLUE diagnostic dataset and FraCaS categorize problems by their types of monotonicity reasoning, but we found that each dataset has different classification criteria. Thus, following GLUE, we reclassified problems into three types of monotone reasoning (upward, downward, and non-monotone) by checking if they include (i) the target monotonicity operator in both the premise and the hypothesis and (ii) the phrase replacement in its argument position. In the GLUE diagnostic dataset, there are several problems whose gold labels are contradiction. We regard them as non-entailment in that the premise does not semantically entail the hypothesis."
],
[
"We merged the human-oriented dataset created via crowdsourcing and the linguistics-oriented dataset created from linguistics publications to create the current version of the monotonicity entailment dataset (MED). Table 4 shows some examples from the MED dataset. We can see that our dataset contains various phrase replacements (e.g., conjunction, relative clauses, and comparatives). Table 5 reports the statistics of the MED dataset, including 5,382 premise-hypothesis pairs (1,820 upward examples, 3,270 downward examples, and 292 non-monotone examples). Regarding non-monotone problems, gold labels are always non-entailment, whether a hypothesis is more specific or general than its premise, and thus almost all non-monotone problems are labeled as non-entailment. The size of the word vocabulary in the MED dataset is 4,023, and overlap ratios of vocabulary with previous standard NLI datasets is 95% with MultiNLI and 90% with SNLI.",
"We assigned a set of annotation tags for linguistic phenomena to each example in the test set. These tags allow us to analyze how well models perform on each linguistic phenomenon related to monotonicity reasoning. We defined 6 tags (see Table 4 for examples):",
"lexical knowledge (2,073 examples): inference problems that require lexical relations (i.e., hypernyms, hyponyms, or synonyms)",
"reverse (240 examples): inference problems where a propositional object is embedded in a downward environment more than once",
"conjunction (283 examples): inference problems that include the phrase replacement by adding conjunction (and) to the hypothesis",
"disjunction (254 examples): inference problems that include the phrase replacement by adding disjunction (or) to the hypothesis",
"conditionals (149 examples): inference problems that include conditionals (e.g., if, when, unless) in the hypothesis ",
"negative polarity items (NPIs) (338 examples): inference problems that include NPIs (e.g., any, ever, at all, anything, anyone, anymore, anyhow, anywhere) in the hypothesis"
],
[
"To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI. For other models, we checked the performance trained with SNLI. In agreement with our dataset, we regarded the prediction label contradiction as non-entailment.",
"Table 6 shows that the accuracies of all models were better on upward inferences, in accordance with the reported results of the GLUE leaderboard. The overall accuracy of each model was low. In particular, all models underperformed the majority baseline on downward inferences, despite some models having rich lexical knowledge from a knowledge base (KIM) or pretraining (BERT). This indicates that downward inferences are difficult to perform even with the expansion of lexical knowledge. In addition, it is interesting to see that if a model performed better on upward inferences, it performed worse on downward inferences. We will investigate these results in detail below."
],
[
"To explore whether the performance of models on monotonicity reasoning depends on the training set or the model themselves, we conducted further analysis performed by data augmentation with the automatically generated monotonicity dataset HELP BIBREF11 . HELP contains 36K monotonicity inference examples (7,784 upward examples, 21,192 downward examples, and 1,105 non-monotone examples). The size of the HELP word vocabulary is 15K, and the overlap ratio of vocabulary between HELP and MED is 15.2%.",
"We trained BERT on MultiNLI only and on MultiNLI augmented with HELP, and compared their performance. Following BIBREF3 , we also checked the performance of a hypothesis-only model trained with each training set to test whether our test set contains undesired biases.",
"Table 7 shows that the performance of BERT with the hypothesis-only training set dropped around 10-40% as compared with the one with the premise-hypothesis training set, even if we use the data augmentation technique. This indicates that the MED test set does not allow models to predict from hypotheses alone. Data augmentation by HELP improved the overall accuracy to 71.6%, but there is still room for improvement. In addition, while adding HELP increased the accuracy on downward inferences, it slightly decreased accuracy on upward inferences. The size of downward examples in HELP is much larger than that of upward examples. This might improve accuracy on downward inferences, but might decrease accuracy on upward inferences.",
"To investigate the relationship between accuracy on upward inferences and downward inferences, we checked the performance throughout training BERT with only upward and downward inference examples in HELP (Figure 2 (i), (ii)). These two figures show that, as the size of the upward training set increased, BERT performed better on upward inferences but worse on downward inferences, and vice versa.",
"Figure 2 (iii) shows performance on a different ratio of upward and downward inference training sets. When downward inference examples constitute more than half of the training set, accuracies on upward and downward inferences were reversed. As the ratio of downward inferences increased, BERT performed much worse on upward inferences. This indicates that a training set in one direction (upward or downward entailing) of monotonicity might be harmful to models when learning the opposite direction of monotonicity.",
"Previous work using HELP BIBREF11 reported that the BERT trained with MultiNLI and HELP containing both upward and downward inferences improved accuracy on both directions of monotonicity. MultiNLI rarely comes from downward inferences (see Section \"Discussion\" ), and its size is large enough to be immune to the side-effects of downward inference examples in HELP. This indicates that MultiNLI might act as a buffer against side-effects of the monotonicity-driven data augmentation technique.",
"Table 8 shows the evaluation results by genre. This result shows that inference problems collected from linguistics publications are more challenging than crowdsourced inference problems, even if we add HELP to training sets. As shown in Figure 2 , the change in performance on problems from linguistics publications is milder than that on problems from crowdsourcing. This result also indicates the difficulty of problems from linguistics publications. Regarding non-monotone problems collected via crowdsourcing, there are very few non-monotone problems, so accuracy is 100%. Adding non-monotone problems to our test set is left for future work.",
"Table 9 shows the evaluation results by type of linguistic phenomenon. While accuracy on problems involving NPIs and conditionals was improved on both upward and downward inferences, accuracy on problems involving conjunction and disjunction was improved on only one direction. In addition, it is interesting to see that the change in accuracy on conjunction was opposite to that on disjunction. Downward inference examples involving disjunction are similar to upward inference ones; that is, inferences from a sentence to a shorter sentence are valid (e.g., Not many campers have had a sunburn or caught a cold $\\Rightarrow $ Not many campers have caught a cold). Thus, these results were also caused by addition of downward inference examples. Also, accuracy on problems annotated with reverse tags was apparently better without HELP because all examples are upward inferences embedded in a downward environment twice.",
"Table 9 also shows that accuracy on conditionals was better on upward inferences than that on downward inferences. This indicates that BERT might fail to capture the monotonicity property that conditionals create a downward entailing context in their scope while they create an upward entailing context out of their scope.",
"Regarding lexical knowledge, the data augmentation technique improved the performance much better on downward inferences which do not require lexical knowledge. However, among the 394 problems for which all models provided wrong answers, 244 problems are non-lexical inference problems. This indicates that some non-lexical inference problems are more difficult than lexical inference problems, though accuracy on non-lexical inference problems was better than that on lexical inference problems."
],
[
"One of our findings is that there is a type of downward inferences to which every model fails to provide correct answers. One such example is concerned with the contrast between few and a few. Among 394 problems for which all models provided wrong answers, 148 downward inference problems were problems involving the downward monotonicity operator few such as in the following example:",
" $P$ : Few of the books had typical or marginal readers $H$ : Few of the books had some typical readers We transformed these downward inference problems to upward inference problems in two ways: (i) by replacing the downward operator few with the upward operator a few, and (ii) by removing the downward operator few. We tested BERT using these transformed test sets. The results showed that BERT predicted the same answers for the transformed test sets. This suggests that BERT does not understand the difference between the downward operator few and the upward operator a few.",
"The results of crowdsourcing tasks in Section 3.1.3 showed that some downward inferences can naturally be performed in human reasoning. However, we also found that the MultiNLI training set BIBREF10 , which is one of the dataset created from naturally-occurring texts, contains only 77 downward inference problems, including the following one.",
" $P$ : No racin' on the Range $H$ : No horse racing is allowed on the Range ",
"One possible reason why there are few downward inferences is that certain pragmatic factors can block people to draw a downward inference. For instance, in the case of the inference problem in ( \"Discussion\" ), unless the added disjunct in $H$ , i.e., a small cat with green eyes, is salient in the context, it would be difficult to draw the conclusion $H$ from the premise $P$ .",
" $P$ : I saw a dog $H$ : I saw a dog or a small cat with green eyes ",
"Such pragmatic factors would be one of the reasons why it is difficult to obtain downward inferences in naturally occurring texts."
],
[
"We introduced a large monotonicity entailment dataset, called MED. To illustrate the usefulness of MED, we tested state-of-the-art NLI models, and found that performance on the new test set was substantially worse for all state-of-the-art NLI models. In addition, the accuracy on downward inferences was inversely proportional to the one on upward inferences.",
"An experiment with the data augmentation technique showed that accuracy on upward and downward inferences depends on the proportion of upward and downward inferences in the training set. This indicates that current neural models might have limitations on their generalization ability in monotonicity reasoning. We hope that the MED will be valuable for future research on more advanced models that are capable of monotonicity reasoning in a proper way."
],
[
"This work was partially supported by JST AIP- PRISM Grant Number JPMJCR18Y1, Japan, and JSPS KAKENHI Grant Number JP18H03284, Japan. We thank our three anonymous reviewers for helpful suggestions. We are also grateful to Koki Washio, Masashi Yoshikawa, and Thomas McLachlan for helpful discussion."
]
],
"section_name": [
"Introduction",
"Monotonicity",
"Human-oriented dataset",
"Linguistics-oriented dataset",
"Statistics",
"Baselines",
"Data augmentation for analysis",
"Discussion",
"Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"9ae76059d33b24d99445adb910a6ebc0ebc8a559"
],
"answer": [
{
"evidence": [
"To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section \"Dataset\" ). Compared with manual or automatic construction, we can collect naturally-occurring examples by crowdsourcing and well-designed ones from linguistics publications. To enable the evaluation of skills required for monotonicity reasoning, we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section \"Dataset\" )."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"faa8cc896618919e0565306b4eaf03e0dc18eaa0"
],
"answer": [
{
"evidence": [
"To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI. For other models, we checked the performance trained with SNLI. In agreement with our dataset, we regarded the prediction label contradiction as non-entailment."
],
"extractive_spans": [
"BiMPM",
"ESIM",
"Decomposable Attention Model",
"KIM",
"BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"10cc4f11be85ffb0eaabd7017d5df80c4c9b309f"
],
"answer": [
{
"evidence": [
"A context is upward entailing (shown by [... $\\leavevmode {\\color {red!80!black}\\uparrow }$ ]) that allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where French dinner is replaced by a more general concept dinner. On the other hand, a downward entailing context (shown by [... $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ]) allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where workers is replaced by a more specific concept new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in ( \"Introduction\" )), as witness the fact that ( \"Introduction\" ) entails ( \"Introduction\" ). To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure.",
"All [ workers $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ] [joined for a French dinner $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] All workers joined for a dinner All new workers joined for a French dinner Not all [new workers $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] joined for a dinner Not all workers joined for a dinner"
],
"extractive_spans": [],
"free_form_answer": "Upward reasoning is defined as going from one specific concept to a more general one. Downward reasoning is defined as the opposite, going from a general concept to one that is more specific.",
"highlighted_evidence": [
"A context is upward entailing (shown by [... $\\leavevmode {\\color {red!80!black}\\uparrow }$ ]) that allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where French dinner is replaced by a more general concept dinner. ",
"On the other hand, a downward entailing context (shown by [... $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ]) allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where workers is replaced by a more specific concept new workers.",
"All [ workers $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ] [joined for a French dinner $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] All workers joined for a dinner All new workers joined for a French dinner Not all [new workers $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] joined for a dinner Not all workers joined for a dinner"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"0558e97a25b01a79de670fda145e072bdecc0aed"
],
"answer": [
{
"evidence": [
"Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures. Consider examples in ( \"Introduction\" ) and ( \"Introduction\" )."
],
"extractive_spans": [
"a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures"
],
"free_form_answer": "",
"highlighted_evidence": [
"Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they release MED?",
"What NLI models do they analyze?",
"How do they define upward and downward reasoning?",
"What is monotonicity reasoning?"
],
"question_id": [
"c0a11ba0f6bbb4c69b5a0d4ae9d18e86a4a8f354",
"dfc393ba10ec4af5a17e5957fcbafdffdb1a6443",
"311a7fa62721e82265f4e0689b4adc05f6b74215",
"82bcacad668351c0f81bd841becb2dbf115f000e"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Determiners and their polarities.",
"Table 2: Examples of downward operators.",
"Figure 1: Overview of our human-oriented dataset creation. E: entailment, NE: non-entailment.",
"Table 3: Numbers of cases where answers matched automatically determined gold labels.",
"Table 4: Examples in the MED dataset. Crowd: problems collected through crowdsourcing, Paper: problems collected from linguistics publications, up: upward monotone, down: downward monotone, non: non-monotone, cond: condisionals, rev: reverse, conj: conjunction, disj: disjunction, lex: lexical knowledge, E: entailment, NE: non-entailment.",
"Table 5: Statistics for the MED dataset.",
"Table 6: Accuracies (%) for different models and training datasets.",
"Table 7: Evaluation results on types of monotonicity reasoning. –Hyp: Hypothesis-only model.",
"Figure 2: Accuracy throughout training BERT (i) with only upward examples and (ii) with only downward examples. We checked the accuracy at sizes [50, 100, 200, 500, 1000, 2000, 5000] for each direction. (iii) Performance on different ratios of upward/downward training sets. The total size of the training sets was 5,000 examples.",
"Table 8: Evaluation results by genre. Paper: problems collected from linguistics publications, Crowd: problems via crowdsourcing.",
"Table 9: Evaluation results by linguistic phenomenon type. (non-)Lexical: problems that (do not) require lexical relations. Numbers in parentheses are numbers of problems."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Figure1-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"6-Table6-1.png",
"6-Table7-1.png",
"7-Figure2-1.png",
"7-Table8-1.png",
"8-Table9-1.png"
]
} | [
"How do they define upward and downward reasoning?"
] | [
[
"1906.06448-Introduction-3"
]
] | [
"Upward reasoning is defined as going from one specific concept to a more general one. Downward reasoning is defined as the opposite, going from a general concept to one that is more specific."
] | 116 |
1907.00758 | Synchronising audio and ultrasound by learning cross-modal embeddings | Audiovisual synchronisation is the task of determining the time offset between speech audio and a video recording of the articulators. In child speech therapy, audio and ultrasound videos of the tongue are captured using instruments which rely on hardware to synchronise the two modalities at recording time. Hardware synchronisation can fail in practice, and no mechanism exists to synchronise the signals post hoc. To address this problem, we employ a two-stream neural network which exploits the correlation between the two modalities to find the offset. We train our model on recordings from 69 speakers, and show that it correctly synchronises 82.9% of test utterances from unseen therapy sessions and unseen speakers, thus considerably reducing the number of utterances to be manually synchronised. An analysis of model performance on the test utterances shows that directed phone articulations are more difficult to automatically synchronise compared to utterances containing natural variation in speech such as words, sentences, or conversations. | {
"paragraphs": [
[
"Ultrasound tongue imaging (UTI) is a non-invasive way of observing the vocal tract during speech production BIBREF0 . Instrumental speech therapy relies on capturing ultrasound videos of the patient's tongue simultaneously with their speech audio in order to provide a diagnosis, design treatments, and measure therapy progress BIBREF1 . The two modalities must be correctly synchronised, with a minimum shift of INLINEFORM0 45ms if the audio leads and INLINEFORM1 125ms if the audio lags, based on synchronisation standards for broadcast audiovisual signals BIBREF2 . Errors beyond this range can render the data unusable – indeed, synchronisation errors do occur, resulting in significant wasted effort if not corrected. No mechanism currently exists to automatically correct these errors, and although manual synchronisation is possible in the presence of certain audiovisual cues such as stop consonants BIBREF3 , it is time consuming and tedious.",
"In this work, we exploit the correlation between the two modalities to synchronise them. We utilise a two-stream neural network architecture for the task BIBREF4 , using as our only source of supervision pairs of ultrasound and audio segments which have been automatically generated and labelled as positive (correctly synchronised) or negative (randomly desynchronised); a process known as self-supervision BIBREF5 . We demonstrate how this approach enables us to correctly synchronise the majority of utterances in our test set, and in particular, those exhibiting natural variation in speech.",
"Section SECREF2 reviews existing approaches for audiovisual synchronisation, and describes the challenges specifically associated with UTI data, compared with lip videos for which automatic synchronisation has been previously attempted. Section SECREF3 describes our approach. Section SECREF4 describes the data we use, including data preprocessing and positive and negative sample creation using a self-supervision strategy. Section SECREF5 describes our experiments, followed by an analysis of the results. We conclude with a summary and future directions in Section SECREF6 ."
],
[
"Ultrasound and audio are recorded using separate components, and hardware synchronisation is achieved by translating information from the visual signal into audio at recording time. Specifically, for every ultrasound frame recorded, the ultrasound beam-forming unit releases a pulse signal, which is translated by an external hardware synchroniser into an audio pulse signal and captured by the sound card BIBREF6 , BIBREF7 . Synchronisation is achieved by aligning the ultrasound frames with the audio pulse signal, which is already time-aligned with the speech audio BIBREF8 .",
"Hardware synchronisation can fail for a number of reasons. The synchroniser is an external device which needs to be correctly connected and operated by therapists. Incorrect use can lead to missing the pulse signal, which would cause synchronisation to fail for entire therapy sessions BIBREF9 . Furthermore, low-quality sound cards report an approximate, rather than the exact, sample rate which leads to errors in the offset calculation BIBREF8 . There is currently no recovery mechanism for when synchronisation fails, and to the best of our knowledge, there has been no prior work on automatically correcting the synchronisation error between ultrasound tongue videos and audio. There is, however, some prior work on synchronising lip movement with audio which we describe next."
],
[
"Speech audio is generated by articulatory movement and is therefore fundamentally correlated with other manifestations of this movement, such as lip or tongue videos BIBREF10 . An alternative to the hardware approach is to exploit this correlation to find the offset. Previous approaches have investigated the effects of using different representations and feature extraction techniques on finding dimensions of high correlation BIBREF11 , BIBREF12 , BIBREF13 . More recently, neural networks, which learn features directly from input, have been employed for the task. SyncNet BIBREF4 uses a two-stream neural network and self-supervision to learn cross-modal embeddings, which are then used to synchronise audio with lip videos. It achieves near perfect accuracy ( INLINEFORM0 99 INLINEFORM1 ) using manual evaluation where lip-sync error is not detectable to a human. It has since been extended to use different sample creation methods for self-supervision BIBREF5 , BIBREF14 and different training objectives BIBREF14 . We adopt the original approach BIBREF4 , as it is both simpler and significantly less expensive to train than the more recent variants."
],
[
"Videos of lip movement can be obtained from various sources including TV, films, and YouTube, and are often cropped to include only the lips BIBREF4 . UTI data, on the other hand, is recorded in clinics by trained therapists BIBREF15 . An ultrasound probe placed under the chin of the patient captures the midsaggital view of their oral cavity as they speak. UTI data consists of sequences of 2D matrices of raw ultrasound reflection data, which can be interpreted as greyscale images BIBREF15 . There are several challenges specifically associated with UTI data compared with lip videos, which can potentially lower the performance of models relative to results reported on lip video data. These include:",
"Poor image quality: Ultrasound data is noisy, containing arbitrary high-contrast edges, speckle noise, artefacts, and interruptions to the tongue's surface BIBREF0 , BIBREF16 , BIBREF17 . The oral cavity is not entirely visible, missing the lips, the palate, and the pharyngeal wall, and visually interpreting the data requires specialised training. In contrast, videos of lip movement are of much higher quality and suffer from none of these issues.",
"Probe placement variation: Surfaces that are orthogonal to the ultrasound beam image better than those at an angle. Small shifts in probe placement during recording lead to high variation between otherwise similar tongue shapes BIBREF0 , BIBREF18 , BIBREF17 . In contrast, while the scaling and rotations of lip videos lead to variation, they do not lead to a degradation in image quality.",
"Inter-speaker variation: Age and physiology affect the quality of ultrasound data, and subjects with smaller vocal tracts and less tissue fat image better BIBREF0 , BIBREF17 . Dryness in the mouth, as a result of nervousness during speech therapy, leads to poor imaging. While inter-speaker variation is expected in lip videos, again, the variation does not lead to quality degradation.",
"Limited amount of data: Existing UTI datasets are considerably smaller than lip movement datasets. Consider for example VoxCeleb and VoxCeleb2 used to train SyncNet BIBREF4 , BIBREF14 , which together contain 1 million utterances from 7,363 identities BIBREF19 , BIBREF20 . In contrast, the UltraSuite repository (used in this work) contains 13,815 spoken utterances from 86 identities.",
"Uncorrelated segments: Speech therapy data contains interactions between the therapist and patient. The audio therefore contains speech from both speakers, while the ultrasound captures only the patient's tongue BIBREF15 . As a result, parts of the recordings will consist of completely uncorrelated audio and ultrasound. This issue is similar to that of dubbed voices in lip videos BIBREF4 , but is more prevalent in speech therapy data."
],
[
"We adopt the approach in BIBREF4 , modifying it to synchronise audio with UTI data. Our model, UltraSync, consists of two streams: the first takes as input a short segment of ultrasound and the second takes as input the corresponding audio. Both inputs are high-dimensional and are of different sizes. The objective is to learn a mapping from the inputs to a pair of low-dimensional vectors of the same length, such that the Euclidean distance between the two vectors is small when they correlate and large otherwise BIBREF21 , BIBREF22 . This model can be viewed as an extension of a siamese neural network BIBREF23 but with two asymmetrical streams and no shared parameters. Figure FIGREF1 illustrates the main architecture. The visual data INLINEFORM0 (ultrasound) and audio data INLINEFORM1 (MFCC), which have different shapes, are mapped to low dimensional embeddings INLINEFORM2 (visual) and INLINEFORM3 (audio) of the same size: DISPLAYFORM0 ",
"The model is trained using a contrastive loss function BIBREF21 , BIBREF22 , INLINEFORM0 , which minimises the Euclidean distance INLINEFORM1 between INLINEFORM2 and INLINEFORM3 for positive pairs ( INLINEFORM4 ), and maximises it for negative pairs ( INLINEFORM5 ), for a number of training samples INLINEFORM6 : DISPLAYFORM0 ",
"Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. The candidate set is independent of the model, and is chosen based on task knowledge (Section SECREF5 )."
],
[
"For our experiments, we select a dataset whose utterances have been correctly synchronised at recording time. This allows us to control how the model is trained and verify its performance using ground truth synchronisation offsets. We use UltraSuite: a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 . We used all three datasets from the repository: UXTD (recorded with typically developing children), and UXSSD and UPX (recorded with children with speech sound disorders). In total, the dataset contains 13,815 spoken utterances from 86 speakers, corresponding to 35.9 hours of recordings. The utterances have been categorised by the type of task the child was given, and are labelled as: Words (A), Non-words (B), Sentence (C), Articulatory (D), Non-speech (E), or Conversations (F). See BIBREF15 for details.",
"Each utterance consists of 3 files: audio, ultrasound, and parameter. The audio file is a RIFF wave file, sampled at 22.05 KHz, containing the speech of the child and therapist. The ultrasound file consists of a sequence of ultrasound frames capturing the midsagittal view of the child's tongue. A single ultrasound frame is recorded as a 2D matrix where each column represents the ultrasound reflection intensities along a single scan line. Each ultrasound frame consists of 63 scan lines of 412 data points each, and is sampled at a rate of INLINEFORM0 121.5 fps. Raw ultrasound frames can be visualised as greyscale images and can thus be interpreted as videos. The parameter file contains the synchronisation offset value (in milliseconds), determined using hardware synchronisation at recording time and confirmed by the therapists to be correct for this dataset."
],
[
"First, we exclude utterances of type “Non-speech\" (E) from our training data (and statistics). These are coughs recorded to obtain additional tongue shapes, or swallowing motions recorded to capture a trace of the hard palate. Both of these rarely contain audible content and are therefore not relevant to our task. Next, we apply the offset, which should be positive if the audio leads and negative if the audio lags. In this dataset, the offset is always positive. We apply it by cropping the leading audio and trimming the end of the longer signal to match the duration.",
"To process the ultrasound more efficiently, we first reduce the frame rate from INLINEFORM0 121.5 fps to INLINEFORM1 24.3 fps by retaining 1 out of every 5 frames. We then downsample by a factor of (1, 3), shrinking the frame size from 63x412 to 63x138 using max pixel value. This retains the number of ultrasound vectors (63), but reduces the number of pixels per vector (from 412 to 138).",
"The final pre-preprocessing step is to remove empty regions. UltraSuite was previously anonymised by zero-ing segments of audio which contained personally identifiable information. As a preprocessing step, we remove the zero regions from audio and corresponding ultrasound. We additionally experimented with removing regions of silence using voice activity detection, but obtained a higher performance by retaining them."
],
[
"To train our model we need positive and negative training pairs. The model ingests short clips from each modality of INLINEFORM0 200ms long, calculated as INLINEFORM1 , where INLINEFORM2 is the time window, INLINEFORM3 is the number of ultrasound frames per window (5 in our case), and INLINEFORM4 is the ultrasound frame rate of the utterance ( INLINEFORM5 24.3 fps). For each recording, we split the ultrasound into non-overlapping windows of 5 frames each. We extract MFCC features (13 cepstral coefficients) from the audio using a window length of INLINEFORM6 20ms, calculated as INLINEFORM7 , and a step size of INLINEFORM8 10ms, calculated as INLINEFORM9 . This give us the input sizes shown in Figure FIGREF1 .",
"Positive samples are pairs of ultrasound windows and the corresponding MFCC frames. To create negative samples, we randomise pairings of ultrasound windows to MFCC frames within the same utterance, generating as many negative as positive samples to achieve a balanced dataset. We obtain 243,764 samples for UXTD (13.5hrs), 333,526 for UXSSD (18.5hrs), and 572,078 for UPX (31.8 hrs), or a total 1,149,368 samples (63.9hrs) which we divide into training, validation and test sets."
],
[
"We aim to test whether our model generalises to data from new speakers, and to data from new sessions recorded with known speakers. To simulate this, we select a group of speakers from each dataset, and hold out all of their data either for validation or for testing. Additionally, we hold out one entire session from each of the remaining speakers, and use the rest of their data for training. We aim to reserve approximately 80% of the created samples for training, 10% for validation, and 10% for testing, and select speakers and sessions on this basis.",
"Each speaker in UXTD recorded 1 session, but sessions are of different durations. We reserve 45 speakers for training, 5 for validation, and 8 for testing. UXSSD and UPX contain fewer speakers, but each recorded multiple sessions. We hold out 1 speaker for validation and 1 for testing from each of the two datasets. We also hold out a session from the first half of the remaining speakers for validation, and a session from the second half of the remaining speakers for testing. This selection process results in 909,858 (pooled) samples for training (50.5hrs), 128,414 for validation (7.1hrs) and 111,096 for testing (6.2hrs). From the training set, we create shuffled batches which are balanced in the number of positive and negative samples."
],
[
"We select the hyper-parameters of our model empirically by tuning on the validation set (Table ). Hyper-parameter exploration is guided by BIBREF24 . We train our model using the Adam optimiser BIBREF25 with a learning rate of 0.001, a batch size of 64 samples, and for 20 epochs. We implement learning rate scheduling which reduces the learning rate by a factor of 0.1 when the validation loss plateaus for 2 epochs.",
"Upon convergence, the model achieves 0.193 training loss, 0.215 validation loss, and 0.213 test loss. By placing a threshold of 0.5 on predicted distances, the model achieves 69.9% binary classification accuracy on training samples, 64.7% on validation samples, and 65.3% on test samples.",
"Synchronisation offset prediction: Section SECREF3 described briefly how to use our model to predict the synchronisation offset for test utterances. To obtain a discretised set of offset candidates, we retrieve the true offsets of the training utterances, and find that they fall in the range [0, 179] ms. We discretise this range taking 45ms steps and rendering 40 candidate values (45ms is the smaller of the absolute values of the detectability boundaries, INLINEFORM0 125 and INLINEFORM1 45 ms). We bin the true offsets in the candidate set and discard empty bins, reducing the set from 40 to 24 values. We consider all 24 candidates for each test utterance. We do this by aligning the two signals according to the given candidate, then producing the non-overlapping windows of ultrasound and MFCC pairs, as we did when preparing the data. We then use our model to predict the Euclidean distance for each pair, and average the distances. Finally, we select the offset with the smallest average distance as our prediction.",
"Evaluation: Because the true offsets are known, we evaluate the performance of our model by computing the discrepancy between the predicted and the true offset for each utterance. If the discrepancy falls within the minimum detectability range ( INLINEFORM0 125 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 45) then the prediction is correct. Random prediction (averaged over 1000 runs) yields 14.6% accuracy with a mean and standard deviation discrepancy of 328 INLINEFORM5 518ms. We achieve 82.9% accuracy with a mean and standard deviation discrepancy of 32 INLINEFORM6 223ms. SyncNet reports INLINEFORM7 99% accuracy on lip video synchronisation using a manual evaluation where the lip error is not detectable to a human observer BIBREF4 . However, we argue that our data is more challenging (Section SECREF4 ).",
"Analysis: We analyse the performance of our model across different conditions. Table shows the model accuracy broken down by utterance type. The model achieves 91.2% accuracy on utterances containing words, sentences, and conversations, all of which exhibit natural variation in speech. The model is less successful with Articulatory utterances, which contain isolated phones occurring once or repeated (e.g., “sh sh sh\"). Such utterances contain subtle tongue movement, making it more challenging to correlate the visual signal with the audio. And indeed, the model finds the correct offset for only 55.9% of Articulatory utterances. A further analysis shows that 84.4% (N INLINEFORM0 90) of stop consonants (e.g., “t”), which are relied upon by therapists as the most salient audiovisual synchronisation cues BIBREF3 , are correctly synchronised by our model, compared to 48.6% (N INLINEFORM1 140) of vowels, which contain less distinct movement and are also more challenging for therapists to synchronise.",
"Table shows accuracy broken down by test set. The model performs better on test sets containing entirely new speakers compared with test sets containing new sessions from previously seen speakers. This is contrary to expectation but could be due to the UTI challenges (described in Section SECREF4 ) affecting different subsets to different degrees. Table shows that the model performs considerably worse on UXTD compared to other test sets (64.8% accuracy). However, a further breakdown of the results in Table by test set and utterance type explains this poor performance; the majority of UXTD utterances (71%) are Articulatory utterances which the model struggles to correctly synchronise. In fact, for other utterance types (where there is a large enough sample, such as Words) performance on UXTD is on par with other test sets."
],
[
"We have shown how a two-stream neural network originally designed to synchronise lip videos with audio can be used to synchronise UTI data with audio. Our model exploits the correlation between the modalities to learn cross-model embeddings which are used to find the synchronisation offset. It generalises well to held-out data, allowing us to correctly synchronise the majority of test utterances. The model is best-suited to utterances which contain natural variation in speech and least suited to those containing isolated phones, with the exception of stop consonants. Future directions include integrating the model and synchronisation offset prediction process into speech therapy software BIBREF6 , BIBREF7 , and using the learned embeddings for other tasks such as active speaker detection BIBREF4 ."
],
[
"Supported by EPSRC Healthcare Partnerships Programme grant number EP/P02338X/1 (Ultrax2020)."
]
],
"section_name": [
"Introduction",
"Background",
"Audiovisual synchronisation for lip videos",
"Lip videos vs. ultrasound tongue imaging (UTI)",
"Model",
"Data",
"Preparing the data",
"Creating samples using a self-supervision strategy",
"Dividing samples for training, validation and testing",
"Experiments",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"89cd66698512e65e6d240af77f3fc829fe373b2a"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"c8d789113074b382993be027d1efa7e2d6889f00"
],
"answer": [
{
"evidence": [
"For our experiments, we select a dataset whose utterances have been correctly synchronised at recording time. This allows us to control how the model is trained and verify its performance using ground truth synchronisation offsets. We use UltraSuite: a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 . We used all three datasets from the repository: UXTD (recorded with typically developing children), and UXSSD and UPX (recorded with children with speech sound disorders). In total, the dataset contains 13,815 spoken utterances from 86 speakers, corresponding to 35.9 hours of recordings. The utterances have been categorised by the type of task the child was given, and are labelled as: Words (A), Non-words (B), Sentence (C), Articulatory (D), Non-speech (E), or Conversations (F). See BIBREF15 for details."
],
"extractive_spans": [],
"free_form_answer": "Use an existing one",
"highlighted_evidence": [
"We use UltraSuite: a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 . We used all three datasets from the repository: UXTD (recorded with typically developing children), and UXSSD and UPX (recorded with children with speech sound disorders). In total, the dataset contains 13,815 spoken utterances from 86 speakers, corresponding to 35.9 hours of recordings. The utterances have been categorised by the type of task the child was given, and are labelled as: Words (A), Non-words (B), Sentence (C), Articulatory (D), Non-speech (E), or Conversations (F). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2547291c6f433f23fd04b97d9bf6228d47f28c18"
],
"answer": [
{
"evidence": [
"Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. The candidate set is independent of the model, and is chosen based on task knowledge (Section SECREF5 )."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"05c266e2b0ab0b45fca7c0b09534b1870aa75efd"
],
"answer": [
{
"evidence": [
"We adopt the approach in BIBREF4 , modifying it to synchronise audio with UTI data. Our model, UltraSync, consists of two streams: the first takes as input a short segment of ultrasound and the second takes as input the corresponding audio. Both inputs are high-dimensional and are of different sizes. The objective is to learn a mapping from the inputs to a pair of low-dimensional vectors of the same length, such that the Euclidean distance between the two vectors is small when they correlate and large otherwise BIBREF21 , BIBREF22 . This model can be viewed as an extension of a siamese neural network BIBREF23 but with two asymmetrical streams and no shared parameters. Figure FIGREF1 illustrates the main architecture. The visual data INLINEFORM0 (ultrasound) and audio data INLINEFORM1 (MFCC), which have different shapes, are mapped to low dimensional embeddings INLINEFORM2 (visual) and INLINEFORM3 (audio) of the same size: DISPLAYFORM0",
"FLOAT SELECTED: Figure 1: UltraSync maps high dimensional inputs to low dimensional vectors using a contrastive loss function, such that the Euclidean distance is small between vectors from positive pairs and large otherwise. Inputs span '200ms: 5 consecutive raw ultrasound frames on one stream and 20 frames of the corresponding MFCC features on the other."
],
"extractive_spans": [],
"free_form_answer": "CNN",
"highlighted_evidence": [
"Figure FIGREF1 illustrates the main architecture. ",
"FLOAT SELECTED: Figure 1: UltraSync maps high dimensional inputs to low dimensional vectors using a contrastive loss function, such that the Euclidean distance is small between vectors from positive pairs and large otherwise. Inputs span '200ms: 5 consecutive raw ultrasound frames on one stream and 20 frames of the corresponding MFCC features on the other."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they compare their neural network against any other model?",
"Do they annotate their own dataset or use an existing one?",
"Does their neural network predict a single offset in a recording?",
"What kind of neural network architecture do they use?"
],
"question_id": [
"73d657d6faed0c11c65b1ab60e553db57f4971ca",
"9ef182b61461d0d8b6feb1d6174796ccde290a15",
"f6f8054f327a2c084a73faca16cf24a180c094ae",
"b8f711179a468fec9a0d8a961fb0f51894af4b31"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: UltraSync maps high dimensional inputs to low dimensional vectors using a contrastive loss function, such that the Euclidean distance is small between vectors from positive pairs and large otherwise. Inputs span '200ms: 5 consecutive raw ultrasound frames on one stream and 20 frames of the corresponding MFCC features on the other.",
"Table 1: Each stream has 3 convolutional layers followed by 2 fully-connected layers. Fully connected layers have 64 units each. For convolutional layers, we specify the number of filters and their receptive field size as “num×size×size” followed by the max-pooling downsampling factor. Each layer is followed by batch-normalisation then ReLU activation. Max-pooling is applied after the activation function.",
"Table 2: Model accuracy per test set and utterance type. Performance is consistent across test sets for Words (A) where the sample sizes are large, and less consistent for types where the sample sizes are small. 71% of UXTD utterances are Articulatory (D), which explains the low performance on this test set (64.8% in Table 4). In contrast, performance on UXTD Words (A) is comparable to other test sets.",
"Table 3: Model accuracy per utterance type, where N is the number of utterances. Performance is best on utterances containing natural variation in speech, such as Words (A) and Sentences (C). Non-words (B) and Conversations (F) also exhibit this variation, but due to smaller sample sizes the lower percentages are not representative. Performance is lowest on Articulatory utterances (D), which contain isolated phones. The mean and standard deviation of the discrepancy between the prediction and the true offset are also shown in milliseconds.",
"Table 4: Model accuracy per test set. Contrary to expectation, performance is better on test sets containing new speakers than on test sets containing new sessions from known speakers. The performance on UXTD is considerably lower than other test sets, due to it containing a large number of Articulatory utterances, which are difficult to synchronise (see Tables 3 and 2)."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png"
]
} | [
"Do they annotate their own dataset or use an existing one?",
"What kind of neural network architecture do they use?"
] | [
[
"1907.00758-Data-0"
],
[
"1907.00758-1-Figure1-1.png"
]
] | [
"Use an existing one",
"CNN"
] | 118 |
1701.02877 | Generalisation in Named Entity Recognition: A Quantitative Analysis | Named Entity Recognition (NER) is a key NLP task, which is all the more challenging on Web and user-generated content with their diverse and continuously changing language. This paper aims to quantify how this diversity impacts state-of-the-art NER methods, by measuring named entity (NE) and context variability, feature sparsity, and their effects on precision and recall. In particular, our findings indicate that NER approaches struggle to generalise in diverse genres with limited training data. Unseen NEs, in particular, play an important role, which have a higher incidence in diverse genres such as social media than in more regular genres such as newswire. Coupled with a higher incidence of unseen features more generally and the lack of large training corpora, this leads to significantly lower F1 scores for diverse genres as compared to more regular ones. We also find that leading systems rely heavily on surface forms found in training data, having problems generalising beyond these, and offer explanations for this observation. | {
"paragraphs": [
[
"Named entity recognition and classification (NERC, short NER), the task of recognising and assigning a class to mentions of proper names (named entities, NEs) in text, has attracted many years of research BIBREF0 , BIBREF1 , analyses BIBREF2 , starting from the first MUC challenge in 1995 BIBREF3 . Recognising entities is key to many applications, including text summarisation BIBREF4 , search BIBREF5 , the semantic web BIBREF6 , topic modelling BIBREF7 , and machine translation BIBREF8 , BIBREF9 .",
"As NER is being applied to increasingly diverse and challenging text genres BIBREF10 , BIBREF11 , BIBREF12 , this has lead to a noisier, sparser feature space, which in turn requires regularisation BIBREF13 and the avoidance of overfitting. This has been the case even for large corpora all of the same genre and with the same entity classification scheme, such as ACE BIBREF14 . Recall, in particular, has been a persistent problem, as named entities often seem to have unusual surface forms, e.g. unusual character sequences for the given language (e.g. Szeged in an English-language document) or words that individually are typically not NEs, unless they are combined together (e.g. the White House).",
"Indeed, the move from ACE and MUC to broader kinds of corpora has presented existing NER systems and resources with a great deal of difficulty BIBREF15 , which some researchers have tried to address through domain adaptation, specifically with entity recognition in mind BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . However, more recent performance comparisons of NER methods over different corpora showed that older tools tend to simply fail to adapt, even when given a fair amount of in-domain data and resources BIBREF21 , BIBREF11 . Simultaneously, the value of NER in non-newswire data BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 has rocketed: for example, social media now provides us with a sample of all human discourse, unmolested by editors, publishing guidelines and the like, and all in digital format – leading to, for example, whole new fields of research opening in computational social science BIBREF26 , BIBREF27 , BIBREF28 .",
"The prevailing assumption has been that this lower NER performance is due to domain differences arising from using newswire (NW) as training data, as well as from the irregular, noisy nature of new media (e.g. BIBREF21 ). Existing studies BIBREF11 further suggest that named entity diversity, discrepancy between named entities in the training set and the test set (entity drift over time in particular), and diverse context, are the likely reasons behind the significantly lower NER performance on social media corpora, as compared to newswire.",
"No prior studies, however, have investigated these hypotheses quantitatively. For example, it is not yet established whether this performance drop is really due to a higher proportion of unseen NEs in the social media, or is it instead due to NEs being situated in different kinds of linguistic context.",
"Accordingly, the contributions of this paper lie in investigating the following open research questions:",
"In particular, the paper carries out a comparative analyses of the performance of several different approaches to statistical NER over multiple text genres, with varying NE and lexical diversity. In line with prior analyses of NER performance BIBREF2 , BIBREF11 , we carry out corpus analysis and introduce briefly the NER methods used for experimentation. Unlike prior efforts, however, our main objectives are to uncover the impact of NE diversity and context diversity on performance (measured primarily by F1 score), and also to study the relationship between OOV NEs and features and F1. See Section \"Experiments\" for details.",
"To ensure representativeness and comprehensiveness, our experimental findings are based on key benchmark NER corpora spanning multiple genres, time periods, and corpus annotation methodologies and guidelines. As detailed in Section \"Datasets\" , the corpora studied are OntoNotes BIBREF29 , ACE BIBREF30 , MUC 7 BIBREF31 , the Ritter NER corpus BIBREF21 , the MSM 2013 corpus BIBREF32 , and the UMBC Twitter corpus BIBREF33 . To eliminate potential bias from the choice of statistical NER approach, experiments are carried out with three differently-principled NER approaches, namely Stanford NER BIBREF34 , SENNA BIBREF35 and CRFSuite BIBREF36 (see Section \"NER Models and Features\" for details)."
],
[
"Since the goal of this study is to compare NER performance on corpora from diverse domains and genres, seven benchmark NER corpora are included, spanning newswire, broadcast conversation, Web content, and social media (see Table 1 for details). These datasets were chosen such that they have been annotated with the same or very similar entity classes, in particular, names of people, locations, and organisations. Thus corpora including only domain-specific entities (e.g. biomedical corpora) were excluded. The choice of corpora was also motivated by their chronological age; we wanted to ensure a good temporal spread, in order to study possible effects of entity drift over time.",
"A note is required about terminology. This paper refers to text genre and also text domain. These are two dimensions by which a document or corpus can be described. Genre here accounts the general characteristics of the text, measurable with things like register, tone, reading ease, sentence length, vocabulary and so on. Domain describes the dominant subject matter of text, which might give specialised vocabulary or specific, unusal word senses. For example, “broadcast news\" is a genre, describing the manner of use of language, whereas “financial text\" or “popular culture\" are domains, describing the topic. One notable exception to this terminology is social media, which tends to be a blend of myriad domains and genres, with huge variation in both these dimensions BIBREF38 , BIBREF39 ; for simplicity, we also refer to this as a genre here.",
"In chronological order, the first corpus included here is MUC 7, which is the last of the MUC challenges BIBREF31 . This is an important corpus, since the Message Understanding Conference (MUC) was the first one to introduce the NER task in 1995 BIBREF3 , with focus on recognising persons, locations and organisations in newswire text.",
"A subsequent evaluation campaign was the CoNLL 2003 NER shared task BIBREF40 , which created gold standard data for newswire in Spanish, Dutch, English and German. The corpus of this evaluation effort is now one of the most popular gold standards for NER, with new NER approaches and methods often reporting performance on that.",
"Later evaluation campaigns began addressing NER for genres other than newswire, specifically ACE BIBREF30 and OntoNotes BIBREF29 . Both of those contain subcorpora in several genres, namely newswire, broadcast news, broadcast conversation, weblogs, and conversational telephone speech. ACE, in addition, contains a subcorpus with usenet newsgroups. Like CoNLL 2003, the OntoNotes corpus is also a popular benchmark dataset for NER. The languages covered are English, Arabic and Chinese. A further difference between the ACE and OntoNotes corpora on one hand, and CoNLL and MUC on the other, is that they contain annotations not only for NER, but also for other tasks such as coreference resolution, relation and event extraction and word sense disambiguation. In this paper, however, we restrict ourselves purely to the English NER annotations, for consistency across datasets. The ACE corpus contains HEAD as well as EXTENT annotations for NE spans. For our experiments we use the EXTENT tags.",
"With the emergence of social media, studying NER performance on this genre gained momentum. So far, there have been no big evaluation efforts, such as ACE and OntoNotes, resulting in substantial amounts of gold standard data. Instead, benchmark corpora were created as part of smaller challenges or individual projects. The first such corpus is the UMBC corpus for Twitter NER BIBREF33 , where researchers used crowdsourcing to obtain annotations for persons, locations and organisations. A further Twitter NER corpus was created by BIBREF21 , which, in contrast to other corpora, contains more fine-grained classes defined by the Freebase schema BIBREF41 . Next, the Making Sense of Microposts initiative BIBREF32 (MSM) provides single annotated data for named entity recognition on Twitter for persons, locations, organisations and miscellaneous. MSM initiatives from 2014 onwards in addition feature a named entity linking task, but since we only focus on NER here, we use the 2013 corpus.",
"These corpora are diverse not only in terms of genres and time periods covered, but also in terms of NE classes and their definitions. In particular, the ACE and OntoNotes corpora try to model entity metonymy by introducing facilities and geo-political entities (GPEs). Since the rest of the benchmark datasets do not make this distinction, metonymous entities are mapped to a more common entity class (see below).",
"In order to ensure consistency across corpora, only Person (PER), Location (LOC) and Organisation (ORG) are used in our experiments, and other NE classes are mapped to O (no NE). For the Ritter corpus, the 10 entity classes are collapsed to three as in BIBREF21 . For the ACE and OntoNotes corpora, the following mapping is used: PERSON $\\rightarrow $ PER; LOCATION, FACILITY, GPE $\\rightarrow $ LOC; ORGANIZATION $\\rightarrow $ ORG; all other classes $\\rightarrow $ O.",
"Tokens are annotated with BIO sequence tags, indicating that they are the beginning (B) or inside (I) of NE mentions, or outside of NE mentions (O). For the Ritter and ACE 2005 corpora, separate training and test corpora are not publicly available, so we randomly sample 1/3 for testing and use the rest for training. The resulting training and testing data sizes measured in number of NEs are listed in Table 2 . Separate models are then trained on the training parts of each corpus and evaluated on the development (if available) and test parts of the same corpus. If development parts are available, as they are for CoNLL (CoNLL Test A) and MUC (MUC 7 Dev), they are not merged with the training corpora for testing, as it was permitted to do in the context of those evaluation challenges.",
"[t]",
" P, R and F1 of NERC with different models evaluated on different testing corpora, trained on corpora normalised by size",
"Table 1 shows which genres the different corpora belong to, the number of NEs and the proportions of NE classes per corpus. Sizes of NER corpora have increased over time, from MUC to OntoNotes.",
"Further, the class distribution varies between corpora: while the CoNLL corpus is very balanced and contains about equal numbers of PER, LOC and ORG NEs, other corpora are not. The least balanced corpus is the MSM 2013 Test corpus, which contains 98 LOC NEs, but 1110 PER NEs. This makes it difficult to compare NER performance here, since performance partly depends on training data size. Since comparing NER performance as such is not the goal of this paper, we will illustrate the impact of training data size by using learning curves in the next section; illustrate NERC performance on trained corpora normalised by size in Table UID9 ; and then only use the original training data size for subsequent experiments.",
"In order to compare corpus diversity across genres, we measure NE and token/type diversity (following e.g. BIBREF2 ). Note that types are the unique tokens, so the ratio can be understood as ratio of total tokens to unique ones. Table 4 shows the ratios between the number of NEs and the number of unique NEs per corpus, while Table 5 reports the token/type ratios. The lower those ratios are, the more diverse a corpus is. While token/type ratios also include tokens which are NEs, they are a good measure of broader linguistic diversity.",
"Aside from these metrics, there are other factors which contribute to corpus diversity, including how big a corpus is and how well sampled it is, e.g. if a corpus is only about one story, it should not be surprising to see a high token/type ratio. Therefore, by experimenting on multiple corpora, from different genres and created through different methodologies, we aim to encompass these other aspects of corpus diversity.",
"Since the original NE and token/type ratios do not account for corpus size, Tables 5 and 4 present also the normalised ratios. For those, a number of tokens equivalent to those in the corpus, e.g. 7037 for UMBC (Table 5 ) or, respectively, a number of NEs equivalent to those in the corpus (506 for UMBC) are selected (Table 4 ).",
"An easy choice of sampling method would be to sample tokens and NEs randomly. However, this would not reflect the composition of corpora appropriately. Corpora consist of several documents, tweets or blog entries, which are likely to repeat the words or NEs since they are about one story. The difference between bigger and smaller corpora is then that bigger corpora consist of more of those documents, tweets, blog entries, interviews, etc. Therefore, when we downsample, we take the first $n$ tokens for the token/type ratios or the first $n$ NEs for the NEs/Unique NEs ratios.",
"Looking at the normalised diversity metrics, the lowest NE/Unique NE ratios $<= 1.5$ (in bold, Table 4 ) are observed on the Twitter and CoNLL Test corpora. Seeing this for Twitter is not surprising since one would expect noise in social media text (e.g. spelling variations or mistakes) to also have an impact on how often the same NEs are seen. Observing this in the latter, though, is less intuitive and suggests that the CoNLL corpora are well balanced in terms of stories. Low NE/Unique ratios ( $<= 1.7$ ) can also be observed for ACE WL, ACE UN and OntoNotes TC. Similar to social media text, content from weblogs, usenet dicussions and telephone conversations also contains a larger amount of noise compared to the traditionally-studied newswire genre, so this is not a surprising result. Corpora bearing high NE/Unique NE ratios ( $> 2.5$ ) are ACE CTS, OntoNotes MZ and OntoNotes BN. These results are also not surprising. The telephone conversations in ACE CTS are all about the same story, and newswire and broadcast news tend to contain longer stories (reducing variety in any fixed-size set) and are more regular due to editing.",
"The token/type ratios reflect similar trends (Table 5 ). Low token/type ratios $<= 2.8$ (in bold) are observed for the Twitter corpora (Ritter and UMBC), as well as for the CoNLL Test corpus. Token/type ratios are also low ( $<= 3.2$ ) for CoNLL Train and ACE WL. Interestingly, ACE UN and MSM Train and Test do not have low token/type ratios although they have low NE/Unique ratios. That is, many diverse persons, locations and organisations are mentioned in those corpora, but similar context vocabulary is used. Token/type ratios are high ( $>= 4.4$ ) for MUC7 Dev, ACE BC, ACE CTS, ACE UN and OntoNotes TC. Telephone conversations (TC) having high token/type ratios can be attributed to the high amount filler words (e.g. “uh”, “you know”). NE corpora are generally expected to have regular language use – for ACE, at least, in this instance.",
"Furthermore, it is worth pointing out that, especially for the larger corpora (e.g. OntoNotes NW), size normalisation makes a big difference. The normalised NE/Unique NE ratios drop by almost a half compared to the un-normalised ratios, and normalised Token/Type ratios drop by up to 85%. This strengthens our argument for size normalisation and also poses the question of low NERC performance for diverse genres being mostly due to the lack of large training corpora. This is examined in Section \"RQ2: NER performance in Different Genres\" .",
"Lastly, Table 6 reports tag density (percentage of tokens tagged as part of a NE), which is another useful metric of corpus diversity that can be interpreted as the information density of a corpus. What can be observed here is that the NW corpora have the highest tag density and generally tend to have higher tag density than corpora of other genres; that is, newswire bears a lot of entities. Corpora with especially low tag density $<= 0.06$ (in bold) are the TC corpora, Ritter, OntoNotes WB, ACE UN, ACE BN and ACE BC. As already mentioned, conversational corpora, to which ACE BC also belong, tend to have many filler words, thus it is not surprising that they have a low tag density. There are only minor differences between the tag density and the normalised tag density, since corpus size as such does not impact tag density."
],
[
"To avoid system-specific bias in our experiments, three widely-used supervised statistical approaches to NER are included: Stanford NER, SENNA, and CRFSuite. These systems each have contrasting notable attributes.",
"Stanford NER BIBREF34 is the most popular of the three, deployed widely in both research and commerce. The system has been developed in terms of both generalising the underlying technology and also specific additions for certain languages. The majority of openly-available additions to Stanford NER, in terms of models, gazetteers, prefix/suffix handling and so on, have been created for newswire-style text. Named entity recognition and classification is modelled as a sequence labelling task with first-order conditional random fields (CRFs) BIBREF43 .",
"SENNA BIBREF35 is a more recent system for named entity extraction and other NLP tasks. Using word representations and deep learning with deep convolutional neural networks, the general principle for SENNA is to avoid task-specific engineering while also doing well on multiple benchmarks. The approach taken to fit these desiderata is to use representations induced from large unlabelled datasets, including LM2 (introduced in the paper itself) and Brown clusters BIBREF44 , BIBREF45 . The outcome is a flexible system that is readily adaptable, given training data. Although the system is more flexible in general, it relies on learning language models from unlabelled data, which might take a long time to gather and retrain. For the setup in BIBREF35 language models are trained for seven weeks on the English Wikipedia, Reuters RCV1 BIBREF46 and parts of the Wall Street Journal, and results are reported over the CoNLL 2003 NER dataset. Reuters RCV1 is chosen as unlabelled data because the English CoNLL 2003 corpus is created from the Reuters RCV1 corpus. For this paper, we use the original language models distributed with SENNA and evaluate SENNA with the DeepNL framework BIBREF47 . As such, it is to some degree also biased towards the CoNLL 2003 benchmark data.",
"Finally, we use the classical NER approach from CRFsuite BIBREF36 , which also uses first-order CRFs. This frames NER as a structured sequence prediction task, using features derived directly from the training text. Unlike the other systems, no external knowledge (e.g. gazetteers and unsupervised representations) are used. This provides a strong basic supervised system, and – unlike Stanford NER and SENNA – has not been tuned for any particular domain, giving potential to reveal more challenging domains without any intrinsic bias.",
"We use the feature extractors natively distributed with the NER frameworks. For Stanford NER we use the feature set “chris2009” without distributional similarity, which has been tuned for the CoNLL 2003 data. This feature was tuned to handle OOV words through word shape, i.e. capitalisation of constituent characters. The goal is to reduce feature sparsity – the basic problem behind OOV named entities – by reducing the complexity of word shapes for long words, while retaining word shape resolution for shorter words. In addition, word clusters, neighbouring n-grams, label sequences and quasi-Newton minima search are included. SENNA uses word embedding features and gazetteer features; for the training configuration see https://github.com/attardi/deepnl#benchmarks. Finally, for CRFSuite, we use the provided feature extractor without POS or chunking features, which leaves unigram and bigram word features of the mention and in a window of 2 to the left and the right of the mention, character shape, prefixes and suffixes of tokens.",
"These systems are compared against a simple surface form memorisation tagger. The memorisation baseline picks the most frequent NE label for each token sequence as observed in the training corpus. There are two kinds of ambiguity: one is overlapping sequences, e.g. if both “New York City” and “New York” are memorised as a location. In that case the longest-matching sequence is labelled with the corresponding NE class. The second, class ambiguity, occurs when the same textual label refers to different NE classes, e.g. “Google” could either refer to the name of a company, in which case it would be labelled as ORG, or to the company's search engine, which would be labelled as O (no NE)."
],
[
"[t]",
" P, R and F1 of NERC with different models trained on original corpora",
"[t]",
" F1 per NE type with different models trained on original corpora",
"Our first research question is how NERC performance differs for corpora between approaches. In order to answer this, Precision (P), Recall (R) and F1 metrics are reported on size-normalised corpora (Table UID9 ) and original corpora (Tables \"RQ1: NER performance with Different Approaches\" and \"RQ1: NER performance with Different Approaches\" ). The reason for size normalisation is to make results comparable across corpora. For size normalisation, the training corpora are downsampled to include the same number of NEs as the smallest corpus, UMBC. For that, sentences are selected from the beginning of the train part of the corpora so that they include the same number of NEs as UMBC. Other ways of downsampling the corpora would be to select the first $n$ sentences or the first $n$ tokens, where $n$ is the number of sentences in the smallest corpus. The reason that the number of NEs, which represent the number of positive training examples, is chosen for downsampling the corpora is that the number of positive training examples have a much bigger impact on learning than the number of negative training examples. For instance, BIBREF48 , among others, study topic classification performance for small corpora and sample from the Reuters corpus. They find that adding more negative training data gives little to no improvement, whereas adding positive examples drastically improves performance.",
"Table UID9 shows results with size normalised precision (P), recall (R), and F1-Score (F1). The five lowest P, R and F1 values per method (CRFSuite, Stanford NER, SENNA) are in bold to highlight underperformers. Results for all corpora are summed with macro average.",
"Comparing the different methods, the highest F1 results are achieved with SENNA, followed by Stanford NER and CRFSuite. SENNA has a balanced P and R, which can be explained by the use of word embeddings as features, which help with the unseen word problem. For Stanford NER as well as CRFSuite, which do not make use of embeddings, recall is about half of precision. These findings are in line with other work reporting the usefulness of word embeddings and deep learning for a variety of NLP tasks and domains BIBREF49 , BIBREF50 , BIBREF51 . With respect to individual corpora, the ones where SENNA outperforms other methods by a large margin ( $>=$ 13 points in F1) are CoNLL Test A, ACE CTS and OntoNotes TC. The first success can be attributed to being from the same the domain SENNA was originally tuned for. The second is more unexpected and could be due to those corpora containing a disproportional amount of PER and LOC NEs (which are easier to tag correctly) compared to ORG NEs, as can be seen in Table \"RQ1: NER performance with Different Approaches\" , where F1 of NERC methods is reported on the original training data.",
"Our analysis of CRFSuite here is that it is less tuned for NW corpora and might therefore have a more balanced performance across genres does not hold. Results with CRFSuite for every corpus are worse than the results for that corpus with Stanford NER, which is also CRF-based.",
"To summarise, our findings are:",
"[noitemsep]",
"F1 is highest with SENNA, followed by Stanford NER and CRFSuite",
"SENNA outperforms other methods by a large margin (e.g. $>=$ 13 points in F1) for CoNLL Test A, ACE CTS and OntoNotes TC",
"Our hypothesis that CRFSuite is less tuned for NW corpora and will therefore have a more balanced performance across genres does not hold, as results for CRFSuite for every corpus are worse than with Stanford NER"
],
[
"Our second research question is whether existing NER approaches generalise well over corpora in different genres. To do this we study again Precision (P), Recall (R) and F1 metrics on size-normalised corpora (Table UID9 ), on original corpora (Tables \"RQ1: NER performance with Different Approaches\" and \"RQ1: NER performance with Different Approaches\" ), and we further test performance per genre in a separate table (Table 3 ).",
"F1 scores over size-normalised corpora vary widely (Table UID9 ). For example, the SENNA scores range from 9.35% F1 (ACE UN) to 71.48% (CoNLL Test A). Lowest results are consistently observed for the ACE subcorpora, UMBC, and OntoNotes BC and WB. The ACE corpora are large and so may be more prone to non-uniformities emerging during downsampling; they also have special rules for some kinds of organisation which can skew results (as described in Section UID9 ). The highest results are on the CoNLL Test A corpus, OntoNotes BN and MUC 7 Dev. This moderately supports our hypothesis that NER systems perform better on NW than on other genres, probably due to extra fitting from many researchers using them as benchmarks for tuning their approaches. Looking at the Twitter (TWI) corpora present the most challenge due to increased diversity, the trends are unstable. Although results for UMBC are among the lowest, results for MSM 2013 and Ritter are in the same range or even higher than those on NW datasets. This begs the question whether low results for Twitter corpora reported previously were due to the lack of sufficient in-genre training data.",
"Comparing results on normalised to non-normalised data, Twitter results are lower than those for most OntoNotes corpora and CoNLL test corpora, mostly due to low recall. Other difficult corpora having low performance are ACE UN and WEB corpora. We further explicitly examine results on size normalised corpora grouped by corpus type, shown in Table 3 . It becomes clear that, on average, newswire corpora and OntoNotes MZ are the easiest corpora and ACE UN, WEB and TWI are harder. This confirms our hypothesis that social media and Web corpora are challenging for NERC.",
"The CoNLL results, on the other hand, are the highest across all corpora irrespective of the NERC method. What is very interesting to see is that they are much higher than the results on the biggest training corpus, OntoNotes NW. For instance, SENNA has an F1 of 78.04 on OntoNotes, compared to an F1 of 92.39 and 86.44 for CoNLL Test A and Test B respectively. So even though OntoNotes NW is more than twice the size of CoNLL in terms of NEs (see Table 4 ), NERC performance is much higher on CoNLL. NERC performance with respect to training corpus size is represented in Figure 1 . The latter figure confirms that although there is some correlation between corpus size and F1, the variance between results on comparably sized corpora is big. This strengthens our argument that there is a need for experimental studies, such as those reported below, to find out what, apart from corpus size, impacts NERC performance.",
"Another set of results presented in Table \"RQ1: NER performance with Different Approaches\" are those of the simple NERC memorisation baseline. It can be observed that corpora with a low F1 for NERC methods, such as UMBC and ACE UN, also have a low memorisation performance. Memorisation is discussed in more depth in Section \"RQ5: Out-Of-Domain NER Performance and Memorisation\" .",
"When NERC results are compared to the corpus diversity statistics, i.e. NE/Unique NE ratios (Table 4 ), token/type ratios (Table 5 ), and tag density (Table 6 ), the strongest predictor for F1 is tag density, as can be evidenced by the R correlation values between the ratios and F1 scores with the Stanford NER system, shown in the respective tables.",
"There is a positive correlation between high F1 and high tag density (R of 0.57 and R of 0.62 with normalised tag density), a weak positive correlation for NE/unique ratios (R of 0.20 and R of 0.15 for normalised ratio), whereas for token/type ratios, no such clear correlation can be observed (R of 0.25 and R of -0.07 for normalised ratio).",
"However, tag density is also not an absolute predictor for NERC performance. While NW corpora have both high NERC performance and high tag density, this high density is not necessarily an indicator of high performance. For example, systems might not find high tag density corpora of other genres necessarily so easy.",
"One factor that can explain the difference in genre performance between e.g. newswire and social media is entity drift – the change in observed entity terms over time. In this case, it is evident from the differing surface forms and contexts for a given entity class. For example, the concept of “location\" that NER systems try to learn might be frequently represented in English newswire from 1991 with terms like Iraq or Kuwait, but more with Atlanta, Bosnia and Kabul in the same language and genre from 1996. Informally, drift on Twitter is often characterised as both high-frequency and high-magnitude; that is, the changes are both rapid and correspond to a large amount of surface form occurrences (e.g. BIBREF12 , BIBREF52 ).",
"We examined the impact of drift in newswire and Twitter corpora, taking datasets based in different timeframes. The goal is to gauge how much diversity is due to new entities appearing over time. To do this, we used just the surface lexicalisations of entities as the entity representation. The overlap of surface forms was measured across different corpora of the same genre and language. We used an additional corpus based on recent data – that from the W-NUT 2015 challenge BIBREF25 . This is measured in terms of occurrences, rather than distinct surface forms, so that the magnitude of the drift is shown instead of having skew in results from the the noisy long tail. Results are given in Table 7 for newswire and Table 8 for Twitter corpora.",
"It is evident that the within-class commonalities in surface forms are much higher in newswire than in Twitter. That is to say, observations of entity texts in one newswire corpus are more helpful in labelling other newswire corpora, than if the same technique is used to label other twitter corpora.",
"This indicates that drift is lower in newswire than in tweets. Certainly, the proportion of entity mentions in most recent corpora (the rightmost-columns) are consistently low compared to entity forms available in earlier data. These reflect the raised OOV and drift rates found in previous work BIBREF12 , BIBREF53 . Another explanation is that there is higher noise in variation, and that the drift is not longitudinal, but rather general. This is partially addressed by RQ3, which we will address next, in Section \"RQ3: Impact of NE Diversity\" .",
"To summarise, our findings are:",
"[noitemsep]",
"Overall, F1 scores vary widely across corpora.",
"Trends can be marked in some genres. On average, newswire corpora and OntoNotes MZ are the easiest corpora and ACE UN, WEB and TWI are the hardest corpora for NER methods to reach good performance on.",
"Normalising corpora by size results in more noisy data such as TWI and WEB data achieving similar results to NW corpora.",
"Increasing the amount of available in-domain training data will likely result in improved NERC performance.",
"There is a strong positive correlation between high F1 and high tag density, a weak positive correlation for NE/unique ratios and no clear correlation between token/type ratios and F1",
"Temporal NE drift is lower in newswire than in tweets",
"The next section will take a closer look at the impact of seen and unseen NEs on NER performance."
],
[
"Unseen NEs are those with surface forms present only in the test, but not training data, whereas seen NEs are those also encountered in the training data. As discussed previously, the ratio between those two measures is an indicator of corpus NE diversity.",
"Table 9 shows how the number of unseen NEs per test corpus relates to the total number of NEs per corpus. The proportion of unseen forms varies widely by corpus, ranging from 0.351 (ACE NW) to 0.931 (UMBC). As expected there is a correlation between corpus size and percentage of unseen NEs, i.e. smaller corpora such as MUC and UMBC tend to contain a larger proportion of unseen NEs than bigger corpora such as ACE NW. In addition, similar to the token/type ratios listed in Table 5 , we observe that TWI and WEB corpora have a higher proportion of unseen entities.",
"As can be seen from Table \"RQ1: NER performance with Different Approaches\" , corpora with a low percentage of unseen NEs (e.g. CoNLL Test A and OntoNotes NW) tend to have high NERC performance, whereas corpora with high percentage of unseen NEs (e.g. UMBC) tend to have low NERC performance. This suggests that systems struggle to recognise and classify unseen NEs correctly.",
"To check this seen/unseen performance split, next we examine NERC performance for unseen and seen NEs separately; results are given in Table 10 . The “All\" column group represents an averaged performance result. What becomes clear from the macro averages is that F1 on unseen NEs is significantly lower than F1 on seen NEs for all three NERC approaches. This is mostly due to recall on unseen NEs being lower than that on seen NEs, and suggests some memorisation and poor generalisation in existing systems. In particular, Stanford NER and CRFSuite have almost 50% lower recall on unseen NEs compared to seen NEs. One outlier is ACE UN, for which the average seen F1 is 1.01 and the average unseen F1 is 1.52, though both are miniscule and the different negligible.",
"Of the three approaches, SENNA exhibits the narrowest F1 difference between seen and unseen NEs. In fact it performs below Stanford NER for seen NEs on many corpora. This may be because SENNA has but a few features, based on word embeddings, which reduces feature sparsity; intuitively, the simplicity of the representation is likely to help with unseen NEs, at the cost of slightly reduced performance on seen NEs through slower fitting. Although SENNA appears to be better at generalising than Stanford NER and our CRFSuite baseline, the difference between its performance on seen NEs and unseen NEs is still noticeable. This is 21.77 for SENNA (macro average), whereas it is 29.41 for CRFSuite and 35.68 for Stanford NER.",
"The fact that performance over unseen entities is significantly lower than on seen NEs partly explains what we observed in the previous section; i.e., that corpora with a high proportion of unseen entities, such as the ACE WL corpus, are harder to label than corpora of a similar size from other genres, such as the ACE BC corpus (e.g. systems reach F1 of $\\sim $ 30 compared to $\\sim $ 50; Table \"RQ1: NER performance with Different Approaches\" ).",
"However, even though performance on seen NEs is higher than on unseen, there is also a difference between seen NEs in corpora of different sizes and genres. For instance, performance on seen NEs in ACE WL is 70.86 (averaged over the three different approaches), whereas performance on seen NEs in the less-diverse ACE BC corpus is higher at 76.42; the less diverse data is, on average, easier to tag. Interestingly, average F1 on seen NEs in the Twitter corpora (MSM and Ritter) is around 80, whereas average F1 on the ACE corpora, which are of similar size, is lower, at around 70.",
"To summarise, our findings are:",
"[noitemsep]",
"F1 on unseen NEs is significantly lower than F1 on seen NEs for all three NERC approaches, which is mostly due to recall on unseen NEs being lower than that on seen NEs.",
"Performance on seen NEs is significantly and consistently higher than that of unseen NEs in different corpora, with the lower scores mostly attributable to lower recall.",
"However, there are still significant differences at labelling seen NEs in different corpora, which means that if NEs are seen or unseen does not account for all of the difference of F1 between corpora of different genres."
],
[
"Having examined the impact of seen/unseen NEs on NERC performance in RQ3, and touched upon surface form drift in RQ2, we now turn our attention towards establishing the impact of seen features, i.e. features appearing in the test set that are observed also in the training set. While feature sparsity can help to explain low F1, it is not a good predictor of performance across methods: sparse features can be good if mixed with high-frequency ones. For instance, Stanford NER often outperforms CRFSuite (see Table \"RQ1: NER performance with Different Approaches\" ) despite having a lower proportion of seen features (i.e. those that occur both in test data and during training). Also, some approaches such as SENNA use a small number of features and base their features almost entirely on the NEs and not on their context.",
"Subsequently, we want to measure F1 for unseens and seen NEs, as in Section \"RQ3: Impact of NE Diversity\" , but also examine how the proportion of seen features impacts on the result. We define seen features as those observed in the test data and also the training data. In turn, unseen features are those observed in the test data but not in the training data. That is, they have not been previously encountered by the system at the time of labeling. Unseen features are different from unseen words in that they are the difference in representation, not surface form. For example, the entity “Xoxarle\" may be an unseen entity not found in training data This entity could reasonably have “shape:Xxxxxxx\" and “last-letter:e\" as part of its feature representation. If the training data contains entities “Kenneth\" and “Simone\", each of this will have generated these two features respectively. Thus, these example features will not be unseen features in this case, despite coming from an unseen entity. Conversely, continuing this example, if the training data contains no feature “first-letter:X\" – which applies to the unseen entity in question – then this will be an unseen feature.",
"We therefore measure the proportion of unseen features per unseen and seen proportion of different corpora. An analysis of this with Stanford NER is shown in Figure 2 . Each data point represents a corpus. The blue squares are data points for seen NEs and the red circles are data points for unseen NEs. The figure shows a negative correlation between F1 and percentage of unseen features, i.e. the lower the percentage of unseen features, the higher the F1. Seen and unseen performance and features separate into two groups, with only two outlier points. The figure shows that novel, previously unseen NEs have more unseen features and that systems score a lower F1 on them. This suggests that despite the presence of feature extractors for tackling unseen NEs, the features generated often do not overlap with those from seen NEs. However, one would expect individual features to give different generalisation power for other sets of entities, and for systems use these features in different ways. That is, machine learning approaches to the NER task do not seem to learn clear-cut decision boundaries based on a small set of features. This is reflected in the softness of the correlation.",
"Finally, the proportion of seen features is higher for seen NEs. The two outlier points are ACE UN (low F1 for seen NEs despite low percentage of unseen features) and UMBC (high F1 for seen NEs despite high percentage of unseen features). An error analysis shows that the ACE UN corpus suffers from the problem that the seen NEs are ambiguous, meaning even if they have been seen in the training corpus, a majority of the time they have been observed with a different NE label. For the UMBC corpus, the opposite is true: seen NEs are unambiguous. This kind of metonymy is a known and challenging issue in NER, and the results on these corpora highlight the impact is still has on modern systems.",
"For all approaches the proportion of observed features for seen NEs is bigger than the proportion of observed features for unseen NEs, as it should be. However, within the seen and unseen testing instances, there is no clear trend indicating whether having more observed features overall increases F1 performance. One trend that is observable is that the smaller the token/type ratio is (Table 5 ), the bigger the variance between the smallest and biggest $n$ for each corpus, or, in other words, the smaller the token/type ratio is, the more diverse the features.",
"To summarise, our findings are:",
"[noitemsep]",
"Seen NEs have more unseen features and systems score a lower F1 on them.",
"Outliers are due to low/high ambiguity of seen NEs.",
"The proportion of observed features for seen NEs is bigger than the proportion of observed features for unseen NEs",
"Within the seen and unseen testing instances, there is no clear trend indicating whether having more observed features overall increases F1 performance.",
"The smaller the token/type ratio is, the more diverse the features."
],
[
"This section explores baseline out-of-domain NERC performance without domain adaptation; what percentage of NEs are seen if there is a difference between the the training and the testing domains; and how the difference in performance on unseen and seen NEs compares to in-domain performance.",
"As demonstrated by the above experiments, and in line with related work, NERC performance varies across domains while also being influenced by the size of the available in-domain training data. Prior work on transfer learning and domain adaptation (e.g. BIBREF16 ) has aimed at increasing performance in domains where only small amounts of training data are available. This is achieved by adding out-of domain data from domains where larger amounts of training data exist. For domain adaptation to be successful, however, the seed domain needs to be similar to the target domain, i.e. if there is no or very little overlap in terms of contexts of the training and testing instances, the model does not learn any additional helpful weights. As a confounding factor, Twitter and other social media generally consist of many (thousands-millions) of micro-domains, with each author BIBREF54 community BIBREF55 and even conversation BIBREF56 having its own style, which makes it hard to adapt to it as a single, monolithic genre; accordingly, adding out-of-domain NER data gives bad results in this situation BIBREF21 . And even if recognised perfectly, entities that occur just once cause problems beyond NER, e.g. in co-reference BIBREF57 .",
"In particular, BIBREF58 has reported improving F1 by around 6% through adaptation from the CoNLL to the ACE dataset. However, transfer learning becomes more difficult if the target domain is very noisy or, as mentioned already, too different from the seed domain. For example, BIBREF59 unsuccessfully tried to adapt the CoNLL 2003 corpus to a Twitter corpus spanning several topics. They found that hand-annotating a Twitter corpus consisting of 24,000 tokens performs better on new Twitter data than their transfer learning efforts with the CoNLL 2003 corpus.",
"The seed domain for the experiments here is newswire, where we use the classifier trained on the biggest NW corpus investigated in this study, i.e. OntoNotes NW. That classifier is then applied to all other corpora. The rationale is to test how suitable such a big corpus would be for improving Twitter NER, for which only small training corpora are available.",
"Results for out-of-domain performance are reported in Table 11 . The highest F1 performance is on the OntoNotes BC corpus, with similar results to the in-domain task. This is unsurprising as it belongs to a similar domain as the training corpus (broadcast conversation) the data was collected in the same time period, and it was annotated using the same guidelines. In contrast, out-of-domain results are much lower than in-domain results for the CoNLL corpora, even though they belong to the same genre as OntoNotes NW. Memorisation recall performance on CoNLL TestA and TestB with OntoNotes NW test suggest that this is partly due to the relatively low overlap in NEs between the two datasets. This could be attributed to the CoNLL corpus having been collected in a different time period to the OntoNotes corpus, when other entities were popular in the news; an example of drift BIBREF37 . Conversely, Stanford NER does better on these corpora than it does on other news data, e.g. ACE NW. This indicates that Stanford NER is capable of some degree of generalisation and can detect novel entity surface forms; however, recall is still lower than precision here, achieving roughly the same scores across these three (from 44.11 to 44.96), showing difficulty in picking up novel entities in novel settings.",
"In addition, there are differences in annotation guidelines between the two datasets. If the CoNLL annotation guidelines were more inclusive than the Ontonotes ones, then even a memorisation evaluation over the same dataset would yield this result. This is, in fact, the case: OntoNotes divides entities into more classes, not all of which can be readily mapped to PER/LOC/ORG. For example, OntoNotes includes PRODUCT, EVENT, and WORK OF ART classes, which are not represented in the CoNLL data. It also includes the NORP class, which blends nationalities, religious and political groups. This has some overlap with ORG, but also includes terms such as “muslims\" and “Danes\", which are too broad for the ACE-related definition of ORGANIZATION. Full details can be found in the OntoNotes 5.0 release notes and the (brief) CoNLL 2003 annotation categories. Notice how the CoNLL guidelines are much more terse, being generally non-prose, but also manage to cram in fairly comprehensive lists of sub-kinds of entities in each case. This is likely to make the CoNLL classes include a diverse range of entities, with the many suggestions acting as generative material for the annotator, and therefore providing a broader range of annotations from which to generalise from – i.e., slightly easier to tag.",
"The lowest F1 of 0 is “achieved\" on ACE BN. An examination of that corpus reveals the NEs contained in that corpus are all lower case, whereas those in OntoNotes NW have initial capital letters.",
"Results on unseen NEs for the out-of-domain setting are in Table 12 . The last section's observation of NERC performance being lower for unseen NEs also generally holds true in this out-of-domain setting. The macro average over F1 for the in-domain setting is 76.74% for seen NEs vs. 53.76 for unseen NEs, whereas for the out-of-domain setting the F1 is 56.10% for seen NEs and 47.73% for unseen NEs.",
"Corpora with a particularly big F1 difference between seen and unseen NEs ( $<=$ 20% averaged over all NERC methods) are ACE NW, ACE BC, ACE UN, OntoNotes BN and OntoNotes MZ. For some corpora (CoNLL Test A and B, MSM and Ritter), out-of-domain F1 (macro average over all methods) of unseen NEs is better than for seen NEs. We suspect that this is due to the out-of-domain evaluation setting encouraging better generalisation, as well as the regularity in entity context observed in the fairly limited CoNLL news data – for example, this corpus contains a large proportion of cricket score reports and many cricketer names, occurring in linguistically similar contexts. Others have also noted that the CoNLL datasets are low-diversity compared to OntoNotes, in the context of named entity recognition BIBREF60 . In each of the exceptions except MSM, the difference is relatively small. We note that the MSM test corpus is one of the smallest datasets used in the evaluation, also based on a noisier genre than most others, and so regard this discrepancy as an outlier.",
"Corpora for which out-of-domain F1 is better than in-domain F1 for at least one of the NERC methods are: MUC7 Test, ACE WL, ACE UN, OntoNotes WB, OntoNotes TC and UMBC. Most of those corpora are small, with combined training and testing bearing fewer than 1,000 NEs (MUC7 Test, ACE UN, UMBC). In such cases, it appears beneficial to have a larger amount of training data, even if it is from a different domain and/or time period. The remaining 3 corpora contain weblogs (ACE WL, ACE WB) and online Usenet discussions (ACE UN). Those three are diverse corpora, as can be observed by the relatively low NEs/Unique NEs ratios (Table 4 ). However, NE/Unique NEs ratios are not an absolute predictor for better out-of-domain than in-domain performance: there are corpora with lower NEs/Unique NEs ratios than ACE WB which have better in-domain than out-of-domain performance. As for the other Twitter corpora, MSM 2013 and Ritter, performance is very low, especially for the memorisation system. This reflects that, as well as surface form variation, the context or other information represented by features shifts significantly more in Twitter than across different samples of newswire, and that the generalisations that can be drawn from newswire by modern NER systems are not sufficient to give any useful performance in this natural, unconstrained kind of text.",
"In fact, it is interesting to see that the memorisation baseline is so effective with many genres, including broadcast news, weblog and newswire. This indicates that there is low variation in the topics discussed by these sources – only a few named entities are mentioned by each. When named entities are seen as micro-topics, each indicating a grounded and small topic of interest, this reflects the nature of news having low topic variation, focusing on a few specific issues – e.g., location referred to tend to be big; persons tend to be politically or financially significant; and organisations rich or governmental BIBREF61 . In contrast, social media users also discuss local locations like restaurants, organisations such as music band and sports clubs, and are content to discuss people that are not necessarily mentioned in Wikipedia. The low overlap and memorisation scores on tweets, when taking entity lexica based on newswire, are therefore symptomatic of the lack of variation in newswire text, which has a limited authorship demographic BIBREF62 and often has to comply to editorial guidelines.",
"The other genre that was particularly difficult for the systems was ACE Usenet. This is a form of user-generated content, not intended for publication but rather discussion among communities. In this sense, it is social media, and so it is not surprising that system performance on ACE UN resembles performance on social media more than other genres.",
"Crucially, the computationally-cheap memorisation method actually acts as a reasonable predictor of the performance of other methods. This suggests that high entity diversity predicts difficulty for current NER systems. As we know that social media tends to have high entity diversity – certainly higher than other genres examined – this offers an explanation for why NER systems perform so poorly when taken outside the relatively conservative newswire domain. Indeed, if memorisation offers a consistent prediction of performance, then it is reasonable to say that memorisation and memorisation-like behaviour accounts for a large proportion of NER system performance.",
"To conclude regarding memorisation and out-of-domain performance, there are multiple issues to consider: is the corpus a sub-corpus of the same corpus as the training corpus, does it belong to the same genre, is it collected in the same time period, and was it created with similar annotation guidelines. Yet it is very difficult to explain high/low out-of-domain performance compared to in-domain performance with those factors.",
"A consistent trend is that, if out-of-domain memorisation is better in-domain memorisation, out-of-domain NERC performance with supervised learning is better than in-domain NERC performance with supervised learning too. This reinforces discussions in previous sections: an overlap in NEs is a good predictor for NERC performance. This is useful when a suitable training corpus has to be identified for a new domain. It can be time-consuming to engineer features or study and compare machine learning methods for different domains, while memorisation performance can be checked quickly.",
"Indeed, memorisation consistently predicts NER performance. The prediction applies both within and across domains. This has implications for the focus of future work in NER: the ability to generalise well enough to recognise unseen entities is a significant and still-open problem.",
"To summarise, our findings are:",
"[noitemsep]",
"What time period an out of domain corpus is collected in plays an important role in NER performance.",
"The context or other information represented by features shifts significantly more in Twitter than across different samples of newswire.",
"The generalisations that can be drawn from newswire by modern NER systems are not sufficient to give any useful performance in this varied kind of text.",
"Memorisation consistently predicts NER performance, both inside and outside genres or domains."
],
[
"This paper investigated the ability of modern NER systems to generalise effectively over a variety of genres. Firstly, by analysing different corpora, we demonstrated that datasets differ widely in many regards: in terms of size; balance of entity classes; proportion of NEs; and how often NEs and tokens are repeated. The most balanced corpus in terms of NE classes is the CoNLL corpus, which, incidentally, is also the most widely used NERC corpus, both for method tuning of off-the-shelf NERC systems (e.g. Stanford NER, SENNA), as well as for comparative evaluation. Corpora, traditionally viewed as noisy, i.e. the Twitter and Web corpora, were found to have a low repetition of NEs and tokens. More surprisingly, however, so does the CoNLL corpus, which indicates that it is well balanced in terms of stories. Newswire corpora have a large proportion of NEs as percentage of all tokens, which indicates high information density. Web, Twitter and telephone conversation corpora, on the other hand, have low information density.",
"Our second set of findings relates to the NERC approaches studied. Overall, SENNA achieves consistently the highest performance across most corpora, and thus has the best approach to generalising from training to testing data. This can mostly be attributed to SENNA's use of word embeddings, trained with deep convolutional neural nets. The default parameters of SENNA achieve a balanced precision and recall, while for Stanford NER and CRFSuite, precision is almost twice as high as recall.",
"Our experiments also confirmed the correlation between NERC performance and training corpus size, although size alone is not an absolute predictor. In particular, the biggest NE-annotated corpus amongst those studied is OntoNotes NW – almost twice the size of CoNLL in terms of number of NEs. Nevertheless, the average F1 for CoNLL is the highest of all corpora and, in particular, SENNA has 11 points higher F1 on CoNLL than on OntoNotes NW.",
"Studying NERC on size-normalised corpora, it becomes clear that there is also a big difference in performance on corpora from the same genre. When normalising training data by size, diverse corpora, such as Web and social media, still yield lower F1 than newswire corpora. This indicates that annotating more training examples for diverse genres would likely lead to a dramatic increase in F1.",
"What is found to be a good predictor of F1 is a memorisation baseline, which picks the most frequent NE label for each token sequence in the test corpus as observed in the training corpus. This supported our hypothesis that entity diversity plays an important role, being negatively correlated with F1. Studying proportions of unseen entity surface forms, experiments showed corpora with a large proportion of unseen NEs tend to yield lower F1, due to much lower performance on unseen than seen NEs (about 17 points lower averaged over all NERC methods and corpora). This finally explains why the performance is highest for the benchmark CoNLL newswire corpus – it contains the lowest proportion of unseen NEs. It also explains the difference in performance between NERC on other corpora. Out of all the possible indicators for high NER F1 studied, this is found to be the most reliable one. This directly supports our hypothesis that generalising for unseen named entities is both difficult and important.",
"Also studied is the proportion of unseen features per unseen and seen NE portions of different corpora. However, this is found to not be very helpful. The proportion of seen features is higher for seen NEs, as it should be. However, within the seen and unseen NE splits, there is no clear trend indicating if having more seen features helps.",
"We also showed that hand-annotating more training examples is a straight-forward and reliable way of improving NERC performance. However, this is costly, which is why it can be useful to study if using different, larger corpora for training might be helpful. Indeed, substituting in-domain training corpora with other training corpora for the same genre created at the same time improves performance, and studying how such corpora can be combined with transfer learning or domain adaptation strategies might improve performance even further. However, for most corpora, there is a significant drop in performance for out-of-domain training. What is again found to be reliable is to check the memorisation baseline: if results for the out-of-domain memorisation baseline are higher than for in-domain memorisation, than using the out-of-domain corpus for training is likely to be helpful.",
"Across a broad range of corpora and genres, characterised in different ways, we have examined how named entities are embedded and presented. While there is great variation in the range and class of entities found, it is consistent that the more varied texts are harder to do named entity recognition in. This connection with variation occurs to such an extent that, in fact, performance when memorising lexical forms stably predicts system accuracy. The result of this is that systems are not sufficiently effective at generalising beyond the entity surface forms and contexts found in training data. To close this gap and advance NER systems, and cope with the modern reality of streamed NER, as opposed to the prior generation of batch-learning based systems with static evaluation sets being used as research benchmarks, future work needs to address named entity generalisation and out-of-vocabulary lexical forms."
],
[
"This work was partially supported by the UK EPSRC Grant No. EP/K017896/1 uComp and by the European Union under Grant Agreements No. 611233 PHEME. The authors wish to thank the CS&L reviewers for their helpful and constructive feedback."
]
],
"section_name": [
"Introduction",
"Datasets",
"NER Models and Features",
"RQ1: NER performance with Different Approaches",
"RQ2: NER performance in Different Genres",
"RQ3: Impact of NE Diversity",
"RQ4: Unseen Features, unseen NEs and NER performance",
"RQ5: Out-Of-Domain NER Performance and Memorisation",
"Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"05dfe42d133923f3516fb680679bacc680589a03"
],
"answer": [
{
"evidence": [
"Since the goal of this study is to compare NER performance on corpora from diverse domains and genres, seven benchmark NER corpora are included, spanning newswire, broadcast conversation, Web content, and social media (see Table 1 for details). These datasets were chosen such that they have been annotated with the same or very similar entity classes, in particular, names of people, locations, and organisations. Thus corpora including only domain-specific entities (e.g. biomedical corpora) were excluded. The choice of corpora was also motivated by their chronological age; we wanted to ensure a good temporal spread, in order to study possible effects of entity drift over time.",
"FLOAT SELECTED: Table 1 Corpora genres and number of NEs of different classes."
],
"extractive_spans": [],
"free_form_answer": "MUC, CoNLL, ACE, OntoNotes, MSM, Ritter, UMBC",
"highlighted_evidence": [
"Since the goal of this study is to compare NER performance on corpora from diverse domains and genres, seven benchmark NER corpora are included, spanning newswire, broadcast conversation, Web content, and social media (see Table 1 for details).",
"FLOAT SELECTED: Table 1 Corpora genres and number of NEs of different classes."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"somewhat"
],
"question": [
"What web and user-generated NER datasets are used for the analysis?"
],
"question_id": [
"94e0cf44345800ef46a8c7d52902f074a1139e1a"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"named entity recognition"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Table 1 Corpora genres and number of NEs of different classes.",
"Table 2 Sizes of corpora, measured in number of NEs, used for training and testing. Note that the for the ConLL corpus the dev set is called “Test A” and the test set “Test B”.",
"Table 3 P, R and F1 of NERC with different models evaluated on different testing corpora, trained on corpora normalised by size.",
"Table 4 P, R and F1 of NERC with different models evaluated on different testing corpora, trained on corpora normalised by size, metrics macro averaged by genres.",
"Table 5 NE/Unique NE ratios and normalised NE/Unique NE ratios of different corpora, mean and median of those values plus R correlation of ratios with Stanford NER F1 on original corpora.",
"Table 6 Token/type ratios and normalised token/type ratios of different corpora, mean and median of those values plus R correlation of ratios with Stanford NER F1 on original corpora.",
"Table 7 Tag density and normalised tag density, the proportion of tokens with NE tags to all tokens, mean and median of those values plus R correlation of density with Stanford NER F1 on original corpora.",
"Table 8 P, R and F1 of NERC with different models trained on original corpora.",
"Table 9 F1 per NE type with different models trained on original corpora.",
"Fig. 1. F1 of different NER methods with respect to training corpus size, measured in log of number of NEs.",
"Table 10 Entity surface form occurrence overlap between Twitter corpora.",
"Table 11 Entity surface form occurrence overlap between news corpora.",
"Table 12 Proportion of unseen entities in different test corpora.",
"Table 13 P, R and F1 of NERC with different models of unseen and seen NEs.",
"Fig. 2. Percentage of unseen features and F1 with Stanford NER for seen (blue squares) and unseen (red circles) NEs in different corpora. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article.)",
"Table 14 Out of domain performance: F1 of NERC with different models.",
"Table 15 Out-of-domain performance for unseen vs. seen NEs: F1 of NERC with different models."
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"8-Table7-1.png",
"10-Table8-1.png",
"11-Table9-1.png",
"12-Figure1-1.png",
"13-Table10-1.png",
"13-Table11-1.png",
"14-Table12-1.png",
"15-Table13-1.png",
"17-Figure2-1.png",
"18-Table14-1.png",
"19-Table15-1.png"
]
} | [
"What web and user-generated NER datasets are used for the analysis?"
] | [
[
"1701.02877-4-Table1-1.png",
"1701.02877-Datasets-0"
]
] | [
"MUC, CoNLL, ACE, OntoNotes, MSM, Ritter, UMBC"
] | 120 |
1904.05862 | wav2vec: Unsupervised Pre-training for Speech Recognition | We explore unsupervised pre-training for speech recognition by learning representations of raw audio. wav2vec is trained on large amounts of unlabeled audio data and the resulting representations are then used to improve acoustic model training. We pre-train a simple multi-layer convolutional neural network optimized via a noise contrastive binary classification task. Our experiments on WSJ reduce WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data is available. Our approach achieves 2.43% WER on the nov92 test set. This outperforms Deep Speech 2, the best reported character-based system in the literature while using three orders of magnitude less labeled training data. | {
"paragraphs": [
[
"Current state of the art models for speech recognition require large amounts of transcribed audio data to attain good performance BIBREF1 . Recently, pre-training of neural networks has emerged as an effective technique for settings where labeled data is scarce. The key idea is to learn general representations in a setup where substantial amounts of labeled or unlabeled data is available and to leverage the learned representations to improve performance on a downstream task for which the amount of data is limited. This is particularly interesting for tasks where substantial effort is required to obtain labeled data, such as speech recognition.",
"In computer vision, representations for ImageNet BIBREF2 and COCO BIBREF3 have proven to be useful to initialize models for tasks such as image captioning BIBREF4 or pose estimation BIBREF5 . Unsupervised pre-training for computer vision has also shown promise BIBREF6 . In natural language processing (NLP), unsupervised pre-training of language models BIBREF7 , BIBREF8 , BIBREF9 improved many tasks such as text classification, phrase structure parsing and machine translation BIBREF10 , BIBREF11 . In speech processing, pre-training has focused on emotion recogniton BIBREF12 , speaker identification BIBREF13 , phoneme discrimination BIBREF14 , BIBREF15 as well as transferring ASR representations from one language to another BIBREF16 . There has been work on unsupervised learning for speech but the resulting representations have not been applied to improve supervised speech recognition BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 .",
"In this paper, we apply unsupervised pre-training to improve supervised speech recognition. This enables exploiting unlabeled audio data which is much easier to collect than labeled data. Our model, , is a convolutional neural network that takes raw audio as input and computes a general representation that can be input to a speech recognition system. The objective is a contrastive loss that requires distinguishing a true future audio sample from negatives BIBREF22 , BIBREF23 , BIBREF15 . Different to previous work BIBREF15 , we move beyond frame-wise phoneme classification and apply the learned representations to improve strong supervised ASR systems. relies on a fully convolutional architecture which can be easily parallelized over time on modern hardware compared to recurrent autoregressive models used in previous work (§ SECREF2 ).",
"Our experimental results on the WSJ benchmark demonstrate that pre-trained representations estimated on about 1,000 hours of unlabeled speech can substantially improve a character-based ASR system and outperform the best character-based result in the literature, Deep Speech 2. On the TIMIT task, pre-training enables us to match the best reported result in the literature. In a simulated low-resource setup with only eight hours of transcriped audio data, reduces WER by up to 32% compared to a baseline model that relies on labeled data only (§ SECREF3 & § SECREF4 )."
],
[
"Given an audio signal as input, we optimize our model (§ SECREF3 ) to predict future samples from a given signal context. A common problem with these approaches is the requirement to accurately model the data distribution INLINEFORM0 , which is challenging. We avoid this problem by first encoding raw speech samples INLINEFORM1 into a feature representation INLINEFORM2 at a lower temporal frequency and then implicitly model a density function INLINEFORM3 similar to BIBREF15 ."
],
[
"Our model takes raw audio signal as input and then applies two networks. The encoder network embeds the audio signal in latent space and the context network combines multiple time-steps of the encoder to obtain contextualized representations (Figure FIGREF2 ). Both networks are then used to compute the objective function (§ SECREF4 ).",
"Given raw audio samples INLINEFORM0 , we apply the encoder network INLINEFORM1 which we parameterize as a five-layer convolutional network similar to BIBREF15 . Alternatively, one could use other architectures such as the trainable frontend of BIBREF24 amongst others. The encoder layers have kernel sizes INLINEFORM2 and strides INLINEFORM3 . The output of the encoder is a low frequency feature representation INLINEFORM4 which encodes about 30ms of 16KHz of audio and the striding results in representation INLINEFORM5 every 10ms.",
"Next, we apply the context network INLINEFORM0 to the output of the encoder network to mix multiple latent representations INLINEFORM1 into a single contextualized tensor INLINEFORM2 for a receptive field size INLINEFORM3 . The context network has seven layers and each layer has kernel size three and stride one. The total receptive field of the context network is about 180ms.",
"The layers of both networks consist of a causal convolution with 512 channels, a group normalization layer and a ReLU nonlinearity. We normalize both across the feature and temporal dimension for each sample which is equivalent to group normalization with a single normalization group BIBREF25 . We found it important to choose a normalization scheme that is invariant to the scaling and the offset of the input data. This choice resulted in representations that generalize well across datasets."
],
[
"We train the model to distinguish a sample INLINEFORM0 that is k steps in the future from distractor samples INLINEFORM1 drawn from a proposal distribution INLINEFORM2 , by minimizing the contrastive loss for each step INLINEFORM3 : DISPLAYFORM0 ",
"where we denote the sigmoid INLINEFORM0 , and where INLINEFORM1 is the probability of INLINEFORM2 being the true sample. We consider a step-specific affine transformation INLINEFORM3 for each step INLINEFORM4 , that is applied to INLINEFORM5 BIBREF15 . We optimize the loss INLINEFORM6 , summing ( EQREF5 ) over different step sizes. In practice, we approximate the expectation by sampling ten negatives examples by uniformly choosing distractors from each audio sequence, i.e., INLINEFORM7 , where INLINEFORM8 is the sequence length and we set INLINEFORM9 to the number of negatives.",
"After training, we input the representations produced by the context network INLINEFORM0 to the acoustic model instead of log-mel filterbank features."
],
[
"We consider the following corpora: For phoneme recognition on TIMIT BIBREF26 we use the standard train, dev and test split where the training data contains just over three hours of audio data. Wall Street Journal (WSJ; Woodland et al., 1994) comprises about 81 hours of transcribed audio data. We train on si284, validate on nov93dev and test on nov92. Librispeech BIBREF27 contains a total of 960 hours of clean and noisy speech for training. For pre-training, we use either the full 81 hours of the WSJ corpus, an 80 hour subset of clean Librispeech, the full 960 hour Librispeech training set, or a combination of all of them.",
"To train the baseline acoustic model we compute 80 log-mel filterbank coefficients for a 25ms sliding window with stride 10ms. Final models are evaluated in terms of both word error rate (WER) and letter error rate (LER)."
],
[
"We use the wav2letter++ toolkit for training and evaluation of acoustic models BIBREF28 . For the TIMIT task, we follow the character-based wav2letter++ setup of BIBREF24 which uses seven consecutive blocks of convolutions (kernel size 5 with 1,000 channels), followed by a PReLU nonlinearity and a dropout rate of 0.7. The final representation is projected to a 39-dimensional phoneme probability. The model is trained using the Auto Segmentation Criterion (ASG; Collobert et al., 2016)) using SGD with momentum.",
"Our baseline for the WSJ benchmark is the wav2letter++ setup described in BIBREF29 which is a 17 layer model with gated convolutions BIBREF30 . The model predicts probabilities for 31 graphemes, including the standard English alphabet, the apostrophe and period, two repetition characters (e.g. the word ann is transcribed as an1), and a silence token (|) used as word boundary.",
"All acoustic models are trained on 8 Nvidia V100 GPUs using the distributed training implementations of fairseq and wav2letter++. When training acoustic models on WSJ, we use plain SGD with learning rate 5.6 as well as gradient clipping BIBREF29 and train for 1,000 epochs with a total batch size of 64 audio sequences. We use early stopping and choose models based on validation WER after evaluating checkpoints with a 4-gram language model. For TIMIT we use learning rate 0.12, momentum of 0.9 and train for 1,000 epochs on 8 GPUs with a batch size of 16 audio sequences."
],
[
"For decoding the emissions from the acoustic model we use a lexicon as well as a separate language model trained on the WSJ language modeling data only. We consider a 4-gram KenLM language model BIBREF31 , a word-based convolutional language model BIBREF29 , and a character based convolutional language model BIBREF32 . We decode the word sequence INLINEFORM0 from the output of the context network INLINEFORM1 or log-mel filterbanks using the beam search decoder of BIBREF29 by maximizing DISPLAYFORM0 ",
"where INLINEFORM0 is the acoustic model, INLINEFORM1 is the language model, INLINEFORM2 are the characters of INLINEFORM3 . Hyper-parameters INLINEFORM4 , INLINEFORM5 and INLINEFORM6 are weights for the language model, the word penalty, and the silence penalty.",
"For decoding WSJ, we tune the hyperparameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 using a random search. Finally, we decode the emissions from the acoustic model with the best parameter setting for INLINEFORM3 , INLINEFORM4 and INLINEFORM5 , and a beam size of 4000 and beam score threshold of 250."
],
[
"The pre-training models are implemented in PyTorch in the fairseq toolkit BIBREF0 . We optimize them with Adam BIBREF33 and a cosine learning rate schedule BIBREF34 annealed over 40K update steps for both WSJ and the clean Librispeech training datasets. We start with a learning rate of 1e-7, and the gradually warm it up for 500 updates up to 0.005 and then decay it following the cosine curve up to 1e-6. We train for 400K steps for full Librispeech. To compute the objective, we sample ten negatives and we use INLINEFORM0 tasks.",
"We train on 8 GPUs and put a variable number of audio sequences on each GPU, up to a pre-defined limit of 1.5M frames per GPU. Sequences are grouped by length and we crop them to a maximum size of 150K frames each, or the length of the shortest sequence in the batch, whichever is smaller. Cropping removes speech signal from either the beginning or end of the sequence and we randomly decide the cropping offsets for each sample; we re-sample every epoch. This is a form of data augmentation but also ensures equal length of all sequences on a GPU and removes on average 25% of the training data. After cropping the total effective batch size across GPUs is about 556 seconds of speech signal (for a variable number of audio sequences)."
],
[
"Different to BIBREF15 , we evaluate the pre-trained representations directly on downstream speech recognition tasks. We measure speech recognition performance on the WSJ benchmark and simulate various low resource setups (§ SECREF12 ). We also evaluate on the TIMIT phoneme recognition task (§ SECREF13 ) and ablate various modeling choices (§ SECREF14 )."
],
[
"We consider pre-training on the audio data (without labels) of WSJ, part of clean Librispeech (about 80h) and full Librispeech as well as a combination of all datasets (§ SECREF7 ). For the pre-training experiments we feed the output of the context network to the acoustic model, instead of log-mel filterbank features.",
"Table shows that pre-training on more data leads to better accuracy on the WSJ benchmark. Pre-trained representations can substantially improve performance over our character-based baseline which is trained on log-mel filterbank features. This shows that pre-training on unlabeled audio data can improve over the best character-based approach, Deep Speech 2 BIBREF1 , by 0.3 WER on nov92. Our best pre-training model performs as well as the phoneme-based model of BIBREF35 . BIBREF36 is a phoneme-based approach that pre-trains on the transcribed Libirspeech data and then fine-tunes on WSJ. In comparison, our method requires only unlabeled audio data and BIBREF36 also rely on a stronger baseline model than our setup.",
"What is the impact of pre-trained representations with less transcribed data? In order to get a better understanding of this, we train acoustic models with different amounts of labeled training data and measure accuracy with and without pre-trained representations (log-mel filterbanks). The pre-trained representations are trained on the full Librispeech corpus and we measure accuracy in terms of WER when decoding with a 4-gram language model. Figure shows that pre-training reduces WER by 32% on nov93dev when only about eight hours of transcribed data is available. Pre-training only on the audio data of WSJ ( WSJ) performs worse compared to the much larger Librispeech ( Libri). This further confirms that pre-training on more data is crucial to good performance."
],
[
"On the TIMIT task we use a 7-layer wav2letter++ model with high dropout (§ SECREF3 ; Synnaeve et al., 2016). Table shows that we can match the state of the art when we pre-train on Librispeech and WSJ audio data. Accuracy steadily increases with more data for pre-training and the best accuracy is achieved when we use the largest amount of data for pre-training."
],
[
"In this section we analyze some of the design choices we made for . We pre-train on the 80 hour subset of clean Librispeech and evaluate on TIMIT. Table shows that increasing the number of negative samples only helps up to ten samples. Thereafter, performance plateaus while training time increases. We suspect that this is because the training signal from the positive samples decreases as the number of negative samples increases. In this experiment, everything is kept equal except for the number of negative samples.",
"Next, we analyze the effect of data augmentation through cropping audio sequences (§ SECREF11 ). When creating batches we crop sequences to a pre-defined maximum length. Table shows that a crop size of 150K frames results in the best performance. Not restricting the maximum length (None) gives an average sequence length of about 207K frames and results in the worst accuracy. This is most likely because the setting provides the least amount of data augmentation.",
"Table shows that predicting more than 12 steps ahead in the future does not result in better performance and increasing the number of steps increases training time."
],
[
"We introduce , the first application of unsupervised pre-training to speech recognition with a fully convolutional model. Our approach achieves 2.78 WER on the test set of WSJ, a result that outperforms the next best known character-based speech recognition model in the literature BIBREF1 while using three orders of magnitude less transcribed training data. We show that more data for pre-training improves performance and that this approach not only improves resource-poor setups, but also settings where all WSJ training data is used. In future work, we will investigate different architectures and fine-tuning which is likely to further improve performance."
],
[
"We thank the Speech team at FAIR, especially Jacob Kahn, Vineel Pratap and Qiantong Xu for help with wav2letter++ experiments, and Tatiana Likhomanenko for providing convolutional language models for our experiments."
]
],
"section_name": [
"Introduction",
"Pre-training Approach",
"Model",
"Objective",
"Data",
"Acoustic Models",
"Decoding",
"Pre-training Models",
"Results",
"Pre-training for the WSJ benchmark",
"Pre-training for TIMIT",
"Ablations",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"b5db6a885782bd0be2ae18fb5f4ee7b901f4899a"
],
"answer": [
{
"evidence": [
"We consider pre-training on the audio data (without labels) of WSJ, part of clean Librispeech (about 80h) and full Librispeech as well as a combination of all datasets (§ SECREF7 ). For the pre-training experiments we feed the output of the context network to the acoustic model, instead of log-mel filterbank features.",
"Our experimental results on the WSJ benchmark demonstrate that pre-trained representations estimated on about 1,000 hours of unlabeled speech can substantially improve a character-based ASR system and outperform the best character-based result in the literature, Deep Speech 2. On the TIMIT task, pre-training enables us to match the best reported result in the literature. In a simulated low-resource setup with only eight hours of transcriped audio data, reduces WER by up to 32% compared to a baseline model that relies on labeled data only (§ SECREF3 & § SECREF4 )."
],
"extractive_spans": [],
"free_form_answer": "1000 hours of WSJ audio data",
"highlighted_evidence": [
"We consider pre-training on the audio data (without labels) of WSJ, part of clean Librispeech (about 80h) and full Librispeech as well as a combination of all datasets (§ SECREF7 ). ",
"Our experimental results on the WSJ benchmark demonstrate that pre-trained representations estimated on about 1,000 hours of unlabeled speech can substantially improve a character-based ASR system and outperform the best character-based result in the literature, Deep Speech 2. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"8e62f7f6e7e443e1ab1df3d3c04d273a06ade07f"
],
"answer": [
{
"evidence": [
"Given raw audio samples INLINEFORM0 , we apply the encoder network INLINEFORM1 which we parameterize as a five-layer convolutional network similar to BIBREF15 . Alternatively, one could use other architectures such as the trainable frontend of BIBREF24 amongst others. The encoder layers have kernel sizes INLINEFORM2 and strides INLINEFORM3 . The output of the encoder is a low frequency feature representation INLINEFORM4 which encodes about 30ms of 16KHz of audio and the striding results in representation INLINEFORM5 every 10ms.",
"Next, we apply the context network INLINEFORM0 to the output of the encoder network to mix multiple latent representations INLINEFORM1 into a single contextualized tensor INLINEFORM2 for a receptive field size INLINEFORM3 . The context network has seven layers and each layer has kernel size three and stride one. The total receptive field of the context network is about 180ms."
],
"extractive_spans": [],
"free_form_answer": "wav2vec has 12 convolutional layers",
"highlighted_evidence": [
"Given raw audio samples INLINEFORM0 , we apply the encoder network INLINEFORM1 which we parameterize as a five-layer convolutional network similar to BIBREF15 .",
"Next, we apply the context network INLINEFORM0 to the output of the encoder network to mix multiple latent representations INLINEFORM1 into a single contextualized tensor INLINEFORM2 for a receptive field size INLINEFORM3 . The context network has seven layers and each layer has kernel size three and stride one. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0633347cc1331b9aecb030e036503854b5167b2d"
],
"answer": [
{
"evidence": [
"What is the impact of pre-trained representations with less transcribed data? In order to get a better understanding of this, we train acoustic models with different amounts of labeled training data and measure accuracy with and without pre-trained representations (log-mel filterbanks). The pre-trained representations are trained on the full Librispeech corpus and we measure accuracy in terms of WER when decoding with a 4-gram language model. Figure shows that pre-training reduces WER by 32% on nov93dev when only about eight hours of transcribed data is available. Pre-training only on the audio data of WSJ ( WSJ) performs worse compared to the much larger Librispeech ( Libri). This further confirms that pre-training on more data is crucial to good performance."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"What is the impact of pre-trained representations with less transcribed data? In order to get a better understanding of this, we train acoustic models with different amounts of labeled training data and measure accuracy with and without pre-trained representations (log-mel filterbanks). The pre-trained representations are trained on the full Librispeech corpus and we measure accuracy in terms of WER when decoding with a 4-gram language model. Figure shows that pre-training reduces WER by 32% on nov93dev when only about eight hours of transcribed data is available. Pre-training only on the audio data of WSJ ( WSJ) performs worse compared to the much larger Librispeech ( Libri). This further confirms that pre-training on more data is crucial to good performance."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which unlabeled data do they pretrain with?",
"How many convolutional layers does their model have?",
"Do they explore how much traning data is needed for which magnitude of improvement for WER? "
],
"question_id": [
"ad67ca844c63bf8ac9fdd0fa5f58c5a438f16211",
"12eaaf3b6ebc51846448c6e1ad210dbef7d25a96",
"828615a874512844ede9d7f7d92bdc48bb48b18d"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Illustration of pre-training from audio data X which is encoded with two convolutional neural networks that are stacked on top of each other. The model is optimized to solve a next time step prediction task.",
"Table 1: Replacing log-mel filterbanks (Baseline) by pre-trained embeddings improves WSJ performance on test (nov92) and validation (nov93dev) in terms of both LER and WER. We evaluate pre-training on the acoustic data of part of clean and full Librispeech as well as the combination of all of them. † indicates results with phoneme-based models.",
"Figure 2: Pre-training substanstially improves WER in simulated low-resource setups on the audio data of WSJ compared to wav2letter++ with log-mel filterbanks features (Baseline). Pre-training on the audio data of the full 960 h Librispeech dataset (wav2vec Libri) performs better than pre-training on the 81 h WSJ dataset (wav2vec WSJ).",
"Table 2: Results for phoneme recognition on TIMIT in terms of PER. All our models use the CNN8L-PReLU-do0.7 architecture (Ravanelli et al., 2018).",
"Table 3: Effect of different number of negative samples during pre-training for TIMIT on the development set.",
"Table 5: Effect of different number of tasks K (cf. Table 3)."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"5-Figure2-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Table5-1.png"
]
} | [
"Which unlabeled data do they pretrain with?",
"How many convolutional layers does their model have?"
] | [
[
"1904.05862-Pre-training for the WSJ benchmark-0",
"1904.05862-Introduction-3"
],
[
"1904.05862-Model-2",
"1904.05862-Model-1"
]
] | [
"1000 hours of WSJ audio data",
"wav2vec has 12 convolutional layers"
] | 121 |
1911.00069 | Neural Cross-Lingual Relation Extraction Based on Bilingual Word Embedding Mapping | Relation extraction (RE) seeks to detect and classify semantic relationships between entities, which provides useful information for many NLP applications. Since the state-of-the-art RE models require large amounts of manually annotated data and language-specific resources to achieve high accuracy, it is very challenging to transfer an RE model of a resource-rich language to a resource-poor language. In this paper, we propose a new approach for cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language, so that a well-trained source-language neural network RE model can be directly applied to the target language. Experiment results show that the proposed approach achieves very good performance for a number of target languages on both in-house and open datasets, using a small bilingual dictionary with only 1K word pairs. | {
"paragraphs": [
[
"Relation extraction (RE) is an important information extraction task that seeks to detect and classify semantic relationships between entities like persons, organizations, geo-political entities, locations, and events. It provides useful information for many NLP applications such as knowledge base construction, text mining and question answering. For example, the entity Washington, D.C. and the entity United States have a CapitalOf relationship, and extraction of such relationships can help answer questions like “What is the capital city of the United States?\"",
"Traditional RE models (e.g., BIBREF0, BIBREF1, BIBREF2) require careful feature engineering to derive and combine various lexical, syntactic and semantic features. Recently, neural network RE models (e.g., BIBREF3, BIBREF4, BIBREF5, BIBREF6) have become very successful. These models employ a certain level of automatic feature learning by using word embeddings, which significantly simplifies the feature engineering task while considerably improving the accuracy, achieving the state-of-the-art performance for relation extraction.",
"All the above RE models are supervised machine learning models that need to be trained with large amounts of manually annotated RE data to achieve high accuracy. However, annotating RE data by human is expensive and time-consuming, and can be quite difficult for a new language. Moreover, most RE models require language-specific resources such as dependency parsers and part-of-speech (POS) taggers, which also makes it very challenging to transfer an RE model of a resource-rich language to a resource-poor language.",
"There are a few existing weakly supervised cross-lingual RE approaches that require no human annotation in the target languages, e.g., BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, the existing approaches require aligned parallel corpora or machine translation systems, which may not be readily available in practice.",
"In this paper, we make the following contributions to cross-lingual RE:",
"We propose a new approach for direct cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language (e.g., English), so that a well-trained source-language RE model can be directly applied to the target language, with no manually annotated RE data needed for the target language.",
"We design a deep neural network architecture for the source-language (English) RE model that uses word embeddings and generic language-independent features as the input. The English RE model achieves the-state-of-the-art performance without using language-specific resources.",
"We conduct extensive experiments which show that the proposed approach achieves very good performance (up to $79\\%$ of the accuracy of the supervised target-language RE model) for a number of target languages on both in-house and the ACE05 datasets BIBREF11, using a small bilingual dictionary with only 1K word pairs. To the best of our knowledge, this is the first work that includes empirical studies for cross-lingual RE on several languages across a variety of language families, without using aligned parallel corpora or machine translation systems.",
"We organize the paper as follows. In Section 2 we provide an overview of our approach. In Section 3 we describe how to build monolingual word embeddings and learn a linear mapping between two languages. In Section 4 we present a neural network architecture for the source-language (English). In Section 5 we evaluate the performance of the proposed approach for a number of target languages. We discuss related work in Section 6 and conclude the paper in Section 7."
],
[
"We summarize the main steps of our neural cross-lingual RE model transfer approach as follows.",
"Build word embeddings for the source language and the target language separately using monolingual data.",
"Learn a linear mapping that projects the target-language word embeddings into the source-language embedding space using a small bilingual dictionary.",
"Build a neural network source-language RE model that uses word embeddings and generic language-independent features as the input.",
"For a target-language sentence and any two entities in it, project the word embeddings of the words in the sentence to the source-language word embeddings using the linear mapping, and then apply the source-language RE model on the projected word embeddings to classify the relationship between the two entities. An example is shown in Figure FIGREF4, where the target language is Portuguese and the source language is English.",
"We will describe each component of our approach in the subsequent sections."
],
[
"In recent years, vector representations of words, known as word embeddings, become ubiquitous for many NLP applications BIBREF12, BIBREF13, BIBREF14.",
"A monolingual word embedding model maps words in the vocabulary $\\mathcal {V}$ of a language to real-valued vectors in $\\mathbb {R}^{d\\times 1}$. The dimension of the vector space $d$ is normally much smaller than the size of the vocabulary $V=|\\mathcal {V}|$ for efficient representation. It also aims to capture semantic similarities between the words based on their distributional properties in large samples of monolingual data.",
"Cross-lingual word embedding models try to build word embeddings across multiple languages BIBREF15, BIBREF16. One approach builds monolingual word embeddings separately and then maps them to the same vector space using a bilingual dictionary BIBREF17, BIBREF18. Another approach builds multilingual word embeddings in a shared vector space simultaneously, by generating mixed language corpora using aligned sentences BIBREF19, BIBREF20.",
"In this paper, we adopt the technique in BIBREF17 because it only requires a small bilingual dictionary of aligned word pairs, and does not require parallel corpora of aligned sentences which could be more difficult to obtain."
],
[
"To build monolingual word embeddings for the source and target languages, we use a variant of the Continuous Bag-of-Words (CBOW) word2vec model BIBREF13.",
"The standard CBOW model has two matrices, the input word matrix $\\tilde{\\mathbf {X}} \\in \\mathbb {R}^{d\\times V}$ and the output word matrix $\\mathbf {X} \\in \\mathbb {R}^{d\\times V}$. For the $i$th word $w_i$ in $\\mathcal {V}$, let $\\mathbf {e}(w_i) \\in \\mathbb {R}^{V \\times 1}$ be a one-hot vector with 1 at index $i$ and 0s at other indexes, so that $\\tilde{\\mathbf {x}}_i = \\tilde{\\mathbf {X}}\\mathbf {e}(w_i)$ (the $i$th column of $\\tilde{\\mathbf {X}}$) is the input vector representation of word $w_i$, and $\\mathbf {x}_i = \\mathbf {X}\\mathbf {e}(w_i)$ (the $i$th column of $\\mathbf {X}$) is the output vector representation (i.e., word embedding) of word $w_i$.",
"Given a sequence of training words $w_1, w_2, ..., w_N$, the CBOW model seeks to predict a target word $w_t$ using a window of $2c$ context words surrounding $w_t$, by maximizing the following objective function:",
"The conditional probability is calculated using a softmax function:",
"where $\\mathbf {x}_t=\\mathbf {X}\\mathbf {e}(w_t)$ is the output vector representation of word $w_t$, and",
"is the sum of the input vector representations of the context words.",
"In our variant of the CBOW model, we use a separate input word matrix $\\tilde{\\mathbf {X}}_j$ for a context word at position $j, -c \\le j \\le c, j\\ne 0$. In addition, we employ weights that decay with the distances of the context words to the target word. Under these modifications, we have",
"We use the variant to build monolingual word embeddings because experiments on named entity recognition and word similarity tasks showed this variant leads to small improvements over the standard CBOW model BIBREF21."
],
[
"BIBREF17 observed that word embeddings of different languages often have similar geometric arrangements, and suggested to learn a linear mapping between the vector spaces.",
"Let $\\mathcal {D}$ be a bilingual dictionary with aligned word pairs ($w_i, v_i)_{i=1,...,D}$ between a source language $s$ and a target language $t$, where $w_i$ is a source-language word and $v_i$ is the translation of $w_i$ in the target language. Let $\\mathbf {x}_i \\in \\mathbb {R}^{d \\times 1}$ be the word embedding of the source-language word $w_i$, $\\mathbf {y}_i \\in \\mathbb {R}^{d \\times 1}$ be the word embedding of the target-language word $v_i$.",
"We find a linear mapping (matrix) $\\mathbf {M}_{t\\rightarrow s}$ such that $\\mathbf {M}_{t\\rightarrow s}\\mathbf {y}_i$ approximates $\\mathbf {x}_i$, by solving the following least squares problem using the dictionary as the training set:",
"Using $\\mathbf {M}_{t\\rightarrow s}$, for any target-language word $v$ with word embedding $\\mathbf {y}$, we can project it into the source-language embedding space as $\\mathbf {M}_{t\\rightarrow s}\\mathbf {y}$."
],
[
"To ensure that all the training instances in the dictionary $\\mathcal {D}$ contribute equally to the optimization objective in (DISPLAY_FORM14) and to preserve vector norms after projection, we have tried length normalization and orthogonal transformation for learning the bilingual mapping as in BIBREF22, BIBREF23, BIBREF24.",
"First, we normalize the source-language and target-language word embeddings to be unit vectors: $\\mathbf {x}^{\\prime }=\\frac{\\mathbf {x}}{||\\mathbf {x}||}$ for each source-language word embedding $\\mathbf {x}$, and $\\mathbf {y}^{\\prime }= \\frac{\\mathbf {y}}{||\\mathbf {y}||}$ for each target-language word embedding $\\mathbf {y}$.",
"Next, we add an orthogonality constraint to (DISPLAY_FORM14) such that $\\mathbf {M}$ is an orthogonal matrix, i.e., $\\mathbf {M}^\\mathrm {T}\\mathbf {M} = \\mathbf {I}$ where $\\mathbf {I}$ denotes the identity matrix:",
"$\\mathbf {M}^{O} _{t\\rightarrow s}$ can be computed using singular-value decomposition (SVD)."
],
[
"The mapping learned in (DISPLAY_FORM14) or (DISPLAY_FORM16) requires a seed dictionary. To relax this requirement, BIBREF25 proposed a self-learning procedure that can be combined with a dictionary-based mapping technique. Starting with a small seed dictionary, the procedure iteratively 1) learns a mapping using the current dictionary; and 2) computes a new dictionary using the learned mapping.",
"BIBREF26 proposed an unsupervised method to learn the bilingual mapping without using a seed dictionary. The method first uses a heuristic to build an initial dictionary that aligns the vocabularies of two languages, and then applies a robust self-learning procedure to iteratively improve the mapping. Another unsupervised method based on adversarial training was proposed in BIBREF27.",
"We compare the performance of different mappings for cross-lingual RE model transfer in Section SECREF45."
],
[
"For any two entities in a sentence, an RE model determines whether these two entities have a relationship, and if yes, classifies the relationship into one of the pre-defined relation types. We focus on neural network RE models since these models achieve the state-of-the-art performance for relation extraction. Most importantly, neural network RE models use word embeddings as the input, which are amenable to cross-lingual model transfer via cross-lingual word embeddings. In this paper, we use English as the source language.",
"Our neural network architecture has four layers. The first layer is the embedding layer which maps input words in a sentence to word embeddings. The second layer is a context layer which transforms the word embeddings to context-aware vector representations using a recurrent or convolutional neural network layer. The third layer is a summarization layer which summarizes the vectors in a sentence by grouping and pooling. The final layer is the output layer which returns the classification label for the relation type."
],
[
"For an English sentence with $n$ words $\\mathbf {s}=(w_1,w_2,...,w_n)$, the embedding layer maps each word $w_t$ to a real-valued vector (word embedding) $\\mathbf {x}_t\\in \\mathbb {R}^{d \\times 1}$ using the English word embedding model (Section SECREF9). In addition, for each entity $m$ in the sentence, the embedding layer maps its entity type to a real-valued vector (entity label embedding) $\\mathbf {l}_m \\in \\mathbb {R}^{d_m \\times 1}$ (initialized randomly). In our experiments we use $d=300$ and $d_m = 50$."
],
[
"Given the word embeddings $\\mathbf {x}_t$'s of the words in the sentence, the context layer tries to build a sentence-context-aware vector representation for each word. We consider two types of neural network layers that aim to achieve this."
],
[
"The first type of context layer is based on Long Short-Term Memory (LSTM) type recurrent neural networks BIBREF28, BIBREF29. Recurrent neural networks (RNNs) are a class of neural networks that operate on sequential data such as sequences of words. LSTM networks are a type of RNNs that have been invented to better capture long-range dependencies in sequential data.",
"We pass the word embeddings $\\mathbf {x}_t$'s to a forward and a backward LSTM layer. A forward or backward LSTM layer consists of a set of recurrently connected blocks known as memory blocks. The memory block at the $t$-th word in the forward LSTM layer contains a memory cell $\\overrightarrow{\\mathbf {c}}_t$ and three gates: an input gate $\\overrightarrow{\\mathbf {i}}_t$, a forget gate $\\overrightarrow{\\mathbf {f}}_t$ and an output gate $\\overrightarrow{\\mathbf {o}}_t$ ($\\overrightarrow{\\cdot }$ indicates the forward direction), which are updated as follows:",
"where $\\sigma $ is the element-wise sigmoid function and $\\odot $ is the element-wise multiplication.",
"The hidden state vector $\\overrightarrow{\\mathbf {h}}_t$ in the forward LSTM layer incorporates information from the left (past) tokens of $w_t$ in the sentence. Similarly, we can compute the hidden state vector $\\overleftarrow{\\mathbf {h}}_t$ in the backward LSTM layer, which incorporates information from the right (future) tokens of $w_t$ in the sentence. The concatenation of the two vectors $\\mathbf {h}_t = [\\overrightarrow{\\mathbf {h}}_t, \\overleftarrow{\\mathbf {h}}_t]$ is a good representation of the word $w_t$ with both left and right contextual information in the sentence."
],
[
"The second type of context layer is based on Convolutional Neural Networks (CNNs) BIBREF3, BIBREF4, which applies convolution-like operation on successive windows of size $k$ around each word in the sentence. Let $\\mathbf {z}_t = [\\mathbf {x}_{t-(k-1)/2},...,\\mathbf {x}_{t+(k-1)/2}]$ be the concatenation of $k$ word embeddings around $w_t$. The convolutional layer computes a hidden state vector",
"for each word $w_t$, where $\\mathbf {W}$ is a weight matrix and $\\mathbf {b}$ is a bias vector, and $\\tanh (\\cdot )$ is the element-wise hyperbolic tangent function."
],
[
"After the context layer, the sentence $(w_1,w_2,...,w_n)$ is represented by $(\\mathbf {h}_1,....,\\mathbf {h}_n)$. Suppose $m_1=(w_{b_1},..,w_{e_1})$ and $m_2=(w_{b_2},..,w_{e_2})$ are two entities in the sentence where $m_1$ is on the left of $m_2$ (i.e., $e_1 < b_2$). As different sentences and entities may have various lengths, the summarization layer tries to build a fixed-length vector that best summarizes the representations of the sentence and the two entities for relation type classification.",
"We divide the hidden state vectors $\\mathbf {h}_t$'s into 5 groups:",
"$G_1=\\lbrace \\mathbf {h}_{1},..,\\mathbf {h}_{b_1-1}\\rbrace $ includes vectors that are left to the first entity $m_1$.",
"$G_2=\\lbrace \\mathbf {h}_{b_1},..,\\mathbf {h}_{e_1}\\rbrace $ includes vectors that are in the first entity $m_1$.",
"$G_3=\\lbrace \\mathbf {h}_{e_1+1},..,\\mathbf {h}_{b_2-1}\\rbrace $ includes vectors that are between the two entities.",
"$G_4=\\lbrace \\mathbf {h}_{b_2},..,\\mathbf {h}_{e_2}\\rbrace $ includes vectors that are in the second entity $m_2$.",
"$G_5=\\lbrace \\mathbf {h}_{e_2+1},..,\\mathbf {h}_{n}\\rbrace $ includes vectors that are right to the second entity $m_2$.",
"We perform element-wise max pooling among the vectors in each group:",
"where $d_h$ is the dimension of the hidden state vectors. Concatenating the $\\mathbf {h}_{G_i}$'s we get a fixed-length vector $\\mathbf {h}_s=[\\mathbf {h}_{G_1},...,\\mathbf {h}_{G_5}]$."
],
[
"The output layer receives inputs from the previous layers (the summarization vector $\\mathbf {h}_s$, the entity label embeddings $\\mathbf {l}_{m_1}$ and $\\mathbf {l}_{m_2}$ for the two entities under consideration) and returns a probability distribution over the relation type labels:"
],
[
"Given the word embeddings of a sequence of words in a target language $t$, $(\\mathbf {y}_1,...,\\mathbf {y}_n)$, we project them into the English embedding space by applying the linear mapping $\\mathbf {M}_{t\\rightarrow s}$ learned in Section SECREF13: $(\\mathbf {M}_{t\\rightarrow s}\\mathbf {y}_1, \\mathbf {M}_{t\\rightarrow s}\\mathbf {y}_2,...,\\mathbf {M}_{t\\rightarrow s}\\mathbf {y}_n)$. The neural network English RE model is then applied on the projected word embeddings and the entity label embeddings (which are language independent) to perform relationship classification.",
"Note that our models do not use language-specific resources such as dependency parsers or POS taggers because these resources might not be readily available for a target language. Also our models do not use precise word position features since word positions in sentences can vary a lot across languages."
],
[
"In this section, we evaluate the performance of the proposed cross-lingual RE approach on both in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11."
],
[
"Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).",
"The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).",
"For both datasets, we create a class label “O\" to denote that the two entities under consideration do not have a relationship belonging to one of the relation types of interest."
],
[
"We build 3 neural network English RE models under the architecture described in Section SECREF4:",
"The first neural network RE model does not have a context layer and the word embeddings are directly passed to the summarization layer. We call it Pass-Through for short.",
"The second neural network RE model has a Bi-LSTM context layer. We call it Bi-LSTM for short.",
"The third neural network model has a CNN context layer with a window size 3. We call it CNN for short.",
"First we compare our neural network English RE models with the state-of-the-art RE models on the ACE05 English data. The ACE05 English data can be divided to 6 different domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and webblogs (wl). We apply the same data split in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.",
"We learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.",
"In Table TABREF40 we compare our models with the best models in BIBREF30 and BIBREF6. Our Bi-LSTM model outperforms the best model (single or ensemble) in BIBREF30 and the best single model in BIBREF6, without using any language-specific resources such as dependency parsers.",
"While the data split in the previous works was motivated by domain adaptation, the focus of this paper is on cross-lingual model transfer, and hence we apply a random data split as follows. For the source language English and each target language, we randomly select $80\\%$ of the data as the training set, $10\\%$ as the development set, and keep the remaining $10\\%$ as the test set. The sizes of the sets are summarized in Table TABREF41.",
"We report the Precision, Recall and $F_1$ score of the 3 neural network English RE models in Table TABREF42. Note that adding an additional context layer with either Bi-LSTM or CNN significantly improves the performance of our English RE model, compared with the simple Pass-Through model. Therefore, we will focus on the Bi-LSTM model and the CNN model in the subsequent experiments."
],
[
"We apply the English RE models to the 7 target languages across a variety of language families."
],
[
"The bilingual dictionary includes the most frequent target-language words and their translations in English. To determine how many word pairs are needed to learn an effective bilingual word embedding mapping for cross-lingual RE, we first evaluate the performance ($F_1$ score) of our cross-lingual RE approach on the target-language development sets with an increasing dictionary size, as plotted in Figure FIGREF35.",
"We found that for most target languages, once the dictionary size reaches 1K, further increasing the dictionary size may not improve the transfer performance. Therefore, we select the dictionary size to be 1K."
],
[
"We compare the performance of cross-lingual RE model transfer under the following bilingual word embedding mappings:",
"Regular-1K: the regular mapping learned in (DISPLAY_FORM14) using 1K word pairs;",
"Orthogonal-1K: the orthogonal mapping with length normalization learned in (DISPLAY_FORM16) using 1K word pairs (in this case we train the English RE models with the normalized English word embeddings);",
"Semi-Supervised-1K: the mapping learned with 1K word pairs and improved by the self-learning method in BIBREF25;",
"Unsupervised: the mapping learned by the unsupervised method in BIBREF26.",
"The results are summarized in Table TABREF46. The regular mapping outperforms the orthogonal mapping consistently across the target languages. While the orthogonal mapping was shown to work better than the regular mapping for the word translation task BIBREF22, BIBREF23, BIBREF24, our cross-lingual RE approach directly maps target-language word embeddings to the English embedding space without conducting word translations. Moreover, the orthogonal mapping requires length normalization, but we observed that length normalization adversely affects the performance of the English RE models (about 2.0 $F_1$ points drop).",
"We apply the vecmap toolkit to obtain the semi-supervised and unsupervised mappings. The unsupervised mapping has the lowest average accuracy over the target languages, but it does not require a seed dictionary. Among all the mappings, the regular mapping achieves the best average accuracy over the target languages using a dictionary with only 1K word pairs, and hence we adopt it for the cross-lingual RE task."
],
[
"The cross-lingual RE model transfer results for the in-house test data are summarized in Table TABREF52 and the results for the ACE05 test data are summarized in Table TABREF53, using the regular mapping learned with a bilingual dictionary of size 1K. In the tables, we also provide the performance of the supervised RE model (Bi-LSTM) for each target language, which is trained with a few hundred thousand tokens of manually annotated RE data in the target-language, and may serve as an upper bound for the cross-lingual model transfer performance.",
"Among the 2 neural network models, the Bi-LSTM model achieves a better cross-lingual RE performance than the CNN model for 6 out of the 7 target languages. In terms of absolute performance, the Bi-LSTM model achieves over $40.0$ $F_1$ scores for German, Spanish, Portuguese and Chinese. In terms of relative performance, it reaches over $75\\%$ of the accuracy of the supervised target-language RE model for German, Spanish, Italian and Portuguese. While Japanese and Arabic appear to be more difficult to transfer, it still achieves $55\\%$ and $52\\%$ of the accuracy of the supervised Japanese and Arabic RE model, respectively, without using any manually annotated RE data in Japanese/Arabic.",
"We apply model ensemble to further improve the accuracy of the Bi-LSTM model. We train 5 Bi-LSTM English RE models initiated with different random seeds, apply the 5 models on the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models. This Ensemble approach improves the single model by 0.6-1.9 $F_1$ points, except for Arabic."
],
[
"Since our approach projects the target-language word embeddings to the source-language embedding space preserving the word order, it is expected to work better for a target language that has more similar word order as the source language. This has been verified by our experiments. The source language, English, belongs to the SVO (Subject, Verb, Object) language family where in a sentence the subject comes first, the verb second, and the object third. Spanish, Italian, Portuguese, German (in conventional typology) and Chinese also belong to the SVO language family, and our approach achieves over $70\\%$ relative accuracy for these languages. On the other hand, Japanese belongs to the SOV (Subject, Object, Verb) language family and Arabic belongs to the VSO (Verb, Subject, Object) language family, and our approach achieves lower relative accuracy for these two languages."
],
[
"There are a few weakly supervised cross-lingual RE approaches. BIBREF7 and BIBREF8 project annotated English RE data to Korean to create weakly labeled training data via aligned parallel corpora. BIBREF9 translates a target-language sentence into English, performs RE in English, and then projects the relation phrases back to the target-language sentence. BIBREF10 proposes an adversarial feature adaptation approach for cross-lingual relation classification, which uses a machine translation system to translate source-language sentences into target-language sentences. Unlike the existing approaches, our approach does not require aligned parallel corpora or machine translation systems. There are also several multilingual RE approaches, e.g., BIBREF34, BIBREF35, BIBREF36, where the focus is to improve monolingual RE by jointly modeling texts in multiple languages.",
"Many cross-lingual word embedding models have been developed recently BIBREF15, BIBREF16. An important application of cross-lingual word embeddings is to enable cross-lingual model transfer. In this paper, we apply the bilingual word embedding mapping technique in BIBREF17 to cross-lingual RE model transfer. Similar approaches have been applied to other NLP tasks such as dependency parsing BIBREF37, POS tagging BIBREF38 and named entity recognition BIBREF21, BIBREF39."
],
[
"In this paper, we developed a simple yet effective neural cross-lingual RE model transfer approach, which has very low resource requirements (a small bilingual dictionary with 1K word pairs) and can be easily extended to a new language. Extensive experiments for 7 target languages across a variety of language families on both in-house and open datasets show that the proposed approach achieves very good performance (up to $79\\%$ of the accuracy of the supervised target-language RE model), which provides a strong baseline for building cross-lingual RE models with minimal resources."
],
[
"We thank Mo Yu for sharing their ACE05 English data split and the anonymous reviewers for their valuable comments."
]
],
"section_name": [
"Introduction",
"Overview of the Approach",
"Cross-Lingual Word Embeddings",
"Cross-Lingual Word Embeddings ::: Monolingual Word Embeddings",
"Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping",
"Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Length Normalization and Orthogonal Transformation",
"Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Semi-Supervised and Unsupervised Mappings",
"Neural Network RE Models",
"Neural Network RE Models ::: Embedding Layer",
"Neural Network RE Models ::: Context Layer",
"Neural Network RE Models ::: Context Layer ::: Bi-LSTM Context Layer",
"Neural Network RE Models ::: Context Layer ::: CNN Context Layer",
"Neural Network RE Models ::: Summarization Layer",
"Neural Network RE Models ::: Output Layer",
"Neural Network RE Models ::: Cross-Lingual RE Model Transfer",
"Experiments",
"Experiments ::: Datasets",
"Experiments ::: Source (English) RE Model Performance",
"Experiments ::: Cross-Lingual RE Performance",
"Experiments ::: Cross-Lingual RE Performance ::: Dictionary Size",
"Experiments ::: Cross-Lingual RE Performance ::: Comparison of Different Mappings",
"Experiments ::: Cross-Lingual RE Performance ::: Performance on Test Data",
"Experiments ::: Cross-Lingual RE Performance ::: Discussion",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"0652ee6a3d11af5276f085ea7c4a098b4fd89508"
],
"answer": [
{
"evidence": [
"First we compare our neural network English RE models with the state-of-the-art RE models on the ACE05 English data. The ACE05 English data can be divided to 6 different domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and webblogs (wl). We apply the same data split in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.",
"We learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.",
"We apply model ensemble to further improve the accuracy of the Bi-LSTM model. We train 5 Bi-LSTM English RE models initiated with different random seeds, apply the 5 models on the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models. This Ensemble approach improves the single model by 0.6-1.9 $F_1$ points, except for Arabic."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We apply the same data split in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.\n\nWe learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.",
"We train 5 Bi-LSTM English RE models initiated with different random seeds, apply the 5 models on the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"cb2f231c00f9cabcf986a656a15aefc3fe0beeb0"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Number of documents in the training/dev/test sets of the in-house and ACE05 datasets.",
"Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).",
"The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)."
],
"extractive_spans": [],
"free_form_answer": "In-house dataset consists of 3716 documents \nACE05 dataset consists of 1635 documents",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Number of documents in the training/dev/test sets of the in-house and ACE05 datasets.",
"Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).",
"The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"b0cb2a3723ff1ea75f6fdbfb4333f58603ace8c7"
],
"answer": [
{
"evidence": [
"Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).",
"The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)."
],
"extractive_spans": [
"English, German, Spanish, Italian, Japanese and Portuguese",
" English, Arabic and Chinese"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. ",
"The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"d1547b2e6fc9e3f4b029281744cb4e5e5e3ab697"
],
"answer": [
{
"evidence": [
"Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).",
"The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).",
"In this section, we evaluate the performance of the proposed cross-lingual RE approach on both in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11."
],
"extractive_spans": [
"in-house dataset",
"ACE05 dataset "
],
"free_form_answer": "",
"highlighted_evidence": [
"Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).",
"The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).",
"the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they train their own RE model?",
"How big are the datasets?",
"What languages do they experiment on?",
"What datasets are used?"
],
"question_id": [
"f6496b8d09911cdf3a9b72aec0b0be6232a6dba1",
"5c90e1ed208911dbcae7e760a553e912f8c237a5",
"3c3b4797e2b21e2c31cf117ad9e52f327721790f",
"a7d72f308444616a0befc8db7ad388b3216e2143"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Neural cross-lingual relation extraction based on bilingual word embedding mapping - target language: Portuguese, source language: English.",
"Table 1: Comparison with the state-of-the-art RE models on the ACE05 English data (S: Single Model; E: Ensemble Model).",
"Table 2: Number of documents in the training/dev/test sets of the in-house and ACE05 datasets.",
"Figure 2: Cross-lingual RE performance (F1 score) vs. dictionary size (number of bilingual word pairs for learning the mapping (4)) under the Bi-LSTM English RE model on the target-language development data.",
"Table 3: Performance of the supervised English RE models on the in-house and ACE05 English test data.",
"Table 4: Comparison of the performance (F1 score) using different mappings on the target-language development data under the Bi-LSTM model.",
"Table 5: Performance of the cross-lingual RE approach on the in-house target-language test data.",
"Table 6: Performance of the cross-lingual RE approach on the ACE05 target-language test data."
],
"file": [
"3-Figure1-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Figure2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"9-Table6-1.png"
]
} | [
"How big are the datasets?"
] | [
[
"1911.00069-Experiments ::: Datasets-0",
"1911.00069-6-Table2-1.png",
"1911.00069-Experiments ::: Datasets-1"
]
] | [
"In-house dataset consists of 3716 documents \nACE05 dataset consists of 1635 documents"
] | 123 |
1810.00663 | Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation | We propose an end-to-end deep learning model for translating free-form natural language instructions to a high-level plan for behavioral robot navigation. We use attention models to connect information from both the user instructions and a topological representation of the environment. We evaluate our model's performance on a new dataset containing 10,050 pairs of navigation instructions. Our model significantly outperforms baseline approaches. Furthermore, our results suggest that it is possible to leverage the environment map as a relevant knowledge base to facilitate the translation of free-form navigational instruction. | {
"paragraphs": [
[
"Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 .",
"Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 . For example, there are a variety of ways to describe the route in Fig. FIGREF4 (a):",
"Each fragment of a sentence within these instructions can be mapped to one or more than one navigation behaviors. For instance, assume that a robot counts with a number of primitive, navigation behaviors, such as “enter the room on the left (or on right)” , “follow the corridor”, “cross the intersection”, etc. Then, the fragment “advance forward” in a navigation instruction could be interpreted as a “follow the corridor” behavior, or as a sequence of “follow the corridor” interspersed with “cross the intersection” behaviors depending on the topology of the environment. Resolving such ambiguities often requires reasoning about “common-sense” concepts, as well as interpreting spatial information and landmarks, e.g., in sentences such as “the room on the left right before the end of the corridor” and “the room which is in the middle of two vases”.",
"In this work, we pose the problem of interpreting navigation instructions as finding a mapping (or grounding) of the commands into an executable navigation plan. While the plan is typically modeled as a formal specification of low-level motions BIBREF2 or a grammar BIBREF3 , BIBREF4 , we focus specifically on translating instructions to a high-level navigation plan based on a topological representation of the environment. This representation is a behavioral navigation graph, as recently proposed by BIBREF5 , designed to take advantage of the semantic structure typical of human environments. The nodes of the graph correspond to semantically meaningful locations for the navigation task, such as kitchens or entrances to rooms in corridors. The edges are parameterized, visuo-motor behaviors that allow a robot to navigate between neighboring nodes, as illustrated in Fig. FIGREF4 (b). Under this framework, complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world.",
"We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route, given a known starting location. The edges (behaviors) along this path serve to reach the – sometimes implicit – destination requested by the user. As in BIBREF6 , our focus is on the problem of interpreting navigation directions. We assume that a robot can realize valid navigation plans according to the graph.",
"We contribute a new end-to-end model for following directions in natural language under the behavioral navigation framework. Inspired by the information retrieval and question answering literature BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands. More specifically, the proposed model takes as input user directions in text form, the behavioral graph of the environment encoded as INLINEFORM0 node; edge; node INLINEFORM1 triplets, and the initial location of the robot in the graph. The model then predicts a set of behaviors to reach the desired destination according to the instructions and the map (Fig. FIGREF4 (c)). Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans.",
"This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.",
"We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions. We investigate both generalization to new instructions in known and in new environments. We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings."
],
[
"This section reviews relevant prior work on following navigation instructions. Readers interested in an in-depth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 .",
"Typical approaches to follow navigation commands deal with the complexity of natural language by manually parsing commands, constraining language descriptions, or using statistical machine translation methods. While manually parsing commands is often impractical, the first type of approaches are foundational: they showed that it is possible to leverage the compositionality of semantic units to interpret spatial language BIBREF12 , BIBREF13 .",
"Constraining language descriptions can reduce the size of the input space to facilitate the interpretation of user commands. For example, BIBREF14 explored using structured, symbolic language phrases for navigation. As in this earlier work, we are also interested in navigation with a topological map of the environment. However, we do not process symbolic phrases. Our aim is to translate free-form natural language instructions to a navigation plan using information from a high-level representation of the environment. This translation problem requires dealing with missing actions in navigation instructions and actions with preconditions, such as “at the end of the corridor, turn right” BIBREF15 .",
"Statistical machine translation BIBREF16 is at the core of recent approaches to enable robots to follow navigation instructions. These methods aim to automatically discover translation rules from a corpus of data, and often leverage the fact that navigation directions are composed of sequential commands. For instance, BIBREF17 , BIBREF4 , BIBREF2 used statistical machine translation to map instructions to a formal language defined by a grammar. Likewise, BIBREF18 , BIBREF0 mapped commands to spatial description clauses based on the hierarchical structure of language in the navigation problem. Our approach to machine translation builds on insights from these prior efforts. In particular, we focus on end-to-end learning for statistical machine translation due to the recent success of Neural Networks in Natural Language Processing BIBREF19 .",
"Our work is inspired by methods that reduce the task of interpreting user commands to a sequential prediction problem BIBREF20 , BIBREF21 , BIBREF22 . Similar to BIBREF21 and BIBREF22 , we use a sequence-to-sequence model to enable a mobile agent to follow routes. But instead leveraging visual information to output low-level navigation commands, we focus on using a topological map of the environment to output a high-level navigation plan. This plan is a sequence of behaviors that can be executed by a robot to reach a desired destination BIBREF5 , BIBREF6 .",
"We explore machine translation from the perspective of automatic question answering. Following BIBREF8 , BIBREF9 , our approach uses attention mechanisms to learn alignments between different input modalities. In our case, the inputs to our model are navigation instructions, a topological environment map, and the start location of the robot (Fig. FIGREF4 (c)). Our results show that the map can serve as an effective source of contextual information for the translation task. Additionally, it is possible to leverage this kind of information in an end-to-end fashion."
],
[
"Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location. We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task. This prior knowledge is a topological map, in the form of a behavioral navigation graph (Fig. FIGREF4 (b)). The nodes of the graph correspond to semantically-meaningful locations for the navigation task, and its directed edges are visuo-motor behaviors that a robot can use to move between nodes. This formulation takes advantage of the rich semantic structure behind man-made environments, resulting in a compact route representation for robot navigation.",
"Fig. FIGREF4 (c) provides a schematic view of the problem setting. The inputs are: (1) a navigation graph INLINEFORM0 , (2) the starting node INLINEFORM1 of the robot in INLINEFORM2 , and (3) a set of free-form navigation instructions INLINEFORM3 in natural language. The instructions describe a path in the graph to reach from INLINEFORM4 to a – potentially implicit – destination node INLINEFORM5 . Using this information, the objective is to predict a suitable sequence of robot behaviors INLINEFORM6 to navigate from INLINEFORM7 to INLINEFORM8 according to INLINEFORM9 . From a supervised learning perspective, the goal is then to estimate: DISPLAYFORM0 ",
"based on a dataset of input-target pairs INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 , respectively. The sequential execution of the behaviors INLINEFORM3 should replicate the route intended by the instructions INLINEFORM4 . We assume no prior linguistic knowledge. Thus, translation approaches have to cope with the semantics and syntax of the language by discovering corresponding patterns in the data."
],
[
"We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph, and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 . In general, each behaviors includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes.",
"We consider 7 types of semantic locations, 11 types of behaviors, and 20 different types of landmarks. A location in the navigation graph can be a room, a lab, an office, a kitchen, a hall, a corridor, or a bathroom. These places are labeled with unique tags, such as \"room-1\" or \"lab-2\", except for bathrooms and kitchens which people do not typically refer to by unique names when describing navigation routes.",
"Table TABREF7 lists the navigation behaviors that we consider in this work. These behaviors can be described in reference to visual landmarks or objects, such as paintings, book shelfs, tables, etc. As in Fig. FIGREF4 , maps might contain multiple landmarks of the same type. Please see the supplementary material (Appendix A) for more details."
],
[
"We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an end-to-end fashion. Our proposed model builds on the sequence-to-sequence translation model of BIBREF23 , which computes a soft-alignment between a source sequence (natural language instructions in our case) and the corresponding target sequence (navigation behaviors).",
"As one of our main contributions, we augment the neural machine translation approach of BIBREF23 to take as input not only natural language instructions, but also the corresponding behavioral navigation graph INLINEFORM0 of the environment where navigation should take place. Specifically, at each step, the graph INLINEFORM1 operates as a knowledge base that the model can access to obtain information about path connectivity, facilitating the grounding of navigation commands.",
"Figure FIGREF8 shows the structure of the proposed model for interpreting navigation instructions. The model consists of six layers:",
"Embed layer: The model first encodes each word and symbol in the input sequences INLINEFORM0 and INLINEFORM1 into fixed-length representations. The instructions INLINEFORM2 are embedded into a 100-dimensional pre-trained GloVe vector BIBREF24 . Each of the triplet components, INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 of the graph INLINEFORM6 , are one-hot encoded into vectors of dimensionality INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 are the number of nodes and edges in INLINEFORM10 , respectively.",
"Encoder layer: The model then uses two bidirectional Gated Recurrent Units (GRUs) BIBREF25 to independently process the information from INLINEFORM0 and INLINEFORM1 , and incorporate contextual cues from the surrounding embeddings in each sequence. The outputs of the encoder layer are the matrix INLINEFORM2 for the navigational commands and the matrix INLINEFORM3 for the behavioral graph, where INLINEFORM4 is the hidden size of each GRU, INLINEFORM5 is the number of words in the instruction INLINEFORM6 , and INLINEFORM7 is the number of triplets in the graph INLINEFORM8 .",
"Attention layer: Matrices INLINEFORM0 and INLINEFORM1 generated by the encoder layer are combined using an attention mechanism. We use one-way attention because the graph contains information about the whole environment, while the instruction has (potentially incomplete) local information about the route of interest. The use of attention provides our model with a two-step strategy to interpret commands. This resembles the way people find paths on a map: first, relevant parts on the map are selected according to their affinity to each of the words in the input instruction (attention layer); second, the selected parts are connected to assemble a valid path (decoder layer). More formally, let INLINEFORM2 ( INLINEFORM3 ) be the INLINEFORM4 -th row of INLINEFORM5 , and INLINEFORM6 ( INLINEFORM7 ) the INLINEFORM8 -th row of INLINEFORM9 . We use each encoded triplet INLINEFORM10 in INLINEFORM11 to calculate its associated attention distribution INLINEFORM12 over all the atomic instructions INLINEFORM13 : DISPLAYFORM0 ",
"where the matrix INLINEFORM0 serves to combine the different sources of information INLINEFORM1 and INLINEFORM2 . Each component INLINEFORM3 of the attention distributions INLINEFORM4 quantifies the affinity between the INLINEFORM5 -th triplet in INLINEFORM6 and the INLINEFORM7 -th word in the corresponding input INLINEFORM8 .",
"The model then uses each attention distribution INLINEFORM0 to obtain a weighted sum of the encodings of the words in INLINEFORM1 , according to their relevance to the corresponding triplet INLINEFORM2 . This results in L attention vectors INLINEFORM3 , INLINEFORM4 .",
"The final step in the attention layer concatenates each INLINEFORM0 with INLINEFORM1 to generate the outputs INLINEFORM2 , INLINEFORM3 . Following BIBREF8 , we include the encoded triplet INLINEFORM4 in the output tensor INLINEFORM5 of this layer to prevent early summaries of relevant map information.",
"FC layer: The model reduces the dimensionality of each individual vector INLINEFORM0 from INLINEFORM1 to INLINEFORM2 with a fully-connected (FC) layer. The resulting L vectors are output to the next layer as columns of a context matrix INLINEFORM3 .",
"Decoder layer: After the FC layer, the model predicts likelihoods over the sequence of behaviors that correspond to the input instructions with a GRU network. Without loss of generality, consider the INLINEFORM0 -th recurrent cell in the GRU network. This cell takes two inputs: a hidden state vector INLINEFORM1 from the prior cell, and a one-hot embedding of the previous behavior INLINEFORM2 that was predicted by the model. Based on these inputs, the GRU cell outputs a new hidden state INLINEFORM3 to compute likelihoods for the next behavior. These likelihoods are estimated by combining the output state INLINEFORM4 with relevant information from the context INLINEFORM5 : DISPLAYFORM0 ",
" where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. The attention vector INLINEFORM3 in Eq. () quantifies the affinity of INLINEFORM4 with respect to each of the columns INLINEFORM5 of INLINEFORM6 , where INLINEFORM7 . The attention vector also helps to estimate a dynamic contextual vector INLINEFORM8 that the INLINEFORM9 -th GRU cell uses to compute logits for the next behavior: DISPLAYFORM0 ",
"with INLINEFORM0 trainable parameters. Note that INLINEFORM1 includes a value for each of the pre-defined behaviors in the graph INLINEFORM2 , as well as for a special “stop” symbol to identify the end of the output sequence.",
"Output layer: The final layer of the model searches for a valid sequence of robot behaviors based on the robot's initial node, the connectivity of the graph INLINEFORM0 , and the output logits from the previous decoder layer. Again, without loss of generality, consider the INLINEFORM1 -th behavior INLINEFORM2 that is finally predicted by the model. The search for this behavior is implemented as: DISPLAYFORM0 ",
"with INLINEFORM0 a masking function that takes as input the graph INLINEFORM1 and the node INLINEFORM2 that the robot reaches after following the sequence of behaviors INLINEFORM3 previously predicted by the model. The INLINEFORM4 function returns a vector of the same dimensionality as the logits INLINEFORM5 , but with zeros for the valid behaviors after the last location INLINEFORM6 and for the special stop symbol, and INLINEFORM7 for any invalid predictions according to the connectivity of the behavioral navigation graph."
],
[
"We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.",
"As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest has one). The dataset contains two test set variants:",
"While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort."
],
[
"This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language. We provide both quantitative and qualitative results."
],
[
"While computing evaluation metrics, we only consider the behaviors present in the route because they are sufficient to recover the high-level navigation plan from the graph. Our metrics treat each behavior as a single token. For example, the sample plan “R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3\" is considered to have 5 tokens, each corresponding to one of its behaviors (“oor\", “cf\", “lt\", “cf\", “iol\"). In this plan, “R-1\",“C-1\", “C-0\", and “O-3\" are symbols for locations (nodes) in the graph.",
"We compare the performance of translation approaches based on four metrics:",
"[align=left,leftmargin=0em,labelsep=0.4em,font=]",
"As in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.",
"The harmonic average of the precision and recall over all the test set BIBREF26 .",
"The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .",
"GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0."
],
[
"We compare the proposed approach for translating natural language instructions into a navigation plan against alternative deep-learning models:",
"[align=left,leftmargin=0em,labelsep=0.4em,font=]",
"The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path.",
"To test the impact of using the behavioral graphs as an extra input to our translation model, we implemented a version of our approach that only takes natural language instructions as input. In this ablation model, the output of the bidirectional GRU that encodes the input instruction INLINEFORM0 is directly fed to the decoder layer. This model does not have the attention and FC layers described in Sec. SECREF4 , nor uses the masking function in the output layer.",
"This model is the same as the previous Ablation model, but with the masking function in the output layer."
],
[
"We pre-processed the inputs to the various models that are considered in our experiment. In particular, we lowercased, tokenized, spell-checked and lemmatized the input instructions in text-form using WordNet BIBREF28 . We also truncated the graphs to a maximum of 300 triplets, and the navigational instructions to a maximum of 150 words. Only 6.4% (5.4%) of the unique graphs in the training (validation) set had more than 300 triplets, and less than 0.15% of the natural language instructions in these sets had more than 150 tokens.",
"The dimensionality of the hidden state of the GRU networks was set to 128 in all the experiments. In general, we used 12.5% of the training set as validation for choosing models' hyper-parameters. In particular, we used dropout after the encoder and the fully-connected layers of the proposed model to reduce overfitting. Best performance was achieved with a dropout rate of 0.5 and batch size equal to 256. We also used scheduled sampling BIBREF29 at training time for all models except the baseline.",
"We input the triplets from the graph to our proposed model in alphabetical order, and consider a modification where the triplets that surround the start location of the robot are provided first in the input graph sequence. We hypothesized that such rearrangement would help identify the starting location (node) of the robot in the graph. In turn, this could facilitate the prediction of correct output sequences. In the remaining of the paper, we refer to models that were provided a rearranged graph, beginning with the starting location of the robot, as models with “Ordered Triplets”."
],
[
"Table TABREF28 shows the performance of the models considered in our evaluation on both test sets. The next two sections discuss the results in detail.",
"First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.",
"We can also observe from Table TABREF28 that the masking function of Eq. ( EQREF12 ) tends to increase performance in the Test-Repeated Set by constraining the output sequence to a valid set of navigation behaviors. For the Ablation model, using the masking function leads to about INLINEFORM0 increase in EM and GM accuracy. For the proposed model (with or without reordering the graph triplets), the increase in accuracy is around INLINEFORM1 . Note that the impact of the masking function is less evident in terms of the F1 score because this metric considers if a predicted behavior exists in the ground truth navigation plan, irrespective of its specific position in the output sequence.",
"The results in the last four rows of Table TABREF28 suggest that ordering the graph triplets can facilitate predicting correct navigation plans in previously seen environments. Providing the triplets that surround the starting location of the robot first to the model leads to a boost of INLINEFORM0 in EM and GM performance. The rearrangement of the graph triplets also helps to reduce ED and increase F1.",
"Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models.",
"The previous section evaluated model performance on new instructions (and corresponding navigation plans) for environments that were previously seen at training time. Here, we examine whether the trained models succeed on environments that are completely new.",
"The evaluation on the Test-New Set helps understand the generalization capabilities of the models under consideration. This experiment is more challenging than the one in the previous section, as can be seen in performance drops in Table TABREF28 for the new environments. Nonetheless, the insights from the previous section still hold: masking in the output layer and reordering the graph triplets tend to increase performance.",
"Even though the results in Table TABREF28 suggest that there is room for future work on decoding natural language instructions, our model still outperforms the baselines by a clear margin in new environments. For instance, the difference between our model and the second best model in the Test-New set is about INLINEFORM0 EM and GM. Note that the average number of actions in the ground truth output sequences is 7.07 for the Test-New set. Our model's predictions are just INLINEFORM1 edits off on average from the correct navigation plans."
],
[
"This section discusses qualitative results to better understand how the proposed model uses the navigation graph.",
"We analyze the evolution of the attention weights INLINEFORM0 in Eq. () to assess if the decoder layer of the proposed model is attending to the correct parts of the behavioral graph when making predictions. Fig FIGREF33 (b) shows an example of the resulting attention map for the case of a correct prediction. In the Figure, the attention map is depicted as a scaled and normalized 2D array of color codes. Each column in the array shows the attention distribution INLINEFORM1 used to generate the predicted output at step INLINEFORM2 . Consequently, each row in the array represents a triplet in the corresponding behavioral graph. This graph consists of 72 triplets for Fig FIGREF33 (b).",
"We observe a locality effect associated to the attention coefficients corresponding to high values (bright areas) in each column of Fig FIGREF33 (b). This suggests that the decoder is paying attention to graph triplets associated to particular neighborhoods of the environment in each prediction step. We include additional attention visualizations in the supplementary Appendix, including cases where the dynamics of the attention distribution are harder to interpret.",
"All the routes in our dataset are the shortest paths from a start location to a given destination. Thus, we collected a few additional natural language instructions to check if our model was able to follow navigation instructions describing sub-optimal paths. One such example is shown in Fig. FIGREF37 , where the blue route (shortest path) and the red route (alternative path) are described by:",
"[leftmargin=*, labelsep=0.2em, itemsep=0em]",
"“Go out the office and make a left. Turn right at the corner and go down the hall. Make a right at the next corner and enter the kitchen in front of table.”",
"“Exit the room 0 and turn right, go to the end of the corridor and turn left, go straight to the end of the corridor and turn left again. After passing bookshelf on your left and table on your right, Enter the kitchen on your right.”",
"For both routes, the proposed model was able to predict the correct sequence of navigation behaviors. This result suggests that the model is indeed using the input instructions and is not just approximating shortest paths in the behavioral graph. Other examples on the prediction of sub-obtimal paths are described in the Appendix."
],
[
"This work introduced behavioral navigation through free-form natural language instructions as a challenging and a novel task that falls at the intersection of natural language processing and robotics. This problem has a range of interesting cross-domain applications, including information retrieval.",
"We proposed an end-to-end system to translate user instructions to a high-level navigation plan. Our model utilized an attention mechanism to merge relevant information from the navigation instructions with a behavioral graph of the environment. The model then used a decoder to predict a sequence of navigation behaviors that matched the input commands.",
"As part of this effort, we contributed a new dataset of 11,051 pairs of user instructions and navigation plans from 100 different environments. Our model achieved the best performance in this dataset in comparison to a two-step baseline approach for interpreting navigation instructions, and a sequence-to-sequence model that does not consider the behavioral graph. Our quantitative and qualitative results suggest that attention mechanisms can help leverage the behavioral graph as a relevant knowledge base to facilitate the translation of free-form navigation instructions. Overall, our approach demonstrated practical form of learning for a complex and useful task. In future work, we are interested in investigating mechanisms to improve generalization to new environments. For example, pointer and graph networks BIBREF30 , BIBREF31 are a promising direction to help supervise translation models and predict motion behaviors."
],
[
"The Toyota Research Institute (TRI) provided funds to assist with this research, but this paper solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. This work is also partially funded by Fondecyt grant 1181739, Conicyt, Chile. The authors would also like to thank Gabriel Sepúlveda for his assistance with parts of this project."
]
],
"section_name": [
"Introduction",
"Related work",
"Problem Formulation",
"The Behavioral Graph: A Knowledge Base For Navigation",
"Approach",
"Dataset",
"Experiments",
"Evaluation Metrics",
"Models Used in the Evaluation",
"Implementation Details",
"Quantitative Evaluation",
"Qualitative Evaluation",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"a38c1c344ccb96f3ff31ef6c371b2260c3d8db43"
],
"answer": [
{
"evidence": [
"This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.",
"While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.",
"While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"a6c1cfab37b756275380368b1d9f8cdb8929f57e"
],
"answer": [
{
"evidence": [
"First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.",
"Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models."
],
"extractive_spans": [
"increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively",
"over INLINEFORM0 increase in EM and GM between our model and the next best two models"
],
"free_form_answer": "",
"highlighted_evidence": [
"First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.",
"Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"3f26b75051da0d7d675aa8f3a519f596e587b5a1"
],
"answer": [
{
"evidence": [
"The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path."
],
"extractive_spans": [],
"free_form_answer": "the baseline where path generation uses a standard sequence-to-sequence model augmented with attention mechanism and path verification uses depth-first search",
"highlighted_evidence": [
"The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"b61847a85ff71d95db307804edaf69a7e8fbd569"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Performance of different models on the test datasets. EM and GM report percentages, and ED corresponds to average edit distance. The symbol ↑ indicates that higher results are better in the corresponding column; ↓ indicates that lower is better."
],
"extractive_spans": [],
"free_form_answer": "For test-repeated set, EM score of 61.17, F1 of 93.54, ED of 0.75 and GM of 61.36. For test-new set, EM score of 41.71, F1 of 91.02, ED of 1.22 and GM of 41.81",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Performance of different models on the test datasets. EM and GM report percentages, and ED corresponds to average edit distance. The symbol ↑ indicates that higher results are better in the corresponding column; ↓ indicates that lower is better."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2392fbdb4eb273ea6706198fcfecc097f50785c9"
],
"answer": [
{
"evidence": [
"We compare the performance of translation approaches based on four metrics:",
"[align=left,leftmargin=0em,labelsep=0.4em,font=]",
"As in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.",
"The harmonic average of the precision and recall over all the test set BIBREF26 .",
"The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .",
"GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0."
],
"extractive_spans": [],
"free_form_answer": "exact match, f1 score, edit distance and goal match",
"highlighted_evidence": [
"We compare the performance of translation approaches based on four metrics:\n\n[align=left,leftmargin=0em,labelsep=0.4em,font=]\n\nAs in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.\n\nThe harmonic average of the precision and recall over all the test set BIBREF26 .\n\nThe minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .\n\nGM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"16c3f79289f6601abd20ee058392d5dd7d0f0485"
],
"answer": [
{
"evidence": [
"This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"069af51dc41d41489fd579ea994c1b247827b4e5"
],
"answer": [
{
"evidence": [
"This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.",
"We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.",
"As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest has one). The dataset contains two test set variants:",
"While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort."
],
"extractive_spans": [],
"free_form_answer": "using Amazon Mechanical Turk using simulated environments with topological maps",
"highlighted_evidence": [
"This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. ",
"We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.\n\nAs shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest has one). The dataset contains two test set variants:\n\nWhile the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort.\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"33d46a08d2e593401d4ecb1f77de6b81ad8a70d1"
],
"answer": [
{
"evidence": [
"While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort."
],
"extractive_spans": [],
"free_form_answer": "english language",
"highlighted_evidence": [
"While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
"",
"",
""
],
"question": [
"Did the collection process use a WoZ method?",
"By how much did their model outperform the baseline?",
"What baselines did they compare their model with?",
"What was the performance of their model?",
"What evaluation metrics are used?",
"Did the authors use a crowdsourcing platform?",
"How were the navigation instructions collected?",
"What language is the experiment done in?"
],
"question_id": [
"aa800b424db77e634e82680f804894bfa37f2a34",
"fbd47705262bfa0a2ba1440a2589152def64cbbd",
"51aaec4c511d96ef5f5c8bae3d5d856d8bc288d3",
"3aee5c856e0ee608a7664289ffdd11455d153234",
"f42d470384ca63a8e106c7caf1cb59c7b92dbc27",
"29bdd1fb20d013b23b3962a065de3a564b14f0fb",
"25b2ae2d86b74ea69b09c140a41593c00c47a82b",
"fd7f13b63f6ba674f5d5447b6114a201fe3137cb"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Map of an environment (a), its (partial) behavioral navigation graph (b), and the problem setting of interest (c). The red part of (b) corresponds to the representation of the route highlighted in blue in (a). The codes “oo-left”, “oo-right”, “cf”, “left-io”, and “right-io” correspond to the behaviors “go out and turn left”, “go out and turn right”, “follow the corridor”, “enter the room on left”, and “enter office on right”, respectively.",
"Table 1: Behaviors (edges) of the navigation graphs considered in this work. The direction <d> can be left or right.",
"Figure 2: Model overview. The model contains six layers, takes the input of behavioral graph representation, free-form instruction, and the start location (yellow block marked as START in the decoder layer) and outputs a sequence of behaviors.",
"Table 2: Dataset statistics. “# Single” indicates the number of navigation plans with a single natural language instruction. “# Double” is the number of plans with two different instructions. The total number of plans is (# Single) × 2(# Double).",
"Table 3: Performance of different models on the test datasets. EM and GM report percentages, and ED corresponds to average edit distance. The symbol ↑ indicates that higher results are better in the corresponding column; ↓ indicates that lower is better.",
"Figure 3: Visualization of the attention weights of the decoder layer. The color-coded and numbered regions on the map (left) correspond to the triplets that are highlighted with the corresponding color in the attention map (right).",
"Figure 4: An example of two different navigation paths between the same pair of start and goal locations."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"5-Figure2-1.png",
"6-Table2-1.png",
"8-Table3-1.png",
"8-Figure3-1.png",
"9-Figure4-1.png"
]
} | [
"What baselines did they compare their model with?",
"What was the performance of their model?",
"What evaluation metrics are used?",
"How were the navigation instructions collected?",
"What language is the experiment done in?"
] | [
[
"1810.00663-Models Used in the Evaluation-2"
],
[
"1810.00663-8-Table3-1.png"
],
[
"1810.00663-Evaluation Metrics-3",
"1810.00663-Evaluation Metrics-1",
"1810.00663-Evaluation Metrics-5",
"1810.00663-Evaluation Metrics-4",
"1810.00663-Evaluation Metrics-2",
"1810.00663-Evaluation Metrics-6"
],
[
"1810.00663-Dataset-0",
"1810.00663-Dataset-1",
"1810.00663-Introduction-6",
"1810.00663-Dataset-2"
],
[
"1810.00663-Dataset-2"
]
] | [
"the baseline where path generation uses a standard sequence-to-sequence model augmented with attention mechanism and path verification uses depth-first search",
"For test-repeated set, EM score of 61.17, F1 of 93.54, ED of 0.75 and GM of 61.36. For test-new set, EM score of 41.71, F1 of 91.02, ED of 1.22 and GM of 41.81",
"exact match, f1 score, edit distance and goal match",
"using Amazon Mechanical Turk using simulated environments with topological maps",
"english language"
] | 125 |
1809.05752 | Analysis of Risk Factor Domains in Psychosis Patient Health Records | Readmission after discharge from a hospital is disruptive and costly, regardless of the reason. However, it can be particularly problematic for psychiatric patients, so predicting which patients may be readmitted is critically important but also very difficult. Clinical narratives in psychiatric electronic health records (EHRs) span a wide range of topics and vocabulary; therefore, a psychiatric readmission prediction model must begin with a robust and interpretable topic extraction component. We created a data pipeline for using document vector similarity metrics to perform topic extraction on psychiatric EHR data in service of our long-term goal of creating a readmission risk classifier. We show initial results for our topic extraction model and identify additional features we will be incorporating in the future. | {
"paragraphs": [
[
"Psychotic disorders typically emerge in late adolescence or early adulthood BIBREF0 , BIBREF1 and affect approximately 2.5-4% of the population BIBREF2 , BIBREF3 , making them one of the leading causes of disability worldwide BIBREF4 . A substantial proportion of psychiatric inpatients are readmitted after discharge BIBREF5 . Readmissions are disruptive for patients and families, and are a key driver of rising healthcare costs BIBREF6 , BIBREF7 . Reducing readmission risk is therefore a major unmet need of psychiatric care. Developing clinically implementable machine learning tools to enable accurate assessment of risk factors associated with readmission offers opportunities to inform the selection of treatment interventions and implement appropriate preventive measures.",
"In psychiatry, traditional strategies to study readmission risk factors rely on clinical observation and manual retrospective chart review BIBREF8 , BIBREF9 . This approach, although benefitting from clinical expertise, does not scale well for large data sets, is effort-intensive, and lacks automation. An efficient, more robust, and cheaper NLP-based alternative approach has been developed and met with some success in other medical fields BIBREF10 . However, this approach has seldom been applied in psychiatry because of the unique characteristics of psychiatric medical record content.",
"There are several challenges for topic extraction when dealing with clinical narratives in psychiatric EHRs. First, the vocabulary used is highly varied and context-sensitive. A patient may report “feeling `really great and excited'\" – symptoms of mania – without any explicit mention of keywords that differ from everyday vocabulary. Also, many technical terms in clinical narratives are multiword expressions (MWEs) such as `obsessive body image', `linear thinking', `short attention span', or `panic attack'. These phrasemes are comprised of words that in isolation do not impart much information in determining relatedness to a given topic but do in the context of the expression.",
"Second, the narrative structure in psychiatric clinical narratives varies considerably in how the same phenomenon can be described. Hallucinations, for example, could be described as “the patient reports auditory hallucinations,\" or “the patient has been hearing voices for several months,\" amongst many other possibilities.",
"Third, phenomena can be directly mentioned without necessarily being relevant to the patient specifically. Psychosis patient discharge summaries, for instance, can include future treatment plans (e.g. “Prevent relapse of a manic or major depressive episode.\", “Prevent recurrence of psychosis.\") containing vocabulary that at the word-level seem strongly correlated with readmission risk. Yet at the paragraph-level these do not indicate the presence of a readmission risk factor in the patient and in fact indicate the absence of a risk factor that was formerly present.",
"Lastly, given the complexity of phenotypic assessment in psychiatric illnesses, patients with psychosis exhibit considerable differences in terms of illness and symptom presentation. The constellation of symptoms leads to various diagnoses and comorbidities that can change over time, including schizophrenia, schizoaffective disorder, bipolar disorder with psychosis, and substance use induced psychosis. Thus, the lexicon of words and phrases used in EHRs differs not only across diagnoses but also across patients and time.",
"Taken together, these factors make topic extraction a difficult task that cannot be accomplished by keyword search or other simple text-mining techniques.",
"To identify specific risk factors to focus on, we not only reviewed clinical literature of risk factors associated with readmission BIBREF11 , BIBREF12 , but also considered research related to functional remission BIBREF13 , forensic risk factors BIBREF14 , and consulted clinicians involved with this project. Seven risk factor domains – Appearance, Mood, Interpersonal, Occupation, Thought Content, Thought Process, and Substance – were chosen because they are clinically relevant, consistent with literature, replicable across data sets, explainable, and implementable in NLP algorithms.",
"In our present study, we evaluate multiple approaches to automatically identify which risk factor domains are associated with which paragraphs in psychotic patient EHRs. We perform this study in support of our long-term goal of creating a readmission risk classifier that can aid clinicians in targeting individual treatment interventions and assessing patient risk of harm (e.g. suicide risk, homicidal risk). Unlike other contemporary approaches in machine learning, we intend to create a model that is clinically explainable and flexible across training data while maintaining consistent performance.",
"To incorporate clinical expertise in the identification of risk factor domains, we undertake an annotation project, detailed in section 3.1. We identify a test set of over 1,600 EHR paragraphs which a team of three domain-expert clinicians annotate paragraph-by-paragraph for relevant risk factor domains. Section 3.2 describes the results of this annotation task. We then use the gold standard from the annotation project to assess the performance of multiple neural classification models trained exclusively on Term Frequency – Inverse Document Frequency (TF-IDF) vectorized EHR data, described in section 4. To further improve the performance of our model, we incorporate domain-relevant MWEs identified using all in-house data."
],
[
"McCoy et al. mccoy2015clinical constructed a corpus of web data based on the Research Domain Criteria (RDoC) BIBREF15 , and used this corpus to create a vector space document similarity model for topic extraction. They found that the `negative valence' and `social' RDoC domains were associated with readmission. Using web data (in this case data retrieved from the Bing API) to train a similarity model for EHR texts is problematic since it differs from the target data in both structure and content. Based on reconstruction of the procedure, we conclude that many of the informative MWEs critical to understanding the topics of paragraphs in EHRs are not captured in the web data. Additionally, RDoC is by design a generalized research construct to describe the entire spectrum of mental disorders and does not include domains that are based on observation or causes of symptoms. Important indicators within EHRs of patient health, like appearance or occupation, are not included in the RDoC constructs.",
"Rumshisky et al. rumshisky2016predicting used a corpus of EHRs from patients with a primary diagnosis of major depressive disorder to create a 75-topic LDA topic model that they then used in a readmission prediction classifier pipeline. Like with McCoy et al. mccoy2015clinical, the data used to train the LDA model was not ideal as the generalizability of the data was narrow, focusing on only one disorder. Their model achieved readmission prediction performance with an area under the curve of .784 compared to a baseline of .618. To perform clinical validation of the topics derived from the LDA model, they manually evaluated and annotated the topics, identifying the most informative vocabulary for the top ten topics. With their training data, they found the strongest coherence occurred in topics involving substance use, suicidality, and anxiety disorders. But given the unsupervised nature of the LDA clustering algorithm, the topic coherence they observed is not guaranteed across data sets."
],
[
"[2]The vast majority of patients in our target cohort are",
"dependents on a parental private health insurance plan.",
"Our target data set consists of a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital. OnTrackTM is an outpatient program, focusing on treating adults ages 18 to 30 who are experiencing their first episodes of psychosis. The length of time in the program varies depending on patient improvement and insurance coverage, with an average of two to three years. The program focuses primarily on early intervention via individual therapy, group therapy, medication evaluation, and medication management. See Table TABREF2 for a demographic breakdown of the 220 patients, for which we have so far extracted approximately 240,000 total EHR paragraphs spanning from 2011 to 2014 using Meditech, the software employed by McLean for storing and organizing EHR data.",
"These patients are part of a larger research cohort of approximately 1,800 psychosis patients, which will allow us to connect the results of this EHR study with other ongoing research studies incorporating genetic, cognitive, neurobiological, and functional outcome data from this cohort.",
"We also use an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. These records are highly comparable in style and vocabulary to our target data set. The corpus consists of discharge summaries, encounter notes, and visit notes from approximately 30,000 patients admitted to the system's hospitals with psychiatric diagnoses and symptoms. This breadth of data captures a wide range of clinical narratives, creating a comprehensive foundation for topic extraction.",
"After using the RPDR query tool to extract EHR paragraphs from the RPDR database, we created a training corpus by categorizing the extracted paragraphs according to their risk factor domain using a lexicon of 120 keywords that were identified by the clinicians involved in this project. Certain domains – particularly those involving thoughts and other abstract concepts – are often identifiable by MWEs rather than single words. The same clinicians who identified the keywords manually examined the bigrams and trigrams with the highest TF-IDF scores for each domain in the categorized paragraphs, identifying those which are conceptually related to the given domain. We then used this lexicon of 775 keyphrases to identify more relevant training paragraphs in RPDR and treat them as (non-stemmed) unigrams when generating the matrix. By converting MWEs such as `shortened attention span', `unusual motor activity', `wide-ranging affect', or `linear thinking' to non-stemmed unigrams, the TF-IDF score (and therefore the predictive value) of these terms is magnified. In total, we constructed a corpus of roughly 100,000 paragraphs consisting of 7,000,000 tokens for training our model."
],
[
"In order to evaluate our models, we annotated 1,654 paragraphs selected from the 240,000 paragraphs extracted from Meditech with the clinically relevant domains described in Table TABREF3 . The annotation task was completed by three licensed clinicians. All paragraphs were removed from the surrounding EHR context to ensure annotators were not influenced by the additional contextual information. Our domain classification models consider each paragraph independently and thus we designed the annotation task to mirror the information available to the models.",
"The annotators were instructed to label each paragraph with one or more of the seven risk factor domains. In instances where more than one domain was applicable, annotators assigned the domains in order of prevalence within the paragraph. An eighth label, `Other', was included if a paragraph was ambiguous, uninterpretable, or about a domain not included in the seven risk factor domains (e.g. non-psychiatric medical concerns and lab results). The annotations were then reviewed by a team of two clinicians who adjudicated collaboratively to create a gold standard. The gold standard and the clinician-identified keywords and MWEs have received IRB approval for release to the community. They are available as supplementary data to this paper."
],
[
"Inter-annotator agreement (IAA) was assessed using a combination of Fleiss's Kappa (a variant of Scott's Pi that measures pairwise agreement for annotation tasks involving more than two annotators) BIBREF16 and Cohen's Multi-Kappa as proposed by Davies and Fleiss davies1982measuring. Table TABREF6 shows IAA calculations for both overall agreement and agreement on the first (most important) domain only. Following adjudication, accuracy scores were calculated for each annotator by evaluating their annotations against the gold standard.",
"Overall agreement was generally good and aligned almost exactly with the IAA on the first domain only. Out of the 1,654 annotated paragraphs, 671 (41%) had total agreement across all three annotators. We defined total agreement for the task as a set-theoretic complete intersection of domains for a paragraph identified by all annotators. 98% of paragraphs in total agreement involved one domain. Only 35 paragraphs had total disagreement, which we defined as a set-theoretic null intersection between the three annotators. An analysis of the 35 paragraphs with total disagreement showed that nearly 30% included the term “blunted/restricted\". In clinical terminology, these terms can be used to refer to appearance, affect, mood, or emotion. Because the paragraphs being annotated were extracted from larger clinical narratives and examined independently of any surrounding context, it was difficult for the annotators to determine the most appropriate domain. This lack of contextual information resulted in each annotator using a different `default' label: Appearance, Mood, and Other. During adjudication, Other was decided as the most appropriate label unless the paragraph contained additional content that encompassed other domains, as it avoids making unnecessary assumptions. [3]Suicidal ideation [4]Homicidal ideation [5]Ethyl alcohol and ethanol",
"A Fleiss's Kappa of 0.575 lies on the boundary between `Moderate' and `Substantial' agreement as proposed by Landis and Koch landis1977measurement. This is a promising indication that our risk factor domains are adequately defined by our present guidelines and can be employed by clinicians involved in similar work at other institutions.",
"The fourth column in Table TABREF6 , Mean Accuracy, was calculated by averaging the three annotator accuracies as evaluated against the gold standard. This provides us with an informative baseline of human parity on the domain classification task.",
"[6]Rectified Linear Units, INLINEFORM0 BIBREF17 [7]Adaptive Moment Estimation BIBREF18 "
],
[
"Figure FIGREF8 illustrates the data pipeline for generating our training and testing corpora, and applying them to our classification models.",
"We use the TfidfVectorizer tool included in the scikit-learn machine learning toolkit BIBREF19 to generate our TF-IDF vector space models, stemming tokens with the Porter Stemmer tool provided by the NLTK library BIBREF20 , and calculating TF-IDF scores for unigrams, bigrams, and trigrams. Applying Singular Value Decomposition (SVD) to the TF-IDF matrix, we reduce the vector space to 100 dimensions, which Zhang et al. zhang2011comparative found to improve classifier performance.",
"Starting with the approach taken by McCoy et al. mccoy2015clinical, who used aggregate cosine similarity scores to compute domain similarity directly from their TF-IDF vector space model, we extend this method by training a suite of three-layer multilayer perceptron (MLP) and radial basis function (RBF) neural networks using a variety of parameters to compare performance. We employ the Keras deep learning library BIBREF21 using a TensorFlow backend BIBREF22 for this task. The architectures of our highest performing MLP and RBF models are summarized in Table TABREF7 . Prototype vectors for the nodes in the hidden layer of our RBF model are selected via k-means clustering BIBREF23 on each domain paragraph megadocument individually. The RBF transfer function for each hidden layer node is assigned the same width, which is based off the maximum Euclidean distance between the centroids that were computed using k-means.",
"To prevent overfitting to the training data, we utilize a dropout rate BIBREF24 of 0.2 on the input layer of all models and 0.5 on the MLP hidden layer.",
"Since our classification problem is multiclass, multilabel, and open-world, we employ seven nodes with sigmoid activations in the output layer, one for each risk factor domain. This allows us to identify paragraphs that fall into more than one of the seven domains, as well as determine paragraphs that should be classified as Other. Unlike the traditionally used softmax activation function, which is ideal for single-label, closed-world classification tasks, sigmoid nodes output class likelihoods for each node independently without the normalization across all classes that occurs in softmax.",
"We find that the risk factor domains vary in the degree of homogeneity of language used, and as such certain domains produce higher similarity scores, on average, than others. To account for this, we calculate threshold similarity scores for each domain using the formula min=avg(sim)+ INLINEFORM0 * INLINEFORM1 (sim), where INLINEFORM2 is standard deviation and INLINEFORM3 is a constant, which we set to 0.78 for our MLP model and 1.2 for our RBF model through trial-and-error. Employing a generalized formula as opposed to manually identifying threshold similarity scores for each domain has the advantage of flexibility in regards to the target data, which may vary in average similarity scores depending on its similarity to the training data. If a paragraph does not meet threshold on any domain, it is classified as Other."
],
[
"Table TABREF9 shows the performance of our models on classifying the paragraphs in our gold standard. To assess relative performance of feature representations, we also include performance metrics of our models without MWEs. Because this is a multilabel classification task we use macro-averaging to compute precision, recall, and F1 scores for each paragraph in the testing set. In identifying domains individually, our models achieved the highest per-domain scores on Substance (F1 INLINEFORM0 0.8) and the lowest scores on Interpersonal and Mood (F1 INLINEFORM1 0.5). We observe a consistency in per-domain performance rankings between our MLP and RBF models.",
"The wide variance in per-domain performance is due to a number of factors. Most notably, the training examples we extracted from RPDR – while very comparable to our target OnTrackTM data – may not have an adequate variety of content and range of vocabulary. Although using keyword and MWE matching to create our training corpus has the advantage of being significantly less labor intensive than manually labeling every paragraph in the corpus, it is likely that the homogeneity of language used in the training paragraphs is higher than it would be otherwise. Additionally, all of the paragraphs in the training data are assigned exactly one risk factor domain even if they actually involve multiple risk factor domains, making the clustering behavior of the paragraphs more difficult to define. Figure FIGREF10 illustrates the distribution of paragraphs in vector space using 2-component Linear Discriminant Analysis (LDA) BIBREF26 .",
"Despite prior research indicating that similar classification tasks to ours are more effectively performed by RBF networks BIBREF27 , BIBREF28 , BIBREF29 , we find that a MLP network performs marginally better with significantly less preprocessing (i.e. k-means and width calculations) involved. We can see in Figure FIGREF10 that Thought Process, Appearance, Substance, and – to a certain extent – Occupation clearly occupy specific regions, whereas Interpersonal, Mood, and Thought Content occupy the same noisy region where multiple domains overlap. Given that similarity is computed using Euclidean distance in an RBF network, it is difficult to accurately classify paragraphs that fall in regions occupied by multiple risk factor domain clusters since prototype centroids from the risk factor domains will overlap and be less differentiable. This is confirmed by the results in Table TABREF9 , where the differences in performance between the RBF and MLP models are more pronounced in the three overlapping domains (0.496 vs 0.448 for Interpersonal, 0.530 vs 0.496 for Mood, and 0.721 vs 0.678 for Thought Content) compared to the non-overlapping domains (0.564 vs 0.566 for Appearance, 0.592 vs 0.598 for Occupation, 0.797 vs 0.792 for Substance, and 0.635 vs 0.624 for Thought Process). We also observe a similarity in the words and phrases with the highest TF-IDF scores across the overlapping domains: many of the Thought Content words and phrases with the highest TF-IDF scores involve interpersonal relations (e.g. `fear surrounding daughter', `father', `family history', `familial conflict') and there is a high degree of similarity between high-scoring words for Mood (e.g. `meets anxiety criteria', `cope with mania', `ocd'[8]) and Thought Content (e.g. `mania', `feels anxious', `feels exhausted').",
"[8]Obsessive-compulsive disorder",
"MWEs play a large role in correctly identifying risk factor domains. Factoring them into our models increased classification performance by 15%, a marked improvement over our baseline model. This aligns with our expectations that MWEs comprised of a quotidian vocabulary hold much more clinical significance than when the words in the expressions are treated independently.",
"Threshold similarity scores also play a large role in determining the precision and recall of our models: higher thresholds lead to a smaller number of false positives and a greater number of false negatives for each risk factor domain. Conversely, more paragraphs are incorrectly classified as Other when thresholds are set higher. Since our classifier will be used in future work as an early step in a data analysis pipeline for determining readmission risk, misclassifying a paragraph with an incorrect risk factor domain at this stage can lead to greater inaccuracies at later stages. Paragraphs misclassified as Other, however, will be discarded from the data pipeline. Therefore, we intentionally set a conservative threshold where only the most confidently labeled paragraphs are assigned membership in a particular domain."
],
[
"To achieve our goal of creating a framework for a readmission risk classifier, the present study performed necessary evaluation steps by updating and adding to our model iteratively. In the first stage of the project, we focused on collecting the data necessary for training and testing, and on the domain classification annotation task. At the same time, we began creating the tools necessary for automatically extracting domain relevance scores at the paragraph and document level from patient EHRs using several forms of vectorization and topic modeling. In future versions of our risk factor domain classification model we will explore increasing robustness through sequence modeling that considers more contextual information.",
"Our current feature set for training a machine learning classifier is relatively small, consisting of paragraph domain scores, bag-of-words, length of stay, and number of previous admissions, but we intend to factor in many additional features that extend beyond the scope of the present study. These include a deeper analysis of clinical narratives in EHRs: our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain. This will involve a series of annotation tasks that will allow us to generate lexicon-based and corpus-based sentiment analysis tools. We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time.",
"We will also take into account structured data that have been collected on the target cohort throughout the course of this study such as brain based electrophysiological (EEG) biomarkers, structural brain anatomy from MRI scans (gray matter volume, cortical thickness, cortical surface-area), social and role functioning assessments, personality assessment (NEO-FFI[9]), and various symptom scales (PANSS[10], MADRS[11], YMRS[12]). For each feature we consider adding, we will evaluate the performance of the classifier with and without the feature to determine its contribution as a predictor of readmission."
],
[
"This work was supported by a grant from the National Institute of Mental Health (grant no. 5R01MH109687 to Mei-Hua Hall). We would also like to thank the LOUHI 2018 Workshop reviewers for their constructive and helpful comments.",
"[9]NEO Five-Factor Inventory BIBREF30 [10]Positive and Negative Syndrome Scale BIBREF31 [11]Montgomery-Asperg Depression Rating Scale BIBREF32 [12]Young Mania Rating Scale BIBREF33 "
]
],
"section_name": [
"Introduction",
"Related Work",
"Data",
"Annotation Task",
"Inter-Annotator Agreement",
"Topic Extraction",
"Results and Discussion",
"Future Work and Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"096ace95350d743436952360918474c6160465ba"
],
"answer": [
{
"evidence": [
"Our current feature set for training a machine learning classifier is relatively small, consisting of paragraph domain scores, bag-of-words, length of stay, and number of previous admissions, but we intend to factor in many additional features that extend beyond the scope of the present study. These include a deeper analysis of clinical narratives in EHRs: our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain. This will involve a series of annotation tasks that will allow us to generate lexicon-based and corpus-based sentiment analysis tools. We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time."
],
"extractive_spans": [],
"free_form_answer": "distinguishing between clinically positive and negative phenomena within each risk factor domain and accounting for structured data collected on the target cohort",
"highlighted_evidence": [
"Our current feature set for training a machine learning classifier is relatively small, consisting of paragraph domain scores, bag-of-words, length of stay, and number of previous admissions, but we intend to factor in many additional features that extend beyond the scope of the present study. These include a deeper analysis of clinical narratives in EHRs: our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain. This will involve a series of annotation tasks that will allow us to generate lexicon-based and corpus-based sentiment analysis tools. We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"06b60e5ec5adfa077523088275192cbf8e031661"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5: Overall and domain-specific Precision, Recall, and F1 scores for our models. The first row computes similarity directly from the TF-IDF matrix, as in (McCoy et al., 2015). All other rows are classifier outputs."
],
"extractive_spans": [],
"free_form_answer": "Achieved the highest per-domain scores on Substance (F1 ≈ 0.8) and the lowest scores on Interpersonal and Mood (F1 ≈ 0.5), and show consistency in per-domain performance rankings between MLP and RBF models.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Overall and domain-specific Precision, Recall, and F1 scores for our models. The first row computes similarity directly from the TF-IDF matrix, as in (McCoy et al., 2015). All other rows are classifier outputs."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"c38bf256704127e0cac06bbceb4790090bb9063a"
],
"answer": [
{
"evidence": [
"Our target data set consists of a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital. OnTrackTM is an outpatient program, focusing on treating adults ages 18 to 30 who are experiencing their first episodes of psychosis. The length of time in the program varies depending on patient improvement and insurance coverage, with an average of two to three years. The program focuses primarily on early intervention via individual therapy, group therapy, medication evaluation, and medication management. See Table TABREF2 for a demographic breakdown of the 220 patients, for which we have so far extracted approximately 240,000 total EHR paragraphs spanning from 2011 to 2014 using Meditech, the software employed by McLean for storing and organizing EHR data.",
"These patients are part of a larger research cohort of approximately 1,800 psychosis patients, which will allow us to connect the results of this EHR study with other ongoing research studies incorporating genetic, cognitive, neurobiological, and functional outcome data from this cohort.",
"We also use an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. These records are highly comparable in style and vocabulary to our target data set. The corpus consists of discharge summaries, encounter notes, and visit notes from approximately 30,000 patients admitted to the system's hospitals with psychiatric diagnoses and symptoms. This breadth of data captures a wide range of clinical narratives, creating a comprehensive foundation for topic extraction."
],
"extractive_spans": [
" a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital",
"an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our target data set consists of a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital. OnTrackTM is an outpatient program, focusing on treating adults ages 18 to 30 who are experiencing their first episodes of psychosis. The length of time in the program varies depending on patient improvement and insurance coverage, with an average of two to three years. The program focuses primarily on early intervention via individual therapy, group therapy, medication evaluation, and medication management. See Table TABREF2 for a demographic breakdown of the 220 patients, for which we have so far extracted approximately 240,000 total EHR paragraphs spanning from 2011 to 2014 using Meditech, the software employed by McLean for storing and organizing EHR data.\n\nThese patients are part of a larger research cohort of approximately 1,800 psychosis patients, which will allow us to connect the results of this EHR study with other ongoing research studies incorporating genetic, cognitive, neurobiological, and functional outcome data from this cohort.\n\nWe also use an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. These records are highly comparable in style and vocabulary to our target data set. The corpus consists of discharge summaries, encounter notes, and visit notes from approximately 30,000 patients admitted to the system's hospitals with psychiatric diagnoses and symptoms. This breadth of data captures a wide range of clinical narratives, creating a comprehensive foundation for topic extraction."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"What additional features are proposed for future work?",
"What are their initial results on this task?",
"What datasets did the authors use?"
],
"question_id": [
"c82e945b43b2e61c8ea567727e239662309e9508",
"fbee81a9d90ff23603ee4f5986f9e8c0eb035b52",
"39cf0b3974e8a19f3745ad0bcd1e916bf20eeab8"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Demographic breakdown of the target cohort.",
"Table 2: Annotation scheme for the domain classification task.",
"Table 3: Inter-annotator agreement",
"Table 4: Architectures of our highest-performing MLP and RBF networks.",
"Figure 1: Data pipeline for training and evaluating our risk factor domain classifiers.",
"Table 5: Overall and domain-specific Precision, Recall, and F1 scores for our models. The first row computes similarity directly from the TF-IDF matrix, as in (McCoy et al., 2015). All other rows are classifier outputs.",
"Figure 2: 2-component linear discriminant analysis of the RPDR training data."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"6-Figure1-1.png",
"6-Table5-1.png",
"7-Figure2-1.png"
]
} | [
"What additional features are proposed for future work?",
"What are their initial results on this task?"
] | [
[
"1809.05752-Future Work and Conclusion-1"
],
[
"1809.05752-6-Table5-1.png"
]
] | [
"distinguishing between clinically positive and negative phenomena within each risk factor domain and accounting for structured data collected on the target cohort",
"Achieved the highest per-domain scores on Substance (F1 ≈ 0.8) and the lowest scores on Interpersonal and Mood (F1 ≈ 0.5), and show consistency in per-domain performance rankings between MLP and RBF models."
] | 126 |
2001.01589 | Morphological Word Segmentation on Agglutinative Languages for Neural Machine Translation | Neural machine translation (NMT) has achieved impressive performance on the machine translation task in recent years. However, in consideration of efficiency, a limited-size vocabulary that only contains the top-N highest frequency words is employed for model training, which leads to many rare and unknown words. This is especially difficult when translating from low-resource and morphologically-rich agglutinative languages, which have complex morphology and large vocabularies. In this paper, we propose a morphological word segmentation method on the source-side for NMT that incorporates morphology knowledge to preserve the linguistic and semantic information in the word structure while reducing the vocabulary size at training time. It can be utilized as a preprocessing tool to segment the words in agglutinative languages for other natural language processing (NLP) tasks. Experimental results show that our morphologically motivated word segmentation method is better suited to the NMT model, which achieves significant improvements on Turkish-English and Uyghur-Chinese machine translation tasks on account of reducing data sparseness and language complexity. | {
"paragraphs": [
[
"Neural machine translation (NMT) has achieved impressive performance on machine translation task in recent years for many language pairs BIBREF0, BIBREF1, BIBREF2. However, in consideration of time cost and space capacity, the NMT model generally employs a limited-size vocabulary that only contains the top-N highest frequency words (commonly in the range of 30K to 80K) BIBREF3, which leads to the Out-of-Vocabulary (OOV) problem following with inaccurate and terrible translation results. Research indicated that sentences with too many unknown words tend to be translated much more poorly than sentences with mainly frequent words. For the low-resource and source-side morphologically-rich machine translation tasks, such as Turkish-English and Uyghur-Chinese, all the above issues are more serious due to the fact that the NMT model cannot effectively identify the complex morpheme structure or capture the linguistic and semantic information with too many rare and unknown words in the training corpus.",
"Both the Turkish and Uyghur are agglutinative and highly-inflected languages in which the word is formed by suffixes attaching to a stem BIBREF4. The word consists of smaller morpheme units without any splitter between them and its structure can be denoted as “stem + suffix1 + suffix2 + ... + suffixN”. A stem is attached in the rear by zero to many suffixes that have many inflected and morphological variants depending on case, number, gender, and so on. The complex morpheme structure and relatively free constituent order can produce very large vocabulary because of the derivational morphology, so when translating from the agglutinative languages, many words are unseen at training time. Moreover, due to the semantic context, the same word generally has different segmentation forms in the training corpus.",
"For the purpose of incorporating morphology knowledge of agglutinative languages into word segmentation for NMT, we propose a morphological word segmentation method on the source-side of Turkish-English and Uyghur-Chinese machine translation tasks, which segments the complex words into simple and effective morpheme units while reducing the vocabulary size for model training. In this paper, we investigate and compare the following segmentation strategies:",
"Stem with combined suffix",
"Stem with singular suffix",
"Byte Pair Encoding (BPE)",
"BPE on stem with combined suffix",
"BPE on stem with singular suffix",
"The latter two segmentation strategies are our newly proposed methods. Experimental results show that our morphologically motivated word segmentation method can achieve significant improvement of up to 1.2 and 2.5 BLEU points on Turkish-English and Uyghur-Chinese machine translation tasks over the strong baseline of pure BPE method respectively, indicating that it can provide better translation performance for the NMT model."
],
[
"We will elaborate two popular word segmentation methods and our newly proposed segmentation strategies in this section. The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add an specific symbol behind each separated subword unit, which aims to assist the NMT model to identify the morpheme boundaries and capture the semantic information effectively. The sentence examples with different segmentation strategies for Turkish-English machine translation task are shown in Table 1."
],
[
"The words of Turkish and Uyghur are formed by a stem followed with unlimited number of suffixes. Both of the stem and suffix are called morphemes, and they are the smallest functional unit in agglutinative languages. Study indicated that modeling language based on the morpheme units can provide better performance BIBREF6. Morpheme segmentation can segment the complex word into morpheme units of stem and suffix. This representation maintains a full description of the morphological properties of subwords while minimizing the data sparseness caused by inflection and allomorphy phenomenon in highly-inflected languages."
],
[
"In this segmentation strategy, each word is segmented into a stem unit and a combined suffix unit. We add “##” behind the stem unit and add “$$” behind the combined suffix unit. We denote this method as SCS. The segmented word can be denoted as two parts of “stem##” and “suffix1suffix2...suffixN$$”. If the original word has no suffix unit, the word is treated as its stem unit. All the following segmentation strategies will follow this rule."
],
[
"In this segmentation strategy, each word is segmented into a stem unit and a sequence of suffix units. We add “##” behind the stem unit and add “$$” behind each singular suffix unit. We denote this method as SSS. The segmented word can be denoted as a sequence of “stem##”, “suffix1$$”, “suffix2$$” until “suffixN$$”."
],
[
"BPE BIBREF7 is originally a data compression technique and it is adapted by BIBREF5 for word segmentation and vocabulary reduction by encoding the rare and unknown words as a sequence of subword units, in which the most frequent character sequences are merged iteratively. Frequent character n-grams are eventually merged into a single symbol. This is based on the intuition that various word classes are translatable via smaller units than words. This method making the NMT model capable of open-vocabulary translation, which can generalize to translate and produce new words on the basis of these subword units. The BPE algorithm can be run on the dictionary extracted from a training text, with each word being weighted by its frequency. In this segmentation strategy, we add “@@” behind each no-final subword unit of the segmented word."
],
[
"The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at the training time. The problem with BPE is that it do not consider the morpheme boundaries inside words, which might cause a loss of morphological properties and semantic information. Hence, on the analyses of the above popular word segmentation methods, we propose the morphologically motivated segmentation strategy that combines the morpheme segmentation and BPE for further improving the translation performance of NMT.",
"Compared with the sentence of word surface forms, the corresponding sentence of stem units only contains the structure information without considering morphological information, which can make better generalization over inflectional variants of the same word and reduce data sparseness BIBREF8. Therefore, we learn a BPE model on the stem units in the training corpus rather than the words, and then apply it on the stem unit of each word after morpheme segmentation."
],
[
"In this segmentation strategy, firstly we segment each word into a stem unit and a combined suffix unit as SCS. Secondly, we apply BPE on the stem unit. Thirdly, we add “$$” behind the combined suffix unit. If the stem unit is not segmented, we add “##” behind itself. Otherwise, we add “@@” behind each no-final subword of the segmented stem unit. We denote this method as BPE-SCS."
],
[
"In this segmentation strategy, firstly we segment each word into a stem unit and a sequence of suffix units as SSS. Secondly, we apply BPE on the stem unit. Thirdly, we add “$$” behind each singular suffix unit. If the stem unit is not segmented, we add “##” behind itself. Otherwise, we add “@@” behind each no-final subword of the segmented stem unit. We denote this method as BPE-SSS."
],
[
"Following BIBREF9, we use the WIT corpus BIBREF10 and SETimes corpus BIBREF11 for model training, and use the newsdev2016 from Workshop on Machine Translation in 2016 (WMT2016) for validation. The test data are newstest2016 and newstest2017."
],
[
"We use the news data from China Workshop on Machine Translation in 2017 (CWMT2017) for model training, validation and test."
],
[
"We utilize the Zemberek with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units. We employ the python toolkits of jieba for Chinese word segmentation. We apply BPE on the target-side words and we set the number of merge operations to 35K for Chinese and 30K for English and we set the maximum sentence length to 150 tokens. The training corpus statistics of Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 2 and Table 3 respectively."
],
[
"We set the number of merge operations on the stem units in the consideration of keeping the vocabulary size of BPE, BPE-SCS and BPE-SSS segmentation strategies on the same scale. We will elaborate the number settings for our proposed word segmentation strategies in this section.",
"In the Turkish-English machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 35K, set the number of merge operations on the stem units for BPE-SCS strategy to 15K, and set the number of merge operations on the stem units for BPE-SSS strategy to 25K. In the Uyghur-Chinese machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 38K, set the number of merge operations on the stem units for BPE-SCS strategy to 10K, and set the number of merge operations on the stem units for BPE-SSS strategy to 35K. The detailed training corpus statistics with different segmentation strategies of Turkish and Uyghur are shown in Table 4 and Table 5 respectively.",
"According to Table 4 and Table 5, we can find that both the Turkish and Uyghur have a very large vocabulary even in the low-resource training corpus. So we propose the morphological word segmentation strategies of BPE-SCS and BPE-SSS that additionally applying BPE on the stem units after morpheme segmentation, which not only consider the morphological properties but also eliminate the rare and unknown words."
],
[
"We employ the Transformer model BIBREF13 with self-attention mechanism architecture implemented in Sockeye toolkit BIBREF14. Both the encoder and decoder have 6 layers. We set the number of hidden units to 512, the number of heads for self-attention to 8, the source and target word embedding size to 512, and the number of hidden units in feed-forward layers to 2048. We train the NMT model by using the Adam optimizer BIBREF15 with a batch size of 128 sentences, and we shuffle all the training data at each epoch. The label smoothing is set to 0.1. We report the result of averaging the parameters of the 4 best checkpoints on the validation perplexity. Decoding is performed by beam search with beam size of 5. To effectively evaluate the machine translation quality, we report case-sensitive BLEU score with standard tokenization and character n-gram ChrF3 score ."
],
[
"In this paper, we investigate and compare morpheme segmentation, BPE and our proposed morphological segmentation strategies on the low resource and morphologically-rich agglutinative languages. Experimental results of Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 6 and Table 7 respectively."
],
[
"According to Table 6 and Table 7, we can find that both the BPE-SCS and BPE-SSS strategies outperform morpheme segmentation and the strong baseline of pure BPE method. Especially, the BPE-SSS strategy is better and it achieves significant improvement of up to 1.2 BLEU points on Turkish-English machine translation task and 2.5 BLEU points on Uyghur-Chinese machine translation task. Furthermore, we also find that the translation performance of our proposed segmentation strategy on Turkish-English machine translation task is not obvious than Uyghur-Chinese machine translation task, the probable reasons are: the training corpus of Turkish-English consists of talk and news data while most of the talk data are short informal sentences compared with the news data, which cannot provide more language information for the NMT model. Moreover, the test corpus consists of news data, so due to the data domain is different, the improvement of machine translation quality is limited.",
"In addition, we estimate how the number of merge operations on the stem units for BPE-SSS strategy effects the machine translation quality. Experimental results are shown in Table 8 and Table 9. We find that the number of 25K for Turkish, 30K and 35K for Uyghur maximizes the translation performance. The probable reason is that these numbers of merge operations are able to generate a more appropriate vocabulary that containing effective morpheme units and moderate subword units, which makes better generalization over the morphologically-rich words."
],
[
"The NMT system is typically trained with a limited vocabulary, which creates bottleneck on translation accuracy and generalization capability. Many word segmentation methods have been proposed to cope with the above problems, which consider the morphological properties of different languages.",
"Bradbury and Socher BIBREF16 employed the modified Morfessor to provide morphology knowledge into word segmentation, but they neglected the morphological varieties between subword units, which might result in ambiguous translation results. Sanchez-Cartagena and Toral BIBREF17 proposed a rule-based morphological word segmentation for Finnish, which applies BPE on all the morpheme units uniformly without distinguishing their inner morphological roles. Huck BIBREF18 explored target-side segmentation method for German, which shows that the cascading of suffix splitting and compound splitting with BPE can achieve better translation results. Ataman et al. BIBREF19 presented a linguistically motivated vocabulary reduction approach for Turkish, which optimizes the segmentation complexity with constraint on the vocabulary based on a category-based hidden markov model (HMM). Our work is closely related to their idea while ours are more simple and realizable. Tawfik et al. BIBREF20 confirmed that there is some advantage from using a high accuracy dialectal segmenter jointly with a language independent word segmentation method like BPE. The main difference is that their approach needs sufficient monolingual data additionally to train a segmentation model while ours do not need any external resources, which is very convenient for word segmentation on the low-resource and morphologically-rich agglutinative languages."
],
[
"In this paper, we investigate morphological segmentation strategies on the low-resource and morphologically-rich languages of Turkish and Uyghur. Experimental results show that our proposed morphologically motivated word segmentation method is better suitable for NMT. And the BPE-SSS strategy achieves the best machine translation performance, as it can better preserve the syntactic and semantic information of the words with complex morphology as well as reduce the vocabulary size for model training. Moreover, we also estimate how the number of merge operations on the stem units for BPE-SSS strategy effects the translation quality, and we find that an appropriate vocabulary size is more useful for the NMT model.",
"In future work, we are planning to incorporate more linguistic and morphology knowledge into the training process of NMT to enhance its capacity of capturing syntactic structure and semantic information on the low-resource and morphologically-rich languages."
],
[
"This work is supported by the National Natural Science Foundation of China, the Open Project of Key Laboratory of Xinjiang Uygur Autonomous Region, the Youth Innovation Promotion Association of the Chinese Academy of Sciences, and the High-level Talents Introduction Project of Xinjiang Uyghur Autonomous Region."
]
],
"section_name": [
"Introduction",
"Approach",
"Approach ::: Morpheme Segmentation",
"Approach ::: Morpheme Segmentation ::: Stem with Combined Suffix",
"Approach ::: Morpheme Segmentation ::: Stem with Singular Suffix",
"Approach ::: Byte Pair Encoding (BPE)",
"Approach ::: Morphologically Motivated Segmentation",
"Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Combined Suffix",
"Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Singular Suffix",
"Experiments ::: Experimental Setup ::: Turkish-English Data :",
"Experiments ::: Experimental Setup ::: Uyghur-Chinese Data :",
"Experiments ::: Experimental Setup ::: Data Preprocessing :",
"Experiments ::: Experimental Setup ::: Number of Merge Operations :",
"Experiments ::: NMT Configuration",
"Results",
"Discussion",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"8282253adbf7ac7e6158ff0b754a6b9d59034db0"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"a41011c056c976583dbf7ab2539065e7263beddf"
],
"answer": [
{
"evidence": [
"The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at the training time. The problem with BPE is that it do not consider the morpheme boundaries inside words, which might cause a loss of morphological properties and semantic information. Hence, on the analyses of the above popular word segmentation methods, we propose the morphologically motivated segmentation strategy that combines the morpheme segmentation and BPE for further improving the translation performance of NMT.",
"Compared with the sentence of word surface forms, the corresponding sentence of stem units only contains the structure information without considering morphological information, which can make better generalization over inflectional variants of the same word and reduce data sparseness BIBREF8. Therefore, we learn a BPE model on the stem units in the training corpus rather than the words, and then apply it on the stem unit of each word after morpheme segmentation."
],
"extractive_spans": [],
"free_form_answer": "A BPE model is applied to the stem after morpheme segmentation.",
"highlighted_evidence": [
"The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at the training time. ",
"Therefore, we learn a BPE model on the stem units in the training corpus rather than the words, and then apply it on the stem unit of each word after morpheme segmentation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"b791a08714ae7a7ec762f5a4b6c5e062579a4f15"
],
"answer": [
{
"evidence": [
"We will elaborate two popular word segmentation methods and our newly proposed segmentation strategies in this section. The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add an specific symbol behind each separated subword unit, which aims to assist the NMT model to identify the morpheme boundaries and capture the semantic information effectively. The sentence examples with different segmentation strategies for Turkish-English machine translation task are shown in Table 1.",
"We utilize the Zemberek with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units. We employ the python toolkits of jieba for Chinese word segmentation. We apply BPE on the target-side words and we set the number of merge operations to 35K for Chinese and 30K for English and we set the maximum sentence length to 150 tokens. The training corpus statistics of Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 2 and Table 3 respectively."
],
"extractive_spans": [
"morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5",
"Zemberek",
"BIBREF12"
],
"free_form_answer": "",
"highlighted_evidence": [
"The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add an specific symbol behind each separated subword unit, which aims to assist the NMT model to identify the morpheme boundaries and capture the semantic information effectively. ",
"We utilize the Zemberek with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"06be1d572fd7d71ab3d646c5f4a4f4ed57a31b52"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How many linguistic and semantic features are learned?",
"How is morphology knowledge implemented in the method?",
"How does the word segmentation method work?",
"Is the word segmentation method independently evaluated?"
],
"question_id": [
"1f6180bba0bc657c773bd3e4269f87540a520ead",
"57388bf2693d71eb966d42fa58ab66d7f595e55f",
"47796c7f0a7de76ccb97ccbd43dc851bb8a613d5",
"9d5153a7553b7113716420a6ddceb59f877eb617"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"word segmentation",
"word segmentation",
"word segmentation",
"word segmentation"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: The sentence examples with different segmentation strategies for Turkish-English.",
"Table 2: The training corpus statistics of TurkishEnglish machine translation task.",
"Table 3: The training corpus statistics of UyghurChinese machine translation task.",
"Table 4: The training corpus statistics with different segmentation strategies of Turkish",
"Table 5: The training corpus statistics with different segmentation strategies of Uyghur",
"Table 6: Experimental results of Turkish-English machine translation task.",
"Table 7: Experimental results of Uyghur-Chinese machine translation task.",
"Table 8: Different numbers of merge operations for BPE-SSS strategy on Turkish-English.",
"Table 9: Different numbers of merge operations for BPE-SSS strategy on Uyghur-Chinese."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png",
"5-Table6-1.png",
"5-Table7-1.png",
"6-Table8-1.png",
"6-Table9-1.png"
]
} | [
"How is morphology knowledge implemented in the method?"
] | [
[
"2001.01589-Approach ::: Morphologically Motivated Segmentation-1",
"2001.01589-Approach ::: Morphologically Motivated Segmentation-0"
]
] | [
"A BPE model is applied to the stem after morpheme segmentation."
] | 127 |
1910.05456 | Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge | How does knowledge of one language's morphology influence learning of inflection rules in a second one? In order to investigate this question in artificial neural network models, we perform experiments with a sequence-to-sequence architecture, which we train on different combinations of eight source and three target languages. A detailed analysis of the model outputs suggests the following conclusions: (i) if source and target language are closely related, acquisition of the target language's inflectional morphology constitutes an easier task for the model; (ii) knowledge of a prefixing (resp. suffixing) language makes acquisition of a suffixing (resp. prefixing) language's morphology more challenging; and (iii) surprisingly, a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology, independent of their relatedness. | {
"paragraphs": [
[
"A widely agreed-on fact in language acquisition research is that learning of a second language (L2) is influenced by a learner's native language (L1) BIBREF0, BIBREF1. A language's morphosyntax seems to be no exception to this rule BIBREF2, but the exact nature of this influence remains unknown. For instance, it is unclear whether it is constraints imposed by the phonological or by the morphosyntactic attributes of the L1 that are more important during the process of learning an L2's morphosyntax.",
"Within the area of natural language processing (NLP) research, experimenting on neural network models just as if they were human subjects has recently been gaining popularity BIBREF3, BIBREF4, BIBREF5. Often, so-called probing tasks are used, which require a specific subset of linguistic knowledge and can, thus, be leveraged for qualitative evaluation. The goal is to answer the question: What do neural networks learn that helps them to succeed in a given task?",
"Neural network models, and specifically sequence-to-sequence models, have pushed the state of the art for morphological inflection – the task of learning a mapping from lemmata to their inflected forms – in the last years BIBREF6. Thus, in this work, we experiment on such models, asking not what they learn, but, motivated by the respective research on human subjects, the related question of how what they learn depends on their prior knowledge. We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages. We aim at finding answers to two main questions: (i) Do errors systematically differ between source languages? (ii) Do these differences seem explainable, given the properties of the source and target languages? In other words, we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1, i.e., the \"native language\", in neural network models.",
"To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology."
],
[
"Many of the world's languages exhibit rich inflectional morphology: the surface form of an individual lexical entry changes in order to express properties such as person, grammatical gender, or case. The citation form of a lexical entry is referred to as the lemma. The set of all possible surface forms or inflections of a lemma is called its paradigm. Each inflection within a paradigm can be associated with a tag, i.e., 3rdSgPres is the morphological tag associated with the inflection dances of the English lemma dance. We display the paradigms of dance and eat in Table TABREF1.",
"The presence of rich inflectional morphology is problematic for NLP systems as it increases word form sparsity. For instance, while English verbs can have up to 5 inflected forms, Archi verbs have thousands BIBREF7, even by a conservative count. Thus, an important task in the area of morphology is morphological inflection BIBREF8, BIBREF9, which consists of mapping a lemma to an indicated inflected form. An (irregular) English example would be",
"with PAST being the target tag, denoting the past tense form. Additionally, a rich inflectional morphology is also challenging for L2 language learners, since both rules and their exceptions need to be memorized.",
"In NLP, morphological inflection has recently frequently been cast as a sequence-to-sequence problem, where the sequence of target (sub-)tags together with the sequence of input characters constitute the input sequence, and the characters of the inflected word form the output. Neural models define the state of the art for the task and obtain high accuracy if an abundance of training data is available. Here, we focus on learning of inflection from limited data if information about another language's morphology is already known. We, thus, loosely simulate an L2 learning setting."
],
[
"Let ${\\cal M}$ be the paradigm slots which are being expressed in a language, and $w$ a lemma in that language. We then define the paradigm $\\pi $ of $w$ as:",
"$f_k[w]$ denotes an inflected form corresponding to tag $t_{k}$, and $w$ and $f_k[w]$ are strings consisting of letters from an alphabet $\\Sigma $.",
"The task of morphological inflection consists of predicting a missing form $f_i[w]$ from a paradigm, given the lemma $w$ together with the tag $t_i$."
],
[
"The models we experiment with are based on a pointer–generator network architecture BIBREF10, BIBREF11, i.e., a recurrent neural network (RNN)-based sequence-to-sequence network with attention and a copy mechanism. A standard sequence-to-sequence model BIBREF12 has been shown to perform well for morphological inflection BIBREF13 and has, thus, been subject to cognitively motivated experiments BIBREF14 before. Here, however, we choose the pointer–generator variant of sharma-katrapati-sharma:2018:K18-30, since it performs better in low-resource settings, which we will assume for our target languages. We will explain the model shortly in the following and refer the reader to the original paper for more details."
],
[
"Our architecture employs two separate encoders, which are both bi-directional long short-term memory (LSTM) networks BIBREF15: The first processes the morphological tags which describe the desired target form one by one. The second encodes the sequence of characters of the input word."
],
[
"Two separate attention mechanisms are used: one per encoder LSTM. Taking all respective encoder hidden states as well as the current decoder hidden state as input, each of them outputs a so-called context vector, which is a weighted sum of all encoder hidden states. The concatenation of the two individual context vectors results in the final context vector $c_t$, which is the input to the decoder at time step $t$."
],
[
"Our decoder consists of a uni-directional LSTM. Unlike a standard sequence-to-sequence model, a pointer–generator network is not limited to generating characters from the vocabulary to produce the output. Instead, the model gives certain probability to copying elements from the input over to the output. The probability of a character $y_t$ at time step $t$ is computed as a sum of the probability of $y_t$ given by the decoder and the probability of copying $y_t$, weighted by the probabilities of generating and copying:",
"$p_{\\textrm {dec}}(y_t)$ is calculated as an LSTM update and a projection of the decoder state to the vocabulary, followed by a softmax function. $p_{\\textrm {copy}}(y_t)$ corresponds to the attention weights for each input character. The model computes the probability $\\alpha $ with which it generates a new output character as",
"for context vector $c_t$, decoder state $s_t$, embedding of the last output $y_{t-1}$, weights $w_c$, $w_s$, $w_y$, and bias vector $b$. It has been shown empirically that the copy mechanism of the pointer–generator network architecture is beneficial for morphological generation in the low-resource setting BIBREF16."
],
[
"Pretraining and successive fine-tuning of neural network models is a common approach for handling of low-resource settings in NLP. The idea is that certain properties of language can be learned either from raw text, related tasks, or related languages. Technically, pretraining consists of estimating some or all model parameters on examples which do not necessarily belong to the final target task. Fine-tuning refers to continuing training of such a model on a target task, whose data is often limited. While the sizes of the pretrained model parameters usually remain the same between the two phases, the learning rate or other details of the training regime, e.g., dropout, might differ. Pretraining can be seen as finding a suitable initialization of model parameters, before training on limited amounts of task- or language-specific examples.",
"In the context of morphological generation, pretraining in combination with fine-tuning has been used by kann-schutze-2018-neural, which proposes to pretrain a model on general inflection data and fine-tune on examples from a specific paradigm whose remaining forms should be automatically generated. Famous examples for pretraining in the wider area of NLP include BERT BIBREF17 or GPT-2 BIBREF18: there, general properties of language are learned using large unlabeled corpora.",
"Here, we are interested in pretraining as a simulation of familiarity with a native language. By investigating a fine-tuned model we ask the question: How does extensive knowledge of one language influence the acquisition of another?"
],
[
"We choose three target languages.",
"English (ENG) is a morphologically impoverished language, as far as inflectional morphology is concerned. Its verbal paradigm only consists of up to 5 different forms and its nominal paradigm of only up to 2. However, it is one of the most frequently spoken and taught languages in the world, making its acquisition a crucial research topic.",
"Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\\rightarrow $ ue).",
"Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing."
],
[
"For pretraining, we choose languages with different degrees of relatedness and varying morphological similarity to English, Spanish, and Zulu. We limit our experiments to languages which are written in Latin script.",
"As an estimate for morphological similarity we look at the features from the Morphology category mentioned in The World Atlas of Language Structures (WALS). An overview of the available features as well as the respective values for our set of languages is shown in Table TABREF13.",
"We decide on Basque (EUS), French (FRA), German (DEU), Hungarian (HUN), Italian (ITA), Navajo (NAV), Turkish (TUR), and Quechua (QVH) as source languages.",
"Basque is a language isolate. Its inflectional morphology makes similarly frequent use of prefixes and suffixes, with suffixes mostly being attached to nouns, while prefixes and suffixes can both be employed for verbal inflection.",
"French and Italian are Romance languages, and thus belong to the same family as the target language Spanish. Both are suffixing and fusional languages.",
"German, like English, belongs to the Germanic language family. It is a fusional, predominantly suffixing language and, similarly to Spanish, makes use of stem changes.",
"Hungarian, a Finno-Ugric language, and Turkish, a Turkic language, both exhibit an agglutinative morphology, and are predominantly suffixing. They further have vowel harmony systems.",
"Navajo is an Athabaskan language and the only source language which is strongly prefixing. It further exhibits consonant harmony among its sibilants BIBREF19, BIBREF20.",
"Finally, Quechua, a Quechuan language spoken in South America, is again predominantly suffixing and unrelated to all of our target languages."
],
[
"We mostly use the default hyperparameters by sharma-katrapati-sharma:2018:K18-30. In particular, all RNNs have one hidden layer of size 100, and all input and output embeddings are 300-dimensional.",
"For optimization, we use ADAM BIBREF21. Pretraining on the source language is done for exactly 50 epochs. To obtain our final models, we then fine-tune different copies of each pretrained model for 300 additional epochs for each target language. We employ dropout BIBREF22 with a coefficient of 0.3 for pretraining and, since that dataset is smaller, with a coefficient of 0.5 for fine-tuning.",
"We make use of the datasets from the CoNLL–SIGMORPHON 2018 shared task BIBREF9. The organizers provided a low, medium, and high setting for each language, with 100, 1000, and 10000 examples, respectively. For all L1 languages, we train our models on the high-resource datasets with 10000 examples. For fine-tuning, we use the low-resource datasets."
],
[
"In Table TABREF18, we show the final test accuracy for all models and languages. Pretraining on EUS and NAV results in the weakest target language inflection models for ENG, which might be explained by those two languages being unrelated to ENG and making at least partial use of prefixing, while ENG is a suffixing language (cf. Table TABREF13). In contrast, HUN and ITA yield the best final models for ENG. This is surprising, since DEU is the language in our experiments which is closest related to ENG.",
"For SPA, again HUN performs best, followed closely by ITA. While the good performance of HUN as a source language is still unexpected, ITA is closely related to SPA, which could explain the high accuracy of the final model. As for ENG, pretraining on EUS and NAV yields the worst final models – importantly, accuracy is over $15\\%$ lower than for QVH, which is also an unrelated language. This again suggests that the prefixing morphology of EUS and NAV might play a role.",
"Lastly, for ZUL, all models perform rather poorly, with a minimum accuracy of 10.7 and 10.8 for the source languages QVH and EUS, respectively, and a maximum accuracy of 24.9 for a model pretrained on Turkish. The latter result hints at the fact that a regular and agglutinative morphology might be beneficial in a source language – something which could also account for the performance of models pretrained on HUN."
],
[
"For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison. As we can see, the results are similar to the test set results for all language combinations. We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories."
],
[
"SUB(X): This error consists of a wrong substitution of one character with another. SUB(V) and SUB(C) denote this happening with a vowel or a consonant, respectively. Letters that differ from each other by an accent count as different vowels.",
"Example: decultared instead of decultured",
"DEL(X): This happens when the system ommits a letter from the output. DEL(V) and DEL(C) refer to a missing vowel or consonant, respectively.",
"Example: firte instead of firtle",
"NO_CHG(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (NO_CHG(V)) or a consonant (NO_CHG(C)), but this is missing in the predicted form.",
"Example: verto instead of vierto",
"MULT: This describes cases where two or more errors occur in the stem. Errors concerning the affix are counted for separately.",
"Example: aconcoonaste instead of acondicionaste",
"ADD(X): This error occurs when a letter is mistakenly added to the inflected form. ADD(V) refers to an unnecessary vowel, ADD(C) refers to an unnecessary consonant.",
"Example: compillan instead of compilan",
"CHG2E(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (CHG2E(V)) or a consonant (CHG2E(C)), and this is done, but the resulting vowel or consonant is incorrect.",
"Example: propace instead of propague"
],
[
"AFF: This error refers to a wrong affix. This can be either a prefix or a suffix, depending on the correct target form.",
"Example: ezoJulayi instead of esikaJulayi",
"CUT: This consists of cutting too much of the lemma's prefix or suffix before attaching the inflected form's prefix or suffix, respectively.",
"Example: irradiseis instead of irradiaseis"
],
[
"REFL: This happens when a reflective pronoun is missing in the generated form.",
"Example: doliéramos instead of nos doliéramos",
"REFL_LOC: This error occurs if the reflective pronouns appears at an unexpected position within the generated form.",
"Example: taparsebais instead of os tapabais",
"OVERREG: Overregularization errors occur when the model predicts a form which would be correct if the lemma's inflections were regular but they are not.",
"Example: underteach instead of undertaught"
],
[
"Table TABREF35 displays the errors found in the 75 first ENG development examples, for each source language. From Table TABREF19, we know that HUN $>$ ITA $>$ TUR $>$ DEU $>$ FRA $>$ QVH $>$ NAV $>$ EUS, and we get a similar picture when analyzing the first examples. Thus, especially keeping HUN and TUR in mind, we cautiously propose a first conclusion: familiarity with languages which exhibit an agglutinative morphology simplifies learning of a new language's morphology.",
"Looking at the types of errors, we find that EUS and NAV make the most stem errors. For QVH we find less, but still over 10 more than for the remaining languages. This makes it seem that models pretrained on prefixing or partly prefixing languages indeed have a harder time to learn ENG inflectional morphology, and, in particular, to copy the stem correctly. Thus, our second hypotheses is that familiarity with a prefixing language might lead to suspicion of needed changes to the part of the stem which should remain unaltered in a suffixing language. DEL(X) and ADD(X) errors are particularly frequent for EUS and NAV, which further suggests this conclusion.",
"Next, the relatively large amount of stem errors for QVH leads to our second hypothesis: language relatedness does play a role when trying to produce a correct stem of an inflected form. This is also implied by the number of MULT errors for EUS, NAV and QVH, as compared to the other languages.",
"Considering errors related to the affixes which have to be generated, we find that DEU, HUN and ITA make the fewest. This further suggests the conclusion that, especially since DEU is the language which is closest related to ENG, language relatedness plays a role for producing suffixes of inflected forms as well.",
"Our last observation is that many errors are not found at all in our data sample, e.g., CHG2E(X) or NO_CHG(C). This can be explained by ENG having a relatively poor inflectional morphology, which does not leave much room for mistakes."
],
[
"The errors committed for SPA are shown in Table TABREF37, again listed by source language. Together with Table TABREF19 it gets clear that SPA inflectional morphology is more complex than that of ENG: systems for all source languages perform worse.",
"Similarly to ENG, however, we find that most stem errors happen for the source languages EUS and NAV, which is further evidence for our previous hypothesis that familiarity with prefixing languages impedes acquisition of a suffixing one. Especially MULT errors are much more frequent for EUS and NAV than for all other languages. ADD(X) happens a lot for EUS, while ADD(C) is also frequent for NAV. Models pretrained on either language have difficulties with vowel changes, which reflects in NO_CHG(V). Thus, we conclude that this phenomenon is generally hard to learn.",
"Analyzing next the errors concerning affixes, we find that models pretrained on HUN, ITA, DEU, and FRA (in that order) commit the fewest errors. This supports two of our previous hypotheses: First, given that ITA and FRA are both from the same language family as SPA, relatedness seems to be benficial for learning of the second language. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well."
],
[
"In Table TABREF39, the errors for Zulu are shown, and Table TABREF19 reveals the relative performance for different source languages: TUR $>$ HUN $>$ DEU $>$ ITA $>$ FRA $>$ NAV $>$ EUS $>$ QVH. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language.",
"Besides that, results differ from those for ENG and SPA. First of all, more mistakes are made for all source languages. However, there are also several finer differences. For ZUL, the model pretrained on QVH makes the most stem errors, in particular 4 more than the EUS model, which comes second. Given that ZUL is a prefixing language and QVH is suffixing, this relative order seems important. QVH also committs the highest number of MULT errors.",
"The next big difference between the results for ZUL and those for ENG and SPA is that DEL(X) and ADD(X) errors, which previously have mostly been found for the prefixing or partially prefixing languages EUS and NAV, are now most present in the outputs of suffixing languages. Namely, DEL(C) occurs most for FRA and ITA, DEL(V) for FRA and QVH, and ADD(C) and ADD(V) for HUN. While some deletion and insertion errors are subsumed in MULT, this does not fully explain this difference. For instance, QVH has both the second most DEL(V) and the most MULT errors.",
"The overall number of errors related to the affix seems comparable between models with different source languages. This weakly supports the hypothesis that relatedness reduces affix-related errors, since none of the pretraining languages in our experiments is particularly close to ZUL. However, we do find more CUT errors for HUN and TUR: again, these are suffixing, while CUT for the target language SPA mostly happened for the prefixing languages EUS and NAV."
],
[
"A limitation of our work is that we only include languages that are written in Latin script. An interesting question for future work might, thus, regard the effect of disjoint L1 and L2 alphabets.",
"Furthermore, none of the languages included in our study exhibits a templatic morphology. We make this choice because data for templatic languages is currently mostly available in non-Latin alphabets. Future work could investigate languages with templatic morphology as source or target languages, if needed by mapping the language's alphabet to Latin characters.",
"Finally, while we intend to choose a diverse set of languages for this study, our overall number of languages is still rather small. This affects the generalizability of the results, and future work might want to look at larger samples of languages."
],
[
"Most research on inflectional morphology in NLP within the last years has been related to the SIGMORPHON and CoNLL–SIGMORPHON shared tasks on morphological inflection, which have been organized yearly since 2016 BIBREF6. Traditionally being focused on individual languages, the 2019 edition BIBREF23 contained a task which asked for transfer learning from a high-resource to a low-resource language. However, source–target pairs were predefined, and the question of how the source language influences learning besides the final accuracy score was not considered. Similarly to us, kyle performed a manual error analysis of morphological inflection systems for multiple languages. However, they did not investigate transfer learning, but focused on monolingual models.",
"Outside the scope of the shared tasks, kann-etal-2017-one investigated cross-lingual transfer for morphological inflection, but was limited to a quantitative analysis. Furthermore, that work experimented with a standard sequence-to-sequence model BIBREF12 in a multi-task training fashion BIBREF24, while we pretrain and fine-tune pointer–generator networks. jin-kann-2017-exploring also investigated cross-lingual transfer in neural sequence-to-sequence models for morphological inflection. However, their experimental setup mimicked kann-etal-2017-one, and the main research questions were different: While jin-kann-2017-exploring asked how cross-lingual knowledge transfer works during multi-task training of neural sequence-to-sequence models on two languages, we investigate if neural inflection models demonstrate interesting differences in production errors depending on the pretraining language. Besides that, we differ in the artificial neural network architecture and language pairs we investigate."
],
[
"Cross-lingual transfer learning has been used for a large variety NLP of tasks, e.g., automatic speech recognition BIBREF25, entity recognition BIBREF26, language modeling BIBREF27, or parsing BIBREF28, BIBREF29, BIBREF30. Machine translation has been no exception BIBREF31, BIBREF32, BIBREF33. Recent research asked how to automatically select a suitable source language for a given target language BIBREF34. This is similar to our work in that our findings could potentially be leveraged to find good source languages."
],
[
"Finally, a lot of research has focused on human L1 and L2 acquisition of inflectional morphology BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF40.",
"To name some specific examples, marques2011study investigated the effect of a stay abroad on Spanish L2 acquisition, including learning of its verbal morphology in English speakers. jia2003acquisition studied how Mandarin Chinese-speaking children learned the English plural morpheme. nicoladis2012young studied the English past tense acquisition in Chinese–English and French–English bilingual children. They found that, while both groups showed similar production accuracy, they differed slightly in the type of errors they made. Also considering the effect of the native language explicitly, yang2004impact investigated the acquisition of the tense-aspect system in an L2 for speakers of a native language which does not mark tense explicitly.",
"Finally, our work has been weakly motivated by bliss2006l2. There, the author asked a question for human subjects which is similar to the one we ask for neural models: How does the native language influence L2 acquisition of inflectional morphology?"
],
[
"Motivated by the fact that, in humans, learning of a second language is influenced by a learner's native language, we investigated a similar question in artificial neural network models for morphological inflection: How does pretraining on different languages influence a model's learning of inflection in a target language?",
"We performed experiments on eight different source languages and three different target languages. An extensive error analysis of all final models showed that (i) for closely related source and target languages, acquisition of target language inflection gets easier; (ii) knowledge of a prefixing language makes learning of inflection in a suffixing language more challenging, as well as the other way around; and (iii) languages which exhibit an agglutinative morphology facilitate learning of inflection in a second language.",
"Future work might leverage those findings to improve neural network models for morphological inflection in low-resource languages, by choosing suitable source languages for pretraining.",
"Another interesting next step would be to investigate how the errors made by our models compare to those by human L2 learners with different native languages. If the exhibited patterns resemble each other, computational models could be used to predict errors a person will make, which, in turn, could be leveraged for further research or the development of educational material."
],
[
"I would like to thank Samuel R. Bowman and Kyle Gorman for helpful discussions and suggestions. This work has benefited from the support of Samsung Research under the project Improving Deep Learning using Latent Structure and from the donation of a Titan V GPU by NVIDIA Corporation."
]
],
"section_name": [
"Introduction",
"Task",
"Task ::: Formal definition.",
"Model ::: Pointer–Generator Network",
"Model ::: Pointer–Generator Network ::: Encoders.",
"Model ::: Pointer–Generator Network ::: Attention.",
"Model ::: Pointer–Generator Network ::: Decoder.",
"Model ::: Pretraining and Finetuning",
"Experimental Design ::: Target Languages",
"Experimental Design ::: Source Languages",
"Experimental Design ::: Hyperparameters and Data",
"Quantitative Results",
"Qualitative Results",
"Qualitative Results ::: Stem Errors",
"Qualitative Results ::: Affix Errors",
"Qualitative Results ::: Miscellaneous Errors",
"Qualitative Results ::: Error Analysis: English",
"Qualitative Results ::: Error Analysis: Spanish",
"Qualitative Results ::: Error Analysis: Zulu",
"Qualitative Results ::: Limitations",
"Related Work ::: Neural network models for inflection.",
"Related Work ::: Cross-lingual transfer in NLP.",
"Related Work ::: Acquisition of morphological inflection.",
"Conclusion and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"a38dc2ad92ff5c2cda31f3be4f22daba2e001e98"
],
"answer": [
{
"evidence": [
"Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\\rightarrow $ ue).",
"Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.",
"Analyzing next the errors concerning affixes, we find that models pretrained on HUN, ITA, DEU, and FRA (in that order) commit the fewest errors. This supports two of our previous hypotheses: First, given that ITA and FRA are both from the same language family as SPA, relatedness seems to be benficial for learning of the second language. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.",
"In Table TABREF39, the errors for Zulu are shown, and Table TABREF19 reveals the relative performance for different source languages: TUR $>$ HUN $>$ DEU $>$ ITA $>$ FRA $>$ NAV $>$ EUS $>$ QVH. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\\rightarrow $ ue).",
"We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.",
"Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.",
"Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"266852dc68f118fe7f769bd3dbfcb6c1db052e63"
],
"answer": [
{
"evidence": [
"Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing."
],
"extractive_spans": [
"Zulu"
],
"free_form_answer": "",
"highlighted_evidence": [
"We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"06c8cd73539b38eaffa4705ef799087a155fc99d"
],
"answer": [
{
"evidence": [
"Neural network models, and specifically sequence-to-sequence models, have pushed the state of the art for morphological inflection – the task of learning a mapping from lemmata to their inflected forms – in the last years BIBREF6. Thus, in this work, we experiment on such models, asking not what they learn, but, motivated by the respective research on human subjects, the related question of how what they learn depends on their prior knowledge. We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages. We aim at finding answers to two main questions: (i) Do errors systematically differ between source languages? (ii) Do these differences seem explainable, given the properties of the source and target languages? In other words, we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1, i.e., the \"native language\", in neural network models.",
"For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison. As we can see, the results are similar to the test set results for all language combinations. We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories."
],
"extractive_spans": [],
"free_form_answer": "Comparison of test accuracies of neural network models on an inflection task and qualitative analysis of the errors",
"highlighted_evidence": [
"We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages.",
"For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison.",
"We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"d48c287a47f6b52d11af7fb02494192a5b5e04cb"
],
"answer": [
{
"evidence": [
"To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology."
],
"extractive_spans": [
"English, Spanish and Zulu"
],
"free_form_answer": "",
"highlighted_evidence": [
"To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Are agglutinative languages used in the prediction of both prefixing and suffixing languages?",
"What is an example of a prefixing language?",
"How is the performance on the task evaluated?",
"What are the tree target languages studied in the paper?"
],
"question_id": [
"fc29bb14f251f18862c100e0d3cd1396e8f2c3a1",
"f3e96c5487d87557a661a65395b0162033dc05b3",
"74db8301d42c7e7936eb09b2171cd857744c52eb",
"587885bc86543b8f8b134c20e2c62f6251195571"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"morphology",
"morphology",
"morphology",
"morphology"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Paradigms of the English lemmas dance and eat. dance has 4 distinct inflected forms; eat has 5.",
"Table 2: WALS features from the Morphology category. 20A: 0=Exclusively concatenative, 1=N/A. 21A: 0=No case, 1=Monoexponential case, 2=Case+number, 3=N/A. 21B: 0=monoexponential TAM, 1=TAM+agreement, 2=N/A. 22A: 0=2-3 categories per word, 1=4-5 categories per word, 2=N/A, 3=6-7 categories per word, 4=8-9 categories per word. 23A: 0=Dependent marking, 1=Double marking, 2=Head marking, 3=No marking, 4=N/A. 24A: 0=Dependent marking, 1=N/A, 2=Double marking. 25A: 0=Dependent-marking, 1=Inconsistent or other, 2=N/A. 25B: 0=Non-zero marking, 1=N/A. 26A: 0=Strongly suffixing, 1=Strong prefixing, 2=Equal prefixing and suffixing. 27A: 0=No productive reduplication, 1=Full reduplication only, 2=Productive full and partial reduplication. 28A: 0=Core cases only, 1=Core and non-core, 2=No case marking, 3=No syncretism, 4=N/A. 29A: 0=Syncretic, 1=Not syncretic, 2=N/A.",
"Table 3: Test accuracy.",
"Table 4: Validation accuracy.",
"Table 5: Error analysis for ENG as the model’s L2.",
"Table 7: Error analysis for ZUL as the model’s L2.",
"Table 6: Error analysis for SPA as the model’s L2."
],
"file": [
"1-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"6-Table5-1.png",
"7-Table7-1.png",
"7-Table6-1.png"
]
} | [
"How is the performance on the task evaluated?"
] | [
[
"1910.05456-Introduction-2",
"1910.05456-Qualitative Results-0"
]
] | [
"Comparison of test accuracies of neural network models on an inflection task and qualitative analysis of the errors"
] | 129 |
1806.00722 | Dense Information Flow for Neural Machine Translation | Recently, neural machine translation has achieved remarkable progress by introducing well-designed deep neural networks into its encoder-decoder framework. From the optimization perspective, residual connections are adopted to improve learning performance for both encoder and decoder in most of these deep architectures, and advanced attention connections are applied as well. Inspired by the success of the DenseNet model in computer vision problems, in this paper, we propose a densely connected NMT architecture (DenseNMT) that is able to train more efficiently for NMT. The proposed DenseNMT not only allows dense connection in creating new features for both encoder and decoder, but also uses the dense attention structure to improve attention quality. Our experiments on multiple datasets show that DenseNMT structure is more competitive and efficient. | {
"paragraphs": [
[
"Neural machine translation (NMT) is a challenging task that attracts lots of attention in recent years. Starting from the encoder-decoder framework BIBREF0 , NMT starts to show promising results in many language pairs. The evolving structures of NMT models in recent years have made them achieve higher scores and become more favorable. The attention mechanism BIBREF1 added on top of encoder-decoder framework is shown to be very useful to automatically find alignment structure, and single-layer RNN-based structure has evolved into deeper models with more efficient transformation functions BIBREF2 , BIBREF3 , BIBREF4 .",
"One major challenge of NMT is that its models are hard to train in general due to the complexity of both the deep models and languages. From the optimization perspective, deeper models are hard to efficiently back-propagate the gradients, and this phenomenon as well as its solution is better explored in the computer vision society. Residual networks (ResNet) BIBREF5 achieve great performance in a wide range of tasks, including image classification and image segmentation. Residual connections allow features from previous layers to be accumulated to the next layer easily, and make the optimization of the model efficiently focus on refining upper layer features.",
"NMT is considered as a challenging problem due to its sequence-to-sequence generation framework, and the goal of comprehension and reorganizing from one language to the other. Apart from the encoder block that works as a feature generator, the decoder network combining with the attention mechanism bring new challenges to the optimization of the models. While nowadays best-performing NMT systems use residual connections, we question whether this is the most efficient way to propagate information through deep models. In this paper, inspired by the idea of using dense connections for training computer vision tasks BIBREF6 , we propose a densely connected NMT framework (DenseNMT) that efficiently propagates information from the encoder to the decoder through the attention component. Taking the CNN-based deep architecture as an example, we verify the efficiency of DenseNMT. Our contributions in this work include: (i) by comparing the loss curve, we show that DenseNMT allows the model to pass information more efficiently, and speeds up training; (ii) we show through ablation study that dense connections in all three blocks altogether help improve the performance, while not increasing the number of parameters; (iii) DenseNMT allows the models to achieve similar performance with much smaller embedding size; (iv) DenseNMT on IWSLT14 German-English and Turkish-English translation tasks achieves new benchmark BLEU scores, and the result on WMT14 English-German task is more competitive than the residual connections based baseline model."
],
[
"In this section, we introduce our DenseNMT architecture. In general, compared with residual connected NMT models, DenseNMT allows each layer to provide its information to all subsequent layers directly. Figure FIGREF9 - FIGREF15 show the design of our model structure by parts.",
"We start with the formulation of a regular NMT model. Given a set of sentence pairs INLINEFORM0 , an NMT model learns parameter INLINEFORM1 by maximizing the log-likelihood function: DISPLAYFORM0 ",
"For every sentence pair INLINEFORM0 , INLINEFORM1 is calculated based on the decomposition: DISPLAYFORM0 ",
"where INLINEFORM0 is the length of sentence INLINEFORM1 . Typically, NMT models use the encoder-attention-decoder framework BIBREF1 , and potentially use multi-layer structure for both encoder and decoder. Given a source sentence INLINEFORM2 with length INLINEFORM3 , the encoder calculates hidden representations by layer. We denote the representation in the INLINEFORM4 -th layer as INLINEFORM5 , with dimension INLINEFORM6 , where INLINEFORM7 is the dimension of features in layer INLINEFORM8 . The hidden representation at each position INLINEFORM9 is either calculated by: DISPLAYFORM0 ",
"for recurrent transformation INLINEFORM0 such as LSTM and GRU, or by: DISPLAYFORM0 ",
"for parallel transformation INLINEFORM0 . On the other hand, the decoder layers INLINEFORM1 follow similar structure, while getting extra representations from the encoder side. These extra representations are also called attention, and are especially useful for capturing alignment information.",
"In our experiments, we use convolution based transformation for INLINEFORM0 due to both its efficiency and high performance, more formally, DISPLAYFORM0 ",
" INLINEFORM0 is the gated linear unit proposed in BIBREF11 and the kernel size is INLINEFORM1 . DenseNMT is agnostic to the transformation function, and we expect it to also work well combining with other transformations, such as LSTM, self-attention and depthwise separable convolution."
],
[
"Different from residual connections, later layers in the dense encoder are able to use features from all previous layers by concatenating them: DISPLAYFORM0 ",
"Here, INLINEFORM0 is defined in Eq. ( EQREF10 ), INLINEFORM1 represents concatenation operation. Although this brings extra connections to the network, with smaller number of features per layer, the architecture encourages feature reuse, and can be more compact and expressive. As shown in Figure FIGREF9 , when designing the model, the hidden size in each layer is much smaller than the hidden size of the corresponding layer in the residual-connected model.",
"While each encoder layer perceives information from its previous layers, each decoder layer INLINEFORM0 has two information sources: previous layers INLINEFORM1 , and attention values INLINEFORM2 . Therefore, in order to allow dense information flow, we redefine the generation of INLINEFORM3 -th layer as a nonlinear function over all its previous decoder layers and previous attentions. This can be written as: DISPLAYFORM0 ",
"where INLINEFORM0 is the attention value using INLINEFORM1 -th decoder layer and information from encoder side, which will be specified later. Figure FIGREF13 shows the comparison of a dense decoder with a regular residual decoder. The dimensions of both attention values and hidden layers are chosen with smaller values, yet the perceived information for each layer consists of a higher dimension vector with more representation power. The output of the decoder is a linear transformation of the concatenation of all layers by default. To compromise to the increment of dimensions, we use summary layers, which will be introduced in Section 3.3. With summary layers, the output of the decoder is only a linear transformation of the concatenation of the upper few layers."
],
[
"Prior works show a trend of designing more expressive attention mechanisms (as discussed in Section 2). However, most of them only use the last encoder layer. In order to pass more abundant information from the encoder side to the decoder side, the attention block needs to be more expressive. Following the recent development of designing attention architectures, we propose DenseAtt as the dense attention block, which serves for the dense connection between the encoder and the decoder side. More specifically, two options are proposed accordingly. For each decoding step in the corresponding decoder layer, the two options both calculate attention using multiple encoder layers. The first option is more compressed, while the second option is more expressive and flexible. We name them as DenseAtt-1 and DenseAtt-2 respectively. Figure FIGREF15 shows the architecture of (a) multi-step attention BIBREF2 , (b) DenseAtt-1, and (c) DenseAtt-2 in order. In general, a popular multiplicative attention module can be written as: DISPLAYFORM0 ",
"where INLINEFORM0 represent query, key, value respectively. We will use this function INLINEFORM1 in the following descriptions.",
"In the decoding phase, we use a layer-wise attention mechanism, such that each decoder layer absorbs different attention information to adjust its output. Instead of treating the last hidden layer as the encoder's output, we treat the concatenation of all hidden layers from encoder side as the output. The decoder layer multiplies with the encoder output to obtain the attention weights, which is then multiplied by a linear combination of the encoder output and the sentence embedding. The attention output of each layer INLINEFORM0 can be formally written as: DISPLAYFORM0 ",
"where INLINEFORM0 is the multiplicative attention function, INLINEFORM1 is a concatenation operation that combines all features, and INLINEFORM2 is a linear transformation function that maps each variable to a fixed dimension in order to calculate the attention value. Notice that we explicitly write the INLINEFORM3 term in ( EQREF19 ) to keep consistent with the multi-step attention mechanism, as pictorially shown in Figure FIGREF15 (a).",
"Notice that the transformation INLINEFORM0 in DenseAtt-1 forces the encoder layers to be mixed before doing attention. Since we use multiple hidden layers from the encoder side to get an attention value, we can alternatively calculate multiple attention values before concatenating them. In another word, the decoder layer can get different attention values from different encoder layers. This can be formally expressed as: DISPLAYFORM0 ",
"where the only difference from Eq. ( EQREF19 ) is that the concatenation operation is substituted by a summation operation, and is put after the attention function INLINEFORM0 . This method further increases the representation power in the attention block, while maintaining the same number of parameters in the model."
],
[
"Since the number of features fed into nonlinear operation is accumulated along the path, the parameter size increases accordingly. For example, for the INLINEFORM0 -th encoder layer, the input dimension of features is INLINEFORM1 , where INLINEFORM2 is the feature dimension in previous layers, INLINEFORM3 is the embedding size. In order to avoid the calculation bottleneck for later layers due to large INLINEFORM4 , we introduce the summary layer for deeper models. It summarizes the features for all previous layers and projects back to the embedding size, so that later layers of both the encoder and the decoder side do not need to look back further. The summary layers can be considered as contextualized word vectors in a given sentence BIBREF12 . We add one summary layer after every INLINEFORM5 layers, where INLINEFORM6 is the hyperparameter we introduce. Accordingly, the input dimension of features is at most INLINEFORM7 for the last layer of the encoder. Moreover, combined with the summary layer setting, our DenseAtt mechanism allows each decoder layer to calculate the attention value focusing on the last few encoder layers, which consists of the last contextual embedding layer and several dense connected layers with low dimension. In practice, we set INLINEFORM8 as 5 or 6."
],
[
"Figure FIGREF9 and Figure FIGREF13 show the difference of information flow compared with a residual-based encoder/decoder. For residual-based models, each layer can absorb a single high-dimensional vector from its previous layer as the only information, while for DenseNMT, each layer can utilize several low-dimensional vectors from its previous layers and a high-dimensional vector from the first layer (embedding layer) as its information. In DenseNMT, each layer directly provides information to its later layers. Therefore, the structure allows feature reuse, and encourages upper layers to focus on creating new features. Furthermore, the attention block allows the embedding vectors (as well as other hidden layers) to guide the decoder's generation more directly; therefore, during back-propagation, the gradient information can be passed directly to all encoder layers simultaneously."
],
[
"We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German.",
"We preprocess the IWSLT14 German-English dataset following byte-pair-encoding (BPE) method BIBREF13 . We learn 25k BPE codes using the joint corpus of source and target languages. We randomly select 7k from IWSLT14 German-English as the development set , and the test set is a concatenation of dev2010, tst2010, tst2011 and tst2012, which is widely used in prior works BIBREF14 , BIBREF15 , BIBREF16 .",
"For the Turkish-English translation task, we use the data provided by IWSLT14 BIBREF17 and the SETimes corpus BIBREF17 following BIBREF18 . After removing sentence pairs with length ratio over 9, we obtain 360k sentence pairs. Since there is little commonality between the two languages, we learn 30k size BPE codes separately for Turkish and English. In addition to this, we give another preprocessing for Turkish sentences and use word-level English corpus. For Turkish sentences, following BIBREF19 , BIBREF18 , we use the morphology tool Zemberek with disambiguation by the morphological analysis BIBREF20 and removal of non-surface tokens. Following BIBREF18 , we concatenate tst2011, tst2012, tst2013, tst2014 as our test set. We concatenate dev2010 and tst2010 as the development set.",
"We preprocess the WMT14 English-German dataset using a BPE code size of 40k. We use the concatenation of newstest2013 and newstest2012 as the development set."
],
[
"As the baseline model (BASE-4L) for IWSLT14 German-English and Turkish-English, we use a 4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256 by default. As a comparison, we design a densely connected model with same number of layers, but the hidden size is set as 128 in order to keep the model size consistent. The models adopting DenseAtt-1, DenseAtt-2 are named as DenseNMT-4L-1 and DenseNMT-4L-2 respectively. In order to check the effect of dense connections on deeper models, we also construct a series of 8-layer models. We set the hidden number to be 192, such that both 4-layer models and 8-layer models have similar number of parameters. For dense structured models, we set the dimension of hidden states to be 96.",
"Since NMT model usually allocates a large proportion of its parameters to the source/target sentence embedding and softmax matrix, we explore in our experiments to what extent decreasing the dimensions of the three parts would harm the BLEU score. We change the dimensions of the source embedding, the target embedding as well as the softmax matrix simultaneously to smaller values, and then project each word back to the original embedding dimension through a linear transformation. This significantly reduces the number of total parameters, while not influencing the upper layer structure of the model.",
"We also introduce three additional models we use for ablation study, all using 4-layer structure. Based on the residual connected BASE-4L model, (1) DenseENC-4L only makes encoder side dense, (2) DenseDEC-4L only makes decoder side dense, and (3) DenseAtt-4L only makes the attention dense using DenseAtt-2. There is no summary layer in the models, and both DenseENC-4L and DenseDEC-4L use hidden size 128. Again, by reducing the hidden size, we ensure that different 4-layer models have similar model sizes.",
"Our design for the WMT14 English-German model follows the best performance model provided in BIBREF2 . The construction of our model is straightforward: our 15-layer model DenseNMT-En-De-15 uses dense connection with DenseAtt-2, INLINEFORM0 . The hidden number in each layer is INLINEFORM1 that of the original model, while the kernel size maintains the same."
],
[
"We use Nesterov Accelerated Gradient (NAG) BIBREF21 as our optimizer, and the initial learning rate is set to INLINEFORM0 . For German-English and Turkish-English experiments, the learning rate will shrink by 10 every time the validation loss increases. For the English-German dataset, in consistent with BIBREF2 , the learning rate will shrink by 10 every epoch since the first increment of validation loss. The system stops training until the learning rate is less than INLINEFORM1 . All models are trained end-to-end without any warmstart techniques. We set our batch size for the WMT14 English-German dataset to be 48, and additionally tune the length penalty parameter, in consistent with BIBREF2 . For other datasets, we set batch size to be 32. During inference, we use a beam size of 5."
],
[
"We first show that DenseNMT helps information flow more efficiently by presenting the training loss curve. All hyperparameters are fixed in each plot, only the models are different. In Figure FIGREF30 , the loss curves for both training and dev sets (before entering the finetuning period) are provided for De-En, Tr-En and Tr-En-morph. For clarity, we compare DenseNMT-4L-2 with BASE-4L. We observe that DenseNMT models are consistently better than residual-connected models, since their loss curves are always below those of the baseline models. The effect is more obvious on the WMT14 English-German dataset. We rerun the best model provided by BIBREF2 and compare with our model. In Figure FIGREF33 , where train/test loss curve are provided, DenseNMT-En-De-15 reaches the same level of loss and starts finetuning (validation loss starts to increase) at epoch 13, which is 35% faster than the baseline.",
"Adding dense connections changes the architecture, and would slightly influence training speed. For the WMT14 En-De experiments, the computing time for both DenseNMT and the baseline (with similar number of parameters and same batch size) tested on single M40 GPU card are 1571 and 1710 word/s, respectively. While adding dense connections influences the per-iteration training slightly (8.1% reduction of speed), it uses many fewer epochs, and achieves a better BLEU score. In terms of training time, DenseNMT uses 29.3%(before finetuning)/22.9%(total) less time than the baseline."
],
[
"Table TABREF32 shows the results for De-En, Tr-En, Tr-En-morph datasets, where the best accuracy for models with the same depth and of similar sizes are marked in boldface. In almost all genres, DenseNMT models are significantly better than the baselines. With embedding size 256, where all models achieve their best scores, DenseNMT outperforms baselines by 0.7-1.0 BLEU on De-En, 0.5-1.3 BLEU on Tr-En, 0.8-1.5 BLEU on Tr-En-morph. We observe significant gain using other embedding sizes as well.",
"Furthermore, in Table TABREF36 , we investigate DenseNMT models through ablation study. In order to make the comparison fair, six models listed have roughly the same number of parameters. On De-En, Tr-En and Tr-En-morph, we see improvement by making the encoder dense, making the decoder dense, and making the attention dense. Fully dense-connected model DenseNMT-4L-1 further improves the translation accuracy. By allowing more flexibility in dense attention, DenseNMT-4L-2 provides the highest BLEU scores for all three experiments.",
"From the experiments, we have seen that enlarging the information flow in the attention block benefits the models. The dense attention block provides multi-layer information transmission from the encoder to the decoder, and to the output as well. Meanwhile, as shown by the ablation study, the dense-connected encoder and decoder both give more powerful representations than the residual-connected counterparts. As a result, the integration of the three parts improve the accuracy significantly."
],
[
"From Table TABREF32 , we also observe that DenseNMT performs better with small embedding sizes compared to residual-connected models with regular embedding size. For example, on Tr-En model, the 8-layer DenseNMT-8L-2 model with embedding size 64 matches the BLEU score of the 8-layer BASE model with embedding size 256, while the number of parameter of the former one is only INLINEFORM0 of the later one. In all genres, DenseNMT model with embedding size 128 is comparable or even better than the baseline model with embedding size 256.",
"While overlarge embedding sizes hurt accuracy because of overfitting issues, smaller sizes are not preferable because of insufficient representation power. However, our dense models show that with better model design, the embedding information can be well concentrated on fewer dimensions, e.g., 64. This is extremely helpful when building models on mobile and small devices where the model size is critical. While there are other works that stress the efficiency issue by using techniques such as separable convolution BIBREF3 , and shared embedding BIBREF4 , our DenseNMT framework is orthogonal to those approaches. We believe that other techniques would produce more efficient models through combining with our DenseNMT framework."
],
[
"For the IWSLT14 German-English dataset, we compare with the best results reported from literatures. To be consistent with prior works, we also provide results using our model directly on the dataset without BPE preprocessing. As shown in Table TABREF39 , DenseNMT outperforms the phrase-structure based network NPMT BIBREF16 (with beam size 10) by 1.2 BLEU, using a smaller beam size, and outperforms the actor-critic method based algorithm BIBREF15 by 2.8 BLEU. For reference, our model trained on the BPE preprocessed dataset achieves 32.26 BLEU, which is 1.93 BLEU higher than our word-based model. For Turkish-English task, we compare with BIBREF19 which uses the same morphology preprocessing as our Tr-En-morph. As shown in Table TABREF37 , our baseline is higher than the previous result, and we further achieve new benchmark result with 24.36 BLEU average score. For WMT14 English-German, from Table TABREF41 , we can see that DenseNMT outperforms ConvS2S model by 0.36 BLEU score using 35% fewer training iterations and 20% fewer parameters. We also compare with another convolution based NMT model: SliceNet BIBREF3 , which explores depthwise separable convolution architectures. SliceNet-Full matches our result, and SliceNet-Super outperforms by 0.58 BLEU score. However, both models have 2.2x more parameters than our model. We expect DenseNMT structure could help improve their performance as well."
],
[
"In this work, we have proposed DenseNMT as a dense-connection framework for translation tasks, which uses the information from embeddings more efficiently, and passes abundant information from the encoder side to the decoder side. Our experiments have shown that DenseNMT is able to speed up the information flow and improve translation accuracy. For the future work, we will combine dense connections with other deep architectures, such as RNNs BIBREF7 and self-attention networks BIBREF4 ."
]
],
"section_name": [
"Introduction",
"DenseNMT",
"Dense encoder and decoder",
"Dense attention",
"Summary layers",
"Analysis of information flow",
"Datasets",
"Model and architect design",
"Training setting",
"Training curve",
"DenseNMT improves accuracy with similar architectures and model sizes",
"DenseNMT with smaller embedding size",
"DenseNMT compares with state-of-the-art results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"99949e192d00f333149953b64edf7e6a9477fb4a"
],
"answer": [
{
"evidence": [
"As the baseline model (BASE-4L) for IWSLT14 German-English and Turkish-English, we use a 4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256 by default. As a comparison, we design a densely connected model with same number of layers, but the hidden size is set as 128 in order to keep the model size consistent. The models adopting DenseAtt-1, DenseAtt-2 are named as DenseNMT-4L-1 and DenseNMT-4L-2 respectively. In order to check the effect of dense connections on deeper models, we also construct a series of 8-layer models. We set the hidden number to be 192, such that both 4-layer models and 8-layer models have similar number of parameters. For dense structured models, we set the dimension of hidden states to be 96."
],
"extractive_spans": [
" 4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256"
],
"free_form_answer": "",
"highlighted_evidence": [
"As the baseline model (BASE-4L) for IWSLT14 German-English and Turkish-English, we use a 4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256 by default."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"8d4cbe2a29b96fd4828148a9dcbc3eda632727fc"
],
"answer": [
{
"evidence": [
"Table TABREF32 shows the results for De-En, Tr-En, Tr-En-morph datasets, where the best accuracy for models with the same depth and of similar sizes are marked in boldface. In almost all genres, DenseNMT models are significantly better than the baselines. With embedding size 256, where all models achieve their best scores, DenseNMT outperforms baselines by 0.7-1.0 BLEU on De-En, 0.5-1.3 BLEU on Tr-En, 0.8-1.5 BLEU on Tr-En-morph. We observe significant gain using other embedding sizes as well."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" In almost all genres, DenseNMT models are significantly better than the baselines."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"f082601cbeac77ac91a9ffc5f67f60793490f945"
],
"answer": [
{
"evidence": [
"We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German."
],
"extractive_spans": [
"German-English",
"Turkish-English",
"English-German"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0713fba151dd43c9169a7711fbe85a986e201788"
],
"answer": [
{
"evidence": [
"We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German."
],
"extractive_spans": [],
"free_form_answer": "IWSLT14 German-English, IWSLT14 Turkish-English, WMT14 English-German",
"highlighted_evidence": [
"We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what are the baselines?",
"did they outperform previous methods?",
"what language pairs are explored?",
"what datasets were used?"
],
"question_id": [
"26b5c090f72f6d51e5d9af2e470d06b2d7fc4a98",
"8c0621016e96d86a7063cb0c9ec20c76a2dba678",
"f1214a05cc0e6d870c789aed24a8d4c768e1db2f",
"41d3ab045ef8e52e4bbe5418096551a22c5e9c43"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Comparison of dense-connected encoder and residual-connected encoder. Left: regular residual-connected encoder. Right: dense-connected encoder. Information is directly passed from blue blocks to the green block.",
"Figure 2: Comparison of dense-connected decoder and residual-connected decoder. Left: regular residual-connected decoder. Right: dense-connected decoder. Ellipsoid stands for attention block. Information is directly passed from blue blocks to the green block.",
"Figure 3: Illustration of DenseAtt mechanisms. For clarity, We only plot the attention block for a single decoder layer. (a): multi-step attention (Gehring et al., 2017), (b): DenseAtt-1, (c): DenseAtt-2. L(·) is the linear projection function. The ellipsoid stands for the core attention operation as shown in Eq. (8).",
"Figure 4: Training curve (T) and validation curve (V) comparison. Left: IWSLT14 German-English (De-En). Middle: Turkish-English, BPE encoding (Tr-En). Right: TurkishEnglish, morphology encoding (Tr-En-morph).",
"Figure 5: Training curve and test curve comparison on WMT14 English-German translation task.",
"Table 1: BLEU score on IWSLT German-English and Turkish-English translation tasks. We compare models using different embedding sizes, and keep the model size consistent within each column.",
"Table 2: Ablation study for encoder block, decoder block, and attention block in DenseNMT.",
"Table 3: Accuracy on Turkish-English translation task in terms of BLEU score.",
"Table 4: Accuracy on IWSLT14 German-English translation task in terms of BLEU score.",
"Table 5: Accuracy on WMT14 English-German translation task in terms of BLEU score."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"6-Figure5-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"8-Table5-1.png"
]
} | [
"what datasets were used?"
] | [
[
"1806.00722-Datasets-0"
]
] | [
"IWSLT14 German-English, IWSLT14 Turkish-English, WMT14 English-German"
] | 131 |
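The rows in this dump all share the same annotation layout: a full_text object with parallel paragraphs and section_name lists, a qas object whose answers entries carry evidence, extractive_spans, free_form_answer, highlighted_evidence, unanswerable and yes_no fields, plus figures_and_tables, the gold question/retrieval/answer columns, and a row index. A minimal sketch of how one such row could be inspected is given below; the json.loads step, the function name, and the assumption that the question and answers lists line up one-to-one (as they do in the rows shown here) are illustrative assumptions, not part of the dump itself.

```python
import json

def summarize_row(raw: str) -> None:
    """Print each question of one row together with its recorded gold answer.

    `raw` is assumed to be a JSON serialization of a single row carrying the
    fields seen in this dump (qas, answers, extractive_spans, ...); the exact
    on-disk format is an assumption made for illustration.
    """
    row = json.loads(raw)
    qas = row.get("qas", {})
    for question, annotation in zip(qas.get("question", []), qas.get("answers", [])):
        for ans in annotation.get("answer", []):
            if ans.get("unanswerable"):
                label = "<unanswerable>"
            elif ans.get("free_form_answer"):
                label = ans["free_form_answer"]
            elif ans.get("extractive_spans"):
                label = "; ".join(ans["extractive_spans"])
            else:
                # yes_no is true, false, or null in the dump
                label = {True: "yes", False: "no"}.get(ans.get("yes_no"), "<none>")
            print(f"Q: {question}")
            print(f"A: {label}")
```

Run on the row that ends just above, the yes/no annotation ("did they outperform previous methods?") would print "yes", and the dataset question would print its free-form answer rather than the highlighted evidence.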
1904.08386 | Casting Light on Invisible Cities: Computationally Engaging with Literary Criticism | Literary critics often attempt to uncover meaning in a single work of literature through careful reading and analysis. Applying natural language processing methods to aid in such literary analyses remains a challenge in digital humanities. While most previous work focuses on "distant reading" by algorithmically discovering high-level patterns from large collections of literary works, here we sharpen the focus of our methods to a single literary theory about Italo Calvino's postmodern novel Invisible Cities, which consists of 55 short descriptions of imaginary cities. Calvino has provided a classification of these cities into eleven thematic groups, but literary scholars disagree as to how trustworthy his categorization is. Due to the unique structure of this novel, we can computationally weigh in on this debate: we leverage pretrained contextualized representations to embed each city's description and use unsupervised methods to cluster these embeddings. Additionally, we compare results of our computational approach to similarity judgments generated by human readers. Our work is a first step towards incorporating natural language processing into literary criticism. | {
"paragraphs": [
[
"Literary critics form interpretations of meaning in works of literature. Building computational models that can help form and test these interpretations is a fundamental goal of digital humanities research BIBREF0 . Within natural language processing, most previous work that engages with literature relies on “distant reading” BIBREF1 , which involves discovering high-level patterns from large collections of stories BIBREF2 , BIBREF3 . We depart from this trend by showing that computational techniques can also engage with literary criticism at a closer distance: concretely, we use recent advances in text representation learning to test a single literary theory about the novel Invisible Cities by Italo Calvino.",
"Framed as a dialogue between the traveler Marco Polo and the emperor Kublai Khan, Invisible Cities consists of 55 prose poems, each of which describes an imaginary city. Calvino categorizes these cities into eleven thematic groups that deal with human emotions (e.g., desires, memories), general objects (eyes, sky, signs), and unusual properties (continuous, hidden, thin). Many critics argue that Calvino's labels are not meaningful, while others believe that there is a distinct thematic separation between the groups, including the author himself BIBREF4 . The unique structure of this novel — each city's description is short and self-contained (Figure FIGREF1 ) — allows us to computationally examine this debate.",
"As the book is too small to train any models, we leverage recent advances in large-scale language model-based representations BIBREF5 , BIBREF6 to compute a representation of each city. We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects.",
"While prior work has computationally analyzed a single book BIBREF7 , our work goes beyond simple word frequency or n-gram counts by leveraging the power of pretrained language models to engage with literary criticism. Admittedly, our approach and evaluations are specific to Invisible Cities, but we believe that similar analyses of more conventionally-structured novels could become possible as text representation methods improve. We also highlight two challenges of applying computational methods to literary criticisms: (1) text representation methods are imperfect, especially when given writing as complex as Calvino's; and (2) evaluation is difficult because there is no consensus among literary critics on a single “correct” interpretation."
],
[
"Before describing our method and results, we first review critical opinions on both sides of whether Calvino's thematic groups meaningfully characterize his city descriptions."
],
[
"We focus on measuring to what extent computers can recover Calvino's thematic groupings when given just raw text of the city descriptions. At a high level, our approach (Figure FIGREF4 ) involves (1) computing a vector representation for every city and (2) performing unsupervised clustering of these representations. The rest of this section describes both of these steps in more detail."
],
[
"While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm."
],
[
"Given 55 city representations, how do we group them into eleven clusters of five cities each? Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20 , but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ( INLINEFORM0 possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21 : given a set of predicted clusters INLINEFORM1 and ground-truth clusters INLINEFORM2 that both partition a set of INLINEFORM3 data points, INLINEFORM4 "
],
[
"While the results from the above section allow us to compare our three computational methods against each other, we additionally collect human judgments to further ground our results. In this section, we first describe our human experiment before quantitatively analyzing our results."
],
[
"We compare clusters computed on different representations using community purity; additionally, we compare these computational methods to humans by their accuracy on the odd-one-out task.",
"City representations computed using language model-based representation (ELMo and BERT) achieve significantly higher purity than a clustering induced from random representations, indicating that there is at least some meaningful coherence to Calvino's thematic groups (first row of Table TABREF11 ). ELMo representations yield the highest purity among the three methods, which is surprising as BERT is a bigger model trained on data from books (among other domains). Both ELMo and BERT outperform GloVe, which intuitively makes sense because the latter do not model the order or structure of the words in each description.",
"While the purity of our methods is higher than that of a random clustering, it is still far below 1. To provide additional context to these results, we now switch to our “odd-one-out” task and compare directly to human performance. For each triplet of cities, we identify the intruder as the city with the maximum Euclidean distance from the other two. Interestingly, crowd workers achieve only slightly higher accuracy than ELMo city representations; their interannotator agreement is also low, which indicates that close reading to analyze literary coherence between multiple texts is a difficult task, even for human annotators. Overall, results from both computational and human approaches suggests that the author-assigned labels are not entirely arbitrary, as we can reliably recover some of the thematic groups."
],
[
"Our quantitative results suggest that while vector-based city representations capture some thematic similarities, there is much room for improvement. In this section, we first investigate whether the learned clusters provide evidence for any arguments put forth by literary critics on the novel. Then, we explore possible reasons that the learned clusters deviate from Calvino's."
],
[
"Most previous work within the NLP community applies distant reading BIBREF1 to large collections of books, focusing on modeling different aspects of narratives such as plots and event sequences BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , characters BIBREF2 , BIBREF26 , BIBREF27 , BIBREF28 , and narrative similarity BIBREF3 . In the same vein, researchers in computational literary analysis have combined statistical techniques and linguistics theories to perform quantitative analysis on large narrative texts BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , but these attempts largely rely on techniques such as word counting, topic modeling, and naive Bayes classifiers and are therefore not able to capture the meaning of sentences or paragraphs BIBREF34 . While these works discover general patterns from multiple literary works, we are the first to use cutting-edge NLP techniques to engage with specific literary criticism about a single narrative.",
"There has been other computational work that focuses on just a single book or a small number of books, much of it focused on network analysis: BIBREF35 extract character social networks from Alice in Wonderland, while BIBREF36 recover social networks from 19th century British novels. BIBREF37 disentangles multiple narrative threads within the novel Infinite Jest, while BIBREF7 provides several automated statistical methods for close reading and test them on the award-winning novel Cloud Atlas (2004). Compared to this work, we push further on modeling the content of the narrative by leveraging pretrained language models."
],
[
"Our work takes a first step towards computationally engaging with literary criticism on a single book using state-of-the-art text representation methods. While we demonstrate that NLP techniques can be used to support literary analyses and obtain new insights, they also have clear limitations (e.g., in understanding abstract themes). As text representation methods become more powerful, we hope that (1) computational tools will become useful for analyzing novels with more conventional structures, and (2) literary criticism will be used as a testbed for evaluating representations."
],
[
"We thank the anonymous reviewers for their insightful comments. Additionally, we thank Nader Akoury, Garrett Bernstein, Chenghao Lv, Ari Kobren, Kalpesh Krishna, Saumya Lal, Tu Vu, Zhichao Yang, Mengxue Zhang and the UMass NLP group for suggestions that improved the paper's clarity, coverage of related work, and analysis experiments."
]
],
"section_name": [
"Introduction",
"Literary analyses of Invisible Cities",
"A Computational Analysis",
"Embedding city descriptions",
"Clustering city representations",
"Evaluating clustering assignments",
"Quantitative comparison",
"Examining the learned clusters",
"Related work",
"Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"e922a0f6eac0005885474470b7736de70242bb0e"
],
"answer": [
{
"evidence": [
"While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm."
],
"extractive_spans": [
"We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm."
],
"free_form_answer": "",
"highlighted_evidence": [
"While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"0e5c9c260e8ca6a68b18fb79abfb55a275eca5ba"
],
"answer": [
{
"evidence": [
"As the book is too small to train any models, we leverage recent advances in large-scale language model-based representations BIBREF5 , BIBREF6 to compute a representation of each city. We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects."
],
"extractive_spans": [],
"free_form_answer": "Using crowdsourcing ",
"highlighted_evidence": [
"We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"071a9ef44d77bb5d6274e45217df6ecb1025fe8d"
],
"answer": [
{
"evidence": [
"Given 55 city representations, how do we group them into eleven clusters of five cities each? Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20 , but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ( INLINEFORM0 possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21 : given a set of predicted clusters INLINEFORM1 and ground-truth clusters INLINEFORM2 that both partition a set of INLINEFORM3 data points, INLINEFORM4"
],
"extractive_spans": [
" We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. "
],
"free_form_answer": "",
"highlighted_evidence": [
"Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20 , but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ( INLINEFORM0 possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21 : given a set of predicted clusters INLINEFORM1 and ground-truth clusters INLINEFORM2 that both partition a set of INLINEFORM3 data points, INLINEFORM4"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How do they model a city description using embeddings?",
"How do they obtain human judgements?",
"Which clustering method do they use to cluster city description embeddings?"
],
"question_id": [
"508580af51483b5fb0df2630e8ea726ff08d537b",
"89d1687270654979c53d0d0e6a845cdc89414c67",
"fc6cfac99636adda28654e1e19931c7394d76c7c"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Calvino labels the thematically-similar cities in the top row as cities & the dead. However, although the bottom two cities share a theme of desire, he assigns them to different groups.",
"Figure 2: We first embed each city by averaging token representations derived from a pretrained model such as ELMo. Then, we feed the city embeddings to a clustering algorithm and analyze the learned clusters.",
"Table 1: Results from cluster purity and accuracy on the “odd-one-out” task suggests that Calvino’s thematic groups are not completely arbitrary."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png"
]
} | [
"How do they obtain human judgements?"
] | [
[
"1904.08386-Introduction-2"
]
] | [
"Using crowdsourcing "
] | 133 |
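The Invisible Cities row above describes its pipeline only in prose: each of the 55 city descriptions is embedded as a TF-IDF-weighted mean of pretrained token vectors, reduced to 40 dimensions with PCA, and then partitioned into eleven equal clusters by repeatedly proposing random exchanges of cluster memberships and keeping an exchange only when a "cluster strength" score (the relative difference between intra-group and inter-group Euclidean distance) increases; the result is scored against Calvino's labels with cluster purity. A minimal NumPy sketch of that swap search follows. The exact form of the strength score, the fixed iteration budget standing in for a convergence test, and the integer encoding of the gold labels are assumptions, since the row gives only a verbal description and leaves the purity formula as an INLINEFORM placeholder; the purity function below uses the standard definition (for each predicted cluster, count the majority gold label and divide the total by the number of points).

```python
import numpy as np

def cluster_strength(X, labels):
    """Relative gap between mean inter-cluster and mean intra-cluster distance (assumed form)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise Euclidean distances
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(X), dtype=bool)
    intra = D[same & off_diag].mean()
    inter = D[~same].mean()
    return (inter - intra) / inter

def swap_clustering(X, n_clusters=11, n_iters=20000, seed=0):
    """Equal-size clustering by random membership exchanges that increase cluster strength."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(n_clusters), len(X) // n_clusters)
    rng.shuffle(labels)
    best = cluster_strength(X, labels)
    for _ in range(n_iters):
        i, j = rng.choice(len(X), size=2, replace=False)
        if labels[i] == labels[j]:
            continue
        labels[i], labels[j] = labels[j], labels[i]              # propose an exchange
        score = cluster_strength(X, labels)
        if score > best:
            best = score                                         # accept: strength increased
        else:
            labels[i], labels[j] = labels[j], labels[i]          # reject: revert the swap
    return labels

def purity(pred, gold):
    """Standard cluster purity: fraction of points covered by each cluster's majority gold label."""
    hits = sum(np.bincount(gold[pred == k]).max() for k in np.unique(pred))
    return hits / len(gold)
```

With X holding the 55-by-40 PCA-reduced city vectors and gold holding Calvino's eleven groups encoded as integers 0-10 (both hypothetical variable names), purity(swap_clustering(X), gold) yields the kind of score the row reports against its Table 1.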
1909.00754 | Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation | Existing approaches to dialogue state tracking rely on pre-defined ontologies consisting of a set of all possible slot types and values. Though such approaches exhibit promising performance on single-domain benchmarks, they suffer from computational complexity that increases proportionally to the number of pre-defined slots that need tracking. This issue becomes more severe when it comes to multi-domain dialogues which include larger numbers of slots. In this paper, we investigate how to approach DST using a generation framework without the pre-defined ontology list. Given each turn of user utterance and system response, we directly generate a sequence of belief states by applying a hierarchical encoder-decoder structure. In this way, the computational complexity of our model will be a constant regardless of the number of pre-defined slots. Experiments on both the multi-domain and the single domain dialogue state tracking dataset show that our model not only scales easily with the increasing number of pre-defined domains and slots but also reaches the state-of-the-art performance. | {
"paragraphs": [
[
"A Dialogue State Tracker (DST) is a core component of a modular task-oriented dialogue system BIBREF7 . For each dialogue turn, a DST module takes a user utterance and the dialogue history as input, and outputs a belief estimate of the dialogue state. Then a machine action is decided based on the dialogue state according to a dialogue policy module, after which a machine response is generated.",
"Traditionally, a dialogue state consists of a set of requests and joint goals, both of which are represented by a set of slot-value pairs (e.g. (request, phone), (area, north), (food, Japanese)) BIBREF8 . In a recently proposed multi-domain dialogue state tracking dataset, MultiWoZ BIBREF9 , a representation of dialogue state consists of a hierarchical structure of domain, slot, and value is proposed. This is a more practical scenario since dialogues often include multiple domains simultaneously.",
"Many recently proposed DSTs BIBREF2 , BIBREF10 are based on pre-defined ontology lists that specify all possible slot values in advance. To generate a distribution over the candidate set, previous works often take each of the slot-value pairs as input for scoring. However, in real-world scenarios, it is often not practical to enumerate all possible slot value pairs and perform scoring from a large dynamically changing knowledge base BIBREF11 . To tackle this problem, a popular direction is to build a fixed-length candidate set that is dynamically updated throughout the dialogue development. cpt briefly summaries the inference time complexity of multiple state-of-the-art DST models following this direction. Since the inference complexity of all of previous model is at least proportional to the number of the slots, these models will struggle to scale to multi-domain datasets with much larger numbers of pre-defined slots.",
"In this work, we formulate the dialogue state tracking task as a sequence generation problem, instead of formulating the task as a pair-wise prediction problem as in existing work. We propose the COnditional MEmory Relation Network (COMER), a scalable and accurate dialogue state tracker that has a constant inference time complexity. ",
"Specifically, our model consists of an encoder-decoder network with a hierarchically stacked decoder to first generate the slot sequences in the belief state and then for each slot generate the corresponding value sequences. The parameters are shared among all of our decoders for the scalability of the depth of the hierarchical structure of the belief states. COMER applies BERT contextualized word embeddings BIBREF12 and BPE BIBREF13 for sequence encoding to ensure the uniqueness of the representations of the unseen words. The word embeddings for sequence generation are initialized and fixed with the static word embeddings generated from BERT to have the potential of generating unseen words."
],
[
"f1 shows a multi-domain dialogue in which the user wants the system to first help book a train and then reserve a hotel. For each turn, the DST will need to track the slot-value pairs (e.g. (arrive by, 20:45)) representing the user goals as well as the domain that the slot-value pairs belongs to (e.g. train, hotel). Instead of representing the belief state via a hierarchical structure, one can also combine the domain and slot together to form a combined slot-value pair (e.g. (train; arrive by, 20:45) where the combined slot is “train; arrive by\"), which ignores the subordination relationship between the domain and the slots.",
"A typical fallacy in dialogue state tracking datasets is that they make an assumption that the slot in a belief state can only be mapped to a single value in a dialogue turn. We call this the single value assumption. Figure 2 shows an example of this fallacy from the WoZ2.0 dataset: Based on the belief state label (food, seafood), it will be impossible for the downstream module in the dialogue system to generate sample responses that return information about Chinese restaurants. A correct representation of the belief state could be (food, seafood $>$ chinese). This would tell the system to first search the database for information about seafood and then Chinese restaurants. The logical operator “ $>$ \" indicates which retrieved information should have a higher priority to be returned to the user. Thus we are interested in building DST modules capable of generating structured sequences, since this kind of sequence representation of the value is critical for accurately capturing the belief states of a dialogue."
],
[
"Given a dialogue $D$ which consists of $T$ turns of user utterances and system actions, our target is to predict the state at each turn. Different from previous methods which formulate multi-label state prediction as a collection of binary prediction problems, COMER adapts the task into a sequence generation problem via a Seq2Seq framework.",
"As shown in f3, COMER consists of three encoders and three hierarchically stacked decoders. We propose a novel Conditional Memory Relation Decoder (CMRD) for sequence decoding. Each encoder includes an embedding layer and a BiLSTM. The encoders take in the user utterance, the previous system actions, and the previous belief states at the current turn, and encodes them into the embedding space. The user encoder and the system encoder use the fixed BERT model as the embedding layer.",
"Since the slot value pairs are un-ordered set elements of a domain in the belief states, we first order the sequence of domain according to their frequencies as they appear in the training set BIBREF14 , and then order the slot value pairs in the domain according to the slot's frequencies of as they appear in a domain. After the sorting of the state elements, We represent the belief states following the paradigm: (Domain1- Slot1, Value1; Slot2, Value2; ... Domain2- Slot1, Value1; ...) for a more concise representation compared with the nested tuple representation.",
"All the CMRDs take the same representations from the system encoder, user encoder and the belief encoder as part of the input. In the procedure of hierarchical sequence generation, the first CMRD takes a zero vector for its condition input $\\mathbf {c}$ , and generates a sequence of the domains, $D$ , as well as the hidden representation of domains $H_D$ . For each $d$ in $D$ , the second CMRD then takes the corresponding $h_d$ as the condition input and generates the slot sequence $S_d$ , and representations, $H_{S,d}$ . Then for each $s$ in $S$ , the third CMRD generates the value sequence $D$0 based on the corresponding $D$1 . We update the belief state with the new $D$2 pairs and perform the procedure iteratively until a dialogue is completed. All the CMR decoders share all of their parameters.",
"Since our model generates domains and slots instead of taking pre-defined slots as inputs, and the number of domains and slots generated each turn is only related to the complexity of the contents covered in a specific dialogue, the inference time complexity of COMER is $O(1)$ with respect to the number of pre-defined slots and values."
],
[
"Let $X$ represent a user utterance or system transcript consisting of a sequence of words $\\lbrace w_1,\\ldots ,w_T\\rbrace $ . The encoder first passes the sequence $\\lbrace \\mathit {[CLS]},w_1,\\ldots ,w_T,\\mathit {[SEP]}\\rbrace $ into a pre-trained BERT model and obtains its contextual embeddings $E_{X}$ . Specifically, we leverage the output of all layers of BERT and take the average to obtain the contextual embeddings.",
"For each domain/slot appeared in the training set, if it has more than one word, such as `price range', `leave at', etc., we feed it into BERT and take the average of the word vectors to form the extra slot embedding $E_{s}$ . In this way, we map each domain/slot to a fixed embedding, which allows us to generate a domain/slot as a whole instead of a token at each time step of domain/slot sequence decoding. We also construct a static vocabulary embedding $E_{v}$ by feeding each token in the BERT vocabulary into BERT. The final static word embedding $E$ is the concatenation of the $E_{v}$ and $E_{s}$ .",
"After we obtain the contextual embeddings for the user utterance, system action, and the static embeddings for the previous belief state, we feed each of them into a Bidirectional LSTM BIBREF15 . ",
"$$\\begin{aligned}\n\\mathbf {h}_{a_t} & = \\textrm {BiLSTM}(\\mathbf {e}_{X_{a_t}}, \\mathbf {h}_{a_{t-1}}) \\\\\n\\mathbf {h}_{u_t} & = \\textrm {BiLSTM}(\\mathbf {e}_{X_{u_t}}, \\mathbf {h}_{u_{t-1}}) \\\\\n\\mathbf {h}_{b_t} & = \\textrm {BiLSTM}(\\mathbf {e}_{X_{b_t}}, \\mathbf {h}_{b_{t-1}}) \\\\\n\\mathbf {h}_{a_0} & = \\mathbf {h}_{u_0} = \\mathbf {h}_{b_0} = c_{0}, \\\\\n\\end{aligned}$$ (Eq. 7) ",
"where $c_{0}$ is the zero-initialized hidden state for the BiLSTM. The hidden size of the BiLSTM is $d_m/2$ . We concatenate the forward and the backward hidden representations of each token from the BiLSTM to obtain the token representation $\\mathbf {h}_{k_t}\\in R^{d_m}$ , $k\\in \\lbrace a,u,b\\rbrace $ at each time step $t$ . The hidden states of all time steps are concatenated to obtain the final representation of $H_{k}\\in R^{T \\times d_m}, k \\in \\lbrace a,u,B\\rbrace $ . The parameters are shared between all of the BiLSTMs."
],
[
"Inspired by Residual Dense Networks BIBREF16 , End-to-End Memory Networks BIBREF17 and Relation Networks BIBREF18 , we here propose the Conditional Memory Relation Decoder (CMRD). Given a token embedding, $\\mathbf {e}_x$ , CMRD outputs the next token, $s$ , and the hidden representation, $h_s$ , with the hierarchical memory access of different encoded information sources, $H_B$ , $H_a$ , $H_u$ , and the relation reasoning under a certain given condition $\\mathbf {c}$ , $\n\\mathbf {s}, \\mathbf {h}_s= \\textrm {CMRD}(\\mathbf {e}_x, \\mathbf {c}, H_B, H_a, H_u),\n$ ",
"the final output matrices $S,H_s \\in R^{l_s\\times d_m}$ are concatenations of all generated $\\mathbf {s}$ and $\\mathbf {h}_s$ (respectively) along the sequence length dimension, where $d_m$ is the model size, and $l_s$ is the generated sequence length. The general structure of the CMR decoder is shown in Figure 4 . Note that the CMR decoder can support additional memory sources by adding the residual connection and the attention block, but here we only show the structure with three sources: belief state representation ( $H_B$ ), system transcript representation ( $H_a$ ), and user utterance representation ( $H_u$ ), corresponding to a dialogue state tracking scenario. Since we share the parameters between all of the decoders, thus CMRD is actually a 2-dimensional auto-regressive model with respect to both the condition generation and the sequence generation task.",
"At each time step $t$ , the CMR decoder first embeds the token $x_t$ with a fixed token embedding $E\\in R^{d_e\\times d_v}$ , where $d_e$ is the embedding size and $d_v$ is the vocabulary size. The initial token $x_0$ is “[CLS]\". The embedded vector $\\textbf {e}_{x_t}$ is then encoded with an LSTM, which emits a hidden representation $\\textbf {h}_0 \\in R^{d_m}$ , $\n\\textbf {h}_0= \\textrm {LSTM}(\\textbf {e}_{x_t},\\textbf {q}_{t-1}).\n$ ",
"where $\\textbf {q}_t$ is the hidden state of the LSTM. $\\textbf {q}_0$ is initialized with an average of the hidden states of the belief encoder, the system encoder and the user encoder which produces $H_B$ , $H_a$ , $H_u$ respectively.",
" $\\mathbf {h}_0$ is then summed (element-wise) with the condition representation $\\mathbf {c}\\in R^{d_m}$ to produce $\\mathbf {h}_1$ , which is (1) fed into the attention module; (2) used for residual connection; and (3) concatenated with other $\\mathbf {h}_i$ , ( $i>1$ ) to produce the concatenated working memory, $\\mathbf {r_0}$ , for relation reasoning, $\n\\mathbf {h}_1 & =\\mathbf {h}_0+\\mathbf {c},\\\\\n\\mathbf {h}_2 & =\\mathbf {h}_1+\\text{Attn}_{\\text{belief}}(\\mathbf {h}_1,H_e),\\\\\n\\mathbf {h}_3 & = \\mathbf {h}_2+\\text{Attn}_{\\text{sys}}(\\mathbf {h}_2,H_a),\\\\\n\\mathbf {h}_4 & = \\mathbf {h}_3+\\text{Attn}_{\\text{usr}}(\\mathbf {h}_3,H_u),\\\\\n\\mathbf {r} & = \\mathbf {h}_1\\oplus \\mathbf {h}_2\\oplus \\mathbf {h}_3\\oplus \\mathbf {h}_4 \\in R^{4d_m},\n$ ",
" where $\\text{Attn}_k$ ( $k\\in \\lbrace \\text{belief}, \\text{sys},\\text{usr}\\rbrace $ ) are the attention modules applied respectively to $H_B$ , $H_a$ , $H_u$ , and $\\oplus $ means the concatenation operator. The gradients are blocked for $ \\mathbf {h}_1,\\mathbf {h}_2,\\mathbf {h}_3$ during the back-propagation stage, since we only need them to work as the supplementary memories for the relation reasoning followed.",
"The attention module takes a vector, $\\mathbf {h}\\in R^{d_m}$ , and a matrix, $H\\in R^{d_m\\times l}$ as input, where $l$ is the sequence length of the representation, and outputs $\\mathbf {h}_a$ , a weighted sum of the column vectors in $H$ . $\n\\mathbf {a} & =W_1^T\\mathbf {h}+\\mathbf {b}_1& &\\in R^{d_m},\\\\\n\\mathbf {c} &=\\text{softmax}(H^Ta)& &\\in R^l,\\\\\n\\mathbf {h} &=H\\mathbf {c}& &\\in R^{d_m},\\\\\n\\mathbf {h}_a &=W_2^T\\mathbf {h}+\\mathbf {b}_2& &\\in R^{d_m},\n$ ",
" where the weights $W_1\\in R^{d_m \\times d_m}$ , $W_2\\in R^{d_m \\times d_m}$ and the bias $b_1\\in R^{d_m}$ , $b_2\\in R^{d_m}$ are the learnable parameters.",
"The order of the attention modules, i.e., first attend to the system and the user and then the belief, is decided empirically. We can interpret this hierarchical structure as the internal order for the memory processing, since from the daily life experience, people tend to attend to the most contemporary memories (system/user utterance) first and then attend to the older history (belief states). All of the parameters are shared between the attention modules.",
"The concatenated working memory, $\\mathbf {r}_0$ , is then fed into a Multi-Layer Perceptron (MLP) with four layers, $\n\\mathbf {r}_1 & =\\sigma (W_1^T\\mathbf {r}_0+\\mathbf {b}_1),\\\\\n\\mathbf {r}_2 & =\\sigma (W_2^T\\mathbf {r}_1+\\mathbf {b}_2),\\\\\n\\mathbf {r}_3 & = \\sigma (W_3^T\\mathbf {r}_2+\\mathbf {b}_3),\\\\\n\\mathbf {h}_s & = \\sigma (W_4^T\\mathbf {r}_3+\\mathbf {b}_4),\n$ ",
" where $\\sigma $ is a non-linear activation, and the weights $W_1 \\in R^{4d_m \\times d_m}$ , $W_i \\in R^{d_m \\times d_m}$ and the bias $b_1 \\in R^{d_m}$ , $b_i \\in R^{d_m}$ are learnable parameters, and $2\\le i\\le 4$ . The number of layers for the MLP is decided by the grid search.",
"The hidden representation of the next token, $\\mathbf {h}_s$ , is then (1) emitted out of the decoder as a representation; and (2) fed into a dropout layer with drop rate $p$ , and a linear layer to generate the next token, $\n\\mathbf {h}_k & =\\text{dropout}(\\mathbf {h}_s)& &\\in R^{d_m},\\\\\n\\mathbf {h}_o & =W_k^T\\mathbf {h}_k+\\mathbf {b}_k& &\\in R^{d_e},\\\\\n\\mathbf {p}_s & =\\text{softmax}(E^T\\mathbf {h}_o)& &\\in R^{d_v},\\\\\ns & =\\text{argmax}(\\mathbf {p}_s)& &\\in R,\n$ ",
" where the weight $W_k\\in R^{d_m \\times d_e}$ and the bias $b_k\\in R^{d_e}$ are learnable parameters. Since $d_e$ is the embedding size and the model parameters are independent of the vocabulary size, the CMR decoder can make predictions on a dynamic vocabulary and implicitly supports the generation of unseen words. When training the model, we minimize the cross-entropy loss between the output probabilities, $\\mathbf {p}_s$ , and the given labels."
],
[
"We first test our model on the single domain dataset, WoZ2.0 BIBREF19 . It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3 , BIBREF20 . Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9 . It has a more complex ontology with 7 domains and 25 predefined slots. Since the combined slot-value pairs representation of the belief states has to be applied for the model with $O(n)$ ITC, the total number of slots is 35. The statistics of these two datsets are shown in Table 2 .",
"Based on the statistics from these two datasets, we can calculate the theoretical Inference Time Multiplier (ITM), $K$ , as a metric of scalability. Given the inference time complexity, ITM measures how many times a model will be slower when being transferred from the WoZ2.0 dataset, $d_1$ , to the MultiWoZ dataset, $d_2$ , $\nK= h(t)h(s)h(n)h(m)\\\\\n$ $\nh(x)=\\left\\lbrace \n\\begin{array}{lcl}\n1 & &O(x)=O(1),\\\\\n\\frac{x_{d_2}}{x_{d_1}}& & \\text{otherwise},\\\\\n\\end{array}\\right.\n\n$ ",
"where $O(x)$ means the Inference Time Complexity (ITC) of the variable $x$ . For a model having an ITC of $O(1)$ with respect to the number of slots $n$ , and values $m$ , the ITM will be a multiplier of 2.15x, while for an ITC of $O(n)$ , it will be a multiplier of 25.1, and 1,143 for $O(mn)$ .",
"As a convention, the metric of joint goal accuracy is used to compare our model to previous work. The joint goal accuracy only regards the model making a successful belief state prediction if all of the slots and values predicted are exactly matched with the labels provided. This metric gives a strict measurement that tells how often the DST module will not propagate errors to the downstream modules in a dialogue system. In this work, the model with the highest joint accuracy on the validation set is evaluated on the test set for the test joint accuracy measurement."
],
[
"We use the $\\text{BERT}_\\text{large}$ model for both contextual and static embedding generation. All LSTMs in the model are stacked with 2 layers, and only the output of the last layer is taken as a hidden representation. ReLU non-linearity is used for the activation function, $\\sigma $ .",
"The hyper-parameters of our model are identical for both the WoZ2.0 and the MultiwoZ datasets: dropout rate $p=0.5$ , model size $d_m=512$ , embedding size $d_e=1024$ . For training on WoZ2.0, the model is trained with a batch size of 32 and the ADAM optimizer BIBREF21 for 150 epochs, while for MultiWoZ, the AMSGrad optimizer BIBREF22 and a batch size of 16 is adopted for 15 epochs of training. For both optimizers, we use a learning rate of 0.0005 with a gradient clip of 2.0. We initialize all weights in our model with Kaiming initialization BIBREF23 and adopt zero initialization for the bias. All experiments are conducted on a single NVIDIA GTX 1080Ti GPU."
],
[
"To measure the actual inference time multiplier of our model, we evaluate the runtime of the best-performing models on the validation sets of both the WoZ2.0 and MultiWoZ datasets. During evaluation, we set the batch size to 1 to avoid the influence of data parallelism and sequence padding. On the validation set of WoZ2.0, we obtain a runtime of 65.6 seconds, while on MultiWoZ, the runtime is 835.2 seconds. Results are averaged across 5 runs. Considering that the validation set of MultiWoZ is 5 times larger than that of WoZ2.0, the actual inference time multiplier is 2.54 for our model. Since the actual inference time multiplier roughly of the same magnitude as the theoretical value of 2.15, we can confirm empirically that we have the $O(1)$ inference time complexity and thus obtain full scalability to the number of slots and values pre-defined in an ontology.",
"c compares our model with the previous state-of-the-art on both the WoZ2.0 test set and the MultiWoZ test set. For the WoZ2.0 dataset, we maintain performance at the level of the state-of-the-art, with a marginal drop of 0.3% compared with previous work. Considering the fact that WoZ2.0 is a relatively small dataset, this small difference does not represent a significant big performance drop. On the muli-domain dataset, MultiWoZ, our model achieves a joint goal accuracy of 45.72%, which is significant better than most of the previous models other than TRADE which applies the copy mechanism and gains better generalization ability on named entity coping."
],
[
"To prove the effectiveness of our structure of the Conditional Memory Relation Decoder (CMRD), we conduct ablation experiments on the WoZ2.0 dataset. We observe an accuracy drop of 1.95% after removing residual connections and the hierarchical stack of our attention modules. This proves the effectiveness of our hierarchical attention design. After the MLP is replaced with a linear layer of hidden size 512 and the ReLU activation function, the accuracy further drops by 3.45%. This drop is partly due to the reduction of the number of the model parameters, but it also proves that stacking more layers in an MLP can improve the relational reasoning performance given a concatenation of multiple representations from different sources.",
"We also conduct the ablation study on the MultiWoZ dataset for a more precise analysis on the hierarchical generation process. For joint domain accuracy, we calculate the probability that all domains generated in each turn are exactly matched with the labels provided. The joint domain-slot accuracy further calculate the probability that all domains and slots generated are correct, while the joint goal accuracy requires all the domains, slots and values generated are exactly matched with the labels. From abm, We can further calculate that given the correct slot prediction COMER has 83.52% chance to make the correct value prediction. While COMER has done great job on domain prediction (95.53%) and value prediction (83.52%), the accuracy of the slot prediction given the correct domain is only 57.30%. We suspect that this is because we only use the previous belief state to represent the dialogue history, and the inter-turn reasoning ability on the slot prediction suffers from the limited context and the accuracy is harmed due to the multi-turn mapping problem BIBREF4 . We can also see that the JDS Acc. has an absolute boost of 5.48% when we switch from the combined slot representation to the nested tuple representation. This is because the subordinate relationship between the domains and the slots can be captured by the hierarchical sequence generation, while this relationship is missed when generating the domain and slot together via the combined slot representation."
],
[
"f5 shows an example of the belief state prediction result in one turn of a dialogue on the MultiWoZ test set. The visualization includes the CMRD attention scores over the belief states, system transcript and user utterance during the decoding stage of the slot sequence.",
"From the system attention (top right), since it is the first attention module and no previous context information is given, it can only find the information indicating the slot “departure” from the system utterance under the domain condition, and attend to the evidence “leaving” correctly during the generation step of “departure”. From the user attention, we can see that it captures the most helpful keywords that are necessary for correct prediction, such as “after\" for “day\" and “leave at”, “to\" for “destination\". Moreover, during the generation step of “departure”, the user attention successfully discerns that, based on the context, the word “leave” is not the evidence that need to be accumulated and choose to attend nothing in this step. For the belief attention, we can see that the belief attention module correctly attends to a previous slot for each generation step of a slot that has been presented in the previous state. For the generation step of the new slot “destination\", since the previous state does not have the “destination\" slot, the belief attention module only attends to the `-' mark after the `train' domain to indicate that the generated word should belong to this domain."
],
[
"Semi-scalable Belief Tracker BIBREF1 proposed an approach that can generate fixed-length candidate sets for each of the slots from the dialogue history. Although they only need to perform inference for a fixed number of values, they still need to iterate over all slots defined in the ontology to make a prediction for a given dialogue turn. In addition, their method needs an external language understanding module to extract the exact entities from a dialogue to form candidates, which will not work if the label value is an abstraction and does not have the exact match with the words in the dialogue.",
"StateNet BIBREF3 achieves state-of-the-art performance with the property that its parameters are independent of the number of slot values in the candidate set, and it also supports online training or inference with dynamically changing slots and values. Given a slot that needs tracking, it only needs to perform inference once to make the prediction for a turn, but this also means that its inference time complexity is proportional to the number of slots.",
"TRADE BIBREF4 achieves state-of-the-art performance on the MultiWoZ dataset by applying the copy mechanism for the value sequence generation. Since TRADE takes $n$ combinations of the domains and slots as the input, the inference time complexity of TRADE is $O(n)$ . The performance improvement achieved by TRADE is mainly due to the fact that it incorporates the copy mechanism that can boost the accuracy on the ‘name’ slot, which mainly needs the ability in copying names from the dialogue history. However, TRADE does not report its performance on the WoZ2.0 dataset which does not have the ‘name’ slot.",
"DSTRead BIBREF6 formulate the dialogue state tracking task as a reading comprehension problem by asking slot specified questions to the BERT model and find the answer span in the dialogue history for each of the pre-defined combined slot. Thus its inference time complexity is still $O(n)$ . This method suffers from the fact that its generation vocabulary is limited to the words occurred in the dialogue history, and it has to do a manual combination strategy with another joint state tracking model on the development set to achieve better performance.",
"Contextualized Word Embedding (CWE) was first proposed by BIBREF25 . Based on the intuition that the meaning of a word is highly correlated with its context, CWE takes the complete context (sentences, passages, etc.) as the input, and outputs the corresponding word vectors that are unique under the given context. Recently, with the success of language models (e.g. BIBREF12 ) that are trained on large scale data, contextualizeds word embedding have been further improved and can achieve the same performance compared to (less flexible) finely-tuned pipelines.",
"Sequence Generation Models. Recently, sequence generation models have been successfully applied in the realm of multi-label classification (MLC) BIBREF14 . Different from traditional binary relevance methods, they proposed a sequence generation model for MLC tasks which takes into consideration the correlations between labels. Specifically, the model follows the encoder-decoder structure with an attention mechanism BIBREF26 , where the decoder generates a sequence of labels. Similar to language modeling tasks, the decoder output at each time step will be conditioned on the previous predictions during generation. Therefore the correlation between generated labels is captured by the decoder."
],
[
"In this work, we proposed the Conditional Memory Relation Network (COMER), the first dialogue state tracking model that has a constant inference time complexity with respect to the number of domains, slots and values pre-defined in an ontology. Besides its scalability, the joint goal accuracy of our model also achieve the similar performance compared with the state-of-the-arts on both the MultiWoZ dataset and the WoZ dataset. Due to the flexibility of our hierarchical encoder-decoder framework and the CMR decoder, abundant future research direction remains as applying the transformer structure, incorporating open vocabulary and copy mechanism for explicit unseen words generation, and inventing better dialogue history access mechanism to accommodate efficient inter-turn reasoning.",
"Acknowledgements. This work is partly supported by NSF #1750063. We thank all the reviewers for their constructive suggestions. We also want to thank Zhuowen Tu and Shengnan Zhang for the early discussions of the project."
]
],
"section_name": [
"Introduction",
"Motivation",
"Hierarchical Sequence Generation for DST",
"Encoding Module",
"Conditional Memory Relation Decoder",
"Experimental Setting",
"Implementation Details",
"Results",
"Ablation Study",
"Qualitative Analysis",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1719244c479765727dd6d5390c98e27c6542dcf3"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. We also include the Inference Time Complexity (ITC) for each model as a metric for scalability. The baseline accuracy for the WoZ2.0 dataset is the Delexicalisation-Based (DB) Model (Mrksic et al., 2017), while the baseline for the MultiWoZ dataset is taken from the official website of MultiWoZ (Budzianowski et al., 2018)."
],
"extractive_spans": [],
"free_form_answer": "single-domain setting",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. We also include the Inference Time Complexity (ITC) for each model as a metric for scalability. The baseline accuracy for the WoZ2.0 dataset is the Delexicalisation-Based (DB) Model (Mrksic et al., 2017), while the baseline for the MultiWoZ dataset is taken from the official website of MultiWoZ (Budzianowski et al., 2018)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"6bb60dc60817a1c2173999d45e505239c8d445c6"
],
"answer": [
{
"evidence": [
"As a convention, the metric of joint goal accuracy is used to compare our model to previous work. The joint goal accuracy only regards the model making a successful belief state prediction if all of the slots and values predicted are exactly matched with the labels provided. This metric gives a strict measurement that tells how often the DST module will not propagate errors to the downstream modules in a dialogue system. In this work, the model with the highest joint accuracy on the validation set is evaluated on the test set for the test joint accuracy measurement."
],
"extractive_spans": [
"joint goal accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"As a convention, the metric of joint goal accuracy is used to compare our model to previous work."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"072d9a6fe27796947c3aeae2420eccb567a8da36"
],
"answer": [
{
"evidence": [
"We first test our model on the single domain dataset, WoZ2.0 BIBREF19 . It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3 , BIBREF20 . Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9 . It has a more complex ontology with 7 domains and 25 predefined slots. Since the combined slot-value pairs representation of the belief states has to be applied for the model with $O(n)$ ITC, the total number of slots is 35. The statistics of these two datsets are shown in Table 2 ."
],
"extractive_spans": [
"the single domain dataset, WoZ2.0 ",
"the multi-domain dataset, MultiWoZ"
],
"free_form_answer": "",
"highlighted_evidence": [
"We first test our model on the single domain dataset, WoZ2.0 BIBREF19 . It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3 , BIBREF20 . Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9 . It has a more complex ontology with 7 domains and 25 predefined slots. Since the combined slot-value pairs representation of the belief states has to be applied for the model with $O(n)$ ITC, the total number of slots is 35. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Does this approach perform better in the multi-domain or single-domain setting?",
"What are the performance metrics used?",
"Which datasets are used to evaluate performance?"
],
"question_id": [
"ed7a3e7fc1672f85a768613e7d1b419475950ab4",
"72ceeb58e783e3981055c70a3483ea706511fac3",
"9bfa46ad55136f2a365e090ce585fc012495393c"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: The Inference Time Complexity (ITC) of previous DST models. The ITC is calculated based on how many times inference must be performed to complete a prediction of the belief state in a dialogue turn, where m is the number of values in a pre-defined ontology list and n is the number of slots.",
"Figure 1: An example dialogue from the multi-domain dataset, MultiWOZ. At each turn, the DST needs to output the belief state, a nested tuple of (DOMAIN, (SLOT, VALUE)), immediately after the user utterance ends. The belief state is accumulated as the dialogue proceeds. Turns are separated by black lines.",
"Figure 2: An example in the WoZ2.0 dataset that invalidates the single value assumption. It is impossible for the system to generate the sample response about the Chinese restaurant with the original belief state (food, seafood). A correction could be made as (food, seafood > chinese) which has multiple values and a logical operator “>”.",
"Figure 3: The general model architecture of the Hierarchical Sequence Generation Network. The Conditional Memory Relation (CMR) decoders (gray) share all of their parameters.",
"Figure 4: The general structure of the Conditional Memory Relation Decoder. The decoder output, s (e.g. “food”), is refilled to the LSTM for the decoding of the next step. The blue lines in the figure means that the gradients are blocked during the back propagation stage.",
"Table 2: The statistics of the WoZ2.0 and the MultiWoZ datasets.",
"Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. We also include the Inference Time Complexity (ITC) for each model as a metric for scalability. The baseline accuracy for the WoZ2.0 dataset is the Delexicalisation-Based (DB) Model (Mrksic et al., 2017), while the baseline for the MultiWoZ dataset is taken from the official website of MultiWoZ (Budzianowski et al., 2018).",
"Table 4: The ablation study on the WoZ2.0 dataset with the joint goal accuracy on the test set. For “- Hierachical-Attn”, we remove the residual connections between the attention modules in the CMR decoders and all the attention memory access are based on the output from the LSTM. For “- MLP”, we further replace the MLP with a single linear layer with the nonlinear activation.",
"Table 5: The ablation study on the MultiWoZ dataset with the joint domain accuracy (JD Acc.), joint domain-slot accuracy (JDS Acc.) and joint goal accuracy (JG Acc.) on the test set. For “- ShareParam”, we remove the parameter sharing mechanism on the encoders and the attention module. For “- Order”, we further arrange the order of the slots according to its global frequencies in the training set instead of the local frequencies given the domain it belongs to. For “- Nested”, we do not generate domain sequences but generate combined slot sequences which combines the domain and the slot together. For “- BlockGrad”, we further remove the gradient blocking mechanism in the CMR decoder.",
"Figure 5: An example belief prediction of our model on the MultiWoZ test set. The attention scores for belief states, system transcript and user utterance in CMRD is visualized on the right. Each row corresponds to the attention score of the generation step of a particular slot under the ‘train’ domain."
],
"file": [
"1-Table1-1.png",
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"5-Figure4-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"9-Figure5-1.png"
]
} | [
"Does this approach perform better in the multi-domain or single-domain setting?"
] | [
[
"1909.00754-7-Table3-1.png"
]
] | [
"single-domain setting"
] | 134 |
1906.00180 | Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization | Can neural nets learn logic? We approach this classic question with current methods, and demonstrate that recurrent neural networks can learn to recognize first order logical entailment relations between expressions. We define an artificial language in first-order predicate logic, generate a large dataset of sample 'sentences', and use an automatic theorem prover to infer the relation between random pairs of such sentences. We describe a Siamese neural architecture trained to predict the logical relation, and experiment with recurrent and recursive networks. Siamese Recurrent Networks are surprisingly successful at the entailment recognition task, reaching near perfect performance on novel sentences (consisting of known words), and even outperforming recursive networks. We report a series of experiments to test the ability of the models to perform compositional generalization. In particular, we study how they deal with sentences of unseen length, and sentences containing unseen words. We show that set-ups using LSTMs and GRUs obtain high scores on these tests, demonstrating a form of compositionality. | {
"paragraphs": [
[
"State-of-the-art models for almost all popular natural language processing tasks are based on deep neural networks, trained on massive amounts of data. A key question that has been raised in many different forms is to what extent these models have learned the compositional generalizations that characterize language, and to what extent they rely on storing massive amounts of exemplars and only make `local' generalizations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . This question has led to (sometimes heated) debates between deep learning enthusiasts that are convinced neural networks can do almost anything, and skeptics that are convinced some types of generalization are fundamentally beyond reach for deep learning systems, pointing out that crucial tests distinguishing between generalization and memorization have not been applied.",
"In this paper, we take a pragmatic perspective on these issues. As the target for learning we use entailment relations in an artificial language, defined using first order logic (FOL), that is unambiguously compositional. We ask whether popular deep learning methods are capable in principle of acquiring the compositional rules that characterize it, and focus in particular on recurrent neural networks that are unambiguously `connectionist': trained recurrent nets do not rely on symbolic data and control structures such as trees and global variable binding, and can straightforwardly be implemented in biological networks BIBREF8 or neuromorphic hardware BIBREF9 . We report positive results on this challenge, and in the process develop a series of tests for compositional generalization that address the concerns of deep learning skeptics.",
"The paper makes three main contributions. First, we develop a protocol for automatically generating data that can be used in entailment recognition tasks. Second, we demonstrate that several deep learning architectures succeed at one such task. Third, we present and apply a number of experiments to test whether models are capable of compositional generalization."
],
[
"The data generation process is inspired by BIBREF13 : an artificial language is defined, sentences are generated according to its grammar and the entailment relation between pairs of such sentences is established according to a fixed background logic. However, our language is significantly more complex, and instead of natural logic we use FOL."
],
[
"Our main model is a recurrent network, sketched in Figure 4 . It is a so-called `Siamese' network because it uses the same parameters to process the left and the right sentence. The upper part of the model is identical to BIBREF13 's recursive networks. It consists of a comparison layer and a classification layer, after which a softmax function is applied to determine the most probable target class. The comparison layer takes the concatenation of two sentence vectors as input. The number of cells equals the number of words, so it differs per sentence.",
"Our set-up resembles the Siamese architecture for learning sentence similarity of BIBREF25 and the LSTM classifier described in BIBREF18 . In the diagram, the dashed box indicates the location of an arbitrary recurrent unit. We consider SRN BIBREF26 , GRU BIBREF27 and LSTM BIBREF28 ."
],
[
"Training and testing accuracies after 50 training epochs, averaged over five different model runs, are shown in Table UID18 . All recurrent models outperform the summing baseline. Even the simplest recurrent network, the SRN, achieves higher training and testing accuracy scores than the tree-shaped matrix model. The GRU and LSTM even beat the tensor model. The LSTM obtains slightly lower scores than the GRU, which is unexpected given its more complex design, but perhaps the current challenge does not require separate forget and input gates. For more insight into the types of errors made by the best-performing (GRU-based) model, we refer to the confusion matrices in Appendix \"Error statistics\" .",
"The consistently higher testing accuracy provides evidence that the recurrent networks are not only capable of recognizing FOL entailment relations between unseen sentences. They can also outperform the tree-shaped models on this task, although they do not use any of the symbolic structure that seemed to explain the success of their recursive predecessors. The recurrent classifiers have learned to apply their own strategies, which we will investigate in the remainder of this paper."
],
[
"Compositionality is the ability to interpret and generate a possibly infinite number of constructions from known constituents, and is commonly understood as one of the fundamental aspects of human learning and reasoning ( BIBREF30 , BIBREF31 ). It has often been claimed that neural networks operate on a merely associative basis, lacking the compositional capacities to develop systematicity without an abundance of training data. See e.g. BIBREF1 , BIBREF2 , BIBREF32 . Especially recurrent models have recently been regarded quite sceptically in this respect, following the negative results established by BIBREF3 and BIBREF4 . Their research suggests that recurrent networks only perform well provided that there are no systematic discrepancies between train and test data, whereas human learning is robust with respect to such differences thanks to compositionality.",
"In this section, we report more positive results on compositional reasoning of our Siamese networks. We focus on zero-shot generalization: correct classification of examples of a type that has not been observed before. Provided that atomic constituents and production rules are understood, compositionality does not require that abundantly many instances embodying a semantic category are observed. We will consider in turn what set-up is required to demonstrate zero-shot generalization to unseen lengths, and to generalization to sentences composed of novel words."
],
[
"We test if our recurrent models are capable of generalization to unseen lengths. Neural models are often considered incapable of such generalization, allegedly because they are limited to the training space BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . We want to test if this is the case for the recurrent models studied in this paper. The language $\\mathcal {L}$ licenses a heavily constrained set of grammatical configurations, but it does allow the sentence length to vary according to the number of included negations. A perfectly compositional model should be able to interpret statements containing any number of negations, on condition that it has seen an instantiation at least once at each position where this is allowed.",
"In a new experiment, we train the models on pairs of sentences with length 5, 7 or 8, and test on pairs of sentences with lengths 6 or 9. As before, the training and test sets contain some 30,000 and 5,000 sentence pairs, respectively. Results are shown in Table UID19 .",
"All recurrent models obtain (near-)perfect training accuracy scores. What happens on the test set is interesting. It turns out that the GRU and LSTM can generalize from lengths 5, 7 and 8 to 6 and 9 very well, while the SRN faces serious difficulties. It seems that training on lengths 5, 7 and 8, and thereby skipping length 6, enables the GRU and LSTM to generalize to unseen sentence lengths 6 and 9. Training on lengths 5-7 and testing on lengths 8-9 yields low test scores for all models. The GRU and LSTM gates appear to play a crucial role, because the results show that the SRN does not have this capacity at all."
],
[
"In the next experiment, we assess whether our GRU-based model, which performed best in the preceding experiments, is capable of zero-shot generalization to sentences with novel words. The current set-up cannot deal with unknown words, so instead of randomly initializing an embedding matrix that is updated during training, we use pretrained, 50-dimensional GloVe embeddings BIBREF37 that are kept constant. Using GloVe embeddings, the GRU model obtains a mean training accuracy of 100.0% and a testing accuracy of 95.9% (averaged over five runs). The best-performing model (with 100.0% training and 97.1% testing accuracy) is used in the following zero-shot experiments.",
"One of the most basic relations on the level of lexical semantics is synonymy, which holds between words with equivalent meanings. In the language $\\mathcal {L}$ , a word can be substituted with one of its synonyms without altering the entailment relation assigned to the sentence pairs that contain it. If the GRU manages to perform well on such a modified data set after receiving the pretrained GloVe embedding of the unseen word, this is a first piece of evidence for its zero-shot generalization skills. We test this for several pairs of synonymous words. The best-performing GRU is first evaluated with respect to the fragment of the test data containing the original word $w$ , and consequently with respect to that same fragment after replacing the original word with its synonym $s(w)$ . The pairs of words, the cosine distance $cos\\_dist(w,s(w))$ between their GloVe embeddings and the obtained results are listed in Table 6 .",
"For the first three examples in Table 6 , substitution only decreases testing accuracy by a few percentage points. Apparently, the word embeddings of the synonyms encode the lexical properties that the GRU needs to recognize that the same entailment relations apply to the sentence pairs. This does not prove that the model has distilled essential information about hyponymy from the GloVe embeddings. It could also be that the word embeddings of the replacement words are geometrically very similar to the originals, so that it is an algebraic necessity that the same results arise. However, this suspicion is inconsistent with the result of changing `hate' into `detest'. The cosine distance between these words is 0.56, so according to this measure their vectors are more similar than those representing `love' and `adore' (which have a cosine distance of 0.57). Nonetheless, replacing `hate' with `detest' confuses the model, whereas substitution of `love' into `adore' only decreases testing accuracy by 4.5 percentage points. This illustrates that robustness of the GRU in this respect is not a matter of simple vector similarity. In those cases where substitution into synonyms does not confuse the model it must have recognized a non-trivial property of the new word embedding that licenses particular inferences.",
"In our next experiment, we replace a word not by its synonym, but by a word that has the same semantics in the context of artificial language $\\mathcal {L}$ . We thus consider pairs of words that can be substituted with each other without affecting the entailment relation between any pair of sentences in which they feature. We call such terms `ontological twins'. Technically, if $\\odot $ is an arbitrary lexical entailment relation and $\\mathcal {O}$ is an ontology, then $w$ and $v$ are ontological twins if and only if $w, v \\in \\mathcal {O}$ and for all $u \\in \\mathcal {O}$ , if $u \\notin \\lbrace w,v \\rbrace $ then $w \\odot u \\Leftrightarrow v \\odot u$ . This trivially applies to self-identical terms or synonyms, but in the strictly defined hierarchy of $\\mathcal {L}$ it is also the case for pairs of terms $\\odot $0 that maintain the same lexical entailment relations to all other terms in the taxonomy.",
"Examples of ontological twins in the taxonomy of nouns $\\mathcal {N}^{\\mathcal {L}}$ are `Romans' and `Venetians' . This can easily be verified in the Venn diagram of Figure 1 by replacing `Romans' with `Venetians' and observing that the same hierarchy applies. The same holds for e.g. `Germans' and `Polish' or for `children' and `students'. For several such word-twin pairs the GRU is evaluated with respect to the fragment of the test data containing the original word $w$ , and with respect to that same fragment after replacing the original word with ontological twin $t(w)$ . Results are shown in Table 7 .",
"The examples in Table 7 suggest that the best-performing GRU is largely robust with respect to substitution into ontological twins. Replacing `Romans' with other urban Italian demonyms hardly affects model accuracy on the modified fragment of the test data. As before, there appears to be no correlation with vector similarity because the cosine distance between the different twin pairs has a much higher variation than the corresponding accuracy scores. `Germans' can be changed into `Polish' without significant deterioration, but substitution with `Dutch' greatly decreases testing accuracy. The situation is even worse for `Spanish'. Again, cosine similarity provides no explanation - `Spanish' is still closer to `Germans' than `Neapolitans' to `Romans'. Rather, the accuracy appears to be negatively correlated with the geographical distance between the national demonyms. After replacing `children' with `students', `women' or `linguists', testing scores are still decent.",
"So far, we replaced individual words in order to assess whether the GRU can generalize from the vocabulary to new notions that have comparable semantics in the context of this entailment recognition task. The examples have illustrated that the model tends to do this quite well. In the last zero-shot learning experiment, we replace sets of nouns instead of single words, in order to assess the flexibility of the relational semantics that our networks have learned. Formally, the replacement can be regarded as a function $r$ , mapping words $w$ to substitutes $r(w)$ . Not all items have to be replaced. For an ontology $\\mathcal {O}$ , the function $r$ must be such that for any $w, v \\in \\mathcal {O}$ and lexical entailment relation $\\odot $ , $w \\odot v \\Leftrightarrow r(w) \\odot r(v)$ . The result of applying $r$ can be called an `alternative hierarchy'.",
"An example of an alternative hierarchy is the result of the replacement function $r_1$ that maps `Romans' to `Parisians' and `Italians' to `French'. Performing this substitution in the Venn diagram of Figure 1 shows that the taxonomy remains structurally intact. The best-performing GRU is evaluated on the fragment of the test data containing `Romans' or `Italians', and consequently on the same fragment after implementing replacement $r_1$ and providing the model with the GloVe embeddings of the unseen words. Replacement $r_1$ is incrementally modified up until replacement $r_4$ , which substitutes all nouns in $\\mathcal {N}^{\\mathcal {L}}$ . The results of applying $r_1$ to $r_4$ are shown in Table 8 .",
"The results are positive: the GRU obtains 86.7% accuracy even after applying $r_4$ , which substitutes the entire ontology $\\mathcal {N}^{\\mathcal {L}}$ so that no previously encountered nouns are present in the test set anymore, although the sentences remain thematically somewhat similar to the original sentences. Testing scores are above 87% for the intermediate substitutions $r_1$ to $r_3$ . This outcome clearly shows that the classifier does not depend on a strongly customized word vector distribution in order to recognize higher-level entailment relations. Even if all nouns are replaced by alternatives with embeddings that have not been witnessed or optimized beforehand, the model obtains a high testing accuracy. This establishes obvious compositional capacities, because familiarity with structure and information about lexical semantics in the form of word embeddings are enough for the model to accommodate configurations of unseen words.",
"What happens when we consider ontologies that have the same structure, but are thematically very different from the original ontology? Three such alternative hierarchies are considered: $r_{animals}$ , $r_{religion}$ and $r_{America}$ . Each of these functions relocalizes the noun ontology in a totally different domain of discourse, as indicated by their names. Table 9 specifies the functions and their effect.",
"Testing accuracy decreases drastically, which indicates that the model is sensitive to the changing topic. Variation between the scores obtained after the three transformations is limited. Although they are much lower than before, they are still far above chance level for a seven-class problem. This suggests that the model is not at a complete loss as to the alternative noun hierarchies. Possibly, including a few relevant instances during training could already improve the results."
],
[
"We established that our Siamese recurrent networks (with SRN, GRU or LSTM cells) are able to recognize logical entailment relations without any a priori cues about syntax or semantics of the input expressions. Indeed, some of the recurrent set-ups even outperform tree-shaped networks, whose topology is specifically designed to deal with such tasks. This indicates that recurrent networks can develop representations that can adequately process a formal language with a nontrivial hierarchical structure. The formal language we defined did not exploit the full expressive power of first-order predicate logic; nevertheless by using standard first-order predicate logic, a standard theorem prover, and a set-up where the training set only covers a tiny fraction of the space of possible logical expressions, our experiments avoid the problems observed in earlier attempts to demonstrate logical reasoning in recurrent networks.",
"The experiments performed in the last few sections moreover show that the GRU and LSTM architectures exhibit at least basic forms of compositional generalization. In particular, the results of the zero-shot generalization experiments with novel lengths and novel words cannot be explained with a `memorize-and-interpolate' account, i.e. an account of the working of deep neural networks that assumes all they do is store enormous training sets and generalize only locally. These results are relevant pieces of evidence in the decades-long debate on whether or not connectionist networks are fundamentally able to learn compositional solutions. Although we do not have the illusion that our work will put this debate to an end, we hope that it will help bring deep learning enthusiasts and skeptics a small step closer."
]
],
"section_name": [
"Introduction & related work",
"Task definition & data generation",
"Learning models",
"Results",
"Zero-shot, compositional generalization",
"Unseen lengths",
"Unseen words",
"Discussion & Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"fbf076324c189bbfe7b495126bb96ec2d2615877"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
},
{
"annotation_id": [
"6d770b8b216014237faef17fcf6724d7bec052d4"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
},
{
"annotation_id": [
"07490d0181eb9040b4d19a9a8180db5dfb790df3"
],
"answer": [
{
"evidence": [
"In a new experiment, we train the models on pairs of sentences with length 5, 7 or 8, and test on pairs of sentences with lengths 6 or 9. As before, the training and test sets contain some 30,000 and 5,000 sentence pairs, respectively. Results are shown in Table UID19 ."
],
"extractive_spans": [],
"free_form_answer": "70,000",
"highlighted_evidence": [
"As before, the training and test sets contain some 30,000 and 5,000 sentence pairs, respectively"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How does the automatic theorem prover infer the relation?",
"If these model can learn the first-order logic on artificial language, why can't it lear for natural language?",
"How many samples did they generate for the artificial language?"
],
"question_id": [
"42812113ec720b560eb9463ff5e74df8764d1bff",
"4f4892f753b1d9c5e5e74c7c94d8c9b6ef523e7b",
"f258ada8577bb71873581820a94695f4a2c223b3"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Venn diagrams visualizing the taxonomy of (a) nouns NL and (b) verbs VL in L.",
"Table 3: FOL axiom representations of lexical entailment relations. For definition of relations, see Table 2.",
"Figure 3: Visualization of the general recurrent model. The region in the dashed box represents any recurrent cell, which is repeatedly applied until the final sentence vector is returned.",
"Table 5: Accuracy scores on the FOL inference task for models trained on pairs of sentences with lengths 5, 7 or 8 and tested on pairs of sentences with lengths 6 or 9. Mean and standard deviation over five runs.",
"Table 6: Effect on best-performing GRU of replacing words w by unseen synonyms s(w) in the test set and providing the model with the corresponding GloVe embedding.",
"Table 7: Effect on best-performing GRU of replacing words w by unseen ontological twins t(w) in the test set and providing the model with the corresponding GloVe embedding.",
"Table 8: Effect on best-performing GRU of replacing noun ontology NL with alternative hierarchies as per the replacement functions r1 to r4. Vertical dots indicate that cell entries do not change on the next row.",
"Table 9: Effect on best-performing GRU of replacing noun ontology NL with alternative hierarchies as per the replacement functions ranimals, rreligion and rAmerica. Accuracy is measured on the test set after applying the respective replacement functions.",
"Figure 4: Histogram showing the relative frequency of each entailment relation in the train and test set.",
"Figure 5: Confusion matrices of the best-performing GRU with respect to the test set. Rows represent targets, columns predictions. (a) row-normalized results for all test instances. (b) unnormalized results for misclassified test instances. Clearly, most errors are due to unrecognized or wrongly attributed independence."
],
"file": [
"2-Figure1-1.png",
"3-Table3-1.png",
"4-Figure3-1.png",
"5-Table5-1.png",
"6-Table6-1.png",
"7-Table7-1.png",
"8-Table8-1.png",
"8-Table9-1.png",
"12-Figure4-1.png",
"12-Figure5-1.png"
]
} | [
"How many samples did they generate for the artificial language?"
] | [
[
"1906.00180-Unseen lengths-1"
]
] | [
"70,000"
] | 135 |
1906.04571 | Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology | Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminine-inflected sentences in such languages. For Spanish and Hebrew, our approach achieves F1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality. | {
"paragraphs": [
[
"One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases. This is because NLP systems depend on language corpora, which are inherently “not objective; they are creations of human design” BIBREF0 . One type of societal bias that has received considerable attention from the NLP community is gender stereotyping BIBREF1 , BIBREF2 , BIBREF3 . Gender stereotypes can manifest in language in overt ways. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women BIBREF4 .",
"To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English BIBREF5 , BIBREF6 , BIBREF7 . Yet, gender stereotypes also exist in other languages because they are a function of society, not of grammar. Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 . In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns. This means that if the gender of one word changes, the others have to be updated to match. As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped BIBREF9 , will yield ungrammatical sentences. Consider the Spanish phrase el ingeniero experto (the skilled engineer). Replacing ingeniero with ingeniera is insufficient—el must also be replaced with la and experto with experta.",
"In this paper, we present a new approach to counterfactual data augmentation BIBREF10 for mitigating gender stereotypes associated with animate nouns (i.e., nouns that represent people) for morphologically rich languages. We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns. We use this model as part of a four-step process, depicted in fig:pipeline, to reinflect entire sentences following an intervention on the grammatical gender of one word. We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level INLINEFORM0 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively. We also conduct an extrinsic evaluation using four languages. Following DBLP:journals/corr/abs-1807-11714, we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality."
],
[
"Men and women are mentioned at different rates in text BIBREF11 . This problem is exacerbated in certain contexts. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system. Gender stereotypes of this sort have been observed in word embeddings BIBREF5 , BIBREF3 , contextual word embeddings BIBREF12 , and co-reference resolution systems BIBREF13 , BIBREF9 inter alia."
],
[
"In this section, we present a Markov random field BIBREF17 for morpho-syntactic agreement. This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags. Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.",
"A dependency tree for a sentence (see fig:tree for an example) is a set of ordered triples INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are positions in the sentence (or a distinguished root symbol) and INLINEFORM3 is the label of the edge INLINEFORM4 in the tree; each position occurs exactly once as the first element in a triple. Each dependency tree INLINEFORM5 is associated with a sequence of morpho-syntactic tags INLINEFORM6 and a sequence of part-of-speech (POS) tags INLINEFORM7 . For example, the tags INLINEFORM8 and INLINEFORM9 for ingeniero are INLINEFORM10 and INLINEFORM11 , respectively, because ingeniero is a masculine, singular noun. For notational simplicity, we define INLINEFORM12 to be the set of all length- INLINEFORM13 sequences of morpho-syntactic tags.",
"We define the probability of INLINEFORM0 given INLINEFORM1 and INLINEFORM2 as DISPLAYFORM0 ",
" where the binary factor INLINEFORM0 scores how well the morpho-syntactic tags INLINEFORM1 and INLINEFORM2 agree given the POS tags INLINEFORM3 and INLINEFORM4 and the label INLINEFORM5 . For example, consider the INLINEFORM6 (adjectival modifier) edge from experto to ingeniero in fig:tree. The factor INLINEFORM7 returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., INLINEFORM8 and INLINEFORM9 ) and a low score if they do not (e.g., INLINEFORM10 and INLINEFORM11 ). The unary factor INLINEFORM12 scores a morpho-syntactic tag INLINEFORM13 outside the context of the dependency tree. As we explain in sec:constraint, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them. eq:dist is normalized by the following partition function: INLINEFORM14 ",
" INLINEFORM0 can be calculated using belief propagation; we provide the update equations that we use in sec:bp. Our model is depicted in fig:fg. It is noteworthy that this model is delexicalized—i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves."
],
[
"We consider a linear parameterization and a neural parameterization of the binary factor INLINEFORM0 .",
"We define a matrix INLINEFORM0 for each triple INLINEFORM1 , where INLINEFORM2 is the number of morpho-syntactic subtags. For example, INLINEFORM3 has two subtags INLINEFORM4 and INLINEFORM5 . We then define INLINEFORM6 as follows: INLINEFORM7 ",
" where INLINEFORM0 is a multi-hot encoding of INLINEFORM1 .",
"As an alternative, we also define a neural parameterization of INLINEFORM0 to allow parameter sharing among edges with different parts of speech and labels: INLINEFORM1 ",
" where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 and INLINEFORM3 define the structure of the neural parameterization and each INLINEFORM4 is an embedding function.",
"We use the unary factors only to force or disallow particular tags when performing an intervention. Specifically, we define DISPLAYFORM0 ",
"where INLINEFORM0 is a strength parameter that determines the extent to which INLINEFORM1 should remain unchanged following an intervention. In the limit as INLINEFORM2 , all tags will remain unchanged except for the tag directly involved in the intervention."
],
[
"Because our MRF is acyclic and tree-shaped, we can use belief propagation BIBREF18 to perform exact inference. The algorithm is a generalization of the forward-backward algorithm for hidden Markov models BIBREF19 . Specifically, we pass messages from the leaves to the root and vice versa. The marginal distribution of a node is the point-wise product of all its incoming messages; the partition function INLINEFORM0 is the sum of any node's marginal distribution. Computing INLINEFORM1 takes polynomial time BIBREF18 —specifically, INLINEFORM2 where INLINEFORM3 is the number of morpho-syntactic tags. Finally, inferring the highest-probability morpho-syntactic tag sequence INLINEFORM4 given INLINEFORM5 and INLINEFORM6 can be performed using the max-product modification to belief propagation."
],
[
"We use gradient-based optimization. We treat the negative log-likelihood INLINEFORM0 as the loss function for tree INLINEFORM1 and compute its gradient using automatic differentiation BIBREF20 . We learn the parameters of sec:param by optimizing the negative log-likelihood using gradient descent."
],
[
"As explained in sec:gender, our goal is to transform sentences like sent:msc to sent:fem by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement. For example, if we change the morpho-syntactic tag for ingeniero from [msc;sg] to [fem;sg], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [in; pr; sg]. If we intervene on the INLINEFORM0 word in a sentence, changing its tag from INLINEFORM1 to INLINEFORM2 , then using our model to infer the manner in which the remaining tags must be updated means using INLINEFORM3 to identify high-probability tags for the remaining words.",
"Crucially, we wish to change as little as possible when intervening on a gendered word. The unary factors INLINEFORM0 enable us to do exactly this. As described in the previous section, the strength parameter INLINEFORM1 determines the extent to which INLINEFORM2 should remain unchanged following an intervention—the larger the value, the less likely it is that INLINEFORM3 will be changed.",
"Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms. This task has received considerable attention from the NLP community BIBREF21 , BIBREF22 . We use the inflection model of D18-1473. This model conditions on the lemma INLINEFORM0 and morpho-syntactic tag INLINEFORM1 to form a distribution over possible inflections. For example, given experto and INLINEFORM2 , the trained inflection model will assign a high probability to expertas. We provide accuracies for the trained inflection model in tab:reinflect."
],
[
"We used the Adam optimizer BIBREF23 to train both parameterizations of our model until the change in dev-loss was less than INLINEFORM0 bits. We set INLINEFORM1 without tuning, and chose a learning rate of INLINEFORM2 and weight decay factor of INLINEFORM3 after tuning. We tuned INLINEFORM4 in the set INLINEFORM5 and chose INLINEFORM6 . For the neural parameterization, we set INLINEFORM7 and INLINEFORM8 without any tuning. Finally, we trained the inflection model using only gendered words.",
"We evaluate our approach both intrinsically and extrinsically. For the intrinsic evaluation, we focus on whether our approach yields the correct morpho-syntactic tags and the correct reinflections. For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models."
],
[
"To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language. Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank. The average length of these extracted sentences was 37 words. We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly. We chose Spanish and Hebrew because gender agreement operates differently in each language. We provide corpus statistics for both languages in the top two rows of tab:data.",
"We created a hard-coded INLINEFORM0 to serve as a baseline for each language. For Spanish, we only activated, i.e. set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns. We created two separate baselines because gender agreement operates differently in each language.",
"To evaluate our approach, we held all morpho-syntactic subtags fixed except for gender. For each annotated sentence, we intervened on the gender of the animate noun. We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata. Finally, we used the annotations to compute the tag-level INLINEFORM0 score and the form-level accuracy, excluding the animate nouns on which we intervened.",
"We present the results in tab:intrinsic. Recall is consistently significantly lower than precision. As expected, the baselines have the highest precision (though not by much). This is because they reflect well-known rules for each language. That said, they have lower recall than our approach because they fail to capture more subtle relationships.",
"For both languages, our approach struggles with conjunctions. For example, consider the phrase él es un ingeniero y escritor (he is an engineer and a writer). Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora. This is because two nouns do not normally need to have the same gender when they are conjoined. Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person. Note that including co-reference information in our MRF would create cycles and inference would no longer be exact. Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.",
"Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization. We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient."
],
[
"We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping. Following DBLP:journals/corr/abs-1807-11714, focus on neural language models. We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.",
"As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model INLINEFORM0 for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful. The translations we use for these adjectives are given in sec:translation. We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes. For example, consider DISPLAYFORM0 ",
"If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer). If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive. In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive. If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.",
"Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): DISPLAYFORM0 ",
"We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see tab:data). For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using BIBREF24 's parser and extracted taggings and lemmata using the method of BIBREF25 . We automatically extracted an animacy gazetteer from WordNet BIBREF26 and then manually filtered the output for correctness. We provide the size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in tab:anim. For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on the noun, and then used our approach to transform the sentence. For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders. Choosing which sentences to duplicate is a difficult task. For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations. Multilingual animacy detection BIBREF27 might help with this challenge; co-reference information might additionally help.",
"For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of BIBREF28 using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach. We then computed gender stereotyping and grammaticality as described above. We provide example phrases in tab:lm; we provide a more extensive list of phrases in app:queries.",
"fig:bias demonstrates depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach. It is immediately apparent that our approch reduces gender stereotyping. On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively). We expected that naïve swapping of gendered words would also reduce gender stereotyping. Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages. For Spanish, we also examine specific words that are stereotyped toward men or women. We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender. fig:espbias suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.",
"The grammaticality of the corpora following CDA differs between languages. That said, with the exception of Hebrew, our approach either sacrifices less grammaticality than naïve swapping of gendered words and sometimes increases grammaticality over the original corpus. Given that we know the model did not perform as accurately for Hebrew (see tab:intrinsic), this finding is not surprising."
],
[
"In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology—specifically languages that exhibit gender agreement. To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English. For example, BIBREF5 proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; BIBREF10 studied gender stereotypes in language models; and BIBREF13 introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution. The most closely related work is that of BIBREF9 , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages. Our approach is specifically intended to yield grammatical sentences when applied to such languages. BIBREF29 also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation."
],
[
"We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns. To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results. For example, we demonstrated that our approach reduces gender stereotyping in neural language models. Finally, we also identified avenues for future work, such as the inclusion of co-reference information."
],
[
"The last author acknowledges a Facebook Fellowship."
],
[
"Our belief propagation update equations are DISPLAYFORM0 DISPLAYFORM1 ",
" where INLINEFORM0 returns the set of neighbouring nodes of node INLINEFORM1 . The belief at any node is given by DISPLAYFORM0 "
],
[
"tab:fem and tab:masc contain the feminine and masculine translations of the four adjectives that we used."
],
[
"For each noun in our animacy gazetteer, we generated sixteen phrases. Consider the noun engineer as an example. We created four phrases—one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer. These phrases, as well as their prefix log-likelihoods are provided below in tab:query."
]
],
"section_name": [
"Introduction",
"Gender Stereotypes in Text",
"A Markov Random Field for Morpho-Syntactic Agreement",
"Parameterization",
"Inference",
"Parameter Estimation",
"Intervention",
"Experiments",
"Intrinsic Evaluation",
"Extrinsic Evaluation",
"Related Work",
"Conclusion",
"Acknowledgments",
"Belief Propagation Update Equations",
"Adjective Translations",
"Extrinsic Evaluation Example Phrases"
]
} | {
"answers": [
{
"annotation_id": [
"075ffbc4f5f1ee3b32ee07258113e5fa1412fe04"
],
"answer": [
{
"evidence": [
"To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English BIBREF5 , BIBREF6 , BIBREF7 . Yet, gender stereotypes also exist in other languages because they are a function of society, not of grammar. Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 . In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns. This means that if the gender of one word changes, the others have to be updated to match. As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped BIBREF9 , will yield ungrammatical sentences. Consider the Spanish phrase el ingeniero experto (the skilled engineer). Replacing ingeniero with ingeniera is insufficient—el must also be replaced with la and experto with experta."
],
"extractive_spans": [],
"free_form_answer": "Because, unlike other languages, English does not mark grammatical genders",
"highlighted_evidence": [
"Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"ea88ebb09c6cad72c89bedff07780b036d2c3159"
],
"answer": [
{
"evidence": [
"Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): DISPLAYFORM0"
],
"extractive_spans": [],
"free_form_answer": "by calculating log ratio of grammatical phrase over ungrammatical phrase",
"highlighted_evidence": [
"Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase):"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"a3e52b132398d3f6dc4a4f6ba7dc77b9e6898d89"
],
"answer": [
{
"evidence": [
"We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns. To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results. For example, we demonstrated that our approach reduces gender stereotyping in neural language models. Finally, we also identified avenues for future work, such as the inclusion of co-reference information."
],
"extractive_spans": [
"Markov random field with an optional neural parameterization"
],
"free_form_answer": "",
"highlighted_evidence": [
"We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Why does not the approach from English work on other languages?",
"How do they measure grammaticality?",
"Which model do they use to convert between masculine-inflected and feminine-inflected sentences?"
],
"question_id": [
"f7817b949605fb04b1e4fec9dd9ca8804fb92ae9",
"8255f74cae1352e5acb2144fb857758dda69be02",
"db62d5d83ec187063b57425affe73fef8733dd28"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Transformation of Los ingenieros son expertos (i.e., The male engineers are skilled) to Las ingenieras son expertas (i.e., The female engineers are skilled). We extract the properties of each word in the sentence. We then fix a noun and its tags and infer the manner in which the remaining tags must be updated. Finally, we reinflect the lemmata to their new forms.",
"Figure 2: Dependency tree for the sentence El ingeniero alemán es muy experto.",
"Figure 3: Factor graph for the sentence El ingeniero alemán es muy experto.",
"Table 1: Morphological reinflection accuracies.",
"Table 2: Language data.",
"Table 3: Tag-level precision, recall, F1 score, and accuracy and form-level accuracy for the baselines (“– BASE”) and for our approach (“–LIN” is the linear parameterization, “–NN” is the neural parameterization).",
"Figure 4: Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naı̈ve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”).",
"Table 4: Animate noun statistics.",
"Figure 5: Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naı̈ve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”).",
"Table 5: Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naı̈ve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”). Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (dentoted by “*”). Gender stereotyping is measured using phrases 1 and 2. Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.",
"Table 8: Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naı̈ve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”). Ungrammatical phrases are denoted by “*”.",
"Table 6: Feminine translations of good, bad, smart, beautiful in French, Hebrew, Italian, and Spanish",
"Table 7: Masculine translations of good, bad, smart, beautiful in French, Hebrew, Italian, and Spanish"
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"6-Figure4-1.png",
"6-Table4-1.png",
"7-Figure5-1.png",
"7-Table5-1.png",
"11-Table8-1.png",
"11-Table6-1.png",
"11-Table7-1.png"
]
} | [
"Why does not the approach from English work on other languages?"
] | [
[
"1906.04571-Introduction-1"
]
] | [
"Because, unlike other languages, English does not mark grammatical genders"
] | 137 |
2002.11402 | Detecting Potential Topics In News Using BERT, CRF and Wikipedia | For a news content distribution platform like Dailyhunt, Named Entity Recognition is a pivotal task for building better user recommendation and notification algorithms. Apart from identifying names, locations and organisations from the news for 13+ Indian languages and using them in algorithms, we also need to identify n-grams which do not necessarily fit the definition of a Named Entity, yet are important. For example, "me too movement", "beef ban", "alwar mob lynching". In this exercise, given an English-language text, we try to detect case-less n-grams which convey important information and can be used as topics and/or hashtags for a news item. The model is built using Wikipedia titles data, a private English news corpus, and a BERT-Multilingual pre-trained model with a Bi-GRU and CRF architecture. It shows promising results when compared with the industry-best Flair, Spacy and Stanford-caseless-NER in terms of F1 and especially recall. | {
"paragraphs": [
[
"Named-Entity-Recognition(NER) approaches can be categorised broadly in three types. Detecting NER with predefined dictionaries and rulesBIBREF2, with some statistical approachesBIBREF3 and with deep learning approachesBIBREF4.",
"Stanford CoreNLP NER is a widely used baseline for many applications BIBREF5. Authors have used approaches of Gibbs sampling and conditional random field (CRF) for non-local information gathering and then Viterbi algorithm to infer the most likely state in the CRF sequence outputBIBREF6.",
"Deep learning approaches in NLP use document, word or token representations instead of one-hot encoded vectors. With the rise of transfer learning, pretrained Word2VecBIBREF7, GloVeBIBREF8, fasttextBIBREF9 which provides word embeddings were being used with recurrent neural networks (RNN) to detect NERs. Using LSTM layers followed by CRF layes with pretrained word-embeddings as input has been explored hereBIBREF10. Also, CNNs with character embeddings as inputs followed by bi-directional LSTM and CRF layers, were explored hereBIBREF11.",
"With the introduction of attentions and transformersBIBREF12 many deep architectures emerged in last few years. Approach of using these pretrained models like ElmoBIBREF13, FlairBIBREF14 and BERTBIBREF0 for word representations followed by variety of LSMT and CRF combinations were tested by authors in BIBREF15 and these approaches show state-of-the-art performance.",
"There are very few approaches where caseless NER task is explored. In this recent paperBIBREF16 authors have explored effects of \"Cased\" entities and how variety of networks perform and they show that the most effective strategy is a concatenation of cased and lowercased training data, producing a single model with high performance on both cased and uncased text.",
"In another paperBIBREF17, authors have proposed True-Case pre-training before using BiLSTM+CRF approach to detect NERs effectively. Though it shows good results over previous approaches, it is not useful in Indian Languages context as there is no concept of cases.",
"In our approach, we are focusing more on data preparation for our definition of topics using some of the state-of-art architectures based on BERT, LSTM/GRU and CRF layers as they have been explored in previous approaches mentioned above. Detecting caseless topics with higher recall and reasonable precision has been given a priority over f1 score. And comparisons have been made with available and ready-to-use open-source libraries from the productionization perspective."
],
[
"We need good amount of data to try deep learning state-of-the-art algorithms. There are lot of open datasets available for names, locations, organisations, but not for topics as defined in Abstract above. Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition. But according to our definition, we can use wikipedia titles as our target topics. English wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily. To remove such titles, we deployed simple rules as follows -",
"Remove titles with common words : \"are\", \"the\", \"which\"",
"Remove titles with numeric values : 29, 101",
"Remove titles with technical components, driver names, transistor names : X00, lga-775",
"Remove 1-gram titles except locations (almost 80% of these also appear in remaining n-gram titles)",
"After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in past 4 years. Further, we reduced number of articles by removing duplicate and near similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost similar news articles. Then selected minimum articles required to cover all possible 2-grams to 5-grams. This step is done to save some training time without loosing accuracy. Do note that, in future we are planning to use whole dataset and hope to see gains in F1 and Recall further. But as per manual inspection, our dataset contains enough variations of sentences with rich vocabulary which contains names of celebrities, politicians, local authorities, national/local organisations and almost all locations, India and International, mentioned in the news text, in last 4 years.",
"We then created a parallel corpus format as shown in Table 1. Using pre-trained Bert-Tokenizer from hugging-face, converted words in sentences to tokenes. Caseless-BERT pre-trained tokenizer is used. Notice that some of the topic words are broken into tokens and NER tag has been repeated accordingly. For example, in Table 1 second row, word \"harassment\" is broken into \"har ##ass ##ment\". Similarly, one \"NER\" tag is repeated three times to keep the length of sequence-pair same. Finally, for around 3 million news articles, parallel corpus is created, which is of around 150 million sentences, with around 3 billion words (all lower cased) and with around 5 billion tokens approximately."
],
[
"We tried multiple variations of LSTM and GRU layes, with/without CRF layer. There is a marginal gain in using GRU layers over LSTM. Also, we saw gain in using just one layers of GRU instead of more. Finally, we settled on the architecture, shown in Figure 1 for the final training, based on validation set scores with sample training set.",
"Text had to be tokenized using pytorch-pretrained-bert as explained above before passing to the network. Architecture is built using tensorflow/keras. Coding inspiration taken from BERT-keras and for CRF layer keras-contrib. If one is more comfortable in pytorch there are many examples available on github, but pytorch-bert-crf-ner is better for an easy start.",
"We used BERT-Multilingual model so that we can train and fine-tune the same model for other Indian languages. You can take BERT-base or BERT-large for better performance with only English dataset. Or you can use DistilBERT for English and DistilmBERT for 104 languages for faster pre-training and inferences. Also, we did not choose AutoML approach for hyper-parameter tuning which could have resulted in much more accurate results but at the same time could have taken very long time as well. So instead, chose and tweaked the parameters based on initial results.",
"We trained two models, one with sequence length 512 to capture document level important n-grams and second with sequence length 64 to capture sentence/paragraph level important n-grams. Through experiments it was evident that, sequence length plays a vital role in deciding context and locally/globally important n-grams. Final output is a concatenation of both the model outputs."
],
[
"Trained the topic model on single 32gb NVidia-V100 and it took around 50 hours to train the model with sequence length 512. We had to take 256gb ram machine to accommodate all data in memory for faster read/write. Also, trained model with 64 sequence length in around 17 hours.",
"It is very important to note that sequence length decides how many bert-tokens you can pass for inference and also decides training time and accuracy. Ideally more is better because inference would be faster as well. For 64 sequence length, we are moving 64-token window over whole token-text and recognising topics in each window. So, one should choose sequence length according to their use case. Also, we have explained before our motivation of choosing 2 separate sequence lengths models.",
"We stopped the training for both the models when it crossed 70% precision, 90% recall on training and testing sets, as we were just looking to get maximum recall and not bothered about precision in our case. Both the models reach this point at around 16 epochs."
],
[
"Comparison with existing open-source NER libraries is not exactly fair as they are NOT trained for detecting topics and important n-grams, also NOT trained for case-less text. But they are useful in testing and benchmarking if our model is detecting traditional NERs or not, which it should capture, as Wikipedia titles contains almost all Names, Places and Organisation names. You can check the sample output here",
"Comparisons have been made among Flair-NER, Stanford-caseless-NER (used english.conll.4class.caseless as it performed better than 3class and 7class), Spacy-NER and our models. Of which only Stanford-NER provides case-less models. In Table 2, scores are calculated by taking traditional NER list as reference. In Table 4, same is done with Wikipedia Titles reference set.",
"As you can see in Table 2 & 3, recall is great for our model but precision is not good as Model is also trying to detect new potential topics which are not there even in reference Wikipedia-Titles and NER sets. In capturing Wikipedia topics our model clearly surpasses other models in all scores.",
"Spacy results are good despite not being trained for case-less data. In terms of F1 and overall stability Spacy did better than Stanford NER, on our News Validation set. Similarly, Stanford did well in Precision but could not catch up with Spacy and our model in terms of Recall. Flair overall performed poorly, but as said before these open-source models are not trained for our particular use-case."
],
[
"Lets check some examples for detailed analysis of the models and their results. Following is the economy related news.",
"Example 1 : around $1–1.5 trillion or around two percent of global gdp, are lost to corruption every year, president of the natural resource governance institute nrgi has said. speaking at a panel on integrity in public governance during the world bank group and international monetary fund annual meeting on sunday, daniel kaufmann, president of nrgi, presented the statistic, result of a study by the nrgi, an independent, non-profit organisation based in new york. however, according to kaufmann, the figure is only the direct costs of corruption as it does not factor in the opportunities lost on innovation and productivity, xinhua news agency reported. a country that addresses corruption and significantly improves rule of law can expect a huge increase in per capita income in the long run, the study showed. it will also see similar gains in reducing infant mortality and improving education, said kaufmann.",
"Detected NERs can be seen per model in Table 4. Our model do not capture numbers as we have removed all numbers from my wiki-titles as topics. Reason behind the same is that we can easily write regex to detect currency, prices, time, date and deep learning is not required for the same. Following are few important n-grams only our models was able to capture -",
"capita income",
"infant mortality",
"international monetary fund annual meeting",
"natural resource governance institute",
"public governance",
"At the same time, we can see that Spacy did much better than Stanford-caseless NER and Flair could not capture any of the NERs. Another example of a news in political domain and detected NERs can be seen per model in Table 5.",
"Example 2 : wearing the aam aadmi party's trademark cap and with copies of the party's five-year report card in hand, sunita kejriwal appears completely at ease. it's a cold winter afternoon in delhi, as the former indian revenue service (irs) officer hits the campaign trail to support her husband and batchmate, chief minister arvind kejriwal. emerging from the background for the first time, she is lending her shoulder to the aap bandwagon in the new delhi assembly constituency from where the cm, then a political novice, had emerged as the giant killer by defeating congress incumbent sheila dikshit in 2013.",
"Correct n-grams captured only by our model are -",
"aam aadmi party",
"aap bandwagon",
"delhi assembly constituency",
"giant killer",
"indian revenue service",
"political novice",
"In this example, Stanford model did better and captured names properly, for example \"sheila dikshit\" which Spacy could not detect but Spacy captureed almost all numeric values along with numbers expressed in words.",
"It is important to note that, our model captures NERs with some additional words around them. For example, \"president of nrgi\" is detected by the model but not \"ngri\". But model output does convey more information than the later. To capture the same for all models (and to make comparison fair), partial match has been enabled and if correct NER is part of predictied NER then later one is marked as matched. This could be the reason for good score for Spacy. Note that, partial match is disabled for Wikipedia Titles match task as shown in Table 3. Here, our model outperformed all the models."
],
[
"Through this exercise, we were able to test out the best suitable model architecture and data preparation steps so that similar models could be trained for Indian languages. Building cased or caseless NERs for English was not the final goal and this has already been benchmarked and explored before in previous approaches explained in \"Related Work\" section. We didn't use traditional datasets for model performance comparisons & benchmarks. As mentioned before, all the comparisons are being done with open-source models and libraries from the productionization point of view. We used a english-news validation dataset which is important and relevant to our specific task and all validation datasets and raw output results can be found at our github link .",
"Wikipedia titles for Indian languages are very very less and resulting tagged data is even less to run deep architectures. We are trying out translations/transliterations of the English-Wiki-Titles to improve Indic-languages entity/topics data.",
"This approach is also useful in building news-summarizing models as it detects almost all important n-grams present in the news. Output of this model can be introduced in a summarization network to add more bias towards important words and bias for their inclusion."
]
],
"section_name": [
"Introduction & Related Work",
"Data Preparation",
"Experiments ::: Model Architecture",
"Experiments ::: Training",
"Experiments ::: Results",
"Experiments ::: Discussions",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"79e09627dc6d58f94ae96f07ebbfa6e8bedb4338"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference",
"FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference"
],
"extractive_spans": [],
"free_form_answer": "Between the model and Stanford, Spacy and Flair the differences are 42.91, 25.03, 69.8 with Traditional NERs as reference and 49.88, 43.36, 62.43 with Wikipedia titles as reference.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference",
"FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"07c6cdfd9c473ddcfd4e653e5146e6c80be4c5a4"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference",
"FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference"
],
"extractive_spans": [],
"free_form_answer": "F1 score and Recall are 68.66, 80.08 with Traditional NERs as reference and 59.56, 69.76 with Wikipedia titles as reference.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference",
"FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"18a2a4c3ecdea3f8c21a0400e3b957facea2a0b6"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1. BERT + Bi-GRU + CRF, Final Architecture Chosen For Topic Detection Task."
],
"extractive_spans": [],
"free_form_answer": "4 layers",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1. BERT + Bi-GRU + CRF, Final Architecture Chosen For Topic Detection Task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"e20e4bed7b4ec73f1dc1206c120bb196fcf44314"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We have a dump of 15 million English news articles published in past 4 years."
],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"99c7927e72f3d6e93fd6da0841966e85c4fe4c95"
],
"answer": [
{
"evidence": [
"We need good amount of data to try deep learning state-of-the-art algorithms. There are lot of open datasets available for names, locations, organisations, but not for topics as defined in Abstract above. Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition. But according to our definition, we can use wikipedia titles as our target topics. English wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily. To remove such titles, we deployed simple rules as follows -",
"After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in past 4 years. Further, we reduced number of articles by removing duplicate and near similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost similar news articles. Then selected minimum articles required to cover all possible 2-grams to 5-grams. This step is done to save some training time without loosing accuracy. Do note that, in future we are planning to use whole dataset and hope to see gains in F1 and Recall further. But as per manual inspection, our dataset contains enough variations of sentences with rich vocabulary which contains names of celebrities, politicians, local authorities, national/local organisations and almost all locations, India and International, mentioned in the news text, in last 4 years."
],
"extractive_spans": [
"English wikipedia dataset has more than 18 million",
"a dump of 15 million English news articles "
],
"free_form_answer": "",
"highlighted_evidence": [
"We need good amount of data to try deep learning state-of-the-art algorithms. There are lot of open datasets available for names, locations, organisations, but not for topics as defined in Abstract above. Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition. But according to our definition, we can use wikipedia titles as our target topics. English wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily. ",
"After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in past 4 years. Further, we reduced number of articles by removing duplicate and near similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost similar news articles."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is the difference in recall score between the systems?",
"What is their f1 score and recall?",
"How many layers does their system have?",
"Which news corpus is used?",
"How large is the dataset they used?"
],
"question_id": [
"1771a55236823ed44d3ee537de2e85465bf03eaf",
"1d74fd1d38a5532d20ffae4abbadaeda225b6932",
"da8bda963f179f5517a864943dc0ee71249ee1ce",
"5c059a13d59947f30877bed7d0180cca20a83284",
"a1885f807753cff7a59f69b5cf6d0fdef8484057"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1. Parallel Corpus Preparation with BERT Tokenizer",
"Table 2. Comparison with Traditional NERs as reference",
"Table 3. Comparison with Wikipedia titles as reference",
"Figure 1. BERT + Bi-GRU + CRF, Final Architecture Chosen For Topic Detection Task.",
"Table 4. Recognised Named Entities Per Model - Example 1",
"Table 5. Recognised Named Entities Per Model - Example 2"
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"3-Figure1-1.png",
"6-Table4-1.png",
"6-Table5-1.png"
]
} | [
"What is the difference in recall score between the systems?",
"What is their f1 score and recall?",
"How many layers does their system have?"
] | [
[
"2002.11402-3-Table2-1.png",
"2002.11402-3-Table3-1.png"
],
[
"2002.11402-3-Table2-1.png",
"2002.11402-3-Table3-1.png"
],
[
"2002.11402-3-Figure1-1.png"
]
] | [
"Between the model and Stanford, Spacy and Flair the differences are 42.91, 25.03, 69.8 with Traditional NERs as reference and 49.88, 43.36, 62.43 with Wikipedia titles as reference.",
"F1 score and Recall are 68.66, 80.08 with Traditional NERs as reference and 59.56, 69.76 with Wikipedia titles as reference.",
"4 layers"
] | 141 |
2002.00652 | How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context | Semantic parsing in context has recently received considerable attention; it remains challenging because of complex contextual phenomena. Previous works verified their proposed methods in limited scenarios, which motivates us to conduct an exploratory study on context modeling methods under real-world semantic parsing in context. We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. We evaluate 13 context modeling methods on two large complex cross-domain datasets, and our best model achieves state-of-the-art performances on both datasets with significant improvements. Furthermore, we summarize the most frequent contextual phenomena, with a fine-grained analysis on representative models, which may shed light on potential research directions. | {
"paragraphs": [
[
"Semantic parsing, which translates a natural language sentence into its corresponding executable logic form (e.g. Structured Query Language, SQL), relieves users from the burden of learning techniques behind the logic form. The majority of previous studies on semantic parsing assume that queries are context-independent and analyze them in isolation. However, in reality, users prefer to interact with systems in a dialogue, where users are allowed to ask context-dependent incomplete questions BIBREF0. That arises the task of Semantic Parsing in Context (SPC), which is quite challenging as there are complex contextual phenomena. In general, there are two sorts of contextual phenomena in dialogues: Coreference and Ellipsis BIBREF1. Figure FIGREF1 shows a dialogue from the dataset SParC BIBREF2. After the question “What is id of the car with the max horsepower?”, the user poses an elliptical question “How about with the max mpg?”, and a question containing pronouns “Show its Make!”. Only when completely understanding the context, could a parser successfully parse the incomplete questions into their corresponding SQL queries.",
"A number of context modeling methods have been suggested in the literature to address SPC BIBREF3, BIBREF4, BIBREF2, BIBREF5, BIBREF6. These methods proposed to leverage two categories of context: recent questions and precedent logic form. It is natural to leverage recent questions as context. Taking the example from Figure FIGREF1, when parsing $Q_3$, we also need to take $Q_1$ and $Q_2$ as input. We can either simply concatenate the input questions, or use a model to encode them hierarchically BIBREF4. As for the second category, instead of taking a bag of recent questions as input, it only considers the precedent logic form. For instance, when parsing $Q_3$, we only need to take $S_2$ as context. With such a context, the decoder can attend over it, or reuse it via a copy mechanism BIBREF4, BIBREF5. Intuitively, methods that fall into this category enjoy better generalizability, as they only rely on the last logic form as context, no matter at which turn. Notably, these two categories of context can be used simultaneously.",
"However, it remains unclear how far we are from effective context modeling. First, there is a lack of thorough comparisons of typical context modeling methods on complex SPC (e.g. cross-domain). Second, none of previous works verified their proposed context modeling methods with the grammar-based decoding technique, which has been developed for years and proven to be highly effective in semantic parsing BIBREF7, BIBREF8, BIBREF9. To obtain better performance, it is worthwhile to study how context modeling methods collaborate with the grammar-based decoding. Last but not the least, there is limited understanding of how context modeling methods perform on various contextual phenomena. An in-depth analysis can shed light on potential research directions.",
"In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance."
],
[
"In the task of semantic parsing in context, we are given a dataset composed of dialogues. Denoting $\\langle \\mathbf {x}_1,...,\\mathbf {x}_n\\rangle $ a sequence of natural language questions in a dialogue, $\\langle \\mathbf {y}_1,...,\\mathbf {y}_n\\rangle $ are their corresponding SQL queries. Each SQL query is conditioned on a multi-table database schema, and the databases used in test do not appear in training. In this section, we first present a base model without considering context. Then we introduce 6 typical context modeling methods and describe how we equip the base model with these methods. Finally, we present how to augment the model with BERT BIBREF10."
],
[
"We employ the popularly used attention-based sequence-to-sequence architecture BIBREF11, BIBREF12 to build our base model. As shown in Figure FIGREF6, the base model consists of a question encoder and a grammar-based decoder. For each question, the encoder provides contextual representations, while the decoder generates its corresponding SQL query according to a predefined grammar."
],
[
"To capture contextual information within a question, we apply Bidirectional Long Short-Term Memory Neural Network (BiLSTM) as our question encoder BIBREF13, BIBREF14. Specifically, at turn $i$, firstly every token $x_{i,k}$ in $\\mathbf {x}_{i}$ is fed into a word embedding layer $\\mathbf {\\phi }^x$ to get its embedding representation $\\mathbf {\\phi }^x{(x_{i,k})}$. On top of the embedding representation, the question encoder obtains a contextual representation $\\mathbf {h}^{E}_{i,k}=[\\mathop {{\\mathbf {h}}^{\\overrightarrow{E}}_{i,k}}\\,;{\\mathbf {h}}^{\\overleftarrow{E}}_{i,k}]$, where the forward hidden state is computed as following:"
],
[
"The decoder is grammar-based with attention on the input question BIBREF7. Different from producing a SQL query word by word, our decoder outputs a sequence of grammar rule (i.e. action). Such a sequence has one-to-one correspondence with the abstract syntax tree of the SQL query. Taking the SQL query in Figure FIGREF6 as an example, it is transformed to the action sequence $\\langle $ $\\rm \\scriptstyle {Start}\\rightarrow \\rm {Root}$, $\\rm \\scriptstyle {Root}\\rightarrow \\rm {Select\\ Order}$, $\\rm \\scriptstyle {Select}\\rightarrow \\rm {Agg}$, $\\rm \\scriptstyle {Agg}\\rightarrow \\rm {max\\ Col\\ Tab}$, $\\rm \\scriptstyle {Col}\\rightarrow \\rm {Id}$, $\\rm \\scriptstyle {Tab}\\rightarrow \\rm {CARS\\_DATA}$, $\\rm \\scriptstyle {Order}\\rightarrow \\rm {desc\\ limit\\ Agg}$, $\\rm \\scriptstyle {Agg}\\rightarrow \\rm {none\\ Col\\ Tab}$, $\\rm \\scriptstyle {Col}\\rightarrow \\rm {Horsepower}$, $\\rm \\scriptstyle {Tab}\\rightarrow \\rm {CARS\\_DATA}$ $\\rangle $ by left-to-right depth-first traversing on the tree. At each decoding step, a nonterminal is expanded using one of its corresponding grammar rules. The rules are either schema-specific (e.g. $\\rm \\scriptstyle {Col}\\rightarrow \\rm {Horsepower}$), or schema-agnostic (e.g. $\\rm \\scriptstyle {Start}\\rightarrow \\rm {Root}$). More specifically, as shown at the top of Figure FIGREF6, we make a little modification on $\\rm {Order}$-related rules upon the grammar proposed by BIBREF9, which has been proven to have better performance than vanilla SQL grammar. Denoting $\\mathbf {LSTM}^{\\overrightarrow{D}}$ the unidirectional LSTM used in the decoder, at each decoding step $j$ of turn $i$, it takes the embedding of the previous generated grammar rule $\\mathbf {\\phi }^y(y_{i,j-1})$ (indicated as the dash lines in Figure FIGREF6), and updates its hidden state as:",
"where $\\mathbf {c}_{i,j-1}$ is the context vector produced by attending on each encoder hidden state $\\mathbf {h}^E_{i,k}$ in the previous step:",
"where $\\mathbf {W}^e$ is a learned matrix. $\\mathbf {h}^{\\overrightarrow{D}}_{i,0}$ is initialized by the final encoder hidden state $\\mathbf {h}^E_{i,|\\mathbf {x}_{i}|}$, while $\\mathbf {c}_{i,0}$ is a zero-vector. For each schema-agnostic grammar rule, $\\mathbf {\\phi }^y$ returns a learned embedding. For schema-specific one, the embedding is obtained by passing its schema (i.e. table or column) through another unidirectional LSTM, namely schema encoder $\\mathbf {LSTM}^{\\overrightarrow{S}}$. For example, the embedding of $\\rm \\scriptstyle {Col}\\rightarrow \\rm {Id}$ is:",
"As for the output $y_{i,j}$, if the expanded nonterminal corresponds to schema-agnostic grammar rules, we can obtain the output probability of action ${\\gamma }$ as:",
"where $\\mathbf {W}^o$ is a learned matrix. When it comes to schema-specific grammar rules, the main challenge is that the model may encounter schemas never appeared in training due to the cross-domain setting. To deal with it, we do not directly compute the similarity between the decoder hidden state and the schema-specific grammar rule embedding. Instead, we first obtain the unnormalized linking score $l(x_{i,k},\\gamma )$ between the $k$-th token in $\\mathbf {x}_i$ and the schema in action $\\gamma $. It is computed by both handcraft features (e.g. word exact match) BIBREF15 and learned similarity (i.e. dot product between word embedding and grammar rule embedding). With the input question as bridge, we reuse the attention score $a_{i,k}$ in Equation DISPLAY_FORM8 to measure the probability of outputting a schema-specific action $\\gamma $ as:"
],
[
"To take advantage of the question context, we provide the base model with recent $h$ questions as additional input. As shown in Figure FIGREF13, we summarize and generalize three ways to incorporate recent questions as context."
],
[
"The method concatenates recent questions with the current question in order, making the input of the question encoder be $[\\mathbf {x}_{i-h},\\dots ,\\mathbf {x}_{i}]$, while the architecture of the base model remains the same. We do not insert special delimiters between questions, as there are punctuation marks."
],
[
"A dialogue can be seen as a sequence of questions which, in turn, are sequences of words. Considering such hierarchy, BIBREF4 employed a turn-level encoder (i.e. an unidirectional LSTM) to encode recent questions hierarchically. At turn $i$, the turn-level encoder takes the previous question vector $[\\mathbf {h}^{\\overleftarrow{E}}_{i-1,1},\\mathbf {h}^{\\overrightarrow{E}}_{i-1,|\\mathbf {x}_{i-1}|}]$ as input, and updates its hidden state to $\\mathbf {h}^{\\overrightarrow{T}}_{i}$. Then $\\mathbf {h}^{\\overrightarrow{T}}_{i}$ is fed into $\\mathbf {LSTM}^E$ as an implicit context. Accordingly Equation DISPLAY_FORM4 is rewritten as:",
"Similar to Concat, BIBREF4 allowed the decoder to attend over all encoder hidden states. To make the decoder distinguish hidden states from different turns, they further proposed a relative distance embedding ${\\phi }^{d}$ in attention computing. Taking the above into account, Equation DISPLAY_FORM8 is as:",
"",
"where $t{\\in }[0,\\dots ,h]$ represents the relative distance."
],
[
"To jointly model the decoder attention in token-level and question-level, inspired by the advances of open-domain dialogue area BIBREF16, we propose a gate mechanism to automatically compute the importance of each question. The importance is computed by:",
"where $\\lbrace \\mathbf {V}^{g},\\mathbf {W}^g,\\mathbf {U}^g\\rbrace $ are learned parameters and $0\\,{\\le }\\,t\\,{\\le }\\,h$. As done in Equation DISPLAY_FORM17 except for the relative distance embedding, the decoder of Gate also attends over all the encoder hidden states. And the question-level importance $\\bar{g}_{i-t}$ is employed as the coefficient of the attention scores at turn $i\\!-\\!t$."
],
[
"Besides recent questions, as mentioned in Section SECREF1, the precedent SQL can also be context. As shown in Figure FIGREF27, the usage of $\\mathbf {y}_{i-1}$ requires a SQL encoder, where we employ another BiLSTM to achieve it. The $m$-th contextual action representation at turn $i\\!-\\!1$, $\\mathbf {h}^A_{i-1,m}$, can be obtained by passing the action sequence through the SQL encoder."
],
[
"Attention over $\\mathbf {y}_{i-1}$ is a straightforward method to incorporate the SQL context. Given $\\mathbf {h}^A_{i-1,m}$, we employ a similar manner as Equation DISPLAY_FORM8 to compute attention score and thus obtain the SQL context vector. This vector is employed as an additional input for decoder in Equation DISPLAY_FORM7."
],
[
"To reuse the precedent generated SQL, BIBREF5 presented a token-level copy mechanism on their non-grammar based parser. Inspired by them, we propose an action-level copy mechanism suited for grammar-based decoding. It enables the decoder to copy actions appearing in $\\mathbf {y}_{i-1}$, when the actions are compatible to the current expanded nonterminal. As the copied actions lie in the same semantic space with the generated ones, the output probability for action $\\gamma $ is a mix of generating ($\\mathbf {g}$) and copying ($\\mathbf {c}$). The generating probability $P(y_{i,j}\\!=\\!{\\gamma }\\,|\\,\\mathbf {g})$ follows Equation DISPLAY_FORM10 and DISPLAY_FORM11, while the copying probability is:",
"where $\\mathbf {W}^l$ is a learned matrix. Denoting $P^{copy}_{i,j}$ the probability of copying at decoding step $j$ of turn $i$, it can be obtained by $\\sigma (\\mathbf {W}^{c}\\mathbf {h}^{\\overrightarrow{D}}_{i,j}+\\mathbf {b}^{c})$, where $\\lbrace \\mathbf {W}^{c},\\mathbf {b}^{c}\\rbrace $ are learned parameters and $\\sigma $ is the sigmoid function. The final probability $P(y_{i,j}={\\gamma })$ is computed by:"
],
[
"Besides the action-level copy, we also introduce a tree-level copy mechanism. As illustrated in Figure FIGREF27, tree-level copy mechanism enables the decoder to copy action subtrees extracted from $\\mathbf {y}_{i-1}$, which shrinks the number of decoding steps by a large margin. Similar idea has been proposed in a non-grammar based decoder BIBREF4. In fact, a subtree is an action sequence starting from specific nonterminals, such as ${\\rm Select}$. To give an example, $\\langle $ $\\rm \\scriptstyle {Select}\\rightarrow \\rm {Agg}$, $\\rm \\scriptstyle {Agg}\\rightarrow \\rm {max\\ Col\\ Tab}$, $\\rm \\scriptstyle {Col}\\rightarrow \\rm {Id}$, $\\rm \\scriptstyle {Tab}\\rightarrow \\rm {CARS\\_DATA}$ $\\rangle $ makes up a subtree for the tree in Figure FIGREF6. For a subtree $\\upsilon $, its representation $\\phi ^{t}(\\upsilon )$ is the final hidden state of SQL encoder, which encodes its corresponding action sequence. Then we can obtain the output probability of subtree $\\upsilon $ as:",
"where $\\mathbf {W}^t$ is a learned matrix. The output probabilities of subtrees are normalized together with Equation DISPLAY_FORM10 and DISPLAY_FORM11."
],
[
"We employ BERT BIBREF10 to augment our model via enhancing the embedding of questions and schemas. We first concatenate the input question and all the schemas in a deterministic order with [SEP] as delimiter BIBREF17. For instance, the input for $Q_1$ in Figure FIGREF1 is “What is id ... max horsepower? [SEP] CARS_NAMES [SEP] MakeId ... [SEP] Horsepower”. Feeding it into BERT, we obtain the schema-aware question representations and question-aware schema representations. These contextual representations are used to substitute $\\phi ^x$ subsequently, while other parts of the model remain the same."
],
[
"We conduct experiments to study whether the introduced methods are able to effectively model context in the task of SPC (Section SECREF36), and further perform a fine-grained analysis on various contextual phenomena (Section SECREF40)."
],
[
"Two large complex cross-domain datasets are used: SParC BIBREF2 consists of 3034 / 422 dialogues for train / development, and CoSQL BIBREF6 consists of 2164 / 292 ones. The average turn numbers of SParC and CoSQL are $3.0$ and $5.2$, respectively."
],
[
"We evaluate each predicted SQL query using exact set match accuracy BIBREF2. Based on it, we consider three metrics: Question Match (Ques.Match), the match accuracy over all questions, Interaction Match (Int.Match), the match accuracy over all dialogues, and Turn $i$ Match, the match accuracy over questions at turn $i$."
],
[
"Our implementation is based on PyTorch BIBREF18, AllenNLP BIBREF19 and the library transformers BIBREF20. We adopt the Adam optimizer and set the learning rate as 1e-3 on all modules except for BERT, for which a learning rate of 1e-5 is used BIBREF21. The dimensions of word embedding, action embedding and distance embedding are 100, while the hidden state dimensions of question encoder, grammar-based decoder, turn-level encoder and SQL encoder are 200. We initialize word embedding using Glove BIBREF22 for non-BERT models. For methods which use recent $h$ questions, $h$ is set as 5 on both datasets."
],
[
"We consider three models as our baselines. SyntaxSQL-con and CD-Seq2Seq are two strong baselines introduced in the SParC dataset paper BIBREF2. SyntaxSQL-con employs a BiLSTM model to encode dialogue history upon the SyntaxSQLNet model (analogous to our Turn) BIBREF23, while CD-Seq2Seq is adapted from BIBREF4 for cross-domain settings (analogous to our Turn+Tree Copy). EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy)."
],
[
"Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.",
"To conduct a thorough comparison, we evaluate 13 different context modeling methods upon the same parser, including 6 methods introduced in Section SECREF2 and 7 selective combinations of them (e.g., Concat+Action Copy). The experimental results are presented in Figure FIGREF37. Taken as a whole, it is very surprising to observe that none of these methods can be consistently superior to the others. The experimental results on BERT-based models show the same trend. Diving deep into the methods only using recent questions as context, we observe that Concat and Turn perform competitively, outperforming Gate by a large margin. With respect to the methods only using precedent SQL as context, Action Copy significantly surpasses Tree Copy and SQL Attn in all metrics. In addition, we observe that there is little difference in the performance of Action Copy and Concat, which implies that using precedent SQL as context gives almost the same effect with using recent questions. In terms of the combinations of different context modeling methods, they do not significantly improve the performance as we expected.",
"As mentioned in Section SECREF1, intuitively, methods which only use the precedent SQL enjoys better generalizability. To validate it, we further conduct an out-of-distribution experiment to assess the generalizability of different context modeling methods. Concretely, we select three representative methods and train them on questions at turn 1 and 2, whereas test them at turn 3, 4 and beyond. As shown in Figure FIGREF38, Action Copy has a consistently comparable or better performance, validating the intuition. Meanwhile, Concat appears to be strikingly competitive, demonstrating it also has a good generalizability. Compared with them, Turn is more vulnerable to out-of-distribution questions.",
"In conclusion, existing context modeling methods in the task of SPC are not as effective as expected, since they do not show a significant advantage over the simple concatenation method."
],
[
"By a careful investigation on contextual phenomena, we summarize them in multiple hierarchies. Roughly, there are three kinds of contextual phenomena in questions: semantically complete, coreference and ellipsis. Semantically complete means a question can reflect all the meaning of its corresponding SQL. Coreference means a question contains pronouns, while ellipsis means the question cannot reflect all of its SQL, even if resolving its pronouns. In the fine-grained level, coreference can be divided into 5 types according to its pronoun BIBREF1. Ellipsis can be characterized by its intention: continuation and substitution. Continuation is to augment extra semantics (e.g. ${\\rm Filter}$), and substitution refers to the situation where current question is intended to substitute particular semantics in the precedent question. Substitution can be further branched into 4 types: explicit vs. implicit and schema vs. operator. Explicit means the current question provides contextual clues (i.e. partial context overlaps with the precedent question) to help locate the substitution target, while implicit does not. On most cases, the target is schema or operator. In order to study the effect of context modeling methods on various phenomena, as shown in Table TABREF39, we take the development set of SParC as an example to perform our analysis. The analysis begins by presenting Ques.Match of three representative models on above fine-grained types in Figure FIGREF42. As shown, though different methods have different strengths, they all perform poorly on certain types, which will be elaborated below."
],
[
"Diving deep into the coreference (left of Figure FIGREF42), we observe that all methods struggle with two fine-grained types: definite noun phrases and one anaphora. Through our study, we find the scope of antecedent is a key factor. An antecedent is one or more entities referred by a pronoun. Its scope is either whole, where the antecedent is the precedent answer, or partial, where the antecedent is part of the precedent question. The above-mentioned fine-grained types are more challenging as their partial proportion are nearly $40\\%$, while for demonstrative pronoun it is only $22\\%$. It is reasonable as partial requires complex inference on context. Considering the 4th example in Table TABREF39, “one” refers to “pets” instead of “age” because the accompanying verb is “weigh”. From this observation, we draw the conclusion that current context modeling methods do not succeed on pronouns which require complex inference on context."
],
[
"As for ellipsis (right of Figure FIGREF42), we obtain three interesting findings by comparisons in three aspects. The first finding is that all models have a better performance on continuation than substitution. This is expected since there are redundant semantics in substitution, while not in continuation. Considering the 8th example in Table TABREF39, “horsepower” is a redundant semantic which may raise noise in SQL prediction. The second finding comes from the unexpected drop from implicit(substitution) to explicit(substitution). Intuitively, explicit should surpass implicit on substitution as it provides more contextual clues. The finding demonstrates that contextual clues are obviously not well utilized by the context modeling methods. Third, compared with schema(substitution), operator(substitution) achieves a comparable or better performance consistently. We believe it is caused by the cross-domain setting, which makes schema related substitution more difficult."
],
[
"The most related work is the line of semantic parsing in context. In the topic of SQL, BIBREF24 proposed a context-independent CCG parser and then applied it to do context-dependent substitution, BIBREF3 applied a search-based method for sequential questions, and BIBREF4 provided the first sequence-to-sequence solution in the area. More recently, BIBREF5 presented a edit-based method to reuse the precedent generated SQL. With respect to other logic forms, BIBREF25 focuses on understanding execution commands in context, BIBREF26 on question answering over knowledge base in a conversation, and BIBREF27 on code generation in environment context. Our work is different from theirs as we perform an exploratory study, not fulfilled by previous works.",
"There are also several related works that provided studies on context. BIBREF17 explored the contextual representations in context-independent semantic parsing, and BIBREF28 studied how conversational agents use conversation history to generate response. Different from them, our task focuses on context modeling for semantic parsing. Under the same task, BIBREF1 summarized contextual phenomena in a coarse-grained level, while BIBREF0 performed a wizard-of-oz experiment to study the most frequent phenomena. What makes our work different from them is that we not only summarize contextual phenomena by fine-grained types, but also perform an analysis on context modeling methods."
],
[
"This work conducts an exploratory study on semantic parsing in context, to realize how far we are from effective context modeling. Through a thorough comparison, we find that existing context modeling methods are not as effective as expected. A simple concatenation method can be much competitive. Furthermore, by performing a fine-grained analysis, we summarize two potential directions as our future work: incorporating common sense for better pronouns inference, and modeling contextual clues in a more explicit manner. By open-sourcing our code and materials, we believe our work can facilitate the community to debug models in a fine-grained level and make more progress."
]
],
"section_name": [
"Introduction",
"Methodology",
"Methodology ::: Base Model",
"Methodology ::: Base Model ::: Question Encoder",
"Methodology ::: Base Model ::: Grammar-based Decoder",
"Methodology ::: Recent Questions as Context",
"Methodology ::: Recent Questions as Context ::: Concat",
"Methodology ::: Recent Questions as Context ::: Turn",
"Methodology ::: Recent Questions as Context ::: Gate",
"Methodology ::: Precedent SQL as Context",
"Methodology ::: Precedent SQL as Context ::: SQL Attn",
"Methodology ::: Precedent SQL as Context ::: Action Copy",
"Methodology ::: Precedent SQL as Context ::: Tree Copy",
"Methodology ::: BERT Enhanced Embedding",
"Experiment & Analysis",
"Experiment & Analysis ::: Experimental Setup ::: Dataset",
"Experiment & Analysis ::: Experimental Setup ::: Evaluation Metrics",
"Experiment & Analysis ::: Experimental Setup ::: Implementation Detail",
"Experiment & Analysis ::: Experimental Setup ::: Baselines",
"Experiment & Analysis ::: Model Comparison",
"Experiment & Analysis ::: Fine-grained Analysis",
"Experiment & Analysis ::: Fine-grained Analysis ::: Coreference",
"Experiment & Analysis ::: Fine-grained Analysis ::: Ellipsis",
"Related Work",
"Conclusion & Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"dd3f3fb7924027f3d1d27347939df4aa60f5b89e"
],
"answer": [
{
"evidence": [
"We consider three models as our baselines. SyntaxSQL-con and CD-Seq2Seq are two strong baselines introduced in the SParC dataset paper BIBREF2. SyntaxSQL-con employs a BiLSTM model to encode dialogue history upon the SyntaxSQLNet model (analogous to our Turn) BIBREF23, while CD-Seq2Seq is adapted from BIBREF4 for cross-domain settings (analogous to our Turn+Tree Copy). EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy).",
"Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.",
"FLOAT SELECTED: Table 1: We report the best performance observed in 5 runs on the development sets of both SPARC and COSQL, since their test sets are not public. We also conduct Wilcoxon signed-rank tests between our method and the baselines, and the results show the improvements of our model are significant with p < 0.005."
],
"extractive_spans": [
"Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively."
],
"free_form_answer": "",
"highlighted_evidence": [
"EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy).",
"Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.",
"FLOAT SELECTED: Table 1: We report the best performance observed in 5 runs on the development sets of both SPARC and COSQL, since their test sets are not public. We also conduct Wilcoxon signed-rank tests between our method and the baselines, and the results show the improvements of our model are significant with p < 0.005."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"07cc2547a5636d8efd45277b27e554600311e0e7"
],
"answer": [
{
"evidence": [
"In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance."
],
"extractive_spans": [
"SParC BIBREF2 and CoSQL BIBREF6"
],
"free_form_answer": "",
"highlighted_evidence": [
"Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"f85cd8cb2e930ddf579c2a28e1b9bedad79f19dc"
],
"answer": [
{
"evidence": [
"To conduct a thorough comparison, we evaluate 13 different context modeling methods upon the same parser, including 6 methods introduced in Section SECREF2 and 7 selective combinations of them (e.g., Concat+Action Copy). The experimental results are presented in Figure FIGREF37. Taken as a whole, it is very surprising to observe that none of these methods can be consistently superior to the others. The experimental results on BERT-based models show the same trend. Diving deep into the methods only using recent questions as context, we observe that Concat and Turn perform competitively, outperforming Gate by a large margin. With respect to the methods only using precedent SQL as context, Action Copy significantly surpasses Tree Copy and SQL Attn in all metrics. In addition, we observe that there is little difference in the performance of Action Copy and Concat, which implies that using precedent SQL as context gives almost the same effect with using recent questions. In terms of the combinations of different context modeling methods, they do not significantly improve the performance as we expected.",
"FLOAT SELECTED: Figure 5: Question Match, Interaction Match and Turn i Match on SPARC and COSQL development sets. The numbers are averaged over 5 runs. The first column represents absolute values. The rest are improvements of different context modeling methods over CONCAT."
],
"extractive_spans": [],
"free_form_answer": "Concat\nTurn\nGate\nAction Copy\nTree Copy\nSQL Attn\nConcat + Action Copy\nConcat + Tree Copy\nConcat + SQL Attn\nTurn + Action Copy\nTurn + Tree Copy\nTurn + SQL Attn\nTurn + SQL Attn + Action Copy",
"highlighted_evidence": [
"To conduct a thorough comparison, we evaluate 13 different context modeling methods upon the same parser, including 6 methods introduced in Section SECREF2 and 7 selective combinations of them (e.g., Concat+Action Copy). The experimental results are presented in Figure FIGREF37. Taken as a whole, it is very surprising to observe that none of these methods can be consistently superior to the others. The experimental results on BERT-based models show the same trend. Diving deep into the methods only using recent questions as context, we observe that Concat and Turn perform competitively, outperforming Gate by a large margin. With respect to the methods only using precedent SQL as context, Action Copy significantly surpasses Tree Copy and SQL Attn in all metrics. In addition, we observe that there is little difference in the performance of Action Copy and Concat, which implies that using precedent SQL as context gives almost the same effect with using recent questions. In terms of the combinations of different context modeling methods, they do not significantly improve the performance as we expected.",
"FLOAT SELECTED: Figure 5: Question Match, Interaction Match and Turn i Match on SPARC and COSQL development sets. The numbers are averaged over 5 runs. The first column represents absolute values. The rest are improvements of different context modeling methods over CONCAT."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"b2a511c76b52fce2865b0cd74f268894a014b94d"
],
"answer": [
{
"evidence": [
"In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance."
],
"extractive_spans": [
"SParC BIBREF2 and CoSQL BIBREF6"
],
"free_form_answer": "",
"highlighted_evidence": [
"Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How big is improvement in performances of proposed model over state of the art?",
"What two large datasets are used for evaluation?",
"What context modelling methods are evaluated?",
"What are two datasets models are tested on?"
],
"question_id": [
"cc9f0ac8ead575a9b485a51ddc06b9ecb2e2a44d",
"69e678666d11731c9bfa99953e2cd5a5d11a4d4f",
"471d624498ab48549ce492ada9e6129da05debac",
"f858031ebe57b6139af46ee0f25c10870bb00c3c"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: An example dialogue (right) and its database schema (left).",
"Figure 2: The grammar rule and the abstract syntax tree for the SQL",
"Figure 3: Different methods to incorporate recent h questions [xi−h, ...,xi−1]. (a) CONCAT: concatenate recent questions with xi as input; (b) TURN: employ a turn-level encoder to capture the inter-dependencies among questions in different turns; (c) GATE: use a gate mechanism to compute the importance of each question.",
"Figure 4: Different methods to employ the precedent SQL yi−1. SQL Enc. is short for SQL Encoder, and Tree Ext. is short for Subtree Extractor. (a) SQL ATTN: attending over yi−1; (b) ACTION COPY: allow to copy actions from yi−1; (c) TREE COPY: allow to copy action subtrees extracted from yi−1.",
"Table 1: We report the best performance observed in 5 runs on the development sets of both SPARC and COSQL, since their test sets are not public. We also conduct Wilcoxon signed-rank tests between our method and the baselines, and the results show the improvements of our model are significant with p < 0.005.",
"Figure 6: Out-of-distribution experimental results (Turn i Match) of three models on SPARC and COSQL development sets.",
"Figure 5: Question Match, Interaction Match and Turn i Match on SPARC and COSQL development sets. The numbers are averaged over 5 runs. The first column represents absolute values. The rest are improvements of different context modeling methods over CONCAT.",
"Table 2: Different fine-grained types, their count and representative examples from the SPARC development set. one means one is a pronoun. Winners means Winners is a phrase intended to substitute losers.",
"Figure 7: Different context modeling methods have different strengths on fine-grained types (better viewed in color)."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"4-Table1-1.png",
"5-Figure6-1.png",
"5-Figure5-1.png",
"6-Table2-1.png",
"6-Figure7-1.png"
]
} | [
"What context modelling methods are evaluated?"
] | [
[
"2002.00652-Experiment & Analysis ::: Model Comparison-1",
"2002.00652-5-Figure5-1.png"
]
] | [
"Concat\nTurn\nGate\nAction Copy\nTree Copy\nSQL Attn\nConcat + Action Copy\nConcat + Tree Copy\nConcat + SQL Attn\nTurn + Action Copy\nTurn + Tree Copy\nTurn + SQL Attn\nTurn + SQL Attn + Action Copy"
] | 143 |
1905.06566 | HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization | Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these \emph{inaccurate} labels is challenging. Inspired by the recent work on pre-training transformer sentence encoders \cite{devlin:2018:arxiv}, we propose {\sc Hibert} (as shorthand for {\bf HI}erachical {\bf B}idirectional {\bf E}ncoder {\bf R}epresentations from {\bf T}ransformers) for document encoding and a method to pre-train it using unlabeled data. We apply the pre-trained {\sc Hibert} to our summarization model and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of New York Times dataset. We also achieve the state-of-the-art performance on these two datasets. | {
"paragraphs": [
[
"Automatic document summarization is the task of rewriting a document into its shorter form while still retaining its important content. Over the years, many paradigms for document summarization have been explored (see Nenkova:McKeown:2011 for an overview). The most popular two among them are extractive approaches and abstractive approaches. As the name implies, extractive approaches generate summaries by extracting parts of the original document (usually sentences), while abstractive methods may generate new words or phrases which are not in the original document.",
"Extractive summarization is usually modeled as a sentence ranking problem with length constraints (e.g., max number of words or sentences). Top ranked sentences (under constraints) are selected as summaries. Early attempts mostly leverage manually engineered features BIBREF1 . Based on these sparse features, sentence are selected using a classifier or a regression model. Later, the feature engineering part in this paradigm is replaced with neural networks. cheng:2016:acl propose a hierarchical long short-term memory network (LSTM; BIBREF2 ) to encode a document and then use another LSTM to predict binary labels for each sentence in the document. This architecture is widely adopted recently BIBREF3 , BIBREF4 , BIBREF5 . Our model also employs a hierarchical document encoder, but we adopt a hierarchical transformer BIBREF6 rather a hierarchical LSTM. Because recent studies BIBREF6 , BIBREF0 show the transformer model performs better than LSTM in many tasks.",
"Abstractive models do not attract much attention until recently. They are mostly based on sequence to sequence (seq2seq) models BIBREF7 , where a document is viewed a sequence and its summary is viewed as another sequence. Although seq2seq based summarizers can be equipped with copy mechanism BIBREF8 , BIBREF9 , coverage model BIBREF9 and reinforcement learning BIBREF10 , there is still no guarantee that the generated summaries are grammatical and convey the same meaning as the original document does. It seems that extractive models are more reliable than their abstractive counterparts.",
"However, extractive models require sentence level labels, which are usually not included in most summarization datasets (most datasets only contain document-summary pairs). Sentence labels are usually obtained by rule-based methods (e.g., maximizing the ROUGE score between a set of sentences and reference summaries) and may not be accurate. Extractive models proposed recently BIBREF11 , BIBREF3 employ hierarchical document encoders and even have neural decoders, which are complex. Training such complex neural models with inaccurate binary labels is challenging. We observed in our initial experiments on one of our dataset that our extractive model (see Section \"Extractive Summarization\" for details) overfits to the training set quickly after the second epoch, which indicates the training set may not be fully utilized. Inspired by the recent pre-training work in natural language processing BIBREF12 , BIBREF13 , BIBREF0 , our solution to this problem is to first pre-train the “complex”' part (i.e., the hierarchical encoder) of the extractive model on unlabeled data and then we learn to classify sentences with our model initialized from the pre-trained encoder. In this paper, we propose Hibert, which stands for HIerachical Bidirectional Encoder Representations from Transformers. We design an unsupervised method to pre-train Hibert for document modeling. We apply the pre-trained Hibert to the task of document summarization and achieve state-of-the-art performance on both the CNN/Dailymail and New York Times dataset."
],
[
"In this section, we introduce work on extractive summarization, abstractive summarization and pre-trained natural language processing models. For a more comprehensive review of summarization, we refer the interested readers to Nenkova:McKeown:2011 and Mani:01."
],
[
"In this section, we present our model Hibert. We first introduce how documents are represented in Hibert. We then describe our method to pre-train Hibert and finally move on to the application of Hibert to summarization."
],
[
"Let $\\mathcal {D} = (S_1, S_2, \\dots , S_{| \\mathcal {D} |})$ denote a document, where $S_i = (w_1^i, w_2^i, \\dots , w_{|S_i|}^i)$ is a sentence in $\\mathcal {D}$ and $w_j^i$ a word in $S_i$ . Note that following common practice in natural language processing literatures, $w_{|S_i|}^i$ is an artificial EOS (End Of Sentence) token. To obtain the representation of $\\mathcal {D}$ , we use two encoders: a sentence encoder to transform each sentence in $\\mathcal {D}$ to a vector and a document encoder to learn sentence representations given their surrounding sentences as context. Both the sentence encoder and document encoder are based on the Transformer encoder described in vaswani:2017:nips. As shown in Figure 1 , they are nested in a hierarchical fashion. A transformer encoder usually has multiple layers and each layer is composed of a multi-head self attentive sub-layer followed by a feed-forward sub-layer with residual connections BIBREF30 and layer normalizations BIBREF31 . For more details of the Transformer encoder, we refer the interested readers to vaswani:2017:nips. To learn the representation of $S_i$ , $S_i= (w_1^i, w_2^i, \\dots , w_{|S_i|}^i)$ is first mapped into continuous space ",
"$$\\begin{split}\n\\mathbf {E}_i = (\\mathbf {e}_1^i, \\mathbf {e}_2^i, \\dots , \\mathbf {e}_{|S_i|}^i) \\\\\n\\quad \\quad \\text{where} \\quad \\mathbf {e}_j^i = e(w_j^i) + \\mathbf {p}_j\n\\end{split}$$ (Eq. 6) ",
" where $e(w_j^i)$ and $\\mathbf {p}_j$ are the word and positional embeddings of $w_j^i$ , respectively. The word embedding matrix is randomly initialized and we adopt the sine-cosine positional embedding BIBREF6 . Then the sentence encoder (a Transformer) transforms $\\mathbf {E}_i$ into a list of hidden representations $(\\mathbf {h}_1^i, \\mathbf {h}_2^i, \\dots , \\mathbf {h}_{|S_i|}^i)$ . We take the last hidden representation $\\mathbf {h}_{|S_i|}^i$ (i.e., the representation at the EOS token) as the representation of sentence $S_i$ . Similar to the representation of each word in $S_i$ , we also take the sentence position into account. The final representation of $S_i$ is ",
"$$\\hat{\\mathbf {h}}_i = \\mathbf {h}_{|S_i|}^i + \\mathbf {p}_i$$ (Eq. 8) ",
"Note that words and sentences share the same positional embedding matrix.",
"In analogy to the sentence encoder, as shown in Figure 1 , the document encoder is yet another Transformer but applies on the sentence level. After running the Transformer on a sequence of sentence representations $( \\hat{\\mathbf {h}}_1, \\hat{\\mathbf {h}}_2, \\dots , \\hat{\\mathbf {h}}_{|\\mathcal {D}|} )$ , we obtain the context sensitive sentence representations $( \\mathbf {d}_1, \\mathbf {d}_2, \\dots , \\mathbf {d}_{|\\mathcal {D}|} )$ . Now we have finished the encoding of a document with a hierarchical bidirectional transformer encoder Hibert. Note that in previous work, document representation are also learned with hierarchical models, but each hierarchy is a Recurrent Neural Network BIBREF3 , BIBREF21 or Convolutional Neural Network BIBREF11 . We choose the Transformer because it outperforms CNN and RNN in machine translation BIBREF6 , semantic role labeling BIBREF32 and other NLP tasks BIBREF0 . In the next section we will introduce how we train Hibert with an unsupervised training objective."
],
[
"Most recent encoding neural models used in NLP (e.g., RNNs, CNNs or Transformers) can be pre-trained by predicting a word in a sentence (or a text span) using other words within the same sentence (or span). For example, ELMo BIBREF12 and OpenAI-GPT BIBREF13 predict a word using all words on its left (or right); while word2vec BIBREF33 predicts one word with its surrounding words in a fixed window and BERT BIBREF0 predicts (masked) missing words in a sentence given all the other words.",
"All the models above learn the representation of a sentence, where its basic units are words. Hibert aims to learn the representation of a document, where its basic units are sentences. Therefore, a natural way of pre-training a document level model (e.g., Hibert) is to predict a sentence (or sentences) instead of a word (or words). We could predict a sentence in a document with all the sentences on its left (or right) as in a (document level) language model. However, in summarization, context on both directions are available. We therefore opt to predict a sentence using all sentences on both its left and right.",
"Specifically, suppose $\\mathcal {D} = (S_1, S_2, \\dots , S_{| \\mathcal {D} |})$ is a document, where $S_i = (w_1^i, w_2^i, \\dots , w_{|S_i|}^i)$ is a sentence in it. We randomly select 15% of the sentences in $\\mathcal {D}$ and mask them. Then, we predict these masked sentences. The prediction task here is similar with the Cloze task BIBREF34 , BIBREF0 , but the missing part is a sentence. However, during test time the input document is not masked, to make our model can adapt to documents without masks, we do not always mask the selected sentences. Once a sentence is selected (as one of the 15% selected masked sentences), we transform it with one of three methods below. We will use an example to demonstrate the transformation. For instance, we have the following document and the second sentence is selected:",
"William Shakespeare is a poet . He died in 1616 . He is regarded as the greatest writer .",
"In 80% of the cases, we mask the selected sentence (i.e., we replace each word in the sentence with a mask token [MASK]). The document above becomes William Shakespeare is a poet . [MASK] [MASK] [MASK] [MASK] [MASK] He is regarded as the greatest writer . (where “He died in 1616 . ” is masked).",
"In 10% of the cases, we keep the selected sentence as it is. This strategy is to simulate the input document during test time (with no masked sentences).",
"In the rest 10% cases, we replace the selected sentence with a random sentence. In this case, the document after transformation is William Shakespeare is a poet . Birds can fly . He is regarded as the greatest writer . The second sentence is replaced with “Birds can fly .” This strategy intends to add some noise during training and make the model more robust.",
"After the application of the above procedures to a document $\\mathcal {D} = (S_1, S_2, \\dots , S_{| \\mathcal {D} |})$ , we obtain the masked document $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$ . Let $\\mathcal {K} $ denote the set of indicies of selected sentences in $\\mathcal {D}$ . Now we are ready to predict the masked sentences $\\mathcal {M} = \\lbrace S_k | k \\in \\mathcal {K} \\rbrace $ using $\\widetilde{ \\mathcal {D} }$ . We first apply the hierarchical encoder Hibert in Section \"Conclusions\" to $\\widetilde{ \\mathcal {D} }$ and obtain its context sensitive sentence representations $( \\tilde{ \\mathbf {d}_1 }, \\tilde{ \\mathbf {d}_2 }, \\dots , \\tilde{ \\mathbf {d}_{| \\mathcal {D} |} } )$ . We will demonstrate how we predict the masked sentence $S_k = (w_0^k, w_1^k, w_2^k, \\dots , w_{|S_k|}^k)$ one word per step ( $w_0^k$ is an artificially added BOS token). At the $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$0 th step, we predict $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$1 given $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$2 and $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$3 . $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$4 already encodes the information of $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$5 with a focus around its $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$6 th sentence $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$7 . As shown in Figure 1 , we employ a Transformer decoder BIBREF6 to predict $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$8 with $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$9 as its additional input. The transformer decoder we used here is slightly different from the original one. The original decoder employs two multi-head attention layers to include both the context in encoder and decoder, while we only need one to learn the decoder context, since the context in encoder is a vector (i.e., $\\mathcal {K} $0 ). Specifically, after applying the word and positional embeddings to ( $\\mathcal {K} $1 ), we obtain $\\mathcal {K} $2 (also see Equation 6 ). Then we apply multi-head attention sub-layer to $\\mathcal {K} $3 : ",
"$$\\begin{split}\n\\tilde{\\mathbf {h}_{j-1}} &= \\text{MultiHead}(\\mathbf {q}_{j-1}, \\mathbf {K}_{j-1}, \\mathbf {V}_{j-1}) \\\\\n\\mathbf {q}_{j-1} &= \\mathbf {W}^Q \\: \\tilde{\\mathbf {e}_{j-1}^k} \\\\\n\\mathbf {K}_{j-1} &= \\mathbf {W}^K \\: \\widetilde{ \\mathbf {E} }^k_{1:j-1} \\\\\n\\mathbf {K}_{j-1} &= \\mathbf {W}^V \\: \\widetilde{ \\mathbf {E} }^k_{1:j-1}\n\\end{split}$$ (Eq. 13) ",
" where $\\mathbf {q}_{j-1}$ , $\\mathbf {K}_{j-1}$ , $\\mathbf {V}_{j-1}$ are the input query, key and value matrices of the multi-head attention function BIBREF6 $\\text{MultiHead}(\\cdot , \\cdot , \\cdot )$ , respectively. $\\mathbf {W}^Q \\in \\mathbb {R}^{d \\times d}$ , $\\mathbf {W}^K \\in \\mathbb {R}^{d \\times d}$ and $\\mathbf {W}^V \\in \\mathbb {R}^{d \\times d}$ are weight matrices.",
"Then we include the information of $\\widetilde{ \\mathcal {D} }$ by addition: ",
"$$\\tilde{\\mathbf {x}_{j-1}} = \\tilde{\\mathbf {h}_{j-1}} + \\tilde{ \\mathbf {d}_k }$$ (Eq. 14) ",
"We also follow a feedforward sub-layer (one hidden layer with ReLU BIBREF35 activation function) after $\\tilde{\\mathbf {x}_{j-1}}$ as in vaswani:2017:nips: ",
"$$\\tilde{\\mathbf {g}_{j-1}} = \\mathbf {W}^{ff}_2 \\max (0, \\mathbf {W}^{ff}_1 \\tilde{\\mathbf {x}_{j-1}} + \\mathbf {b}_1) + \\mathbf {b}_2$$ (Eq. 15) ",
"Note that the transformer decoder can have multiple layers by applying Equation ( 13 ) to ( 15 ) multiple times and we only show the computation of one layer for simplicity.",
"The probability of $w_j^k$ given $w_0^k,\\dots ,w_{j-1}^k$ and $\\widetilde{ \\mathcal {D} }$ is: ",
"$$p( w_j^k | w_{0:j-1}^k, \\widetilde{ \\mathcal {D} } ) = \\text{softmax}( \\mathbf {W}^O \\: \\tilde{\\mathbf {g}_{j-1}} )$$ (Eq. 16) ",
"Finally the probability of all masked sentences $ \\mathcal {M} $ given $\\widetilde{ \\mathcal {D} }$ is ",
"$$p(\\mathcal {M} | \\widetilde{ \\mathcal {D} }) = \\prod _{k \\in \\mathcal {K}} \\prod _{j=1}^{|S_k|} p(w_j^k | w_{0:j-1}^k, \\widetilde{ \\mathcal {D} })$$ (Eq. 17) ",
"The model above can be trained by minimizing the negative log-likelihood of all masked sentences given their paired documents. We can in theory have unlimited amount of training data for Hibert, since they can be generated automatically from (unlabeled) documents. Therefore, we can first train Hibert on large amount of data and then apply it to downstream tasks. In the next section, we will introduce its application to document summarization."
],
[
"Extractive summarization selects the most important sentences in a document as its summary. In this section, summarization is modeled as a sequence labeling problem. Specifically, a document is viewed as a sequence of sentences and a summarization model is expected to assign a True or False label for each sentence, where True means this sentence should be included in the summary. In the following, we will introduce the details of our summarization model based Hibert.",
"Let $\\mathcal {D} = (S_1, S_2, \\dots , S_{| \\mathcal {D} |})$ denote a document and $Y = (y_1, y_2, \\dots , y_{| \\mathcal {D} |})$ its sentence labels (methods for obtaining these labels are in Section \"Datasets\" ). As shown in Figure 2 , we first apply the hierarchical bidirectional transformer encoder Hibert to $\\mathcal {D}$ and yields the context dependent representations for all sentences $( \\mathbf {d}_1, \\mathbf {d}_2, \\dots , \\mathbf {d}_{|\\mathcal {D}|} )$ . The probability of the label of $S_i$ can be estimated using an additional linear projection and a softmax: ",
"$$p( y_i | \\mathcal {D} ) = \\text{softmax}(\\mathbf {W}^S \\: \\mathbf {d}_i)$$ (Eq. 20) ",
"where $\\mathbf {W}^S \\in \\mathbb {R}^{2 \\times d}$ . The summarization model can be trained by minimizing the negative log-likelihood of all sentence labels given their paired documents."
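A sketch of the classification layer in Equation (20); the module name, the two-label setup, and the use of `cross_entropy` for the negative log-likelihood are illustrative assumptions rather than the released implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class ExtractiveHead(nn.Module):
    """Linear projection W^S plus softmax over each sentence representation d_i."""

    def __init__(self, d_model=512, num_labels=2):
        super().__init__()
        self.proj = nn.Linear(d_model, num_labels)    # W^S in Equation (20)

    def forward(self, sent_reprs, labels=None):
        # sent_reprs: (num_sents, d_model) context-aware sentence vectors from HIBERT
        logits = self.proj(sent_reprs)
        if labels is None:
            return F.softmax(logits, dim=-1)          # p(y_i | D)
        return F.cross_entropy(logits, labels)        # negative log-likelihood loss
```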
],
[
"In this section we assess the performance of our model on the document summarization task. We first introduce the dataset we used for pre-training and the summarization task and give implementation details of our model. We also compare our model against multiple previous models."
],
[
"We conducted our summarization experiments on the non-anonymous version CNN/Dailymail (CNNDM) dataset BIBREF36 , BIBREF9 , and the New York Times dataset BIBREF37 , BIBREF38 . For the CNNDM dataset, we preprocessed the dataset using the scripts from the authors of see:2017:acl. The resulting dataset contains 287,226 documents with summaries for training, 13,368 for validation and 11,490 for test. Following BIBREF38 , BIBREF37 , we created the NYT50 dataset by removing the documents whose summaries are shorter than 50 words from New York Times dataset. We used the same training/validation/test splits as in xu:2019:arxiv, which contain 137,778 documents for training, 17,222 for validation and 17,223 for test. To create sentence level labels for extractive summarization, we used a strategy similar to nallapati:2017:aaai. We label the subset of sentences in a document that maximizes Rouge BIBREF39 (against the human summary) as True and all other sentences as False.",
"To unsupervisedly pre-train our document model Hibert (see Section \"Pre-training\" for details), we created the GIGA-CM dataset (totally 6,626,842 documents and 2,854 million words), which includes 6,339,616 documents sampled from the English Gigaword dataset and the training split of the CNNDM dataset. We used the validation set of CNNDM as the validation set of GIGA-CM as well. As in see:2017:acl, documents and summaries in CNNDM, NYT50 and GIGA-CM are all segmented and tokenized using Stanford CoreNLP toolkit BIBREF40 . To reduce the vocabulary size, we applied byte pair encoding (BPE; BIBREF41 ) to all of our datasets. To limit the memory consumption during training, we limit the length of each sentence to be 50 words (51th word and onwards are removed) and split documents with more than 30 sentences into smaller documents with each containing at most 30 sentences."
],
[
"Our model is trained in three stages, which includes two pre-training stages and one finetuning stage. The first stage is the open-domain pre-training and in this stage we train Hibert with the pre-training objective (Section \"Pre-training\" ) on GIGA-CM dataset. In the second stage, we perform the in-domain pre-training on the CNNDM (or NYT50) dataset still with the same pre-training objective. In the final stage, we finetune Hibert in the summarization model (Section \"Extractive Summarization\" ) to predict extractive sentence labels on CNNDM (or NYT50).",
"The sizes of the sentence and document level Transformers as well as the Transformer decoder in Hibert are the same. Let $L$ denote the number of layers in Transformer, $H$ the hidden size and $A$ the number of attention heads. As in BIBREF6 , BIBREF0 , the hidden size of the feedforward sublayer is $4H$ . We mainly trained two model sizes: $\\text{\\sc Hibert}_S$ ( $L=6$ , $H=512$ and $A=8$ ) and $\\text{\\sc Hibert}_M$ ( $L=6$ , $H$0 and $H$1 ). We trained both $H$2 and $H$3 on a single machine with 8 Nvidia Tesla V100 GPUs with a batch size of 256 documents. We optimized our models using Adam with learning rate of 1e-4, $H$4 , $H$5 , L2 norm of 0.01, learning rate warmup 10,000 steps and learning rate decay afterwards using the strategies in vaswani:2017:nips. The dropout rate in all layers are 0.1. In pre-training stages, we trained our models until validation perplexities do not decrease significantly (around 45 epochs on GIGA-CM dataset and 100 to 200 epochs on CNNDM and NYT50). Training $H$6 for one epoch on GIGA-CM dataset takes approximately 20 hours.",
"Our models during fine-tuning stage can be trained on a single GPU. The hyper-parameters are almost identical to these in the pre-training stages except that the learning rate is 5e-5, the batch size is 32, the warmup steps are 4,000 and we train our models for 5 epochs. During inference, we rank sentences using $p( y_i | \\mathcal {D} ) $ (Equation ( 20 )) and choose the top $K$ sentences as summary, where $K$ is tuned on the validation set."
],
[
"We evaluated the quality of summaries from different systems automatically using ROUGE BIBREF39 . We reported the full length F1 based ROUGE-1, ROUGE-2 and ROUGE-L on the CNNDM and NYT50 datasets. We compute ROUGE scores using the ROUGE-1.5.5.pl script.",
"Additionally, we also evaluated the generated summaries by eliciting human judgments. Following BIBREF11 , BIBREF4 , we randomly sampled 20 documents from the CNNDM test set. Participants were presented with a document and a list of summaries produced by different systems. We asked subjects to rank these summaries (ties allowed) by taking informativeness (is the summary capture the important information from the document?) and fluency (is the summary grammatical?) into account. Each document is annotated by three different subjects."
],
[
"Our main results on the CNNDM dataset are shown in Table 1 , with abstractive models in the top block and extractive models in the bottom block. Pointer+Coverage BIBREF9 , Abstract-ML+RL BIBREF10 and DCA BIBREF42 are all sequence to sequence learning based models with copy and coverage modeling, reinforcement learning and deep communicating agents extensions. SentRewrite BIBREF26 and InconsisLoss BIBREF25 all try to decompose the word by word summary generation into sentence selection from document and “sentence” level summarization (or compression). Bottom-Up BIBREF27 generates summaries by combines a word prediction model with the decoder attention model. The extractive models are usually based on hierarchical encoders (SummaRuNNer; BIBREF3 and NeuSum; BIBREF11 ). They have been extended with reinforcement learning (Refresh; BIBREF4 and BanditSum; BIBREF20 ), Maximal Marginal Relevance (NeuSum-MMR; BIBREF21 ), latent variable modeling (LatentSum; BIBREF5 ) and syntactic compression (JECS; BIBREF38 ). Lead3 is a baseline which simply selects the first three sentences. Our model $\\text{\\sc Hibert}_S$ (in-domain), which only use one pre-training stage on the in-domain CNNDM training set, outperforms all of them and differences between them are all significant with a 0.95 confidence interval (estimated with the ROUGE script). Note that pre-training $\\text{\\sc Hibert}_S$ (in-domain) is very fast and it only takes around 30 minutes for one epoch on the CNNDM training set. Our models with two pre-training stages ( $\\text{\\sc Hibert}_S$ ) or larger size ( $\\text{\\sc Hibert}_M$ ) perform even better and $\\text{\\sc Hibert}_M$ outperforms BERT by 0.5 ROUGE. We also implemented two baselines. One is the hierarchical transformer summarization model (HeriTransfomer; described in \"Extractive Summarization\" ) without pre-training. Note the setting for HeriTransfomer is ( $L=4$ , $H=300$ and $A=4$ ) . We can see that the pre-training (details in Section \"Pre-training\" ) leads to a +1.25 ROUGE improvement. Another baseline is based on a pre-trained BERT BIBREF0 and finetuned on the CNNDM dataset. We used the $\\text{BERT}_{\\text{base}}$ model because our 16G RAM V100 GPU cannot fit $\\text{BERT}_{\\text{large}}$ for the summarization task even with batch size of 1. The positional embedding of BERT supports input length up to 512 words, we therefore split documents with more than 10 sentences into multiple blocks (each block with 10 sentences). We feed each block (the BOS and EOS tokens of each sentence are replaced with [CLS] and [SEP] tokens) into BERT and use the representation at [CLS] token to classify each sentence. Our model $\\text{\\sc Hibert}_S$1 outperforms BERT by 0.4 to 0.5 ROUGE despite with only half the number of model parameters ( $\\text{\\sc Hibert}_S$2 54.6M v.s. BERT 110M). Results on the NYT50 dataset show the similar trends (see Table 2 ). EXTRACTION is a extractive model based hierarchical LSTM and we use the numbers reported by xu:2019:arxiv. The improvement of $\\text{\\sc Hibert}_S$3 over the baseline without pre-training (HeriTransformer) becomes 2.0 ROUGE. $\\text{\\sc Hibert}_S$4 (in-domain), $\\text{\\sc Hibert}_S$5 (in-domain), $\\text{\\sc Hibert}_S$6 and $\\text{\\sc Hibert}_S$7 all outperform BERT significantly according to the ROUGE script.",
"We also conducted human experiment with 20 randomly sampled documents from the CNNDM test set. We compared our model $\\text{\\sc Hibert}_M$ against Lead3, DCA, Latent, BERT and the human reference (Human). We asked the subjects to rank the outputs of these systems from best to worst. As shown in Table 4 , the output of $\\text{\\sc Hibert}_M$ is selected as the best in 30% of cases and we obtained lower mean rank than all systems except for Human. We also converted the rank numbers into ratings (rank $i$ to $7-i$ ) and applied student $t$ -test on the ratings. $\\text{\\sc Hibert}_M$ is significantly different from all systems in comparison ( $p < 0.05$ ), which indicates our model still lags behind Human, but is better than all other systems.",
"As mentioned earlier, our pre-training includes two stages. The first stage is the open-domain pre-training stage on the GIGA-CM dataset and the following stage is the in-domain pre-training on the CNNDM (or NYT50) dataset. As shown in Table 3 , we pretrained $\\text{\\sc Hibert}_S$ using only open-domain stage (Open-Domain), only in-domain stage (In-Domain) or both stages (Open+In-Domain) and applied it to the CNNDM summarization task. Results on the validation set of CNNDM indicate the two-stage pre-training process is necessary."
],
[
"The core part of a neural extractive summarization model is the hierarchical document encoder. We proposed a method to pre-train document level hierarchical bidirectional transformer encoders on unlabeled data. When we only pre-train hierarchical transformers on the training sets of summarization datasets with our proposed objective, application of the pre-trained hierarchical transformers to extractive summarization models already leads to wide improvement of summarization performance. Adding the large open-domain dataset to pre-training leads to even better performance. In the future, we plan to apply models to other tasks that also require hierarchical document encodings (e.g., document question answering). We are also interested in improving the architectures of hierarchical document encoders and designing other objectives to train hierarchical transformers."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model",
"Document Representation",
"Pre-training",
"Extractive Summarization",
"Experiments",
"Datasets",
"Implementation Details",
"Evaluations",
"Results",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"07f9afd79ec1426e67b10f5a598bbe3103f714cf"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Results of various models on the CNNDM test set using full-length F1 ROUGE-1 (R-1), ROUGE-2 (R2), and ROUGE-L (R-L).",
"Our main results on the CNNDM dataset are shown in Table 1 , with abstractive models in the top block and extractive models in the bottom block. Pointer+Coverage BIBREF9 , Abstract-ML+RL BIBREF10 and DCA BIBREF42 are all sequence to sequence learning based models with copy and coverage modeling, reinforcement learning and deep communicating agents extensions. SentRewrite BIBREF26 and InconsisLoss BIBREF25 all try to decompose the word by word summary generation into sentence selection from document and “sentence” level summarization (or compression). Bottom-Up BIBREF27 generates summaries by combines a word prediction model with the decoder attention model. The extractive models are usually based on hierarchical encoders (SummaRuNNer; BIBREF3 and NeuSum; BIBREF11 ). They have been extended with reinforcement learning (Refresh; BIBREF4 and BanditSum; BIBREF20 ), Maximal Marginal Relevance (NeuSum-MMR; BIBREF21 ), latent variable modeling (LatentSum; BIBREF5 ) and syntactic compression (JECS; BIBREF38 ). Lead3 is a baseline which simply selects the first three sentences. Our model $\\text{\\sc Hibert}_S$ (in-domain), which only use one pre-training stage on the in-domain CNNDM training set, outperforms all of them and differences between them are all significant with a 0.95 confidence interval (estimated with the ROUGE script). Note that pre-training $\\text{\\sc Hibert}_S$ (in-domain) is very fast and it only takes around 30 minutes for one epoch on the CNNDM training set. Our models with two pre-training stages ( $\\text{\\sc Hibert}_S$ ) or larger size ( $\\text{\\sc Hibert}_M$ ) perform even better and $\\text{\\sc Hibert}_M$ outperforms BERT by 0.5 ROUGE. We also implemented two baselines. One is the hierarchical transformer summarization model (HeriTransfomer; described in \"Extractive Summarization\" ) without pre-training. Note the setting for HeriTransfomer is ( $L=4$ , $H=300$ and $A=4$ ) . We can see that the pre-training (details in Section \"Pre-training\" ) leads to a +1.25 ROUGE improvement. Another baseline is based on a pre-trained BERT BIBREF0 and finetuned on the CNNDM dataset. We used the $\\text{BERT}_{\\text{base}}$ model because our 16G RAM V100 GPU cannot fit $\\text{BERT}_{\\text{large}}$ for the summarization task even with batch size of 1. The positional embedding of BERT supports input length up to 512 words, we therefore split documents with more than 10 sentences into multiple blocks (each block with 10 sentences). We feed each block (the BOS and EOS tokens of each sentence are replaced with [CLS] and [SEP] tokens) into BERT and use the representation at [CLS] token to classify each sentence. Our model $\\text{\\sc Hibert}_S$1 outperforms BERT by 0.4 to 0.5 ROUGE despite with only half the number of model parameters ( $\\text{\\sc Hibert}_S$2 54.6M v.s. BERT 110M). Results on the NYT50 dataset show the similar trends (see Table 2 ). EXTRACTION is a extractive model based hierarchical LSTM and we use the numbers reported by xu:2019:arxiv. The improvement of $\\text{\\sc Hibert}_S$3 over the baseline without pre-training (HeriTransformer) becomes 2.0 ROUGE. $\\text{\\sc Hibert}_S$4 (in-domain), $\\text{\\sc Hibert}_S$5 (in-domain), $\\text{\\sc Hibert}_S$6 and $\\text{\\sc Hibert}_S$7 all outperform BERT significantly according to the ROUGE script."
],
"extractive_spans": [],
"free_form_answer": "There were hierarchical and non-hierarchical baselines; BERT was one of those baselines",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Results of various models on the CNNDM test set using full-length F1 ROUGE-1 (R-1), ROUGE-2 (R2), and ROUGE-L (R-L).",
"We also implemented two baselines. One is the hierarchical transformer summarization model (HeriTransfomer; described in \"Extractive Summarization\" ) without pre-training."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"Is the baseline a non-heirarchical model like BERT?"
],
"question_id": [
"fc8bc6a3c837a9d1c869b7ee90cf4e3c39bcd102"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"transformers"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: The architecture of HIBERT during training. senti is a sentence in the document above, which has four sentences in total. sent3 is masked during encoding and the decoder predicts the original sent3.",
"Figure 2: The architecture of our extractive summarization model. The sentence and document level transformers can be pretrained.",
"Table 1: Results of various models on the CNNDM test set using full-length F1 ROUGE-1 (R-1), ROUGE-2 (R2), and ROUGE-L (R-L).",
"Table 4: Human evaluation: proportions of rankings and mean ranks (MeanR; lower is better) of various models.",
"Table 2: Results of various models on the NYT50 test set using full-length F1 ROUGE. HIBERTS (indomain) and HIBERTM (in-domain) only uses one pretraining stage on the NYT50 training set.",
"Table 3: Results of summarization model (HIBERTS setting) with different pre-training strategies on the CNNDM validation set using full-length F1 ROUGE."
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png",
"7-Table1-1.png",
"8-Table4-1.png",
"8-Table2-1.png",
"8-Table3-1.png"
]
} | [
"Is the baseline a non-heirarchical model like BERT?"
] | [
[
"1905.06566-Results-0",
"1905.06566-7-Table1-1.png"
]
] | [
"There were hierarchical and non-hierarchical baselines; BERT was one of those baselines"
] | 145 |
2004.03034 | The Role of Pragmatic and Discourse Context in Determining Argument Impact | Research in the social sciences and psychology has shown that the persuasiveness of an argument depends not only the language employed, but also on attributes of the source/communicator, the audience, and the appropriateness and strength of the argument's claims given the pragmatic and discourse context of the argument. Among these characteristics of persuasive arguments, prior work in NLP does not explicitly investigate the effect of the pragmatic and discourse context when determining argument quality. This paper presents a new dataset to initiate the study of this aspect of argumentation: it consists of a diverse collection of arguments covering 741 controversial topics and comprising over 47,000 claims. We further propose predictive models that incorporate the pragmatic and discourse context of argumentative claims and show that they outperform models that rely only on claim-specific linguistic features for predicting the perceived impact of individual claims within a particular line of argument. | {
"paragraphs": [
[
"Previous work in the social sciences and psychology has shown that the impact and persuasive power of an argument depends not only on the language employed, but also on the credibility and character of the communicator (i.e. ethos) BIBREF0, BIBREF1, BIBREF2; the traits and prior beliefs of the audience BIBREF3, BIBREF4, BIBREF5, BIBREF6; and the pragmatic context in which the argument is presented (i.e. kairos) BIBREF7, BIBREF8.",
"Research in Natural Language Processing (NLP) has only partially corroborated these findings. One very influential line of work, for example, develops computational methods to automatically determine the linguistic characteristics of persuasive arguments BIBREF9, BIBREF10, BIBREF11, but it does so without controlling for the audience, the communicator or the pragmatic context.",
"Very recent work, on the other hand, shows that attributes of both the audience and the communicator constitute important cues for determining argument strength BIBREF12, BIBREF13. They further show that audience and communicator attributes can influence the relative importance of linguistic features for predicting the persuasiveness of an argument. These results confirm previous findings in the social sciences that show a person's perception of an argument can be influenced by his background and personality traits.",
"To the best of our knowledge, however, no NLP studies explicitly investigate the role of kairos — a component of pragmatic context that refers to the context-dependent “timeliness\" and “appropriateness\" of an argument and its claims within an argumentative discourse — in argument quality prediction. Among the many social science studies of attitude change, the order in which argumentative claims are shared with the audience has been studied extensively: 10.1086/209393, for example, summarize studies showing that the argument-related claims a person is exposed to beforehand can affect his perception of an alternative argument in complex ways. article-3 similarly find that changes in an argument's context can have a big impact on the audience's perception of the argument.",
"Some recent studies in NLP have investigated the effect of interactions on the overall persuasive power of posts in social media BIBREF10, BIBREF14. However, in social media not all posts have to express arguments or stay on topic BIBREF15, and qualitative evaluation of the posts can be influenced by many other factors such as interactions between the individuals BIBREF16. Therefore, it is difficult to measure the effect of argumentative pragmatic context alone in argument quality prediction without the effect of these confounding factors using the datasets and models currently available in this line of research.",
"In this paper, we study the role of kairos on argument quality prediction by examining the individual claims of an argument for their timeliness and appropriateness in the context of a particular line of argument. We define kairos as the sequence of argumentative text (e.g. claims) along a particular line of argumentative reasoning.",
"To start, we present a dataset extracted from kialo.com of over 47,000 claims that are part of a diverse collection of arguments on 741 controversial topics. The structure of the website dictates that each argument must present a supporting or opposing claim for its parent claim, and stay within the topic of the main thesis. Rather than being posts on a social media platform, these are community-curated claims. Furthermore, for each presented claim, the audience votes on its impact within the given line of reasoning. Critically then, the dataset includes the argument context for each claim, allowing us to investigate the characteristics associated with impactful arguments.",
"With the dataset in hand, we propose the task of studying the characteristics of impactful claims by (1) taking the argument context into account, (2) studying the extent to which this context is important, and (3) determining the representation of context that is more effective. To the best of our knowledge, ours is the first dataset that includes claims with both impact votes and the corresponding context of the argument."
],
[
"Recent studies in computational argumentation have mainly focused on the tasks of identifying the structure of the arguments such as argument structure parsing BIBREF17, BIBREF18, and argument component classification BIBREF19, BIBREF20. More recently, there is an increased research interest to develop computational methods that can automatically evaluate qualitative characteristic of arguments, such as their impact and persuasive power BIBREF9, BIBREF10, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. Consistent with findings in the social sciences and psychology, some of the work in NLP has shown that the impact and persuasive power of the arguments are not simply related to the linguistic characteristics of the language, but also on characteristics the source (ethos) BIBREF16 and the audience BIBREF12, BIBREF13. These studies suggest that perception of the arguments can be influenced by the credibility of the source, and the background of the audience.",
"It has also been shown, in social science studies, that kairos, which refers to the “timeliness” and “appropropriateness” of arguments and claims, is important to consider in studies of argument impact and persuasiveness BIBREF7, BIBREF8. One recent study in NLP has investigated the role of argument sequencing in argument persuasion looking at BIBREF14 Change My View, which is a social media platform where users post their views, and challenge other users to present arguments in an attempt to change their them. However, as stated in BIBREF15 many posts on social media platforms either do not express an argument, or diverge from the main topic of conversation. Therefore, it is difficult to measure the effect of pragmatic context in argument impact and persuasion, without confounding factors from using noisy social media data. In contrast, we provide a dataset of claims along with their structured argument path, which only consists of claims and corresponds to a particular line of reasoning for the given controversial topic. This structure enables us to study the characteristics of impactful claims, accounting for the effect of the pragmatic context.",
"Consistent with previous findings in the social sciences, we find that incorporating pragmatic and discourse context is important in computational studies of persuasion, as predictive models that with the context representation outperform models that only incorporate claim-specific linguistic features, in predicting the impact of a claim. Such a system that can predict the impact of a claim given an argumentative discourse, for example, could potentially be employed by argument retrieval and generation models which aims to pick or generate the most appropriate possible claim given the discourse."
],
[
"Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented.",
"Figure FIGREF1 shows a partial argument tree for the argument thesis “Physical torture of prisoners is an acceptable interrogation tool.”. Each node in the argument tree corresponds to a claim, and these argument trees are constructed and edited collaboratively by the users of the platform.",
"Except the thesis, every claim in the argument tree either opposes or supports its parent claim. Each path from the root to leaf nodes corresponds to an argument path which represents a particular line of reasoning on the given controversial topic.",
"Moreover, each claim has impact votes assigned by the users of the platform. The impact votes evaluate how impactful a claim is within its context, which consists of its predecessor claims from the thesis of the tree. For example, claim O1 “It is morally wrong to harm a defenseless person” is an opposing claim for the thesis and it is an impactful claim since most of its impact votes belong to the category of very high impact. However, claim S3 “It is illegitimate for state actors to harm someone without the process” is a supporting claim for its parent O1 and it is a less impactful claim since most of the impact votes belong to the no impact and low impact categories.",
"Distribution of impact votes. The distribution of claims with the given range of number of impact votes are shown in Table TABREF5. There are 19,512 claims in total with 3 or more votes. Out of the claims with 3 or more votes, majority of them have 5 or more votes. We limit our study to the claims with at least 5 votes to have a more reliable assignment for the accumulated impact label for each claim.",
"Impact label statistics. Table TABREF7 shows the distribution of the number of votes for each of the impact categories. The claims have $241,884$ total votes. The majority of the impact votes belong to medium impact category. We observe that users assign more high impact and very high impact votes than low impact and no impact votes respectively. When we restrict the claims to the ones with at least 5 impact votes, we have $213,277$ votes in total.",
"Agreement for the impact votes. To determine the agreement in assigning the impact label for a particular claim, for each claim, we compute the percentage of the votes that are the same as the majority impact vote for that claim. Let $c_{i}$ denote the count of the claims with the class labels C=[no impact, low impact, medium impact, high impact, very high impact] for the impact label $l$ at index $i$.",
"For example, for claim S1 in Figure FIGREF1, the agreement score is $100 * \\frac{30}{90}\\%=33.33\\%$ since the majority class (no impact) has 30 votes and there are 90 impact votes in total for this particular claim. We compute the agreement score for the cases where (1) we treat each impact label separately (5-class case) and (2) we combine the classes high impact and very high impact into a one class: impactful and no impact and low impact into a one class: not impactful (3-class case).",
"Table TABREF6 shows the number of claims with the given agreement score thresholds when we include the claims with at least 5 votes. We see that when we combine the low impact and high impact classes, there are more claims with high agreement score. This may imply that distinguishing between no impact-low impact and high impact-very high impact classes is difficult. To decrease the sparsity issue, in our experiments, we use 3-class representation for the impact labels. Moreover, to have a more reliable assignment of impact labels, we consider only the claims with have more than 60% agreement.",
"Context. In an argument tree, the claims from the thesis node (root) to each leaf node, form an argument path. This argument path represents a particular line of reasoning for the given thesis. Similarly, for each claim, all the claims along the path from the thesis to the claim, represent the context for the claim. For example, in Figure FIGREF1, the context for O1 consists of only the thesis, whereas the context for S3 consists of both the thesis and O1 since S3 is provided to support the claim O1 which is an opposing claim for the thesis.",
"The claims are not constructed independently from their context since they are written in consideration with the line of reasoning so far. In most cases, each claim elaborates on the point made by its parent and presents cases to support or oppose the parent claim's points. Similarly, when users evaluate the impact of a claim, they consider if the claim is timely and appropriate given its context. There are cases in the dataset where the same claim has different impact labels, when presented within a different context. Therefore, we claim that it is not sufficient to only study the linguistic characteristic of a claim to determine its impact, but it is also necessary to consider its context in determining the impact.",
"Context length ($\\text{C}_{l}$) for a particular claim C is defined by number of claims included in the argument path starting from the thesis until the claim C. For example, in Figure FIGREF1, the context length for O1 and S3 are 1 and 2 respectively. Table TABREF8 shows number of claims with the given range of context length for the claims with more than 5 votes and $60\\%$ agreement score. We observe that more than half of these claims have 3 or higher context length."
],
[
"Similar to prior work, our aim is to understand the characteristics of impactful claims in argumentation. However, we hypothesize that the qualitative characteristics of arguments is not independent of the context in which they are presented. To understand the relationship between argument context and the impact of a claim, we aim to incorporate the context along with the claim itself in our predictive models.",
"Prediction task. Given a claim, we want to predict the impact label that is assigned to it by the users: not impactful, medium impact, or impactful.",
"Preprocessing. We restrict our study to claims with at least 5 or more votes and greater than $60\\%$ agreement, to have a reliable impact label assignment. We have $7,386$ claims in the dataset satisfying these constraints. We see that the impact class impacful is the majority class since around $58\\%$ of the claims belong to this category.",
"For our experiments, we split our data to train (70%), validation (15%) and test (15%) sets."
],
[
"The majority baseline assigns the most common label of the training examples (high impact) to every test example."
],
[
"Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim.",
"The features that represent the simple characteristics of the claim's argument tree include the distance and similarity of the claim to the thesis, the similarity of a claim with its parent, and the impact votes of the claim's parent claim. We encode the similarity of a claim to its parent and the thesis claim with the cosine similarity of their tf-idf vectors. The distance and similarity metrics aim to model whether claims which are more similar (i.e. potentially more topically relevant) to their parent claim or the thesis claim, are more impactful.",
"We encode the quality of the parent claim as the number of votes for each impact class, and incorporate it as a feature to understand if it is more likely for a claim to impactful given an impactful parent claim.",
"Linguistic features. To represent each claim, we extracted the linguistic features proposed by BIBREF9 such as tf-idf scores for unigrams and bigrams, ratio of quotation marks, exclamation marks, modal verbs, stop words, type-token ratio, hedging BIBREF29, named entity types, POS n-grams, sentiment BIBREF30 and subjectivity scores BIBREF31, spell-checking, readibility features such as Coleman-Liau BIBREF32, Flesch BIBREF33, argument lexicon features BIBREF34 and surface features such as word lengths, sentence lengths, word types, and number of complex words."
],
[
"joulin-etal-2017-bag introduced a simple, yet effective baseline for text classification, which they show to be competitive with deep learning classifiers in terms of accuracy. Their method represents a sequence of text as a bag of n-grams, and each n-gram is passed through a look-up table to get its dense vector representation. The overall sequence representation is simply an average over the dense representations of the bag of n-grams, and is fed into a linear classifier to predict the label. We use the code released by joulin-etal-2017-bag to train a classifier for argument impact prediction, based on the claim text."
],
[
"Another effective baseline BIBREF35, BIBREF36 for text classification consists of encoding the text sequence using a bidirectional Long Short Term Memory (LSTM) BIBREF37, to get the token representations in context, and then attending BIBREF38 over the tokens to get the sequence representation. For the query vector for attention, we use a learned context vector, similar to yang-etal-2016-hierarchical. We picked our hyperparameters based on performance on the validation set, and report our results for the best set of hyperparameters. We initialized our word embeddings with glove vectors BIBREF39 pre-trained on Wikipedia + Gigaword, and used the Adam optimizer BIBREF40 with its default settings."
],
[
"devlin2018bert fine-tuned a pre-trained deep bi-directional transformer language model (which they call BERT), by adding a simple classification layer on top, and achieved state of the art results across a variety of NLP tasks. We employ their pre-trained language models for our task and compare it to our baseline models. For all the architectures described below, we finetune for 10 epochs, with a learning rate of 2e-5. We employ an early stopping procedure based on the model performance on a validation set."
],
[
"In this setting, we attempt to classify the impact of the claim, based on the text of the claim only. We follow the fine-tuning procedure for sequence classification detailed in BIBREF41, and input the claim text as a sequence of tokens preceded by the special [CLS] token and followed by the special [SEP] token. We add a classification layer on top of the BERT encoder, to which we pass the representation of the [CLS] token, and fine-tune this for argument impact prediction."
],
[
"In this setting, we use the parent claim's text, in addition to the target claim text, in order to classify the impact of the target claim. We treat this as a sequence pair classification task, and combine both the target claim and parent claim as a single sequence of tokens, separated by the special separator [SEP]. We then follow the same procedure above, for fine-tuning."
],
[
"In this setting, we consider incorporating a larger context from the discourse, in order to assess the impact of a claim. In particular, we consider up to four previous claims in the discourse (for a total context length of 5). We attempt to incorporate larger context into the BERT model in three different ways.",
"Flat representation of the path. The first, simple approach is to represent the entire path (claim + context) as a single sequence, where each of the claims is separated by the [SEP] token. BERT was trained on sequence pairs, and therefore the pre-trained encoders only have two segment embeddings BIBREF41. So to fit multiple sequences into this framework, we indicate all tokens of the target claim as belonging to segment A and the tokens for all the claims in the discourse context as belonging to segment B. This way of representing the input, requires no additional changes to the architecture or retraining, and we can just finetune in a similar manner as above. We refer to this representation of the context as a flat representation, and denote the model as $\\text{Context}_{f}(i)$, where $i$ indicates the length of the context that is incorporated into the model.",
"Attention over context. Recent work in incorporating argument sequence in predicting persuasiveness BIBREF14 has shown that hierarchical representations are effective in representing context. Similarly, we consider hierarchical representations for representing the discourse. We first encode each claim using the pre-trained BERT model as the claim encoder, and use the representation of the [CLS] token as claim representation. We then employ dot-product attention BIBREF38, to get a weighted representation for the context. We use a learned context vector as the query, for computing attention scores, similar to yang-etal-2016-hierarchical. The attention score $\\alpha _c$ is computed as shown below:",
"Where $V_c$ is the claim representation that was computed with the BERT encoder as described above, $V_l$ is the learned context vector that is used for computing attention scores, and $D$ is the set of claims in the discourse. After computing the attention scores, the final context representation $v_d$ is computed as follows:",
"We then concatenate the context representation with the target claim representation $[V_d, V_r]$ and pass it to the classification layer to predict the quality. We denote this model as $\\text{Context}_{a}(i)$.",
"GRU to encode context Similar to the approach above, we consider a hierarchical representation for representing the context. We compute the claim representations, as detailed above, and we then feed the discourse claims' representations (in sequence) into a bidirectional Gated Recurrent Unit (GRU) BIBREF42, to compute the context representation. We concatenate this with the target claim representation and use this to predict the claim impact. We denote this model as $\\text{Context}_{gru}(i)$."
],
[
"Table TABREF21 shows the macro precision, recall and F1 scores for the baselines as well as the BERT models with and without context representations.",
"We see that parent quality is a simple yet effective feature and SVM model with this feature can achieve significantly higher ($p<0.001$) F1 score ($46.61\\%$) than distance from the thesis and linguistic features. Claims with higher impact parents are more likely to be have higher impact. Similarity with the parent and thesis is not significantly better than the majority baseline. Although the BiLSTM model with attention and FastText baselines performs better than the SVM with distance from the thesis and linguistic features, it has similar performance to the parent quality baseline.",
"We find that the BERT model with claim only representation performs significantly better ($p<0.001$) than the baseline models. Incorporating the parent representation only along with the claim representation does not give significant improvement over representing the claim only. However, incorporating the flat representation of the larger context along with the claim representation consistently achieves significantly better ($p<0.001$) performance than the claim representation alone. Similarly, attention representation over the context with the learned query vector achieves significantly better performance then the claim representation only ($p<0.05$).",
"We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\\%$).",
"To understand for what kinds of claims the best performing contextual model is more effective, we evaluate the BERT model with flat context representation for claims with context length values 1, 2, 3 and 4 separately. Table TABREF26 shows the F1 score of the BERT model without context and with flat context representation with different lengths of context. For the claims with context length 1, adding $\\text{Context}_{f}(3)$ and $\\text{Context}_{f}(4)$ representation along with the claim achieves significantly better $(p<0.05)$ F1 score than modeling the claim only. Similarly for the claims with context length 3 and 4, $\\text{Context}_{f}(4)$ and $\\text{Context}_{f}(3)$ perform significantly better than BERT with claim only ($(p<0.05)$ and $(p<0.01)$ respectively). We see that models with larger context are helpful even for claims which have limited context (e.g. $\\text{C}_{l}=1$). This may suggest that when we train the models with larger context, they learn how to represent the claims and their context better."
],
[
"In this paper, we present a dataset of claims with their corresponding impact votes, and investigate the role of argumentative discourse context in argument impact classification. We experiment with various models to represent the claims and their context and find that incorporating the context information gives significant improvement in predicting argument impact. In our study, we find that flat representation of the context gives the best improvement in the performance and our analysis indicates that the contextual models perform better even for the claims with limited context."
],
[
"This work was supported in part by NSF grants IIS-1815455 and SES-1741441. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government."
]
],
"section_name": [
"Introduction",
"Related Work",
"Dataset",
"Methodology ::: Hypothesis and Task Description",
"Methodology ::: Baseline Models ::: Majority",
"Methodology ::: Baseline Models ::: SVM with RBF kernel",
"Methodology ::: Baseline Models ::: FastText",
"Methodology ::: Baseline Models ::: BiLSTM with Attention",
"Methodology ::: Fine-tuned BERT model",
"Methodology ::: Fine-tuned BERT model ::: Claim with no context",
"Methodology ::: Fine-tuned BERT model ::: Claim with parent representation",
"Methodology ::: Fine-tuned BERT model ::: Incorporating larger context",
"Results and Analysis",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"08357ffcc372ab5b2dcdeef00478d3a45f7d1ddc"
],
"answer": [
{
"evidence": [
"We see that parent quality is a simple yet effective feature and SVM model with this feature can achieve significantly higher ($p<0.001$) F1 score ($46.61\\%$) than distance from the thesis and linguistic features. Claims with higher impact parents are more likely to be have higher impact. Similarity with the parent and thesis is not significantly better than the majority baseline. Although the BiLSTM model with attention and FastText baselines performs better than the SVM with distance from the thesis and linguistic features, it has similar performance to the parent quality baseline.",
"We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\\%$)."
],
"extractive_spans": [],
"free_form_answer": "F1 score of best authors' model is 55.98 compared to BiLSTM and FastText that have F1 score slighlty higher than 46.61.",
"highlighted_evidence": [
"We see that parent quality is a simple yet effective feature and SVM model with this feature can achieve significantly higher ($p<0.001$) F1 score ($46.61\\%$) than distance from the thesis and linguistic features.",
"Although the BiLSTM model with attention and FastText baselines performs better than the SVM with distance from the thesis and linguistic features, it has similar performance to the parent quality baseline.",
"We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\\%$)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"fc4679a243e345a5d645efff11bc4e4317cde929"
],
"answer": [
{
"evidence": [
"Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim."
],
"extractive_spans": [
"SVM with RBF kernel"
],
"free_form_answer": "",
"highlighted_evidence": [
"Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c9c5229625288c47e9f396728a6162bc35fc8ea8"
],
"answer": [
{
"evidence": [
"Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented."
],
"extractive_spans": [
"While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument."
],
"free_form_answer": "",
"highlighted_evidence": [
"Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2e9ad78831c6a42fc1da68fde798899e8e64d8a8"
],
"answer": [
{
"evidence": [
"Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented."
],
"extractive_spans": [
"5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact"
],
"free_form_answer": "",
"highlighted_evidence": [
" Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How better are results compared to baseline models?",
"What models that rely only on claim-specific linguistic features are used as baselines?",
"How is pargmative and discourse context added to the dataset?",
"What annotations are available in the dataset?"
],
"question_id": [
"ca26cfcc755f9d0641db0e4d88b4109b903dbb26",
"6cdd61ebf84aa742155f4554456cc3233b6ae2bf",
"8e8097cada29d89ca07166641c725e0f8fed6676",
"951098f0b7169447695b47c142384f278f451a1e"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Example partial argument tree with claims and corresponding impact votes for the thesis “PHYSICAL TORTURE OF PRISONERS IS AN ACCEPTABLE INTERROGATION TOOL.”.",
"Table 1: Number of claims for the given range of number of votes. There are 19,512 claims in the dataset with 3 or more votes. Out of the claims with 3 or more votes, majority of them have 5 or more votes.",
"Table 2: Number of claims, with at least 5 votes, above the given threshold of agreement percentage for 3-class and 5-class cases. When we combine the low impact and high impact classes, there are more claims with high agreement score.",
"Table 3: Number of votes for the given impact label. There are 241, 884 total votes and majority of them belongs to the category MEDIUM IMPACT.",
"Table 4: Number of claims for the given range of context length, for claims with more than 5 votes and an agreement score greater than 60%.",
"Table 5: Results for the baselines and the BERT models with and without the context. Best performing model is BERT with the representation of previous 3 claims in the path along with the claim representation itself. We run the models 5 times and we report the mean and standard deviation.",
"Table 6: F1 scores of each model for the claims with various context length values."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"7-Table5-1.png",
"8-Table6-1.png"
]
} | [
"How better are results compared to baseline models?"
] | [
[
"2004.03034-Results and Analysis-1",
"2004.03034-Results and Analysis-3"
]
] | [
"F1 score of best authors' model is 55.98 compared to BiLSTM and FastText that have F1 score slighlty higher than 46.61."
] | 147 |
1910.12618 | Textual Data for Time Series Forecasting | While ubiquitous, textual sources of information such as company reports, social media posts, etc. are hardly included in prediction algorithms for time series, despite the relevant information they may contain. In this work, openly accessible daily weather reports from France and the United-Kingdom are leveraged to predict time series of national electricity consumption, average temperature and wind-speed with a single pipeline. Two methods of numerical representation of text are considered, namely traditional Term Frequency - Inverse Document Frequency (TF-IDF) as well as our own neural word embedding. Using exclusively text, we are able to predict the aforementioned time series with sufficient accuracy to be used to replace missing data. Furthermore the proposed word embeddings display geometric properties relating to the behavior of the time series and context similarity between words. | {
"paragraphs": [
[
"Whether it is in the field of energy, finance or meteorology, accurately predicting the behavior of time series is nowadays of paramount importance for optimal decision making or profit. While the field of time series forecasting is extremely prolific from a research point-of-view, up to now it has narrowed its efforts on the exploitation of regular numerical features extracted from sensors, data bases or stock exchanges. Unstructured data such as text on the other hand remains underexploited for prediction tasks, despite its potentially valuable informative content. Empirical studies have already proven that textual sources such as news articles or blog entries can be correlated to stock exchange time series and have explanatory power for their variations BIBREF0, BIBREF1. This observation has motivated multiple extensive experiments to extract relevant features from textual documents in different ways and use them for prediction, notably in the field of finance. In Lavrenko et al. BIBREF2, language models (considering only the presence of a word) are used to estimate the probability of trends such as surges or falls of 127 different stock values using articles from Biz Yahoo!. Their results show that this text driven approach could be used to make profit on the market. One of the most conventional ways for text representation is the TF-IDF (Term Frequency - Inverse Document Frequency) approach. Authors have included such features derived from news pieces in multiple traditional machine learning algorithms such as support vector machines (SVM) BIBREF3 or logistic regression BIBREF4 to predict the variations of financial series again. An alternative way to encode the text is through latent Dirichlet allocation (LDA) BIBREF5. It assigns topic probabilities to a text, which can be used as inputs for subsequent tasks. This is for instance the case in Wang's aforementioned work (alongside TF-IDF). In BIBREF6, the authors used Reuters news encoded by LDA to predict if NASDAQ and Dow Jones closing prices increased or decreased compared to the opening ones. Their empirical results show that this approach was efficient to improve the prediction of stock volatility. More recently Kanungsukkasem et al. BIBREF7 introduced a variant of the LDA graphical model, named FinLDA, to craft probabilities that are specifically tailored for a financial time series prediction task (although their approach could be generalized to other ones). Their results showed that indeed performance was better when using probabilities from their alternative than those of the original LDA. Deep learning with its natural ability to work with text through word embeddings has also been used for time series prediction with text. Combined with traditional time series features, the authors of BIBREF8 derived sentiment features from a convolutional neural network (CNN) to reduce the prediction error of oil prices. Akita et al. BIBREF9 represented news articles through the use of paragraph vectors BIBREF10 in order to predict 10 closing stock values from the Nikkei 225. While in the case of financial time series the existence of specialized press makes it easy to decide which textual source to use, it is much more tedious in other fields. Recently in Rodrigues et al. BIBREF11, short description of events (such as concerts, sports matches, ...) are leveraged through a word embedding and neural networks in addition to more traditional features. 
Their experiments show that including the text can bring an improvement of up to 2% of root mean squared error compared to an approach without textual information. Although the presented studies conclude on the usefulness of text to improve predictions, they never thoroughly analyze which aspects of the text are of importance, keeping the models as black-boxes.",
"The field of electricity consumption is one where expert knowledge is broad. It is known that the major phenomena driving the load demand are calendar (time of the year, day of the week, ...) and meteorological. For instance generalized additive models (GAM) BIBREF12 representing the consumption as a sum of functions of the time of the year, temperature and wind speed (among others) typically yield less than 1.5% of relative error for French national electricity demand and 8% for local one BIBREF13, BIBREF14. Neural networks and their variants, with their ability to extract patterns from heterogeneous types of data have also obtained state-of-the-art results BIBREF15, BIBREF16, BIBREF17. However to our knowledge no exploratory work using text has been conducted yet. Including such data in electricity demand forecasting models would not only contribute to close the gap with other domains, but also help to understand better which aspects of text are useful, how the encoding of the text influences forecasts and to which extend a prediction algorithm can extract relevant information from unstructured data. Moreover the major drawback of all the aforementioned approaches is that they require meteorological data that may be difficult to find, unavailable in real time or expensive. Textual sources such as weather reports on the other hand are easy to find, usually available on a daily basis and free.",
"The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather report, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. Textual information is naturally more fuzzy than numerical one, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. Furthermore, the quality of our predictions of temperature and wind speed is satisfying enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. Our empirical results are consistent across encoding, methods and language, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction between previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series.",
"The rest of this paper is organized as follows. The following section introduces the two data sets used to conduct our study. Section 3 presents the different machine learning approaches used and how they were tuned. Section 4 highlights the main results of our study, while section 5 concludes this paper and gives insight on future possible work."
],
[
"In order to prove the consistency of our work, experiments have been conducted on two data sets, one for France and the other for the UK. In this section details about the text and time series data are given, as well as the major preprocessing steps."
],
[
"Three types of time series are considered in our work: national net electricity consumption (also referred as load or demand), national temperature and wind speed. The load data sets were retrieved on the websites of the respective grid operators, respectively RTE (Réseau et Transport d'Électricité) for France and National Grid for the UK. For France, the available data ranges from January the 1st 2007 to August the 31st 2018. The default temporal resolution is 30 minutes, but it is averaged to a daily one. For the UK, it is available from January the 1st 2006 to December the 31st 2018 with the same temporal resolution and thus averaging. Due to social factors such as energy policies or new usages of electricity (e.g. Electric Vehicles), the net consumption usually has a long-term trend (fig. FIGREF2). While for France it seems marginal (fig. FIGREF2), there is a strong decreasing trend for the United-Kingdom (fig. FIGREF2). Such a strong non-stationarity of the time series would cause problems for the forecasting process, since the learnt demand levels would differ significantly from the upcoming ones. Therefore a linear regression was used to approximate the decreasing trend of the net consumption in the UK. It is then subtracted before the training of the methods, and then re-added a posteriori for prediction.",
"As for the weather time series, they were extracted from multiple weather stations around France and the UK. The national average is obtained by combining the data from all stations with a weight proportional to the city population the station is located in. For France the stations' data is provided by the French meteorological office, Météo France, while the British ones are scrapped from stations of the National Oceanic and Atmospheric Administration (NOAA). Available on the same time span as the consumption, they usually have a 3 hours temporal resolution but are averaged to a daily one as well. Finally the time series were scaled to the range $[0,1]$ before the training phase, and re-scaled during prediction time."
],
[
"Our work aims at predicting time series using exclusively text. Therefore for both countries the inputs of all our models consist only of written daily weather reports. Under their raw shape, those reports take the form of PDF documents giving a short summary of the country's overall weather, accompanied by pressure, temperature, wind, etc. maps. Note that those reports are written a posteriori, although they could be written in a predictive fashion as well. The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span on the same period as the corresponding time series and given their daily nature, it yields a total of 4,261 and 4,748 documents respectively. An excerpt for each language may be found in tables TABREF6 and TABREF7. The relevant text was extracted from the PDF documents using the Python library PyPDF2.",
"As emphasized in many studies, preprocessing of the text can ease the learning of the methods and improve accuracy BIBREF18. Therefore the following steps are applied: removal of non-alphabetic characters, removal of stop-words and lowercasing. While it was often highlighted that word lemmatization and stemming improve results, initial experiments showed it was not the case for our study. This is probably due to the technical vocabulary used in both corpora pertaining to the field of meteorology. Already limited in size, the aforementioned preprocessing operations do not yield a significant vocabulary size reduction and can even lead to a loss of linguistic meaning. Finally, extremely frequent or rare words may not have high explanatory power and may reduce the different models' accuracy. That is why words appearing less than 7 times or in more than 40% of the (learning) corpus are removed as well. Figure FIGREF8 represents the distribution of the document lengths after preprocessing, while table TABREF11 gives descriptive statistics on both corpora. Note that the preprocessing steps do not heavily rely on the considered language: therefore our pipeline is easily adaptable for other languages."
],
[
"A major target of our work is to show the reports contain an intrinsic information relevant for time series, and that the predictive results do not heavily depend on the encoding of the text or the machine learning algorithm used. Therefore in this section we present the text encoding approaches, as well as the forecasting methods used with them."
],
[
"Machines and algorithms cannot work with raw text directly. Thus one major step when working with text is the choice of its numerical representation. In our work two significantly different encoding approaches are considered. The first one is the TF-IDF approach. It embeds a corpus of $N$ documents and $V$ words into a matrix $X$ of size $N \\times V$. As such, every document is represented by a vector of size $V$. For each word $w$ and document $d$ the associated coefficient $x_{d,w}$ represents the frequency of that word in that document, penalized by its overall frequency in the rest of the corpus. Thus very common words will have a low TF-IDF value, whereas specific ones which will appear often in a handful of documents will have a large TF-IDF score. The exact formula to calculate the TF-IDF value of word $w$ in document $d$ is:",
"where $f_{d,w}$ is the number of appearances of $w$ in $d$ adjusted by the length of $d$ and $\\#\\lbrace d: w \\in d \\rbrace $ is the number of documents in which the word $w$ appears. In our work we considered only individual words, also commonly referred as 1-grams in the field of natural language processing (NLP). The methodology can be easily extended to $n$-grams (groups of $n$ consecutive words), but initial experiments showed that it did not bring any significant improvement over 1-grams.",
"The second representation is a neural word embedding. It consists in representing every word in the corpus by a real-valued vector of dimension $q$. Such models are usually obtained by learning a vector representation from word co-occurrences in a very large corpus (typically hundred thousands of documents, such as Wikipedia articles for example). The two most popular embeddings are probably Google's Word2Vec BIBREF19 and Standford's GloVe BIBREF20. In the former, a neural network is trained to predict a word given its context (continuous bag of word model), whereas in the latter a matrix factorization scheme on the log co-occurences of words is applied. In any case, the very nature of the objective function allows the embedding models to learn to translate linguistic similarities into geometric properties in the vector space. For instance the vector $\\overrightarrow{king} - \\overrightarrow{man} + \\overrightarrow{woman}$ is expected to be very close to the vector $\\overrightarrow{queen}$. However in our case we want a vector encoding which is tailored for the technical vocabulary of our weather reports and for the subsequent prediction task. This is why we decided to train our own word embedding from scratch during the learning phase of our recurrent or convolutional neural network. Aside from the much more restricted size of our corpora, the major difference with the aforementioned embeddings is that in our case it is obtained by minimizing a squared loss on the prediction. In that framework there is no explicit reason for our representation to display any geometric structure. However as detailed in section SECREF36, our word vectors nonetheless display geometric properties pertaining to the behavior of the time series."
],
[
"Multiple machine learning algorithms were applied on top of the encoded textual documents. For the TF-IDF representation, the following approaches are applied: random forests (RF), LASSO and multilayer perceptron (MLP) neural networks (NN). We chose these algorithms combined to the TF-IDF representation due to the possibility of interpretation they give. Indeed, considering the novelty of this work, the understanding of the impact of the words on the forecast is of paramount importance, and as opposed to embeddings, TF-IDF has a natural interpretation. Furthermore the RF and LASSO methods give the possibility to interpret marginal effects and analyze the importance of features, and thus to find the words which affect the time series the most.",
"As for the word embedding, recurrent or convolutional neural networks (respectively RNN and CNN) were used with them. MLPs are not used, for they would require to concatenate all the vector representations of a sentence together beforehand and result in a network with too many parameters to be trained correctly with our number of available documents. Recall that we decided to train our own vector representation of words instead of using an already available one. In order to obtain the embedding, the texts are first converted into a sequence of integers: each word is given a number ranging from 1 to $V$, where $V$ is the vocabulary size (0 is used for padding or unknown words in the test set). One must then calculate the maximum sequence length $S$, and sentences of length shorter than $S$ are then padded by zeros. During the training process of the network, for each word a $q$ dimensional real-valued vector representation is calculated simultaneously to the rest of the weights of the network. Ergo a sentence of $S$ words is translated into a sequence of $S$ $q$-sized vectors, which is then fed into a recurrent neural unit. For both languages, $q=20$ seemed to yield the best results. In the case of recurrent units two main possibilities arise, with LSTM (Long Short-Term Memory) BIBREF21 and GRU (Gated Recurrent Unit) BIBREF22. After a few initial trials, no significant performance differences were noticed between the two types of cells. Therefore GRU were systematically used for recurrent networks, since their lower amount of parameters makes them easier to train and reduces overfitting. The output of the recurrent unit is afterwards linked to a fully connected (also referred as dense) layer, leading to the final forecast as output. The rectified linear unit (ReLU) activation in dense layers systematically gave the best results, except on the output layer where we used a sigmoid one considering the time series' normalization. In order to tone down overfitting, dropout layers BIBREF23 with probabilities of 0.25 or 0.33 are set in between the layers. Batch normalization BIBREF24 is also used before the GRU since it stabilized training and improved performance. Figure FIGREF14 represents the architecture of our RNN.",
"The word embedding matrix is therefore learnt jointly with the rest of the parameters of the neural network by minimization of the quadratic loss with respect to the true electricity demand. Note that while above we described the case of the RNN, the same procedure is considered for the case of the CNN, with only the recurrent layers replaced by a combination of 1D convolution and pooling ones. As for the optimization algorithms of the neural networks, traditional stochastic gradient descent with momentum or ADAM BIBREF25 together with a quadratic loss are used. All of the previously mentioned methods were coded with Python. The LASSO and RF were implemented using the library Scikit Learn BIBREF26, while Keras BIBREF27 was used for the neural networks."
],
[
"While most parameters are trained during the learning optimization process, all methods still involve a certain number of hyperparameters that must be manually set by the user. For instance for random forests it can correspond to the maximum depth of the trees or the fraction of features used at each split step, while for neural networks it can be the number of layers, neurons, the embedding dimension or the activation functions used. This is why the data is split into three sets:",
"The training set, using all data available up to the 31st of December 2013 (2,557 days for France and 2,922 for the UK). It is used to learn the parameters of the algorithms through mathematical optimization.",
"The years 2014 and 2015 serve as validation set (730 days). It is used to tune the hyperparameters of the different approaches.",
"All the data from January the 1st 2016 (974 days for France and 1,096 for the UK) is used as test set, on which the final results are presented.",
"Grid search is applied to find the best combination of values: for each hyperparameter, a range of values is defined, and all the possible combinations are successively tested. The one yielding the lowest RMSE (see section SECREF4) on the validation set is used for the final results on the test one. While relatively straightforward for RFs and the LASSO, the extreme number of possibilities for NNs and their extensive training time compelled us to limit the range of architectures possible. The hyperparameters are tuned per method and per country: ergo the hyperparameters of a given algorithm will be the same for the different time series of a country (e.g. the RNN architecture for temperature and load for France will be the same, but different from the UK one). Finally before application on the testing set, all the methods are re-trained from scratch using both the training and validation data."
],
[
"The goal of our experiments is to quantify how close one can get using textual data only when compared to numerical data. However the inputs of the numerical benchmark should be hence comparable to the information contained in the weather reports. Considering they mainly contain calendar (day of the week and month) as well as temperature and wind information, the benchmark of comparison is a random forest trained on four features only: the time of the year (whose value is 0 on January the 1st and 1 on December the 31st with a linear growth in between), the day of the week, the national average temperature and wind speed. The metrics of evaluation are the Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and the $R^2$ coefficient given by:",
"where $T$ is the number of test samples, $y_t$ and $\\hat{y}_t$ are respectively the ground truth and the prediction for the document of day $t$, and $\\overline{y}$ is the empirical average of the time series over the test sample. A known problem with MAPE is that it unreasonably increases the error score for values close to 0. While for the load it isn't an issue at all, it can be for the meteorological time series. Therefore for the temperature, the MAPE is calculated only when the ground truth is above the 5% empirical quantile. Although we aim at achieving the highest accuracy possible, we focus on the interpretability of our models as well."
],
[
"Many words are obviously irrelevant to the time series in our texts. For instance the day of the week, while playing a significant role for the load demand, is useless for temperature or wind. Such words make the training harder and may decrease the accuracy of the prediction. Therefore a feature selection procedure similar to BIBREF28 is applied to select a subset of useful features for the different algorithms, and for each type of time series. Random forests are naturally able to calculate feature importance through the calculation of error increase in the out-of-bag (OOB) samples. Therefore the following process is applied to select a subset of $V^*$ relevant words to keep:",
"A RF is trained on the whole training & validation set. The OOB feature importance can thus be calculated.",
"The features are then successively added to the RF in decreasing order of feature importance.",
"This process is repeated $B=10$ times to tone down the randomness. The number $V^*$ is then set to the number of features giving the highest median OOB $R^2$ value.",
"The results of this procedure for the French data is represented in figure FIGREF24. The best median $R^2$ is achieved for $V^* = 52$, although one could argue that not much gain is obtained after 36 words. The results are very similar for the UK data set, thus for the sake of simplicity the same value $V^* = 52$ is used. Note that the same subset of words is used for all the different forecasting models, which could be improved in further work using other selection criteria (e.g. mutual information, see BIBREF29). An example of normalized feature importance is given in figure. FIGREF32."
],
[
"Note that most of the considered algorithms involve randomness during the training phase, with the subsampling in the RFs or the gradient descent in the NNs for instance. In order to tone it down and to increase the consistency of our results, the different models are run $B=10$ times. The results presented hereafter correspond to the average and standard-deviation on those runs. The RF model denoted as \"sel\" is the one with the reduced number of features, whereas the other RF uses the full vocabulary. We also considered an aggregated forecaster (abridged Agg), consisting of the average of the two best individual ones in terms of RMSE. All the neural network methods have a reduced vocabulary size $V^*$. The results for the French and UK data are respectively given by tables TABREF26 and TABREF27.",
"Our empirical results show that for the electricity consumption prediction task, the order of magnitude of the relative error is around 5%, independently of the language, encoding and machine learning method, thus proving the intrinsic value of the information contained in the textual documents for this time series. As expected, all text based methods perform poorer than when using explicitly numerical input features. Indeed, despite containing relevant information, the text is always more fuzzy and less precise than an explicit value for the temperature or the time of the year for instance. Again the aim of this work is not to beat traditional methods with text, but quantifying how close one can come to traditional approaches when using text exclusively. As such achieving less than 5% of MAPE was nonetheless deemed impressive by expert electricity forecasters. Feature selection brings significant improvement in the French case, although it does not yield any improvement in the English one. The reason for this is currently unknown. Nevertheless the feature selection procedure also helps the NNs by dramatically reducing the vocabulary size, and without it the training of the networks was bound to fail. While the errors accross methods are roughly comparable and highlight the valuable information contained within the reports, the best method nonetheless fluctuates between languages. Indeed in the French case there is a hegemony of the NNs, with the embedding RNN edging the MLP TF-IDF one. However for the UK data set the RFs yield significantly better results on the test set than the NNs. This inversion of performance of the algorithms is possibly due to a change in the way the reports were written by the Met Office after August 2017, since the results of the MLP and RNN on the validation set (not shown here) were satisfactory and better than both RFs. For the two languages both the CNN and the LASSO yielded poor results. For the former, it is because despite grid search no satisfactory architecture was found, whereas the latter is a linear approach and was used more for interpretation purposes than strong performance. Finally the naive aggregation of the two best experts always yields improvement, especially for the French case where the two different encodings are combined. This emphasises the specificity of the two representations leading to different types of errors. An example of comparison between ground truth and forecast for the case of electricity consumption is given for the French language with fig. FIGREF29, while another for temperature may be found in the appendix FIGREF51. The sudden \"spikes\" in the forecast are due to the presence of winter related words in a summer report. This is the case when used in comparisons, such as \"The flood will be as severe as in January\" in a June report and is a limit of our approach. Finally, the usual residual $\\hat{\\varepsilon }_t = y_t - \\hat{y}_t$ analyses procedures were applied: Kolmogorov normality test, QQplots comparaison to gaussian quantiles, residual/fit comparison... While not thoroughly gaussian, the residuals were close to normality nonetheless and displayed satisfactory properties such as being generally independent from the fitted and ground truth values. Excerpts of this analysis for France are given in figure FIGREF52 of the appendix. The results for the temperature and wind series are given in appendix. 
Considering that they have a more stochastic behavior and are thus more difficult to predict, the order of magnitude of the errors differ (the MAPE being around 15% for temperature for instance) but globally the same observations can be made."
],
[
"While accuracy is the most relevant metric to assess forecasts, interpretability of the models is of paramount importance, especially in the field of professional electricity load forecasting and considering the novelty of our work. Therefore in this section we discuss the properties of the RF and LASSO models using the TF-IDF encoding scheme, as well as the RNN word embedding."
],
[
"One significant advantage of the TF-IDF encoding when combined with random forests or the LASSO is that it is possible to interpret the behavior of the models. For instance, figure FIGREF32 represents the 20 most important features (in the RF OOB sense) for both data sets when regressing over electricity demand data. As one can see, the random forest naturally extracts calendar information contained in the weather reports, since months or week-end days are among the most important ones. For the former, this is due to the periodic behavior of electricity consumption, which is higher in winter and lower in summer. This is also why characteristic phenomena of summer and winter, such as \"thunderstorms\", \"snow\" or \"freezing\" also have a high feature importance. The fact that August has a much more important role than July also concurs with expert knowledge, especially for France: indeed it is the month when most people go on vacations, and thus when the load drops the most. As for the week-end names, it is due to the significantly different consumer behavior during Saturdays and especially Sundays when most of the businesses are closed and people are usually at home. Therefore the relevant words selected by the random forest are almost all in agreement with expert knowledge.",
"We also performed the analysis of the relevant words for the LASSO. In order to do that, we examined the words $w$ with the largest associated coefficients $\\beta _w$ (in absolute value) in the regression. Since the TF-IDF matrix has positive coefficients, it is possible to interpret the sign of the coefficient $\\beta _w$ as its impact on the time series. For instance if $\\beta _w > 0$ then the presence of the word $w$ causes a rise the time series (respectively if $\\beta _w < 0$, it entails a decline). The results are plotted fig. FIGREF35 for the the UK. As one can see, the winter related words have positive coefficients, and thus increase the load demand as expected whereas the summer related ones decrease it. The value of the coefficients also reflects the impact on the load demand. For example January and February have the highest and very similar values, which concurs with the similarity between the months. Sunday has a much more negative coefficient than Saturday, since the demand significantly drops during the last day of the week. The important words also globally match between the LASSO and the RF, which is a proof of the consistency of our results (this is further explored afterwards in figure FIGREF43). Although not presented here, the results are almost identical for the French load, with approximately the same order of relevancy. The important words logically vary in function of the considered time series, but are always coherent. For instance for the wind one, terms such as \"gales\", \"windy\" or \"strong\" have the highest positive coefficients, as seen in the appendix figure FIGREF53. Those results show that a text based approach not only extracts the relevant information by itself, but it may eventually be used to understand which phenomena are relevant to explain the behavior of a time series, and to which extend."
],
[
"Word vector embeddings such as Word2Vec and GloVe are known for their vectorial properties translating linguistic ones. However considering the objective function of our problem, there was no obvious reason for such attributes to appear in our own. Nevertheless for both languages we conducted an analysis of the geometric properties of our embedding matrix. We investigated the distances between word vectors, the relevant metric being the cosine distance given by:",
"where $\\overrightarrow{w_1}$ and $\\overrightarrow{w_2}$ are given word vectors. Thus a cosine distance lower than 1 means similarity between word vectors, whereas a greater than 1 corresponds to opposition.",
"The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked of consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The two last rows correspond to words we deemed important to check the distance with (an antagonistic one or relevant one not in the top 9 for instance).",
"The results of the experiments are very similar for both languages again. Indeed, the words are globally embedded in the vector space by topic: winter related words such as \"January\" (\"janvier\"), \"February\" (\"février\"), \"snow\" (\"neige\"), \"freezing\" (\"glacial\") are close to each other and almost opposite to summer related ones such as \"July\" (\"juillet\"), \"August\" (\"août\"), \"hot\" (\"chaud\"). For both cases the week days Monday (\"lundi\") to Friday (\"vendredi\") are grouped very closely to each other, while significantly separated from the week-end ones \"Saturday\" (\"samedi\") and \"Sunday\" (\"dimanche\"). Despite these observations, a few seemingly unrelated words enter the lists of top 10, especially for the English case (such as \"pressure\" or \"dusk\" for \"February\"). In fact the French language embedding seems of better quality, which is perhaps linked to the longer length of the French reports in average. This issue could probably be addressed with more data. Another observation made is that the importance of a word $w$ seems related to its euclidean norm in the embedding space ${\\overrightarrow{w}}_2$. For both languages the list of the 20 words with the largest norm is given fig. FIGREF40. As one can see, it globally matches the selected ones from the RF or the LASSO (especially for the French language), although the order is quite different. This is further supported by the Venn diagram of common words among the top 50 ones for each word selection method represented in figure FIGREF43 for France. Therefore this observation could also be used as feature selection procedure for the RNN or CNN in further work.",
"In order to achieve a global view of the embeddings, the t-SNE algorithm BIBREF30 is applied to project an embedding matrix into a 2 dimensional space, for both languages. The observations for the few aforementioned words are confirmed by this representation, as plotted in figure FIGREF44. Thematic clusters can be observed, roughly corresponding to winter, summer, week-days, week-end days for both languages. Globally summer and winter seem opposed, although one should keep in mind that the t-SNE representation does not preserve the cosine distance. The clusters of the French embedding appear much more compact than the UK one, comforting the observations made when explicitly calculating the cosine distances."
],
[
"In this study, a novel pipeline to predict three types of time series using exclusively a textual source was proposed. Making use of publicly available daily weather reports, we were able to predict the electricity consumption with less than 5% of MAPE for both France and the United-Kingdom. Moreover our average national temperature and wind speed predictions displayed sufficient accuracy to be used to replace missing data or as first approximation in traditional models in case of unavailability of meteorological features.",
"The texts were encoded numerically using either TF-IDF or our own neural word embedding. A plethora of machine learning algorithms such as random forests or neural networks were applied on top of those representations. Our results were consistent over language, numerical representation of the text and prediction algorithm, proving the intrinsic value of the textual sources for the three considered time series. Contrarily to previous works in the field of textual data for time series forecasting, we went in depth and quantified the impact of words on the variations of the series. As such we saw that all the algorithms naturally extract calendar and meteorological information from the texts, and that words impact the time series in the expected way (e.g. winter words increase the consumption and summer ones decrease it). Despite being trained on a regular quadratic loss, our neural word embedding spontaneously builds geometric properties. Not only does the norm of a word vector reflect its significance, but the words are also grouped by topic with for example winter, summer or day of the week clusters.",
"Note that this study was a preliminary work on the use of textual information for time series prediction, especially electricity demand one. The long-term goal is to include multiple sources of textual information to improve the accuracy of state-of-the-art methods or to build a text based forecaster which can be used to increase the diversity in a set of experts for electricity consumption BIBREF31. However due to the redundancy of the information of the considered weather reports with meteorological features, it may be necessary to consider alternative textual sources. The use of social media such as Facebook, Twitter or Instagram may give interesting insight and will therefore be investigated in future work."
],
[
"Additional results for the prediction tasks on temperature and wind speed can be found in tables TABREF47 to TABREF50. An example of forecast for the French temperature is given in figure FIGREF51.",
"While not strictly normally distributed, the residuals for the French electricity demand display an acceptable behavior. This holds also true for the British consumption, and both temperature time series, but is of lesser quality for the wind one.",
"The the UK wind LASSO regression, the words with the highest coefficients $\\beta _w$ are indeed related to strong wind phenomena, whereas antagonistic ones such as \"fog\" or \"mist\" have strongly negative ones as expected (fig. FIGREF53).",
"For both languages we represented the evolution of the (normalized) losses for the problem of load regression in fig. FIGREF54. The aspect is a typical one, with the validation loss slightly above the training one. The slightly erratic behavior of the former one is possibly due to a lack of data to train the embeddings.",
"The cosine distances for three other major words and for both corpora have been calculated as well. The results are given in tables TABREF57 and TABREF58. For both languages, the three summer months are grouped together, and so are the two week-end days. However again the results are less clear for the English language. They are especially mediocre for \"hot\", considering that only \"warm\" seems truly relevant and that \"August\" for instance is quite far away. For the French language instead of \"hot\" the distances to \"thunderstorms\" were calculated. The results are quite satisfactory, with \"orageux\"/\"orageuse\" (\"thundery\") coming in the two first places and related meteorological phenomena (\"cumulus\" and \"grêle\", meaning \"hail\") relatively close as well. For the French case, Saturday and Sunday are very close to summer related words. This observation probably highlights the fact that the RNN groups load increasing and decreasing words in opposite parts of the embedding space."
]
],
"section_name": [
"Introduction",
"Presentation of the data",
"Presentation of the data ::: Time Series",
"Presentation of the data ::: Text",
"Modeling and forecasting framework",
"Modeling and forecasting framework ::: Numerical Encoding of the Text",
"Modeling and forecasting framework ::: Machine Learning Algorithms",
"Modeling and forecasting framework ::: Hyperparameter Tuning",
"Experiments",
"Experiments ::: Feature selection",
"Experiments ::: Main results",
"Experiments ::: Interpretability of the models",
"Experiments ::: Interpretability of the models ::: TF-IDF representation",
"Experiments ::: Interpretability of the models ::: Vector embedding representation",
"Conclusion",
""
]
} | {
"answers": [
{
"annotation_id": [
"e6c530042231f1a95608b2495514fe8b5ad08d28"
],
"answer": [
{
"evidence": [
"Our work aims at predicting time series using exclusively text. Therefore for both countries the inputs of all our models consist only of written daily weather reports. Under their raw shape, those reports take the form of PDF documents giving a short summary of the country's overall weather, accompanied by pressure, temperature, wind, etc. maps. Note that those reports are written a posteriori, although they could be written in a predictive fashion as well. The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span on the same period as the corresponding time series and given their daily nature, it yields a total of 4,261 and 4,748 documents respectively. An excerpt for each language may be found in tables TABREF6 and TABREF7. The relevant text was extracted from the PDF documents using the Python library PyPDF2."
],
"extractive_spans": [],
"free_form_answer": "4,261 days for France and 4,748 for the UK",
"highlighted_evidence": [
"The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span on the same period as the corresponding time series and given their daily nature, it yields a total of 4,261 and 4,748 documents respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"5aa11104f6641837a83ea424f900ee683d194b79"
],
"answer": [
{
"evidence": [
"The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked of consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The two last rows correspond to words we deemed important to check the distance with (an antagonistic one or relevant one not in the top 9 for instance)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"f704fdce4c0a29cd04b3bd36b5062fd44e16c965"
],
"answer": [
{
"evidence": [
"The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked of consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The two last rows correspond to words we deemed important to check the distance with (an antagonistic one or relevant one not in the top 9 for instance)."
],
"extractive_spans": [],
"free_form_answer": "Winter and summer words formed two separate clusters. Week day and week-end day words also formed separate clusters.",
"highlighted_evidence": [
"For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"08426e8d76bfe140f762a3949db74028e5b14163"
],
"answer": [
{
"evidence": [
"The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather report, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. Textual information is naturally more fuzzy than numerical one, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. Furthermore, the quality of our predictions of temperature and wind speed is satisfying enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. Our empirical results are consistent across encoding, methods and language, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction between previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series."
],
"extractive_spans": [],
"free_form_answer": "Relative error is less than 5%",
"highlighted_evidence": [
"With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How big is dataset used for training/testing?",
"Is there any example where geometric property is visible for context similarity between words?",
"What geometric properties do embeddings display?",
"How accurate is model trained on text exclusively?"
],
"question_id": [
"07c59824f5e7c5399d15491da3543905cfa5f751",
"77f04cd553df691e8f4ecbe19da89bc32c7ac734",
"728a55c0f628f2133306b6bd88af00eb54017b12",
"d5498d16e8350c9785782b57b1e5a82212dbdaad"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Net electricity consumption (Load) over time.",
"Figure 2: Word counts for the two corpora after preprocessing.",
"Table 3: Descriptive analysis of the two corpora (after preprocessing)",
"Figure 3: Structure of our RNN. Dropout and batch normalization are not represented.",
"Figure 4: Evolution of the OOB R2 during the selection procedure.",
"Table 4: Forecast errors on the net load for the French Dataset.",
"Table 5: Forecast errors on the net load for the British Dataset.",
"Table 6: Best (individual, in terms of RMSE) result for each of the considered time series.",
"Figure 5: Overlapping of prediction and real load (France)",
"Figure 6: RF feature importance over the B = 10 runs.",
"Figure 7: Coefficients βw in the british load LASSO regression.",
"Table 7: Closest words (in the cosine sense) to ”february”,”snow” and ”tuesday” for the UK",
"Table 8: Closest words (in the cosine sense) to ”february”,”snow” and ”tuesday” for France",
"Figure 8: Word vector log-norm over B = 10.",
"Figure 9: Venn diagram of common words among the top 50 ones for each selection procedure (France).",
"Figure 10: 2D t-SNE projections of the embedding matrix for both languages.",
"Table A.9: Forecast errors on the national temperature for France.",
"Table A.10: Forecast errors on the national wind for France.",
"Table A.11: Forecast errors on the national temperature for Great-Britain.",
"Table A.12: Forecast errors on the national wind for Great-Britain.",
"Figure A.11: Overlapping of prediction and national Temperature (France)",
"Figure A.12: Residual analysis of the French aggregated predictor.",
"Figure A.13: Coefficients βw in the British wind LASSO regression.",
"Figure A.14: Loss (Mean Squared Error) evolution of the electricity demand RNN for both languages.",
"Table A.13: Closest words (in the cosine sense) to ”August”,”Sunday” and ”Hot” for the UK",
"Table A.14: Closest words (in the cosine sense) to ”August”,”Sunday and ”thunderstorms” for the France"
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png",
"5-Table3-1.png",
"7-Figure3-1.png",
"9-Figure4-1.png",
"9-Table4-1.png",
"10-Table5-1.png",
"11-Table6-1.png",
"11-Figure5-1.png",
"12-Figure6-1.png",
"13-Figure7-1.png",
"14-Table7-1.png",
"14-Table8-1.png",
"15-Figure8-1.png",
"15-Figure9-1.png",
"16-Figure10-1.png",
"17-TableA.9-1.png",
"17-TableA.10-1.png",
"17-TableA.11-1.png",
"17-TableA.12-1.png",
"18-FigureA.11-1.png",
"18-FigureA.12-1.png",
"19-FigureA.13-1.png",
"19-FigureA.14-1.png",
"20-TableA.13-1.png",
"20-TableA.14-1.png"
]
} | [
"How big is dataset used for training/testing?",
"What geometric properties do embeddings display?",
"How accurate is model trained on text exclusively?"
] | [
[
"1910.12618-Presentation of the data ::: Text-0"
],
[
"1910.12618-Experiments ::: Interpretability of the models ::: Vector embedding representation-2"
],
[
"1910.12618-Introduction-2"
]
] | [
"4,261 days for France and 4,748 for the UK",
"Winter and summer words formed two separate clusters. Week day and week-end day words also formed separate clusters.",
"Relative error is less than 5%"
] | 148 |
1911.12569 | Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis | In this paper, we propose a two-layered multi-task attention based neural network that performs sentiment analysis through emotion analysis. The proposed approach is based on Bidirectional Long Short-Term Memory and uses Distributional Thesaurus as a source of external knowledge to improve the sentiment and emotion prediction. The proposed system has two levels of attention to hierarchically build a meaningful representation. We evaluate our system on the benchmark dataset of SemEval 2016 Task 6 and also compare it with the state-of-the-art systems on Stance Sentiment Emotion Corpus. Experimental results show that the proposed system improves the performance of sentiment analysis by 3.2 F-score points on SemEval 2016 Task 6 dataset. Our network also boosts the performance of emotion analysis by 5 F-score points on Stance Sentiment Emotion Corpus. | {
"paragraphs": [
[
"The emergence of social media sites with limited character constraint has ushered in a new style of communication. Twitter users within 280 characters per tweet share meaningful and informative messages. These short messages have a powerful impact on how we perceive and interact with other human beings. Their compact nature allows them to be transmitted efficiently and assimilated easily. These short messages can shape people's thought and opinion. This makes them an interesting and important area of study. Tweets are not only important for an individual but also for the companies, political parties or any organization. Companies can use tweets to gauge the performance of their products and predict market trends BIBREF0. The public opinion is particularly interesting for political parties as it gives them an idea of voter's inclination and their support. Sentiment and emotion analysis can help to gauge product perception, predict stock prices and model public opinions BIBREF1.",
"Sentiment analysis BIBREF2 is an important area of research in natural language processing (NLP) where we automatically determine the sentiments (positive, negative, neutral). Emotion analysis focuses on the extraction of predefined emotion from documents. Discrete emotions BIBREF3, BIBREF4 are often classified into anger, anticipation, disgust, fear, joy, sadness, surprise and trust. Sentiments and emotions are subjective and hence they are understood similarly and often used interchangeably. This is also mostly because both emotions and sentiments refer to experiences that result from the combined influences of the biological, the cognitive, and the social BIBREF5. However, emotions are brief episodes and are shorter in length BIBREF6, whereas sentiments are formed and retained for a longer period. Moreover, emotions are not always target-centric whereas sentiments are directed. Another difference between emotion and sentiment is that a sentence or a document may contain multiple emotions but a single overall sentiment.",
"Prior studies show that sentiment and emotion are generally tackled as two separate problems. Although sentiment and emotion are not exactly the same, they are closely related. Emotions, like joy and trust, intrinsically have an association with a positive sentiment. Similarly, anger, disgust, fear and sadness have a negative tone. Moreover, sentiment analysis alone is insufficient at times in imparting complete information. A negative sentiment can arise due to anger, disgust, fear, sadness or a combination of these. Information about emotion along with sentiment helps to better understand the state of the person or object. The close association of emotion with sentiment motivates us to build a system for sentiment analysis using the information obtained from emotion analysis.",
"In this paper, we put forward a robust two-layered multi-task attention based neural network which performs sentiment analysis and emotion analysis simultaneously. The model uses two levels of attention - the first primary attention builds the best representation for each word using Distributional Thesaurus and the secondary attention mechanism creates the final sentence level representation. The system builds the representation hierarchically which gives it a good intuitive working insight. We perform several experiments to evaluate the usefulness of primary attention mechanism. Experimental results show that the two-layered multi-task system for sentiment analysis which uses emotion analysis as an auxiliary task improves over the existing state-of-the-art system of SemEval 2016 Task 6 BIBREF7.",
"The main contributions of the current work are two-fold: a) We propose a novel two-layered multi-task attention based system for joint sentiment and emotion analysis. This system has two levels of attention which builds a hierarchical representation. This provides an intuitive explanation of its working; b) We empirically show that emotion analysis is relevant and useful in sentiment analysis. The multi-task system utilizing fine-grained information of emotion analysis performs better than the single task system of sentiment analysis."
],
[
"A survey of related literature reveals the use of both classical and deep-learning approaches for sentiment and emotion analysis. The system proposed in BIBREF8 relied on supervised statistical text classification which leveraged a variety of surface form, semantic, and sentiment features for short informal texts. A Support Vector Machine (SVM) based system for sentiment analysis was used in BIBREF9, whereas an ensemble of four different sub-systems for sentiment analysis was proposed in BIBREF10. It comprised of Long Short-Term Memory (LSTM) BIBREF11, Gated Recurrent Unit (GRU) BIBREF12, Convolutional Neural Network (CNN) BIBREF13 and Support Vector Regression (SVR) BIBREF14. BIBREF15 reported the results for emotion analysis using SVR, LSTM, CNN and Bi-directional LSTM (Bi-LSTM) BIBREF16. BIBREF17 proposed a lexicon based feature extraction for emotion text classification. A rule-based approach was adopted by BIBREF18 to extract emotion-specific semantics. BIBREF19 used a high-order Hidden Markov Model (HMM) for emotion detection. BIBREF20 explored deep learning techniques for end-to-end trainable emotion recognition. BIBREF21 proposed a multi-task learning model for fine-grained sentiment analysis. They used ternary sentiment classification (negative, neutral, positive) as an auxiliary task for fine-grained sentiment analysis (very-negative, negative, neutral, positive, very-positive). A CNN based system was proposed by BIBREF22 for three phase joint multi-task training. BIBREF23 presented a multi-task learning based model for joint sentiment analysis and semantic embedding learning tasks. BIBREF24 proposed a multi-task setting for emotion analysis based on a vector-valued Gaussian Process (GP) approach known as coregionalisation BIBREF25. A hierarchical document classification system based on sentence and document representation was proposed by BIBREF26. An attention framework for sentiment regression is described in BIBREF27. BIBREF28 proposed a DeepEmoji system based on transfer learning for sentiment, emotion and sarcasm detection through emoji prediction. However, the DeepEmoji system treats these independently, one at a time.",
"Our proposed system differs from the above works in the sense that none of these works addresses the problem of sentiment and emotion analysis concurrently. Our empirical analysis shows that performance of sentiment analysis is boosted significantly when this is jointly performed with emotion analysis. This may be because of the fine-grained characteristics of emotion analysis that provides useful evidences for sentiment analysis."
],
[
"We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections."
],
[
"Recurrent Neural Networks (RNN) are a class of networks which take sequential input and computes a hidden state vector for each time step. The current hidden state vector depends on the current input and the previous hidden state vector. This makes them good for handling sequential data. However, they suffer from a vanishing or exploding gradient problem when presented with long sequences. The gradient for back-propagating error either reduces to a very small number or increases to a very high value which hinders the learning process. Long Short Term Memory (LSTM) BIBREF11, a variant of RNN solves this problem by the gating mechanisms. The input, forget and output gates control the information flow.",
"BiLSTM is a special type of LSTM which takes into account the output of two LSTMs - one working in the forward direction and one working in the backward direction. The presence of contextual information for both past and future helps the BiLSTM to make an informed decision. The concatenation of a hidden state vectors $\\overrightarrow{h_t}$ of the forward LSTM and $\\overleftarrow{h_t}$ of the backward LSTM at any time step t provides the complete information. Therefore, the output of the BiLSTM at any time step t is $h_t$ = [$\\overrightarrow{h_t}$, $\\overleftarrow{h_t}$]. The output of the BiLSTM is shared between the main task (Sentiment Analysis) and the auxiliary task (Emotion Analysis)."
],
[
"The word level attention (primary attention) mechanism gives the model a flexibility to represent each word for each task differently. This improves the word representation as the model chooses the best representation for each word for each task. A Distributional Thesaurus (DT) identifies words that are semantically similar, based on whether they tend to occur in a similar context. It provides a word expansion list for words based on their contextual similarity. We use the top-4 words for each word as their candidate terms. We only use the top-4 words for each word as we observed that the expansion list with more words started to contain the antonyms of the current word which empirically reduced the system performance. Word embeddings of these four candidate terms and the hidden state vector $h_t$ of the input word are fed to the primary attention mechanism. The primary attention mechanism finds the best attention coefficient for each candidate term. At each time step $t$ we get V($x_t$) candidate terms for each input $x_t$ with $v_i$ being the embedding for each term (Distributional Thesaurus and word embeddings are described in the next section). The primary attention mechanism assigns an attention coefficient to each of the candidate terms having the index $i$ $\\in $ V($x_t$):",
"where $W_w$ and $b_{w}$ are jointly learned parameters.",
"Each embedding of the candidate term is weighted with the attention score $\\alpha _{ti}$ and then summed up. This produces $m_{t}$, the representation for the current input $x_{t}$ obtained from the Distributional Thesaurus using the candidate terms.",
"Finally, $m_{t}$ and $h_{t}$ are concatenated to get $\\widehat{h_{t}}$, the final output of the primary attention mechanism."
],
[
"The sentence attention (secondary attention) part focuses on each word of the sentence and assigns the attention coefficients. The attention coefficients are assigned on the basis of words' importance and their contextual relevance. This helps the model to build the overall sentence representation by capturing the context while weighing different word representations individually. The final sentence representation is obtained by multiplying each word vector representation with their attention coefficient and summing them over. The attention coefficient $\\alpha _t$ for each word vector representation and the sentence representation $\\widehat{H}$ are calculated as:",
"where $W_s$ and $b_{s}$ are parameters to be learned.",
"$\\widehat{H}$ denotes the sentence representation for sentiment analysis. Similarly, we calculate $\\bar{H}$ which represents the sentence for emotion classification. The system has the flexibility to compute different representations for sentiment and emotion analysis both."
],
[
"The final outputs for both sentiment and emotion analysis are computed by feeding $\\widehat{H}$ and $\\bar{H}$ to two different one-layer feed forward neural networks. For our task, the feed forward network for sentiment analysis has two output units, whereas the feed forward network for emotion analysis has eight output nodes performing multi-label classification."
],
[
"Distributional Thesaurus (DT) BIBREF31 ranks words according to their semantic similarity. It is a resource which produces a list of words in decreasing order of their similarity for each word. We use the DT to expand each word of the sentence. The top-4 words serve as the candidate terms for each word. For example, the candidate terms for the word good are: great, nice awesome and superb. DT offers the primary attention mechanism external knowledge in the form of candidate terms. It assists the system to perform better when presented with unseen words during testing as the unseen words could have been a part of the DT expansion list. For example, the system may not come across the word superb during training but it can appear in the test set. Since the system has already seen the word superb in the DT expansion list of the word good, it can handle this case efficiently. This fact is established by our evaluation results as the model performs better when the DT expansion and primary attentions are a part of the final multi-task system."
],
[
"Word embeddings represent words in a low-dimensional numerical form. They are useful for solving many NLP problems. We use the pre-trained 300 dimensional Google Word2Vec BIBREF32 embeddings. The word embedding for each word in the sentence is fed to the BiLSTM network to get the current hidden state. Moreover, the primary attention mechanism is also applied to the word embeddings of the candidate terms for the current word."
],
[
"In this section we present the details of the datasets used for the experiments, results that we obtain and the necessary analysis."
],
[
"We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and Stance Sentiment Emotion Corpus (SSEC) BIBREF15. The SSEC corpus is an annotation of the SemEval 2016 Task 6 corpus with emotion labels. The re-annotation of the SemEval 2016 Task 6 corpus helps to bridge the gap between the unavailability of a corpus with sentiment and emotion labels. The SemEval 2016 corpus contains tweets which are classified into positive, negative or other. It contains 2,914 training and 1,956 test instances. The SSEC corpus is annotated with anger, anticipation, disgust, fear, joy, sadness, surprise and trust labels. Each tweet could belong to one or more emotion classes and one sentiment class. Table TABREF15 shows the data statistics of SemEval 2016 task 6 and SSEC which are used for sentiment and emotion analysis, respectively."
],
[
"The SemEval 2016 task 6 corpus contains tweets from Twitter. Since the tweets are derived from an environment with the constraint on the number of characters, there is an inherent problem of word concatenation, contractions and use of hashtags. Example: #BeautifulDay, we've, etc. Usernames and URLs do not impart any sentiment and emotion information (e.g. @John). We use the Python package ekphrasis BIBREF33 for handling these situations. Ekphrasis helps to split the concatenated words into individual words and expand the contractions. For example, #BeautifulDay to # Beautiful Day and we've to we have. We replace usernames with $<$user$>$, number with $<number>$ and URLs with $<$url$>$ token."
],
[
"We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The three architectures correspond to BiLSTM based systems without primary attention i.e. only with secondary attention for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers form a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis."
],
[
"We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.",
"The primary attention mechanism plays a key role in the overall system as it improves the score of both sentiment and emotion analysis in both single task as well as multi-task systems. The use of primary attention improves the performance of single task systems for sentiment and emotion analysis by 2.21 and 1.72 points, respectively.Similarly, when sentiment and emotion analysis are jointly performed the primary attention mechanism improves the score by 0.93 and 2.42 points for sentiment and emotion task, respectively. To further measure the usefulness of the primary attention mechanism and the Distributional Thesaurus, we remove it from the systems S2, E2, and M2 to get the systems S1, E1, and M1. In all the cases, with the removal of primary attention mechanism, the performance drops. This is clearly illustrated in Figure FIGREF21. These observations indicate that the primary attention mechanism is an important component of the two-layered multi-task attention based network for sentiment analysis. We also perform t-test BIBREF40 for computing statistical significance of the obtained results from the final two-layered multi-task system M2 for sentiment analysis by calculating the p-values and observe that the performance gain over M1 is significant with p-value = 0.001495. Similarly, we perform the statistical significance test for each emotion class. The p-values for anger, anticipation, fear, disgust, joy, sadness, surprise and trust are 0.000002, 0.000143, 0.00403, 0.000015, 0.004607, 0.069, 0.000001 and 0.000001, respectively. These results provide a good indication of statistical significance.",
"Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 task 6 by 3.2 F-score points for sentiment analysis.",
"We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise.",
"Experimental results indicate that the multi-task system which uses fine-grained information of emotion analysis helps to boost the performance of sentiment analysis. The system M1 comprises of the system S1 performing the main task (sentiment analysis) with E1 undertaking the auxiliary task (emotion analysis). Similarly, the system M2 is made up of S2 and E2 where S2 performs the main task (sentiment analysis) and E2 commits to the auxiliary task (emotion analysis). We observe that in both the situations, the auxiliary task, i.e. emotional information increases the performance of the main task, i.e. sentiment analysis when these two are jointly performed. Experimental results help us to establish the fact that emotion analysis benefits sentiment analysis. The implicit sentiment attached to the emotion words assists the multi-task system. Emotion such as joy and trust are inherently associated with a positive sentiment whereas, anger, disgust, fear and sadness bear a negative sentiment. Figure FIGREF21 illustrates the performance of various models for sentiment analysis.",
"As a concrete example which justifies the utility of emotion analysis in sentiment analysis is shown below.",
"@realMessi he is a real sportsman and deserves to be the skipper.",
"The gold labels for the example are anticipation, joy and trust emotion with a positive sentiment. Our system S2 (single task system for sentiment analysis with primary and secondary attention) had incorrectly labeled this example with a negative sentiment and the E2 system (single task system with both primary and secondary attention for emotion analysis) had tagged it with anticipation and joy only. However, M2 i.e. the multi-task system for joint sentiment and emotion analysis had correctly classified the sentiment as positive and assigned all the correct emotion tags. It predicted the trust emotion tag, in addition to anticipation and joy (which were predicted earlier by E2). This helped M2 to correctly identify the positive sentiment of the example. The presence of emotional information helped the system to alter its sentiment decision (negative by S2) as it had better understanding of the text.",
"A sentiment directly does not invoke a particular emotion always and a sentiment can be associated with more than one emotion. However, emotions like joy and trust are associated with positive sentiment mostly whereas, anger, disgust and sadness are associated with negative sentiment particularly. This might be the reason of the extra sentiment information not helping the multi-task system for emotion analysis and hence, a decreased performance for emotion analysis in the multi-task setting."
],
[
"We perform quantitative error analysis for both sentiment and emotion for the M2 model. Table TABREF23 shows the confusion matrix for sentiment analysis. anger,anticipation,fear,disgust,joy,sadness,surprise,trust consist of the confusion matrices for anger, anticipation, fear, disgust, joy, sadness, surprise and trust. We observe from Table TABREF23 that the system fails to label many instances with the emotion surprise. This may be due to the reason that this particular class is the most underrepresented in the training set. A similar trend can also be observed for the emotion fear and trust in Table TABREF23 and Table TABREF23, respectively. These three emotions have the least share of training instances, making the system less confident towards these emotions.",
"Moreover, we closely analyze the outputs to understand the kind of errors that our proposed model faces. We observe that the system faces difficulties at times and wrongly predicts the sentiment class in the following scenarios:",
"$\\bullet $ Often real-world phrases/sentences have emotions of conflicting nature. These conflicting nature of emotions are directly not evident from the surface form and are left unsaid as these are implicitly understood by humans. The system gets confused when presented with such instances.",
"Text: When you become a father you realize that you are not the most important person in the room anymore... Your child is!",
"Actual Sentiment: positive",
"Actual Emotion: anticipation, joy, surprise, trust",
"Predicted Sentiment: negative",
"Predicted Emotion: anger, anticipation, sadness",
"The realization of not being the most important person in a room invokes anger, anticipation and sadness emotions, and a negative sentiment. However, it is a natural feeling of overwhelmingly positive sentiment when you understand that your own child is the most significant part of your life.",
"$\\bullet $ Occasionally, the system focuses on the less significant part of the sentences. Due to this the system might miss crucial information which can influence and even change the final sentiment or emotion. This sometimes lead to the incorrect prediction of the overall sentiment and emotion.",
"Text: I've been called many things, quitter is not one of them...",
"Actual Sentiment: positive",
"Actual Emotion: anticipation, joy, trust",
"Predicted Sentiment: negative",
"Predicted Emotion: anticipation, sadness",
"Here, the system focuses on the first part of the sentence where the speaker was called many things which denotes a negative sentiment. Hence, the system predicts a negative sentiment and, anticipation and sadness emotions. However, the speaker in the second part uplifts the overall tone by justifying that s/he has never been called a quitter. This changes the negative sentiment to a positive sentiment and the overall emotion."
],
[
"In this paper, we have presented a novel two-layered multi-task attention based neural network which performs sentiment analysis through emotion analysis. The primary attention mechanism of the two-layered multi-task system relies on Distributional Thesaurus which acts as a source of external knowledge. The system hierarchically builds the final representation from the word level to the sentence level. This provides a working insight to the system and its ability to handle the unseen words. Evaluation on the benchmark dataset suggests an improvement of 3.2 F-score point for sentiment analysis and an overall performance boost of 5 F-score points for emotion analysis over the existing state-of-the-art systems. The system empirically establishes the fact that emotion analysis is both useful and relevant to sentiment analysis. The proposed system does not rely on any language dependent features or lexicons. This makes it extensible to other languages as well. In future, we would like to extend the two-layered multi-task attention based neural network to other languages."
],
[
"Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed Methodology",
"Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: BiLSTM based word encoder",
"Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Word Attention",
"Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Sentence Attention",
"Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Final Output",
"Proposed Methodology ::: Distributional Thesaurus",
"Proposed Methodology ::: Word Embeddings",
"Datasets, Experiments and Analysis",
"Datasets, Experiments and Analysis ::: Datasets",
"Datasets, Experiments and Analysis ::: Preprocessing",
"Datasets, Experiments and Analysis ::: Implementation Details",
"Datasets, Experiments and Analysis ::: Results and Analysis",
"Datasets, Experiments and Analysis ::: Error Analysis",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"d06db6cb47479b16310c2b411473e15f7bf6a92d"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET.",
"We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.",
"We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The three architectures correspond to BiLSTM based systems without primary attention i.e. only with secondary attention for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers form a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis."
],
"extractive_spans": [],
"free_form_answer": "F1 score of 66.66%",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET.",
"We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.",
"F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"7f3ef3b4b9425404afc5b0f0614299cc2fda258f"
],
"answer": [
{
"evidence": [
"We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The three architectures correspond to BiLSTM based systems without primary attention i.e. only with secondary attention for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers form a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis.",
"We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.",
"FLOAT SELECTED: TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET."
],
"extractive_spans": [],
"free_form_answer": "F1 score of 82.10%",
"highlighted_evidence": [
"F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. ",
"We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.",
"FLOAT SELECTED: TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"08a5920d677c3b68fa489891947176aabc8aea5b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: TABLE III COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS OF SEMEVAL 2016 TASK 6 ON SENTIMENT DATASET.",
"FLOAT SELECTED: TABLE IV COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS PROPOSED BY [16] ON EMOTION DATASET. THE METRICS P, R AND F STAND FOR PRECISION, RECALL AND F1-SCORE.",
"Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 task 6 by 3.2 F-score points for sentiment analysis.",
"We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise."
],
"extractive_spans": [],
"free_form_answer": "For sentiment analysis UWB, INF-UFRGS-OPINION-MINING, LitisMind, pkudblab and SVM + n-grams + sentiment and for emotion analysis MaxEnt, SVM, LSTM, BiLSTM and CNN",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE III COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS OF SEMEVAL 2016 TASK 6 ON SENTIMENT DATASET.",
"FLOAT SELECTED: TABLE IV COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS PROPOSED BY [16] ON EMOTION DATASET. THE METRICS P, R AND F STAND FOR PRECISION, RECALL AND F1-SCORE.",
"Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset.",
"We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"da6e104da9b4c6afc83c4800d11568afe8d568d7"
],
"answer": [
{
"evidence": [
"We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections."
],
"extractive_spans": [
"The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks.",
"Each of the shared representations is then fed to the primary attention mechanism"
],
"free_form_answer": "",
"highlighted_evidence": [
"The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"463da0e392644787be01b0c603d433f5d3e32098"
],
"answer": [
{
"evidence": [
"We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and Stance Sentiment Emotion Corpus (SSEC) BIBREF15. The SSEC corpus is an annotation of the SemEval 2016 Task 6 corpus with emotion labels. The re-annotation of the SemEval 2016 Task 6 corpus helps to bridge the gap between the unavailability of a corpus with sentiment and emotion labels. The SemEval 2016 corpus contains tweets which are classified into positive, negative or other. It contains 2,914 training and 1,956 test instances. The SSEC corpus is annotated with anger, anticipation, disgust, fear, joy, sadness, surprise and trust labels. Each tweet could belong to one or more emotion classes and one sentiment class. Table TABREF15 shows the data statistics of SemEval 2016 task 6 and SSEC which are used for sentiment and emotion analysis, respectively."
],
"extractive_spans": [
"SemEval 2016 Task 6 BIBREF7",
"Stance Sentiment Emotion Corpus (SSEC) BIBREF15"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and Stance Sentiment Emotion Corpus (SSEC) BIBREF15."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1e78ce2b71204f6727220e406bbcd71811faca2a"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"403bf3135ace52b79ffbabe0d50d4cd367b61838"
],
"answer": [
{
"evidence": [
"Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 task 6 by 3.2 F-score points for sentiment analysis.",
"We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise."
],
"extractive_spans": [
"BIBREF7",
"BIBREF39",
"BIBREF37",
"LitisMind",
"Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features.",
"We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e21f12751aa4c12d358cec2f742eec769c765999"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What was their result on Stance Sentiment Emotion Corpus?",
"What performance did they obtain on the SemEval dataset?",
"What are the state-of-the-art systems?",
"How is multi-tasking performed?",
"What are the datasets used for training?",
"How many parameters does the model have?",
"What is the previous state-of-the-art model?",
"What is the previous state-of-the-art performance?"
],
"question_id": [
"3e839783d8a4f2fe50ece4a9b476546f0842b193",
"2869d19e54fb554fcf1d6888e526135803bb7d75",
"894c086a2cbfe64aa094c1edabbb1932a3d7c38a",
"722e9b6f55971b4c48a60f7a9fe37372f5bf3742",
"9c2f306044b3d1b3b7fdd05d1c046e887796dd7a",
"3d99bc8ab2f36d4742e408f211bec154bc6696f7",
"9219eef636ddb020b9d394868959325562410f83",
"ff83eea2df9976c1a01482818340871b17ad4f8c"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"sentiment",
"sentiment",
"sentiment",
"Sentiment Analysis",
"Sentiment Analysis",
"Sentiment Analysis",
"Sentiment Analysis",
"Sentiment Analysis"
],
"topic_background": [
"",
"",
"",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. Two-layered multi-task attention based network",
"TABLE I DATASET STATISTICS OF SEMEVAL 2016 TASK 6 AND SSEC USED FOR SENTIMENT AND EMOTION ANALYSIS, RESPECTIVELY.",
"TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET.",
"TABLE III COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS OF SEMEVAL 2016 TASK 6 ON SENTIMENT DATASET.",
"Fig. 2. Comparison of various models (S1, S2, M1, M2) w.r.t different hidden state vector sizes of BiLSTM for sentiment analysis. Y-axis denotes the Fscores.",
"TABLE IV COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS PROPOSED BY [16] ON EMOTION DATASET. THE METRICS P, R AND F STAND FOR PRECISION, RECALL AND F1-SCORE.",
"TABLE XI CONFUSION MATRIX FOR sadness"
],
"file": [
"3-Figure1-1.png",
"5-TableI-1.png",
"5-TableII-1.png",
"5-TableIII-1.png",
"5-Figure2-1.png",
"6-TableIV-1.png",
"7-TableXI-1.png"
]
} | [
"What was their result on Stance Sentiment Emotion Corpus?",
"What performance did they obtain on the SemEval dataset?",
"What are the state-of-the-art systems?"
] | [
[
"1911.12569-Datasets, Experiments and Analysis ::: Results and Analysis-0",
"1911.12569-Datasets, Experiments and Analysis ::: Implementation Details-0",
"1911.12569-5-TableII-1.png"
],
[
"1911.12569-Datasets, Experiments and Analysis ::: Results and Analysis-0",
"1911.12569-Datasets, Experiments and Analysis ::: Implementation Details-0",
"1911.12569-5-TableII-1.png"
],
[
"1911.12569-Datasets, Experiments and Analysis ::: Results and Analysis-3",
"1911.12569-5-TableIII-1.png",
"1911.12569-6-TableIV-1.png",
"1911.12569-Datasets, Experiments and Analysis ::: Results and Analysis-2"
]
] | [
"F1 score of 66.66%",
"F1 score of 82.10%",
"For sentiment analysis UWB, INF-UFRGS-OPINION-MINING, LitisMind, pkudblab and SVM + n-grams + sentiment and for emotion analysis MaxEnt, SVM, LSTM, BiLSTM and CNN"
] | 149 |
1901.04899 | Conversational Intent Understanding for Passengers in Autonomous Vehicles | Understanding passenger intents and extracting relevant slots are important building blocks towards developing a contextual dialogue system responsible for handling certain vehicle-passenger interactions in autonomous vehicles (AV). When the passengers give instructions to AMIE (Automated-vehicle Multimodal In-cabin Experience), the agent should parse such commands properly and trigger the appropriate functionality of the AV system. In our AMIE scenarios, we describe usages and support various natural commands for interacting with the vehicle. We collected a multimodal in-cabin data-set with multi-turn dialogues between the passengers and AMIE using a Wizard-of-Oz scheme. We explored various recent Recurrent Neural Networks (RNN) based techniques and built our own hierarchical models to recognize passenger intents along with relevant slots associated with the action to be performed in AV scenarios. Our experimental results achieved F1-score of 0.91 on utterance-level intent recognition and 0.96 on slot extraction models. | {
"paragraphs": [
[
"Understanding passenger intents and extracting relevant slots are important building blocks towards developing a contextual dialogue system responsible for handling certain vehicle-passenger interactions in autonomous vehicles (AV). When the passengers give instructions to AMIE (Automated-vehicle Multimodal In-cabin Experience), the agent should parse such commands properly and trigger the appropriate functionality of the AV system. In our AMIE scenarios, we describe usages and support various natural commands for interacting with the vehicle. We collected a multimodal in-cabin data-set with multi-turn dialogues between the passengers and AMIE using a Wizard-of-Oz scheme. We explored various recent Recurrent Neural Networks (RNN) based techniques and built our own hierarchical models to recognize passenger intents along with relevant slots associated with the action to be performed in AV scenarios. Our experimental results achieved F1-score of 0.91 on utterance-level intent recognition and 0.96 on slot extraction models."
],
[
"Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators.",
"For slot filling and intent keywords extraction tasks, we experimented with seq2seq LSTMs and GRUs, and also Bidirectional LSTM/GRUs. The passenger utterance is fed into a Bi-LSTM network via an embedding layer as a sequence of words, which are transformed into word vectors. We also experimented with GloVe, word2vec, and fastText as pre-trained word embeddings. To prevent overfitting, a dropout layer is used for regularization. Best performing results are obtained with Bi-LSTMs and GloVe embeddings (6B tokens, 400K vocab size, dim 100).",
"For utterance-level intent detection, we experimented with mainly 5 models: (1) Hybrid: RNN + Rule-based, (2) Separate: Seq2one Bi-LSTM + Attention, (3) Joint: Seq2seq Bi-LSTM for slots/intent keywords & utterance-level intents, (4) Hierarchical + Separate, (5) Hierarchical + Joint. For (1), we extract intent keywords/slots (Bi-LSTM) and map them into utterance-level intent types (rule-based via term frequencies for each intent). For (2), we feed the whole utterance as input sequence and intent-type as single target. For (3), we experiment with the joint learning models BIBREF0 , BIBREF1 , BIBREF2 where we jointly train word-level intent keywords/slots and utterance-level intents (adding <BOU>/<EOU> terms to the start/end of utterances with intent types). For (4) and (5), we experiment with the hierarchical models BIBREF3 , BIBREF4 , BIBREF5 where we extract intent keywords/slots first, and then only feed the predicted keywords/slots as a sequence into (2) and (3), respectively."
],
[
"The slot extraction and intent keywords extraction results are given in Table TABREF1 and Table TABREF2 , respectively. Table TABREF3 summarizes the results of various approaches we investigated for utterance-level intent understanding. Table TABREF4 shows the intent-wise detection results for our AMIE scenarios with the best performing utterance-level intent recognizer."
],
[
"After exploring various recent Recurrent Neural Networks (RNN) based techniques, we built our own hierarchical joint models to recognize passenger intents along with relevant slots associated with the action to be performed in AV scenarios. Our experimental results outperformed certain competitive baselines and achieved overall F1-scores of 0.91 for utterance-level intent recognition and 0.96 for slot extraction tasks."
]
],
"section_name": [
"Introduction",
"Methodology",
"Experimental Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"ca8e0b7c0f1b3216656508fc0b7b097f3d0235b9"
],
"answer": [
{
"evidence": [
"Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators."
],
"extractive_spans": [
"Set/Change Destination",
"Set/Change Route",
"Go Faster",
"Go Slower",
"Stop",
"Park",
"Pull Over",
"Drop Off",
"Open Door",
"Other "
],
"free_form_answer": "",
"highlighted_evidence": [
"Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"562d57fd3a19570effda503acce6ef14104b0bb5"
],
"answer": [
{
"evidence": [
"Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators."
],
"extractive_spans": [],
"free_form_answer": "3347 unique utterances ",
"highlighted_evidence": [
"We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"08cc6df0130add74d12eaccb3f1199ec873259eb"
],
"answer": [
{
"evidence": [
"The slot extraction and intent keywords extraction results are given in Table TABREF1 and Table TABREF2 , respectively. Table TABREF3 summarizes the results of various approaches we investigated for utterance-level intent understanding. Table TABREF4 shows the intent-wise detection results for our AMIE scenarios with the best performing utterance-level intent recognizer.",
"FLOAT SELECTED: Table 3: Utterance-level Intent Recognition Results (10-fold CV)"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The slot extraction and intent keywords extraction results are given in Table TABREF1 and Table TABREF2 , respectively. Table TABREF3 summarizes the results of various approaches we investigated for utterance-level intent understanding. Table TABREF4 shows the intent-wise detection results for our AMIE scenarios with the best performing utterance-level intent recognizer.",
"FLOAT SELECTED: Table 3: Utterance-level Intent Recognition Results (10-fold CV)"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"63a0529a4906af245494bc3e0c499cd869c4e775"
],
"answer": [
{
"evidence": [
"Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators."
],
"extractive_spans": [
"Set/Change Destination",
"Set/Change Route",
"Go Faster",
"Go Slower",
"Stop",
"Park",
"Pull Over",
"Drop Off",
"Open Door",
"Other "
],
"free_form_answer": "",
"highlighted_evidence": [
"Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What are the supported natural commands?",
"What is the size of their collected dataset?",
"Did they compare against other systems?",
"What intents does the paper explore?"
],
"question_id": [
"c6e63e3b807474e29bfe32542321d015009e7148",
"4ef2fd79d598accc54c084f0cca8ad7c1b3f892a",
"40e3639b79e2051bf6bce300d06548e7793daee0",
"8383e52b2adbbfb533fbe8179bc8dae11b3ed6da"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Slot Extraction Results (10-fold CV)",
"Table 3: Utterance-level Intent Recognition Results (10-fold CV)",
"Table 2: Intent Keyword Extraction Results (10-fold CV)",
"Table 4: Intent-wise Performance Results of Utterance-level Intent Recognition Models: Hierarchical & Joint (10-fold CV)"
],
"file": [
"2-Table1-1.png",
"2-Table3-1.png",
"2-Table2-1.png",
"3-Table4-1.png"
]
} | [
"What is the size of their collected dataset?"
] | [
[
"1901.04899-Methodology-0"
]
] | [
"3347 unique utterances "
] | 151 |
1606.05320 | Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models | As deep neural networks continue to revolutionize various application domains, there is increasing interest in making these powerful models more understandable and interpretable, and narrowing down the causes of good and bad predictions. We focus on recurrent neural networks (RNNs), state of the art models in speech recognition and translation. Our approach to increasing interpretability is by combining an RNN with a hidden Markov model (HMM), a simpler and more transparent model. We explore various combinations of RNNs and HMMs: an HMM trained on LSTM states; a hybrid model where an HMM is trained first, then a small LSTM is given HMM state distributions and trained to fill in gaps in the HMM's performance; and a jointly trained hybrid model. We find that the LSTM and HMM learn complementary information about the features in the text. | {
"paragraphs": [
[
"Following the recent progress in deep learning, researchers and practitioners of machine learning are recognizing the importance of understanding and interpreting what goes on inside these black box models. Recurrent neural networks have recently revolutionized speech recognition and translation, and these powerful models could be very useful in other applications involving sequential data. However, adoption has been slow in applications such as health care, where practitioners are reluctant to let an opaque expert system make crucial decisions. If we can make the inner workings of RNNs more interpretable, more applications can benefit from their power.",
"There are several aspects of what makes a model or algorithm understandable to humans. One aspect is model complexity or parsimony. Another aspect is the ability to trace back from a prediction or model component to particularly influential features in the data BIBREF0 BIBREF1 . This could be useful for understanding mistakes made by neural networks, which have human-level performance most of the time, but can perform very poorly on seemingly easy cases. For instance, convolutional networks can misclassify adversarial examples with very high confidence BIBREF2 , and made headlines in 2015 when the image tagging algorithm in Google Photos mislabeled African Americans as gorillas. It's reasonable to expect recurrent networks to fail in similar ways as well. It would thus be useful to have more visibility into where these sorts of errors come from, i.e. which groups of features contribute to such flawed predictions.",
"Several promising approaches to interpreting RNNs have been developed recently. BIBREF3 have approached this by using gradient boosting trees to predict LSTM output probabilities and explain which features played a part in the prediction. They do not model the internal structure of the LSTM, but instead approximate the entire architecture as a black box. BIBREF4 showed that in LSTM language models, around 10% of the memory state dimensions can be interpreted with the naked eye by color-coding the text data with the state values; some of them track quotes, brackets and other clearly identifiable aspects of the text. Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ).",
"We explore several methods for building interpretable models by combining LSTMs and HMMs. The existing body of literature mostly focuses on methods that specifically train the RNN to predict HMM states BIBREF5 or posteriors BIBREF6 , referred to as hybrid or tandem methods respectively. We first investigate an approach that does not require the RNN to be modified in order to make it understandable, as the interpretation happens after the fact. Here, we model the big picture of the state changes in the LSTM, by extracting the hidden states and approximating them with a continuous emission hidden Markov model (HMM). We then take the reverse approach where the HMM state probabilities are added to the output layer of the LSTM (see Figure 1 ). The LSTM model can then make use of the information from the HMM, and fill in the gaps when the HMM is not performing well, resulting in an LSTM with a smaller number of hidden state dimensions that could be interpreted individually (Figures 3 , 3 )."
],
[
"We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data)."
],
[
"We use a character-level LSTM with 1 layer and no dropout, based on the Element-Research library. We train the LSTM for 10 epochs, starting with a learning rate of 1, where the learning rate is halved whenever $\\exp (-l_t) > \\exp (-l_{t-1}) + 1$ , where $l_t$ is the log likelihood score at epoch $t$ . The $L_2$ -norm of the parameter gradient vector is clipped at a threshold of 5."
],
[
"The HMM training procedure is as follows:",
"Initialization of HMM hidden states:",
"(Discrete HMM) Random multinomial draw for each time step (i.i.d. across time steps).",
"(Continuous HMM) K-means clusters fit on LSTM states, to speed up convergence relative to random initialization.",
"At each iteration:",
"Sample states using Forward Filtering Backwards Sampling algorithm (FFBS, BIBREF7 ).",
"Sample transition parameters from a Multinomial-Dirichlet posterior. Let $n_{ij}$ be the number of transitions from state $i$ to state $j$ . Then the posterior distribution of the $i$ -th row of transition matrix $T$ (corresponding to transitions from state $i$ ) is: $T_i \\sim \\text{Mult}(n_{ij} | T_i) \\text{Dir}(T_i | \\alpha )$ ",
"where $\\alpha $ is the Dirichlet hyperparameter.",
"(Continuous HMM) Sample multivariate normal emission parameters from Normal-Inverse-Wishart posterior for state $i$ : $ \\mu _i, \\Sigma _i \\sim N(y|\\mu _i, \\Sigma _i) N(\\mu _i |0, \\Sigma _i) \\text{IW}(\\Sigma _i) $ ",
"(Discrete HMM) Sample the emission parameters from a Multinomial-Dirichlet posterior.",
"Evaluation:",
"We evaluate the methods on how well they predict the next observation in the validation set. For the HMM models, we do a forward pass on the validation set (no backward pass unlike the full FFBS), and compute the HMM state distribution vector $p_t$ for each time step $t$ . Then we compute the predictive likelihood for the next observation as follows: $ P(y_{t+1} | p_t) =\\sum _{x_t=1}^n \\sum _{x_{t+1}=1}^n p_{tx_t} \\cdot T_{x_t, x_{t+1}} \\cdot P(y_{t+1} | x_{t+1})$ ",
"where $n$ is the number of hidden states in the HMM."
],
[
"Our main hybrid model is put together sequentially, as shown in Figure 1 . We first run the discrete HMM on the data, outputting the hidden state distributions obtained by the HMM's forward pass, and then add this information to the architecture in parallel with a 1-layer LSTM. The linear layer between the LSTM and the prediction layer is augmented with an extra column for each HMM state. The LSTM component of this architecture can be smaller than a standalone LSTM, since it only needs to fill in the gaps in the HMM's predictions. The HMM is written in Python, and the rest of the architecture is in Torch.",
"We also build a joint hybrid model, where the LSTM and HMM are simultaneously trained in Torch. We implemented an HMM Torch module, optimized using stochastic gradient descent rather than FFBS. Similarly to the sequential hybrid model, we concatenate the LSTM outputs with the HMM state probabilities."
],
[
"We test the models on several text data sets on the character level: the Penn Tree Bank (5M characters), and two data sets used by BIBREF4 , Tiny Shakespeare (1M characters) and Linux Kernel (5M characters). We chose $k=20$ for the continuous HMM based on a PCA analysis of the LSTM states, as the first 20 components captured almost all the variance.",
"Table 1 shows the predictive log likelihood of the next text character for each method. On all text data sets, the hybrid algorithm performs a bit better than the standalone LSTM with the same LSTM state dimension. This effect gets smaller as we increase the LSTM size and the HMM makes less difference to the prediction (though it can still make a difference in terms of interpretability). The hybrid algorithm with 20 HMM states does better than the one with 10 HMM states. The joint hybrid algorithm outperforms the sequential hybrid on Shakespeare data, but does worse on PTB and Linux data, which suggests that the joint hybrid is more helpful for smaller data sets. The joint hybrid is an order of magnitude slower than the sequential hybrid, as the SGD-based HMM is slower to train than the FFBS-based HMM.",
"We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data."
],
[
"Hybrid HMM-RNN approaches combine the interpretability of HMMs with the predictive power of RNNs. Sometimes, a small hybrid model can perform better than a standalone LSTM of the same size. We use visualizations to show how the LSTM and HMM components of the hybrid algorithm complement each other in terms of features learned in the data."
]
],
"section_name": [
"Introduction",
"Methods",
"LSTM models",
"Hidden Markov models",
"Hybrid models",
"Experiments",
"Conclusion and future work"
]
} | {
"answers": [
{
"annotation_id": [
"3be4a77ab3aaee94fae674de02f30c26a8ac92cc"
],
"answer": [
{
"evidence": [
"We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data).",
"We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data.",
"FLOAT SELECTED: Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments."
],
"extractive_spans": [],
"free_form_answer": "A continuous emission HMM uses the hidden states of a 2-layer LSTM as features and a discrete emission HMM uses data as features. \nThe interpretability of the model is shown in Figure 2. ",
"highlighted_evidence": [
"We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data).",
"We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components.",
"FLOAT SELECTED: Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"annotation_id": [
"74af4b76c56784369d825b16869ad676ce461b5a"
],
"answer": [
{
"evidence": [
"We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data.",
"FLOAT SELECTED: Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments."
],
"extractive_spans": [],
"free_form_answer": "The HMM can identify punctuation or pick up on vowels.",
"highlighted_evidence": [
"We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data.",
"FLOAT SELECTED: Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"annotation_id": [
"ec6cd705d22766e1274c29c47bbc0130b8ebe6e4"
],
"answer": [
{
"evidence": [
"Several promising approaches to interpreting RNNs have been developed recently. BIBREF3 have approached this by using gradient boosting trees to predict LSTM output probabilities and explain which features played a part in the prediction. They do not model the internal structure of the LSTM, but instead approximate the entire architecture as a black box. BIBREF4 showed that in LSTM language models, around 10% of the memory state dimensions can be interpreted with the naked eye by color-coding the text data with the state values; some of them track quotes, brackets and other clearly identifiable aspects of the text. Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ).",
"We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data."
],
"extractive_spans": [
"decision trees to predict individual hidden state dimensions",
"apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters"
],
"free_form_answer": "",
"highlighted_evidence": [
"Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ).",
"In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"b06f6ec0482033adb20e36a1fa5db6e23787c281"
]
},
{
"annotation_id": [
"08dd9aab02deed98405f4acd28f2cd1bb2f50927"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Predictive loglikelihood (LL) comparison, sorted by validation set performance."
],
"extractive_spans": [],
"free_form_answer": "With similar number of parameters, the log likelihood is about 0.1 lower for LSTMs across datasets. When the number of parameters in LSTMs is increased, their log likelihood is up to 0.7 lower.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Predictive loglikelihood (LL) comparison, sorted by validation set performance."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"b06f6ec0482033adb20e36a1fa5db6e23787c281"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What kind of features are used by the HMM models, and how interpretable are those?",
"What kind of information do the HMMs learn that the LSTMs don't?",
"Which methods do the authors use to reach the conclusion that LSTMs and HMMs learn complementary information?",
"How large is the gap in performance between the HMMs and the LSTMs?"
],
"question_id": [
"5f7850254b723adf891930c6faced1058b99bd57",
"4d05a264b2353cff310edb480a917d686353b007",
"7cdce4222cea6955b656c1a3df1129bb8119e2d0",
"6ea63327ffbab2fc734dd5c2414e59d3acc56ea5"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"interpretability",
"interpretability",
"interpretability",
"interpretability"
],
"topic_background": [
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Figure 1: Hybrid HMM-LSTM algorithms (the dashed blocks indicate the components trained using SGD in Torch).",
"Table 1: Predictive loglikelihood (LL) comparison, sorted by validation set performance.",
"Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments.",
"Figure 3: Decision tree predicting an individual hidden state dimension of the hybrid algorithm based on the preceding characters on the Linux data. Nodes with uninformative splits are represented with . . . ."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png"
]
} | [
"What kind of features are used by the HMM models, and how interpretable are those?",
"What kind of information do the HMMs learn that the LSTMs don't?",
"How large is the gap in performance between the HMMs and the LSTMs?"
] | [
[
"1606.05320-Experiments-2",
"1606.05320-Methods-0",
"1606.05320-4-Figure2-1.png"
],
[
"1606.05320-Experiments-2",
"1606.05320-4-Figure2-1.png"
],
[
"1606.05320-3-Table1-1.png"
]
] | [
"A continuous emission HMM uses the hidden states of a 2-layer LSTM as features and a discrete emission HMM uses data as features. \nThe interpretability of the model is shown in Figure 2. ",
"The HMM can identify punctuation or pick up on vowels.",
"With similar number of parameters, the log likelihood is about 0.1 lower for LSTMs across datasets. When the number of parameters in LSTMs is increased, their log likelihood is up to 0.7 lower."
] | 152 |
1809.10644 | Predictive Embeddings for Hate Speech Detection on Twitter | We present a neural-network based approach to classifying online hate speech in general, as well as racist and sexist speech in particular. Using pre-trained word embeddings and max/mean pooling from simple, fully-connected transformations of these embeddings, we are able to predict the occurrence of hate speech on three commonly used publicly available datasets. Our models match or outperform state of the art F1 performance on all three datasets using significantly fewer parameters and minimal feature preprocessing compared to previous methods. | {
"paragraphs": [
[
"The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of services on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems.",
"Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.”",
"In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features:",
"In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study."
],
[
"Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features.",
"Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence abusive speech more generally, and another to discern the form of abusive speech.",
" BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone.",
"In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication. Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts."
],
[
"In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets.",
"Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 ."
],
[
"Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence.",
"We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations."
],
[
"Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Liner Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand."
],
[
"We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation."
],
[
"We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels."
],
[
"We tokenize the data using Spacy BIBREF10 . We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search.",
"All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets.",
"SR: Sexist/Racist BIBREF3 , HATE: Hate BIBREF4 HAR: Harassment BIBREF9 "
],
[
"The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1. ",
"Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split",
"to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found 0.001 significance compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement.",
"Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use excluding the word embeddings) and even recent work by BIBREF8 , (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists).",
"On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters."
],
[
"False negatives",
"Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel.",
"Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two.",
"Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech:",
"@LoveAndLonging ...how is that example \"sexism\"?",
"@amberhasalamb ...in what way?",
"Another case our classifier misses is problematic speech within a hashtag:",
":D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :)",
"This limitation could be potentially improved through the use of character convolutions or subword tokenization.",
"False Positives",
"In certain cases, our model seems to be learning user names instead of semantic content:",
"RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't.",
"Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki)"
],
[
"Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results.",
"Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provide minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags."
],
[
"We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them."
],
[
"Original text",
"RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing",
"Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing)",
"RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing",
"Tokenize Lowercase: Lowercase the basic tokenize scheme",
"rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing",
"Token Replace: Replaces entities and user names with placeholder)",
"ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing",
"Token Replace Lowercase: Lowercase the Token Replace Scheme",
"ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing",
"We did analysis on a validation set across multiple datasets to find that the \"Tokenize\" scheme was by far the best. We believe that keeping the case in tact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of."
],
[
"Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space because it encourages salient words to increase their values to be selected. We projected the original pre-trained embeddings to the learned space using the time distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space.",
"The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights to how hate speech surfaced in the datasets. One clear grouping we found is the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints towards the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters to find that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech related words together, and words from other clusters further away, thus, structuring the embedding space effectively for detecting hate speech."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data",
"Transformed Word Embedding Model (TWEM)",
"Word Embeddings",
"Pooling",
"Output",
"Experimental Setup",
"Results and Discussion",
"Error Analysis",
"Conclusion",
"Supplemental Material",
"Preprocessing",
"Embedding Analysis"
]
} | {
"answers": [
{
"annotation_id": [
"7acdce6a3960c4cb8094d6e4544c30573fbd7f65"
],
"answer": [
{
"evidence": [
"In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets.",
"Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 .",
"Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel.",
"Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two.",
"Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech:",
"@LoveAndLonging ...how is that example \"sexism\"?",
"@amberhasalamb ...in what way?"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we use three data sets from the literature to train and evaluate our own classifier.",
"Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 .",
"Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. ",
"While this may be a limitation of considering only the content of the tweet, it could also be a mislabel.\n\nDebra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two.\n\nAlong these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech:\n\n@LoveAndLonging ...how is that example \"sexism\"?\n\n@amberhasalamb ...in what way?"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"80c406b3f6db9d8fc52494f64623dece1a1fb5a9"
],
"answer": [
{
"evidence": [
"In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets.",
"Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 ."
],
"extractive_spans": [
"BIBREF3",
"BIBREF4",
"BIBREF9"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we use three data sets from the literature to train and evaluate our own classifier.",
"Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"a304633262bac6ad36eebafd497fad08ae92472f"
],
"answer": [
{
"evidence": [
"We tokenize the data using Spacy BIBREF10 . We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search."
],
"extractive_spans": [
"300 Dimensional Glove"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"ef801e4f9403ce2032a60c72ab309d59ae99815b"
],
"answer": [
{
"evidence": [
"We tokenize the data using Spacy BIBREF10 . We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search."
],
"extractive_spans": [
"Common Crawl "
],
"free_form_answer": "",
"highlighted_evidence": [
"We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"629050e165fd7bce52139caf1d57c8bb2af6f6b1"
],
"answer": [
{
"evidence": [
"On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters."
],
"extractive_spans": [
"our model requires 100k parameters , while BIBREF8 requires 250k parameters"
],
"free_form_answer": "",
"highlighted_evidence": [
"Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"090362d69eea1dc52f6e26ca692dc5a45aab9ea2"
],
"answer": [
{
"evidence": [
"On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters."
],
"extractive_spans": [
"Excluding the embedding weights, our model requires 100k parameters"
],
"free_form_answer": "",
"highlighted_evidence": [
"Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a7994610e5a9941b8fc4c4bff59ba0efbd157426"
],
"answer": [
{
"evidence": [
"Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 ."
],
"extractive_spans": [
"Sexist/Racist (SR) data set",
"HATE dataset",
"HAR"
],
"free_form_answer": "",
"highlighted_evidence": [
"Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d6c36ac05ab606c6508299255adf1a37eb474542"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: F1 Results3",
"The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1."
],
"extractive_spans": [],
"free_form_answer": "Proposed model achieves 0.86, 0.924, 0.71 F1 score on SR, HATE, HAR datasets respectively.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: F1 Results3",
"Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ef284b3f6c2607cb62a2fbfa6b7d0bcfb580696d"
],
"answer": [
{
"evidence": [
"All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets."
],
"extractive_spans": [
"logistic regression"
],
"free_form_answer": "",
"highlighted_evidence": [
"We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"",
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no",
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"Which publicly available datasets are used?",
"What embedding algorithm and dimension size are used?",
"What data are the embeddings trained on?",
"how much was the parameter difference between their model and previous methods?",
"how many parameters did their model use?",
"which datasets were used?",
"what was their system's f1 performance?",
"what was the baseline?"
],
"question_id": [
"50690b72dc61748e0159739a9a0243814d37f360",
"8266642303fbc6a1138b4e23ee1d859a6f584fbb",
"3685bf2409b23c47bfd681989fb4a763bcab6be2",
"19225e460fff2ac3aebc7fe31fcb4648eda813fb",
"f37026f518ab56c859f6b80b646d7f19a7b684fa",
"1231934db6adda87c1b15e571468b8e9d225d6fe",
"81303f605da57ddd836b7c121490b0ebb47c60e7",
"a3f108f60143d13fe38d911b1cc3b17bdffde3bd",
"118ff1d7000ea0d12289d46430154cc15601fd8e"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Dataset Characteristics",
"Table 2: F1 Results3",
"Table 3: Projected Embedding Cluster Analysis from SR Dataset",
"Table 5: SR Results",
"Table 7: HAR Results",
"Table 6: HATE Results",
"Table 8: Projected Embedding Cluster Analysis from SR Dataset"
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"6-Table5-1.png",
"6-Table7-1.png",
"6-Table6-1.png",
"7-Table8-1.png"
]
} | [
"what was their system's f1 performance?"
] | [
[
"1809.10644-3-Table2-1.png"
]
] | [
"Proposed model achieves 0.86, 0.924, 0.71 F1 score on SR, HATE, HAR datasets respectively."
] | 153 |
1911.03243 | Crowdsourcing a High-Quality Gold Standard for QA-SRL | Question-answer driven Semantic Role Labeling (QA-SRL) has been proposed as an attractive open and natural form of SRL, easily crowdsourceable for new corpora. Recently, a large-scale QA-SRL corpus and a trained parser were released, accompanied by a densely annotated dataset for evaluation. Trying to replicate the QA-SRL annotation and evaluation scheme for new texts, we observed that the resulting annotations were lacking in quality and coverage, particularly insufficient for creating gold standards for evaluation. In this paper, we present an improved QA-SRL annotation protocol, involving crowd-worker selection and training, followed by data consolidation. Applying this process, we release a new gold evaluation dataset for QA-SRL, yielding more consistent annotations and greater coverage. We believe that our new annotation protocol and gold standard will facilitate future replicable research of natural semantic annotations. | {
"paragraphs": [
[
"Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations, which have been found useful in various downstream tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. Question-Answer driven Semantic Role Labeling (QA-SRL) BIBREF4 is an SRL scheme in which roles are captured by natural language questions, while arguments represent their answers, making the annotations intuitive, semantically rich, and easily attainable by laymen. For example, in Table TABREF4, the question Who cut something captures the traditional “agent” role.",
"Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s.",
"In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers. To address worker quality, we systematically screen workers, provide concise yet effective guidelines, and perform a short training procedure, all within a crowd-sourcing platform. To address coverage, we employ two independent workers plus an additional one for consolidation — similar to conventional expert-annotation practices. In addition to yielding 25% more roles, our coverage gain is demonstrated by evaluating against expertly annotated data and comparison with PropBank (Section SECREF4). To foster future research, we release an assessed high-quality gold dataset along with our reproducible protocol and evaluation scheme, and report the performance of the existing parser BIBREF5 as a baseline."
],
[
"In QA-SRL, a role question adheres to a 7-slot template, with slots corresponding to a WH-word, the verb, auxiliaries, argument placeholders (SUBJ, OBJ), and prepositions, where some slots are optional BIBREF4 (see appendix for examples). Such question captures the corresponding semantic role with a natural easily understood expression. The set of all non-overlapping answers for the question is then considered as the set of arguments associated with that role. This broad question-based definition of roles captures traditional cases of syntactically-linked arguments, but also additional semantic arguments clearly implied by the sentence meaning (see example (2) in Table TABREF4)."
],
[
"The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. Each verb in the corpus was annotated by a single QA-generating worker and validated by two others.",
"In a reserved part of the corpus (Dense), targeted for parser evaluation, verbs were densely validated with 5 workers, approving questions judged as valid by at least 4/5 validators. Notably, adding validators to the Dense annotation pipeline accounts mostly for precision errors, while role coverage solely relies upon the single generator's set of questions. As both 2015 and 2018 datasets use a single question generator, both struggle with maintaining coverage. Also noteworthy, is that while traditional SRL annotations contain a single authoritative and non-redundant annotation, the 2018 dataset provides the raw annotations of all annotators. These include many overlapping or noisy answers, without settling on consolidation procedures to provide a single gold reference.",
"We found that these characteristics of the dataset impede its utility for future development of parsers."
],
[
"Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. 1 out of 3 participants were selected after exhibiting good performance, tested against expert annotations."
],
[
"We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix."
],
[
"We refine the previous guidelines by emphasizing several semantic features: correctly using modal verbs and negations in the question, and choosing answers that coincide with a single entity (example 1 in Table TABREF4)."
],
[
"We annotated a sample taken from the Dense set on Wikinews and Wikipedia domains, each with 1000 sentences, equally divided between development and test. QA generating annotators are paid the same as in fitz2018qasrl, while the consolidator is rewarded 5¢ per verb and 3¢ per question. Per predicate, on average, our cost is 54.2¢, yielding 2.9 roles, compared to reported 2.3 valid roles with an approximated cost of 51¢ per predicate for Dense."
],
[
"Evaluation in QA-SRL involves aligning predicted and ground truth argument spans and evaluating role label equivalence. Since detecting question paraphrases is still an open challenge, we propose both unlabeled and labeled evaluation metrics.",
"Unlabeled Argument Detection (UA) Inspired by the method presented in BIBREF5, arguments are matched using a span matching criterion of intersection over union $\\ge 0.5$ . To credit each argument only once, we employ maximal bipartite matching between the two sets of arguments, drawing an edge for each pair that passes the above mentioned criterion. The resulting maximal matching determines the true-positive set, while remaining non-aligned arguments become false-positives or false-negatives.",
"Labeled Argument Detection (LA) All aligned arguments from the previous step are inspected for label equivalence, similar to the joint evaluation reported in BIBREF5. There may be many correct questions for a role. For example, What was given to someone? and What has been given by someone? both refer to the same semantic role but diverge in grammatical tense, voice, and presence of a syntactical object or subject. Aiming to avoid judging non-equivalent roles as equivalent, we propose Strict-Match to be an equivalence on the following template slots: WH, SUBJ, OBJ, as well as on negation, voice, and modality extracted from the question. Final reported numbers on labelled argument detection rates are based on bipartite aligned arguments passing Strict-Match. We later manually estimate the rate of correct equivalences missed by this conservative method.",
"As we will see, our evaluation heuristics, adapted from those in BIBREF5, significantly underestimate agreement between annotations, hence reflecting performance lower bounds. Devising more tight evaluation measures remains a challenge for future research."
],
[
"We extend our metric for evaluating manual or automatic redundant annotations, like the Dense dataset or the parser in BIBREF5, which predicts argument spans independently of each other. To that end, we ignore predicted arguments that match ground-truth but are not selected by the bipartite matching due to redundancy. After connecting unmatched predicted arguments that overlap, we count one false positive for every connected component to avoid penalizing precision too harshly when predictions are redundant."
],
[
"To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Individual worker-vs-worker agreement yields 79.8 F1 over 10 experiments with 150 predicates, indicating high consistency across our annotators, inline with results by other structured semantic annotations (e.g. BIBREF6). Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves F1 of 84.1 over 4 experiments, each with 35 distinct predicates. Notably, consolidation boosts agreement, suggesting it is a necessity for semantic annotation consistency."
],
[
"We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. To construct the expert set, we blindly merged the Dense set with our worker annotations and manually corrected them. We further corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question paraphrasing criteria. As seen in Table TABREF19, our gold set yields comparable precision with significantly higher recall, which is in line with our 25% higher yield.",
"Examining disagreements between our gold and Dense, we observe that our workers successfully produced more roles, both implied and explicit. To a lesser extent, they split more arguments into independent answers, as emphasized by our guidelines, an issue which was left under-specified in the previous annotation guidelines."
],
[
"It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. In Table TABREF22, we replicate the experiments in BIBREF4 for both our gold set and theirs, over a sample of 200 sentences from Wall Street Journal (agreement evaluation is automatic and the metric is somewhat similar to our UA). We report macro-averaged (over predicates) precision and recall for all roles, including core and adjuncts, while considering the PropBank data as the reference set. Our recall of the PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol.",
"The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset."
],
[
"To illustrate the effectiveness of our new gold-standard, we use its Wikinews development set to evaluate the currently available parser from BIBREF5. For each predicate, the parser classifies every span for being an argument, independently of the other spans. Unlike many other SRL systems, this policy often produces outputs with redundant arguments (see appendix for examples). Results for 1200 predicates are reported in Table TABREF23, demonstrating reasonable performance along with substantial room for improvement, especially with respect to coverage. As expected, the parser's recall against our gold is substantially lower than the 84.2 recall reported in BIBREF5 against Dense, due to the limited recall of Dense relative to our gold set."
],
[
"We sample and evaluate 50 predicates to detect correct argument and paraphrase pairs that are skipped by the IOU and Strict-Match criteria. Based on this inspection, the parser completely misses 23% of the 154 roles present in the gold-data, out of which, 17% are implied. While the parser correctly predicts 82% of non-implied roles, it skips half of the implied ones."
],
[
"We introduced a refined crowdsourcing pipeline and a corresponding evaluation methodology for QA-SRL. It enabled us to release a new gold standard for evaluations, notably of much higher coverage of core and implied roles than the previous Dense evaluation dataset. We believe that our annotation methodology and dataset would facilitate future research on natural semantic annotations and QA-SRL parsing."
],
[
"For completeness, we include several examples with some questions restructured into its 7 template slots in Table TABREF26"
],
[
"As described in section 3 The consolidator receives two sets of QA annotations and merges them according to the guidelines to produce an exhaustive and consistent QA set. See Table TABREF28 for examples."
],
[
"As mentioned in the paper body, the Fitzgerald et al. parser generates redundant role questions and answers. The first two rows in Table TABREF30 illustrate different, partly redundant, argument spans for the same question. The next two rows illustrate two paraphrased questions for the same role. Generating such redundant output might complicate downstream use of the parser output as well as evaluation methodology."
]
],
"section_name": [
"Introduction",
"Background — QA-SRL ::: Specifications",
"Background — QA-SRL ::: Corpora",
"Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Screening and Training",
"Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Annotation",
"Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Guidelines Refinements",
"Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Data & Cost",
"Annotation and Evaluation Methods ::: Evaluation Metrics",
"Annotation and Evaluation Methods ::: Evaluation Metrics ::: Evaluating Redundant Annotations",
"Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA)",
"Dataset Quality Analysis ::: Dataset Assessment and Comparison",
"Dataset Quality Analysis ::: Agreement with PropBank Data",
"Baseline Parser Evaluation",
"Baseline Parser Evaluation ::: Error Analysis",
"Conclusion",
"Supplemental Material ::: The Question Template",
"Supplemental Material ::: Annotation Pipeline",
"Supplemental Material ::: Redundant Parser Output"
]
} | {
"answers": [
{
"annotation_id": [
"12360275d5fa216c2ae92edd18d2b5a7e81fa3a9"
],
"answer": [
{
"evidence": [
"The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset."
],
"extractive_spans": [],
"free_form_answer": "278 more annotations",
"highlighted_evidence": [
"Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"b8a2a6a6b76fdcdd7530bd3a87e4450e92da67ef"
],
"answer": [
{
"evidence": [
"The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. Each verb in the corpus was annotated by a single QA-generating worker and validated by two others."
],
"extractive_spans": [
"QA pairs per predicate"
],
"free_form_answer": "",
"highlighted_evidence": [
"They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"a1ba8313ddccd343aaf9ee6ac69b3c8d7c00cbfa"
],
"answer": [
{
"evidence": [
"Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA)",
"To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Individual worker-vs-worker agreement yields 79.8 F1 over 10 experiments with 150 predicates, indicating high consistency across our annotators, inline with results by other structured semantic annotations (e.g. BIBREF6). Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves F1 of 84.1 over 4 experiments, each with 35 distinct predicates. Notably, consolidation boosts agreement, suggesting it is a necessity for semantic annotation consistency.",
"Dataset Quality Analysis ::: Dataset Assessment and Comparison",
"We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. To construct the expert set, we blindly merged the Dense set with our worker annotations and manually corrected them. We further corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question paraphrasing criteria. As seen in Table TABREF19, our gold set yields comparable precision with significantly higher recall, which is in line with our 25% higher yield.",
"Dataset Quality Analysis ::: Agreement with PropBank Data",
"It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. In Table TABREF22, we replicate the experiments in BIBREF4 for both our gold set and theirs, over a sample of 200 sentences from Wall Street Journal (agreement evaluation is automatic and the metric is somewhat similar to our UA). We report macro-averaged (over predicates) precision and recall for all roles, including core and adjuncts, while considering the PropBank data as the reference set. Our recall of the PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol."
],
"extractive_spans": [],
"free_form_answer": "Inter-annotator agreement, comparison against expert annotation, agreement with PropBank Data annotations.",
"highlighted_evidence": [
"Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA)\nTo estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate.",
"Dataset Quality Analysis ::: Dataset Assessment and Comparison\nWe assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. ",
"Dataset Quality Analysis ::: Agreement with PropBank Data\nIt is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"090ec541ca7e88cc908f7c23f2dc68b3eee4024b"
],
"answer": [
{
"evidence": [
"Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s."
],
"extractive_spans": [
" trained annotators BIBREF4",
"crowdsourcing BIBREF5 "
],
"free_form_answer": "",
"highlighted_evidence": [
"Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"b1a374fe6485a9c92479db7bca8c839850edbfe0"
],
"answer": [
{
"evidence": [
"Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. 1 out of 3 participants were selected after exhibiting good performance, tested against expert annotations."
],
"extractive_spans": [
"extensive personal feedback"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"f2413e07629ffe74ac179dd6085da5781debcb51"
],
"answer": [
{
"evidence": [
"We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix."
],
"extractive_spans": [],
"free_form_answer": "a trained worker consolidates existing annotations ",
"highlighted_evidence": [
"We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"d7fde438a66548287215deabf15d328d3afbb7b3"
],
"answer": [
{
"evidence": [
"We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix."
],
"extractive_spans": [
"the annotation machinery of BIBREF5"
],
"free_form_answer": "",
"highlighted_evidence": [
"We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"d6014ab0bc1d512e6e22ae906021cc4c94643c57"
],
"answer": [
{
"evidence": [
"The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset."
],
"extractive_spans": [
"1593 annotations"
],
"free_form_answer": "",
"highlighted_evidence": [
"Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How much more coverage is in the new dataset?",
"How was coverage measured?",
"How was quality measured?",
"How was the corpus obtained?",
"How are workers trained?",
"What is different in the improved annotation protocol?",
"How was the previous dataset annotated?",
"How big is the dataset?"
],
"question_id": [
"04f72eddb1fc73dd11135a80ca1cf31e9db75578",
"f74eaee72cbd727a6dffa1600cdf1208672d713e",
"068dbcc117c93fa84c002d3424bafb071575f431",
"96526a14820b7debfd6f7c5beeade0a854b93d1a",
"32ba4d2d15194e889cbc9aa1d21ff1aa6fa27679",
"78c010db6413202b4063dc3fb6e3cc59ec16e7e3",
"a69af5937cab861977989efd72ad1677484b5c8c",
"8847f2c676193189a0f9c0fe3b86b05b5657b76a"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Running examples of QA-SRL annotations; this set is a sample of the possible questions that can be asked. The bar (|) separates multiple selected answers.",
"Table 2: Automatic and manually-corrected evaluation of our gold standard and Dense (Fitzgerald et al., 2018) against the expert annotated sample.",
"Table 3: Performance analysis against PropBank. Precision, recall and F1 for all roles, core roles, and adjuncts.",
"Table 4: Automatic and manual parser evaluation against 500 Wikinews sentences from the gold dataset. Manual is evaluated on 50 sampled predicates.",
"Table 6: The consolidation task – A1, A2 refer to the original annotator QAs, C refers to the consolidator selected question and corrected answers.",
"Table 7: The parser generates redundant arguments with different paraphrased questions."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"5-Table6-1.png",
"5-Table7-1.png"
]
} | [
"How much more coverage is in the new dataset?",
"How was quality measured?",
"What is different in the improved annotation protocol?"
] | [
[
"1911.03243-Dataset Quality Analysis ::: Agreement with PropBank Data-1"
],
[
"1911.03243-Dataset Quality Analysis ::: Agreement with PropBank Data-0",
"1911.03243-Dataset Quality Analysis ::: Dataset Assessment and Comparison-0",
"1911.03243-Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA)-0"
],
[
"1911.03243-Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Annotation-0"
]
] | [
"278 more annotations",
"Inter-annotator agreement, comparison against expert annotation, agreement with PropBank Data annotations.",
"a trained worker consolidates existing annotations "
] | 155 |
1809.04686 | Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation | Transferring representations from large supervised tasks to downstream tasks has shown promising results in AI fields such as Computer Vision and Natural Language Processing (NLP). In parallel, the recent progress in Machine Translation (MT) has enabled one to train multilingual Neural MT (NMT) systems that can translate between multiple languages and are also capable of performing zero-shot translation. However, little attention has been paid to leveraging representations learned by a multilingual NMT system to enable zero-shot multilinguality in other NLP tasks. In this paper, we demonstrate a simple framework, a multilingual Encoder-Classifier, for cross-lingual transfer learning by reusing the encoder from a multilingual NMT system and stitching it with a task-specific classifier component. Our proposed model achieves significant improvements in the English setup on three benchmark tasks - Amazon Reviews, SST and SNLI. Further, our system can perform classification in a new language for which no classification data was seen during training, showing that zero-shot classification is possible and remarkably competitive. In order to understand the underlying factors contributing to this finding, we conducted a series of analyses on the effect of the shared vocabulary, the training data type for NMT, classifier complexity, encoder representation power, and model generalization on zero-shot performance. Our results provide strong evidence that the representations learned from multilingual NMT systems are widely applicable across languages and tasks. | {
"paragraphs": [
[
"Transfer learning has been shown to work well in Computer Vision where pre-trained components from a model trained on ImageNet BIBREF0 are used to initialize models for other tasks BIBREF1 . In most cases, the other tasks are related to and share architectural components with the ImageNet task, enabling the use of such pre-trained models for feature extraction. With this transfer capability, improvements have been obtained on other image classification datasets, and on other tasks such as object detection, action recognition, image segmentation, etc BIBREF2 . Analogously, we propose a method to transfer a pre-trained component - the multilingual encoder from an NMT system - to other NLP tasks.",
"In NLP, initializing word embeddings with pre-trained word representations obtained from Word2Vec BIBREF3 or GloVe BIBREF4 has become a common way of transferring information from large unlabeled data to downstream tasks. Recent work has further shown that we can improve over this approach significantly by considering representations in context, i.e. modeled depending on the sentences that contain them, either by taking the outputs of an encoder in MT BIBREF5 or by obtaining representations from the internal states of a bi-directional Language Model (LM) BIBREF6 . There has also been successful recent work in transferring sentence representations from resource-rich tasks to improve resource-poor tasks BIBREF7 , however, most of the above transfer learning examples have focused on transferring knowledge across tasks for a single language, in English.",
"Cross-lingual or multilingual NLP, the task of transferring knowledge from one language to another, serves as a good test bed for evaluating various transfer learning approaches. For cross-lingual NLP, the most widely studied approach is to use multilingual embeddings as features in neural network models. However, research has shown that representations learned in context are more effective BIBREF5 , BIBREF6 ; therefore, we aim at doing better than just using multilingual embeddings in the cross-lingual tasks. Recent progress in multilingual NMT provides a compelling opportunity for obtaining contextualized multilingual representations, as multilingual NMT systems are capable of generalizing to an unseen language direction, i.e. zero-shot translation. There is also evidence that the encoder of a multilingual NMT system learns language agnostic, universal interlingua representations, which can be further exploited BIBREF8 .",
"In this paper, we focus on using the representations obtained from a multilingual NMT system to enable cross-lingual transfer learning on downstream NLP tasks. Our contributions are three-fold:"
],
[
"We propose an Encoder-Classifier model, where the Encoder, leveraging the representations learned by a multilingual NMT model, converts an input sequence ${\\mathbf {x}}$ into a set of vectors C, and the Classifier predicts a class label $y$ given the encoding of the input sequence, C."
],
[
"Although there has been a large body of work in building multilingual NMT models which can translate between multiple languages at the same time BIBREF29 , BIBREF30 , BIBREF31 , BIBREF8 , zero-shot capabilities of such multilingual representations have only been tested for MT BIBREF8 . We propose a simple yet effective solution - reuse the encoder of a multilingual NMT model to initialize the encoder for other NLP tasks. To be able to achieve promising zero-shot classification performance, we consider two factors: (1) The ability to encode multiple source languages with the same encoder and (2) The ability to learn language agnostic representations of the source sequence. Based on the literature, both requirements can be satisfied by training a multilingual NMT model having a shared encoder BIBREF32 , BIBREF8 , and a separate decoder and attention mechanism for each target language BIBREF30 . After training such a multilingual NMT model, the decoder and the corresponding attention mechanisms (which are target-language specific) are discarded, while the multilingual encoder is used to initialize the encoder of our proposed Encoder-Classifier model."
],
[
"In order to leverage pre-trained multilingual representations introduced in Section \"Analyses\" , our encoder strictly follows the structure of a regular Recurrent Neural Network (RNN) based NMT encoder BIBREF33 with a stacked layout BIBREF34 . Given an input sequence ${\\mathbf {x}} = (x_{1}, x_{2}, \\ldots , x_{T_x})$ of length $T_x$ , our encoder contextualizes or encodes the input sequence into a set of vectors C, by first applying a bi-directional RNN BIBREF35 , followed by a stack of uni-directional RNNs. The hidden states of the final layer RNN, $h_i^l$ , form the set C $~=\\lbrace h_i^l \\rbrace _{i=1}^{T_x}$ of context vectors which will be used by the classifier, where $l$ denotes the number of RNN layers in the stacked encoder.",
"The task of the classifier is to predict a class label $y$ given the context set C. To ease this classification task given a variable length input set C, a common approach in the literature is to extract a single sentence vector $\\mathbf {q}$ by making use of pooling over time BIBREF36 . Further, to increase the modeling capacity, the pooling operation can be parameterized using pre- and post-pooling networks. Formally, given the context set C, we extract a sentence vector $\\mathbf {q}$ in three steps, using three networks, (1) pre-pooling feed-forward network $f_{pre}$ , (2) pooling network $f_{pool}$ and (3) post-pooling feed-forward network $f_{post}$ , $\n\\mathbf {q} = f_{post}( f_{pool} ( f_{pre} (\\textbf {C}) ) ).\n$ ",
" Finally, given the sentence vector $\\mathbf {q}$ , a class label $y$ is predicted by employing a softmax function."
],
[
"We evaluate the proposed method on three common NLP tasks: Amazon Reviews, SST and SNLI. We utilize parallel data to train our multilingual NMT system, as detailed below.",
"For the MT task, we use the WMT 2014 En $\\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously. We generated a shared sub-word vocabulary BIBREF37 , BIBREF38 of 32K units from all source and target training data. We use this sub-word vocabulary for all of our experiments below.",
"The Amazon reviews dataset BIBREF39 is a multilingual sentiment classification dataset, providing data for four languages - English (En), French (Fr), German (De), and Japanese. We use the English and French datasets in our experiments. The dataset contains 6,000 documents in the train and test portions for each language. Each review consists of a category label, a title, a review, and a star rating (5-point scale). We only use the review text in our experiments. Following BIBREF39 , we mapped the reviews with lower scores (1 and 2) to negative examples and the reviews with higher scores (4 and 5) to positive examples, thereby turning it into a binary classification problem. Reviews with score 3 are dropped. We split the training dataset into 10% for development and the rest for training, and we truncate each example and keep the first 200 words in the review. Note that, since the data for each language was obtained by crawling different product pages, the data is not aligned across languages.",
"The sentiment classification task proposed in BIBREF9 is also a binary classification problem where each sentence and phrase is associated with either a positive or a negative sentiment. We ignore phrase-level annotations and sentence-level neutral examples in our experiments. The dataset contains 6920, 872, and 1821 examples for training, development and testing, respectively. Since SST does not provide a multilingual test set, we used the public translation engine Google Translate to translate the SST test set to French. Previous work by BIBREF40 has shown that replacing the human translated test set with a synthetic set (obtained by using Google Translate) produces only a small difference of around 1% absolute accuracy on their human-translated French SNLI test set. Therefore, the performance measured on our `pseudo' French SST test set is expected to be a good indicator of zero-shot performance.",
"Natural language inference is a task that aims to determine whether a natural language hypothesis $\\mathbf {h}$ can justifiably be inferred from a natural language premise $\\mathbf {p}$ . SNLI BIBREF10 is one of the largest datasets for a natural language inference task in English and contains multiple sentence pairs with a sentence-level entailment label. Each pair of sentences can have one of three labels - entailment, contradiction, and neutral, which are annotated by multiple humans. The dataset contains 550K training, 10K validation, and 10K testing examples. To enable research on multilingual SNLI, BIBREF40 chose a subset of the SNLI test set (1332 sentences) and professionally translated it into four major languages - Arabic, French, Russian, and Spanish. We use the French test set for evaluation in Section \"Zero-Shot Classification Results\" and \"Analyses\" ."
],
[
"Here, we first describe the model and training details of the base multilingual NMT model whose encoder is reused in all other tasks. Then we provide details about the task-specific classifiers. For each task, we provide the specifics of $f_{pre}$ , $f_{pool}$ and $f_{post}$ nets that build the task-specific classifier.",
"All the models in our experiments are trained using Adam optimizer BIBREF42 with label smoothing BIBREF43 and unless otherwise stated below, layer normalization BIBREF44 is applied to all LSTM gates and feed-forward layer inputs. We apply L2 regularization to the model weights and dropout to layer activations and sub-word embeddings. Hyper-parameters, such as mixing ratio $\\lambda $ of L2 regularization, dropout rates, label smoothing uncertainty, batch sizes, learning rate of optimizers and initialization ranges of weights are tuned on the development sets provided for each task separately.",
"Our multilingual NMT model consists of a shared multilingual encoder and two decoders, one for English and the other for French. The multilingual encoder uses one bi-directional LSTM, followed by three stacked layers of uni-directional LSTMs in the encoder. Each decoder consists of four stacked LSTM layers, with the first LSTM layers intertwined with additive attention networks BIBREF33 to learn a source-target alignment function. All the uni-directional LSTMs are equipped with residual connections BIBREF45 to ease the optimization, both in the encoder and the decoders. LSTM hidden units and the shared source-target embedding dimensions are set to 512.",
"Similar to BIBREF30 , multilingual NMT model is trained in a multi-task learning setup, where each decoder is augmented with a task-specific loss, minimizing the negative conditional log-likelihood of the target sequence given the source sequence. During training, mini-batches of En $\\rightarrow $ Fr and Fr $\\rightarrow $ En examples are interleaved. We picked the best model based on the best average development set BLEU score on both of the language pairs.",
"The Encoder-Classifier model here uses the encoder defined previously. With regards to the classifier, the pre- and post-pooling networks ( $f_{pre}$ , $f_{post}$ ) are both one-layer feed forward networks to cast the dimension size from 512 to 128 and from 128 to 32, respectively. We used max-pooling operator for the $f_{pool}$ network to pool the activation over time.",
"We extended the proposed Encoder-Classifier model to a multi-source model BIBREF46 since SNLI is an inference task of relations between two input sentences, “premise\" and “hypothesis\". For the two sources, we use two separate encoders, which are initialized with the same pre-trained multilingual NMT encoder, to obtain their representations. Following our notation, the encoder outputs are processed using $f_{pre}$ , $f_{pool}$ and $f_{post}$ nets, again with two separate network blocks. Specifically, $f_{pre}$ consists of a co-attention layer BIBREF47 followed by a two-layer feed-forward neural network with residual connections. We use max pooling over time for $f_{pool}$ and again a two-layer feed-forward neural network with residual connections as $f_{post}$ . After processing two sentence encodings using two network blocks, we obtain two vectors representing premise $\\mathbf {h}_{premise}$ and hypothesis $\\mathbf {h}_{hypothesis}$ . Following BIBREF48 , we compute two types of relational vectors with $\\mathbf {h}_{-} = |\\mathbf {h}_{premise} - \\mathbf {h}_{hypothesis}|,$ and $\\mathbf {h}_{\\times } = \\mathbf {h}_{premise} \\odot \\mathbf {h}_{hypothesis}$ , where $f_{pool}$0 denotes the element-wise multiplication between two vectors. The final relation vector is obtained by concatenating $f_{pool}$1 and $f_{pool}$2 . For both “premise\" and “hypothesis\" feed-forward networks we used 512 hidden dimensions.",
"For Amazon Reviews, SST and SNLI tasks, we picked the best model based on the highest development set accuracy."
],
[
"In this section, we report our results for the three tasks - Amazon Reviews (English and French), SST, and SNLI. For each task, we first build a baseline system using the proposed Encoder-Classifier architecture described in Section \"Proposed Method\" where the encoder is initialized randomly. Next, we experiment with using the pre-trained multilingual NMT encoder to initialize the system as described in Section \"Analyses\" . Finally, we perform an experiment where we freeze the encoder after initialization and only update the classifier component of the system.",
"Table 1 summarizes the accuracy of our proposed system for these three different approaches and the state-of-the-art results on all the tasks. The first row in the table shows the baseline accuracy of our system for all four datasets. The second row shows the result from initializing with a pre-trained multilingual NMT encoder. It can be seen that this provides a significant improvement in accuracy, an average of 4.63%, across all the tasks. This illustrates that the multilingual NMT encoder has successfully learned transferable contextualized representations that are leveraged by the classifier component of our proposed system. These results are in line with the results in BIBREF5 where the authors used the representations from the top NMT encoder layer as an additional input to the task-specific system. However, in our setup we reused all of the layers of the encoder as a single pre-trained component in the task-specific system. The third row shows the results from freezing the pre-trained encoder after initialization and only training the classifier component. For the Amazon English and French tasks, freezing the encoder after initialization significantly improves the performance further. We hypothesize that since the Amazon dataset is a document level classification task, the long input sequences are very different from the short sequences consumed by the NMT system and hence freezing the encoder seems to have a positive effect. This hypothesis is also supported by the SNLI and SST results, which contain sentence-level input sequences, where we did not find any significant difference between freezing and not freezing the encoder."
],
[
"In this section, we explore the zero-shot classification task in French for our systems. We assume that we do not have any French training data for all the three tasks and test how well our proposed method can generalize to the unseen French language without any further training. Specifically, we reuse the three proposed systems from Table 1 after being trained only on the English classification task and test the systems on data from an unseen language (e.g. French). A reasonable upper bound to which zero-shot performance should be compared to is bridging - translating a French test text to English and then applying the English classifier on the translated text. If we assume the translation to be perfect, we should expect this approach to perform as well as the English classifier.",
"The Amazon Reviews and SNLI tasks have a French test set available, and we evaluate the performance of the bridged and zero-shot systems on each French set. However, the SST dataset does not have a French test set, hence the `pseudo French' test set described in Section UID14 is used to evaluate the zero-shot performance. We use the English accuracy scores from the SST column in Table 1 as a high-quality proxy for the SST bridged system. We do this since translating the `pseudo French' back to English will result in two distinct translation steps and hence more errors.",
"Table 2 summarizes all of our zero-shot results for French classification on the three tasks. It can be seen that just by using the pre-trained NMT encoder, the zero-shot performance increases drastically from almost random to within 10% of the bridged system. Freezing the encoder further pushes this performance closer to the bridged system. On the Amazon Review task, our zero-shot system is within 2% of the best bridged system. On the SST task, our zero-shot system obtains an accuracy of 83.14% which is within 1.5% of the bridged equivalent (in this case the English system).",
"Finally, on SNLI, we compare our best zero-shot system with bilingual and multilingual embedding based methods evaluated on the same French test set in BIBREF40 . As illustrated in Table 3 , our best zero-shot system obtains the highest accuracy of 73.88%. INVERT BIBREF23 uses inverted indexing over a parallel corpus to obtain crosslingual word representations. BiCVM BIBREF25 learns bilingual compositional representations from sentence-aligned parallel corpora. In RANDOM BIBREF24 , bilingual embeddings are trained on top of parallel sentences with randomly shuffled tokens using skip-gram with negative sampling, and RATIO is similar to RANDOM with the one difference being that the tokens in the parallel sentences are not randomly shuffled. Our system significantly outperforms all methods listed in the second column by 10.66% to 15.24% and demonstrates the effectiveness of our proposed approach."
],
[
"In this section, we try to analyze why our simple Encoder-Classifier system is effective at zero-shot classification. We perform a series of experiments to better understand this phenomenon. In particular, we study (1) the effect of shared sub-word vocabulary, (2) the amount of multilingual training data to measure the influence of multilinguality, (3) encoder/classifier capacity to measure the influence of representation power, and (4) model behavior on different training phases to assess the relation between generalization performance on English and zero-shot performance on French."
],
[
"In this paper, we have demonstrated a simple yet effective approach to perform cross-lingual transfer learning using representations from a multilingual NMT model. Our proposed approach of reusing the encoder from a multilingual NMT system as a pre-trained component provides significant improvements on three downstream tasks. Further, our approach enables us to perform surprisingly competitive zero-shot classification on an unseen language and outperforms cross-lingual embedding base methods. Finally, we end with a series of analyses which shed light on the factors that contribute to the zero-shot phenomenon. We hope that these results showcase the efficacy of multilingual NMT to learn transferable contextualized representations for many downstream tasks."
]
],
"section_name": [
"Introduction",
"Proposed Method",
"Multilingual Representations Using NMT",
"Multilingual Encoder-Classifier",
"Corpora",
"Model and Training Details",
"Transfer Learning Results",
"Zero-Shot Classification Results",
"Analyses",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"09194b62d31ef50c74d81ba330cf0d816da83d95"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"f57f20aa015b4c9c640ce2729851ea8a9d45c360"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"103d0d2040a12a509171cbe3ce33664e976243bb"
],
"answer": [
{
"evidence": [
"For the MT task, we use the WMT 2014 En $\\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously. We generated a shared sub-word vocabulary BIBREF37 , BIBREF38 of 32K units from all source and target training data. We use this sub-word vocabulary for all of our experiments below."
],
"extractive_spans": [],
"free_form_answer": "WMT 2014 En-Fr parallel corpus",
"highlighted_evidence": [
"For the MT task, we use the WMT 2014 En $\\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"f840a836eee0180d2c976457f8b3052d8e78050c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"Do the other multilingual baselines make use of the same amount of training data?",
"How big is the impact of training data size on the performance of the multilingual encoder?",
"What data were they used to train the multilingual encoder?"
],
"question_id": [
"05196588320dfb0b9d9be7d64864c43968d329bc",
"e930f153c89dfe9cff75b7b15e45cd3d700f4c72",
"545ff2f76913866304bfacdb4cc10d31dbbd2f37"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"multilingual classification",
"multilingual classification",
"multilingual classification"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Transfer learning results of the classification accuracy on all the datasets. Amazon (En) and Amazon (Fr) are the English and French versions of the task, training the models on the data for each language. The state-of-the-art results are cited from Fernndez, Esuli, and Sebastiani (2016) for both Amazon Reviews tasks and McCann et al. (2017) for SST and SNLI.",
"Table 2: Zero-Shot performance on all French test sets. ∗Note that we use the English accuracy in the bridged column for SST.",
"Table 3: Comparison of our best zero-shot result on the French SNLI test set to other baselines. See text for details.",
"Table 4: Results of the control experiment on zero-shot performance on the Amazon German test set.",
"Table 5: Effect of machine translation data over our proposed Encoder-Classifier on the SNLI tasks. The results of SNLI (Fr) shows the zero-shot performance of our system.",
"Table 6: Zero-shot analyses of classifier network model capacity. The SNLI (Fr) results report the zero-shot performance.",
"Figure 1: Correlation between test-loss, test-accuracy (the English SNLI) and zero-shot accuracy (the French test set).",
"Table 7: Effect of parameter smoothing on the English SNLI test set and zero-shot performance on the French test set."
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"7-Figure1-1.png",
"7-Table7-1.png"
]
} | [
"What data were they used to train the multilingual encoder?"
] | [
[
"1809.04686-Corpora-1"
]
] | [
"WMT 2014 En-Fr parallel corpus"
] | 156 |
1703.09684 | An Analysis of Visual Question Answering Algorithms | In visual question answering (VQA), an algorithm must answer text-based questions about images. While multiple datasets for VQA have been created since late 2014, they all have flaws in both their content and the way algorithms are evaluated on them. As a result, evaluation scores are inflated and predominantly determined by answering easier questions, making it difficult to compare different methods. In this paper, we analyze existing VQA algorithms using a new dataset. It contains over 1.6 million questions organized into 12 different categories. We also introduce questions that are meaningless for a given image to force a VQA system to reason about image content. We propose new evaluation schemes that compensate for over-represented question-types and make it easier to study the strengths and weaknesses of algorithms. We analyze the performance of both baseline and state-of-the-art VQA models, including multi-modal compact bilinear pooling (MCB), neural module networks, and recurrent answering units. Our experiments establish how attention helps certain categories more than others, determine which models work better than others, and explain how simple models (e.g. MLP) can surpass more complex models (MCB) by simply learning to answer large, easy question categories. | {
"paragraphs": [
[
"In open-ended visual question answering (VQA) an algorithm must produce answers to arbitrary text-based questions about images BIBREF0 , BIBREF1 . VQA is an exciting computer vision problem that requires a system to be capable of many tasks. Truly solving VQA would be a milestone in artificial intelligence, and would significantly advance human computer interaction. However, VQA datasets must test a wide range of abilities for progress to be adequately measured.",
"VQA research began in earnest in late 2014 when the DAQUAR dataset was released BIBREF0 . Including DAQUAR, six major VQA datasets have been released, and algorithms have rapidly improved. On the most popular dataset, `The VQA Dataset' BIBREF1 , the best algorithms are now approaching 70% accuracy BIBREF2 (human performance is 83%). While these results are promising, there are critical problems with existing datasets in terms of multiple kinds of biases. Moreover, because existing datasets do not group instances into meaningful categories, it is not easy to compare the abilities of individual algorithms. For example, one method may excel at color questions compared to answering questions requiring spatial reasoning. Because color questions are far more common in the dataset, an algorithm that performs well at spatial reasoning will not be appropriately rewarded for that feat due to the evaluation metrics that are used.",
"Contributions: Our paper has four major contributions aimed at better analyzing and comparing VQA algorithms: 1) We create a new VQA benchmark dataset where questions are divided into 12 different categories based on the task they solve; 2) We propose two new evaluation metrics that compensate for forms of dataset bias; 3) We balance the number of yes/no object presence detection questions to assess whether a balanced distribution can help algorithms learn better; and 4) We introduce absurd questions that force an algorithm to determine if a question is valid for a given image. We then use the new dataset to re-train and evaluate both baseline and state-of-the-art VQA algorithms. We found that our proposed approach enables more nuanced comparisons of VQA algorithms, and helps us understand the benefits of specific techniques better. In addition, it also allowed us to answer several key questions about VQA algorithms, such as, `Is the generalization capacity of the algorithms hindered by the bias in the dataset?', `Does the use of spatial attention help answer specific question-types?', `How successful are the VQA algorithms in answering less-common questions?', and 'Can the VQA algorithms differentiate between real and absurd questions?'"
],
[
"Six datasets for VQA with natural images have been released between 2014–2016: DAQUAR BIBREF0 , COCO-QA BIBREF3 , FM-IQA BIBREF4 , The VQA Dataset BIBREF1 , Visual7W BIBREF5 , and Visual Genome BIBREF6 . FM-IQA needs human judges and has not been widely used, so we do not discuss it further. Table 1 shows statistics for the other datasets. Following others BIBREF7 , BIBREF8 , BIBREF9 , we refer to the portion of The VQA Dataset containing natural images as COCO-VQA. Detailed dataset reviews can be found in BIBREF10 and BIBREF11 .",
"All of the aforementioned VQA datasets are biased. DAQUAR and COCO-QA are small and have a limited variety of question-types. Visual Genome, Visual7W, and COCO-VQA are larger, but they suffer from several biases. Bias takes the form of both the kinds of questions asked and the answers that people give for them. For COCO-VQA, a system trained using only question features achieves 50% accuracy BIBREF7 . This suggests that some questions have predictable answers. Without a more nuanced analysis, it is challenging to determine what kinds of questions are more dependent on the image. For datasets made using Mechanical Turk, annotators often ask object recognition questions, e.g., `What is in the image?' or `Is there an elephant in the image?'. Note that in the latter example, annotators rarely ask that kind of question unless the object is in the image. On COCO-VQA, 79% of questions beginning with `Is there a' will have `yes' as their ground truth answer.",
"In 2017, the VQA 2.0 BIBREF12 dataset was introduced. In VQA 2.0, the same question is asked for two different images and annotators are instructed to give opposite answers, which helped reduce language bias. However, in addition to language bias, these datasets are also biased in their distribution of different types of questions and the distribution of answers within each question-type. Existing VQA datasets use performance metrics that treat each test instance with equal value (e.g., simple accuracy). While some do compute additional statistics for basic question-types, overall performance is not computed from these sub-scores BIBREF1 , BIBREF3 . This exacerbates the issues with the bias because the question-types that are more likely to be biased are also more common. Questions beginning with `Why' and `Where' are rarely asked by annotators compared to those beginning with `Is' and 'Are'. For example, on COCO-VQA, improving accuracy on `Is/Are' questions by 15% will increase overall accuracy by over 5%, but answering all `Why/Where' questions correctly will increase accuracy by only 4.1% BIBREF10 . Due to the inability of the existing evaluation metrics to properly address these biases, algorithms trained on these datasets learn to exploit these biases, resulting in systems that work poorly when deployed in the real-world.",
"For related reasons, major benchmarks released in the last decade do not use simple accuracy for evaluating image recognition and related computer vision tasks, but instead use metrics such as mean-per-class accuracy that compensates for unbalanced categories. For example, on Caltech-101 BIBREF13 , even with balanced training data, simple accuracy fails to address the fact that some categories were much easier to classify than others (e.g., faces and planes were easy and also had the largest number of test images). Mean per-class accuracy compensates for this by requiring a system to do well on each category, even when the amount of test instances in categories vary considerably.",
"Existing benchmarks do not require reporting accuracies across different question-types. Even when they are reported, the question-types can be too coarse to be useful, e.g., `yes/no', `number' and `other' in COCO-VQA. To improve the analysis of the VQA algorithms, we categorize the questions into meaningful types, calculate the sub-scores, and incorporate them in our evaluation metrics."
],
[
"Previous works have studied bias in VQA and proposed countermeasures. In BIBREF14 , the Yin and Yang dataset was created to study the effect of having an equal number of binary (yes/no) questions about cartoon images. They found that answering questions from a balanced dataset was harder. This work is significant, but it was limited to yes/no questions and their approach using cartoon imagery cannot be directly extended to real-world images.",
"One of the goals of this paper is to determine what kinds of questions an algorithm can answer easily. In BIBREF15 , the SHAPES dataset was proposed, which has similar objectives. SHAPES is a small dataset, consisting of 64 images that are composed by arranging colored geometric shapes in different spatial orientations. Each image has the same 244 yes/no questions, resulting in 15,616 questions. Although SHAPES serves as an important adjunct evaluation, it alone cannot suffice for testing a VQA algorithm. The major limitation of SHAPES is that all of its images are of 2D shapes, which are not representative of real-world imagery. Along similar lines, Compositional Language and Elementary Visual Reasoning (CLEVR) BIBREF16 also proposes use of 3D rendered geometric objects to study reasoning capacities of a model. CLEVR is larger than SHAPES and makes use of 3D rendered geometric objects. In addition to shape and color, it adds material property to the objects. CLEVR has five types of questions: attribute query, attribute comparison, integer comparison, counting, and existence.",
"Both SHAPES and CLEVR were specifically tailored for compositional language approaches BIBREF15 and downplay the importance of visual reasoning. For instance, the CLEVR question, `What size is the cylinder that is left of the brown metal thing that is left of the big sphere?' requires demanding language reasoning capabilities, but only limited visual understanding is needed to parse simple geometric objects. Unlike these three synthetic datasets, our dataset contains natural images and questions. To improve algorithm analysis and comparison, our dataset has more (12) explicitly defined question-types and new evaluation metrics."
],
[
"In the past two years, multiple publicly released datasets have spurred the VQA research. However, due to the biases and issues with evaluation metrics, interpreting and comparing the performance of VQA systems can be opaque. We propose a new benchmark dataset that explicitly assigns questions into 12 distinct categories. This enables measuring performance within each category and understand which kind of questions are easy or hard for today's best systems. Additionally, we use evaluation metrics that further compensate for the biases. We call the dataset the Task Driven Image Understanding Challenge (TDIUC). The overall statistics and example images of this dataset are shown in Table 1 and Fig. 2 respectively.",
"TDIUC has 12 question-types that were chosen to represent both classical computer vision tasks and novel high-level vision tasks which require varying degrees of image understanding and reasoning. The question-types are:",
"The number of each question-type in TDIUC is given in Table 2 . The questions come from three sources. First, we imported a subset of questions from COCO-VQA and Visual Genome. Second, we created algorithms that generated questions from COCO's semantic segmentation annotations BIBREF17 , and Visual Genome's objects and attributes annotations BIBREF6 . Third, we used human annotators for certain question-types. In the following sections, we briefly describe each of these methods."
],
[
"We imported questions from COCO-VQA and Visual Genome belonging to all question-types except `object utilities and affordances'. We did this by using a large number of templates and regular expressions. For Visual Genome, we imported questions that had one word answers. For COCO-VQA, we imported questions with one or two word answers and in which five or more annotators agreed.",
"For color questions, a question would be imported if it contained the word `color' in it and the answer was a commonly used color. Questions were classified as activity or sports recognition questions if the answer was one of nine common sports or one of fifteen common activities and the question contained common verbs describing actions or sports, e.g., playing, throwing, etc. For counting, the question had to begin with `How many' and the answer had to be a small countable integer (1-16). The other categories were determined using regular expressions. For example, a question of the form `Are feeling ?' was classified as sentiment understanding and `What is to the right of/left of/ behind the ?' was classified as positional reasoning. Similarly, `What <OBJECT CATEGORY> is in the image?' and similar templates were used to populate subordinate object recognition questions. This method was used for questions about the season and weather as well, e.g., `What season is this?', `Is this rainy/sunny/cloudy?', or `What is the weather like?' were imported to scene classification."
],
[
"Images in the COCO dataset and Visual Genome both have individual regions with semantic knowledge attached to them. We exploit this information to generate new questions using question templates. To introduce variety, we define multiple templates for each question-type and use the annotations to populate them. For example, for counting we use 8 templates, e.g., `How many <objects> are there?', `How many <objects> are in the photo?', etc. Since the COCO and Visual Genome use different annotation formats, we discuss them separately.",
"Sport recognition, counting, subordinate object recognition, object presence, scene understanding, positional reasoning, and absurd questions were created from COCO, similar to the scheme used in BIBREF18 . For counting, we count the number of object instances in an image annotation. To minimize ambiguity, this was only done if objects covered an area of at least 2,000 pixels.",
"For subordinate object recognition, we create questions that require identifying an object's subordinate-level object classification based on its larger semantic category. To do this, we use COCO supercategories, which are semantic concepts encompassing several objects under a common theme, e.g., the supercategory `furniture' contains chair, couch, etc. If the image contains only one type of furniture, then a question similar to `What kind of furniture is in the picture?' is generated because the answer is not ambiguous. Using similar heuristics, we create questions about identifying food, electronic appliances, kitchen appliances, animals, and vehicles.",
"For object presence questions, we find images with objects that have an area larger than 2,000 pixels and produce a question similar to `Is there a <object> in the picture?' These questions will have `yes' as an answer. To create negative questions, we ask questions about COCO objects that are not present in an image. To make this harder, we prioritize the creation of questions referring to absent objects that belong to the same supercategory of objects that are present in the image. A street scene is more likely to contain trucks and cars than it is to contain couches and televisions. Therefore, it is more difficult to answer `Is there a truck?' in a street scene than it is to answer `Is there a couch?'",
"For sport recognition questions, we detect the presence of specific sports equipment in the annotations and ask questions about the type of sport being played. Images must only contain sports equipment for one particular sport. A similar approach was used to create scene understanding questions. For example, if a toilet and a sink are present in annotations, the room is a bathroom and an appropriate scene recognition question can be created. Additionally, we use the supercategories `indoor' and `outdoor' to ask questions about where a photo was taken.",
"For creating positional reasoning questions, we use the relative locations of bounding boxes to create questions similar to `What is to the left/right of <object>?' This can be ambiguous due to overlapping objects, so we employ the following heuristics to eliminate ambiguity: 1) The vertical separation between the two bounding boxes should be within a small threshold; 2) The objects should not overlap by more than the half the length of its counterpart; and 3) The objects should not be horizontally separated by more than a distance threshold, determined by subjectively judging optimal separation to reduce ambiguity. We tried to generate above/below questions, but the results were unreliable.",
"Absurd questions test the ability of an algorithm to judge when a question is not answerable based on the image's content. To make these, we make a list of the objects that are absent from a given image, and then we find questions from rest of TDIUC that ask about these absent objects, with the exception of yes/no and counting questions. This includes questions imported from COCO-VQA, auto-generated questions, and manually created questions. We make a list of all possible questions that would be `absurd' for each image and we uniformly sample three questions per image. In effect, we will have same question repeated multiple times throughout the dataset, where it can either be a genuine question or a nonsensical question. The algorithm must answer `Does Not Apply' if the question is absurd.",
"Visual Genome's annotations contain region descriptions, relationship graphs, and object boundaries. However, the annotations can be both non-exhaustive and duplicated, which makes using them to automatically make QA pairs difficult. We only use Visual Genome to make color and positional reasoning questions. The methods we used are similar to those used with COCO, but additional precautions were needed due to quirks in their annotations. Additional details are provided in the Appendix."
],
[
"Creating sentiment understanding and object utility/affordance questions cannot be readily done using templates, so we used manual annotation to create these. Twelve volunteer annotators were trained to generate these questions, and they used a web-based annotation tool that we developed. They were shown random images from COCO and Visual Genome and could also upload images."
],
[
"Post processing was performed on questions from all sources. All numbers were converted to text, e.g., 2 became two. All answers were converted to lowercase, and trailing punctuation was stripped. Duplicate questions for the same image were removed. All questions had to have answers that appeared at least twice. The dataset was split into train and test splits with 70% for train and 30% for test."
],
[
"One of the main goals of VQA research is to build computer vision systems capable of many tasks, instead of only having expertise at one specific task (e.g., object recognition). For this reason, some have argued that VQA is a kind of Visual Turing Test BIBREF0 . However, if simple accuracy is used for evaluating performance, then it is hard to know if a system succeeds at this goal because some question-types have far more questions than others. In VQA, skewed distributions of question-types are to be expected. If each test question is treated equally, then it is difficult to assess performance on rarer question-types and to compensate for bias. We propose multiple measures to compensate for bias and skewed distributions.",
"To compensate for the skewed question-type distribution, we compute accuracy for each of the 12 question-types separately. However, it is also important to have a final unified accuracy metric. Our overall metrics are the arithmetic and harmonic means across all per question-type accuracies, referred to as arithmetic mean-per-type (Arithmetic MPT) accuracy and harmonic mean-per-type accuracy (Harmonic MPT). Unlike the Arithmetic MPT, Harmonic MPT measures the ability of a system to have high scores across all question-types and is skewed towards lowest performing categories.",
"We also use normalized metrics that compensate for bias in the form of imbalance in the distribution of answers within each question-type, e.g., the most repeated answer `two' covers over 35% of all the counting-type questions. To do this, we compute the accuracy for each unique answer separately within a question-type and then average them together for the question-type. To compute overall performance, we compute the arithmetic normalized mean per-type (N-MPT) and harmonic N-MPT scores. A large discrepancy between unnormalized and normalized scores suggests an algorithm is not generalizing to rarer answers."
],
[
"While there are alternative formulations (e.g., BIBREF4 , BIBREF19 ), the majority of VQA systems formulate it as a classification problem in which the system is given an image and a question, with the answers as categories. BIBREF1 , BIBREF3 , BIBREF2 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF9 , BIBREF27 , BIBREF28 , BIBREF8 , BIBREF19 , BIBREF29 . Almost all systems use CNN features to represent the image and either a recurrent neural network (RNN) or a bag-of-words model for the question. We briefly review some of these systems, focusing on the models we compare in experiments. For a more comprehensive review, see BIBREF10 and BIBREF11 .",
"Two simple VQA baselines are linear or multi-layer perceptron (MLP) classifiers that take as input the question and image embeddings concatenated to each other BIBREF1 , BIBREF7 , BIBREF8 , where the image features come from the last hidden layer of a CNN. These simple approaches often work well and can be competitive with complex attentive models BIBREF7 , BIBREF8 .",
"Spatial attention has been heavily investigated in VQA models BIBREF2 , BIBREF20 , BIBREF28 , BIBREF30 , BIBREF27 , BIBREF24 , BIBREF21 . These systems weigh the visual features based on their relevance to the question, instead of using global features, e.g., from the last hidden layer of a CNN. For example, to answer `What color is the bear?' they aim emphasize the visual features around the bear and suppress other features.",
"The MCB system BIBREF2 won the CVPR-2016 VQA Workshop Challenge. In addition to using spatial attention, it implicitly computes the outer product between the image and question features to ensure that all of their elements interact. Explicitly computing the outer product would be slow and extremely high dimensional, so it is done using an efficient approximation. It uses an long short-term memory (LSTM) networks to embed the question.",
"The neural module network (NMN) is an especially interesting compositional approach to VQA BIBREF15 , BIBREF31 . The main idea is to compose a series of discrete modules (sub-networks) that can be executed collectively to answer a given question. To achieve this, they use a variety of modules, e.g., the find(x) module outputs a heat map for detecting $x$ . To arrange the modules, the question is first parsed into a concise expression (called an S-expression), e.g., `What is to the right of the car?' is parsed into (what car);(what right);(what (and car right)). Using these expressions, modules are composed into a sequence to answer the query.",
"The multi-step recurrent answering units (RAU) model for VQA is another state-of-the-art method BIBREF32 . Each inference step in RAU consists of a complete answering block that takes in an image, a question, and the output from the previous LSTM step. Each of these is part of a larger LSTM network that progressively reasons about the question."
],
[
"We trained multiple baseline models as well as state-of-the-art VQA methods on TDIUC. The methods we use are:",
"For image features, ResNet-152 BIBREF33 with $448 \\times 448$ images was used for all models.",
"QUES and IMG provide information about biases in the dataset. QUES, Q+I, and MLP all use 4800-dimensional skip-thought vectors BIBREF34 to embed the question, as was done in BIBREF7 . For image features, these all use the `pool5' layer of ResNet-152 normalized to unit length. MLP is a 4-layer net with a softmax output layer. The 3 ReLU hidden layers have 6000, 4000, and 2000 units, respectively. During training, dropout (0.3) was used for the hidden layers.",
"For MCB, MCB-A, NMN and RAU, we used publicly available code to train them on TDIUC. The experimental setup and hyperparamters were kept unchanged from the default choices in the code, except for upgrading NMN and RAU's visual representation to both use ResNet-152.",
"Results on TDIUC for these models are given in Table 3 . Accuracy scores are given for each of the 12 question-types in Table 3 , and scores that are normalized by using mean-per-unique-answer are given in appendix Table 5 ."
],
[
"By inspecting Table 3 , we can see that some question-types are comparatively easy ( $>90$ %) under MPT: scene recognition, sport recognition, and object presence. High accuracy is also achieved on absurd, which we discuss in greater detail in Sec. \"Effects of Including Absurd Questions\" . Subordinate object recognition is moderately high ( $>80$ %), despite having a large number of unique answers. Accuracy on counting is low across all methods, despite a large number of training data. For the remaining question-types, more analysis is needed to pinpoint whether the weaker performance is due to lower amounts of training data, bias, or limitations of the models. We next investigate how much of the good performance is due to bias in the answer distribution, which N-MPT compensates for."
],
[
"One of our major aims was to compensate for the fact that algorithms can achieve high scores by simply learning to answer more populated and easier question-types. For existing datasets, earlier work has shown that simple baseline methods routinely exceed more complex methods using simple accuracy BIBREF7 , BIBREF8 , BIBREF19 . On TDIUC, MLP surpasses MCB and NMN in terms of simple accuracy, but a closer inspection reveals that MLP's score is highly determined by performance on categories with a large number of examples, such as `absurd' and `object presence.' Using MPT, we find that both NMN and MCB outperform MLP. Inspecting normalized scores for each question-type (Appendix Table 5 ) shows an even more pronounced differences, which is also reflected in arithmetic N-MPT score presented in Table 3 . This indicates that MLP is prone to overfitting. Similar observations can be made for MCB-A compared to RAU, where RAU outperforms MCB-A using simple accuracy, but scores lower on all the metrics designed to compensate for the skewed answer distribution and bias.",
"Comparing the unnormalized and normalized metrics can help us determine the generalization capacity of the VQA algorithms for a given question-type. A large difference in these scores suggests that an algorithm is relying on the skewed answer distribution to obtain high scores. We found that for MCB-A, the accuracy on subordinate object recognition drops from 85.54% with unnormalized to 23.22% with normalized, and for scene recognition it drops from 93.06% (unnormalized) to 38.53% (normalized). Both these categories have a heavily skewed answer distribution; the top-25 answers in subordinate object recognition and the top-5 answers in scene recognition cover over 80% of all questions in their respective question-types. This shows that question-types that appear to be easy may simply be due to the algorithms learning the answer statistics. A truly easy question-type will have similar performance for both unnormalized and normalized metrics. For example, sport recognition shows only 17.39% drop compared to a 30.21% drop for counting, despite counting having same number of unique answers and far more training data. By comparing relative drop in performance between normalized and unnormalized metric, we can also compare the generalization capability of the algorithms, e.g., for subordinate object recognition, RAU has higher unnormalized score (86.11%) compared to MCB-A (85.54%). However, for normalized scores, MCB-A has significantly higher performance (23.22%) than RAU (21.67%). This shows RAU may be more dependent on the answer distribution. Similar observations can be made for MLP compared to MCB."
],
[
"In the previous section, we saw that the VQA models struggle to correctly predict rarer answers. Are the less repeated questions actually harder to answer, or are the algorithms simply biased toward more frequent answers? To study this, we created a subset of TDIUC that only consisted of questions that have answers repeated less than 1000 times. We call this dataset TDIUC-Tail, which has 46,590 train and 22,065 test questions. Then, we trained MCB on: 1) the full TDIUC dataset; and 2) TDIUC-Tail. Both versions were evaluated on the validation split of TDIUC-Tail.",
"We found that MCB trained only on TDIUC-Tail outperformed MCB trained on all of TDIUC across all question-types (details are in appendix Table 6 and 7 ). This shows that MCB is capable of learning to correctly predict rarer answers, but it is simply biased towards predicting more common answers to maximize overall accuracy. Using normalized accuracy disincentivizes the VQA algorithms' reliance on the answer statistics, and for deploying a VQA system it may be useful to optimize directly for N-MPT."
],
[
"Absurd questions force a VQA system to look at the image to answer the question. In TDIUC, these questions are sampled from the rest of the dataset, and they have a high prior probability of being answered `Does not apply.' This is corroborated by the QUES model, which achieves a high accuracy on absurd; however, for the same questions when they are genuine for an image, it only achieves 6.77% accuracy on these questions. Good absurd performance is achieved by sacrificing performance on other categories. A robust VQA system should be able to detect absurd questions without then failing on others. By examining the accuracy on real questions that are identical to absurd questions, we can quantify an algorithm's ability to differentiate the absurd questions from the real ones. We found that simpler models had much lower accuracy on these questions, (QUES: 6.77%, Q+I: 34%), compared to more complex models (MCB: 62.44%, MCB-A: 68.83%).",
"To further study this, we we trained two VQA systems, Q+I and MCB, both with and without absurd. The results are presented in Table 3 . For Q+I trained without absurd questions, accuracies for other categories increase considerably compared to Q+I trained with full TDIUC, especially for question-types that are used to sample absurd questions, e.g., activity recognition (24% when trained with absurd and 48% without). Arithmetic MPT accuracy for the Q+I model that is trained without absurd (57.03%) is also substantially greater than MPT for the model trained with absurd (51.45% for all categories except absurd). This suggests that Q+I is not properly discriminating between absurd and real questions and is biased towards mis-identifying genuine questions as being absurd. In contrast, MCB, a more capable model, produces worse results for absurd, but the version trained without absurd shows much smaller differences than Q+I, which shows that MCB is more capable of identifying absurd questions."
],
[
"In Sec. \"Can Algorithms Predict Rare Answers?\" , we saw that a skewed answer distribution can impact generalization. This effect is strong even for simple questions and affects even the most sophisticated algorithms. Consider MCB-A when it is trained on both COCO-VQA and Visual Genome, i.e., the winner of the CVPR-2016 VQA Workshop Challenge. When it is evaluated on object presence questions from TDIUC, which contains 50% `yes' and 50% `no' questions, it correctly predicts `yes' answers with 86.3% accuracy, but only 11.2% for questions with `no' as an answer. However, after training it on TDIUC, MCB-A is able to achieve 95.02% for `yes' and 92.26% for `no.' MCB-A performed poorly by learning the biases in the COCO-VQA dataset, but it is capable of performing well when the dataset is unbiased. Similar observations about balancing yes/no questions were made in BIBREF14 . Datasets could balance simple categories like object presence, but extending the same idea to all other categories is a challenging task and undermines the natural statistics of the real-world. Adopting mean-per-class and normalized accuracy metrics can help compensate for this problem."
],
[
"By breaking questions into types, we can assess which types benefit the most from attention. We do this by comparing the MCB model with and without attention, i.e., MCB and MCB-A. As seen in Table 3 , attention helped improve results on several question categories. The most pronounced increases are for color recognition, attribute recognition, absurd, and counting. All of these question-types require the algorithm to detect specified object(s) (or lack thereof) to be answered correctly. MCB-A computes attention using local features from different spatial locations, instead of global image features. This aids in localizing individual objects. The attention mechanism learns the relative importance of these features. RAU also utilizes spatial attention and shows similar increments."
],
[
"NMN, and, to a lesser extent, RAU propose compositional approaches for VQA. For COCO-VQA, NMN has performed worse than some MLP models BIBREF7 using simple accuracy. We hoped that it would achieve better performance than other models for questions that require logically analyzing an image in a step-by-step manner, e.g., positional reasoning. However, while NMN did perform better than MLP using MPT and N-MPT metric, we did not see any substantial benefits in specific question-types. This may be because NMN is limited by the quality of the `S-expression' parser, which produces incorrect or misleading parses in many cases. For example, `What color is the jacket of the man on the far left?' is parsed as (color jacket);(color leave);(color (and jacket leave)). This expression not only fails to parse `the man', which is a crucial element needed to correctly answer the question, but also wrongly interprets `left' as past tense of leave.",
"RAU performs inference over multiple hops, and because each hop contains a complete VQA system, it can learn to solve different tasks in each step. Since it is trained end-to-end, it does not need to rely on rigid question parses. It showed very good performance in detecting absurd questions and also performed well on other categories."
],
[
"We introduced TDIUC, a VQA dataset that consists of 12 explicitly defined question-types, including absurd questions, and we used it to perform a rigorous analysis of recent VQA algorithms. We proposed new evaluation metrics to compensate for biases in VQA datasets. Results show that the absurd questions and the new evaluation metrics enable a deeper understanding of VQA algorithm behavior."
],
[
"In this section, we will provide additional details about the TDIUC dataset creation and additional statistics that were omitted from the main paper due to inadequate space."
],
[
"As mentioned in the main text, Visual Genome's annotations are both non-exhaustive and duplicated. This makes using them to automatically make question-answer (QA) pairs difficult. Due to these issues, we only used them to make two types of questions: Color Attributes and Positional Reasoning. Moreover, a number of restrictions needed to be placed, which are outlined below.",
"For making Color Attribute questions, we make use of the attributes metadata in the Visual Genome annotations to populate the template `What color is the <object>?' However, Visual Genome metadata can contain several color attributes for the same object as well as different names for the same object. Since the annotators type the name of the object manually rather than choosing from a predetermined set of objects, the same object can be referred by different names, e.g., `xbox controller,' `game controller,' `joystick,' and `controller' can all refer to same object in an image. The object name is sometimes also accompanied by its color, e.g., `white horse' instead of `horse' which makes asking the Color Attribute question `What color is the white horse?' pointless. One potential solution is to use the wordnet `synset' which accompanies every object annotation in the Visual Genome annotations. Synsets are used to group different variations of the common objects names under a single noun from wordnet. However, we found that the synset matching was erroneous in numerous instances, where the object category was misrepresented by the given synset. For example, A `controller' is matched with synset `accountant' even when the `controller' is referring to a game controller. Similarly, a `cd' is matched with synset of `cadmium.' To avoid these problems we made a set of stringent requirements before making questions:",
"The chosen object should only have a single attribute that belongs to a set of commonly used colors.",
"The chosen object name or synset must be one of the 91 common objects in the MS-COCO annotations.",
"There must be only one instance of the chosen object.",
"Using these criteria, we found that we could safely ask the question of the form `What color is the <object>?'.",
"Similarly, for making Positional Reasoning questions, we used the relationships metadata in the Visual Genome annotations. The relationships metadata connects two objects by a relationship phrase. Many of these relationships describe the positions of the two objects, e.g., A is `on right' of B, where `on right' is one of the example relationship clause from Visual Genome, with the object A as the subject and the object B as the object. This can be used to generate Positional Reasoning questions. Again, we take several measures to avoid ambiguity. First, we only use objects that appear once in the image because `What is to the left of A' can be ambiguous if there are two instances of the object A. However, since visual genome annotations are non-exhaustive, there may still (rarely) be more than one instance of object A that was not annotated. To disambiguate such cases, we use the attributes metadata to further specify the object wherever possible, e.g., instead of asking `What is to the right of the bus?', we ask `What is to the right of the green bus?'",
"Due to a these stringent criteria, we could only create a small number of questions using Visual Genome annotations compared to other sources. The number of questions produced via each source is shown in Table 4 ."
],
[
"Figure 3 shows the answer distribution for the different question-types. We can see that some categories, such as counting, scene recognition and sentiment understanding, have a very large share of questions represented by only a few top answers. In such cases, the performance of a VQA algorithm can be inflated unless the evaluation metric compensates for this bias. In other cases, such as positional reasoning and object utility and affordances, the answers are much more varied, with top-50 answers covering less than 60% of all answers.",
"We have completely balanced answer distribution for object presence questions, where exactly 50% of questions being answered `yes' and the remaining 50% of the questions are answered `no'. For other categories, we have tried to design our question generation algorithms so that a single answer does not have a significant majority within a question type. For example, while scene understanding has top-4 answers covering over 85% of all the questions, there are roughly as many `no' questions (most common answer) as there are `yes' questions (second most-common answer). Similar distributions can be seen for counting, where `two' (most-common answer) is repeated almost as many times as `one' (second most-common answer). By having at least the top-2 answers split almost equally, we remove the incentive for an algorithm to perform well using simple mode guessing, even when using the simple accuracy metric."
],
[
"In the paper, we mentioned that we split the entire collection into 70% train and 30% test/validation. To do this, we not only need to have a roughly equal distribution of question types and answers, but also need to make sure that the multiple questions for same image do not end up in two different splits, i.e., the same image cannot occur in both the train and the test partitions. So, we took following measures to split the questions into train-test splits. First, we split all the images into three separate clusters.",
"Manually uploaded images, which includes all the images manually uploaded by our volunteer annotators.",
"Images from the COCO dataset, including all the images for questions generated from COCO annotations and those imported from COCO-VQA dataset. In addition, a large number of Visual Genome questions also refer to COCO images. So, some questions that are generated and imported from Visual Genome are also included in this cluster.",
"Images exclusively in the Visual Genome dataset, which includes images for a part of the questions imported from Visual Genome and those generated using that dataset.",
"We follow simple rules to split each of these clusters of images into either belonging to the train or test splits.",
"All the questions belonging to images coming from the `train2014' split of COCO images are assigned to the train split and all the questions belonging to images from the `val2014' split are assigned to test split.",
"For manual and Visual Genome images, we randomly split 70% of images to train and rest to test."
],
[
"In this section, we present additional experimental results that were omitted from the main paper due to inadequate space. First, the detailed normalized scores for each of the question-types is presented in Table 3 . To compute these scores, the accuracy for each unique answer is calculated separately within a question-type and averaged. Second, we present the results from the experiment in section \"Can Algorithms Predict Rare Answers?\" in table 6 (Unnormalized) and table 7 (Normalized). The results are evaluated on TDIUC-Tail, which is a subset of TDIUC that only consists of questions that have answers repeated less than 1000 times (uncommon answers). Note that the TDIUC-Tail excludes the absurd and the object presence question-types, as they do not contain any questions with uncommon answers. The algorithms are identical in both Table 6 and 7 and are named as follows:"
]
],
"section_name": [
"Introduction",
"Prior Natural Image VQA Datasets",
"Synthetic Datasets that Fight Bias",
"TDIUC for Nuanced VQA Analysis",
"Importing Questions from Existing Datasets",
"Generating Questions using Image Annotations",
"Manual Annotation",
"Post Processing",
"Proposed Evaluation Metric",
"Algorithms for VQA",
"Experiments",
"Easy Question-Types for Today's Methods",
"Effects of the Proposed Accuracy Metrics",
"Can Algorithms Predict Rare Answers?",
"Effects of Including Absurd Questions",
"Effects of Balancing Object Presence",
"Advantages of Attentive Models",
"Compositional and Modular Approaches",
"Conclusion",
"Additional Details About TDIUC",
"Questions using Visual Genome Annotations",
"Answer Distribution",
"Train and Test Split",
"Additional Experimental Results"
]
} | {
"answers": [
{
"annotation_id": [
"0953d83d785f0b7533669425168108b142cdd82b"
],
"answer": [
{
"evidence": [
"VQA research began in earnest in late 2014 when the DAQUAR dataset was released BIBREF0 . Including DAQUAR, six major VQA datasets have been released, and algorithms have rapidly improved. On the most popular dataset, `The VQA Dataset' BIBREF1 , the best algorithms are now approaching 70% accuracy BIBREF2 (human performance is 83%). While these results are promising, there are critical problems with existing datasets in terms of multiple kinds of biases. Moreover, because existing datasets do not group instances into meaningful categories, it is not easy to compare the abilities of individual algorithms. For example, one method may excel at color questions compared to answering questions requiring spatial reasoning. Because color questions are far more common in the dataset, an algorithm that performs well at spatial reasoning will not be appropriately rewarded for that feat due to the evaluation metrics that are used."
],
"extractive_spans": [],
"free_form_answer": "late 2014",
"highlighted_evidence": [
"VQA research began in earnest in late 2014 when the DAQUAR dataset was released BIBREF0"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2a18a3656984d04249f100633e4c1003417a2255"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"From when are many VQA datasets collected?"
],
"question_id": [
"cf93a209c8001ffb4ef505d306b6ced5936c6b63"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"Question Answering"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: A good VQA benchmark tests a wide range of computer vision tasks in an unbiased manner. In this paper, we propose a new dataset with 12 distinct tasks and evaluation metrics that compensate for bias, so that the strengths and limitations of algorithms can be better measured.",
"Figure 2: Images from TDIUC and their corresponding question-answer pairs.",
"Table 1: Comparison of previous natural image VQA datasets with TDIUC. For COCO-VQA, the explicitly defined number of question-types is used, but a much finer granularity would be possible if they were individually classified. MC/OE refers to whether open-ended or multiple-choice evaluation is used.",
"Table 2: The number of questions per type in TDIUC.",
"Table 3: Results for all VQA models. The unnormalized accuracy for each question-type is shown. Overall performance is reported using 5 metrics. Overall (Arithmetic MPT) and Overall (Harmonic MPT) are averages of these sub-scores, providing a clearer picture of performance across question-types than simple accuracy. Overall Arithmetic N-MPT and Harmonic NMPT normalize across unique answers to better analyze the impact of answer imbalance (see Sec. 4). Normalized scores for individual question-types are presented in the appendix table 5. * denotes training without absurd questions.",
"Table 4: The number of questions produced via each source.",
"Figure 3: Answer distributions for the answers for each of the question-types. This shows the relative frequency of each unique answer within a question-type, so for some question-types, e.g., counting, even slim bars contain a fairly large number of instances with that answer. Similarly, for less populated question-types such as utility and affordances, even large bars represents only a small number of training examples.",
"Table 5: Results for all the VQA models. The normalized accuracy for each question-type is shown here. The models are identical to the ones in Table 3 in main paper. Overall performance is, again, reported using all 5 metrics. Overall (Arithmetic N-MPT) and Overall (Harmonic N-MPT) are averages of the reported sub-scores. Similarly, Arithmetic MPT and Harmonic MPT are averages of sub-scores reported in Table 3 in the main paper. * denotes training without absurd questions.",
"Table 6: Results on TDIUC-Tail for MCB model when trained on full TDIUC dataset vs when trained only on TDIUC-Tail. The un-normalized scores for each questiontypes and five different overall scores are shown here",
"Table 7: Results on TDIUC-Tail for MCB model when trained on full TDIUC dataset vs when trained only on TDIUC-Tail. The normalized scores for each questiontypes and five different overall scores are shown here"
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"7-Table3-1.png",
"10-Table4-1.png",
"11-Figure3-1.png",
"12-Table5-1.png",
"12-Table6-1.png",
"12-Table7-1.png"
]
} | [
"From when are many VQA datasets collected?"
] | [
[
"1703.09684-Introduction-1"
]
] | [
"late 2014"
] | 157 |
1911.11744 | Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration | In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which in turn is used to synthesize specific motion controllers at run-time. This multimodal approach enables generalization to a wide variety of environmental conditions and allows an end-user to direct a robot policy through verbal communication. We empirically validate our approach with an extensive set of simulations and show that it achieves a high task success rate over a variety of conditions while remaining amenable to probabilistic interpretability. | {
"paragraphs": [
[
"A significant challenge when designing robots to operate in the real world lies in the generation of control policies that can adapt to changing environments. Programming such policies is a labor and time-consuming process which requires substantial technical expertise. Imitation learning BIBREF0, is an appealing methodology that aims at overcoming this challenge – instead of complex programming, the user only provides a set of demonstrations of the intended behavior. These demonstrations are consequently distilled into a robot control policy by learning appropriate parameter settings of the controller. Popular approaches to imitation, such as Dynamic Motor Primitives (DMPs) BIBREF1 or Gaussian Mixture Regression (GMR) BIBREF2 largely focus on motion as the sole input and output modality, i.e., joint angles, forces or positions. Critical semantic and visual information regarding the task, such as the appearance of the target object or the type of task performed, is not taken into account during training and reproduction. The result is often a limited generalization capability which largely revolves around adaptation to changes in the object position. While imitation learning has been successfully applied to a wide range of tasks including table-tennis BIBREF3, locomotion BIBREF4, and human-robot interaction BIBREF5 an important question is how to incorporate language and vision into a differentiable end-to-end system for complex robot control.",
"In this paper, we present an imitation learning approach that combines language, vision, and motion in order to synthesize natural language-conditioned control policies that have strong generalization capabilities while also capturing the semantics of the task. We argue that such a multi-modal teaching approach enables robots to acquire complex policies that generalize to a wide variety of environmental conditions based on descriptions of the intended task. In turn, the network produces control parameters for a lower-level control policy that can be run on a robot to synthesize the corresponding motion. The hierarchical nature of our approach, i.e., a high-level policy generating the parameters of a lower-level policy, allows for generalization of the trained task to a variety of spatial, visual and contextual changes."
],
[
"In order to outline our problem statement, we contrast our approach to Imitation learning BIBREF0 which considers the problem of learning a policy $\\mathbf {\\pi }$ from a given set of demonstrations ${\\cal D}=\\lbrace \\mathbf {d}^0,.., \\mathbf {d}^m\\rbrace $. Each demonstration spans a time horizon $T$ and contains information about the robot's states and actions, e.g., demonstrated sensor values and control inputs at each time step. Robot states at each time step within a demonstration are denoted by $\\mathbf {x}_t$. In contrast to other imitation learning approaches, we assume that we have access to the raw camera images of the robot $_t$ at teach time step, as well as access to a verbal description of the task in natural language. This description may provide critical information about the context, goals or objects involved in the task and is denoted as $\\mathbf {s}$. Given this information, our overall objective is to learn a policy $\\mathbf {\\pi }$ which imitates the demonstrated behavior, while also capturing semantics and important visual features. After training, we can provide the policy $\\mathbf {\\pi }(\\mathbf {s},)$ with a different, new state of the robot and a new verbal description (instruction) as parameters. The policy will then generate the control signals needed to perform the task which takes the new visual input and semantic context int o account."
],
[
"A fundamental challenge in imitation learning is the extraction of policies that do not only cover the trained scenarios, but also generalize to a wide range of other situations. A large body of literature has addressed the problem of learning robot motor skills by imitation BIBREF6, learning functional BIBREF1 or probabilistic BIBREF7 representations. However, in most of these approaches, the state vector has to be carefully designed in order to ensure that all necessary information for adaptation is available. Neural approaches to imitation learning BIBREF8 circumvent this problem by learning suitable feature representations from rich data sources for each task or for a sequence of tasks BIBREF9, BIBREF10, BIBREF11. Many of these approaches assume that either a sufficiently large set of motion primitives is already available or that a taxonomy of the task is available, i.e., semantics and motions are not trained in conjunction. The importance of maintaining this connection has been shown in BIBREF12, allowing the robot to adapt to untrained variations of the same task. To learn entirely new tasks, meta-learning aims at learning policy parameters that can quickly be fine-tuned to new tasks BIBREF13. While very successful in dealing with visual and spatial information, these approaches do not incorporate any semantic or linguistic component into the learning process. Language has shown to successfully generate task descriptions in BIBREF14 and several works have investigated the idea of combining natural language and imitation learning: BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. However, most approaches do not utilize the inherent connection between semantic task descriptions and low-level motions to train a model.",
"Our work is most closely related to the framework introduced in BIBREF20, which also focuses on the symbol grounding problem. More specifically, the work in BIBREF20 aims at mapping perceptual features in the external world to constituents in an expert-provided natural language instruction. Our work approaches the problem of generating dynamic robot policies by fundamentally combining language, vision, and motion control in to a single differentiable neural network that can learn the cross-modal relationships found in the data with minimal human feature engineering. Unlike previous work, our proposed model is capable of directly generating complex low-level control policies from language and vision that allow the robot to reassemble motions shown during training."
],
[
"",
"We motivate our approach with a simple example: consider a binning task in which a 6 DOF robot has to drop an object into one of several differently shaped and colored bowls on a table. To teach this task, the human demonstrator does not only provide a kinesthetic demonstration of the desired trajectory, but also a verbal command, e.g., “Move towards the blue bowl” to the robot. In this example, the trajectory generation would have to be conditioned on the blue bowl's position which, however, has to be extracted from visual sensing. Our approach automatically detects and extracts these relationships between vision, language, and motion modalities in order to make best usage of contextual information for better generalization and disambiguation.",
"Figure FIGREF2 (left) provides an overview of our method. Our goal is to train a deep neural network that can take as input a task description $\\mathbf {s}$ and and image $$ and consequently generates robot controls. In the remainder of this paper, we will refer to our network as the mpn. Rather than immediately producing control signals, the mpn will generate the parameters for a lower-level controller. This distinction allows us to build upon well-established control schemes in robotics and optimal control. In our specific case, we use the widely used Dynamic Motor Primitives BIBREF1 as a lower-level controller for control signal generation.",
"In essence, our network can be divided into three parts. The first part, the semantic network, is used to create a task embedding $$ from the input sentence $$ and environment image $$. In a first step, the sentence $$ is tokenized and converted into a sentence matrix ${W} \\in \\mathbb {R}^{l_s \\times l_w} = f_W()$ by utilizing pre-trained Glove word embeddings BIBREF21 where $l_s$ is the padded-fixed-size length of the sentence and $l_w$ is the size of the glove word vectors. To extract the relationships between the words, we use use multiple CNNs $_s = f_L()$ with filter size $n \\times l_w$ for varying $n$, representing different $n$-gram sizes BIBREF22. The final representation is built by flattening the individual $n$-grams with max-pooling of size $(l_s - n_i + 1)\\times l_w$ and concatenating the results before using a single perceptron to detect relationships between different $n$-grams. In order to combine the sentence embedding $_s$ with the image, it is concatenated as a fourth channel to the input image $$. The task embedding $$ is produced with three blocks of convolutional layers, composed of two regular convolutions, followed by a residual convolution BIBREF23 each.",
"In the second part, the policy translation network is used to generate the task parameters $\\Theta \\in \\mathcal {R}^{o \\times b}$ and $\\in \\mathcal {R}^{o}$ given a task embedding $$ where $o$ is the number of output dimensions and $b$ the number of basis functions in the DMP:",
"where $f_G()$ and $f_H()$ are multilayer-perceptrons that use $$ after being processed in a single perceptron with weight $_G$ and bias $_G$. These parameters are then used in the third part of the network, which is a DMP BIBREF0, allowing us leverage a large body of research regarding their behavior and stability, while also allowing other extensions of DMPs BIBREF5, BIBREF24, BIBREF25 to be incorporated to our framework."
],
[
"We evaluate our model in a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command. Each environment contains between three and five objects differentiated by their size (small, large), shape (round, square) and color (red, green, blue, yellow, pink), totalling in 20 different objects. Depending on the generated scenario, combinations of these three features are necessary to distinguish the targets from each other, allowing for tasks of varying complexity.",
"To train our model, we generated a dataset of 20,000 demonstrated 7 DOF trajectories (6 robot joints and 1 gripper dimension) in our simulated environment together with a sentence generator capable of creating natural task descriptions for each scenario. In order to create the language generator, we conducted an human-subject study to collect sentence templates of a placement task as well as common words and synonyms for each of the used features. By utilising these data, we are able to generate over 180,000 unique sentences, depending on the generated scenario.",
"The generated parameters of the low-level DMP controller – the weights and goal position – must be sufficiently accurate in order to successfully deliver the object to the specified bin. On the right side of Figure FIGREF4, the generated weights for the DMP are shown for two tasks in which the target is close and far away from the robot, located at different sides of the table, indicating the robots ability to generate differently shaped trajectories. The accuracy of the goal position can be seen in Figure FIGREF4(left) which shows another aspect of our approach: By using stochastic forward passes BIBREF26 the model can return an estimate for the validity of a requested task in addition to the predicted goal configuration. The figure shows that the goal position of a red bowl has a relatively small distribution independently of the used sentence or location on the table, where as an invalid target (green) produces a significantly larger distribution, indicating that the requested task may be invalid.",
"To test our model, we generated 500 new scenario testing each of the three features to identify the correct target among other bowls. A task is considered to be successfully completed when the cube is withing the boundaries of the targeted bowl. Bowls have a bounding box of 12.5 and 17.5cm edge length for the small and large variant, respectively. Our experiments showed that using the objects color or shape to uniquely identify an object allows the robot successfully complete the binning task in 97.6% and 96.0% of the cases. However, using the shape alone as a unique identifier, the task could only be completed in 79.0% of the cases. We suspect that the loss of accuracy is due to the low image resolution of the input image, preventing the network from reliably distinguishing the object shapes. In general, our approach is able to actuate the robot with an target error well below 5cm, given the target was correctly identified."
],
[
"In this work, we presented an imitation learning approach combining language, vision, and motion. A neural network architecture called Multimodal Policy Network was introduced which is able to learn the cross-modal relationships in the training data and achieve high generalization and disambiguation performance as a result. Our experiments showed that the model is able to generalize towards different locations and sentences while maintaining a high success rate of delivering an object to a desired bowl. In addition, we discussed an extensions of the method that allow us to obtain uncertainty information from the model by utilizing stochastic network outputs to get a distribution over the belief.",
"The modularity of our architecture allows us to easily exchange parts of the network. This can be utilized for transfer learning between different tasks in the semantic network or transfer between different robots by transferring the policy translation network to different robots in simulation, or to bridge the gap between simulation and reality."
]
],
"section_name": [
"Introduction",
"Introduction ::: Problem Statement:",
"Background",
"Multimodal Policy Generation via Imitation",
"Results",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"098e4ae256790d70e0f02709f0be0779e99b3770"
],
"answer": [
{
"evidence": [
"To test our model, we generated 500 new scenario testing each of the three features to identify the correct target among other bowls. A task is considered to be successfully completed when the cube is withing the boundaries of the targeted bowl. Bowls have a bounding box of 12.5 and 17.5cm edge length for the small and large variant, respectively. Our experiments showed that using the objects color or shape to uniquely identify an object allows the robot successfully complete the binning task in 97.6% and 96.0% of the cases. However, using the shape alone as a unique identifier, the task could only be completed in 79.0% of the cases. We suspect that the loss of accuracy is due to the low image resolution of the input image, preventing the network from reliably distinguishing the object shapes. In general, our approach is able to actuate the robot with an target error well below 5cm, given the target was correctly identified."
],
"extractive_spans": [],
"free_form_answer": "96-97.6% using the objects color or shape and 79% using shape alone",
"highlighted_evidence": [
"Our experiments showed that using the objects color or shape to uniquely identify an object allows the robot successfully complete the binning task in 97.6% and 96.0% of the cases. However, using the shape alone as a unique identifier, the task could only be completed in 79.0% of the cases."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2ca85ad9225e9b23024ec88341907e642add1d14"
],
"answer": [
{
"evidence": [
"We evaluate our model in a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command. Each environment contains between three and five objects differentiated by their size (small, large), shape (round, square) and color (red, green, blue, yellow, pink), totalling in 20 different objects. Depending on the generated scenario, combinations of these three features are necessary to distinguish the targets from each other, allowing for tasks of varying complexity."
],
"extractive_spans": [
"a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our model in a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command. Each environment contains between three and five objects differentiated by their size (small, large), shape (round, square) and color (red, green, blue, yellow, pink), totalling in 20 different objects. Depending on the generated scenario, combinations of these three features are necessary to distinguish the targets from each other, allowing for tasks of varying complexity."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"7cf03a2b99adacddc3a1b69170a30c77f738599d"
],
"answer": [
{
"evidence": [
"To train our model, we generated a dataset of 20,000 demonstrated 7 DOF trajectories (6 robot joints and 1 gripper dimension) in our simulated environment together with a sentence generator capable of creating natural task descriptions for each scenario. In order to create the language generator, we conducted an human-subject study to collect sentence templates of a placement task as well as common words and synonyms for each of the used features. By utilising these data, we are able to generate over 180,000 unique sentences, depending on the generated scenario.",
"To test our model, we generated 500 new scenario testing each of the three features to identify the correct target among other bowls. A task is considered to be successfully completed when the cube is withing the boundaries of the targeted bowl. Bowls have a bounding box of 12.5 and 17.5cm edge length for the small and large variant, respectively. Our experiments showed that using the objects color or shape to uniquely identify an object allows the robot successfully complete the binning task in 97.6% and 96.0% of the cases. However, using the shape alone as a unique identifier, the task could only be completed in 79.0% of the cases. We suspect that the loss of accuracy is due to the low image resolution of the input image, preventing the network from reliably distinguishing the object shapes. In general, our approach is able to actuate the robot with an target error well below 5cm, given the target was correctly identified."
],
"extractive_spans": [],
"free_form_answer": "supervised learning",
"highlighted_evidence": [
"To train our model, we generated a dataset of 20,000 demonstrated 7 DOF trajectories (6 robot joints and 1 gripper dimension) in our simulated environment together with a sentence generator capable of creating natural task descriptions for each scenario. In order to create the language generator, we conducted an human-subject study to collect sentence templates of a placement task as well as common words and synonyms for each of the used features. By utilising these data, we are able to generate over 180,000 unique sentences, depending on the generated scenario.",
"To test our model, we generated 500 new scenario testing each of the three features to identify the correct target among other bowls. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What is task success rate achieved? ",
"What simulations are performed by the authors to validate their approach?",
"Does proposed end-to-end approach learn in reinforcement or supervised learning manner?"
],
"question_id": [
"fb5ce11bfd74e9d7c322444b006a27f2ff32a0cf",
"1e2ffa065b640e912d6ed299ff713a12195e12c4",
"28b2a20779a78a34fb228333dc4b93fd572fda15"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"vision",
"vision",
"vision"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Network architecture overview. The network consists of two parts, a high-level semantic network and a low-level control network. Both networks are working seamlessly together and are utilized in an End-to-End fashion.",
"Figure 2: Results for placing an object into bowls at different locations: (Left) Stochastic forward passes allow the model to estimate its certainty about the validity of a task. (Right) Generated weights Θ for four joints of the DMP shown for two objects close and far away of the robot."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png"
]
} | [
"What is task success rate achieved? ",
"Does proposed end-to-end approach learn in reinforcement or supervised learning manner?"
] | [
[
"1911.11744-Results-3"
],
[
"1911.11744-Results-3",
"1911.11744-Results-1"
]
] | [
"96-97.6% using the objects color or shape and 79% using shape alone",
"supervised learning"
] | 158 |
1910.11949 | Automatic Reminiscence Therapy for Dementia. | With people living longer than ever, the number of cases with dementia such as Alzheimer's disease increases steadily. It affects more than 46 million people worldwide, and it is estimated that in 2050 more than 100 million will be affected. While there are not effective treatments for these terminal diseases, therapies such as reminiscence, that stimulate memories from the past are recommended. Currently, reminiscence therapy takes place in care homes and is guided by a therapist or a carer. In this work, we present an AI-based solution to automatize the reminiscence therapy, which consists in a dialogue system that uses photos as input to generate questions. We run a usability case study with patients diagnosed of mild cognitive impairment that shows they found the system very entertaining and challenging. Overall, this paper presents how reminiscence therapy can be automatized by using machine learning, and deployed to smartphones and laptops, making the therapy more accessible to every person affected by dementia. | {
"paragraphs": [
[
"Increases in life expectancy in the last century have resulted in a large number of people living to old ages and will result in a double number of dementia cases by the middle of the century BIBREF0BIBREF1. The most common form of dementia is Alzheimer disease which contributes to 60–70% of cases BIBREF2. Research focused on identifying treatments to slow down the evolution of Alzheimer's disease is a very active pursuit, but it has been only successful in terms of developing therapies that eases the symptoms without addressing the cause BIBREF3BIBREF4. Besides, people with dementia might have some barriers to access to the therapies, such as cost, availability and displacement to the care home or hospital, where the therapy takes place. We believe that Artificial Intelligence (AI) can contribute in innovative systems to give accessibility and offer new solutions to the patients needs, as well as help relatives and caregivers to understand the illness of their family member or patient and monitor the progress of the dementia.",
"Therapies such as reminiscence, that stimulate memories of the patient's past, have well documented benefits on social, mental and emotional well-being BIBREF5BIBREF6, making them a very desirable practice, especially for older adults. Reminiscence therapy in particular involves the discussion of events and past experiences using tangible prompts such as pictures or music to evoke memories and stimulate conversation BIBREF7. With this aim, we explore multi-modal deep learning architectures to be used to develop an intuitive, easy to use, and robust dialogue system to automatize the reminiscence therapy for people affected by mild cognitive impairment or at early stages of Alzheimer's disease.",
"We propose a conversational agent that simulates a reminiscence therapist by asking questions about the patient's experiences. Questions are generated from pictures provided by the patient, which contain significant moments or important people in user's life. Moreover, to engage the user in the conversation we propose a second model which generates comments on user's answers. A chatbot model trained with a dataset containing simple conversations between different people. The activity pretends to be challenging for the patient, as the questions may require the user to exercise the memory. Our contributions include:",
"Automation of the Reminiscence therapy by using a multi-modal approach that generates questions from pictures, without using a reminiscence therapy dataset.",
"An end-to-end deep learning approach which do not require hand-crafted rules and it is ready to be used by mild cognitive impairment patients. The system is designed to be intuitive and easy to use for the users and could be reached by any smartphone with internet connection."
],
[
"The origin of chatbots goes back to 1966 with the creation of ELIZA BIBREF8 by Joseph Weizenbaum at MIT. Its implementation consisted in pattern matching and substitution methodology. Recently, data driven approaches have drawn significant attention. Existing work along this line includes retrieval-based methods BIBREF9BIBREF10 and generation-based methodsBIBREF11BIBREF12. In this work we focus on generative models, where sequence-to-sequence algorithm that uses RNNs to encode and decode inputs into responses is a current best practice.",
"Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning. In our case, we focus on generating questions from pictures. Our second architecture is inspired by Neural Conversational Model from BIBREF14 where the author presents an end-to-end approach to generate simple conversations. Building an open-domain conversational agent is a challenging problem. As addressed in BIBREF15 and BIBREF16, the lack of a consistent personality and lack of long-term memory which produces some meaningless responses in these models are still unresolved problems.",
"Some works have proposed conversational agents for older adults with a variety of uses, such as stimulate conversation BIBREF17 , palliative care BIBREF18 or daily assistance. An example of them is ‘Billie’ reported in BIBREF19 which is a virtual agent that uses facial expression for a more natural behavior and is focused on managing user’s calendar, or ‘Mary’ BIBREF20 that assists the users by organizing their tasks offering reminders and guidance with household activities. Both of the works perform well on its specific tasks, but report difficulties to maintain a casual conversation. Other works focus on the content used in Reminiscence therapy. Like BIBREF21 where the authors propose a system that recommends multimedia content to be used in therapy, or Visual Dialog BIBREF22 where the conversational agent is the one that has to answer the questions about the image."
],
[
"In this section we explain the main two components of our model, as well as how the interaction with the model works. We named it Elisabot and its goal is to mantain a dialog with the patient about her user’s life experiences.",
"Before starting the conversation, the user must introduce photos that should contain significant moments for him/her. The system randomly chooses one of these pictures and analyses the content. Then, Elisabot shows the selected picture and starts the conversation by asking a question about the picture. The user should give an answer, even though he does not know it, and Elisabot makes a relevant comment on it. The cycle starts again by asking another relevant question about the image and the flow is repeated for 4 to 6 times until the picture is changed. The Figure FIGREF3 summarizes the workflow of our system.",
"Elisabot is composed of two models: the model in charge of asking questions about the image which we will refer to it as VQG model, and the Chatbot model which tries to make the dialogue more engaging by giving feedback to the user's answers."
],
[
"The algorithm behind VQG consists in an Encoder-Decoder architecture with attention. The Encoder takes as input one of the given photos $I$ from the user and learns its information using a CNN. CNNs have been widely studied for computer vision tasks. The CNN provides the image's learned features to the Decoder which generates the question $y$ word by word by using an attention mechanism with a Long Short-Term Memory (LSTM). The model is trained to maximize the likelihood $p(y|I)$ of producing a target sequence of words:",
"where $K$ is the size of the vocabulary and $C$ is the length of the caption.",
"Since there are already Convolutional Neural Networks (CNNs) trained on large datasets to represent images with an outstanding performance, we make use of transfer learning to integrate a pre-trained model into our algorithm. In particular, we use a ResNet-101 BIBREF23 model trained on ImageNet. We discard the last 2 layers, since these layers classify the image into categories and we only need to extract its features."
],
[
"The core of our chatbot model is a sequence-to-sequence BIBREF24. This architecture uses a Recurrent Neural Network (RNN) to encode a variable-length sequence to obtain a large fixed dimensional vector representation and another RNN to decode the vector into a variable-length sequence.",
"The encoder iterates through the input sentence one word at each time step producing an output vector and a hidden state vector. The hidden state vector is passed to the next time step, while the output vector is stored. We use a bidirectional Gated Recurrent Unit (GRU), meaning we use two GRUs one fed in sequential order and another one fed in reverse order. The outputs of both networks are summed at each time step, so we encode past and future context.",
"The final hidden state $h_t^{enc}$ is fed into the decoder as the initial state $h_0^{dec}$. By using an attention mechanism, the decoder uses the encoder’s context vectors, and internal hidden states to generate the next word in the sequence. It continues generating words until it outputs an $<$end$>$ token, representing the end of the sentence. We use an attention layer to multiply attention weights to encoder's outputs to focus on the relevant information when decoding the sequence. This approach have shown better performance on sequence-to-sequence models BIBREF25."
],
[
"One of the first requirements to develop an architecture using a machine learning approach is a training dataset. The lack of open-source datasets containing dialogues from reminiscence therapy lead as to use a dataset with content similar to the one used in the therapy. In particular, we use two types of datasets to train our models: A dataset that maps pictures with questions, and an open-domain conversation dataset. The details of the two datasets are as follows."
],
[
"We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions. COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. Bing dataset contains more event related questions and has a wider range of questions longitudes (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual."
],
[
"We use two datasets to train our chatbot model. The first one is the Persona-chat BIBREF15 which contains dialogues between two people with different profiles that are trying to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier to learn for machines and a total of 162,064 utterances over 10,907 dialogues. While Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters."
],
[
"An important aspect of dialogue response generation systems is how to evaluate the quality of the generated response. This section presents the training procedure and the quantitative evaluation of the model, together with some qualitative results."
],
[
"Both models are trained using Stochastic Gradient Descent with ADAM optimization BIBREF28 and a learning rate of 1e-4. Besides, we use dropout regularization BIBREF29 which prevents from over-fitting by dropping some units of the network.",
"The VQG encoder is composed of 2048 neuron cells, while the VQG decoder has an attention layer of 512 followed by an embedding layer of 512 and a LSTM with the same size. We use a dropout of 50% and a beam search of 7 for decoding, which let as obtain up to 5 output questions. The vocabulary we use consists of all words seen 3 or more times in the training set, which amounts to 11.214 unique tokens. Unknown words are mapped to an $<$unk$>$ token during training, but we do not allow the decoder to produce this token at test time. We also set a maximum sequence length of 6 words as we want simple questions easy to understand and easy to learn by the model.",
"In the Chatbot model we use a hidden size of 500 and Dropout regularization of 25%. For decoding we use greedy search, which consists in making the optimal token choice at each step. We first train it with Persona-chat and then fine-tune it with Cornell dataset. The vocabulary we use consists of all words seen 3 or more times in Persona-chat dataset and we set a maximum sequence length of 12 words. For the hyperparameter setting, we use a batch size of 64."
],
[
"We use the BLEU BIBREF30 metric on the validation set for the VQG model training. BLEU is a measure of similitude between generated and target sequences of words, widely used in natural language processing. It assumes that valid generated responses have significant word overlap with the ground truth responses. We use it because in this case we have five different references for each of the generated questions. We obtain a BLEU score of 2.07.",
"Our chatbot model instead, only have one reference ground truth in training when generating a sequence of words. We considered that it was not a good metric to apply as in some occasions responses have the same meaning, but do not share any words in common. Thus, we save several models with different hyperparameters and at different number of training iterations and compare them using human evaluation, to chose the model that performs better in a conversation."
],
[
"Our first goal was to generate meaningful questions from the provided pictures. Table TABREF16 contains some generated questions by our VQG model for each of the proposed images. These images were not in the training set as they were taken from photo albums, so the model was the first time that had these images as input. By looking at the questions, we can easily realise that the model is not only able to generate questions grammatically correct, but to understand the content of the image and ask about it. Furthermore, we not only wanted to generate only a question per image, but to bring out up to five questions for a deeper conversation.",
"We used different kind of pictures as we wanted a robust model able to work with all kind of images. In the Table TABREF16 we can see both, old gray and colorful photos, both with vertical and horizontal shapes. Most of the pictures we selected for testing were showing people, because these are the type of pictures used in reminiscence therapy, as they evoke memories from the relationships with that persons. We also tested pictures showing pets, cars and motorbikes as some people are very attached to them, triggering strong emotions.",
"Most of the generated questions could be asked by a person. Nonetheless, we found some of the questions such as How did the car get damaged? that do not correspond with the image's content, or are not well formulated like What body of dog is that?, but can be easily understood.",
"About the chatbot module, we test it manually by having a conversation with it. In the following lines, we can see two samples of a conversation, where each sentence starting with \"$>$\" is an input to our seq2seq model. In this first sample, the chatbot model is clearly overfitted as it repeats the same sentence twice. While in the second sample, we can see that the model is able to follow the conversation.",
"Overfitted chatbot model:",
"$>$ hi",
"Elisabot: hello ! how are you ?",
"$>$ fine and you?",
"Elisabot : i am good do you have any hobbies ?",
"$>$ yes i like to ski",
"Elisabot: i love to play sports . it is my favorite",
"$>$ I have a sister",
"Elisabot : i have a dog . i love them .",
"$>$ I have a dog too",
"Elisabot: i have a dog . i love them",
"$>$ ok",
"Elisabot : i am not really a good person",
"$>$ why?",
"Elisabot : i love to play sports . it is my favorite",
"Final chatbot model:",
"$>$ hello",
"Elisabot: hello how are you ?",
"$>$ fine and you?",
"Elisabot : i m great !",
"$>$ I have a sister, do you have any brother or sister?",
"Elisabot : i have a iphone but i have no idea .",
"$>$ ok",
"Elisabot : i am a pro fan of thrones",
"$>$ me too",
"Elisabot : i like to eat healthy",
"$>$ have you seen the last chapter?",
"Elisabot : no i have not .",
"$>$ where do you live?",
"Elisabot : i live in the mountains"
],
[
"As most of the metrics correlate very weakly with human judgements in the non-technical domain BIBREF31 we decide to evaluate our system with a simple user study with two patients. We present the user interface built and the feedback obtained from the patients."
],
[
"We developed a user interface for Elisabot with Telegram, an instant messaging application available for smartphones or computers. We select it because it is easy to use and it offers an API for developers to connect bots to the Telegram system. It enables to create special accounts for bots which do not require a phone number to set up.",
"Telegram is only the interface for the code running in the server. The bot is executed via an HTTP-request to the API. Users can start a conversation with Elisabot by typing @TherapistElisabot in the searcher and executing the command /start, as can be seen in the Figure FIGREF31. Messages, commands and requests sent by users are passed to the software running on the server. We add /change, /yes and /exit commands to enable more functionalities. /Change gives the opportunity to the user to change the image in case the user does not want to talk about it, /yes accepts the image which is going to talk about and /exit finishes the dialogue with Elisabot. The commands can be executed either by tapping on the linked text or typing them."
],
[
"We designed a usability study where users with and without mild cognitive impairment interacted with the system with the help of a doctor and one of the authors. The purpose was to study the acceptability and feasibility of the system with patients of mild cognitive impairment. The users were all older than 60 years old. The sessions lasted 30 minutes and were carried out by using a laptop computer connected to Telegram. As Elisabot's language is English we translated the questions to the users and the answers to Elisabot.",
"Figure FIGREF38 is a sample of the session we did with mild cognitive impairment patients from anonymized institution and location. The picture provided by the patient (Figure FIGREF37 is blurred for user's privacy rights. In this experiment all the generated questions were right according to the image content, but the feedback was wrong for some of the answers. We can see that it was the last picture of the session as when Elisabot asks if the user wants to continue or leave, and he decides to continue, Elisabot finishes the session as there are no more pictures remaining to talk about.",
"At the end of the session, we administrated a survey to ask participants the following questions about their assessment of Elisabot:",
"Did you like it?",
"Did you find it engaging?",
"How difficult have you found it?",
"Responses were given on a five-point scale ranging from strongly disagree (1) to strongly agree (5) and very easy (1) to very difficult (5). The results were 4.6 for amusing and engaging and 2.6 for difficulty. Healthy users found it very easy to use (1/5) and even a bit silly, because of some of the generated questions and comments. Nevertheless, users with mild cognitive impairment found it engaging (5/5) and challenging (4/5), because of the effort they had to make to remember the answers for some of the generated questions. All the users had in common that they enjoyed doing the therapy with Elisabot."
],
[
"We presented a dialogue system for handling sessions of 30 minutes of reminiscence therapy. Elisabot, our conversational agent leads the therapy by showing a picture and generating some questions. The goal of the system is to improve users mood and stimulate their memory and communication skills. Two models were proposed to generate the dialogue system for the reminiscence therapy. A visual question generator composed of a CNN and a LSTM with attention and a sequence-to-sequence model to generate feedback on the user's answers. We realize that fine-tuning our chatbot model with another dataset improved the generated dialogue.",
"The manual evaluation shows that our model can generate questions and feedback well formulated grammatically, but in some occasions not appropriate in content. As expected, it has tendency to produce non-specific answers and to loss its consistency in the comments with respect to what it has said before. However, the overall usability evaluation of the system by users with mild cognitive impairment shows that they found the session very entertaining and challenging. They had to make an effort to remember the answers for some of the questions, but they were very satisfied when they achieved it. Though, we see that for the proper performance of the therapy is essential a person to support the user to help remember the experiences that are being asked.",
"This project has many possible future lines. In our future work, we suggest to train the model including the Reddit dataset which could improve the chatbot model, as it has many open-domain conversations. Moreover, we would like to include speech recognition and generation, as well as real-time text translation, to make Elisabot more autonomous and open to older adults with reading and writing difficulties. Furthermore, the lack of consistency in the dialogue might be avoided by improving the architecture including information about passed conversation into the model. We also think it would be a good idea to recognize feelings from the user's answers and give a feedback according to them."
],
[
"Marioan Caros was funded with a scholarship from the Fundacion Vodafona Spain. Petia Radeva was partially funded by TIN2018-095232-B-C21, 2017 SGR 1742, Nestore, Validithi, and CERCA Programme/Generalitat de Catalunya. We acknowledge the support of NVIDIA Corporation with the donation of Titan Xp GPUs."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Methodology ::: VQG model",
"Methodology ::: Chatbot network",
"Datasets",
"Datasets ::: MS-COCO, Bing and Flickr datasets",
"Datasets ::: Persona-chat and Cornell-movie corpus",
"Validation",
"Validation ::: Implementation",
"Validation ::: Quantitative evaluation",
"Validation ::: Qualitative results",
"Usability study",
"Usability study ::: User interface",
"Feedback from patients",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"395868f357819b6de3a616992a33977f125f92d9"
],
"answer": [
{
"evidence": [
"We use the BLEU BIBREF30 metric on the validation set for the VQG model training. BLEU is a measure of similitude between generated and target sequences of words, widely used in natural language processing. It assumes that valid generated responses have significant word overlap with the ground truth responses. We use it because in this case we have five different references for each of the generated questions. We obtain a BLEU score of 2.07.",
"Our chatbot model instead, only have one reference ground truth in training when generating a sequence of words. We considered that it was not a good metric to apply as in some occasions responses have the same meaning, but do not share any words in common. Thus, we save several models with different hyperparameters and at different number of training iterations and compare them using human evaluation, to chose the model that performs better in a conversation."
],
"extractive_spans": [],
"free_form_answer": "using the BLEU score as a quantitative metric and human evaluation for quality",
"highlighted_evidence": [
"We use the BLEU BIBREF30 metric on the validation set for the VQG model training. BLEU is a measure of similitude between generated and target sequences of words, widely used in natural language processing. It assumes that valid generated responses have significant word overlap with the ground truth responses. We use it because in this case we have five different references for each of the generated questions. We obtain a BLEU score of 2.07.\n\nOur chatbot model instead, only have one reference ground truth in training when generating a sequence of words. We considered that it was not a good metric to apply as in some occasions responses have the same meaning, but do not share any words in common. Thus, we save several models with different hyperparameters and at different number of training iterations and compare them using human evaluation, to chose the model that performs better in a conversation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"0a2bc42cf256a183dae47c2a043832d669e89831"
],
"answer": [
{
"evidence": [
"We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions. COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. Bing dataset contains more event related questions and has a wider range of questions longitudes (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual."
],
"extractive_spans": [
"5 questions per image"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"eda46fe815453f31e8ee4092686f9581bb42d7d0"
],
"answer": [
{
"evidence": [
"Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning. In our case, we focus on generating questions from pictures. Our second architecture is inspired by Neural Conversational Model from BIBREF14 where the author presents an end-to-end approach to generate simple conversations. Building an open-domain conversational agent is a challenging problem. As addressed in BIBREF15 and BIBREF16, the lack of a consistent personality and lack of long-term memory which produces some meaningless responses in these models are still unresolved problems."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"a488e4b08f2b52306f8f0add5978e19db2db5b4f"
],
"answer": [
{
"evidence": [
"We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions. COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. Bing dataset contains more event related questions and has a wider range of questions longitudes (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual.",
"We use two datasets to train our chatbot model. The first one is the Persona-chat BIBREF15 which contains dialogues between two people with different profiles that are trying to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier to learn for machines and a total of 162,064 utterances over 10,907 dialogues. While Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters."
],
"extractive_spans": [],
"free_form_answer": "For the question generation model 15,000 images with 75,000 questions. For the chatbot model, around 460k utterances over 230k dialogues.",
"highlighted_evidence": [
"We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions.",
"We use two datasets to train our chatbot model. The first one is the Persona-chat BIBREF15 which contains dialogues between two people with different profiles that are trying to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier to learn for machines and a total of 162,064 utterances over 10,907 dialogues. While Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How is performance of this system measured?",
"How many questions per image on average are available in dataset?",
"Is machine learning system underneath similar to image caption ML systems?",
"How big dataset is used for training this system?"
],
"question_id": [
"11d2f0d913d6e5f5695f8febe2b03c6c125b667c",
"1c85a25ec9d0c4f6622539f48346e23ff666cd5f",
"37d829cd42db9ae3d56ab30953a7cf9eda050841",
"4b41f399b193d259fd6e24f3c6e95dc5cae926dd"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Scheme of the interaction with Elisabot",
"Figure 2: Samples from Bing 2a), Coco 2b) and Flickr 2c) datasets",
"Table 1: Generated questions",
"Figure 3: Elisabot running on Telegram application",
"Figure 5: Sample of the session study with mild cognitive impairment patient"
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"6-Table1-1.png",
"7-Figure3-1.png",
"8-Figure5-1.png"
]
} | [
"How is performance of this system measured?",
"How big dataset is used for training this system?"
] | [
[
"1910.11949-Validation ::: Quantitative evaluation-1",
"1910.11949-Validation ::: Quantitative evaluation-0"
],
[
"1910.11949-Datasets ::: Persona-chat and Cornell-movie corpus-0",
"1910.11949-Datasets ::: MS-COCO, Bing and Flickr datasets-0"
]
] | [
"using the BLEU score as a quantitative metric and human evaluation for quality",
"For the question generation model 15,000 images with 75,000 questions. For the chatbot model, around 460k utterances over 230k dialogues."
] | 163 |
1902.09087 | Lattice CNNs for Matching Based Chinese Question Answering | Short text matching often faces the challenges of great word mismatch and expression diversity between the two texts, which are further aggravated in languages like Chinese where there is no natural space to segment words explicitly. In this paper, we propose a novel lattice based CNN model (LCNs) to utilize multi-granularity information inherent in the word lattice while maintaining a strong ability to deal with the introduced noisy information for matching based question answering in Chinese. We conduct extensive experiments on both document based question answering and knowledge based question answering tasks, and experimental results show that the LCNs models can significantly outperform the state-of-the-art matching models and strong baselines by taking advantage of a better ability to distill rich but discriminative information from the word lattice input. | {
"paragraphs": [
[
"Short text matching plays a critical role in many natural language processing tasks, such as question answering, information retrieval, and so on. However, matching text sequences for Chinese or similar languages often suffers from word segmentation, where there are often no perfect Chinese word segmentation tools that suit every scenario. Text matching usually requires to capture the relatedness between two sequences in multiple granularities. For example, in Figure FIGREF4 , the example phrase is generally tokenized as “China – citizen – life – quality – high”, but when we plan to match it with “Chinese – live – well”, it would be more helpful to have the example segmented into “Chinese – livelihood – live” than its common segmentation. ",
"Existing efforts use neural network models to improve the matching based on the fact that distributed representations can generalize discrete word features in traditional bag-of-words methods. And there are also works fusing word level and character level information, which, to some extent, could relieve the mismatch between different segmentations, but these solutions still suffer from the original word sequential structures. They usually depend on an existing word tokenization, which has to make segmentation choices at one time, e.g., “ZhongGuo”(China) and “ZhongGuoRen”(Chinese) when processing “ZhongGuoRenMin”(Chinese people). And the blending just conducts at one position in their frameworks.",
"Specific tasks such as question answering (QA) could pose further challenges for short text matching. In document based question answering (DBQA), the matching degree is expected to reflect how likely a sentence can answer a given question, where questions and candidate answer sentences usually come from different sources, and may exhibit significantly different styles or syntactic structures, e.g. queries in web search and sentences in web pages. This could further aggravate the mismatch problems. In knowledge based question answering (KBQA), one of the key tasks is to match relational expressions in questions with knowledge base (KB) predicate phrases, such as “ZhuCeDi”(place of incorporation). Here the diversity between the two kinds of expressions is even more significant, where there may be dozens of different verbal expressions in natural language questions corresponding to only one KB predicate phrase. Those expression problems make KBQA a further tough task. Previous works BIBREF0 , BIBREF1 adopt letter-trigrams for the diverse expressions, which is similar to character level of Chinese. And the lattices are combinations of words and characters, so with lattices, we can utilize words information at the same time.",
"Recent advances have put efforts in modeling multi-granularity information for matching. BIBREF2 , BIBREF3 blend words and characters to a simple sequence (in word level), and BIBREF4 utilize multiple convoluational kernel sizes to capture different n-grams. But most characters in Chinese can be seen as words on their own, so combining characters with corresponding words directly may lose the meanings that those characters can express alone. Because of the sequential inputs, they will either lose word level information when conducting on character sequences or have to make segmentation choices.",
"In this paper, we propose a multi-granularity method for short text matching in Chinese question answering which utilizes lattice based CNNs to extract sentence level features over word lattice. Specifically, instead of relying on character or word level sequences, LCNs take word lattices as input, where every possible word and character will be treated equally and have their own context so that they can interact at every layer. For each word in each layer, LCNs can capture different context words in different granularity via pooling methods. To the best of our knowledge, we are the first to introduce word lattice into the text matching tasks. Because of the similar IO structures to original CNNs and the high efficiency, LCNs can be easily adapted to more scenarios where flexible sentence representation modeling is required.",
"We evaluate our LCNs models on two question answering tasks, document based question answering and knowledge based question answering, both in Chinese. Experimental results show that LCNs significantly outperform the state-of-the-art matching methods and other competitive CNNs baselines in both scenarios. We also find that LCNs can better capture the multi-granularity information from plain sentences, and, meanwhile, maintain better de-noising capability than vanilla graphic convolutional neural networks thanks to its dynamic convolutional kernels and gated pooling mechanism."
],
[
"Our Lattice CNNs framework is built upon the siamese architecture BIBREF5 , one of the most successful frameworks in text matching, which takes the word lattice format of a pair of sentences as input, and outputs the matching score."
],
[
"The siamese architecture and its variant have been widely adopted in sentence matching BIBREF6 , BIBREF3 and matching based question answering BIBREF7 , BIBREF0 , BIBREF8 , that has a symmetrical component to extract high level features from different input channels, which share parameters and map inputs to the same vector space. Then, the sentence representations are merged and compared to output the similarities.",
"For our models, we use multi-layer CNNs for sentence representation. Residual connections BIBREF9 are used between convolutional layers to enrich features and make it easier to train. Then, max-pooling summarizes the global features to get the sentence level representations, which are merged via element-wise multiplication. The matching score is produced by a multi-layer perceptron (MLP) with one hidden layer based on the merged vector. The fusing and matching procedure is formulated as follows: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are feature vectors of question and candidate (sentence or predicate) separately encoded by CNNs, INLINEFORM2 is the sigmoid function, INLINEFORM3 are parameters, and INLINEFORM4 is element-wise multiplication. The training objective is to minimize the binary cross-entropy loss, defined as: DISPLAYFORM0 ",
"where INLINEFORM0 is the {0,1} label for the INLINEFORM1 training pair.",
"Note that the CNNs in the sentence representation component can be either original CNNs with sequence input or lattice based CNNs with lattice input. Intuitively, in an original CNN layer, several kernels scan every n-gram in a sequence and result in one feature vector, which can be seen as the representation for the center word and will be fed into the following layers. However, each word may have different context words in different granularities in a lattice and may be treated as the center in various kernel spans with same length. Therefore, different from the original CNNs, there could be several feature vectors produced for a given word, which is the key challenge to apply the standard CNNs directly to a lattice input.",
"For the example shown in Figure FIGREF6 , the word “citizen” is the center word of four text spans with length 3: “China - citizen - life”, “China - citizen - alive”, “country - citizen - life”, “country - citizen - alive”, so four feature vectors will be produced for width-3 convolutional kernels for “citizen”."
],
[
"As shown in Figure FIGREF4 , a word lattice is a directed graph INLINEFORM0 , where INLINEFORM1 represents a node set and INLINEFORM2 represents a edge set. For a sentence in Chinese, which is a sequence of Chinese characters INLINEFORM3 , all of its possible substrings that can be considered as words are treated as vertexes, i.e. INLINEFORM4 . Then, all neighbor words are connected by directed edges according to their positions in the original sentence, i.e. INLINEFORM5 .",
"Here, one of the key issues is how we decide a sequence of characters can be considered as a word. We approach this through an existing lookup vocabulary, which contains frequent words in BaiduBaike. Note that most Chinese characters can be considered as words on their own, thus are included in this vocabulary when they have been used as words on their own in this corpus.",
"However, doing so will inevitably introduce noisy words (e.g., “middle” in Figure FIGREF4 ) into word lattices, which will be smoothed by pooling procedures in our model. And the constructed graphs could be disconnected because of a few out-of-vocabulary characters. Thus, we append INLINEFORM0 labels to replace those characters to connect the graph.",
"Obviously, word lattices are collections of characters and all possible words. Therefore, it is not necessary to make explicit decisions regarding specific word segmentations, but just embed all possible information into the lattice and take them to the next CNN layers. The inherent graph structure of a word lattice allows all possible words represented explicitly, no matter the overlapping and nesting cases, and all of them can contribute directly to the sentence representations."
],
[
"As we mentioned in previous section, we can not directly apply standard CNNs to take word lattice as input, since there could be multiple feature vectors produced for a given word. Inspired by previous lattice LSTM models BIBREF10 , BIBREF11 , here we propose a lattice based CNN layers to allow standard CNNs to work over word lattice input. Specifically, we utilize pooling mechanisms to merge the feature vectors produced by multiple CNN kernels over different context compositions.",
"Formally, the output feature vector of a lattice CNN layer with kernel size INLINEFORM0 at word INLINEFORM1 in a word lattice INLINEFORM2 can be formulated as Eq EQREF12 : DISPLAYFORM0 ",
"where INLINEFORM0 is the activation function, INLINEFORM1 is the input vector corresponding to word INLINEFORM2 in this layer, ( INLINEFORM3 means the concatenation of these vectors, and INLINEFORM4 are parameters with size INLINEFORM5 , and INLINEFORM6 , respectively. INLINEFORM7 is the input dim and INLINEFORM8 is the output dim. INLINEFORM9 is one of the following pooling functions: max-pooling, ave-pooling, or gated-pooling, which execute the element-wise maximum, element-wise average, and the gated operation, respectively. The gated operation can be formulated as: DISPLAYFORM0 ",
"where INLINEFORM0 are parameters, and INLINEFORM1 are gated weights normalized by a softmax function. Intuitively, the gates represent the importance of the n-gram contexts, and the weighted sum can control the transmission of noisy context words. We perform padding when necessary.",
"For example, in Figure FIGREF6 , when we consider “citizen” as the center word, and the kernel size is 3, there will be five words and four context compositions involved, as mentioned in the previous section, each marked in different colors. Then, 3 kernels scan on all compositions and produce four 3-dim feature vectors. The gated weights are computed based on those vectors via a dense layer, which can reflect the importance of each context compositions. The output vector of the center word is their weighted sum, where noisy contexts are expected to have lower weights to be smoothed. This pooling over different contexts allows LCNs to work over word lattice input.",
"Word lattice can be seen as directed graphs and modeled by Directed Graph Convolutional networks (DGCs) BIBREF12 , which use poolings on neighboring vertexes that ignore the semantic structure of n-grams. But to some situations, their formulations can be very similar to ours (See Appendix for derivation). For example, if we set the kernel size in LCNs to 3, use linear activations and suppose the pooling mode is average in both LCNs and DGCs, at each word in each layer, the DGCs compute the average of the first order neighbors together with the center word, while the LCNs compute the average of the pre and post words separately and add them to the center word. Empirical results are exhibited in Experiments section.",
"Finally, given a sentence that has been constructed into a word-lattice form, for each node in the lattice, an LCN layer will produce one feature vector similar to original CNNs, which makes it easier to stack multiple LCN layers to obtain more abstract feature representations."
],
[
"Our experiments are designed to answer: (1) whether multi-granularity information in word lattice helps in matching based QA tasks, (2) whether LCNs capture the multi-granularity information through lattice well, and (3) how to balance the noisy and informative words introduced by word lattice."
],
[
"We conduct experiments on two Chinese question answering datasets from NLPCC-2016 evaluation task BIBREF13 .",
"DBQA is a document based question answering dataset. There are 8.8k questions with 182k question-sentence pairs for training and 6k questions with 123k question-sentence pairs in the test set. In average, each question has 20.6 candidate sentences and 1.04 golden answers. The average length for questions is 15.9 characters, and each candidate sentence has averagely 38.4 characters. Both questions and sentences are natural language sentences, possibly sharing more similar word choices and expressions compared to the KBQA case. But the candidate sentences are extracted from web pages, and are often much longer than the questions, with many irrelevant clauses.",
"KBRE is a knowledge based relation extraction dataset. We follow the same preprocess as BIBREF14 to clean the dataset and replace entity mentions in questions to a special token. There are 14.3k questions with 273k question-predicate pairs in the training set and 9.4k questions with 156k question-predicate pairs for testing. Each question contains only one golden predicate. Each question averagely has 18.1 candidate predicates and 8.1 characters in length, while a KB predicate is only 3.4 characters long on average. Note that a KB predicate is usually a concise phrase, with quite different word choices compared to the natural language questions, which poses different challenges to solve.",
"The vocabulary we use to construct word lattices contains 156k words, including 9.1k single character words. In average, each DBQA question contains 22.3 tokens (words or characters) in its lattice, each DBQA candidate sentence has 55.8 tokens, each KBQA question has 10.7 tokens and each KBQA predicate contains 5.1 tokens."
],
[
"For both datasets, we follow the evaluation metrics used in the original evaluation tasks BIBREF13 . For DBQA, P@1 (Precision@1), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are adopted. For KBRE, since only one golden candidate is labeled for each question, only P@1 and MRR are used."
],
[
"The word embeddings are trained on the Baidu Baike webpages with Google's word2vector, which are 300-dim and fine tuned during training. In DBQA, we also follow previous works BIBREF15 , BIBREF16 to concatenate additional 1d-indicators with word vectors which denote whether the words are concurrent in both questions and candidate sentences. In each CNN layer, there are 256, 512, and 256 kernels with width 1, 2, and 3, respectively. The size of the hidden layer for MLP is 1024. All activation are ReLU, the dropout rate is 0.5, with a batch size of 64. We optimize with adadelta BIBREF17 with learning rate INLINEFORM0 and decay factor INLINEFORM1 . We only tune the number of convolutional layers from [1, 2, 3] and fix other hyper-parameters. We sample at most 10 negative sentences per question in DBQA and 5 in KBRE. We implement our models in Keras with Tensorflow backend."
],
[
"Our first set of baselines uses original CNNs with character (CNN-char) or word inputs. For each sentence, two Chinese word segmenters are used to obtain three different word sequences: jieba (CNN-jieba), and Stanford Chinese word segmenter in CTB (CNN-CTB) and PKU (CNN-PKU) mode.",
"Our second set of baselines combines different word segmentations. Specifically, we concatenate the sentence embeddings from different segment results, which gives four different word+word models: jieba+PKU, PKU+CTB, CTB+jieba, and PKU+CTB+jieba.",
"Inspired by previous works BIBREF2 , BIBREF3 , we also concatenate word and character embeddings at the input level. Specially, when the basic sequence is in word level, each word may be constructed by multiple characters through a pooling operation (Word+Char). Our pilot experiments show that average-pooling is the best for DBQA while max-pooling after a dense layer is the best for KBQA. When the basic sequence is in character level, we simply concatenate the character embedding with its corresponding word embedding (Char+Word), since each character belongs to one word only. Again, when the basic sequence is in character level, we can also concatenate the character embedding with a pooled representation of all words that contain this character in the word lattice (Char+Lattice), where we use max pooling as suggested by our pilot experiments.",
"DGCs BIBREF12 , BIBREF18 are strong baselines that perform CNNs over directed graphs to produce high level representation for each vertex in the graph, which can be used to build a sentence representation via certain pooling operation. We therefore choose to compare with DGC-max (with maximum pooling), DGC-ave (with average pooling), and DGC-gated (with gated pooling), where the gate value is computed using the concatenation of the vertex vector and the center vertex vector through a dense layer. We also implement several state-of-the-art matching models using the open-source project MatchZoo BIBREF19 , where we tune hyper-parameters using grid search, e.g., whether using word or character inputs. Arc1, Arc2, CDSSM are traditional CNNs based matching models proposed by BIBREF20 , BIBREF21 . Arc1 and CDSSM compute the similarity via sentence representations and Arc2 uses the word pair similarities. MV-LSTM BIBREF22 computes the matching score by examining the interaction between the representations from two sentences obtained by a shared BiLSTM encoder. MatchPyramid(MP) BIBREF23 utilizes 2D convolutions and pooling strategies over word pair similarity matrices to compute the matching scores.",
"We also compare with the state-of-the-art models in DBQA BIBREF15 , BIBREF16 ."
],
[
"Here, we mainly describe the main results on the DBQA dataset, while we find very similar trends on the KBRE dataset. Table TABREF26 summarizes the main results on the two datasets. We can see that the simple MatchZoo models perform the worst. Although Arc1 and CDSSM are also constructed in the siamese architecture with CNN layers, they do not employ multiple kernel sizes and residual connections, and fail to capture the relatedness in a multi-granularity fashion.",
" BIBREF15 is similar to our word level models (CNN-jieba/PKU/CTB), but outperforms our models by around 3%, since it benefits from an extra interaction layer with fine tuned hyper-parameters. BIBREF16 further incorporates human designed features including POS-tag interaction and TF-IDF scores, achieving state-of-the-art performance in the literature of this DBQA dataset. However, both of them perform worse than our simple CNN-char model, which is a strong baseline because characters, that describe the text in a fine granularity, can relieve word mismatch problem to some extent. And our best LCNs model further outperforms BIBREF16 by .0134 in MRR.",
"For single granularity CNNs, CNN-char performs better than all word level models, because they heavily suffer from word mismatching given one fixed word segmentation result. And the models that utilize different word segmentations can relieve this problem and gain better performance, which can be further improved by the combination of words and characters. The DGCs and LCNs, being able to work on lattice input, outperform all previous models that have sequential inputs, indicating that the word lattice is a more promising form than a single word sequence, and should be better captured by taking the inherent graph structure into account. Although they take the same input, LCNs still perform better than the best DGCs by a margin, showing the advantages of the CNN kernels over multiple n-grams in the lattice structures and the gated pooling strategy.",
"To fairly compare with previous KBQA works, we combine our LCN-ave settings with the entity linking results of the state-of-the-art KBQA model BIBREF14 . The P@1 for question answering of single LCN-ave is 86.31%, which outperforms both the best single model (84.55%) and the best ensembled model (85.40%) in literature."
],
[
"As shown in Table TABREF26 , the combined word level models (e.g. CTB+jieba or PKU+CTB) perform better than any word level CNNs with single word segmentation result (e.g. CNN-CTB or CNN-PKU). The main reason is that there are often no perfect Chinese word segmenters and a single improper segmentation decision may harm the matching performance, since that could further make the word mismatching issue worse, while the combination of different word segmentation results can somehow relieve this situation.",
"Furthermore, the models combining words and characters all perform better than PKU+CTB+jieba, because they could be complementary in different granularities. Specifically, Word+Char is still worse than CNN-char, because Chinese characters have rich meanings and compressing several characters to a single word vector will inevitably lose information. Furthermore, the combined sequence of Word+Char still exploits in a word level, which still suffers from the single segmentation decision. On the other side, the Char+Word model is also slightly worse than CNN-char. We think one reason is that the reduplicated word embeddings concatenated with each character vector confuse the CNNs, and perhaps lead to overfitting. But, we can still see that Char+Word performs better than Word+Char, because the former exploits in a character level and the fine-granularity information actually helps to relieve word mismatch. Note that Char+Lattice outperforms Char+Word, and even slightly better than CNN-char. This illustrates that multiple word segmentations are still helpful to further improve the character level strong baseline CNN-char, which may still benefit from word level information in a multi-granularity fashion.",
"In conclusion, the combination between different sequences and information of different granularities can help improve text matching, showing that it is necessary to consider the fashion which considers both characters and more possible words, which perhaps the word lattice can provide.",
"For DGCs with different kinds of pooling operations, average pooling (DGC-ave) performs the best, which delivers similar performance with LCN-ave. While DGC-max performs a little worse, because it ignores the importance of different edges and the maximum operation is more sensitive to noise than the average operation. The DGC-gated performs the worst. Compared with LCN-gated that learns the gate value adaptively from multiple n-gram context, it is harder for DGC to learn the importance of each edge via the node and the center node in the word lattice. It is not surprising that LCN-gated performs much better than GDC-gated, indicating again that n-grams in word lattice play an important role in context modeling, while DGCs are designed for general directed graphs which may not be perfect to work with word lattice.",
"For LCNs with different pooling operations, LCN-max and LCN-ave lead to similar performances, and perform better on KBRE, while LCN-gated is better on DBQA. This may be due to the fact that sentences in DBQA are relatively longer with more irrelevant information which require to filter noisy context, while on KBRE with much shorter predicate phrases, LCN-gated may slightly overfit due to its more complex model structure. Overall, we can see that LCNs perform better than DGCs, thanks to the advantage of better capturing multiple n-grams context in word lattice.",
"To investigate how LCNs utilize multi-granularity more intuitively, we analyze the MRR score against granularities of overlaps between questions and answers in DBQA dataset, which is shown in Figure FIGREF32 . It is demonstrated that CNN-char performs better than CNN-CTB impressively in first few groups where most of the overlaps are single characters which will cause serious word mismatch. With the growing of the length of overlaps, CNN-CTB is catching up and finally overtakes CNN-char even though its overall performance is much lower. This results show that word information is complementary to characters to some extent. The LCN-gated is approaching the CNN-char in first few groups, and outperforms both character and word level models in next groups, where word level information becomes more powerful. This demonstrates that LCNs can effectively take advantages of different granularities, and the combination will not be harmful even when the matching clues present in extreme cases.",
"How to Create Word Lattice In previous experiments, we construct word lattice via an existing lookup vocabulary, which will introduce some noisy words inevitably. Here we construct from various word segmentations with different strategies to investigate the balance between the noisy words and additional information introduced by word lattice. We only use the DBQA dataset because word lattices here are more complex, so the construction strategies have more influence. Pilot experiments show that word lattices constructed based on character sequence perform better, so the strategies in Table TABREF33 are based on CNN-char.",
"From Table TABREF33 , it is shown that all kinds of lattice are better than CNN-char, which also evidence the usage of word information. And among all LCN models, more complex lattice produces better performance in principle, which indicates that LCNs can handle the noisy words well and the influence of noisy words can not cancel the positive information brought by complex lattices. It is also noticeable that LCN-gated is better than LCN-C+20 by a considerable margin, which shows that the words not in general tokenization (e.g. “livelihood” in Fig FIGREF4 ) are potentially useful.",
"LCNs only introduce inappreciable parameters in gated pooling besides the increasing vocabulary, which will not bring a heavy burden. The training speed is about 2.8 batches per second, 5 times slower than original CNNs, and the whole training of a 2-layer LCN-gated on DBQA dataset only takes about 37.5 minutes. The efficiency may be further improved if the network structure builds dynamically with supported frameworks. The fast speed and little parameter increment give LCNs a promising future in more NLP tasks."
],
[
"Figure FIGREF37 shows a case study comparing models in different input levels. The word level model is relatively coarse in utilizing informations, and finds a sentence with the longest overlap (5 words, 12 characters). However, it does not realize that the question is about numbers of people, and the “DaoHang”(navigate) in question is a verb, but noun in the sentence. The character level model finds a long sentence which covers most of the characters in question, which shows the power of fine-granularity matching. But without the help of words, it is hard to distinguish the “Ren”(people) in “DuoShaoRen”(how many people) and “ChuangShiRen”(founder), so it loses the most important information. While in lattice, although overlaps are limited, “WangZhan”(website, “Wang” web, “Zhan” station) can match “WangZhi”(Internet addresses, “Wang” web, “Zhi” addresses) and also relate to “DaoHang”(navigate), from which it may infer that “WangZhan”(website) refers to “tao606 seller website navigation”(a website name). Moreover, “YongHu”(user) can match “Ren”(people). With cooperations between characters and words, it catches the key points of the question and eliminates the other two candidates, as a result, it finds the correct answer."
],
[
"Deep learning models have been widely adopted in natural language sentence matching. Representation based models BIBREF21 , BIBREF7 , BIBREF0 , BIBREF8 encode and compare matching branches in hidden space. Interaction based models BIBREF23 , BIBREF22 , BIBREF3 incorporates interactions features between all word pairs and adopts 2D-convolution to extract matching features. Our models are built upon the representation based architecture, which is better for short text matching.",
"In recent years, many researchers have become interested in utilizing all sorts of external or multi-granularity information in matching tasks. BIBREF24 exploit hidden units in different depths to realize interaction between substrings with different lengths. BIBREF3 join multiple pooling methods in merging sentence level features, BIBREF4 exploit interactions between different lengths of text spans. For those more similar to our work, BIBREF3 also incorporate characters, which is fed into LSTMs and concatenate the outcomes with word embeddings, and BIBREF8 utilize words together with predicate level tokens in KBRE task. However, none of them exploit the multi-granularity information in word lattice in languages like Chinese that do not have space to segment words naturally. Furthermore, our model has no conflicts with most of them except BIBREF3 and could gain further improvement.",
"GCNs BIBREF25 , BIBREF26 and graph-RNNs BIBREF27 , BIBREF28 have extended CNNs and RNNs to model graph information, and DGCs generalize GCNs on directed graphs in the fields of semantic-role labeling BIBREF12 , document dating BIBREF18 , and SQL query embedding BIBREF29 . However, DGCs control information flowing from neighbor vertexes via edge types, while we focus on capturing different contexts for each word in word lattice via convolutional kernels and poolings.",
"Previous works involved Chinese lattice into RNNs for Chinese-English translation BIBREF10 , Chinese named entity recognition BIBREF11 , and Chinese word segmentation BIBREF30 . To the best of our knowledge, we are the first to conduct CNNs on word lattice, and the first to involve word lattice in matching tasks. And we motivate to utilize multi-granularity information in word lattices to relieve word mismatch and diverse expressions in Chinese question answering, while they mainly focus on error propagations from segmenters."
],
[
"In this paper, we propose a novel neural network matching method (LCNs) for matching based question answering in Chinese. Rather than relying on a word sequence only, our model takes word lattice as input. By performing CNNs over multiple n-gram context to exploit multi-granularity information, LCNs can relieve the word mismatch challenges. Thorough experiments show that our model can better explore the word lattice via convolutional operations and rich context-aware pooling, thus outperforms the state-of-the-art models and competitive baselines by a large margin. Further analyses exhibit that lattice input takes advantages of word and character level information, and the vocabulary based lattice constructor outperforms the strategies that combine characters and different word segmentations together."
],
[
"This work is supported by Natural Science Foundation of China (Grant No. 61672057, 61672058, 61872294); the UK Engineering and Physical Sciences Research Council under grants EP/M01567X/1 (SANDeRs) and EP/M015793/1 (DIVIDEND); and the Royal Society International Collaboration Grant (IE161012). For any correspondence, please contact Yansong Feng."
]
],
"section_name": [
"Introduction",
"Lattice CNNs",
"Siamese Architecture",
"Word Lattice",
"Lattice based CNN Layer",
"Experiments",
"Datasets",
"Evaluation Metrics",
"Implementation Details",
"Baselines",
"Results",
"Analysis and Discussions",
"Case Study",
"Related Work",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"16a08b11f033b08e392175ed187aebd84970919c"
],
"answer": [
{
"evidence": [
"Word Lattice",
"As shown in Figure FIGREF4 , a word lattice is a directed graph INLINEFORM0 , where INLINEFORM1 represents a node set and INLINEFORM2 represents a edge set. For a sentence in Chinese, which is a sequence of Chinese characters INLINEFORM3 , all of its possible substrings that can be considered as words are treated as vertexes, i.e. INLINEFORM4 . Then, all neighbor words are connected by directed edges according to their positions in the original sentence, i.e. INLINEFORM5 ."
],
"extractive_spans": [],
"free_form_answer": "By considering words as vertices and generating directed edges between neighboring words within a sentence",
"highlighted_evidence": [
"Word Lattice\nAs shown in Figure FIGREF4 , a word lattice is a directed graph INLINEFORM0 , where INLINEFORM1 represents a node set and INLINEFORM2 represents a edge set. For a sentence in Chinese, which is a sequence of Chinese characters INLINEFORM3 , all of its possible substrings that can be considered as words are treated as vertexes, i.e. INLINEFORM4 . Then, all neighbor words are connected by directed edges according to their positions in the original sentence, i.e. INLINEFORM5 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0a87b02811796b7a34c65018823bc2bf7b874e4a"
],
"answer": [
{
"evidence": [
"For both datasets, we follow the evaluation metrics used in the original evaluation tasks BIBREF13 . For DBQA, P@1 (Precision@1), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are adopted. For KBRE, since only one golden candidate is labeled for each question, only P@1 and MRR are used."
],
"extractive_spans": [
"Precision@1",
"Mean Average Precision",
"Mean Reciprocal Rank"
],
"free_form_answer": "",
"highlighted_evidence": [
"For both datasets, we follow the evaluation metrics used in the original evaluation tasks BIBREF13 . For DBQA, P@1 (Precision@1), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are adopted. For KBRE, since only one golden candidate is labeled for each question, only P@1 and MRR are used."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"4e2de011ee880e520268d7144efde72ef499a962"
],
"answer": [
{
"evidence": [
"Datasets",
"We conduct experiments on two Chinese question answering datasets from NLPCC-2016 evaluation task BIBREF13 .",
"DBQA is a document based question answering dataset. There are 8.8k questions with 182k question-sentence pairs for training and 6k questions with 123k question-sentence pairs in the test set. In average, each question has 20.6 candidate sentences and 1.04 golden answers. The average length for questions is 15.9 characters, and each candidate sentence has averagely 38.4 characters. Both questions and sentences are natural language sentences, possibly sharing more similar word choices and expressions compared to the KBQA case. But the candidate sentences are extracted from web pages, and are often much longer than the questions, with many irrelevant clauses.",
"KBRE is a knowledge based relation extraction dataset. We follow the same preprocess as BIBREF14 to clean the dataset and replace entity mentions in questions to a special token. There are 14.3k questions with 273k question-predicate pairs in the training set and 9.4k questions with 156k question-predicate pairs for testing. Each question contains only one golden predicate. Each question averagely has 18.1 candidate predicates and 8.1 characters in length, while a KB predicate is only 3.4 characters long on average. Note that a KB predicate is usually a concise phrase, with quite different word choices compared to the natural language questions, which poses different challenges to solve."
],
"extractive_spans": [
"DBQA",
"KBRE"
],
"free_form_answer": "",
"highlighted_evidence": [
"Datasets\nWe conduct experiments on two Chinese question answering datasets from NLPCC-2016 evaluation task BIBREF13 .",
"DBQA is a document based question answering dataset. ",
"KBRE is a knowledge based relation extraction dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How do they obtain word lattices from words?",
"Which metrics do they use to evaluate matching?",
"Which dataset(s) do they evaluate on?"
],
"question_id": [
"76377e5bb7d0a374b0aefc54697ac9cd89d2eba8",
"85aa125b3a15bbb6f99f91656ca2763e8fbdb0ff",
"4b128f9e94d242a8e926bdcb240ece279d725729"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A word lattice for the phrase “Chinese people have high quality of life.”",
"Figure 2: An illustration of our LCN-gated, when “人民” (people) is being considered as the center of convolutional spans.",
"Table 1: The performance of all models on the two datasets. The best results in each group are bolded. * is the best published DBQA result.",
"Figure 3: MRR score against granularities of overlaps between questions and answers, which is the average length of longest common substrings. About 2.3% questions are ignored for they have no overlaps and the rests are separated in 12 groups orderly and equally. Group 1 has the least average overlap length while group 12 has the largest.",
"Table 2: Comparisons of various ways to construct word lattice. l.qu and l.sen are the average token number in questions and sentences respectively. The 4 models in the middle construct lattices by adding words to CNN-char. +2& considers the intersection of words of CTB and PKU mode while +2 considers the union. +20 uses the top 10 results of the two segmentors.",
"Table 3: Example, questions (in word) and 3 sentences selected by 3 systems. Bold mean sequence exactly match between question and answer."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"6-Figure3-1.png",
"6-Table2-1.png",
"7-Table3-1.png"
]
} | [
"How do they obtain word lattices from words?"
] | [
[
"1902.09087-Word Lattice-0"
]
] | [
"By considering words as vertices and generating directed edges between neighboring words within a sentence"
] | 164 |
1908.07816 | A Multi-Turn Emotionally Engaging Dialog Model | Open-domain dialog systems (also known as chatbots) have increasingly drawn attention in natural language processing. Some of the recent work aims at incorporating affect information into sequence-to-sequence neural dialog modeling, making the responses emotionally richer, while other work uses hand-crafted rules to determine the desired emotional response. However, these approaches do not explicitly learn the subtle emotional interactions captured in human dialogs. In this paper, we propose a multi-turn dialog system aimed at learning and generating the kind of emotional responses that, so far, only humans know how to produce. In offline experiments, our method achieves the best perplexity scores compared with two baseline models. Further human evaluations confirm that our chatbot can keep track of the conversation context and generate emotionally more appropriate responses while performing equally well on grammar. | {
"paragraphs": [
[
"Recent development in neural language modeling has generated significant excitement in the open-domain dialog generation community. The success of sequence-to-sequence learning BIBREF0, BIBREF1 in the field of neural machine translation has inspired researchers to apply the recurrent neural network (RNN) encoder-decoder structure to response generation BIBREF2. Specifically, the encoder RNN reads the input message, encodes it into a fixed context vector, and the decoder RNN uses it to generate the response. Shang et al. BIBREF3 applied the same structure combined with attention mechanism BIBREF4 on Twitter-style microblogging data. Following the vanilla sequence-to-sequence structure, various improvements have been made on the neural conversation model—for example, increasing the diversity of the response BIBREF5, BIBREF6, modeling personalities of the speakers BIBREF7, and developing topic aware dialog systems BIBREF8.",
"Some of the recent work aims at incorporating affect information into neural conversational models. While making the responses emotionally richer, existing approaches either explicitly require an emotion label as input BIBREF9, or rely on hand-crafted rules to determine the desired emotion responses BIBREF10, BIBREF11, ignoring the subtle emotional interactions captured in multi-turn conversations, which we believe to be an important aspect of human dialogs. For example, Gottman BIBREF12 found that couples are likely to practice the so called emotional reciprocity. When an argument starts, one partner's angry and aggressive utterance is often met with equally furious and negative utterance, resulting in more heated exchanges. On the other hand, responding with complementary emotions (such as reassurance and sympathy) is more likely to lead to a successful relationship. However, to the best of our knowledge, the psychology and social science literature does not offer clear rules for emotional interaction. It seems such social and emotional intelligence is captured in our conversations. This is why we believe that the data driven approach will have an advantage.",
"In this paper, we propose an end-to-end data driven multi-turn dialog system capable of learning and generating emotionally appropriate and human-like responses with the ultimate goal of reproducing social behaviors that are habitual in human-human conversations. We chose the multi-turn setting because only in such cases is the emotion appropriateness most necessary. To this end, we employ the latest multi-turn dialog model by Xing et al. BIBREF13, but we add an additional emotion RNN to process the emotional information in each history utterance. By leveraging an external text analysis program, we encode the emotion aspects of each utterance into a fixed-sized one-zero vector. This emotion RNN reads and encodes the input affect information, and then uses the final hidden state as the emotion representation vector for the context. When decoding, at each time step, this emotion vector is concatenated with the hidden state of the decoder and passed to the softmax layer to produce the probability distribution over the vocabulary.",
"Thereby, our contributions are threefold. (1) We propose a novel emotion-tracking dialog generation model that learns the emotional interactions directly from the data. This approach is free of human-defined heuristic rules, and hence, is more robust and fundamental than those described in existing work BIBREF9, BIBREF10, BIBREF11. (2) We apply the emotion-tracking mechanism to multi-turn dialogs, which has never been attempted before. Human evaluation shows that our model produces responses that are emotionally more appropriate than the baselines, while slightly improving the language fluency. (3) We illustrate a human-evaluation approach for judging machine-produced emotional dialogs. We consider factors such as the balance of positive and negative sentiments in test dialogs, a well-chosen range of topics, and dialogs that our human evaluators can relate. It is the first time such an approach is designed with consideration for the human judges. Our main goal is to increase the objectivity of the results and reduce judges' mistakes due to out-of-context dialogs they have to evaluate.",
"The rest of the paper unfolds as follows. Section SECREF2 discusses some related work. In Section SECREF3, we give detailed description of the methodology. We present experimental results and some analysis in Section SECREF4. The paper is concluded in Section SECREF5, followed by some future work we plan to do."
],
[
"Many early open-domain dialog systems are rule-based and often require expert knowledge to develop. More recent work in response generation seeks data-driven solutions, leveraging on machine learning techniques and the availability of data. Ritter et al. BIBREF14 first applied statistical machine translation (SMT) methods to this area. However, it turns out that bilingual translation and response generation are different. The source and target sentences in translation share the same meaning; thus the words in the two sentences tend to align well with each other. However, for response generation, one could have many equally good responses for a single input. Later studies use the sequence-to-sequence neural framework to model dialogs, followed by various improving work on the quality of the responses, especially the emotional aspects of the conversations.",
"The vanilla RNN encoder-decoder is usually applied to single-turn response generation, where the response is generated based on one single input message. In multi-turn settings, where a context with multiple history utterances is given, the same structure often ignores the hierarchical characteristic of the context. Some recent work addresses this problem by adopting a hierarchical recurrent encoder-decoder (HRED) structure BIBREF15, BIBREF16, BIBREF17. To give attention to different parts of the context while generating responses, Xing et al. BIBREF13 proposed the hierarchical recurrent attention network (HRAN) that uses a hierarchical attention mechanism. However, these multi-turn dialog models do not take into account the turn-taking emotional changes of the dialog.",
"Recent work on incorporating affect information into natural language processing tasks, such as building emotional dialog systems and affect language models, has inspired our current work. For example, the Emotional Chatting Machine (ECM) BIBREF9 takes as input a post and a specified emotional category and generates a response that belongs to the pre-defined emotion category. The main idea is to use an internal memory module to capture the emotion dynamics during decoding, and an external memory module to model emotional expressions explicitly by assigning different probability values to emotional words as opposed to regular words. However, the problem setting requires an emotional label as an input, which might be unpractical in real scenarios. Asghar et al. BIBREF10 proposed to augment the word embeddings with a VAD (valence, arousal, and dominance) affective space by using an external dictionary, and designed three affect-related loss functions, namely minimizing affective dissonance, maximizing affective dissonance, and maximizing affective content. The paper also proposed the affectively diverse beam search during decoding, so that the generated candidate responses are as affectively diverse as possible. However, literature in affective science does not necessarily validate such rules. In fact, the best strategy to speak to an angry customer is the de-escalation strategy (using neutral words to validate anger) rather than employing equally emotional words (minimizing affect dissonance) or words that convey happiness (maximizing affect dissonance). Zhong et al. BIBREF11 proposed a biased attention mechanism on affect-rich words in the input message, also by taking advantage of the VAD embeddings. The model is trained with a weighted cross-entropy loss function, which encourages the generation of emotional words. However, these models only deal with single-turn conversations. More importantly, they all adopt hand-coded emotion responding mechanisms. To our knowledge, we are the first to consider modeling the emotional flow and its appropriateness in a multi-turn dialog system by learning from humans."
],
[
"In this paper, we consider the problem of generating response $\\mathbf {y}$ given a context $\\mathbf {X}$ consisting of multiple previous utterances by estimating the probability distribution $p(\\mathbf {y}\\,|\\,\\mathbf {X})$ from a data set $\\mathcal {D}=\\lbrace (\\mathbf {X}^{(i)},\\mathbf {y}^{(i)})\\rbrace _{i=1}^N$ containing $N$ context-response pairs. Here",
"is a sequence of $m_i$ utterances, and",
"is a sequence of $n_{ij}$ words. Similarly,",
"is the response with $T_i$ words.",
"Usually the probability distribution $p(\\mathbf {y}\\,|\\,\\mathbf {X})$ can be modeled by an RNN language model conditioned on $\\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\\mathbf {e}$, which is combined with $\\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\\mathbf {c}_t$ and $\\mathbf {e}$, and how they are combined in the decoding part."
],
[
"The hierarchical attention structure involves two encoders to produce the dialog context vector $\\mathbf {c}_t$, namely the word-level encoder and the utterance-level encoder. The word-level encoder is essentially a bidirectional RNN with gated recurrent units (GRU) BIBREF1. For utterance $\\mathbf {x}_j$ in $\\mathbf {X}$ ($j=1,2,\\dots ,m$), the bidirectional encoder produces two hidden states at each word position $k$, the forward hidden state $\\mathbf {h}^\\mathrm {f}_{jk}$ and the backward hidden state $\\mathbf {h}^\\mathrm {b}_{jk}$. The final hidden state $\\mathbf {h}_{jk}$ is then obtained by concatenating the two,",
"The utterance-level encoder is a unidirectional RNN with GRU that goes from the last utterance in the context to the first, with its input at each step as the summary of the corresponding utterance, which is obtained by applying a Bahdanau-style attention mechanism BIBREF4 on the word-level encoder output. More specifically, at decoding step $t$, the summary of utterance $\\mathbf {x}_j$ is a linear combination of $\\mathbf {h}_{jk}$, for $k=1,2,\\dots ,n_j$,",
"Here $\\alpha _{jk}^t$ is the word-level attention score placed on $\\mathbf {h}_{jk}$, and can be calculated as",
"where $\\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, $\\mathbf {\\ell }_{j+1}^t$ is the previous hidden state of the utterance-level encoder, and $\\mathbf {v}_a$, $\\mathbf {U}_a$, $\\mathbf {V}_a$ and $\\mathbf {W}_a$ are word-level attention parameters. The final dialog context vector $\\mathbf {c}_t$ is then obtained as another linear combination of the outputs of the utterance-level encoder $\\mathbf {\\ell }_{j}^t$, for $j=1,2,\\dots ,m$,",
"Here $\\beta _{j}^t$ is the utterance-level attention score placed on $\\mathbf {\\ell }_{j}^t$, and can be calculated as",
"where $\\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, and $\\mathbf {v}_b$, $\\mathbf {U}_b$ and $\\mathbf {W}_b$ are utterance-level attention parameters."
],
[
"In order to capture the emotion information carried in the context $\\mathbf {X}$, we utilize an external text analysis program called the Linguistic Inquiry and Word Count (LIWC) BIBREF18. LIWC accepts text files as input, and then compares each word in the input with a user-defined dictionary, assigning it to one or more of the pre-defined psychologically-relevant categories. We make use of five of these categories, related to emotion, namely positive emotion, negative emotion, anxious, angry, and sad. Using the newest version of the program LIWC2015, we are able to map each utterance $\\mathbf {x}_j$ in the context to a six-dimensional indicator vector ${1}(\\mathbf {x}_j)$, with the first five entries corresponding to the five emotion categories, and the last one corresponding to neutral. If any word in $\\mathbf {x}_j$ belongs to one of the five categories, then the corresponding entry in ${1}(\\mathbf {x}_j)$ is set to 1; otherwise, $\\mathbf {x}_j$ is treated as neutral, with the last entry of ${1}(\\mathbf {x}_j)$ set to 1. For example, assuming $\\mathbf {x}_j=$ “he is worried about me”, then",
"since the word “worried” is assigned to both negative emotion and anxious. We apply a dense layer with sigmoid activation function on top of ${1}(\\mathbf {x}_j)$ to embed the emotion indicator vector into a continuous space,",
"where $\\mathbf {W}_e$ and $\\mathbf {b}_e$ are trainable parameters. The emotion flow of the context $\\mathbf {X}$ is then modeled by an unidirectional RNN with GRU going from the first utterance in the context to the last, with its input being $\\mathbf {a}_j$ at each step. The final emotion context vector $\\mathbf {e}$ is obtained as the last hidden state of this emotion encoding RNN."
],
[
"The probability distribution $p(\\mathbf {y}\\,|\\,\\mathbf {X})$ can be written as",
"We model the probability distribution using an RNN language model along with the emotion context vector $\\mathbf {e}$. Specifically, at time step $t$, the hidden state of the decoder $\\mathbf {s}_t$ is obtained by applying the GRU function,",
"where $\\mathbf {w}_{y_{t-1}}$ is the word embedding of $y_{t-1}$. Similar to Affect-LM BIBREF19, we then define a new feature vector $\\mathbf {o}_t$ by concatenating $\\mathbf {s}_t$ with the emotion context vector $\\mathbf {e}$,",
"on which we apply a softmax layer to obtain a probability distribution over the vocabulary,",
"Each term in Equation (DISPLAY_FORM16) is then given by",
"We use the cross-entropy loss as our objective function"
],
[
"We trained our model using two different datasets and compared its performance with HRAN as well as the basic sequence-to-sequence model by performing both offline and online testings."
],
[
"We use two different dialog corpora to train our model—the Cornell Movie Dialogs Corpus BIBREF20 and the DailyDialog dataset BIBREF21.",
"Cornell Movie Dialogs Corpus. The dataset contains 83,097 dialogs (220,579 conversational exchanges) extracted from raw movie scripts. In total there are 304,713 utterances.",
"DailyDialog. The dataset is developed by crawling raw data from websites used for language learners to learn English dialogs in daily life. It contains 13,118 dialogs in total.",
"We summarize some of the basic information regarding the two datasets in Table TABREF25.",
"In our experiments, the models are first trained on the Cornell Movie Dialogs Corpus, and then fine-tuned on the DailyDialog dataset. We adopted this training pattern because the Cornell dataset is bigger but noisier, while DailyDialog is smaller but more daily-based. To create a training set and a validation set for each of the two datasets, we take segments of each dialog with number of turns no more than six, to serve as the training/validation examples. Specifically, for each dialog $\\mathbf {D}=(\\mathbf {x}_1,\\mathbf {x}_2,\\dots ,\\mathbf {x}_M)$, we create $M-1$ context-response pairs, namely $\\mathbf {U}_i=(\\mathbf {x}_{s_i},\\dots ,\\mathbf {x}_i)$ and $\\mathbf {y}_i=\\mathbf {x}_{i+1}$, for $i=1,2,\\dots ,M-1$, where $s_i=\\max (1,i-4)$. We filter out those pairs that have at least one utterance with length greater than 30. We also reduce the frequency of those pairs whose responses appear too many times (the threshold is set to 10 for Cornell, and 5 for DailyDialog), to prevent them from dominating the learning procedure. See Table TABREF25 for the sizes of the training and validation sets. The test set consists of 100 dialogs with four turns. We give more detailed description of how we create the test set in Section SECREF31."
],
[
"We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one.",
"For all the models, the vocabulary consists of 20,000 most frequent words in the Cornell and DailyDialog datasets, plus three extra tokens: <unk> for words that do not exist in the vocabulary, <go> indicating the begin of an utterance, and <eos> indicating the end of an utterance. Here we summarize the configurations and parameters of our experiments:",
"We set the word embedding size to 256. We initialized the word embeddings in the models with word2vec BIBREF22 vectors first trained on Cornell and then fine-tuned on DailyDialog, consistent with the training procedure of the models.",
"We set the number of hidden units of each RNN to 256, the word-level attention depth to 256, and utterance-level 128. The output size of the emotion embedding layer is 256.",
"We optimized the objective function using the Adam optimizer BIBREF23 with an initial learning rate of 0.001. We stopped training the models when the lowest perplexity on the validation sets was achieved.",
"For prediction, we used beam search BIBREF24 with a beam width of 256."
],
[
"The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work."
],
[
"To develop a test set for human evaluation, we first selected the emotionally colored dialogs with exactly four turns from the DailyDialog dataset. In the dataset each dialog turn is annotated with a corresponding emotional category, including the neutral one. For our purposes we filtered out only those dialogs where more than a half of utterances have non-neutral emotional labels. This gave us 78 emotionally positive dialogs and 14 emotionally negative dialogs. In order to have a balanced test set with equal number of positive and negative dialogs, we recruited two English-speaking students from our university without any relationship to the authors' lab and instructed them to create five negative dialogs with four turns, as if they were interacting with another human, according to each of the following topics: relationships, entertainment, service, work and study, and everyday situations. Thus each person produced 25 dialogs, and in total we obtained 50 emotionally negative daily dialogs in addition to the 14 already available. To form the test set, we randomly selected 50 emotionally positive and 50 emotionally negative dialogs from the two pools of dialogs described above (78 positive dialogs from DailyDialog, 64 negative dialogs from DailyDialog and human-generated).",
"For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral."
],
[
"Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted t-test on the perplexity obtained, and results show significant improvements (with $p$-value $<0.05$).",
"Table TABREF34, TABREF35 and TABREF35 summarize the human evaluation results on the responses' grammatical correctness, contextual coherence, and emotional appropriateness, respectively. In the tables, we give the percentage of votes each model received for the three scores, the average score obtained with improvements over S2S, and the agreement score among the raters. Note that we report Fleiss' $\\kappa $ score BIBREF27 for contextual coherence and emotional appropriateness, and Finn's $r$ score BIBREF28 for grammatical correctness. We did not use Fleiss' $\\kappa $ score for grammatical correctness. As agreement is extremely high, this can make Fleiss' $\\kappa $ very sensitive to prevalence BIBREF29. On the contrary, we did not use Finn's $r$ score for contextual coherence and emotional appropriateness because it is only reasonable when the observed variance is significantly less than the chance variance BIBREF30, which did not apply to these two criteria. As shown in the tables, we got high agreement among the raters for grammatical correctness, and fair agreement among the raters for contextual coherence and emotional appropriateness. For grammatical correctness, all three models achieved high scores, which means all models are capable of generating fluent utterances that make sense. For contextual coherence and emotional appropriateness, MEED achieved higher average scores than S2S and HRAN, which means MEED keeps better track of the context and can generate responses that are emotionally more appropriate and natural. We conducted Friedman test BIBREF31 on the human evaluation results, showing the improvements of MEED are significant (with $p$-value $<0.01$)."
],
[
"We present four sample dialogs in Table TABREF36, along with the responses generated by the three models. Dialog 1 and 2 are emotionally positive and dialog 3 and 4 are negative. For the first two examples, we can see that MEED is able to generate more emotional content (like “fun” and “congratulations”) that is appropriate according to the context. For dialog 4, MEED responds in sympathy to the other speaker, which is consistent with the second utterance in the context. On the contrary, HRAN poses a question in reply, contradicting the dialog history."
],
[
"According to the Media Equation Theory BIBREF32, people respond to computers socially. This means humans expect talking to computers as they talk to other human beings. This is why we believe reproducing social and conversational intelligence will make social chatbots more believable and socially engaging. In this paper, we propose a multi-turn dialog system capable of generating emotionally appropriate responses, which is the first step toward such a goal. We have demonstrated how to do so by (1) modeling utterances with extra affect vectors, (2) creating an emotional encoding mechanism that learns emotion exchanges in the dataset, (3) curating a multi-turn dialog dataset, and (4) evaluating the model with offline and online experiments.",
"As future work, we would like to investigate the diversity issue of the responses generated, possibly by extending the mutual information objective function BIBREF5 to multi-turn settings. We would also like to evaluate our model on a larger dataset, for example by extracting multi-turn dialogs from the OpenSubtitles corpus BIBREF33."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model",
"Model ::: Hierarchical Attention",
"Model ::: Emotion Encoder",
"Model ::: Decoding",
"Evaluation",
"Evaluation ::: Datasets",
"Evaluation ::: Baselines and Implementation",
"Evaluation ::: Evaluation Metrics",
"Evaluation ::: Evaluation Metrics ::: Human evaluation setup",
"Evaluation ::: Results",
"Evaluation ::: Results ::: Case Study",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"3e9e850087de48e5d3228f9b691cf66ce2f76a7d"
],
"answer": [
{
"evidence": [
"Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted t-test on the perplexity obtained, and results show significant improvements (with $p$-value $<0.05$).",
"FLOAT SELECTED: Table 2: Perplexity scores achieved by the models. Validation set 1 comes from the Cornell dataset, while validation set 2 comes from the DailyDialog dataset."
],
"extractive_spans": [],
"free_form_answer": "Perplexity of proposed MEED model is 19.795 vs 19.913 of next best result on test set.",
"highlighted_evidence": [
"Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets.",
"FLOAT SELECTED: Table 2: Perplexity scores achieved by the models. Validation set 1 comes from the Cornell dataset, while validation set 2 comes from the DailyDialog dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1237400bc18aa2feb5b5b332cf59adb203fd6651"
],
"answer": [
{
"evidence": [
"Usually the probability distribution $p(\\mathbf {y}\\,|\\,\\mathbf {X})$ can be modeled by an RNN language model conditioned on $\\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\\mathbf {e}$, which is combined with $\\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\\mathbf {c}_t$ and $\\mathbf {e}$, and how they are combined in the decoding part."
],
"extractive_spans": [
"we extract the emotion information from the utterances in $\\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\\mathbf {e}$, which is combined with $\\mathbf {c}_t$ to produce the distribution"
],
"free_form_answer": "",
"highlighted_evidence": [
"When generating the word $y_t$ at time step $t$, the context $\\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\\mathbf {e}$, which is combined with $\\mathbf {c}_t$ to produce the distribution."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"f2435e8054869e57ba5863e7f59aa3d71f02a192"
],
"answer": [
{
"evidence": [
"For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral."
],
"extractive_spans": [
"(1) grammatical correctness",
"(2) contextual coherence",
"(3) emotional appropriateness"
],
"free_form_answer": "",
"highlighted_evidence": [
"According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0acfb84fc15d0f06485d0196203c9178db36f859"
],
"answer": [
{
"evidence": [
"The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8d48966aa92b8ab8b8e1a03c138e1db25ba93db5"
],
"answer": [
{
"evidence": [
"We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one."
],
"extractive_spans": [
" sequence-to-sequence model (denoted as S2S)",
"HRAN"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How better is proposed method than baselines perpexity wise?",
"How does the multi-turn dialog system learns?",
"How is human evaluation performed?",
"Is some other metrics other then perplexity measured?",
"What two baseline models are used?"
],
"question_id": [
"c034f38a570d40360c3551a6469486044585c63c",
"9cbea686732b5b85f77868ca47d2f93cf34516ed",
"6aee16c4f319a190c2a451c1c099b66162299a28",
"4d4b9ff2da51b9e0255e5fab0b41dfe49a0d9012",
"180047e1ccfc7c98f093b8d1e1d0479a4cca99cc"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The overall architecture of our model.",
"Table 1: Statistics of the two datasets.",
"Table 2: Perplexity scores achieved by the models. Validation set 1 comes from the Cornell dataset, while validation set 2 comes from the DailyDialog dataset.",
"Table 5: Human evaluation results on emotional appropriateness.",
"Table 4: Human evaluation results on contextual coherence.",
"Table 6: Sample responses for the three models."
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"7-Table2-1.png",
"7-Table5-1.png",
"7-Table4-1.png",
"8-Table6-1.png"
]
} | [
"How better is proposed method than baselines perpexity wise?"
] | [
[
"1908.07816-7-Table2-1.png",
"1908.07816-Evaluation ::: Results-0"
]
] | [
"Perplexity of proposed MEED model is 19.795 vs 19.913 of next best result on test set."
] | 167 |
1808.09409 | Semantic Role Labeling for Learner Chinese: the Importance of Syntactic Parsing and L2-L1 Parallel Data | This paper studies semantic parsing for interlanguage (L2), taking semantic role labeling (SRL) as a case task and learner Chinese as a case language. We first manually annotate the semantic roles for a set of learner texts to derive a gold standard for automatic SRL. Based on the new data, we then evaluate three off-the-shelf SRL systems, i.e., the PCFGLA-parser-based, neural-parser-based and neural-syntax-agnostic systems, to gauge how successful SRL for learner Chinese can be. We find two non-obvious facts: 1) the L1-sentence-trained systems performs rather badly on the L2 data; 2) the performance drop from the L1 data to the L2 data of the two parser-based systems is much smaller, indicating the importance of syntactic parsing in SRL for interlanguages. Finally, the paper introduces a new agreement-based model to explore the semantic coherency information in the large-scale L2-L1 parallel data. We then show such information is very effective to enhance SRL for learner texts. Our model achieves an F-score of 72.06, which is a 2.02 point improvement over the best baseline. | {
"paragraphs": [
[
"A learner language (interlanguage) is an idiolect developed by a learner of a second or foreign language which may preserve some features of his/her first language. Previously, encouraging results of automatically building the syntactic analysis of learner languages were reported BIBREF0 , but it is still unknown how semantic processing performs, while parsing a learner language (L2) into semantic representations is the foundation of a variety of deeper analysis of learner languages, e.g., automatic essay scoring. In this paper, we study semantic parsing for interlanguage, taking semantic role labeling (SRL) as a case task and learner Chinese as a case language.",
"Before discussing a computation system, we first consider the linguistic competence and performance. Can human robustly understand learner texts? Or to be more precise, to what extent, a native speaker can understand the meaning of a sentence written by a language learner? Intuitively, the answer is towards the positive side. To validate this, we ask two senior students majoring in Applied Linguistics to carefully annotate some L2-L1 parallel sentences with predicate–argument structures according to the specification of Chinese PropBank BIBREF1 , which is developed for L1. A high inter-annotator agreement is achieved, suggesting the robustness of language comprehension for L2. During the course of semantic annotation, we find a non-obvious fact that we can re-use the semantic annotation specification, Chinese PropBank in our case, which is developed for L1. Only modest rules are needed to handle some tricky phenomena. This is quite different from syntactic treebanking for learner sentences, where defining a rich set of new annotation heuristics seems necessary BIBREF2 , BIBREF0 , BIBREF3 .",
"Our second concern is to mimic the human's robust semantic processing ability by computer programs. The feasibility of reusing the annotation specification for L1 implies that we can reuse standard CPB data to train an SRL system to process learner texts. To test the robustness of the state-of-the-art SRL algorithms, we evaluate two types of SRL frameworks. The first one is a traditional SRL system that leverages a syntactic parser and heavy feature engineering to obtain explicit information of semantic roles BIBREF4 . Furthermore, we employ two different parsers for comparison: 1) the PCFGLA-based parser, viz. Berkeley parser BIBREF5 , and 2) a minimal span-based neural parser BIBREF6 . The other SRL system uses a stacked BiLSTM to implicitly capture local and non-local information BIBREF7 . and we call it the neural syntax-agnostic system. All systems can achieve state-of-the-art performance on L1 texts but show a significant degradation on L2 texts. This highlights the weakness of applying an L1-sentence-trained system to process learner texts.",
"While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL. Therefore, syntactic parsing helps build more generalizable SRL models that transfer better to new languages, and enhancing syntactic parsing can improve SRL to some extent.",
"Our last concern is to explore the potential of a large-scale set of L2-L1 parallel sentences to enhance SRL systems. We find that semantic structures of the L2-L1 parallel sentences are highly consistent. This inspires us to design a novel agreement-based model to explore such semantic coherency information. In particular, we define a metric for comparing predicate–argument structures and searching for relatively good automatic syntactic and semantic annotations to extend the training data for SRL systems. Experiments demonstrate the value of the L2-L1 parallel sentences as well as the effectiveness of our method. We achieve an F-score of 72.06, which is a 2.02 percentage point improvement over the best neural-parser-based baseline.",
"To the best of our knowledge, this is the first time that the L2-L1 parallel data is utilized to enhance NLP systems for learner texts.",
"For research purpose, we have released our SRL annotations on 600 sentence pairs and the L2-L1 parallel dataset ."
],
[
"An L2-L1 parallel corpus can greatly facilitate the analysis of a learner language BIBREF9 . Following mizumoto:2011, we collected a large dataset of L2-L1 parallel texts of Mandarin Chinese by exploring “language exchange\" social networking services (SNS), i.e., Lang-8, a language-learning website where native speakers can freely correct the sentences written by foreign learners. The proficiency levels of the learners are diverse, but most of the learners, according to our judgment, is of intermediate or lower level.",
"Our initial collection consists of 1,108,907 sentence pairs from 135,754 essays. As there is lots of noise in raw sentences, we clean up the data by (1) ruling out redundant content, (2) excluding sentences containing foreign words or Chinese phonetic alphabet by checking the Unicode values, (3) dropping overly simple sentences which may not be informative, and (4) utilizing a rule-based classifier to determine whether to include the sentence into the corpus.",
"The final corpus consists of 717,241 learner sentences from writers of 61 different native languages, in which English and Japanese constitute the majority. As for completeness, 82.78% of the Chinese Second Language sentences on Lang-8 are corrected by native human annotators. One sentence gets corrected approximately 1.53 times on average.",
"In this paper, we manually annotate the predicate–argument structures for the 600 L2-L1 pairs as the basis for the semantic analysis of learner Chinese. It is from the above corpus that we carefully select 600 pairs of L2-L1 parallel sentences. We would choose the most appropriate one among multiple versions of corrections and recorrect the L1s if necessary. Because word structure is very fundamental for various NLP tasks, our annotation also contains gold word segmentation for both L2 and L1 sentences. Note that there are no natural word boundaries in Chinese text. We first employ a state-of-the-art word segmentation system to produce initial segmentation results and then manually fix segmentation errors.",
"The dataset includes four typologically different mother tongues, i.e., English (ENG), Japanese (JPN), Russian (RUS) and Arabic (ARA). Sub-corpus of each language consists of 150 sentence pairs. We take the mother languages of the learners into consideration, which have a great impact on grammatical errors and hence automatic semantic analysis. We hope that four selected mother tongues guarantee a good coverage of typologies. The annotated corpus can be used both for linguistic investigation and as test data for NLP systems."
],
[
"Semantic role labeling (SRL) is the process of assigning semantic roles to constituents or their head words in a sentence according to their relationship to the predicates expressed in the sentence. Typical semantic roles can be divided into core arguments and adjuncts. The core arguments include Agent, Patient, Source, Goal, etc, while the adjuncts include Location, Time, Manner, Cause, etc.",
"To create a standard semantic-role-labeled corpus for learner Chinese, we first annotate a 50-sentence trial set for each native language. Two senior students majoring in Applied Linguistics conducted the annotation. Based on a total of 400 sentences, we adjudicate an initial gold standard, adapting and refining CPB specification as our annotation heuristics. Then the two annotators proceed to annotate a 100-sentence set for each language independently. It is on these larger sets that we report the inter-annotator agreement.",
"In the final stage, we also produce an adjudicated gold standard for all 600 annotated sentences. This was achieved by comparing the annotations selected by each annotator, discussing the differences, and either selecting one as fully correct or creating a hybrid representing the consensus decision for each choice point. When we felt that the decisions were not already fully guided by the existing annotation guidelines, we worked to articulate an extension to the guidelines that would support the decision.",
"During the annotation, the annotators apply both position labels and semantic role labels. Position labels include S, B, I and E, which are used to mark whether the word is an argument by itself, or at the beginning or in the middle or at the end of a argument. As for role labels, we mainly apply representations defined by CPB BIBREF1 . The predicate in a sentence was labeled as rel, the core semantic roles were labeled as AN and the adjuncts were labeled as AM."
],
[
"For inter-annotator agreement, we evaluate the precision (P), recall (R), and F1-score (F) of the semantic labels given by the two annotators. Table TABREF5 shows that our inter-annotator agreement is promising. All L1 texts have F-score above 95, and we take this as a reflection that our annotators are qualified. F-scores on L2 sentences are all above 90, just a little bit lower than those of L1, indicating that L2 sentences can be greatly understood by native speakers. Only modest rules are needed to handle some tricky phenomena:",
"The labeled argument should be strictly limited to the core roles defined in the frameset of CPB, though the number of arguments in L2 sentences may be more or less than the number defined.",
"For the roles in L2 that cannot be labeled as arguments under the specification of CPB, if they provide semantic information such as time, location and reason, we would labeled them as adjuncts though they may not be well-formed adjuncts due to the absence of function words.",
"For unnecessary roles in L2 caused by mistakes of verb subcategorization (see examples in Figure FIGREF30 ), we would leave those roles unlabeled.",
"Table TABREF10 further reports agreements on each argument (AN) and adjunct (AM) in detail, according to which the high scores are attributed to the high agreement on arguments (AN). The labels of A3 and A4 have no disagreement since they are sparse in CPB and are usually used to label specific semantic roles that have little ambiguity.",
"We also conducted in-depth analysis on inter-annotator disagreement. For further details, please refer to duan2018argument."
],
[
"The work on SRL has included a broad spectrum of machine learning and deep learning approaches to the task. Early work showed that syntactic information is crucial for learning long-range dependencies, syntactic constituency structure and global constraints BIBREF10 , BIBREF11 , while initial studies on neural methods achieved state-of-the-art results with little to no syntactic input BIBREF12 , BIBREF13 , BIBREF14 , BIBREF7 . However, the question whether fully labeled syntactic structures provide an improvement for neural SRL is still unsettled pending further investigation.",
"To evaluate the robustness of state-of-the-art SRL algorithms, we evaluate two representative SRL frameworks. One is a traditional syntax-based SRL system that leverages a syntactic parser and manually crafted features to obtain explicit information to find semantic roles BIBREF15 , BIBREF16 In particular, we employ the system introduced in BIBREF4 . This system first collects all c-commanders of a predicate in question from the output of a parser and puts them in order. It then employs a first order linear-chain global linear model to perform semantic tagging. For constituent parsing, we use two parsers for comparison, one is Berkeley parser BIBREF5 , a well-known implementation of the unlexicalized latent variable PCFG model, the other is a minimal span-based neural parser based on independent scoring of labels and spans BIBREF6 . As proposed in BIBREF6 , the second parser is capable of achieving state-of-the-art single-model performance on the Penn Treebank. On the Chinese TreeBank BIBREF17 , it also outperforms the Berkeley parser for the in-domain test. We call the corresponding SRL systems as the PCFGLA-parser-based and neural-parser-based systems.",
"The second SRL framework leverages an end-to-end neural model to implicitly capture local and non-local information BIBREF12 , BIBREF7 . In particular, this framework treats SRL as a BIO tagging problem and uses a stacked BiLSTM to find informative embeddings. We apply the system introduced in BIBREF7 for experiments. Because all syntactic information (including POS tags) is excluded, we call this system the neural syntax-agnostic system.",
"To train the three SRL systems as well as the supporting parsers, we use the CTB and CPB data . In particular, the sentences selected for the CoNLL 2009 shared task are used here for parameter estimation. Note that, since the Berkeley parser is based on PCFGLA grammar, it may fail to get the syntactic outputs for some sentences, while the other parser does not have that problem. In this case, we have made sure that both parsers can parse all 1,200 sentences successfully."
],
[
"The overall performances of the three SRL systems on both L1 and L2 data (150 parallel sentences for each mother tongue) are shown in Table TABREF11 . For all systems, significant decreases on different mother languages can be consistently observed, highlighting the weakness of applying L1-sentence-trained systems to process learner texts. Comparing the two syntax-based systems with the neural syntax-agnostic system, we find that the overall INLINEFORM0 F, which denotes the F-score drop from L1 to L2, is smaller in the syntax-based framework than in the syntax-agnostic system. On English, Japanese and Russian L2 sentences, the syntax-based system has better performances though it sometimes works worse on the corresponding L1 sentences, indicating the syntax-based systems are more robust when handling learner texts.",
"Furthermore, the neural-parser-based system achieves the best overall performance on the L2 data. Though performing slightly worse than the neural syntax-agnostic one on the L1 data, it has much smaller INLINEFORM0 F, showing that as the syntactic analysis improves, the performances on both the L1 and L2 data grow, while the gap can be maintained. This demonstrates again the importance of syntax in semantic constructions, especially for learner texts.",
"Table TABREF45 summarizes the SRL results of the baseline PCFGLA-parser-based model as well as its corresponding retrained models. Since both the syntactic parser and the SRL classifier can be retrained and thus enhanced, we report the individual impact as well as the combined one. We can clearly see that when the PCFGLA parser is retrained with the SRL-consistent sentence pairs, it is able to provide better SRL-oriented syntactic analysis for the L2 sentences as well as their corrections, which are essentially L1 sentences. The outputs of the L1 sentences that are generated by the deep SRL system are also useful for improving the linear SRL classifier. A non-obvious fact is that such a retrained model yields better analysis for not only L1 but also L2 sentences. Fortunately, combining both results in further improvement.",
"Table TABREF46 shows the results of the parallel experiments based on the neural parser. Different from the PCFGLA model, the SRL-consistent trees only yield a slight improvement on the L2 data. On the contrary, retraining the SRL classifier is much more effective. This experiment highlights the different strengths of different frameworks for parsing. Though for standard in-domain test, the neural parser performs better and thus is more and more popular, for some other scenarios, the PCFGLA model is stronger.",
"Table TABREF47 further shows F-scores for the baseline and the both-retrained model relative to each role type in detail. Given that the F-scores for both models are equal to 0 on A3 and A4, we just omit this part. From the figure we can observe that, all the semantic roles achieve significant improvements in performances."
],
[
"To better understand the overall results, we further look deep into the output by addressing the questions:",
"What types of error negatively impact both systems over learner texts?",
"What types of error are more problematic for the neural syntax-agnostic one over the L2 data but can be solved by the syntax-based one to some extent?",
"We first carry out a suite of empirical investigations by breaking down error types for more detailed evaluation. To compare two systems, we analyze results on ENG-L2 and JPN-L2 given that they reflect significant advantages of the syntax-based systems over the neural syntax-agnostic system. Note that the syntax-based system here refers to the neural-parser-based one. Finally, a concrete study on the instances in the output is conducted, as to validate conclusions in the previous step.",
"We employ 6 oracle transformations designed by he2017deep to fix various prediction errors sequentially (see details in Table TABREF19 ), and observe the relative improvements after each operation, as to obtain fine-grained error types. Figure FIGREF21 compares two systems in terms of different mistakes on ENG-L2 and JPN-L2 respectively. After fixing the boundaries of spans, the neural syntax-agnostic system catches up with the other, illustrating that though both systems handle boundary detection poorly on the L2 sentences, the neural syntax-agnostic one suffers more from this type of errors.",
"Excluding boundary errors (after moving, merging, splitting spans and fixing boundaries), we also compare two systems on L2 in terms of detailed label identification, so as to observe which semantic role is more likely to be incorrectly labeled. Figure FIGREF24 shows the confusion matrices. Comparing (a) with (c) and (b) with (d), we can see that the syntax-based and the neural system often overly label A1 when processing learner texts. Besides, the neural syntax-agnostic system predicts the adjunct AM more than necessary on L2 sentences by 54.24% compared with the syntax-based one.",
"On the basis of typical error types found in the previous stage, specifically, boundary detection and incorrect labels, we further conduct an on-the-spot investigation on the output sentences.",
"Previous work has proposed that the drop in performance of SRL systems mainly occurs in identifying argument boundaries BIBREF18 . According to our results, this problem will be exacerbated when it comes to L2 sentences, while syntactic structure sometimes helps to address this problem.",
"Figure FIGREF30 is an example of an output sentence. The Chinese word “也” (also) usually serves as an adjunct but is now used for linking the parallel structure “用 汉语 也 说话 快” (using Chinese also speaking quickly) in this sentence, which is ill-formed to native speakers and negatively affects the boundary detection of A0 for both systems.",
"On the other hand, the neural system incorrectly takes the whole part before “很 难” (very hard) as A0, regardless of the adjunct “对 我 来说” (for me), while this can be figured out by exploiting syntactic analysis, as illustrated in Figure FIGREF30 . The constituent “对 我 来说” (for me) has been recognized as a prepositional phrase (PP) attached to the VP, thus labeled as AM. This shows that by providing information of some well-formed sub-trees associated with correct semantic roles, the syntactic system can perform better than the neural one on SRL for learner texts.",
"A second common source of errors is wrong labels, especially for A1. Based on our quantitative analysis, as reported in Table TABREF37 , these phenomena are mainly caused by mistakes of verb subcategorization, where the systems label more arguments than allowed by the predicates. Besides, the deep end-to-end system is also likely to incorrectly attach adjuncts AM to the predicates.",
"Figure FIGREF30 is another example. The Chinese verb “做饭” (cook-meal) is intransitive while this sentence takes it as a transitive verb, which is very common in L2. Lacking in proper verb subcategorization, both two systems fail to recognize those verbs allowing only one argument and label the A1 incorrectly.",
"As for AM, the neural system mistakenly adds the adjunct to the predicate, which can be avoided by syntactic information of the sentence shown in Figure FIGREF30 . The constituent “常常” (often) are adjuncts attached to VP structure governed by the verb “练习”(practice), which will not be labeled as AM in terms of the verb “做饭”(cook-meal). In other words, the hierarchical structure can help in argument identification and assignment by exploiting local information."
],
[
"We explore the valuable information about the semantic coherency encoded in the L2-L1 parallel data to improve SRL for learner Chinese. In particular, we introduce an agreement-based model to search for high-quality automatic syntactic and semantic role annotations, and then use these annotations to retrain the two parser-based SRL systems."
],
[
"For the purpose of harvesting the good automatic syntactic and semantic analysis, we consider the consistency between the automatically produced analysis of a learner sentence and its corresponding well-formed sentence. Determining the measurement metric for comparing predicate–argument structures, however, presents another challenge, because the words of the L2 sentence and its L1 counterpart do not necessarily match. To solve the problem, we use an automatic word aligner. BerkeleyAligner BIBREF19 , a state-of-the-art tool for obtaining a word alignment, is utilized.",
"The metric for comparing SRL results of two sentences is based on recall of INLINEFORM0 tuples, where INLINEFORM1 is a predicate, INLINEFORM2 is a word that is in the argument or adjunct of INLINEFORM3 and INLINEFORM4 is the corresponding role. Based on a word alignment, we define the shared tuple as a mutual tuple between two SRL results of an L2-L1 sentence pair, meaning that both the predicate and argument words are aligned respectively, and their role relations are the same. We then have two recall values:",
"L2-recall is (# of shared tuples) / (# of tuples of the result in L2)",
"L1-recall is (# of shared tuples) / (# of tuples of the result in L1)",
"In accordance with the above evaluation method, we select the automatic analysis of highest scoring sentences and use them to expand the training data. Sentences whose L1 and L2 recall are both greater than a threshold INLINEFORM0 are taken as good ones. A parser-based SRL system consists of two essential modules: a syntactic parser and a semantic classifier. To enhance the syntactic parser, the automatically generated syntactic trees of the sentence pairs that exhibit high semantic consistency are directly used to extend training data. To improve a semantic classifier, besides the consistent semantic analysis, we also use the outputs of the L1 but not L2 data which are generated by the neural syntax-agnostic SRL system."
],
[
"Our SRL corpus contains 1200 sentences in total that can be used as an evaluation for SRL systems. We separate them into three data sets. The first data set is used as development data, which contains 50 L2-L1 sentence pairs for each language and 200 pairs in total. Hyperparameters are tuned using the development set. The second data set contains all other 400 L2 sentences, which is used as test data for L2. Similarly, all other 400 L1 sentences are used as test data for L1.",
"The sentence pool for extracting retraining annotations includes all English- and Japanese-native speakers' data along with its corrections. Table TABREF43 presents the basic statistics. Around 8.5 – 11.9% of the sentence can be taken as high L1/L2 recall sentences, which serves as a reflection that argument structure is vital for language acquisition and difficult for learners to master, as proposed in vazquez2004learning and shin2010contribution. The threshold ( INLINEFORM0 ) for selecting sentences is set upon the development data. For example, we use additional 156,520 sentences to enhance the Berkeley parser."
],
[
"Statistical models of annotating learner texts are making rapid progress. Although there have been some initial studies on defining annotation specification as well as corpora for syntactic analysis, there is almost no work on semantic parsing for interlanguages. This paper discusses this topic, taking Semantic Role Labeling as a case task and learner Chinese as a case language. We reveal three unknown facts that are important towards a deeper analysis of learner languages: (1) the robustness of language comprehension for interlanguage, (2) the weakness of applying L1-sentence-trained systems to process learner texts, and (3) the significance of syntactic parsing and L2-L1 parallel data in building more generalizable SRL models that transfer better to L2. We have successfully provided a better SRL-oriented syntactic parser as well as a semantic classifier for processing the L2 data by exploring L2-L1 parallel data, supported by a significant numeric improvement over a number of state-of-the-art systems. To the best of our knowledge, this is the first work that demonstrates the effectiveness of large-scale L2-L1 parallel data to enhance the NLP system for learner texts."
],
[
"This work was supported by the National Natural Science Foundation of China (61772036, 61331011) and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers and for their helpful comments. We also thank Nianwen Xue for useful comments on the final version. Weiwei Sun is the corresponding author."
]
],
"section_name": [
"Introduction",
"An L2-L1 Parallel Corpus",
"The Annotation Process",
"Inter-annotator Agreement",
"Three SRL Systems",
"Main Results",
"Analysis",
"Enhancing SRL with L2-L1 Parallel Data",
"The Method",
"Experimental Setup",
"Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"0adb8e4cfb7d0907d69fb75e06419e00bdeee18b"
],
"answer": [
{
"evidence": [
"Our second concern is to mimic the human's robust semantic processing ability by computer programs. The feasibility of reusing the annotation specification for L1 implies that we can reuse standard CPB data to train an SRL system to process learner texts. To test the robustness of the state-of-the-art SRL algorithms, we evaluate two types of SRL frameworks. The first one is a traditional SRL system that leverages a syntactic parser and heavy feature engineering to obtain explicit information of semantic roles BIBREF4 . Furthermore, we employ two different parsers for comparison: 1) the PCFGLA-based parser, viz. Berkeley parser BIBREF5 , and 2) a minimal span-based neural parser BIBREF6 . The other SRL system uses a stacked BiLSTM to implicitly capture local and non-local information BIBREF7 . and we call it the neural syntax-agnostic system. All systems can achieve state-of-the-art performance on L1 texts but show a significant degradation on L2 texts. This highlights the weakness of applying an L1-sentence-trained system to process learner texts."
],
"extractive_spans": [
"PCFGLA-based parser, viz. Berkeley parser BIBREF5",
"minimal span-based neural parser BIBREF6"
],
"free_form_answer": "",
"highlighted_evidence": [
"Furthermore, we employ two different parsers for comparison: 1) the PCFGLA-based parser, viz. Berkeley parser BIBREF5 , and 2) a minimal span-based neural parser BIBREF6 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7391d39fcb6dbaedfc5ab71e250256e0ca7bcfdc"
],
"answer": [
{
"evidence": [
"While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL. Therefore, syntactic parsing helps build more generalizable SRL models that transfer better to new languages, and enhancing syntactic parsing can improve SRL to some extent."
],
"extractive_spans": [
"syntax-based system may generate correct syntactic analyses for partial grammatical fragments"
],
"free_form_answer": "",
"highlighted_evidence": [
"While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"67be6b92cdb2ea380a1c9a3b33f5f6a9236b1503"
],
"answer": [
{
"evidence": [
"In this paper, we manually annotate the predicate–argument structures for the 600 L2-L1 pairs as the basis for the semantic analysis of learner Chinese. It is from the above corpus that we carefully select 600 pairs of L2-L1 parallel sentences. We would choose the most appropriate one among multiple versions of corrections and recorrect the L1s if necessary. Because word structure is very fundamental for various NLP tasks, our annotation also contains gold word segmentation for both L2 and L1 sentences. Note that there are no natural word boundaries in Chinese text. We first employ a state-of-the-art word segmentation system to produce initial segmentation results and then manually fix segmentation errors."
],
"extractive_spans": [],
"free_form_answer": "Authors",
"highlighted_evidence": [
"In this paper, we manually annotate the predicate–argument structures for the 600 L2-L1 pairs as the basis for the semantic analysis of learner Chinese."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What is the baseline model for the agreement-based mode?",
"Do the authors suggest why syntactic parsing is so important for semantic role labelling for interlanguages?",
"Who manually annotated the semantic roles for the set of learner texts?"
],
"question_id": [
"b5d6357d3a9e3d5fdf9b344ae96cddd11a407875",
"f33a21c6a9c75f0479ffdbb006c40e0739134716",
"8a1d4ed00d31c1f1cb05bc9d5e4f05fe87b0e5a4"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"irony",
"irony",
"irony"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Inter-annotator agreement.",
"Table 2: Inter-annotator agreement (F-scores) relative to languages and role types.",
"Table 3: Performances of the syntax-based and neural syntax-agnostic SRL systems on the L1 and L2 data. “ALL” denotes the overall performance.",
"Table 4: Oracle transformations paired with the relative error reduction after each operation. The operations are permitted only if they do not cause any overlapping arguments",
"Figure 1: Relative improvements of performance after doing each type of oracle transformation in sequence over ENG-L2 and JPN-L2",
"Figure 2: Confusion matrix for each semantic role (here we add up matrices of ENG-L2 and JPNL2). The predicted labels are only counted in three cases: (1) The predicated boundaries match the gold span boundaries. (2) The predicated argument does not overlap with any the gold span (Gold labeled as “O”). (3) The gold argument does not overlap with any predicted span (Prediction labeled as “O”).",
"Figure 3: Two examples for SRL outputs of both systems and the corresponding syntactic analysis for the L2 sentences",
"Table 5: Causes of labeling unnecessary A1",
"Table 6: Statistics of unlabeled data.",
"Table 7: Accuracies different PCFGLA-parserbased models on the two test data sets.",
"Table 8: Accuracies of different neural-parserbased models on the two test data sets.",
"Table 9: F-scores of the baseline and the bothretrained models relative to role types on the two data sets. We only list results of the PCFGLAparser-based system."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"6-Figure1-1.png",
"6-Figure2-1.png",
"7-Figure3-1.png",
"7-Table5-1.png",
"8-Table6-1.png",
"8-Table7-1.png",
"9-Table8-1.png",
"9-Table9-1.png"
]
} | [
"Who manually annotated the semantic roles for the set of learner texts?"
] | [
[
"1808.09409-An L2-L1 Parallel Corpus-3"
]
] | [
"Authors"
] | 169 |
1808.00265 | Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining | A key aspect of VQA models that are interpretable is their ability to ground their answers to relevant regions in the image. Current approaches with this capability rely on supervised learning and human annotated groundings to train attention mechanisms inside the VQA architecture. Unfortunately, obtaining human annotations specific for visual grounding is difficult and expensive. In this work, we demonstrate that we can effectively train a VQA architecture with grounding supervision that can be automatically obtained from available region descriptions and object annotations. We also show that our model trained with this mined supervision generates visual groundings that achieve a higher correlation with respect to manually-annotated groundings, meanwhile achieving state-of-the-art VQA accuracy. | {
"paragraphs": [
[
"We are interested in the problem of visual question answering (VQA), where an algorithm is presented with an image and a question that is formulated in natural language and relates to the contents of the image. The goal of this task is to get the algorithm to correctly answer the question. The VQA task has recently received significant attention from the computer vision community, in particular because obtaining high accuracies would presumably require precise understanding of both natural language as well as visual stimuli. In addition to serving as a milestone towards visual intelligence, there are practical applications such as development of tools for the visually impaired.",
"The problem of VQA is challenging due to the complex interplay between the language and visual modalities. On one hand, VQA algorithms must be able to parse and interpret the input question, which is provided in natural language BIBREF0 , BIBREF1 , BIBREF2 . This may potentially involve understanding of nouns, verbs and other linguistic elements, as well as their visual significance. On the other hand, the algorithms must analyze the image to identify and recognize the visual elements relevant to the question. Furthermore, some questions may refer directly to the contents of the image, but may require external, common sense knowledge to be answered correctly. Finally, the algorithms should generate a textual output in natural language that correctly answers the input visual question. In spite of the recent research efforts to address these challenges, the problem remains largely unsolved BIBREF3 .",
"We are particularly interested in giving VQA algorithms the ability to identify the visual elements that are relevant to the question. In the VQA literature, such ability has been implemented by attention mechanisms. Such attention mechanisms generate a heatmap over the input image, which highlights the regions of the image that lead to the answer. These heatmaps are interpreted as groundings of the answer to the most relevant areas of the image. Generally, these mechanisms have either been considered as latent variables for which there is no supervision, or have been treated as output variables that receive direct supervision from human annotations. Unfortunately, both of these approaches have disadvantages. First, unsupervised training of attention tends to lead to models that cannot ground their decision in the image in a human interpretable manner. Second, supervised training of attention is difficult and expensive: human annotators may consider different regions to be relevant for the question at hand, which entails ambiguity and increased annotation cost. Our goal is to leverage the best of both worlds by providing VQA algorithms with interpretable grounding of their answers, without the need of direct and explicit manual annotation of attention.",
"From a practical point of view, as autonomous machines are increasingly finding real world applications, there is an increasing need to provide them with suitable capabilities to explain their decisions. However, in most applications, including VQA, current state-of-the-art techniques operate as black-box models that are usually trained using a discriminative approach. Similarly to BIBREF4 , in this work we show that, in the context of VQA, such approaches lead to internal representations that do not capture the underlying semantic relations between textual questions and visual information. Consequently, as we show in this work, current state-of-the-art approaches for VQA are not able to support their answers with a suitable interpretable representation.",
"In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels. Our framework achieves competitive state-of-the-art VQA performance, while generating visual groundings that outperform other algorithms that use human annotated attention during training.",
"The contributions of this paper are: (1) we introduce a mechanism to automatically obtain meaningful attention supervision from both region descriptions and object annotations in the Visual Genome dataset; (2) we show that by using the prediction of region and object label attention maps as auxiliary tasks in a VQA application, it is possible to obtain more interpretable intermediate representations. (3) we experimentally demonstrate state-of-the-art performances in VQA benchmarks as well as visual grounding that closely matches human attention annotations."
],
[
"Since its introduction BIBREF0 , BIBREF1 , BIBREF2 , the VQA problem has attracted an increasing interest BIBREF3 . Its multimodal nature and more precise evaluation protocol than alternative multimodal scenarios, such as image captioning, help to explain this interest. Furthermore, the proliferation of suitable datasets and potential applications, are also key elements behind this increasing activity. Most state-of-the-art methods follow a joint embedding approach, where deep models are used to project the textual question and visual input to a joint feature space that is then used to build the answer. Furthermore, most modern approaches pose VQA as a classification problem, where classes correspond to a set of pre-defined candidate answers. As an example, most entries to the VQA challenge BIBREF2 select as output classes the most common 3000 answers in this dataset, which account for 92% of the instances in the validation set.",
"The strategy to combine the textual and visual embeddings and the underlying structure of the deep model are key design aspects that differentiate previous works. Antol et al. BIBREF2 propose an element-wise multiplication between image and question embeddings to generate spatial attention map. Fukui et al. BIBREF5 propose multimodal compact bilinear pooling (MCB) to efficiently implement an outer product operator that combines visual and textual representations. Yu et al. BIBREF6 extend this pooling scheme by introducing a multi-modal factorized bilinear pooling approach (MFB) that improves the representational capacity of the bilinear operator. They achieve this by adding an initial step that efficiently expands the textual and visual embeddings to a high-dimensional space. In terms of structural innovations, Noh et al. BIBREF7 embed the textual question as an intermediate dynamic bilinear layer of a ConvNet that processes the visual information. Andreas et al. BIBREF8 propose a model that learns a set of task-specific neural modules that are jointly trained to answer visual questions.",
"Following the successful introduction of soft attention in neural machine translation applications BIBREF9 , most modern VQA methods also incorporate a similar mechanism. The common approach is to use a one-way attention scheme, where the embedding of the question is used to generate a set of attention coefficients over a set of predefined image regions. These coefficients are then used to weight the embedding of the image regions to obtain a suitable descriptor BIBREF10 , BIBREF11 , BIBREF5 , BIBREF12 , BIBREF6 . More elaborated forms of attention has also been proposed. Xu and Saenko BIBREF13 suggest use word-level embedding to generate attention. Yang et al. BIBREF14 iterates the application of a soft-attention mechanism over the visual input as a way to progressively refine the location of relevant cues to answer the question. Lu et al. BIBREF15 proposes a bidirectional co-attention mechanism that besides the question guided visual attention, also incorporates a visual guided attention over the input question.",
"In all the previous cases, the attention mechanism is applied using an unsupervised scheme, where attention coefficients are considered as latent variables. Recently, there have been also interest on including a supervised attention scheme to the VQA problem BIBREF4 , BIBREF16 , BIBREF17 . Das et al. BIBREF4 compare the image areas selected by humans and state-of-the-art VQA techniques to answer the same visual question. To achieve this, they collect the VQA human attention dataset (VQA-HAT), a large dataset of human attention maps built by asking humans to select images areas relevant to answer questions from the VQA dataset BIBREF2 . Interestingly, this study concludes that current machine-generated attention maps exhibit a poor correlation with respect to the human counterpart, suggesting that humans use different visual cues to answer the questions. At a more fundamental level, this suggests that the discriminative nature of most current VQA systems does not effectively constraint the attention modules, leading to the encoding of discriminative cues instead of the underlying semantic that relates a given question-answer pair. Our findings in this work support this hypothesis.",
"Related to the work in BIBREF4 , Gan et al. BIBREF16 apply a more structured approach to identify the image areas used by humans to answer visual questions. For VQA pairs associated to images in the COCO dataset, they ask humans to select the segmented areas in COCO images that are relevant to answer each question. Afterwards, they use these areas as labels to train a deep learning model that is able to identify attention features. By augmenting a standard VQA technique with these attention features, they are able to achieve a small boost in performance. Closely related to our approach, Qiao et al. BIBREF17 use the attention labels in the VQA-HAT dataset to train an attention proposal network that is able to predict image areas relevant to answer a visual question. This network generates a set of attention proposals for each image in the VQA dataset, which are used as labels to supervise attention in the VQA model. This strategy results in a small boost in performance compared with a non-attentional strategy. In contrast to our approach, these previous works are based on a supervised attention scheme that does not consider an automatic mechanism to obtain the attention labels. Instead, they rely on human annotated groundings as attention supervision. Furthermore, they differ from our work in the method to integrate attention labels to a VQA model."
],
[
"Figure FIGREF2 shows the main pipeline of our VQA model. We mostly build upon the MCB model in BIBREF5 , which exemplifies current state-of-the-art techniques for this problem. Our main innovation to this model is the addition of an Attention Supervision Module that incorporates visual grounding as an auxiliary task. Next we describe the main modules behind this model.",
"Question Attention Module: Questions are tokenized and passed through an embedding layer, followed by an LSTM layer that generates the question features INLINEFORM0 , where INLINEFORM1 is the maximum number of words in the tokenized version of the question and INLINEFORM2 is the dimensionality of the hidden state of the LSTM. Additionally, following BIBREF12 , a question attention mechanism is added that generates question attention coefficients INLINEFORM3 , where INLINEFORM4 is the so-called number of “glimpses”. The purpose of INLINEFORM5 is to allow the model to predict multiple attention maps so as to increase its expressiveness. Here, we use INLINEFORM6 . The weighted question features INLINEFORM7 are then computed using a soft attention mechanism BIBREF9 , which is essentially a weighted sum of the INLINEFORM8 word features followed by a concatenation according to INLINEFORM9 .",
"Image Attention Module: Images are passed through an embedding layer consisting of a pre-trained ConvNet model, such as Resnet pretrained with the ImageNet dataset BIBREF18 . This generates image features INLINEFORM0 , where INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are depth, height, and width of the extracted feature maps. Fusion Module I is then used to generate a set of image attention coefficients. First, question features INLINEFORM4 are tiled as the same spatial shape of INLINEFORM5 . Afterwards, the fusion module models the joint relationship INLINEFORM6 between questions and images, mapping them to a common space INLINEFORM7 . In the simplest case, one can implement the fusion module using either concatenation or Hadamard product BIBREF19 , but more effective pooling schemes can be applied BIBREF5 , BIBREF20 , BIBREF12 , BIBREF6 . The design choice of the fusion module remains an on-going research topic. In general, it should both effectively capture the latent relationship between multi-modal features meanwhile be easy to optimize. The fusion results are then passed through an attention module that computes the visual attention coefficient INLINEFORM8 , with which we can obtain attention-weighted visual features INLINEFORM9 . Again, INLINEFORM10 is the number of “glimpses”, where we use INLINEFORM11 .",
"Classification Module: Using the compact representation of questions INLINEFORM0 and visual information INLINEFORM1 , the classification module applies first the Fusion Module II that provides the feature representation of answers INLINEFORM2 , where INLINEFORM3 is the latent answer space. Afterwards, it computes the logits over a set of predefined candidate answers. Following previous work BIBREF5 , we use as candidate outputs the top 3000 most frequent answers in the VQA dataset. At the end of this process, we obtain the highest scoring answer INLINEFORM4 .",
"Attention Supervision Module: As a main novelty of the VQA model, we add an Image Attention Supervision Module as an auxiliary classification task, where ground-truth visual grounding labels INLINEFORM0 are used to guide the model to focus on meaningful parts of the image to answer each question. To do that, we simply treat the generated attention coefficients INLINEFORM1 as a probability distribution, and then compare it with the ground-truth using KL-divergence. Interestingly, we introduce two attention maps, corresponding to relevant region-level and object-level groundings, as shown in Figure FIGREF3 . Sections SECREF4 and SECREF5 provide details about our proposed method to obtain the attention labels and to train the resulting model, respectively."
],
[
"Visual Genome (VG) BIBREF21 includes the largest VQA dataset currently available, which consists of 1.7M QA pairs. Furthermore, for each of its more than 100K images, VG also provides region and object annotations by means of bounding boxes. In terms of visual grounding, these region and object annotations provide complementary information. As an example, as shown in Figure FIGREF3 , for questions related to interaction between objects, region annotations result highly relevant. In contrast, for questions related to properties of specific objects, object annotations result more valuable. Consequently, in this section we present a method to automatically select region and object annotations from VG that can be used as labels to implement visual grounding as an auxiliary task for VQA.",
"For region annotations, we propose a simple heuristic to mine visual groundings: for each INLINEFORM0 we enumerate all the region descriptions of INLINEFORM1 and pick the description INLINEFORM2 that has the most (at least two) overlapped informative words with INLINEFORM3 and INLINEFORM4 . Informative words are all nouns and verbs, where two informative words are matched if at least one of the following conditions is met: (1) Their raw text as they appear in INLINEFORM5 or INLINEFORM6 are the same; (2) Their lemmatizations (using NLTK BIBREF22 ) are the same; (3) Their synsets in WordNet BIBREF23 are the same; (4) Their aliases (provided from VG) are the same. We refer to the resulting labels as region-level groundings. Figure FIGREF3 (a) illustrates an example of a region-level grounding.",
"In terms of object annotations, for each image in a INLINEFORM0 triplet we select the bounding box of an object as a valid grounding label, if the object name matches one of the informative nouns in INLINEFORM1 or INLINEFORM2 . To score each match, we use the same criteria as region-level groundings. Additionally, if a triplet INLINEFORM3 has a valid region grounding, each corresponding object-level grounding must be inside this region to be accepted as valid. As a further refinement, selected objects grounding are passed through an intersection over union filter to account for the fact that VG usually includes multiple labels for the same object instance. As a final consideration, for questions related to counting, region-level groundings are discarded after the corresponding object-level groundings are extracted. We refer to the resulting labels as object-level groundings. Figure FIGREF3 (b) illustrates an example of an object-level grounding.",
"As a result, combining both region-level and object-level groundings, about 700K out of 1M INLINEFORM0 triplets in VG end up with valid grounding labels. We will make these labels publicly available."
],
[
"We build the attention supervision on top of the open-sourced implementation of MCB BIBREF5 and MFB BIBREF12 . Similar to them, We extract the image feature from res5c layer of Resnet-152, resulting in INLINEFORM0 spatial grid ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ). We construct our ground-truth visual grounding labels to be INLINEFORM4 glimpse maps per QA pair, where the first map is object-level grounding and the second map is region-level grounding, as discussed in Section SECREF4 . Let INLINEFORM5 be the coordinate of INLINEFORM6 selected object bounding box in the grounding labels, then the mined object-level attention maps INLINEFORM7 are: DISPLAYFORM0 ",
"where INLINEFORM0 is the indicator function. Similarly, the region-level attention maps INLINEFORM1 are: DISPLAYFORM0 ",
"",
"Afterwards, INLINEFORM0 and INLINEFORM1 are spatially L1-normalized to represent probabilities and concatenated to form INLINEFORM2 .",
"The model is trained using a multi-task loss, DISPLAYFORM0 ",
"where INLINEFORM0 denotes cross-entropy and INLINEFORM1 denotes KL-divergence. INLINEFORM2 corresponds to the learned parameters. INLINEFORM3 is a scalar that weights the loss terms. This scalar decays as a function of the iteration number INLINEFORM4 . In particular, we choose to use a cosine-decay function: DISPLAYFORM0 ",
"This is motivated by the fact that the visual grounding labels have some level of subjectivity. As an example, Figure FIGREF11 (second row) shows a case where the learned attention seems more accurate than the VQA-HAT ground truth. Hence, as the model learns suitable parameter values, we gradually loose the penalty on the attention maps to provide more freedom to the model to selectively decide what attention to use. It is important to note that, for training samples in VQA-2.0 or VG that do not have region-level or object-level grounding labels, INLINEFORM0 in Equation EQREF6 , so the loss is reduced to the classification term only. In our experiment, INLINEFORM1 is calibrated for each tested model based on the number of training steps. In particular, we choose INLINEFORM2 for all MCB models and INLINEFORM3 for others."
],
[
"VQA-2.0: The VQA-2.0 dataset BIBREF2 consists of 204721 images, with a total of 1.1M questions and 10 crowd-sourced answers per question. There are more than 20 question types, covering a variety of topics and free-form answers. The dataset is split into training (82K images and 443K questions), validation (40K images and 214K questions), and testing (81K images and 448K questions) sets. The task is to predict a correct answer INLINEFORM0 given a corresponding image-question pair INLINEFORM1 . As a main advantage with respect to version 1.0 BIBREF2 , for every question VQA-2.0 includes complementary images that lead to different answers, reducing language bias by forcing the model to use the visual information.",
"Visual Genome: The Visual Genome (VG) dataset BIBREF21 contains 108077 images, with an average of 17 QA pairs per image. We follow the processing scheme from BIBREF5 , where non-informative words in the questions and answers such as “a” and “is” are removed. Afterwards, INLINEFORM0 triplets with answers to be single keyword and overlapped with VQA-2.0 dataset are included in our training set. This adds 97697 images and about 1 million questions to the training set. Besides the VQA data, VG also provides on average 50 region descriptions and 30 object instances per image. Each region/object is annotated by one sentence/phrase description and bounding box coordinates.",
"VQA-HAT: VQA-HAT dataset BIBREF4 contains 58475 human visual attention heat (HAT) maps for INLINEFORM0 triplets in VQA-1.0 training set. Annotators were shown a blurred image, a INLINEFORM1 pair and were asked to “scratch” the image until they believe someone else can answer the question by looking at the blurred image and the sharpened area. The authors also collect INLINEFORM2 HAT maps for VQA-1.0 validation sets, where each of the 1374 INLINEFORM3 were labeled by three different annotators, so one can compare the level of agreement among labels. We use VQA-HAT to evaluate visual grounding performance, by comparing the rank-correlation between human attention and model attention, as in BIBREF4 , BIBREF24 .",
"VQA-X: VQA-X dataset BIBREF24 contains 2000 labeled attention maps in VQA-2.0 validation sets. In contrast to VQA-HAT, VQA-X attention maps are in the form of instance segmentations, where annotators were asked to segment objects and/or regions that most prominently justify the answer. Hence the attentions are more specific and localized. We use VQA-X to evaluate visual grounding performance by comparing the rank-correlation, as in BIBREF4 , BIBREF24 ."
],
[
"We evaluate the performance of our proposed method using two criteria: i) rank-correlation BIBREF25 to evaluate visual grounding and ii) accuracy to evaluate question answering. Intuitively, rank-correlation measures the similarity between human and model attention maps under a rank-based metric. A high rank-correlation means that the model is `looking at' image areas that agree to the visual information used by a human to answer the same question. In terms of accuracy of a predicted answer INLINEFORM0 is evaluated by: DISPLAYFORM0 ",
"",
"Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We highlight that MCB model is the winner of VQA challenge 2016 and MFH model is the best single model in VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significantly boost on rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding.",
"Table TABREF10 also reports the result of an experiment where the decaying factor INLINEFORM0 in Equation EQREF7 is fixed to a value of 1. In this case, the model is able to achieve higher rank-correlation, but accuracy drops by 2%. We observe that as training proceeds, attention loss becomes dominant in the final training steps, which affects the accuracy of the classification module.",
"Figure FIGREF11 shows qualitative results of the resulting visual grounding, including also a comparison with respect to no-attn model."
],
[
"In this work we have proposed a new method that is able to slightly outperform current state-of-the-art VQA systems, while also providing interpretable representations in the form of an explicitly trainable visual attention mechanism. Specifically, as a main result, our experiments provide evidence that the generated visual groundings achieve high correlation with respect to human-provided attention annotations, outperforming the correlation scores of previous works by a large margin.",
"As further contributions, we highlight two relevant insides of the proposed approach. On one side, by using attention labels as an auxiliary task, the proposed approach demonstrates that is able to constraint the internal representation of the model in such a way that it fosters the encoding of interpretable representations of the underlying relations between the textual question and input image. On other side, the proposed approach demonstrates a method to leverage existing datasets with region descriptions and object labels to effectively supervise the attention mechanism in VQA applications, avoiding costly human labeling.",
"As future work, we believe that the superior visual grounding provided by the proposed method can play a relevant role to generate natural language explanations to justify the answer to a given visual question. This scenario will help to demonstrate the relevance of our technique as a tool to increase the capabilities of AI based technologies to explain their decisions.",
"",
"Acknowledgements: This work was partially funded by Oppo, Panasonic and the Millennium Institute for Foundational Research on Data."
]
],
"section_name": [
"Introduction",
"Related Work",
"VQA Model Structure",
"Mining Attention Supervision from Visual Genome",
"Implementation Details",
"Datasets",
"Results",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"0addc69c7a2f96afa92bfff2e2ec342bb635b4d8"
],
"answer": [
{
"evidence": [
"Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We highlight that MCB model is the winner of VQA challenge 2016 and MFH model is the best single model in VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significantly boost on rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding."
],
"extractive_spans": [
"the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We highlight that MCB model is the winner of VQA challenge 2016 and MFH model is the best single model in VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significantly boost on rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"ae7a841528b10c3d40718855ef440e54a412b22d"
],
"answer": [
{
"evidence": [
"We evaluate the performance of our proposed method using two criteria: i) rank-correlation BIBREF25 to evaluate visual grounding and ii) accuracy to evaluate question answering. Intuitively, rank-correlation measures the similarity between human and model attention maps under a rank-based metric. A high rank-correlation means that the model is `looking at' image areas that agree to the visual information used by a human to answer the same question. In terms of accuracy of a predicted answer INLINEFORM0 is evaluated by: DISPLAYFORM0"
],
"extractive_spans": [
"rank-correlation BIBREF25"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate the performance of our proposed method using two criteria: i) rank-correlation BIBREF25 to evaluate visual grounding and ii) accuracy to evaluate question answering. Intuitively, rank-correlation measures the similarity between human and model attention maps under a rank-based metric. A high rank-correlation means that the model is `looking at' image areas that agree to the visual information used by a human to answer the same question. In terms of accuracy of a predicted answer INLINEFORM0 is evaluated by: DISPLAYFORM0\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"bff3cb10c3c179d03259c859c4504f5f82a54325"
],
"answer": [
{
"evidence": [
"In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels. Our framework achieves competitive state-of-the-art VQA performance, while generating visual groundings that outperform other algorithms that use human annotated attention during training."
],
"extractive_spans": [],
"free_form_answer": "they are available in the Visual Genome dataset",
"highlighted_evidence": [
"In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"By how much do they outperform existing state-of-the-art VQA models?",
"How do they measure the correlation between manual groundings and model generated ones?",
"How do they obtain region descriptions and object annotations?"
],
"question_id": [
"17f5f4a5d943c91d46552fb75940b67a72144697",
"83f22814aaed9b5f882168e22a3eac8f5fda3882",
"ed11b4ff7ca72dd80a792a6028e16ba20fccff66"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1. Interpretable VQA algorithms must ground their answer into image regions that are relevant to the question. In this paper, we aim at providing this ability by leveraging existing region descriptions and object annotations to construct grounding supervision automatically.",
"Figure 2. Schematic diagram of the main parts of the VQA model. It is mostly based on the model presented in [6]. Main innovation is the Attention Supervision Module that incorporates visual grounding as an auxiliary task. This module is trained through the use of a set of image attention labels that are automatically mined from the Visual Genome dataset.",
"Figure 3. (a) Example region-level groundings from VG. Left: image with region description labels; Right: our mined results. Here “men” in the region description is firstly lemmatized to be “man”, whose aliases contain “people”; the word “talking” in the answer also contributes to the matching. So the selected regions have two matchings which is the most among all candidates. (b) Example object-level grounding from VG. Left: image with object instance labels; Right: our mined results. Note that in this case region-level grounding will give us the same result as in (a), but object-level grounding is clearly more localized.",
"Table 1. Evaluation of different VQA models on visual grounding and answer prediction. All the listed models are trained on VQA2.0 and Visual Genome. The reported accuracies are evaluated using the VQA-2.0 test-standard set. Note that the results of MCB, MFB and MFH are taken directly from the author’s public best single model.",
"Figure 4. Visual grounding comparison: the first column is the ground-truth human attention in VQA-HAT [5]; the second column shows the results from pretrained MFH model [26]; the last column are our Attn-MFH trained with attention supervision. We can see that the attention areas considered by our model mimic the attention areas used by humans, but they are more localized in space.",
"Figure 5. Qualitative Results on complementary pairs generated by our Attn-MFH model; the model learns to attend to different regions even if the questions are the same."
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"6-Table1-1.png",
"7-Figure4-1.png",
"8-Figure5-1.png"
]
} | [
"How do they obtain region descriptions and object annotations?"
] | [
[
"1808.00265-Introduction-4"
]
] | [
"they are available in the Visual Genome dataset"
] | 170 |
1810.09774 | Testing the Generalization Power of Neural Network Models Across NLI Benchmarks | Neural network models have been very successful in natural language inference, with the best models reaching 90% accuracy in some benchmarks. However, the success of these models turns out to be largely benchmark specific. We show that models trained on a natural language inference dataset drawn from one benchmark fail to perform well in others, even if the notion of inference assumed in these benchmarks is the same or similar. We train six high performing neural network models on different datasets and show that each one of these has problems of generalizing when we replace the original test set with a test set taken from another corpus designed for the same task. In light of these results, we argue that most of the current neural network models are not able to generalize well in the task of natural language inference. We find that using large pre-trained language models helps with transfer learning when the datasets are similar enough. Our results also highlight that the current NLI datasets do not cover the different nuances of inference extensively enough. | {
"paragraphs": [
[
"Natural Language Inference (NLI) has attracted considerable interest in the NLP community and, recently, a large number of neural network-based systems have been proposed to deal with the task. One can attempt a rough categorization of these systems into: a) sentence encoding systems, and b) other neural network systems. Both of them have been very successful, with the state of the art on the SNLI and MultiNLI datasets being 90.4%, which is our baseline with BERT BIBREF0 , and 86.7% BIBREF0 respectively. However, a big question with respect to these systems is their ability to generalize outside the specific datasets they are trained and tested on. Recently, BIBREF1 have shown that state-of-the-art NLI systems break considerably easily when, instead of tested on the original SNLI test set, they are tested on a test set which is constructed by taking premises from the training set and creating several hypotheses from them by changing at most one word within the premise. The results show a very significant drop in accuracy for three of the four systems. The system that was more difficult to break and had the least loss in accuracy was the system by BIBREF2 which utilizes external knowledge taken from WordNet BIBREF3 .",
"In this paper we show that NLI systems that have been very successful in specific NLI benchmarks, fail to generalize when trained on a specific NLI dataset and then these trained models are tested across test sets taken from different NLI benchmarks. The results we get are in line with BIBREF1 , showing that the generalization capability of the individual NLI systems is very limited, but, what is more, they further show the only system that was less prone to breaking in BIBREF1 , breaks too in the experiments we have conducted.",
"We train six different state-of-the-art models on three different NLI datasets and test these trained models on an NLI test set taken from another dataset designed for the same NLI task, namely for the task to identify for sentence pairs in the dataset if one sentence entails the other one, if they are in contradiction with each other or if they are neutral with respect to inferential relationship.",
"One would expect that if a model learns to correctly identify inferential relationships in one dataset, then it would also be able to do so in another dataset designed for the same task. Furthermore, two of the datasets, SNLI BIBREF4 and MultiNLI BIBREF5 , have been constructed using the same crowdsourcing approach and annotation instructions BIBREF5 , leading to datasets with the same or at least very similar definition of entailment. It is therefore reasonable to expect that transfer learning between these datasets is possible. As SICK BIBREF6 dataset has been machine-constructed, a bigger difference in performance is expected.",
"In this paper we show that, contrary to our expectations, most models fail to generalize across the different datasets. However, our experiments also show that BERT BIBREF0 performs much better than the other models in experiments between SNLI and MultiNLI. Nevertheless, even BERT fails when testing on SICK. In addition to the negative results, our experiments further highlight the power of pre-trained language models, like BERT, in NLI.",
"The negative results of this paper are significant for the NLP research community as well as to NLP practice as we would like our best models to not only to be able to perform well in a specific benchmark dataset, but rather capture the more general phenomenon this dataset is designed for. The main contribution of this paper is that it shows that most of the best performing neural network models for NLI fail in this regard. The second, and equally important, contribution is that our results highlight that the current NLI datasets do not capture the nuances of NLI extensively enough."
],
[
"The ability of NLI systems to generalize and related skepticism has been raised in a number of recent papers. BIBREF1 show that the generalization capabilities of state-of-the-art NLI systems, in cases where some kind of external lexical knowledge is needed, drops dramatically when the SNLI test set is replaced by a test set where the premise and the hypothesis are otherwise identical except for at most one word. The results show a very significant drop in accuracy. BIBREF7 recognize the generalization problem that comes with training on datasets like SNLI, which tend to be homogeneous and with little linguistic variation. In this context, they propose to better train NLI models by making use of adversarial examples.",
"Multiple papers have reported hidden bias and annotation artifacts in the popular NLI datasets SNLI and MultiNLI allowing classification based on the hypothesis sentences alone BIBREF8 , BIBREF9 , BIBREF10 .",
" BIBREF11 evaluate the robustness of NLI models using datasets where label preserving swapping operations have been applied, reporting significant performance drops compared to the results with the original dataset. In these experiments, like in the BreakingNLI experiment, the systems that seem to be performing the better, i.e. less prone to breaking, are the ones where some kind of external knowledge is used by the model (KIM by BIBREF2 is one of those systems).",
"On a theoretical and methodological level, there is discussion on the nature of various NLI datasets, as well as the definition of what counts as NLI and what does not. For example, BIBREF12 , BIBREF13 present an overview of the most standard datasets for NLI and show that the definitions of inference in each of them are actually quite different, capturing only fragments of what seems to be a more general phenomenon.",
" BIBREF4 show that a simple LSTM model trained on the SNLI data fails when tested on SICK. However, their experiment is limited to this single architecture and dataset pair. BIBREF5 show that different models that perform well on SNLI have lower accuracy on MultiNLI. However in their experiments they did not systematically test transfer learning between the two datasets, but instead used separate systems where the training and test data were drawn from the same corpora."
],
[
"In this section we describe the datasets and model architectures included in the experiments."
],
[
"We chose three different datasets for the experiments: SNLI, MultiNLI and SICK. All of them have been designed for NLI involving three-way classification with the labels entailment, neutral and contradiction. We did not include any datasets with two-way classification, e.g. SciTail BIBREF14 . As SICK is a relatively small dataset with approximately only 10k sentence pairs, we did not use it as training data in any experiment. We also trained the models with a combined SNLI + MultiNLI training set.",
"For all the datasets we report the baseline performance where the training and test data are drawn from the same corpus. We then take these trained models and test them on a test set taken from another NLI corpus. For the case where the models are trained with SNLI + MultiNLI we report the baseline using the SNLI test data. All the experimental combinations are listed in Table 1 . Examples from the selected datasets are provided in Table 2 . To be more precise, we vary three things: training dataset, model and testing dataset. We should qualify this though, since the three datasets we look at, can also be grouped by text domain/genre and type of data collection, with MultiNLI and SNLI using the same data collection style, and SNLI and SICK using roughly the same domain/genre. Hopefully, our set up will let us determine which of these factors matters the most.",
"We describe the source datasets in more detail below.",
"The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. The source for the premise sentences in SNLI were image captions taken from the Flickr30k corpus BIBREF15 .",
"The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consisting of 433k human-written sentence pairs labeled with entailment, contradiction and neutral. MultiNLI contains sentence pairs from ten distinct genres of both written and spoken English. Only five genres are included in the training set. The development and test sets have been divided into matched and mismatched, where the former includes only sentences from the same genres as the training data, and the latter includes sentences from the remaining genres not present in the training data.",
"We used the matched development set (MultiNLI-m) for the experiments. The MultiNLI dataset was annotated using very similar instructions as for the SNLI dataset. Therefore we can assume that the definitions of entailment, contradiction and neutral is the same in these two datasets.",
"SICK BIBREF6 is a dataset that was originally constructed to test compositional distributional semantics (DS) models. The dataset contains 9,840 examples pertaining to logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was automatically constructed taking pairs of sentences from a random subset of the 8K ImageFlickr data set BIBREF15 and the SemEval 2012 STS MSRVideo Description dataset BIBREF16 ."
],
[
"We perform experiments with six high-performing models covering the sentence encoding models, cross-sentence attention models as well as fine-tuned pre-trained language models.",
"For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with the hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two model involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 . KIM is particularly interesting in this context as it performed significantly better than other models in the Breaking NLI experiment conducted by BIBREF1 . The success of pre-trained language models in multiple NLP tasks make ESIM + ELMo and BERT interesting additions to this experiment. Table 3 lists the different models used in the experiments.",
"For BiLSTM-max we used the Adam optimizer BIBREF21 , a learning rate of 5e-4 and batch size of 64. The learning rate was decreased by the factor of 0.2 after each epoch if the model did not improve. Dropout of 0.1 was used between the layers of the multi-layer perceptron classifier, except before the last layer.The BiLSTM-max models were initialized with pre-trained GloVe 840B word embeddings of size 300 dimensions BIBREF22 , which were fine-tuned during training. Our BiLSMT-max model was implemented in PyTorch.",
"For HBMP, ESIM, KIM and BERT we used the original implementations with the default settings and hyperparameter values as described in BIBREF18 , BIBREF19 , BIBREF2 and BIBREF0 respectively. For BERT we used the uncased 768-dimensional model (BERT-base). For ESIM + ELMo we used the AllenNLP BIBREF23 PyTorch implementation with the default settings and hyperparameter values."
],
[
"Table 4 contains all the experimental results.",
"Our experiments show that, while all of the six models perform well when the test set is drawn from the same corpus as the training and development set, accuracy is significantly lower when we test these trained models on a test set drawn from a separate NLI corpus, the average difference in accuracy being 24.9 points across all experiments.",
"Accuracy drops the most when a model is tested on SICK. The difference in this case is between 19.0-29.0 points when trained on MultiNLI, between 31.6-33.7 points when trained on SNLI and between 31.1-33.0 when trained on SNLI + MultiNLI. This was expected, as the method of constructing the sentence pairs was different, and hence there is too much difference in the kind of sentence pairs included in the training and test sets for transfer learning to work. However, the drop was more dramatic than expected.",
"The most surprising result was that the accuracy of all models drops significantly even when the models were trained on MultiNLI and tested on SNLI (3.6-11.1 points). This is surprising as both of these datasets have been constructed with a similar data collection method using the same definition of entailment, contradiction and neutral. The sentences included in SNLI are also much simpler compared to those in MultiNLI, as they are taken from the Flickr image captions. This might also explain why the difference in accuracy for all of the six models is lowest when the models are trained on MultiNLI and tested on SNLI. It is also very surprising that the model with the biggest difference in accuracy was ESIM + ELMo which includes a pre-trained ELMo language model. BERT performed significantly better than the other models in this experiment having an accuracy of 80.4% and only 3.6 point difference in accuracy.",
"The poor performance of most of the models with the MultiNLI-SNLI dataset pair is also very surprising given that neural network models do not seem to suffer a lot from introduction of new genres to the test set which were not included in the training set, as can be seen from the small difference in test accuracies for the matched and mismatched test sets (see e.g BIBREF5 ). In a sense SNLI could be seen as a separate genre not included in MultiNLI. This raises the question if the SNLI and MultiNLI have e.g. different kinds of annotation artifacts, which makes transfer learning between these datasets more difficult.",
"All the models, except BERT, perform almost equally poorly across all the experiments. Both BiLSTM-max and HBMP have an average drop in accuracy of 24.4 points, while the average for KIM is 25.5 and for ESIM + ELMo 25.6. ESIM has the highest average difference of 27.0 points. In contrast to the findings of BIBREF1 , utilizing external knowledge did not improve the model's generalization capability, as KIM performed equally poorly across all dataset combinations.",
"Also including a pretrained ELMo language model did not improve the results significantly. The overall performance of BERT was significantly better than the other models, having the lowest average difference in accuracy of 22.5 points. Our baselines for SNLI (90.4%) and SNLI + MultiNLI (90.6%) outperform the previous state-of-the-art accuracy for SNLI (90.1%) by BIBREF24 .",
"To understand better the types of errors made by neural network models in NLI we looked at some example failure-pairs for selected models. Tables 5 and 6 contain some randomly selected failure-pairs for two models: BERT and HBMP, and for three set-ups: SNLI $\\rightarrow $ SICK, SNLI $\\rightarrow $ MultiNLI and MultiNLI $\\rightarrow $ SICK. We chose BERT as the current the state of the art NLI model. HBMP was selected as a high performing model in the sentence encoding model type. Although the listed sentence pairs represent just a small sample of the errors made by these models, they do include some interesting examples. First, it seems that SICK has a more narrow notion of contradiction – corresponding more to logical contradiction – compared to the contradiction in SNLI and MultiNLI, where especially in SNLI the sentences are contradictory if they describe a different state of affairs. This is evident in the sentence pair: A young child is running outside over the fallen leaves and A young child is lying down on a gravel road that is covered with dead leaves, which is predicted by BERT to be contradiction although the gold label is neutral. Another interesting example is the sentence pair: A boat pear with people boarding and disembarking some boats. and people are boarding and disembarking some boats, which is incorrectly predicted by BERT to be contradiction although it has been labeled as entailment. Here the two sentences describe the same event from different points of view: the first one describing a boat pear with some people on it and the second one describing the people directly. Interestingly the added information about the boat pear seems to confuse the model."
],
[
"In this paper we have shown that neural network models for NLI fail to generalize across different NLI benchmarks. We experimented with six state-of-the-art models covering sentence encoding approaches, cross-sentence attention models and pre-trained and fine-tuned language models. For all the systems, the accuracy drops between 3.6-33.7 points (the average drop being 24.9 points), when testing with a test set drawn from a separate corpus from that of the training data, as compared to when the test and training data are splits from the same corpus. Our findings, together with the previous negative findings, indicate that the state-of-the-art models fail to capture the semantics of NLI in a way that will enable them to generalize across different NLI situations.",
"The results highlight two issues to be taken into consideration: a) using datasets involving a fraction of what NLI is, will fail when tested in datasets that are testing for a slightly different definition of inference. This is evident when we move from the SNLI to the SICK dataset. b) NLI is to some extent genre/context dependent. Training on SNLI and testing on MultiNLI gives worse results than vice versa. This is particularly evident in the case of BERT. These results highlight that training on multiple genres helps. However, this help is still not enough given that, even in the case of training on MultiNLI (multi genre) and training on SNLI (single genre and same definition of inference with MultiNLI), accuracy drops significantly.",
"We also found that involving a large pre-trained language model helps with transfer learning when the datasets are similar enough, as is the case with SNLI and MultiNLI. Our results further corroborate the power of pre-trained and fine-tuned language models like BERT in NLI. However, not even BERT is able to generalize from SNLI and MultiNLI to SICK, possibly due to the difference between what kind of inference relations are contained in these datasets.",
"Our findings motivate us to look for novel neural network architectures and approaches that better capture the semantics on natural language inference beyond individual datasets. However, there seems to be a need to start with better constructed datasets, i.e. datasets that will not only capture fractions of what NLI is in reality. Better NLI systems need to be able to be more versatile on the types of inference they can recognize. Otherwise, we would be stuck with systems that can cover only some aspects of NLI. On a theoretical level, and in connection to the previous point, we need a better understanding of the range of phenomena NLI must be able to cover and focus our future endeavours for dataset construction towards this direction. In order to do this a more systematic study is needed on the different kinds of entailment relations NLI datasets need to include. Our future work will include a more systematic and broad-coverage analysis of the types of errors the models make and in what kinds of sentence-pairs they make successful predictions."
],
[
" The first author is supported by the FoTran project, funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113). ",
"The first author also gratefully acknowledges the support of the Academy of Finland through project 314062 from the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence.",
"The second author is supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg. "
]
],
"section_name": [
"Introduction",
"Related Work",
"Experimental Setup",
"Data",
"Model and Training Details",
"Experimental Results",
"Discussion and Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"0b0ee6e9614e9c96cd79c50344c5ebbe7727bc32"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: Test accuracies (%). For the baseline results (highlighted in bold) the training data and test data have been drawn from the same benchmark corpus. ∆ is the difference between the test accuracy and the baseline accuracy for the same training set. Results marked with * are for the development set, as no annotated test set is openly available. Best scores with respect to accuracy and difference in accuracy are underlined."
],
"extractive_spans": [],
"free_form_answer": "MultiNLI",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Test accuracies (%). For the baseline results (highlighted in bold) the training data and test data have been drawn from the same benchmark corpus. ∆ is the difference between the test accuracy and the baseline accuracy for the same training set. Results marked with * are for the development set, as no annotated test set is openly available. Best scores with respect to accuracy and difference in accuracy are underlined."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
},
{
"annotation_id": [
"9f5842ea139d471fa3e041b5e4a401c581e01292"
],
"answer": [
{
"evidence": [
"Also including a pretrained ELMo language model did not improve the results significantly. The overall performance of BERT was significantly better than the other models, having the lowest average difference in accuracy of 22.5 points. Our baselines for SNLI (90.4%) and SNLI + MultiNLI (90.6%) outperform the previous state-of-the-art accuracy for SNLI (90.1%) by BIBREF24 ."
],
"extractive_spans": [
"BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
" The overall performance of BERT was significantly better than the other models, having the lowest average difference in accuracy of 22.5 points."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
},
{
"annotation_id": [
"5dccd2cfa3288c901912f44285b3f002d1cfaef6"
],
"answer": [
{
"evidence": [
"For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with the hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two model involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 . KIM is particularly interesting in this context as it performed significantly better than other models in the Breaking NLI experiment conducted by BIBREF1 . The success of pre-trained language models in multiple NLP tasks make ESIM + ELMo and BERT interesting additions to this experiment. Table 3 lists the different models used in the experiments."
],
"extractive_spans": [],
"free_form_answer": "BiLSTM-max, HBMP, ESIM, KIM, ESIM + ELMo, and BERT",
"highlighted_evidence": [
"For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with the hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two model involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
},
{
"annotation_id": [
"4be4b9919967b8f3f08d37fc1e0b695f43d44f92"
],
"answer": [
{
"evidence": [
"We chose three different datasets for the experiments: SNLI, MultiNLI and SICK. All of them have been designed for NLI involving three-way classification with the labels entailment, neutral and contradiction. We did not include any datasets with two-way classification, e.g. SciTail BIBREF14 . As SICK is a relatively small dataset with approximately only 10k sentence pairs, we did not use it as training data in any experiment. We also trained the models with a combined SNLI + MultiNLI training set.",
"The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. The source for the premise sentences in SNLI were image captions taken from the Flickr30k corpus BIBREF15 .",
"The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consisting of 433k human-written sentence pairs labeled with entailment, contradiction and neutral. MultiNLI contains sentence pairs from ten distinct genres of both written and spoken English. Only five genres are included in the training set. The development and test sets have been divided into matched and mismatched, where the former includes only sentences from the same genres as the training data, and the latter includes sentences from the remaining genres not present in the training data.",
"SICK BIBREF6 is a dataset that was originally constructed to test compositional distributional semantics (DS) models. The dataset contains 9,840 examples pertaining to logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was automatically constructed taking pairs of sentences from a random subset of the 8K ImageFlickr data set BIBREF15 and the SemEval 2012 STS MSRVideo Description dataset BIBREF16 ."
],
"extractive_spans": [
"SNLI, MultiNLI and SICK"
],
"free_form_answer": "",
"highlighted_evidence": [
"We chose three different datasets for the experiments: SNLI, MultiNLI and SICK.",
"The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. ",
"The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consisting of 433k human-written sentence pairs labeled with entailment, contradiction and neutral.",
"SICK BIBREF6 is a dataset that was originally constructed to test compositional distributional semantics (DS) models. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Which training dataset allowed for the best generalization to benchmark sets?",
"Which model generalized the best?",
"Which models were compared?",
"Which datasets were used?"
],
"question_id": [
"a48c6d968707bd79469527493a72bfb4ef217007",
"b69897deb5fb80bf2adb44f9cbf6280d747271b3",
"ad1f230f10235413d1fe501e414358245b415476",
"0a521541b9e2b5c6d64fb08eb318778eba8ac9f7"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Dataset combinations used in the experiments. The rows in bold are baseline experiments, where the test data comes from the same benchmark as the training and development data.",
"Table 2: Example sentence pairs from the three datasets.",
"Table 3: Model architectures used in the experiments.",
"Table 4: Test accuracies (%). For the baseline results (highlighted in bold) the training data and test data have been drawn from the same benchmark corpus. ∆ is the difference between the test accuracy and the baseline accuracy for the same training set. Results marked with * are for the development set, as no annotated test set is openly available. Best scores with respect to accuracy and difference in accuracy are underlined.",
"Table 5: Example failure-pairs for BERT.",
"Table 6: Example failure-pairs for HBMP."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"9-Table5-1.png",
"10-Table6-1.png"
]
} | [
"Which training dataset allowed for the best generalization to benchmark sets?",
"Which models were compared?"
] | [
[
"1810.09774-6-Table4-1.png"
],
[
"1810.09774-Model and Training Details-1"
]
] | [
"MultiNLI",
"BiLSTM-max, HBMP, ESIM, KIM, ESIM + ELMo, and BERT"
] | 171 |
1910.05608 | VAIS Hate Speech Detection System: A Deep Learning based Approach for System Combination | Nowadays, social network sites (SNSs) such as Facebook and Twitter are common places where people express their opinions and sentiments and share information with others. However, some people use SNSs to post abusive and harassing threats in order to prevent other SNS users from expressing themselves or seeking different opinions. To deal with this problem, SNSs have to spend a lot of resources, including human moderators, to clean the aforementioned content. In this paper, we propose a supervised learning model based on the ensemble method to solve the problem of detecting hate content on SNSs in order to make conversations on SNSs more effective. Our proposed model achieved first place on the public dashboard with a 0.730 F1 macro-score and third place on the private dashboard with a 0.584 F1 macro-score at the sixth international workshop on Vietnamese Language and Speech Processing 2019. | {
"paragraphs": [
[
"Currently, social networks are so popular. Some of the biggest ones include Facebook, Twitter, Youtube,... with extremely number of users. Thus, controlling content of those platforms is essential. For years, social media companies such as Twitter, Facebook, and YouTube have been investing hundreds of millions euros on this task BIBREF0, BIBREF1. However, their effort is not enough since such efforts are primarily based on manual moderation to identify and delete offensive materials. The process is labour intensive, time consuming, and not sustainable or scalable in reality BIBREF2, BIBREF0, BIBREF3.",
"In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task is proposed as one of the shared-tasks to handle the problem related to controlling content in SNSs. HSD is required to build a multi-class classification model that is capable of classifying an item to one of 3 classes (hate, offensive, clean). Hate speech (hate): an item is identified as hate speech if it (1) targets individuals or groups on the basis of their characteristics; (2) demonstrates a clear intention to incite harm, or to promote hatred; (3) may or may not use offensive or profane words. Offensive but not hate speech (offensive): an item (posts/comments) may contain offensive words but it does not target individuals or groups on the basis of their characteristics. Neither offensive nor hate speech (clean): normal item, it does not contain offensive language or hate speech.",
"The term `hate speech' was formally defined as `any communication that disparages a person or a group on the basis of some characteristics (to be referred to as types of hate or hate classes) such as race, colour, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics' BIBREF4. Many researches have been conducted in recent years to develop automatic methods for hate speech detection in the social media domain. These typically employ semantic content analysis techniques built on Natural Language Processing (NLP) and Machine Learning (ML) methods. The task typically involves classifying textual content into non-hate or hateful. This HSD task is much more difficult when it requires classify text in three classes, with hate and offensive class quite hard to classify even with humans.",
"In this paper, we propose a method to handle this HSD problem. Our system combines multiple text representations and models architecture in order to make diverse predictions. The system is heavily based on the ensemble method. The next section will present detail of our system including data preparation (how we clean text and build text representation), architecture of the model using in the system, and how we combine them together. The third section is our experiment and result report in HSD shared-task VLSP 2019. The final section is our conclusion with advantages and disadvantages of the system following by our perspective."
],
[
"In this section, we present the system architecture. It includes how we pre-process text, what types of text representation we use and models used in our system. In the end, we combine model results by using an ensemble technique."
],
[
"The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16."
],
[
"Content in the dataset that provided in this HSD task is very diverse. Words having the same meaning were written in various types (teen code, non tone, emojis,..) depending on the style of users. Dataset was crawled from various sources with multiple text encodes. In order to make it easy for training, all types of encoding need to be unified. This cleaning module will be used in two processes: cleaning data before training and cleaning input in inferring phase. Following is the data processing steps that we use:",
"Step 1: Format encoding. Vietnamese has many accents, intonations with different Unicode typing programs which may have different outputs with the same typing type. To make it unified, we build a library named visen. For example, the input \"thíêt kê will be normalized to \"thiết kế\" as the output.",
"Step 2: In social networks, people show their feelings a lot by emojis. Emoticon is often a special Unicode character, but sometimes, it is combined by multiple normal characters like `: ( = ]'. We make a dictionary mapping this emoji (combined by some characters) to a single Unicode character like other emojis to make it unified.",
"Step 3: Remove unseen characters. For human, unseen character is invisible but for a computer, it makes the model harder to process and inserts space between words, punctuation and emoji. This step aims at reducing the number of words in the dictionary which is important task, especially with low dataset resources like this HSD task.",
"Step 4: With model requiring Vietnamese word segmentation as the input, we use BIBREF9, BIBREF10 to tokenize the input text.",
"Step 5: Make all string lower. We experimented and found that lower-case or upper-case are not a significant impact on the result, but with lower characters, the number of words in the dictionary is reduced.",
"RoBERTa proposed in BIBREF8 an optimized method for pretraining self-supervised NLP systems. In our system, we use RoBERTa not only to make sentence representation but also to augment data. With mask mechanism, we replace a word in the input sentence with another word that RoBERTa model proposes. To reduce the impact of replacement word, the chosen words are all common words that appear in almost three classes of the dataset. For example, with input `nhổn làm gắt vl', we can augment to other outputs: `vl làm gắt qá', `còn làm vl vậy', `vl làm đỉnh vl' or `thanh chút gắt vl'.",
"british"
],
[
"Social comment dataset has high variety, the core idea is using multiple model architectures to handle data in many viewpoints. In our system, we use five different model architectures combining many types of CNN, and RNN. Each model will use some types of word embedding or handle directly sentence embedding to achieve the best general result. Source code of five models is extended from the GitHub repository",
"The first model is TextCNN (figure FIGREF2) proposed in BIBREF11. It only contains CNN blocks following by some Dense layers. The output of multiple CNN blocks with different kernel sizes is connected to each other.",
"The second model is VDCNN (figure FIGREF5) inspired by the research in BIBREF12. Like the TextCNN model, it contains multiple CNN blocks. The addition in this model is its residual connection.",
"The third model is a simple LSTM bidirectional model (figure FIGREF15). It contains multiple LSTM bidirectional blocks stacked to each other.",
"The fourth model is LSTMCNN (figure FIGREF24). Before going through CNN blocks, series of word embedding will be transformed by LSTM bidirectional block.",
"The final model is the system named SARNN (figure FIGREF25). It adds an attention block between LTSM blocks."
],
[
"Ensemble methods is a machine learning technique that combines several base models in order to produce one optimal predictive model. Have the main three types of ensemble methods including Bagging, Boosting and Stacking. In this system, we use the Stacking method. In this method, the output of each model is not only class id but also the probability of each class in the set of three classes. This probability will become a feature for the ensemble model. The stacking ensemble model here is a simple full-connection model with input is all of probability that output from sub-model. The output is the probability of each class."
],
[
"The dataset in this HSD task is really imbalance. Clean class dominates with 91.5%, offensive class takes 5% and the rest belongs to hate class with 3.5%. To make model being able to learn with this imbalance data, we inject class weight to the loss function with the corresponding ratio (clean, offensive, hate) is $(0.09, 0.95, 0.96)$. Formular DISPLAY_FORM17 is the loss function apply for all models in our system. $w_i$ is the class weight, $y_i$ is the ground truth and $\\hat{y}_i$ is the output of the model. If the class weight is not set, we find that model cannot adjust parameters. The model tends to output all clean classes.",
"We experiment 8 types of embedding in total:",
"comment: CBOW embedding training in all dataset comment, each word is splited by space. Embedding size is 200.",
"comment_bpe: CBOW embedding training in all dataset comment, each word is splited by subword bpe. Embedding size is 200.",
"comment_tokenize: CBOW embedding training in all dataset comment, each word is splited by space. Before split by space, word is concatenated by using BIBREF9, BIBREF13, BIBREF10. Embedding size is 200.",
"roberta: sentence embedding training in all dataset comment, training by using RoBERTa architecture. Embedding size is 256.",
"fasttext, sonvx* is all pre-trained word embedding in general domain. Before mapping word to vector, word is concatenated by using BIBREF9, BIBREF13, BIBREF10. Embedding size of fasttext is 300. (sonvx_wiki, sonvx_baomoi_w2, sonvx_baomoi_w5) have embedding size corresponding is (400, 300, 400).",
"In our experiment, the dataset is split into two-part: train set and dev set with the corresponding ratio $(0.9, 0.1)$. Two subsets have the same imbalance ratio like the root set. For each combination of model and word embedding, we train model in train set until it achieve the best result of loss score in the dev set. The table TABREF12 shows the best result of each combination on the f1_macro score.",
"For each model having the best fit on the dev set, we export the probability distribution of classes for each sample in the dev set. In this case, we only use the result of model that has f1_macro score that larger than 0.67. The probability distribution of classes is then used as feature to input into a dense model with only one hidden layer (size 128). The training process of the ensemble model is done on samples of the dev set. The best fit result is 0.7356. The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set.",
"Statistics of the final result on the dev set shows that almost cases have wrong prediction from offensive and hate class to clean class belong to samples containing the word `vl'. (62% in the offensive class and 48% in the hate class). It means that model overfit the word `vl' to the clean class. This makes sense because `vl' appears too much in the clean class dataset.",
"In case the model predicts wrong from the clean class to the offensive class and the hate class, the model tends to decide case having sensitive words to be wrong class. The class offensive and the hate are quite difficult to distinguish even with human."
],
[
"In this study, we experiment the combination of multiple embedding types and multiple model architecture to solve a part of the problem Hate Speech Detection with a signification good classification results. Our system heavily based on the ensemble technique so the weakness of the system is slow processing speed. But in fact, it is not big trouble with this HSD problem when human usually involve handling directly in the before.",
"HSD is a hard problem even with human. In order to improve classification quality, in the future, we need to collect more data especially social networks content. This will make building text representation more correct and help model easier to classify.",
"british"
]
],
"section_name": [
"Introduction",
"System description",
"System description ::: System overview",
"System description ::: Data pre-processing",
"System description ::: Models architecture",
"System description ::: Ensemble method",
"Experiment",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0b3cf44bc00d13112653dfd6e44be62454996080"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8750ed52a25b10a49042f666fb69a331e0a935b8"
],
"answer": [
{
"evidence": [
"In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task is proposed as one of the shared-tasks to handle the problem related to controlling content in SNSs. HSD is required to build a multi-class classification model that is capable of classifying an item to one of 3 classes (hate, offensive, clean). Hate speech (hate): an item is identified as hate speech if it (1) targets individuals or groups on the basis of their characteristics; (2) demonstrates a clear intention to incite harm, or to promote hatred; (3) may or may not use offensive or profane words. Offensive but not hate speech (offensive): an item (posts/comments) may contain offensive words but it does not target individuals or groups on the basis of their characteristics. Neither offensive nor hate speech (clean): normal item, it does not contain offensive language or hate speech.",
"The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task is proposed as one of the shared-tasks to handle the problem related to controlling content in SNSs.",
"The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c0e0e5fd2ec729d22dfb24cad8b4961de4f6a371"
],
"answer": [
{
"evidence": [
"The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16.",
"The first model is TextCNN (figure FIGREF2) proposed in BIBREF11. It only contains CNN blocks following by some Dense layers. The output of multiple CNN blocks with different kernel sizes is connected to each other.",
"The second model is VDCNN (figure FIGREF5) inspired by the research in BIBREF12. Like the TextCNN model, it contains multiple CNN blocks. The addition in this model is its residual connection.",
"The third model is a simple LSTM bidirectional model (figure FIGREF15). It contains multiple LSTM bidirectional blocks stacked to each other.",
"The fourth model is LSTMCNN (figure FIGREF24). Before going through CNN blocks, series of word embedding will be transformed by LSTM bidirectional block.",
"The final model is the system named SARNN (figure FIGREF25). It adds an attention block between LTSM blocks.",
"Ensemble methods is a machine learning technique that combines several base models in order to produce one optimal predictive model. Have the main three types of ensemble methods including Bagging, Boosting and Stacking. In this system, we use the Stacking method. In this method, the output of each model is not only class id but also the probability of each class in the set of three classes. This probability will become a feature for the ensemble model. The stacking ensemble model here is a simple full-connection model with input is all of probability that output from sub-model. The output is the probability of each class."
],
"extractive_spans": [
"Stacking method",
"LSTMCNN",
"SARNN",
"simple LSTM bidirectional model",
"TextCNN"
],
"free_form_answer": "",
"highlighted_evidence": [
" After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13.",
"The first model is TextCNN (figure FIGREF2) proposed in BIBREF11. It only contains CNN blocks following by some Dense layers. The output of multiple CNN blocks with different kernel sizes is connected to each other.\n\nThe second model is VDCNN (figure FIGREF5) inspired by the research in BIBREF12. Like the TextCNN model, it contains multiple CNN blocks. The addition in this model is its residual connection.\n\nThe third model is a simple LSTM bidirectional model (figure FIGREF15). It contains multiple LSTM bidirectional blocks stacked to each other.\n\nThe fourth model is LSTMCNN (figure FIGREF24). Before going through CNN blocks, series of word embedding will be transformed by LSTM bidirectional block.",
"The final model is the system named SARNN (figure FIGREF25). It adds an attention block between LTSM blocks.",
"In this system, we use the Stacking method. In this method, the output of each model is not only class id but also the probability of each class in the set of three classes. This probability will become a feature for the ensemble model. The stacking ensemble model here is a simple full-connection model with input is all of probability that output from sub-model. The output is the probability of each class."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8093351a29b0413586ea24cffac9e4a6579fc81b"
],
"answer": [
{
"evidence": [
"For each model having the best fit on the dev set, we export the probability distribution of classes for each sample in the dev set. In this case, we only use the result of model that has f1_macro score that larger than 0.67. The probability distribution of classes is then used as feature to input into a dense model with only one hidden layer (size 128). The training process of the ensemble model is done on samples of the dev set. The best fit result is 0.7356. The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set."
],
"extractive_spans": [],
"free_form_answer": "Private dashboard is leaderboard where competitors can see results after competition is finished - on hidden part of test set (private test set).",
"highlighted_evidence": [
"The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1ce57e4664d6c940e3c0273b522df6734e066af6"
],
"answer": [
{
"evidence": [
"For each model having the best fit on the dev set, we export the probability distribution of classes for each sample in the dev set. In this case, we only use the result of model that has f1_macro score that larger than 0.67. The probability distribution of classes is then used as feature to input into a dense model with only one hidden layer (size 128). The training process of the ensemble model is done on samples of the dev set. The best fit result is 0.7356. The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set."
],
"extractive_spans": [],
"free_form_answer": "Public dashboard where competitors can see their results during competition, on part of the test set (public test set).",
"highlighted_evidence": [
"The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5c608801d127bf97d4546a64f1a83ae280112167"
],
"answer": [
{
"evidence": [
"The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16.",
"The dataset in this HSD task is really imbalance. Clean class dominates with 91.5%, offensive class takes 5% and the rest belongs to hate class with 3.5%. To make model being able to learn with this imbalance data, we inject class weight to the loss function with the corresponding ratio (clean, offensive, hate) is $(0.09, 0.95, 0.96)$. Formular DISPLAY_FORM17 is the loss function apply for all models in our system. $w_i$ is the class weight, $y_i$ is the ground truth and $\\hat{y}_i$ is the output of the model. If the class weight is not set, we find that model cannot adjust parameters. The model tends to output all clean classes."
],
"extractive_spans": [],
"free_form_answer": "They used Wiki Vietnamese language and Vietnamese newspapers to pretrain embeddings and dataset provided in HSD task to train model (details not mentioned in paper).",
"highlighted_evidence": [
"Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7",
"The dataset in this HSD task is really imbalance. Clean class dominates with 91.5%, offensive class takes 5% and the rest belongs to hate class with 3.5%."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What was the baseline?",
"Is the data all in Vietnamese?",
"What classifier do they use?",
"What is private dashboard?",
"What is public dashboard?",
"What dataset do they use?"
],
"question_id": [
"11e376f98df42f487298ec747c32d485c845b5cd",
"284ea817fd79bc10b7a82c88d353e8f8a9d7e93c",
"c0122190119027dc3eb51f0d4b4483d2dbedc696",
"1ed6acb88954f31b78d2821bb230b722374792ed",
"5a33ec23b4341584a8079db459d89a4e23420494",
"1b9119813ea637974d21862a8ace83bc1acbab8e"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1. Hate Speech Detection System Overview",
"Figure 2. TextCNN model architecture",
"Figure 4. LSTM model architecture",
"Figure 3. VDCNN model architecture",
"Table I F1_MACRO SCORE OF DIFFERENT MODEL",
"Figure 5. LSTMCNN model architecture",
"Figure 6. SARNN model architecture"
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure4-1.png",
"3-Figure3-1.png",
"4-TableI-1.png",
"4-Figure5-1.png",
"5-Figure6-1.png"
]
} | [
"What is private dashboard?",
"What is public dashboard?",
"What dataset do they use?"
] | [
[
"1910.05608-Experiment-8"
],
[
"1910.05608-Experiment-8"
],
[
"1910.05608-Experiment-0",
"1910.05608-System description ::: System overview-0"
]
] | [
"Private dashboard is leaderboard where competitors can see results after competition is finished - on hidden part of test set (private test set).",
"Public dashboard where competitors can see their results during competition, on part of the test set (public test set).",
"They used Wiki Vietnamese language and Vietnamese newspapers to pretrain embeddings and dataset provided in HSD task to train model (details not mentioned in paper)."
] | 172 |
2003.06279 | Using word embeddings to improve the discriminability of co-occurrence text networks | Word co-occurrence networks have been employed to analyze texts both in practical and theoretical scenarios. Despite the relative success in several applications, traditional co-occurrence networks fail to establish links between similar words whenever they appear distant in the text. Here we investigate whether the use of word embeddings as a tool to create virtual links in co-occurrence networks may improve the quality of classification systems. Our results revealed that the discriminability in the stylometry task is improved when using GloVe, Word2Vec and FastText. In addition, we found that optimized results are obtained when stopwords are not disregarded and a simple global thresholding strategy is used to establish virtual links. Because the proposed approach is able to improve the representation of texts as complex networks, we believe that it could be extended to study other natural language processing tasks. Likewise, theoretical language studies could benefit from the adopted enriched representation of word co-occurrence networks. | {
"paragraphs": [
[
"The ability to construct complex and diverse linguistic structures is one of the main features that set us apart from all other species. Despite its ubiquity, some language aspects remain unknown. Topics such as language origin and evolution have been studied by researchers from diverse disciplines, including Linguistic, Computer Science, Physics and Mathematics BIBREF0, BIBREF1, BIBREF2. In order to better understand the underlying language mechanisms and universal linguistic properties, several models have been developed BIBREF3, BIBREF4. A particular language representation regards texts as complex systems BIBREF5. Written texts can be considered as complex networks (or graphs), where nodes could represent syllables, words, sentences, paragraphs or even larger chunks BIBREF5. In such models, network edges represent the proximity between nodes, e.g. the frequency of the co-occurrence of words. Several interesting results have been obtained from networked models, such as the explanation of Zipf's Law as a consequence of the least effort principle and theories on the nature of syntactical relationships BIBREF6, BIBREF7.",
"In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantical information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, it yields competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks.",
"While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges.",
"Our main objective here is to evaluate whether such an approach is able to improve the discriminability of word co-occurrence networks in a typical text network classification task. We evaluate the methodology for different embedding techniques, including GloVe, Word2Vec and FastText. We also investigated different thresholding strategies to establish virtual links. Our results revealed, as a proof of principle, that the proposed approach is able to improve the discriminability of the classification when compared to the traditional co-occurrence network. While the gain in performance depended upon the text length being considered, we found relevant gains for intermediary text lengths. Additional results also revealed that a simple thresholding strategy combined with the use of stopwords tends to yield the best results.",
"We believe that the proposed representation could be applied in other text classification tasks, which could lead to potential gains in performance. Because the inclusion of virtual edges is a simple technique to make the network denser, such an approach can benefit networked representations with a limited number of nodes and edges. This representation could also shed light into language mechanisms in theoretical studies relying on the representation of text as complex networks. Potential novel research lines leveraging the adopted approach to improve the characterization of texts in other applications are presented in the conclusion."
],
[
"Complex networks have been used in a wide range of fields, including in Social Sciences BIBREF13, Neuroscience BIBREF14, Biology BIBREF15, Scientometry BIBREF16 and Pattern Recognition BIBREF17, BIBREF18, BIBREF19, BIBREF20. In text analysis, networks are used to uncover language patterns, including the origins of the ever present Zipf's Law BIBREF21 and the analysis of linguistic properties of natural and unknown texts BIBREF22, BIBREF23. Applications of network science in text mining and text classification encompasses applications in semantic analysis BIBREF24, BIBREF25, BIBREF26, BIBREF27, authorship attribution BIBREF28, BIBREF29 and stylometry BIBREF28, BIBREF30, BIBREF31. Here we focus in the stylometric analysis of texts using complex networks.",
"In BIBREF28, the authors used a co-occurrence network to study a corpus of English and Polish books. They considered a dataset of 48 novels, which were written by 8 different authors. Differently from traditional co-occurrence networks, some punctuation marks were considered as words when mapping texts as networks. The authors also decided to create a methodology to normalize the obtained network metrics, since they considered documents with variations in length. A similar approach was adopted in a similar study BIBREF32, with a focus on comparing novel measurements and measuring the effect of considering stopwords in the network structure.",
"A different approach to analyze co-occurrence networks was devised in BIBREF33. Whilst most approaches only considered traditional network measurements or devised novel topological and dynamical measurements, the authors combined networked and semantic information to improve the performance of network-based classification. Interesting, the combined use of network motifs and node labels (representing the corresponding words) allowed an improvement in performance in the considered task. A similar combination of techniques using a hybrid approach was proposed in BIBREF8. Networked-based approaches has also been applied to the authorship recognition tasks in other languages, including Persian texts BIBREF9.",
"Co-occurrence networks have been used in other contexts other than stylometric analysis. The main advantage of this approach is illustrated in the task aimed at diagnosing diseases via text analysis BIBREF11. Because the topological analysis of co-occurrence language networks do not require deep semantic analysis, this model is able to model text created by patients suffering from cognitive impairment BIBREF11. Recently, it has been shown that the combination of network and traditional features could be used to improve the diagnosis of patients with cognitive impairment BIBREF11. Interestingly, this was one of the first approaches suggesting the use of embeddings to address the particular problem of lack of statistics to create a co-occurrence network in short documents BIBREF34.",
"While many of the works dealing with word co-occurrence networks have been proposed in the last few years, no systematic study of the effects of including information from word embeddings in such networks has been analyzed. This work studies how links created via embeddings information modify the underlying structure of networks and, most importantly, how it can improve the model to provide improved classification performance in the stylometry task."
],
[
"To represent texts as networks, we used the so-called word adjacency network representation BIBREF35, BIBREF28, BIBREF32. Typically, before creating the networks, the text is pre-processed. An optional pre-processing step is the removal of stopwords. This step is optional because such words include mostly article and prepositions, which may be artlessly represented by network edges. However, in some applications – including the authorship attribution task – stopwords (or function words) play an important role in the stylistic characterization of texts BIBREF32. A list of stopwords considered in this study is available in the Supplementary Information.",
"The pre-processing step may also include a lemmatization procedure. This step aims at mapping words conveying the same meaning into the same node. In the lemmatization process, nouns and verbs are mapped into their singular and infinite forms. Note that, while this step is useful to merge words sharing a lemma into the same node, more complex semantical relationships are overlooked. For example, if “car” and “vehicle” co-occur in the same text, they are considered as distinct nodes, which may result in an inaccurate representation of the text.",
"Such a drawback is addressed by including “virtual” edges connecting nodes. In other words, even if two words are not adjacent in the text, we include “virtual” edges to indicate that two distant words are semantically related. The inclusion of such virtual edges is illustrated in Figure FIGREF1. In order to measure the semantical similarity between two concepts, we use the concept of word embeddings BIBREF36, BIBREF37. Thus, each word is represented using a vector representation encoding the semantical and contextual characteristics of the word. Several interesting properties have been obtained from distributed representation of words. One particular property encoded in the embeddings representation is the fact the semantical similarity between concepts is proportional to the similarity of vectors representing the words. Similarly to several other works, here we measure the similarity of the vectors via cosine similarity BIBREF38.",
"The following strategies to create word embedding were considered in this paper:",
"GloVe: the Global Vectors (GloVe) algorithm is an extension of the Word2vec model BIBREF39 for efficient word vector learning BIBREF40. This approach combines global statistics from matrix factorization techniques (such as latent semantic analysis) with context-based and predictive methods like Word2Vec. This method is called as Global Vector method because the global corpus statistics are captured by GloVe. Instead of using a window to define the local context, GloVe constructs an explicit word-context matrix (or co-occurrence matrix) using statistics across the entire corpus. The final result is a learning model that oftentimes yields better word vector representations BIBREF40.",
"Word2Vec: this is a predictive model that finds dense vector representations of words using a three-layer neural network with a single hidden layer BIBREF39. It can be defined in a two-fold way: continuous bag-of-words and skip-gram model. In the latter, the model analyzes the words of a set of sentences (or corpus) and attempts to predict the neighbors of such words. For example, taking as reference the word “Robin”, the model decides that “Hood” is more likely to follow the reference word than any other word. The vectors are obtained as follows: given the vocabulary (generated from all corpus words), the model trains a neural network with the sentences of the corpus. Then, for a given word, the probabilities that each word follows the reference word are obtained. Once the neural network is trained, the weights of the hidden layer are used as vectors of each corpus word.",
"FastText: this method is another extension of the Word2Vec model BIBREF41. Unlike Word2Vec, FastText represents each word as a bag of character n-grams. Therefore, the neural network not only trains individual words, but also several n-grams of such words. The vector for a word is the sum of vectors obtained for the character n-grams composing the word. For example, the embedding obtained for the word “computer” with $n\\le 3$ is the sum of the embeddings obtained for “co”, “com”, “omp”, “mpu”, “put”, “ute”, “ter” and “er”. In this way, this method obtains improved representations for rare words, since n-grams composing rare words might be present in other words. The FastText representation also allows the model to understand suffixes and prefixes. Another advantage of FastText is its efficiency to be trained in very large corpora.",
"Concerning the thresholding process, we considered two main strategies. First, we used a global strategy: in addition to the co-occurrence links (continuous lines in Figure FIGREF1), only “virtual” edges stronger than a given threshold are left in the network. Thus only the most similar concepts are connected via virtual links. This strategy is hereafter referred to as global strategy. Unfortunately, this method may introduce an undesired bias towards hubs BIBREF42.",
"To overcome the potential disadvantages of the global thresholding method, we also considered a more refined thresholding approach that takes into account the local structure to decide whether a weighted link is statistically significant BIBREF42. This method relies on the idea that the importance of an edge should be considered in the the context in which it appears. In other words, the relevance of an edge should be evaluated by analyzing the nodes connected to its ending points. Using the concept of disparity filter, the method devised in BIBREF42 defines a null model that quantifies the probability of a node to be connected to an edge with a given weight, based on its other connections. This probability is used to define the significance of the edge. The parameter that is used to measure the significance of an edge $e_{ij}$ is $\\alpha _{ij}$, defined as:",
"where $w_{ij}$ is the weight of the edge $e_{ij}$ and $k_i$ is the degree of the $i$-th node. The obtained network corresponds to the set of nodes and edges obtained by removing all edges with $\\alpha $ higher than the considered threshold. Note that while the similarity between co-occurrence links might be considered to compute $\\alpha _{ij}$, only “virtual” edges (i.e. the dashed lines in Figure FIGREF1) are eligible to be removed from the network in the filtering step. This strategy is hereafter referred to as local strategy.",
"After co-occurrence networks are created and virtual edges are included, in the next step we used a characterization based on topological analysis. Because a global topological analysis is prone to variations in network size, we focused our analysis in the local characterization of complex networks. In a local topological analysis, we use as features the value of topological/dynamical measurements obtained for a set of words. In this case, we selected as feature the words occurring in all books of the dataset. For each word, we considered the following network measurements: degree, betweenness, clustering coefficient, average shortest path length, PageRank, concentric symmetry (at the second and third hierarchical level) BIBREF32 and accessibility BIBREF43, BIBREF44 (at the second and third hierarchical level). We chose these measurements because all of them capture some particular linguistic feature of texts BIBREF45, BIBREF46, BIBREF47, BIBREF48. After network measurements are extracted, they are used in machine learning algorithms. In our experiments, we considered Decision Trees (DT), nearest neighbors (kNN), Naive Bayes (NB) and Support Vector Machines (SVM). We used some heuristics to optimize classifier parameters. Such techniques are described in the literature BIBREF49. The accuracy of the pattern recognition methods were evaluated using cross-validation BIBREF50.",
"In summary, the methodology used in this paper encompasses the following steps:",
"Network construction: here texts are mapped into a co-occurrence networks. Some variations exists in the literature, however here we focused in the most usual variation, i.e. the possibility of considering or disregarding stopwords. A network with co-occurrence links is obtained after this step.",
"Network enrichment: in this step, the network is enriched with virtual edges established via similarity of word embeddings. After this step, we are given a complete network with weighted links. Virtually, any embedding technique could be used to gauge the similarity between nodes.",
"Network filtering: in order to eliminate spurious links included in the last step, the weakest edges are filtered. Two approaches were considered: a simple approach based on a global threshold and a local thresholding strategy that preserves network community structure. The outcome of this network filtering step is a network with two types of links: co-occurrence and virtual links (as shown in Figure FIGREF1).",
"Feature extraction: In this step, topological and dynamical network features are extracted. Here, we do not discriminate co-occurrence from virtual edges to compute the network metrics.",
"Pattern classification: once features are extracted from complex networks, they are used in pattern classification methods. This might include supervised, unsupervised and semi-supervised classification. This framework is exemplified in the supervised scenario.",
"The above framework is exemplified with the most common technique(s). It should be noted that the methods used, however, can be replaced by similar techniques. For example, the network construction could consider stopwords or even punctuation marks BIBREF51. Another possibility is the use of different strategies of thresholding. While a systematic analysis of techniques and parameters is still required to reveal other potential advantages of the framework based on the addition of virtual edges, in this paper we provide a first analysis showing that virtual edges could be useful to improve the discriminability of texts modeled as complex networks.",
"Here we used a dataset compatible with datasets used recently in the literature (see e.g. BIBREF28, BIBREF10, BIBREF52). The objective of the studied stylometric task is to identify the authorship of an unknown document BIBREF53. All data and some statistics of each book are shown in the Supplementary Information."
],
[
"In Section SECREF13, we probe whether the inclusion of virtual edges is able to improve the performance of the traditional co-occurrence network-based classification in a usual stylometry task. While the focus of this paper is not to perform a systematic analysis of different methods comprising the adopted network, we consider two variations in the adopted methodology. In Section SECREF19, we consider the use of stopwords and the adoption of a local thresholding process to establish different criteria to create new virtual edges."
],
[
"In Figure FIGREF14, we show some of the improvements in performance obtained when including a fixed amount of virtual edges using GloVe as embedding method. In each subpanel, we show the relative improvement in performance obtained as a function of the fraction of additional edges. In this section, we considered the traditional co-occurrence as starting point. In other words, the network construction disregarded stopwords. The list of stopwords considered in this paper is available in the Supplementary Information. We also considered the global approach to filter edges.",
"The relative improvement in performance is given by $\\Gamma _+{(p)}/\\Gamma _0$, where $\\Gamma _+{(p)}$ is the accuracy rate obtained when $p\\%$ additional edges are included and $\\Gamma _0 = \\Gamma _+{(p=0)}$, i.e. $\\Gamma _0$ is the accuracy rate measured from the traditional co-occurrence model. We only show the highest relative improvements in performance for each classifier. In our analysis, we considered also samples of text with distinct length, since the performance of network-based methods is sensitive to text length BIBREF34. In this figure, we considered samples comprising $w=\\lbrace 1.0, 2.5, 5.0, 10.0\\rbrace $ thousand words.",
"The results obtained for GloVe show that the highest relative improvements in performance occur for decision trees. This is apparent specially for the shortest samples. For $w=1,000$ words, the decision tree accuracy is enhanced by a factor of almost 50% when $p=20\\%$. An excellent gain in performance is also observed for both Naive Bayes and SVM classifiers, when $p=18\\%$ and $p=12\\%$, respectively. When $w=2,500$ words, the highest improvements was observed for the decision tree algorithm. A minor improvement was observed for the kNN method. A similar behavior occurred for $w=5,000$ words. Interestingly, SVM seems to benefit from the use of additional edges when larger documents are considered. When only 5% virtual edges are included, the relative gain in performance is about 45%.",
"The relative gain in performance obtained for Word2vec is shown in Figure FIGREF15. Overall, once again decision trees obtained the highest gain in performance when short texts are considered. Similar to the analysis based on the GloVe method, the gain for kNN is low when compared to the benefit received by other methods. Here, a considerable gain for SVM in only clear for $w=2,500$ and $p=10\\%$. When large texts are considered, Naive Bayes obtained the largest gain in performance.",
"Finally, the relative gain in performance obtained for FastText is shown in Figure FIGREF16. The prominent role of virtual edges in decision tree algorithm in the classification of short texts once again is evident. Conversely, the classification of large documents using virtual edges mostly benefit the classification based on the Naive Bayes classifier. Similarly to the results observed for Glove and Word2vec, the gain in performance obtained for kNN is low compared when compared to other methods.",
"While Figures FIGREF14 – FIGREF16 show the relative behavior in the accuracy, it still interesting to observe the absolute accuracy rate obtained with the classifiers. In Table TABREF17, we show the best accuracy rate (i.e. $\\max \\Gamma _+ = \\max _p \\Gamma _+(p)$) for GloVe. We also show the average difference in performance ($\\langle \\Gamma _+ - \\Gamma _0 \\rangle $) and the total number of cases in which an improvement in performance was observed ($N_+$). $N_+$ ranges in the interval $0 \\le N_+ \\le 20$. Table TABREF17 summarizes the results obtained for $w = \\lbrace 1.0, 5.0, 10.0\\rbrace $ thousand words. Additional results for other text length are available in Tables TABREF28–TABREF30 of the Supplementary Information.",
"In very short texts, despite the low accuracy rates, an improvement can be observed in all classifiers. The best results was obtained with SVM when virtual edges were included. For $w=5,000$ words, the inclusion of new edges has no positive effect on both kNN and Naive Bayes algorithms. On the other hand, once again SVM could be improved, yielding an optimized performance. For $w=10,000$ words, SVM could not be improved. However, even without improvement it yielded the maximum accuracy rate. The Naive Bayes algorithm, in average, could be improved by a margin of about 10%.",
"The results obtained for Word2vec are summarized in Table TABREF29 of the Supplementary Information. Considering short documents ($w=1,000$ words), here the best results occurs only with the decision tree method combined with enriched networks. Differently from the GloVe approach, SVM does not yield the best results. Nonetheless, the highest accuracy across all classifiers and values of $p$ is the same. For larger documents ($w=5,000$ and $w=10,000$ words), no significant difference in performance between Word2vec and GloVe is apparent.",
"The results obtained for FastText are shown in Table TABREF18. In short texts, only kNN and Naive Bayes have their performance improved with virtual edges. However, none of the optimized results for these classifiers outperformed SVM applied to the traditional co-occurrence model. Conversely, when $w=5,000$ words, the optimized results are obtained with virtual edges in the SVM classifier. Apart from kNN, the enriched networks improved the traditional approach in all classifiers. For large chunks of texts ($w=10,000$), once again the approach based on SVM and virtual edges yielded optimized results. All classifiers benefited from the inclusion of additional edges. Remarkably, Naive Bayes improved by a margin of about $13\\%$."
],
[
"While in the previous section we focused our analysis in the traditional word co-occurrence model, here we probe if the idea of considering virtual edges can also yield optimized results in particular modifications of the framework described in the methodology. The first modification in the co-occurrence model is the use of stopwords. While in semantical application of network language modeling stopwords are disregarded, in other application it can unravel interesting linguistic patterns BIBREF10. Here we analyzed the effect of using stopwords in enriched networks. We summarize the obtained results in Table TABREF20. We only show the results obtained with SVM, as it yielded the best results in comparison to other classifiers. The accuracy rate for other classifiers is shown in the Supplementary Information.",
"The results in Table TABREF20 reveals that even when stopwords are considered in the original model, an improvement can be observed with the addition of virtual edges. However, the results show that the degree of improvement depends upon the text length. In very short texts ($w=1,000$), none of the embeddings strategy was able to improve the performance of the classification. For $w=1,500$, a minor improvement was observed with FastText: the accuracy increased from $\\Gamma _0 = 37.18\\%$ to $38.46\\%$. A larger improvement could be observed for $w=2,000$. Both Word2vec and FastText approaches allowed an increase of more than 5% in performance. A gain higher than 10% was observed for $w=2,500$ with Word2vec. For larger pieces of texts, the gain is less expressive or absent. All in all, the results show that the use of virtual edges can also benefit the network approach based on stopwords. However, no significant improvement could be observed with very short and very large documents. The comparison of all three embedding methods showed that no method performed better than the others in all cases.",
"We also investigated if more informed thresholding strategies could provide better results. While the simple global thresholding approach might not be able to represent more complex structures, we also tested a more robust approach based on the local approach proposed by Serrano et al. BIBREF42. In Table TABREF21, we summarize the results obtained with this thresholding strategies. The table shows $\\max \\Gamma _+^{(L)} / \\max \\Gamma _+^{(G)}$, where $\\Gamma _+^{(L)}$ and $\\Gamma _+^{(G)}$ are the accuracy obtained with the local and global thresholding strategy, respectively. The results were obtained with the SVM classifier, as it turned to be the most efficient classification method. We found that there is no gain in performance when the local strategy is used. In particular cases, the global strategy is considerably more efficient. This is the case e.g. when GloVe is employed in texts with $w=1,500$ words. The performance of the global strategy is $12.2\\%$ higher than the one obtained with the global method. A minor difference in performance was found in texts comprising $w=1,000$ words, yet the global strategy is still more efficient than the global one.",
"To summarize all results obtained in this study we show in Table TABREF22 the best results obtained for each text length. We also show the relative gain in performance with the proposed approach and the embedding technique yielding the best result. All optimized results were obtained with the use of stopwords, global thresholding strategy and SVM as classification algorithm. A significant gain is more evident for intermediary text lengths."
],
[
"Textual classification remains one of the most important facets of the Natural Language Processing area. Here we studied a family of classification methods, the word co-occurrence networks. Despite this apparent simplicity, this model has been useful in several practical and theoretical scenarios. We proposed a modification of the traditional model by establishing virtual edges to connect nodes that are semantically similar via word embeddings. The reasoning behind this strategy is the fact the similar words are not properly linked in the traditional model and, thus, important links might be overlooked if only adjacent words are linked.",
"Taking as reference task a stylometric problem, we showed – as a proof of principle – that the use of virtual edges might improve the discriminability of networks. When analyzing the best results for each text length, apart from very short and long texts, the proposed strategy yielded optimized results in all cases. The best classification performance was always obtained with the SVM classifier. In addition, we found an improved performance when stopwords are used in the construction of the enriched co-occurrence networks. Finally, a simple global thresholding strategy was found to be more efficient than a local approach that preserves the community structure of the networks. Because complex networks are usually combined with other strategies BIBREF8, BIBREF11, we believe that the proposed could be used in combination with other methods to improve the classification performance of other text classification tasks.",
"Our findings paves the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, we could extend this approach for general classification tasks. A systematic comparison of embeddings techniques could also be performed to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, allowing thus the use of the methodology in other networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embeddings techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, other interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements considering heterogeneous links."
],
[
"The authors acknowledge financial support from FAPESP (Grant no. 16/19069-9), CNPq-Brazil (Grant no. 304026/2018-2). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001."
],
[
"The following words were considered as stopwords in our analysis: all, just, don't, being, over, both, through, yourselves, its, before, o, don, hadn, herself, ll, had, should, to, only, won, under, ours,has, should've, haven't, do, them, his, very, you've, they, not, during, now, him, nor, wasn't, d, did, didn, this, she, each, further, won't, where, mustn't, isn't, few, because, you'd, doing, some, hasn, hasn't, are, our, ourselves, out, what, for, needn't, below, re, does, shouldn't, above, between, mustn, t, be, we, who, mightn't, doesn't, were, here, shouldn, hers, aren't, by, on, about, couldn, of, wouldn't, against, s, isn, or, own, into, yourself, down, hadn't, mightn, couldn't, wasn, your, you're, from, her, their, aren, it's, there, been, whom, too, wouldn, themselves, weren, was, until, more, himself, that, didn't, but, that'll, with, than, those, he, me, myself, ma, weren't, these, up, will, while, ain, can, theirs, my, and, ve, then, is, am, it, doesn, an, as, itself, at, have, in, any, if, again, no, when, same, how, other, which, you, shan't, shan, needn, haven, after, most, such, why, a, off i, m, yours, you'll, so, y, she's, the, having, once."
],
[
"The list of books is shown in Tables TABREF25 and TABREF26. For each book we show the respective authors (Aut.) and the following quantities: total number of words ($N_W$), total number of sentences ($N_S$), total number of paragraphs ($N_P$) and the average sentence length ($\\langle S_L \\rangle $), measured in number of words. The following authors were considered: Hector Hugh (HH), Thomas Hardy (TH), Daniel Defoe (DD), Allan Poe (AP), Bram Stoker (BS), Mark Twain (MT), Charles Dickens (CD), Pelham Grenville (PG), Charles Darwin (CD), Arthur Doyle (AD), George Eliot (GE), Jane Austen (JA), and Joseph Conrad (JC)."
],
[
"In this section we show additional results obtained for different text length. More specifically, we show the results obtained for GloVe, Word2vec and FastText when stopwords are either considered in the text or disregarded from the analysis."
]
],
"section_name": [
"Introduction",
"Related works",
"Material and Methods",
"Results and Discussion",
"Results and Discussion ::: Performance analysis",
"Results and Discussion ::: Effects of considering stopwords and local thresholding",
"Conclusion",
"Acknowledgments",
"Supplementary Information ::: Stopwords",
"Supplementary Information ::: List of books",
"Supplementary Information ::: Additional results"
]
} | {
"answers": [
{
"annotation_id": [
"c98053f61caf0057e9b860a136f79840b47e83ab"
],
"answer": [
{
"evidence": [
"Our findings paves the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, we could extend this approach for general classification tasks. A systematic comparison of embeddings techniques could also be performed to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, allowing thus the use of the methodology in other networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embeddings techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, other interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements considering heterogeneous links."
],
"extractive_spans": [
"general classification tasks",
"use of the methodology in other networked systems",
"a network could be enriched with embeddings obtained from graph embeddings techniques"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our findings paves the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, we could extend this approach for general classification tasks. A systematic comparison of embeddings techniques could also be performed to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, allowing thus the use of the methodology in other networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embeddings techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, other interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements considering heterogeneous links."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0bdf5fb318f76cc109cfa8ff324fa6c915bf9c55"
],
"answer": [
{
"evidence": [
"In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantical information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, it yields competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks.",
"While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges."
],
"extractive_spans": [
"long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach"
],
"free_form_answer": "",
"highlighted_evidence": [
"A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks.\n\nWhile the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"182529ec096a2983f73eb75bd663ceacddf6e26d"
],
"answer": [
{
"evidence": [
"While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges."
],
"extractive_spans": [],
"free_form_answer": "They use it as addition to previous model - they add new edge between words if word embeddings are similar.",
"highlighted_evidence": [
"In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"845c82e222206d736d76c979e6b88f5acd7f59b6"
],
"answer": [
{
"evidence": [
"In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantical information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, it yields competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks."
],
"extractive_spans": [
"in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window",
"connects only adjacent words in the so called word adjacency networks"
],
"free_form_answer": "",
"highlighted_evidence": [
"A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What other natural processing tasks authors think could be studied by using word embeddings?",
"What is the reason that traditional co-occurrence networks fail in establishing links between similar words whenever they appear distant in the text?",
"Do the use word embeddings alone or they replace some previous features of the model with word embeddings?",
"On what model architectures are previous co-occurence networks based?"
],
"question_id": [
"ec8043290356fcb871c2f5d752a9fe93a94c2f71",
"728c2fb445173fe117154a2a5482079caa42fe24",
"23d32666dfc29ed124f3aa4109e2527efa225fbc",
"076928bebde4dffcb404be216846d9d680310622"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"FIG. 1. Example of a enriched word co-occurrence network created for a text. In this model, after the removal of stopwords, the remaining words are linked whenever they appear in the same context. In the proposed network representation, “virtual” edges are included whenever two nodes (words) are semantically related. In this example, virtual edges are those represented by red dashed lines. Edges are included via embeddings similarity. The quantity of included edges is a parameter to be chosen.",
"FIG. 2. Gain in performance when considering additional virtual edges created using GloVe as embedding method. Each sub-panel shows the results obtained for distinct values of text length. In this case, the highest improvements in performance tends to occur in the shortest documents.",
"FIG. 3. Gain in performance when considering additional virtual edges created using Word2vec as embedding method. Each sub-panel shows the results obtained for distinct values of text length.",
"FIG. 4. Gain in performance when considering additional virtual edges created using FastText as embedding method. Each sub-panel shows the results obtained for distinct value of text length.",
"TABLE I. Statistics of performance obtained with GloVe for different text lengths. Additional results considering other text lengths are shown in the Supplementary Information. Γ0 is the the accuracy rate obtained with the traditional co-occurrence model and max Γ+ is the highest accuracy rate considering different number of additional virtual edges. 〈Γ+ − Γ0〉 is the average absolute improvement in performance, 〈Γ+/Γ0〉 is the average relative improvement in performance and N+ is the total number of cases in which an improvement in performance was observed. In total we considered 20 different cases, which corresponds to the addition of p = 1%, 2% . . . 20% additional virtual edges. The best result for each document length is highlighted.",
"TABLE II. Statistics of performance obtained with FastText for different text lengths. Additional results considering other text lengths are shown in the Supplementary Information. Γ0 is the the accuracy rate obtained with the traditional co-occurrence model and max Γ+ is the highest accuracy rate considering different number of additional virtual edges. 〈Γ+ − Γ0〉 is the average absolute improvement in performance, 〈Γ+/Γ0〉 is the average relative improvement in performance and N+ is the total number of cases in which an improvement in performance was observed. In total we considered 20 different cases, which corresponds to the addition of p = 1%, 2% . . . 20% additional virtual edges. The best result for each document length is highlighted.",
"TABLE III. Performance analysis of the adopted framework when considering stopwords in the construction of the networks. Only the best results obtained across all considered classifiers are shown. In this case, all optimized results were obtained with SVM. Γ0 corresponds to the accuracy obtained with no virtual edges and max Γ+ is the best accuracy rate obtained when including virtual edges. For each text length, the highest accuracy rate is highlighted. A full list of results for each classifier is available in the Supplementary Information.",
"TABLE IV. Comparison between the best results obtained via global and local thresholding. For each text length and embedding method, we show max Γ (L)",
"TABLE V. Summary of best results obtained in this paper. For each document length we show the highest accuracy rate obtained, the relative gain obtained with the proposed approach and the embedding method yielding the highest accuracy rate: GloVe (GL), Word2Vec (W2V) or FastText (FT). All the results below were obtained when stopwords were used and the SVM was used as classification method."
],
"file": [
"6-Figure1-1.png",
"11-Figure2-1.png",
"12-Figure3-1.png",
"13-Figure4-1.png",
"14-TableI-1.png",
"15-TableII-1.png",
"16-TableIII-1.png",
"17-TableIV-1.png",
"18-TableV-1.png"
]
} | [
"Do the use word embeddings alone or they replace some previous features of the model with word embeddings?"
] | [
[
"2003.06279-Introduction-2"
]
] | [
"They use it as addition to previous model - they add new edge between words if word embeddings are similar."
] | 178 |
2004.03744 | e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations | The recently proposed SNLI-VE corpus for recognising visual-textual entailment is a large, real-world dataset for fine-grained multimodal reasoning. However, the automatic way in which SNLI-VE has been assembled (via combining parts of two related datasets) gives rise to a large number of errors in the labels of this corpus. In this paper, we first present a data collection effort to correct the class with the highest error rate in SNLI-VE. Secondly, we re-evaluate an existing model on the corrected corpus, which we call SNLI-VE-2.0, and provide a quantitative comparison with its performance on the non-corrected corpus. Thirdly, we introduce e-SNLI-VE-2.0, which appends human-written natural language explanations to SNLI-VE-2.0. Finally, we train models that learn from these explanations at training time, and output such explanations at testing time. | {
"paragraphs": [
[
"Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that “a bunch of people display different flags”, which can be clearly derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that “people [are] running a marathon” contradicts the image with static people.",
"Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\\sim }31\\%$ errors in this class, and ${\\sim }1\\%$ for the contradiction and entailment classes.",
"Xie BIBREF1 introduced the VTE task under the name of “visual entailment”, which could imply recognizing entailment between images only. This paper prefers to follow Suzuki BIBREF4 and call it “visual-textual entailment” instead, as it involves reasoning on image-sentence pairs.",
"In this work, we first focus on decreasing the error in the neutral class by collecting new labels for the neutral pairs in the validation and test sets of SNLI-VE, using Amazon Mechanical Turk (MTurk). To ensure high quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. Secondly, we re-evaluate current image-text understanding systems, such as the bottom-up top-down attention network (BUTD) BIBREF5 on VTE using our corrected dataset, which we call SNLI-VE-2.0.",
"Thirdly, we introduce the e-SNLI-VE-2.0 corpus, which we form by appending human-written natural language explanations to SNLI-VE-2.0. These explanations were collected in e-SNLI BIBREF6 to support textual entailment for SNLI. For the same reasons as above, we re-annotate the explanations for the neutral pairs in the validation and test sets, while keeping the explanations from e-SNLI for all the rest. Finally, we extend a current VTE model with the capacity of learning from these explanations at training time and outputting an explanation for each predicted label at testing time."
],
[
"The goal of VTE is to determine if a textual hypothesis $H_{text}$ can be concluded, given the information in a premise image $P_{image}$ BIBREF1. There are three possible labels:",
"Entailment: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is true.",
"Contradiction: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is false.",
"Neutral: if neither of the earlier two are true.",
"The SNLI-VE dataset proposed by Xie BIBREF1 is the combination of Flickr30k, a popular image dataset for image captioning BIBREF2 and SNLI, an influential dataset for natural language inference BIBREF0. Textual premises from SNLI are replaced with images from Flickr30k, which is possible, as these premises were originally collected as captions of these images (see Figure FIGREF3).",
"However, in practice, a sensible proportion of labels are wrong due to the additional information contained in images. This mostly affects neutral pairs, since images may contain the necessary information to ground a hypothesis for which a simple premise caption was not sufficient. An example is shown in Figure FIGREF3. Vu BIBREF3 report that the label is wrong for ${\\sim }31\\%$ of neutral examples, based on a random subset of 171 neutral points from the test set. We also annotated 150 random neutral examples from the test set and found a similar percentage of 30.6% errors.",
"Our annotations are available at https://github.com/virginie-do/e-SNLI-VE/tree/master/annotations/gt_labels.csv"
],
[
"In this work, we only collect new labels for the neutral pairs in the validation and test sets of SNLI-VE. While the procedure of re-annotation is generic, we limit our re-annotation to these splits as a first step to verify the difference in performance that current models have when evaluated on the corrected test set as well as the effect of model selection on the corrected validation set. We leave for future work re-annotation of the training set, which would likely lead to training better VTE models. We also chose not to re-annotate entailment and contradiction classes, as their error rates are much lower ($<$1% as reported by Vu BIBREF3).",
"The main question that we want our dataset to answer is: “What is the relationship between the image premise and the sentence hypothesis?”. We provide workers with the definitions of entailment, neutral, and contradiction for image-sentence pairs and one example for each label. As shown in Figure FIGREF8, for each image-sentence pair, workers are required to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using at least half of the words that they highlighted. The collected explanations will be presented in more detail in Section SECREF20, as we focus here on the label correction. We point out that it is likely that requiring an explanation at the same time as requiring a label has a positive effect on the correctness of the label, since having to justify in writing the picked label may make workers pay an increased attention. Moreover, we implemented additional quality control measures for crowdsourced annotations, such as (a) collecting three annotations for every input, (b) injecting trusted annotations into the task for verification BIBREF7, and (c) restricting to workers with at least 90% previous approval rate.",
"First, we noticed that some instances in SNLI-VE are ambiguous. We show some examples in Figure FIGREF3 and in Appendix SECREF43. In order to have a better sense of this ambiguity, three authors of this paper independently annotated 100 random examples. All three authors agreed on 54% of the examples, exactly two authors agreed on 45%, and there was only one example on which all three authors disagreed. We identified the following three major sources of ambiguity:",
"mapping an emotion in the hypothesis to a facial expression in the image premise, e.g., “people enjoy talking”, “angry people”, “sad woman”. Even when the face is seen, it may be subjective to infer an emotion from a static image (see Figure FIGREF44 in Appendix SECREF43).",
"personal taste, e.g., “the sign is ugly”.",
"lack of consensus on terms such as “many people” or “crowded”.",
"To account for the ambiguity that the neutral labels seem to present, we considered that an image-sentence pair is too ambiguous and not suitable for a well-defined visual-textual entailment task when three different labels were assigned by the three workers. Hence, we removed these examples from the validation (5.2%) and test (5.5%) sets.",
"To ensure that our workers are correctly performing the task, we randomly inserted trusted pairs, i.e., pairs among the 54% on which all three authors agreed on the label. For each set of 10 pairs presented to a worker, one trusted pair was introduced at a random location, so that the worker, while being told that there is such a test pair, cannot figure out which one it is. Via an in-browser check, we only allow workers to submit their answers for each set of 10 instances only if the trusted pair was correctly labelled. Other in-browser checks were done for the collection of explanations, as we will describe in Section SECREF20. More details about the participants and design of the Mechanical Turk task can be found in Appendix SECREF41.",
"After collecting new labels for the neutral instances in the validation and testing sets, we randomly select and annotate 150 instances from the validation set that were neutral in SNLI-VE. Based on this sample, the error rate went down from 31% to 12% in SNLI-VE-2.0. Looking at the 18 instances where we disagreed with the label assigned by MTurk workers, we noticed that 12 were due to ambiguity in the examples, and 6 were due to workers' errors. Further investigation into potentially eliminating ambiguous instances would likely be beneficial. However, we leave it as future work, and we proceed in this work with using our corrected labels, since our error rate is significantly lower than that of the original SNLI-VE.",
"Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class."
],
[
"Since we decreased the error rate of labels in the validation and test set, we are interested in the performance of a VTE model when using the corrected sets."
],
[
"To tackle SNLI-VE, Xie BIBREF1 used EVE (for “Explainable Visual Entailment”), a modified version of the BUTD architecture, the winner of the Visual Question Answering (VQA) challenge in 2017 BIBREF5. Since the EVE implementation is not available at the time of this work, we used the original BUTD architecture, with the same hyperparameters as reported in BIBREF1.",
"BUTD contains an image processing module and a text processing module. The image processing module encodes each image region proposed by FasterRCNN BIBREF8 into a feature vector using a bottom-up attention mechanism. In the text processing module, the text hypothesis is encoded into a fixed-length vector, which is the last output of a recurrent neural network with 512-GRU units BIBREF9. To input each token into the recurrent network, we use the pretrained GloVe vectors BIBREF10. Finally, a top-down attention mechanism is used between the hypothesis vector and each of the image region vectors to obtain an attention weight for each region. The weighted sum of these image region vectors is then fused with the text hypothesis vector. The multimodal fusion is fed to a multilayer percetron (MLP) with tanh activations and a final softmax layer to classify the image-sentence relation as entailment, contradiction, or neutral.",
"Using the implementation from https://github.com/claudiogreco/coling18-gte.",
"We use the original training set from SNLI-VE. To see the impact of correcting the validation and test sets, we do the following three experiments:",
"model selection as well as testing are done on the original uncorrected SNLI-VE.",
"model selection is done on the uncorrected SNLI-VE validation set, while testing is done on the corrected SNLI-VE-2.0 test set.",
"model selection as well as testing are done on the corrected SNLI-VE-2.0.",
"Models are trained with cross-entropy loss optimized by the Adam optimizer BIBREF11 with batch size 64. The maximum number of training epochs is set to 100, with early stopping when no improvement is observed on validation accuracy for 3 epochs. The final model checkpoint selected for testing is the one with the highest validation accuracy."
],
[
"The results of the three experiments enumerated above are reported in Table TABREF18. Surprisingly, we obtained an accuracy of 73.02% on SNLI-VE using BUTD, which is better than the 71.16% reported by Xie BIBREF1 for the EVE system which meant to be an improvement over BUTD. It is also better than their reproduction of BUTD, which gave 68.90%.",
"The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant.",
"Finally, we recall that the training set has not been re-annotated, and hence approximately 31% image-sentence pairs are wrongly labelled as neutral, which likely affects the performance of the model."
],
[
"In this work, we also introduce e-SNLI-VE-2.0, a dataset combining SNLI-VE-2.0 with human-written explanations from e-SNLI BIBREF6, which were originally collected to support textual entailment. We replace the explanations for the neutral pairs in the validation and test sets with new ones collected at the same time as the new labels. We extend a current VTE model with an explanation module able to learn from these explanations at training time and generate an explanation for each predicted label at testing time."
],
[
"e-SNLI BIBREF6 is an extension of the SNLI corpus with human-annotated natural language explanations for the ground-truth labels. The authors use the explanations to train models to also generate natural language justifications for their predictions. They collected one explanation for each instance in the training set of SNLI and three explanations for each instance in the validation and testing sets.",
"We randomly selected 100 image-sentence pairs in the validation set of SNLI-VE and their corresponding explanations in e-SNLI and examined how relevant these explanations are for the VTE task. More precisely, we say that an explanation is relevant if it brings information that justifies the relationship between the image and the sentence. We restricted the count to correctly labelled inputs and found that 57% explanations were relevant. For example, the explanation for entailment in Figure FIGREF21 (“Cooking in his apartment is cooking”) was counted as irrelevant in our statistics, because it would not be the best explanation for an image-sentence pair, even though it is coherent with the textual pair. We investigate whether these explanations improve a VTE model when enhanced with a component that can process explanations at train time and output them at test time.",
"To form e-SNLI-VE-2.0, we append to SNLI-VE-2.0 the explanations from e-SNLI for all except the neutral pairs in the validation and test sets of SNLI-VE, which we replace with newly crowdsourced explanations collected at the same time as the labels for these splits (see Figure FIGREF21). Statistics of e-SNLI-VE-2.0 are shown in Appendix SECREF39, Table TABREF40."
],
[
"As mentioned before, in order to submit the annotation of an image-sentence pair, three steps must be completed: workers must choose a label, highlight words in the hypothesis, and use at least half of the highlighted words to write an explanation for their decision. The last two steps thus follow the quality control of crowd-sourced explanations introduced by Camburu BIBREF6. We also ensured that workers do not simply use a copy of the given hypothesis as explanation. We ensured all the above via in-browser checks before workers' submission. An example of collected explanations is given in Figure FIGREF21.",
"To check the success of our crowdsourcing, we manually assessed the relevance of explanations among a random subset of 100 examples. A marking scale between 0 and 1 was used, assigning a score of $k$/$n$ when $k$ required attributes were given in an explanation out of $n$. We report an 83.5% relevance of explanations from workers. We note that, since our explanations are VTE-specific, they were phrased differently from the ones in e-SNLI, with more specific mentions to the images (e.g., “There is no labcoat in the picture, just a man wearing a blue shirt.”, “There are no apples or oranges shown in the picture, only bananas.”). Therefore, it would likely be beneficial to collect new explanations for all SNLI-VE-2.0 (not only for the neutral pairs in the validation and test sets) such that models can learn to output convincing explanations for the task at hand. However, we leave this as future work, and we show in this work the results that one obtains when using the explanations from e-SNLI-VE-2.0."
],
[
"This section presents two VTE models that generate natural language explanations for their own decisions. We name them PaE-BUTD-VE and EtP-BUTD-VE, where PaE (resp. EtP) is for PredictAndExplain (resp. ExplainThenPredict), two models with similar principles introduced by Camburu BIBREF6. The first system learns to generate an explanation conditioned on the image premise, textual hypothesis, and predicted label. In contrast, the second system learns to first generate an explanation conditioned on the image premise and textual hypothesis, and subsequently makes a prediction solely based on the explanation."
],
[
"PaE-BUTD-VE is a system for solving VTE and generating natural language explanations for the predicted labels. The explanations are conditioned on the image premise, the text hypothesis, and the predicted label (ground-truth label at train time), as shown in Figure FIGREF24."
],
[
"As described in Section SECREF12, in the BUTD model, the hypothesis vector and the image vector were fused in a fixed-size feature vector f. The vector f was then given as input to an MLP which outputs a probability distribution over the three labels. In PaE-BUTD-VE, in addition to the classification layer, we add a 512-LSTM BIBREF12 decoder to generate an explanation. The decoder takes the feature vector f as initial state. Following Camburu BIBREF6, we prepend the label as a token at the beginning of the explanation to condition the explanation on the label. The ground truth label is provided at training time, whereas the predicted label is given at test time.",
"At test time, we use beam search with a beam width of 3 to decode explanations. For memory and time reduction, we replaced words that appeared less than 15 times among explanations with “#UNK#”. This strategy reduces the output vocabulary size to approximately 8.6k words."
],
[
"The training loss is a weighted combination of the classification loss and the explanation loss, both computed using softmax cross entropy: $\\mathcal {L} = \\alpha \\mathcal {L}_{label} + (1-\\alpha ) \\mathcal {L}_{explanation} \\; \\textrm {;} \\; \\alpha \\in [0,1]$."
],
[
"In this experiment, we are first interested in examining if a neural network can generate explanations at no cost for label accuracy. Therefore, only balanced accuracy on label is used for the model selection criterion. However, future work can investigate other selection criteria involving a combination between the label and explanation performances. We performed hyperparameter search on $\\alpha $, considering values between 0.2 and 0.8 with a step of 0.2. We found $\\alpha =0.4$ to produce the best validation balanced accuracy of 72.81%, while BUTD trained without explanations yielded a similar 72.58% validation balanced accuracy."
],
[
"As summarised in Table TABREF30, we obtain a test balanced accuracy for PaE-BUTD-VE of 73%, while the same model trained without explanations obtains 72.52%. This is encouraging, since it shows that one can obtain additional natural language explanations without sacrificing performance (and eventually even improving the label performance, however, future work is needed to conclude whether the difference $0.48\\%$ improvement in performance is statistically significant).",
"Camburu BIBREF6 mentioned that the BLEU score was not an appropriate measure for the quality of explanations and suggested human evaluation instead. We therefore manually scored the relevance of 100 explanations that were generated when the model predicted correct labels. We found that only 20% of explanations were relevant. We highlight that the relevance of explanations is in terms of whether the explanation reflects ground-truth reasons supporting the correct label. This is not to be confused with whether an explanation is correctly illustrating the inner working of the model, which is left as future work. It is also important to note that on a similar experimental setting, Camburu report as low as 34.68% correct explanations, training with explanations that were actually collected for their task. Lastly, the model selection criterion at validation time was the prediction balanced accuracy, which may contribute to the low quality of explanations. While we show that adding an explanation module does not harm prediction performance, more work is necessary to get models that output trustable explanations."
],
[
"When assigning a label, an explanation is naturally part of the decision-making process. This motivates the design of a system that explains itself before deciding on a label, called EtP-BUTD-VE. For this system, a first neural network is trained to generate an explanation given an image-sentence input. Separately, a second neural network, called ExplToLabel-VE, is trained to predict a label from an explanation (see Figure FIGREF32)."
],
[
"For the first network, we set $\\alpha =0$ in the training loss of the PaE-BUTD-VE model to obtain a system that only learns to generate an explanation from the image-sentence input, without label prediction. Hence, in this setting, no label is prepended before the explanation.",
"For the ExplToLabel-VE model, we use a 512-LSTM followed by an MLP with three 512-layers and ReLU activation, and softmax activation to classify the explanation between entailment, contradiction, and neutral."
],
[
"For ExplToLabel-VE, the best model is selected on balanced accuracy at validation time. For EtP-BUTD-VE, perplexity is used to select the best model parameters at validation time. It is computed between the explanations produced by the LSTM and ground truth explanations from the validation set."
],
[
"When we train ExplToLabel-VE on e-SNLI-VE-2.0, we obtain a balanced accuracy of 90.55% on the test set.",
"As reported in Table TABREF30, the overall PaE-BUTD-VE system achieves 69.40% balanced accuracy on the test set of e-SNLI-VE-2.0, which is a 3% decrease from the non-explanatory BUTD counterpart (72.52%). However, by setting $\\alpha $ to zero and selecting the model that gives the best perplexity per word at validation, the quality of explanation significantly increased, with 35% relevance, based on manual evaluation. Thus, in our model, generating better explanations involves a small sacrifice in label prediction accuracy, implying a trade-off between explanation generation and accuracy.",
"We note that there is room for improvement in our explanation generation method. For example, one can implement an attention mechanism similar to Xu BIBREF13, so that each generated word relates to a relevant part of the multimodal feature representation."
],
[
"We complement our quantitative results with a qualitative analysis of the explanations generated by our enhanced VTE systems. In Figures FIGREF36 and FIGREF37, we present examples of the predicted labels and generated explanations.",
"Figure FIGREF36 shows an example where the EtP-BUTD-VE model produces both a correct label and a relevant explanation. The label is contradiction, because in the image, the students are playing with a soccer ball and not a basketball, thus contradicting the text hypothesis. Given the composition of the generated sentence (“Students cannot be playing soccer and baseball at the same time.”), ExplToLabel-VE was able to detect a contradiction in the image-sentence input. In comparison, the explanation from e-SNLI-VE-2.0 is not correct, even if it was valid for e-SNLI when the text premise was given. This emphasizes the difficulty that we are facing with generating proper explanations when training on a noisy dataset.",
"Even when the generated explanations are irrelevant, we noticed that they are on-topic and that most of the time the mistakes come from repetitions of certain sub-phrases. For example, in Figure FIGREF37, PaE-BUTD-VE predicts the label neutral, which is correct, but the explanation contains an erroneous repetition of the n-gram “are in a car”. However, it appears that the system learns to generate a sentence in the form “Just because ...doesn't mean ...”, which is frequently found for the justification of neutral pairs in the training set. The explanation generated by EtP-BUTD-VE adopts the same structure, and the ExplToLabel-VE component correctly classifies the instance as neutral. However, even if the explanation is semantically correct, it is not relevant for the input and fails to explain the classification."
],
[
"In this paper, we first presented SNLI-VE-2.0, which corrects the neutral instances in the validation and test sets of SNLI-VE. Secondly, we re-evaluated an existing model on the corrected sets in order to update the estimate of its performance on this task. Thirdly, we introduced e-SNLI-VE-2.0, a dataset which extends SNLI-VE-2.0 with natural language explanations. Finally, we trained two types of models that learn from these explanations at training time, and output such explanations at test time, as a stepping stone in explainable artificial intelligence. Our work is a jumping-off point for both the identification and correction of SNLI-VE, as well as in the extension to explainable VTE. We hope that the community will build on our findings to create more robust as well as explainable multimodal systems."
],
[
"This work was supported by the Oxford Internet Institute, a JP Morgan PhD Fellowship 2019-2020, an Oxford-DeepMind Graduate Scholarship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, and the AXA Research Fund, as well as DFG-EXC-Nummer 2064/1-Projektnummer 390727645 and the ERC under the Horizon 2020 program (grant agreement No. 853489)."
],
[
"e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0 are shown in Table TABREF40.",
"Including text hypotheses and explanations."
],
[
"We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. We required the workers to have a previous approval rate above 90%. No restriction was put on the workers' location.",
"Each assignment consisted of a set of 10 image-sentence pairs. For each pair, the participant was asked to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using a subset of the words that they highlighted. The instructions are shown in Figure FIGREF42. Workers were also guided with three annotated examples, one for each label.",
"For each assignment of 10 questions, one trusted annotation with gold standard label was inserted at a random position, as a measure to control the quality of label annotation. Each assignment was completed by three different workers. An example of question is shown in Figure FIGREF8 in the core paper."
],
[
"Some examples in SNLI-VE were ambiguous and could find correct justifications for incompatible labels, as shown in Figures FIGREF44, FIGREF45, and FIGREF46."
]
],
"section_name": [
"Introduction",
"SNLI-VE-2.0",
"SNLI-VE-2.0 ::: Re-annotation details",
"SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment",
"SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Model.",
"SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Results.",
"Visual-Textual Entailment with Natural Language Explanations",
"Visual-Textual Entailment with Natural Language Explanations ::: e-SNLI-VE-2.0",
"Visual-Textual Entailment with Natural Language Explanations ::: Collecting Explanations",
"Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations",
"Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain",
"Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model.",
"Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Loss.",
"Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model selection.",
"Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Results.",
"Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict",
"Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model.",
"Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model selection.",
"Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Results.",
"Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Qualitative Analysis of Generated Explanations",
"Conclusion",
"Conclusion ::: Acknowledgements.",
"Appendix ::: Statistics of e-SNLI-VE-2.0",
"Appendix ::: Details of the Mechanical Turk Task",
"Appendix ::: Ambiguous Examples from SNLI-VE"
]
} | {
"answers": [
{
"annotation_id": [
"94b90e9041b91232b87bfc13b5fa5ff8f7feb0b2"
],
"answer": [
{
"evidence": [
"Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class."
],
"extractive_spans": [
"balanced accuracy, i.e., the average of the three accuracies on each class"
],
"free_form_answer": "",
"highlighted_evidence": [
"To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7069fb67777a7ce17a963cbbe4809993e8c99322"
],
"answer": [
{
"evidence": [
"We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. We required the workers to have a previous approval rate above 90%. No restriction was put on the workers' location."
],
"extractive_spans": [
"2,060 workers"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a70ac2ea8449767510dc5bb9dfa1caf4a8fa11e2"
],
"answer": [
{
"evidence": [
"e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0 are shown in Table TABREF40.",
"FLOAT SELECTED: Table 3. Summary of e-SNLI-VE-2.0 (= SNLI-VE-2.0 + explanations). Image-sentence pairs labelled as neutral in the training set have not been corrected."
],
"extractive_spans": [],
"free_form_answer": "Totally 6980 validation and test image-sentence pairs have been corrected.",
"highlighted_evidence": [
"The statistics of e-SNLI-VE-2.0 are shown in Table TABREF40.",
"FLOAT SELECTED: Table 3. Summary of e-SNLI-VE-2.0 (= SNLI-VE-2.0 + explanations). Image-sentence pairs labelled as neutral in the training set have not been corrected."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"bb7949af7c9d62e0feda5bbbaa7283147e88306b"
],
"answer": [
{
"evidence": [
"The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant."
],
"extractive_spans": [
"73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set"
],
"free_form_answer": "",
"highlighted_evidence": [
"The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0be4666fdfe22ede55d5468e3beb6e478ec60b2f"
],
"answer": [
{
"evidence": [
"Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\\sim }31\\%$ errors in this class, and ${\\sim }1\\%$ for the contradiction and entailment classes."
],
"extractive_spans": [
"neutral class"
],
"free_form_answer": "",
"highlighted_evidence": [
"As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\\sim }31\\%$ errors in this class, and ${\\sim }1\\%$ for the contradiction and entailment classes."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Is model explanation output evaluated, what metric was used?",
"How many annotators are used to write natural language explanations to SNLI-VE-2.0?",
"How many natural language explanations are human-written?",
"How much is performance difference of existing model between original and corrected corpus?",
"What is the class with highest error rate in SNLI-VE?"
],
"question_id": [
"f33236ebd6f5a9ccb9b9dbf05ac17c3724f93f91",
"66bf0d61ffc321f15e7347aaed191223f4ce4b4a",
"5dfa59c116e0ceb428efd99bab19731aa3df4bbd",
"0c557b408183630d1c6c325b5fb9ff1573661290",
"a08b5018943d4428f067c08077bfff1af3de9703"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1. Examples from SNLI-VE-2.0. (a) In red, the neutral label from SNLI-VE is wrong, since the picture clearly shows that the crowd is outdoors. We corrected it to entailment in SNLIVE-2.0. (b) In green, an ambiguous instance. There is indeed an American flag in the background but it is very hard to see, hence the ambiguity between neutral and entailment, and even contradiction if one cannot spot it. Further, it is not clear whether “they” implies the whole group or the people visible in the image.",
"Figure 2. MTurk annotation screen. (a) The label contradiction is chosen, (b) the evidence words “man”, “violin”, and “crowd” are highlighted, and (c) an explanation is written with these words.",
"Table 1. Accuracies obtained with BUTD on SNLI-VE (valoriginal, test-original) and SNLI-VE-2.0 (val-corrected, testcorrected).",
"Figure 3. Two image-sentence pairs from e-SNLI-VE-2.0 with (a) at the top, an uninformative explanation from e-SNLI, (b) at the bottom, an explanation collected from our crowdsourcing. We only collected new explanations for the neutral class (along with new labels). The SNLI premise is not included in e-SNLI-VE-2.0.",
"Figure 4. PAE-BUTD-VE. The generation of explanation is conditioned on the image premise, textual hypothesis, and predicted label.",
"Table 2. Label balanced accuracies and explanation relevance rates of our two explanatory systems on e-SNLI-VE-2.0. Comparison with their counterparts in e-SNLI [3]. Without the explanation component, the balanced accuracy on SNLI-VE-2.0 is 72.52%",
"Figure 5. Architecture of ETP-BUTD-VE. Firstly, an explanation is generated, secondly the label is predicted from the explanation. The two models (in separate dashed rectangles) are not trained jointly.",
"Figure 6. Both systems PAE-BUTD-VE and ETP-BUTD-VE predict the correct label, but only ETP-BUTD-VE generates a relevant explanation.",
"Figure 7. Both systems PAE-BUTD-VE and ETP-BUTD-VE predict the correct label, but generate irrelevant explanations.",
"Figure 8. Instructions given to workers on Mechanical Turk",
"Table 3. Summary of e-SNLI-VE-2.0 (= SNLI-VE-2.0 + explanations). Image-sentence pairs labelled as neutral in the training set have not been corrected.",
"Figure 9. Ambiguous SNLI-VE instance. Some may argue that the woman’s face betrays sadness, but the image is not quite clear. Secondly, even with better resolution, facial expression may not be a strong enough evidence to support the hypothesis about the woman’s emotional state.",
"Figure 10. Ambiguous SNLI-VE instance. The lack of consensus is on whether the man is “leering” at the woman. While it is likely the case, this interpretation in favour of entailment is subjective, and a cautious annotator would prefer to label the instance as neutral.",
"Figure 11. Ambiguous SNLI-VE instance. Some may argue that it is impossible to certify from the image that the children are kindergarten students, and label the instance as neutral. On the other hand, the furniture may be considered as typical of kindergarten, which would be sufficient evidence for entailment."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"4-Table1-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"6-Table2-1.png",
"6-Figure5-1.png",
"7-Figure6-1.png",
"7-Figure7-1.png",
"8-Figure8-1.png",
"8-Table3-1.png",
"8-Figure9-1.png",
"8-Figure10-1.png",
"9-Figure11-1.png"
]
} | [
"How many natural language explanations are human-written?"
] | [
[
"2004.03744-8-Table3-1.png",
"2004.03744-Appendix ::: Statistics of e-SNLI-VE-2.0-0"
]
] | [
"Totally 6980 validation and test image-sentence pairs have been corrected."
] | 179 |
2001.09332 | An Analysis of Word2Vec for the Italian Language | Word representation is fundamental in NLP tasks, because it is precisely the encoding of semantic closeness between words that makes it possible to teach a machine to understand text. Despite the spread of word embedding concepts, achievements in languages other than English are still few. In this work, we analyse the semantic capacity of the Word2Vec algorithm and produce an embedding for the Italian language. Parameter settings such as the number of epochs, the size of the context window and the number of negatively backpropagated samples are explored. | {
"paragraphs": [
[
"In order to make human language comprehensible to a computer, it is obviously essential to provide some word encoding. The simplest approach is the one-hot encoding, where each word is represented by a sparse vector with dimension equal to the vocabulary size. In addition to the storage need, the main problem of this representation is that any concept of word similarity is completely ignored (each vector is orthogonal and equidistant from each other). On the contrary, the understanding of natural language cannot be separated from the semantic knowledge of words, which conditions a different closeness between them. Indeed, the semantic representation of words is the basic problem of Natural Language Processing (NLP). Therefore, there is a necessary need to code words in a space that is linked to their meaning, in order to facilitate a machine in potential task of “understanding\" it. In particular, starting from the seminal work BIBREF0, words are usually represented as dense distributed vectors that preserve their uniqueness but, at the same time, are able to encode the similarities.",
"These word representations are called Word Embeddings since the words (points in a space of vocabulary size) are mapped in an embedding space of lower dimension. Supported by the distributional hypothesis BIBREF1 BIBREF2, which states that a word can be semantically characterized based on its context (i.e. the words that surround it in the sentence), in recent years many word embedding representations have been proposed (a fairly complete and updated review can be found in BIBREF3 and BIBREF4). These methods can be roughly categorized into two main classes: prediction-based models and count-based models. The former is generally linked to work on Neural Network Language Models (NNLM) and use a training algorithm that predicts the word given its local context, the latter leverage word-context statistics and co-occurrence counts in an entire corpus. The main prediction-based and count-based models are respectively Word2Vec BIBREF5 (W2V) and GloVe BIBREF6.",
"Despite the widespread use of these concepts BIBREF7 BIBREF8, few contributions exist regarding the development of a W2V that is not in English. In particular, no detailed analysis on an Italian W2V seems to be present in the literature, except for BIBREF9 and BIBREF10. However, both seem to leave out some elements of fundamental interest in the learning of the neural network, in particular relating to the number of epochs performed during learning, reducing the importance that it may have on the final result. In BIBREF9, this for example leads to the simplistic conclusion that (being able to organize with more freedom in space) the more space is given to the vectors, the better the results may be. However, the problem in complex structures is that large embedding spaces can make training too difficult.",
"In this work, by setting the size of the embedding to a commonly used average value, various parameters are analysed as the number of learning epochs changes, depending on the window sizes and the negatively backpropagated samples."
],
[
"The W2V structure consists of a simple two-level neural network (Figure FIGREF1) with one-hot vectors representing words at the input. It can be trained in two different modes, algorithmically similar, but different in concept: Continuous Bag-of-Words (CBOW) model and Skip-Gram model. While CBOW tries to predict the target words from the context, Skip-Gram instead aims to determine the context for a given target word. The two different approaches therefore modify only the way in which the inputs and outputs are to be managed, but in any case, the network does not change, and the training always takes place between single pairs of words (placed as one-hot in input and output).",
"The text is in fact divided into sentences, and for each word of a given sentence a window of words is taken from the right and from the left to define the context. The central word is coupled with each of the words forming the set of pairs for training. Depending on the fact that the central word represents the output or the input in training pairs, the CBOW and Skip-gram models are obtained respectively.",
"Regardless of whether W2V is trained to predict the context or the target word, it is used as a word embedding in a substantially different manner from the one for which it has been trained. In particular, the second matrix is totally discarded during use, since the only thing relevant to the representation is the space of the vectors generated in the intermediate level (embedding space)."
],
[
"The common words (such as “the\", “of\", etc.) carry very little information on the target word with which they are coupled, and through backpropagation they tend to have extremely small representative vectors in the embedding space. To solve both these problems the W2V algorithm implements a particular “subsampling\" BIBREF11, which acts by eliminating some words from certain sentences. Note that the elimination of a word directly from the text means that it no longer appears in the context of any of the words of the sentence and, at the same time, a number of pairs equal to (at most) twice the size of the window relating to the deleted word will also disappear from the training set.",
"In practice, each word is associated with a sort of “keeping probability\" and, when you meet that word, if this value is greater than a randomly generated value then the word will not be discarded from the text. The W2V implementation assigns this “probability\" to the generic word $w_i$ through the formula:",
"where $f(w_i)$ is the relative frequency of the word $w_i$ (namely $count(w_i)/total$), while $s$ is a sample value, typically set between $10^{-3}$ and $10^{-5}$."
],
[
"Working with one-hot pairs of words means that the size of the network must be the same at input and output, and must be equal to the size of the vocabulary. So, although very simple, the network has a considerable number of parameters to train, which lead to an excessive computational cost if we are supposed to backpropagate all the elements of the one-hot vector in output.",
"The “negative sampling\" technique BIBREF11 tries to solve this problem by modifying only a small percentage of the net weights every time. In practice, for each pair of words in the training set, the loss function is calculated only for the value 1 and for a few values 0 of the one-hot vector of the desired output. The computational cost is therefore reduced by choosing to backpropagate only $K$ words “negative\" and one positive, instead of the entire vocabulary. Typical values for negative sampling (the number of negative samples that will be backpropagated and to which therefore the only positive value will always be added), range from 2 to 20, depending on the size of the dataset.",
"The probability of selecting a negative word to backpropagate depends on its frequency, in particular through the formula:",
"Negative samples are then selected by choosing a sort of “unigram distribution\", so that the most frequent words are also the most often backpropated ones."
],
[
"The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences.",
"The text was previously preprocessed by removing the words whose absolute frequency was less than 5 and eliminating all special characters. Since it is impossible to represent every imaginable numerical value, but not wanting to eliminate the concept of “numerical representation\" linked to certain words, it was also decided to replace every number present in the text with the particular $\\langle NUM \\rangle $ token; which probably also assumes a better representation in the embedding space (not separating into the various possible values). All the words were then transformed to lowercase (to avoid a double presence) finally producing a vocabulary of $618\\,224$ words.",
"Note that among the special characters are also included punctuation marks, which therefore do not appear within the vocabulary. However, some of them (`.', `?' and `!') are later removed, as they are used to separate the sentences.",
"The Python implementation provided by Gensim was used for training the various embeddings all with size 300 and sampling parameter ($s$ in Equation DISPLAY_FORM3) set at $0.001$."
],
[
"To analyse the results we chose to use the test provided by BIBREF10, which consists of $19\\,791$ analogies divided into 19 different categories: 6 related to the “semantic\" macro-area (8915 analogies) and 13 to the “syntactic\" one (10876 analogies). All the analogies are composed by two pairs of words that share a relation, schematized with the equation: $a:a^{*}=b:b^{*}$ (e.g. “man : woman = king : queen\"); where $b^{*}$ is the word to be guessed (“queen\"), $b$ is the word coupled to it (“king\"), $a$ is the word for the components to be eliminated (“man\"), and $a^{*}$ is the word for the components to be added (“woman\").",
"The determination of the correct response was obtained both through the classical additive cosine distance (3COSADD) BIBREF5:",
"and through the multiplicative cosine distance (3COSMUL) BIBREF12:",
"where $\\epsilon =10^{-6}$ and $\\cos (x, y) = \\frac{x \\cdot y}{\\left\\Vert x\\right\\Vert \\left\\Vert y\\right\\Vert }$. The extremely low value chosen for the $\\epsilon $ is due to the desire to minimize as much as possible its impact on performance, as during the various testing phases we noticed a strange bound that is still being investigated. As usual, moreover, the representative vectors of the embedding space are previously normalized for the execution of the various tests."
],
[
"We first analysed 6 different implementations of the Skip-gram model each one trained for 20 epochs. Table TABREF10 shows the accuracy values (only on possible analogies) at the 20th epoch for the six models both using 3COSADD and 3COSMUL. It is interesting to note that the 3COSADD total metric, respect to 3COSMUL, seems to have slightly better results in the two extreme cases of limited learning (W5N5 and W10N20) and under the semantic profile. However, we should keep in mind that the semantic profile is the one best captured by the network in both cases, which is probably due to the nature of the database (mainly composed of articles and news that principally use an impersonal language). In any case, the improvements that are obtained under the syntactic profile lead to the 3COSMUL metric obtaining better overall results.",
"Figure FIGREF11 shows the trends of the total accuracy at different epochs for the various models using 3COSMUL (the trend obtained with 3COSADD is very similar). Here we can see how the use of high negative sampling can worsen performance, even causing the network to oscillate (W5N20) in order to better adapt to all the data. The choice of the negative sampling to be used should therefore be strongly linked to the choice of the window size as well as to the number of training epochs.",
"Continuing the training of the two worst models up to the 50th epoch, it is observed (Table TABREF12) that they are still able to reach the performances of the other models. The W10N20 model at the 50th epoch even proves to be better than all the other previous models, becoming the reference model for subsequent comparisons. As the various epochs change (Figure FIGREF13.a) it appears to have the same oscillatory pattern observed previously, albeit with only one oscillation given the greater window size. This model is available at: https://mlunicampania.gitlab.io/italian-word2vec/.",
"Various tests were also conducted on CBOW models, which however proved to be in general significantly lower than Skip-gram models. Figure FIGREF13.b shows, for example, the accuracy trend for a CBOW model with a window equal to 10 and negative sampling equal to 20, which on 50 epochs reaches only $37.20\\%$ of total accuracy (with 3COSMUL metric)."
],
[
"Finally, a comparison was made between the Skip-gram model W10N20 obtained at the 50th epoch and the other two W2V in Italian present in the literature (BIBREF9 and BIBREF10). The first test (Table TABREF15) was performed considering all the analogies present, and therefore evaluating as an error any analogy that was not executable (as it related to one or more words absent from the vocabulary).",
"As it can be seen, regardless of the metric used, our model has significantly better results than the other two models, both overall and within the two macro-areas. Furthermore, the other two models seem to be more subject to the metric used, perhaps due to a stabilization not yet reached for the few training epochs.",
"For a complete comparison, both models were also tested considering only the subset of the analogies in common with our model (i.e. eliminating from the test all those analogies that were not executable by one or the other model). Tables TABREF16 and TABREF17 again highlight the marked increase in performance of our model compared to both."
],
[
"In this work we have analysed the Word2Vec model for Italian Language obtaining a substantial increase in performance respect to other two models in the literature (and despite the fixed size of the embedding). These results, in addition to the number of learning epochs, are probably also due to the different phase of data pre-processing, very carefully excuted in performing a complete cleaning of the text and above all in substituting the numerical values with a single particular token. We have observed that the number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others.",
"Changing the number of epochs, in some configurations, creates an oscillatory trend, which seems to be linked to a particular interaction between the window size and the negative sampling value. In the future, thanks to the collaboration in the Laila project, we intend to expand the dataset by adding more user chats. The objective will be to verify if the use of a less formal language can improves accuracy in the syntactic macro-area."
]
],
"section_name": [
"Introduction",
"Word2Vec",
"Word2Vec ::: Sampling rate",
"Word2Vec ::: Negative sampling",
"Implementation details",
"Results",
"Results ::: Analysis of the various models",
"Results ::: Comparison with other models",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"707f16cbdcecaaf2438b2eea89bbbde0c2bf24a7"
],
"answer": [
{
"evidence": [
"The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences.",
"The text was previously preprocessed by removing the words whose absolute frequency was less than 5 and eliminating all special characters. Since it is impossible to represent every imaginable numerical value, but not wanting to eliminate the concept of “numerical representation\" linked to certain words, it was also decided to replace every number present in the text with the particular $\\langle NUM \\rangle $ token; which probably also assumes a better representation in the embedding space (not separating into the various possible values). All the words were then transformed to lowercase (to avoid a double presence) finally producing a vocabulary of $618\\,224$ words."
],
"extractive_spans": [],
"free_form_answer": "Italian Wikipedia and Google News extraction producing final vocabulary of 618224 words",
"highlighted_evidence": [
"The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences.",
"All the words were then transformed to lowercase (to avoid a double presence) finally producing a vocabulary of $618\\,224$ words."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0c2537b0a6e0a98a8aa8f16f37fe604db25039f0"
],
"answer": [
{
"evidence": [
"To analyse the results we chose to use the test provided by BIBREF10, which consists of $19\\,791$ analogies divided into 19 different categories: 6 related to the “semantic\" macro-area (8915 analogies) and 13 to the “syntactic\" one (10876 analogies). All the analogies are composed by two pairs of words that share a relation, schematized with the equation: $a:a^{*}=b:b^{*}$ (e.g. “man : woman = king : queen\"); where $b^{*}$ is the word to be guessed (“queen\"), $b$ is the word coupled to it (“king\"), $a$ is the word for the components to be eliminated (“man\"), and $a^{*}$ is the word for the components to be added (“woman\")."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To analyse the results we chose to use the test provided by BIBREF10, which consists of $19\\,791$ analogies divided into 19 different categories: 6 related to the “semantic\" macro-area (8915 analogies) and 13 to the “syntactic\" one (10876 analogies). All the analogies are composed by two pairs of words that share a relation, schematized with the equation: $a:a^{*}=b:b^{*}$ (e.g. “man : woman = king : queen\"); where $b^{*}$ is the word to be guessed (“queen\"), $b$ is the word coupled to it (“king\"), $a$ is the word for the components to be eliminated (“man\"), and $a^{*}$ is the word for the components to be added (“woman\")."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c31edf6a48d34aed1af8e1d1ad9c0590e81bf8ae"
],
"answer": [
{
"evidence": [
"Finally, a comparison was made between the Skip-gram model W10N20 obtained at the 50th epoch and the other two W2V in Italian present in the literature (BIBREF9 and BIBREF10). The first test (Table TABREF15) was performed considering all the analogies present, and therefore evaluating as an error any analogy that was not executable (as it related to one or more words absent from the vocabulary).",
"As it can be seen, regardless of the metric used, our model has significantly better results than the other two models, both overall and within the two macro-areas. Furthermore, the other two models seem to be more subject to the metric used, perhaps due to a stabilization not yet reached for the few training epochs."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Finally, a comparison was made between the Skip-gram model W10N20 obtained at the 50th epoch and the other two W2V in Italian present in the literature (BIBREF9 and BIBREF10). The first test (Table TABREF15) was performed considering all the analogies present, and therefore evaluating as an error any analogy that was not executable (as it related to one or more words absent from the vocabulary).\n\nAs it can be seen, regardless of the metric used, our model has significantly better results than the other two models, both overall and within the two macro-areas."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6b0d86450efcf7a1e5c54930fe1a0059721f5fec"
],
"answer": [
{
"evidence": [
"The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences."
],
"extractive_spans": [
"$421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5e5ade4049facac2ff1b0e51cbb5021f28d0b90f"
],
"answer": [
{
"evidence": [
"In this work we have analysed the Word2Vec model for Italian Language obtaining a substantial increase in performance respect to other two models in the literature (and despite the fixed size of the embedding). These results, in addition to the number of learning epochs, are probably also due to the different phase of data pre-processing, very carefully excuted in performing a complete cleaning of the text and above all in substituting the numerical values with a single particular token. We have observed that the number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others."
],
"extractive_spans": [
"number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others"
],
"free_form_answer": "",
"highlighted_evidence": [
"We have observed that the number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9c3bb13aff045629237781aa1e0cefadf9bc0ae1"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"26affe9ada758836d0f069da4cb25d48bcee44fb"
],
"answer": [
{
"evidence": [
"The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences."
],
"extractive_spans": [
"extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila)"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is the dataset used as input to the Word2Vec algorithm?",
"Are the word embeddings tested on a NLP task?",
"Are the word embeddings evaluated?",
"How big is dataset used to train Word2Vec for the Italian Language?",
"How does different parameter settings impact the performance and semantic capacity of resulting model?",
"Are the semantic analysis findings for Italian language similar to English language version?",
"What dataset is used for training Word2Vec in Italian language?"
],
"question_id": [
"9447ec36e397853c04dcb8f67492ca9f944dbd4b",
"12c6ca435f4fcd4ad5ea5c0d76d6ebb9d0be9177",
"32c149574edf07b1a96d7f6bc49b95081de1abd2",
"3de27c81af3030eb2d9de1df5ec9bfacdef281a9",
"cc680cb8f45aeece10823a3f8778cf215ccc8af0",
"fab4ec639a0ea1e07c547cdef1837c774ee1adb8",
"9190c56006ba84bf41246a32a3981d38adaf422c"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"italian",
"italian",
"italian",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. Representation of Word2Vec model.",
"Table 1. Accuracy at the 20th epoch for the 6 Skip-gram models analysed when the W dimension of the window and the N value of negative sampling change.",
"Fig. 2. Total accuracy using 3COSMUL at different epochs with negative sampling equal to 5, 10 and 20, where: (a) window is 5 and (b) window is 10.",
"Table 2. Accuracy at the 50th epoch for the two worst Skip-gram models.",
"Fig. 3. Total accuracy using 3COSMUL up to the 50th epoch for: (a) the two worst Skip-gram models and (b) CBOW model with W = 10 and N = 20",
"Table 3. Accuracy evaluated on the total of all the analogies",
"Table 5. Accuracy evaluated only on the analogies common to both vocabularies",
"Table 4. Accuracy evaluated only on the analogies common to both vocabularies"
],
"file": [
"3-Figure1-1.png",
"6-Table1-1.png",
"6-Figure2-1.png",
"7-Table2-1.png",
"7-Figure3-1.png",
"7-Table3-1.png",
"8-Table5-1.png",
"8-Table4-1.png"
]
} | [
"What is the dataset used as input to the Word2Vec algorithm?"
] | [
[
"2001.09332-Implementation details-0",
"2001.09332-Implementation details-1"
]
] | [
"Italian Wikipedia and Google News extraction producing final vocabulary of 618224 words"
] | 180 |
1904.07342 | Learning Twitter User Sentiments on Climate Change with Limited Labeled Data | While it is well-documented that climate change accepters and deniers have become increasingly polarized in the United States over time, there has been no large-scale examination of whether these individuals are prone to changing their opinions as a result of natural external occurrences. On the sub-population of Twitter users, we examine whether climate change sentiment changes in response to five separate natural disasters occurring in the U.S. in 2018. We begin by showing that relevant tweets can be classified with over 75% accuracy as either accepting or denying climate change when using our methodology to compensate for limited labeled data; results are robust across several machine learning models and yield geographic-level results in line with prior research. We then apply RNNs to conduct a cohort-level analysis showing that the 2018 hurricanes yielded a statistically significant increase in average tweet sentiment affirming climate change. However, this effect does not hold for the 2018 blizzard and wildfires studied, implying that Twitter users' opinions on climate change are fairly ingrained on this subset of natural disasters. | {
"paragraphs": [
[
"Much prior work has been done at the intersection of climate change and Twitter, such as tracking climate change sentiment over time BIBREF2 , finding correlations between Twitter climate change sentiment and seasonal effects BIBREF3 , and clustering Twitter users based on climate mentalities using network analysis BIBREF4 . Throughout, Twitter has been accepted as a powerful tool given the magnitude and reach of samples unattainable from standard surveys. However, the aforementioned studies are not scalable with regards to training data, do not use more recent sentiment analysis tools (such as neural nets), and do not consider unbiased comparisons pre- and post- various climate events (which would allow for a more concrete evaluation of shocks to climate change sentiment). This paper aims to address these three concerns as follows.",
"First, we show that machine learning models formed using our labeling technique can accurately predict tweet sentiment (see Section SECREF2 ). We introduce a novel method to intuit binary sentiments of large numbers of tweets for training purposes. Second, we quantify unbiased outcomes from these predicted sentiments (see Section SECREF4 ). We do this by comparing sentiments within the same cohort of Twitter users tweeting both before and after specific natural disasters; this removes bias from over-weighting Twitter users who are only compelled to compose tweets after a disaster."
],
[
"We henceforth refer to a tweet affirming climate change as a “positive\" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative\" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the “twint\" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change\" or “global warming\", and further included disaster-specific search terms (e.g., “bomb cyclone,\" “blizzard,\" “snowstorm,\" etc.). We refer to the first data batch as “influential\" tweets, and the second data batch as “event-related\" tweets.",
"The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential\" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. Here, we enforce binary labels for all tweets composed by each of the 133 influential tweeters that we identified on Twitter (87 of whom accept climate change), yielding a total of 16,360 influential tweets.",
"The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). For each disaster, we scraped tweets starting from two weeks prior to the beginning of the event, and continuing through two weeks after the end of the event. Summary statistics on the downloaded event-specific tweets are provided in Table TABREF1 . Note that the number of tweets occurring prior to the two 2018 sets of California fires are relatively small. This is because the magnitudes of these wildfires were relatively unpredictable, whereas blizzards and hurricanes are often forecast weeks in advance alongside public warnings. The first (influential tweet data) and second (event-related tweet data) batches are de-duplicated to be mutually exclusive. In Section SECREF2 , we perform geographic analysis on the event-related tweets from which we can scrape self-reported user city from Twitter user profile header cards; overall this includes 840 pre-event and 5,984 post-event tweets.",
"To create a model for predicting sentiments of event-related tweets, we divide the first data batch of influential tweets into training and validation datasets with a 90%/10% split. The training set contains 49.2% positive samples, and the validation set contains 49.0% positive samples. We form our test set by manually labeling a subset of 500 tweets from the the event-related tweets (randomly chosen across all five natural disasters), of which 50.0% are positive samples."
],
[
"Our first goal is to train a sentiment analysis model (on training and validation datasets) in order to perform classification inference on event-based tweets. We experimented with different feature extraction methods and classification models. Feature extractions examined include Tokenizer, Unigram, Bigram, 5-char-gram, and td-idf methods. Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). Model accuracies are reported in Table FIGREF3 .",
"The RNN pre-trained using GloVe word embeddings BIBREF6 achieved the higest test accuracy. We pass tokenized features into the embedding layer, followed by an LSTM BIBREF7 with dropout and ReLU activation, and a dense layer with sigmoid activation. We apply an Adam optimizer on the binary crossentropy loss. Implementing this simple, one-layer LSTM allows us to surpass the other traditional machine learning classification methods. Note the 13-point spread between validation and test accuracies achieved. Ideally, the training, validation, and test datasets have the same underlying distribution of tweet sentiments; the assumption made with our labeling technique is that the influential accounts chosen are representative of all Twitter accounts. Critically, when choosing the influential Twitter users who believe in climate change, we highlighted primarily politicians or news sources (i.e., verifiably affirming or denying climate change); these tweets rarely make spelling errors or use sarcasm. Due to this skew, the model yields a high rate of false negatives. It is likely that we could lessen the gap between validation and test accuracies by finding more “real\" Twitter users who are climate change believers, e.g. by using the methodology found in BIBREF4 ."
],
[
"Our second goal is to compare the mean values of users' binary sentiments both pre- and post- each natural disaster event. Applying our highest-performing RNN to event-related tweets yields the following breakdown of positive tweets: Bomb Cyclone (34.7%), Mendocino Wildfire (80.4%), Hurricane Florence (57.2%), Hurricane Michael (57.6%), and Camp Fire (70.1%). As sanity checks, we examine the predicted sentiments on a subset with geographic user information and compare results to the prior literature.",
"In Figure FIGREF3 , we map 4-clustering results on three dimensions: predicted sentiments, latitude, and longitude. The clusters correspond to four major regions of the U.S.: the Northeast (green), Southeast (yellow), Midwest (blue), and West Coast (purple); centroids are designated by crosses. Average sentiments within each cluster confirm prior knowledge BIBREF1 : the Southeast and Midwest have lower average sentiments ( INLINEFORM0 and INLINEFORM1 , respectively) than the West Coast and Northeast (0.22 and 0.09, respectively). In Figure FIGREF5 , we plot predicted sentiment averaged by U.S. city of event-related tweeters. The majority of positive tweets emanate from traditionally liberal hubs (e.g. San Francisco, Los Angeles, Austin), while most negative tweets come from the Philadelphia metropolitan area. These regions aside, rural areas tended to see more negative sentiment tweeters post-event, whereas urban regions saw more positive sentiment tweeters; however, overall average climate change sentiment pre- and post-event was relatively stable geographically. This map further confirms findings that coastal cities tend to be more aware of climate change BIBREF8 .",
"From these mapping exercises, we claim that our “influential tweet\" labeling is reasonable. We now discuss our final method on outcomes: comparing average Twitter sentiment pre-event to post-event. In Figure FIGREF8 , we display these metrics in two ways: first, as an overall average of tweet binary sentiment, and second, as a within-cohort average of tweet sentiment for the subset of tweets by users who tweeted both before and after the event (hence minimizing awareness bias). We use Student's t-test to calculate the significance of mean sentiment differences pre- and post-event (see Section SECREF4 ). Note that we perform these mean comparisons on all event-related data, since the low number of geo-tagged samples would produce an underpowered study."
],
[
"In Figure FIGREF8 , we see that overall sentiment averages rarely show movement post-event: that is, only Hurricane Florence shows a significant difference in average tweet sentiment pre- and post-event at the 1% level, corresponding to a 0.12 point decrease in positive climate change sentiment. However, controlling for the same group of users tells a different story: both Hurricane Florence and Hurricane Michael have significant tweet sentiment average differences pre- and post-event at the 1% level. Within-cohort, Hurricane Florence sees an increase in positive climate change sentiment by 0.21 points, which is contrary to the overall average change (the latter being likely biased since an influx of climate change deniers are likely to tweet about hurricanes only after the event). Hurricane Michael sees an increase in average tweet sentiment of 0.11 points, which reverses the direction of tweets from mostly negative pre-event to mostly positive post-event. Likely due to similar bias reasons, the Mendocino wildfires in California see a 0.06 point decrease in overall sentiment post-event, but a 0.09 point increase in within-cohort sentiment. Methodologically, we assert that overall averages are not robust results to use in sentiment analyses.",
"We now comment on the two events yielding similar results between overall and within-cohort comparisons. Most tweets regarding the Bomb Cyclone have negative sentiment, though sentiment increases by 0.02 and 0.04 points post-event for overall and within-cohort averages, respectively. Meanwhile, the California Camp Fires yield a 0.11 and 0.27 point sentiment decline in overall and within-cohort averages, respectively. This large difference in sentiment change can be attributed to two factors: first, the number of tweets made regarding wildfires prior to the (usually unexpected) event is quite low, so within-cohort users tend to have more polarized climate change beliefs. Second, the root cause of the Camp Fires was quickly linked to PG&E, bolstering claims that climate change had nothing to do with the rapid spread of fire; hence within-cohort users were less vocally positive regarding climate change post-event.",
"There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting\" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters."
]
],
"section_name": [
"Background",
"Data",
"Labeling Methodology",
"Outcome Analysis",
"Results & Discussion"
]
} | {
"answers": [
{
"annotation_id": [
"344fc2c81c2b0173e51bafa2f8a8edbca4e1be14"
],
"answer": [
{
"evidence": [
"We henceforth refer to a tweet affirming climate change as a “positive\" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative\" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the “twint\" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change\" or “global warming\", and further included disaster-specific search terms (e.g., “bomb cyclone,\" “blizzard,\" “snowstorm,\" etc.). We refer to the first data batch as “influential\" tweets, and the second data batch as “event-related\" tweets."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"All data were downloaded from Twitter in two separate batches using the “twint\" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change\" or “global warming\", and further included disaster-specific search terms (e.g., “bomb cyclone,\" “blizzard,\" “snowstorm,\" etc.). "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"0c3efc4450d194483719636dbab54fb1730333cb"
],
"answer": [
{
"evidence": [
"There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting\" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting\" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"a146205ea460d7b1fdd248ced2a5504d3f06a708"
],
"answer": [
{
"evidence": [
"Our first goal is to train a sentiment analysis model (on training and validation datasets) in order to perform classification inference on event-based tweets. We experimented with different feature extraction methods and classification models. Feature extractions examined include Tokenizer, Unigram, Bigram, 5-char-gram, and td-idf methods. Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). Model accuracies are reported in Table FIGREF3 ."
],
"extractive_spans": [
"RNNs",
"CNNs",
"Naive Bayes with Laplace Smoothing",
"k-clustering",
"SVM with linear kernel"
],
"free_form_answer": "",
"highlighted_evidence": [
" Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"7444fcf3eb94af572135d50d73c7ab6e1ff84c3c"
],
"answer": [
{
"evidence": [
"The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential\" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. Here, we enforce binary labels for all tweets composed by each of the 133 influential tweeters that we identified on Twitter (87 of whom accept climate change), yielding a total of 16,360 influential tweets."
],
"extractive_spans": [],
"free_form_answer": "Influential tweeters ( who they define as individuals certain to have a classifiable sentiment regarding the topic at hand) is used to label tweets in bulk in the absence of manually-labeled tweets.",
"highlighted_evidence": [
"The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential\" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"602dcef9005c4c448d3d33589fb21b705d9eb2b2"
],
"answer": [
{
"evidence": [
"The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). For each disaster, we scraped tweets starting from two weeks prior to the beginning of the event, and continuing through two weeks after the end of the event. Summary statistics on the downloaded event-specific tweets are provided in Table TABREF1 . Note that the number of tweets occurring prior to the two 2018 sets of California fires are relatively small. This is because the magnitudes of these wildfires were relatively unpredictable, whereas blizzards and hurricanes are often forecast weeks in advance alongside public warnings. The first (influential tweet data) and second (event-related tweet data) batches are de-duplicated to be mutually exclusive. In Section SECREF2 , we perform geographic analysis on the event-related tweets from which we can scrape self-reported user city from Twitter user profile header cards; overall this includes 840 pre-event and 5,984 post-event tweets."
],
"extractive_spans": [
"the East Coast Bomb Cyclone",
" the Mendocino, California wildfires",
"Hurricane Florence",
"Hurricane Michael",
"the California Camp Fires"
],
"free_form_answer": "",
"highlighted_evidence": [
"The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they report results only on English data?",
"Do the authors mention any confounds to their study?",
"Which machine learning models are used?",
"What methodology is used to compensate for limited labelled data?",
"Which five natural disasters were examined?"
],
"question_id": [
"16fa6896cf4597154363a6c9a98deb49fffef15f",
"0f60864503ecfd5b048258e21d548ab5e5e81772",
"fe578842021ccfc295209a28cf2275ca18f8d155",
"00ef9cc1d1d60f875969094bb246be529373cb1d",
"279b633b90fa2fd69e84726090fadb42ebdf4c02"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Tweets collected for each U.S. 2018 natural disaster",
"Figure 1: Four-clustering on sentiment, latitude, and longitude",
"Table 2: Selected binary sentiment analysis accuracies",
"Figure 2: Pre-event (left) and post-event (right) average climate sentiment aggregated over five U.S. natural disasters in 2018",
"Figure 3: Comparisons of overall (left) and within-cohort (right) average sentiments for tweets occurring two weeks before or after U.S. natural disasters occurring in 2018"
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"3-Table2-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png"
]
} | [
"What methodology is used to compensate for limited labelled data?"
] | [
[
"1904.07342-Data-1"
]
] | [
"Influential tweeters ( who they define as individuals certain to have a classifiable sentiment regarding the topic at hand) is used to label tweets in bulk in the absence of manually-labeled tweets."
] | 182 |
2001.06888 | A multimodal deep learning approach for named entity recognition from social media | Named Entity Recognition (NER) from social media posts is a challenging task. User generated content which forms the nature of social media, is noisy and contains grammatical and linguistic errors. This noisy content makes it much harder for tasks such as named entity recognition. However some applications like automatic journalism or information retrieval from social media, require more information about entities mentioned in groups of social media posts. Conventional methods applied to structured and well typed documents provide acceptable results while compared to new user generated media, these methods are not satisfactory. One valuable piece of information about an entity is the related image to the text. Combining this multimodal data reduces ambiguity and provides wider information about the entities mentioned. In order to address this issue, we propose a novel deep learning approach utilizing multimodal deep learning. Our solution is able to provide more accurate results on named entity recognition task. Experimental results, namely the precision, recall and F1 score metrics show the superiority of our work compared to other state-of-the-art NER solutions. | {
"paragraphs": [
[
"A common social media delivery system such as Twitter supports various media types like video, image and text. This media allows users to share their short posts called Tweets. Users are able to share their tweets with other users that are usually following the source user. Hovewer there are rules to protect the privacy of users from unauthorized access to their timeline BIBREF0. The very nature of user interactions in Twitter micro-blogging social media is oriented towards their daily life, first witness news-reporting and engaging in various events (sports, political stands etc.). According to studies, news in twitter is propagated and reported faster than conventional news media BIBREF1. Thus, extracting first hand news and entities occurring in this fast and versatile online media gives valuable information. However, abridged and noisy content of Tweets makes it even more difficult and challenging for tasks such as named entity recognition and information retrieval BIBREF2.",
"The task of tracking and recovering information from social media posts is a concise definition of information retrieval in social media BIBREF3, BIBREF4. However many challenges are blocking useful solutions to this issue, namely, the noisy nature of user generated content and the perplexity of words used in short posts. Sometimes different entities are called exactly the same, for example \"Micheal Jordan\" refers to a basketball player and also a computer scientist in the field of artificial intelligence. The only thing that divides both of these is the context in which entity appeared. If the context refers to something related to AI, the reader can conclude \"Micheal Jordan\" is the scientist, and if the context is refers to sports and basketball then he is the basketball player. The task of distinguishing between different named entities that appear to have the same textual appearance is called named entity disambiguation. There is more useful data on the subject rather than on plain text. For example images and visual data are more descriptive than just text for tasks such as named entity recognition and disambiguation BIBREF5 while some methods only use the textual data BIBREF6.",
"The provided extra information is closely related to the textual data. As a clear example, figure FIGREF1 shows a tweet containing an image. The combination of these multimodal data in order to achieve better performance in NLP related tasks is a promising alternative explored recently.",
"An NLP task such as named entity recognition in social media is a most challenging task because users tend to invent, mistype and epitomize words. Sometimes these words correspond to named entities which makes the recognition task even more difficult BIBREF7. In some cases, the context that carries the entity (surrounding words and related image) is more descriptive than the entity word presentation BIBREF8.",
"To find a solution to the issues at hand, and keeping multimodal data in mind, recognition of named entities from social media has become a research interest which utilizes image compared to NER task in a conventional text. Researchers in this field have tried to propose multimodal architectures based on deep neural networks with multimodal input that are capable of combining text and image BIBREF9, BIBREF8, BIBREF10.",
"In this paper we draw a better solution in terms of performance by proposing a new novel method called CWI (Character-Word-Image model). We used multimodal deep neural network to overcome the NER task in micro-blogging social media.",
"The rest of the paper is organized as follows: section SECREF2 provides an insight view of previous methods; section SECREF3 describes the method we propose; section SECREF4 shows experimental evaluation and test results; finally, section SECREF5 concludes the whole article."
],
[
"Many algorithms and methods have been proposed to detect, classify or extract information from single type of data such as audio, text, image etc. However, in the case of social media, data comes in a variety of types such as text, image, video or audio in a bounded style. Most of the time, it is very common to caption a video or image with textual information. This information about the video or image can refer to a person, location etc. From a multimodal learning perspective, jointly computing such data is considered to be more valuable in terms of representation and evaluation. Named entity recognition task, on the other hand, is the task of recognizing named entities from a sentence or group of sentences in a document format.",
"Named entity is formally defined as a word or phrase that clearly identifies an item from set of other similar items BIBREF11, BIBREF12. Equation DISPLAY_FORM2 expresses a sequence of tokens.",
"From this equation, the NER task is defined as recognition of tokens that correspond to interesting items. These items from natural language processing perspective are known as named entity categories; BIO2 proposes four major categories, namely, organization, person, location and miscellaneous BIBREF13. From the biomedical domain, gene, protein, drug and disease names are known as named entities BIBREF14, BIBREF15. Output of NER task is formulated in . $I_s\\in [1,N]$ and $I_e\\in [1,N]$ is the start and end indices of each named entity and $t$ is named entity type BIBREF16.",
"BIO2 tagging for named entity recognition is defined in equation . Table TABREF3 shows BIO2 tags and their respective meanings; B and I indicate beginning and inside of the entity respectively, while O shows the outside of it. Even though many tagging standards have been proposed for NER task, BIO is the foremost accepted by many real world applications BIBREF17.",
"A named entity recognizer gets $s$ as input and provides entity tags for each token. This sequential process requires information from the whole sentence rather than only tokens and for that reason, it is also considered to be a sequence tagging problem. Another analogous problem to this issue is part of speech tagging and some methods are capable of doing both. However, in cases where noise is present and input sequence has linguistic typos, many methods fail to overcome the problem. As an example, consider a sequence of tokens where a new token invented by social media users gets trended. This trending new word is misspelled and is used in a sequence along with other tokens in which the whole sequence does not follow known linguistic grammar. For this special case, classical methods and those which use engineered features do not perform well.",
"Using the sequence $s$ itself or adding more information to it divides two approaches to overcome this problem: unimodal and multimodal.",
"Although many approaches for NER have been proposed and reviewing them is not in the scope of this article, we focus on foremost analogues classical and deep learning approaches for named entity recognition in two subsections. In subsection SECREF4 unimodal approaches for named entity recognition are presented while in subsection SECREF7 emerging multimodal solutions are described."
],
[
"The recognition of named entities from only textual data (unimodal learning approach) is a well studied and explored research criteria. For a prominent example of this category, the Stanford NER is a widely used baseline for many applications BIBREF18. The incorporation of non-local information in information extraction is proposed by the authors using of Gibbs sampling. The conditional random field (CRF) approach used in this article, creates a chain of cliques, where each clique represents the probabilistic relationship between two adjacent states. Also, Viterbi algorithm has been used to infer the most likely state in the CRF output sequence. Equation DISPLAY_FORM5 shows the proposed CRF method.",
"where $\\phi $ is the potential function.",
"CRF finds the most probable likelihood by modeling the input sequence of tokens $s$ as a normalized product of feature functions. In a simpler explanation, CRF outputs the most probable tags that follow each other. For example it is more likely to have an I-PER, O or any other that that starts with B- after B-PER rather than encountering tags that start with I-.",
"T-NER is another approach that is specifically aimed to answer NER task in twitter BIBREF19. A set of algorithms in their original work have been published to answer tasks such as POS (part of speech tagging), named entity segmentation and NER. Labeled LDA has been used by the authors in order to outperform baseline in BIBREF20 for NER task. Their approach strongly relies on dictionary, contextual and orthographic features.",
"Deep learning techniques use distributed word or character representation rather than raw one-hot vectors. Most of this research in NLP field use pretrained word embeddings such as Word2Vec BIBREF21, GloVe BIBREF22 or fastText BIBREF23. These low dimensional real valued dense vectors have proved to provide better representation for words compared to one-hot vector or other space vector models.",
"The combination of word embedding along with bidirectional long-short term memory (LSTM) neural networks are examined in BIBREF24. The authors also propose to add a CRF layer at the end of their neural network architecture in order to preserve output tag relativity. Utilization of recurrent neural networks (RNN) provides better sequential modeling over data. However, only using sequential information does not result in major improvements because these networks tend to rely on the most recent tokens. Instead of using RNN, authors used LSTM. The long and short term memory capability of these networks helps them to keep in memory what is important and forget what is not necessary to remember. Equation DISPLAY_FORM6 formulates forget-gate of an LSTM neural network, eq. shows input-gate, eq. notes output-gate and eq. presents memory-cell. Finally, eq. shows the hidden part of an LSTM unit BIBREF25, BIBREF26.",
"for all these equations, $\\sigma $ is activation function (sigmoid or tanh are commonly used for LSTM) and $\\circ $ is concatenation operation. $W$ and $U$ are weights and $b$ is the bias which should be learned over training process.",
"LSTM is useful for capturing the relation of tokens in a forward sequential form, however in natural language processing tasks, it is required to know the upcoming token. To overcome this problem, the authors have used a backward and forward LSTM combining output of both.",
"In a different approach, character embedding followed by a convolution layer is proposed in BIBREF27 for sequence labeling. The utilized architecture is followed by a bidirectional LSTM layer that ends in a CRF layer. Character embedding is a useful technique that the authors tried to use it in a combination with word embedding. Character embedding with the use of convolution as feature extractor from character level, captures relations between characters that form a word and reduces spelling noise. It also helps the model to have an embedding when pretrained word embedding is empty or initialized as random for new words. These words are encountered when they were not present in the training set, thus, in the test phase, model fails to provide a useful embedding."
],
[
"Multimodal learning has become an emerging research interest and with the rise of deep learning techniques, it has become more visible in different research areas ranging from medical imaging to image segmentation and natural language processing BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF9, BIBREF37, BIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, BIBREF44, BIBREF45. On the other hand, very little research has been focused on the extraction of named entities with joint image and textual data concerning short and noisy content BIBREF46, BIBREF47, BIBREF9, BIBREF8 while several studies have been explored in textual named entity recognition using neural models BIBREF48, BIBREF49, BIBREF24, BIBREF50, BIBREF27, BIBREF51, BIBREF10, BIBREF52.",
"State-of-the-art methods have shown acceptable evaluation on structured and well formatted short texts. Techniques based on deep learning such as utilization of convolutional neural networks BIBREF52, BIBREF49, recurrent neural networks BIBREF50 and long short term memory neural networks BIBREF27, BIBREF24 are aimed to solve NER problem.",
"The multimodal named entity recognizers can be categorized in two categories based on the tasks at hand, one tries to improve NER task with utilization of visual data BIBREF46, BIBREF8, BIBREF47, and the other tries to give further information about the task at hand such as disambiguation of named entities BIBREF9. We will refer to both of these tasks as MNER. To have a better understanding of MNER, equation DISPLAY_FORM9 formulates the available multimodal data while equations and are true for this task.",
"$i$ refers to image and the rest goes same as equation DISPLAY_FORM2 for word token sequence.",
"In BIBREF47 pioneering research was conducted using feature extraction from both image and textual data. The extracted features were fed to decision trees in order to output the named entity classes. Researchers have used multiple datasets ranging from buildings to human face images to train their image feature extractor (object detector and k-means clustering) and a text classifier has been trained on texts acquired from DBPedia.",
"Researchers in BIBREF46 proposed a MNER model with regards to triplet embedding of words, characters and image. Modality attention applied to this triplet indicates the importance of each embedding and their impact on the output while reducing the impact of irrelevant modals. Modality attention layer is applied to all embedding vectors for each modal, however the investigation of fine-grained attention mechanism is still unclear BIBREF53. The proposed method with Inception feature extraction BIBREF54 and pretrained GloVe word vectors shows good results on the dataset that the authors aggregated from Snapchat. This method shows around 0.5 for precision and F-measure for four entity types (person, location, organization and misc) while for segmentation tasks (distinguishing between a named entity and a non-named entity) it shows around 0.7 for the metrics mentioned.",
"An adaptive co-attention neural network with four generations are proposed in BIBREF8. The adaptive co-attention part is similar to the multimodal attention proposed in BIBREF46 that enabled the authors to have better results over the dataset they collected from Twitter. In their main proposal, convolutional layers are used for word representation, BiLSTM is utilized to combine word and character embeddings and an attention layer combines the best of the triplet (word, character and image features). VGG-Net16 BIBREF55 is used as a feature extractor for image while the impact of other deep image feature extractors on the proposed solution is unclear, however the results show its superiority over related unimodal methods."
],
[
"In the present work, we propose a new multimodal deep approach (CWI) that is able to handle noise by co-learning semantics from three modalities, character, word and image. Our method is composed of three parts, convolutional character embedding, joint word embedding (fastText-GloVe) and InceptionV3 image feature extraction BIBREF54, BIBREF23, BIBREF22. Figure FIGREF11 shows CWI architecture in more detail.",
"Character Feature Extraction shown in the left part of figure FIGREF11 is a composition of six layers. Each sequence of words from a single tweet, $\\langle w_1, w_2, \\dots , w_n \\rangle $ is converted to a sequence of character representation $\\langle [c_{(0,0)}, c_{(0,1)}, \\dots , c_{(0,k)}], \\dots , [c_{(n,0)}, c_{(n,1)}, \\dots , c_{(n,k)}] \\rangle $ and in order to apply one dimensional convolution, it is required to be in a fixed length. $k$ shows the fixed length of the character sequence representing each word. Rather than using the one-hot representation of characters, a randomly initialized (uniform distribution) embedding layer is used. The first three convolution layers are followed by a one dimensional pooling layer. In each layer, kernel size is increased incrementally from 2 to 4 while the number of kernels are doubled starting from 16. Just like the first part, the second segment of this feature extractor uses three layers but with slight changes. Kernel size is reduced starting from 4 to 2 and the number of kernels is halved starting from 64. In this part, $\\otimes $ sign shows concatenation operation. TD + GN + SineRelu note targeted dropout, group normalization and sine-relu BIBREF56, BIBREF57, BIBREF58. These layers prevent the character feature extractor from overfitting. Equation DISPLAY_FORM12 defines SineRelu activation function which is slightly different from Relu.",
"Instead of using zero in the second part of this equation, $\\epsilon (\\sin {x}-\\cos {x})$ has been used for negative inputs, $\\epsilon $ is a hyperparameter that controls the amplitude of $\\sin {x}-\\cos {x}$ wave. This slight change prevents network from having dead-neurons and unlike Relu, it is differentiable everywhere. On the other hand, it has been proven that using GroupNormalization provides better results than BatchNormalization on various tasks BIBREF57.",
"However the dropout has major improvement on the neural network as an overfitting prevention technique BIBREF59, in our setup the TargtedDropout shows to provide better results. TargetedDropout randomly drops neurons whose output is over a threshold.",
"Word Feature Extraction is presented in the middle part of figure FIGREF11. Joint embeddings from pretrained word vectors of GloVe BIBREF22 and fastText BIBREF23 by concatenation operation results in 500 dimensional word embedding. In order to have forward and backward information for each hidden layer, we used a bidirectional long-short term memory BIBREF25, BIBREF26. For the words which were not in the pretrained tokens, we used a random initialization (uniform initialization) between -0.25 and 0.25 at each embedding. The result of this phase is extracted features for each word.",
"Image Feature Extraction is shown in the right part of figure FIGREF11. For this part, we have used InceptionV3 pretrained on ImageNet BIBREF60. Many models were available as first part of image feature extraction, however the main reason we used InceptionV3 as feature extractor backbone is better performance of it on ImageNet and the results obtained by this particular model were slightly better compared to others.",
"Instead of using headless version of InceptionV3 for image feature extraction, we have used the full model which outputs the 1000 classes of ImageNet. Each of these classes resembles an item, the set of these items can present a person, location or anything that is identified as a whole. To have better features extracted from the image, we have used an embedding layer. In other words, we looked at the top 5 extracted probabilities as words that is shown in eq. DISPLAY_FORM16; Based on our assumption, these five words present textual keywords related to the image and combination of these words should provide useful information about the objects in visual data. An LSTM unit has been used to output the final image features. These combined embeddings from the most probable items in image are the key to have extra information from a social media post.",
"where $IW$ is image-word vector, $x$ is output of InceptionV3 and $i$ is the image. $x$ is in domain of [0,1] and $\\sum \\limits _{\\forall k\\in x}k=1$ holds true, while $\\sum \\limits _{\\forall k\\in IW}k\\le 1$.",
"Multimodal Fusion in our work is presented as concatenation of three feature sets extracted from words, characters and images. Unlike previous methods, our original work does not include an attention layer to remove noisy features. Instead, we stacked LSTM units from word and image feature extractors to have better results. The last layer presented at the top right side of figure FIGREF11 shows this part. In our second proposed method, we have used attention layer applied to this triplet. Our proposed attention mechanism is able to detect on which modality to increase or decrease focus. Equations DISPLAY_FORM17, and show attention mechanism related to second proposed model.",
"Conditional Random Field is the last layer in our setup which forms the final output. The same implementation explained in eq. DISPLAY_FORM5 is used for our method."
],
[
"The present section provides evaluation results of our model against baselines. Before diving into our results, a brief description of dataset and its statistics are provided."
],
[
"In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets."
],
[
"In order to obtain the best results in tab. TABREF20 for our first model (CWI), we have used the following setup in tables TABREF22, TABREF23, TABREF24 and TABREF25. For the second proposed method, the same parameter settings have been used with an additional attention layer. This additional layer has been added after layer 31 in table TABREF25 and before the final CRF layer, indexed as 32. $Adam$ optimizer with $8\\times 10^{-5}$ has been used in training phase with 10 epochs."
],
[
"Table TABREF20 presents evaluation results of our proposed models. Compared to other state of the art methods, our first proposed model shows $1\\%$ improvement on f1 score. The effect of different word embedding sizes on our proposed method is presented in TABREF26. Sensitivity to TD+SineRelu+GN is presented in tab. TABREF28."
],
[
"In this article we have proposed a novel named entity recognizer based on multimodal deep learning. In our proposed model, we have used a new architecture in character feature extraction that has helped our model to overcome the issue of noise. Instead of using direct image features from near last layers of image feature extractors such as Inception, we have used the direct output of the last layer. This last layer which is 1000 classes of diverse objects that is result of InceptionV3 trained on ImageNet dataset. We used top 5 classes out of these and converted them to one-hot vectors. The resulting image feature embedding out of these high probability one-hot vectors helped our model to overcome the issue of noise in images posted by social media users. Evaluation results of our proposed model compared to other state of the art methods show its superiority to these methods in overall while in two categories (Person and Miscellaneous) our model outperformed others."
]
],
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Unimodal Named Entity Recognition",
"Related Work ::: Multimodal Named Entity Recognition",
"The Proposed Approach",
"Experimental Evaluation",
"Experimental Evaluation ::: Dataset",
"Experimental Evaluation ::: Experimental Setup",
"Experimental Evaluation ::: Evaluation Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0c5be00c50cc9fa7c1921c32aca6b2cb254dd249"
],
"answer": [
{
"evidence": [
"In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets."
],
"extractive_spans": [
"twitter "
],
"free_form_answer": "",
"highlighted_evidence": [
"In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets.\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"d8f8f58e892ccf7370b6a3224007cc8240468fdf"
],
"answer": [
{
"evidence": [
"In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets."
],
"extractive_spans": [
"BIBREF8 a refined collection of tweets gathered from twitter"
],
"free_form_answer": "",
"highlighted_evidence": [
"In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets.\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"97c19183567ea4de915809602b70217ba8fb19bb"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Evaluation results of different approaches compared to ours"
],
"extractive_spans": [],
"free_form_answer": "Stanford NER, BiLSTM+CRF, LSTM+CNN+CRF, T-NER and BiLSTM+CNN+Co-Attention",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Evaluation results of different approaches compared to ours"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"Which social media platform is explored?",
"What datasets did they use?",
"What are the baseline state of the art models?"
],
"question_id": [
"0106bd9d54e2f343cc5f30bb09a5dbdd171e964b",
"e015d033d4ee1c83fe6f192d3310fb820354a553",
"8a871b136ccef78391922377f89491c923a77730"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"social media",
"social media",
"social media"
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 1: A Tweet containing Image and Text: Geoffrey Hinton and Demis Hassabis are referred in text and respective images are provided with Tweet",
"Table 1: BIO Tags and their respective meaning",
"Figure 2: Proposed CWI Model: Character (left), Word (middle) and Image (right) feature extractors combined by bidirectional long-short term memory and the conditional random field at the end",
"Table 2: Statistics of named entity types in train, development and test sets [9]",
"Table 3: Evaluation results of different approaches compared to ours",
"Table 6: Implementation details of our model (CWI): Image Feature Extractor",
"Table 8: Effect of different word embedding sizes on our proposed model"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"6-Figure2-1.png",
"8-Table2-1.png",
"8-Table3-1.png",
"9-Table6-1.png",
"10-Table8-1.png"
]
} | [
"What are the baseline state of the art models?"
] | [
[
"2001.06888-8-Table3-1.png"
]
] | [
"Stanford NER, BiLSTM+CRF, LSTM+CNN+CRF, T-NER and BiLSTM+CNN+Co-Attention"
] | 183 |
1604.05781 | What we write about when we write about causality: Features of causal statements across large-scale social discourse | Identifying and communicating relationships between causes and effects is important for understanding our world, but is affected by language structure, cognitive and emotional biases, and the properties of the communication medium. Despite the increasing importance of social media, much remains unknown about causal statements made online. To study real-world causal attribution, we extract a large-scale corpus of causal statements made on the Twitter social network platform as well as a comparable random control corpus. We compare causal and control statements using statistical language and sentiment analysis tools. We find that causal statements have a number of significant lexical and grammatical differences compared with controls and tend to be more negative in sentiment than controls. Causal statements made online tend to focus on news and current events, medicine and health, or interpersonal relationships, as shown by topic models. By quantifying the features and potential biases of causality communication, this study improves our understanding of the accuracy of information and opinions found online. | {
"paragraphs": [
[
"Social media and online social networks now provide vast amounts of data on human online discourse and other activities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . With so much communication taking place online and with social media being capable of hosting powerful misinformation campaigns BIBREF7 such as those claiming vaccines cause autism BIBREF8 , BIBREF9 , it is more important than ever to better understand the discourse of causality and the interplay between online communication and the statement of cause and effect.",
"Causal inference is a crucial way that humans comprehend the world, and it has been a major focus of philosophy, statistics, mathematics, psychology, and the cognitive sciences. Philosophers such as Hume and Kant have long argued whether causality is a human-centric illusion or the discovery of a priori truth BIBREF10 , BIBREF11 . Causal inference in science is incredibly important, and researchers have developed statistical measures such as Granger causality BIBREF12 , mathematical and probabilistic frameworks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , and text mining procedures BIBREF17 , BIBREF18 , BIBREF19 to better infer causal influence from data. In the cognitive sciences, the famous perception experiments of Michotte et al. led to a long line of research exploring the cognitive biases that humans possess when attempting to link cause and effect BIBREF20 , BIBREF21 , BIBREF22 .",
"How humans understand and communicate cause and effect relationships is complicated, and is influenced by language structure BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 and sentiment or valence BIBREF27 . A key finding is that the perceived emphasis or causal weight changes between the agent (the grammatical construct responsible for a cause) and the patient (the construct effected by the cause) depending on the types of verbs used to describe the cause and effect. Researchers have hypothesized BIBREF28 that this is because of the innate weighting property of the verbs in the English language that humans use to attribute causes and effects. Another finding is the role of a valence bias: the volume and intensity of causal reasoning may increase due to negative feedback or negative events BIBREF27 .",
"Despite these long lines of research, causal attributions made via social media or online social networks have not been well studied. The goal of this paper is to explore the language and topics of causal statements in a large corpus of social media taken from Twitter. We hypothesize that language and sentiment biases play a significant role in these statements, and that tools from natural language processing and computational linguistics can be used to study them. We do not attempt to study the factual correctness of these statements or offer any degree of verification, nor do we exhaustively identify and extract all causal statements from these data. Instead, here we focus on statements that are with high certainty causal statements, with the goal to better understand key characteristics about causal statements that differ from everyday online communication.",
"The rest of this paper is organized as follows: In Sec. \"Materials and Methods\" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. \"Results\" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. \"Discussion\" ."
],
[
"Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.",
"All document text was processed the same way. Punctuation, XML characters, and hyperlinks were removed, as were Twitter-specific “at-mentions” and “hashtags” (see also the Appendix). There is useful information here, but it is either not natural language text, or it is Twitter-specific, or both. Documents were broken into individual words (unigrams) on whitespace. Casing information was retained, as we will use it for our Named Entity analysis, but otherwise all words were considered lowercase only (see also the Appendix). Stemming BIBREF30 and lemmatization BIBREF31 were not performed.",
"Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively."
],
[
"Documents were further studied by annotating their unigrams with Parts-of-Speech (POS) and Named Entities (NE) tags. POS tagging was done using NLTK v3.1 BIBREF29 which implements an averaged perceptron classifier BIBREF32 trained on the Brown Corpus BIBREF33 . (POS tagging is affected by punctuation; we show in the Appendix that our results are relatively robust to the removal of punctuation.) POS tags denote the nouns, verbs, and other grammatical constructs present in a document. Named Entity Recognition (NER) was performed using the 4-class, distributional similarity tagger provided as part of the Stanford CoreNLP v3.6.0 toolkit BIBREF34 . NER aims to identify and classify proper words in a text. The NE classifications considered were: Organization, Location, Person, and Misc. The Stanford NER tagger uses a conditional random field model BIBREF35 trained on diverse sets of manually-tagged English-language data (CoNLL-2003) BIBREF34 . Conditional random fields allow dependencies between words so that `New York' and `New York Times', for example, are classified separately as a location and organization, respectively. These taggers are commonly used and often provide reasonably accurate results, but there is always potential ambiguity in written text and improving upon these methods remains an active area of research.",
"Unigrams, POS, and NEs were compared between the cause and control corpora using odds ratios (ORs): ",
"$$\\operatorname{OR}(x) = \\frac{p_C(x)/ (1-p_C(x))}{p_N(x) / (1-p_N(x))},$$ (Eq. 1) ",
" where $p_C(x)$ and $p_N(x)$ are the probabilities that a unigram, POS, or NE $x$ occurs in the causal and control corpus, respectively. These probabilities were computed for each corpus separately as $p(x) = f(x) / \\sum _{x^{\\prime } \\in V} f(x^{\\prime })$ , where $f(x)$ is the total number of occurrences of $x$ in the corpus and $V$ is the relevant set of unigrams, POS, or NEs. Confidence intervals for the ORs were computed using Wald's methodology BIBREF36 .",
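A small sketch of Eq. (1) computed from raw counts, with a Wald-style confidence interval on the log odds ratio as described above; the counts in the usage line are invented for illustration.

```python
import math

def odds_ratio(f_causal, n_causal, f_control, n_control, z=1.96):
    """Odds ratio of Eq. (1) from raw counts, with a Wald confidence interval.
    f_*: occurrences of the item in each corpus; n_*: total occurrences of all items."""
    a, b = f_causal, n_causal - f_causal      # causal corpus: item vs. everything else
    c, d = f_control, n_control - f_control   # same split for the control corpus
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    lo, hi = math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Made-up counts, only to show usage:
print(odds_ratio(f_causal=5000, n_causal=9_000_000, f_control=2500, n_control=9_200_000))
```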
"As there are many unique unigrams in the text, when computing unigram ORs we focused on the most meaningful unigrams within each corpus by using the following filtering criteria: we considered only the $\\operatorname{OR}$ s of the 1500 most frequent unigrams in that corpus that also have a term-frequency-inverse-document-frequency (tf-idf) score above the 90th percentile for that corpus BIBREF37 . The tf-idf was computed as ",
"$$\\mbox{tf-idf}(w) = \\log f(w) \\times \\log \\left(D̑{\\mathit {df}(w)} \\right) ,$$ (Eq. 2) ",
"where $D$ is the total number of documents in the corpus, and $\\mathit {df}(w)$ is the number of documents in the corpus containing unigram $w$ . Intuitively, unigrams with higher tf-idf scores appear frequently, but are not so frequent that they are ubiquitous through all documents. Filtering via tf-idf is standard practice in the information retrieval and data mining fields."
],
[
"For a better understanding of the higher-order language structure present in text phrases, cause-trees were constructed. A cause-tree starts with a root cause word (either `caused', `causing' or `causes'), then the two most probable words following (preceding) the root are identified. Next, the root word plus one of the top probable words is combined into a bigram and the top two most probable words following (preceding) this bigram are found. Repeatedly applying this process builds a binary tree representing the $n$ -grams that begin with (terminate at) the root word. This process can continue until a certain $n$ -gram length is reached or until there are no more documents long enough to search."
],
[
"Sentimental analysis was applied to estimate the emotional content of documents. Two levels of analysis were used: a method where individual unigrams were given crowdsourced numeric sentiment scores, and a second method involving a trained classifier that can incorporate document-level phrase information.",
"For the first sentiment analysis, each unigram $w$ was assigned a crowdsourced “labMT” sentiment score $s(w)$ BIBREF5 . (Unlike BIBREF5 , scores were recentered by subtracting the mean, $s(w) \\leftarrow s(w)-\\left<s\\right>$ .) Unigrams determined by volunteer raters to have a negative emotional sentiment (`hate',`death', etc.) have $s(w) < 0$ , while unigrams determined to have a positive emotional sentiment (`love', `happy', etc.) tend to have $s(w) > 0$ . Unigrams that have labMT scores and are above the 90th percentile of tf-idf for the corpus form the set $\\tilde{V}$ . (Unigrams in $\\tilde{V}$ need not be among the 1500 most frequent unigrams.) The set $\\tilde{V}$ captures 87.9% (91.5%) of total unigrams in the causal (control) corpus. Crucially, the tf-idf filtering ensures that the words `caused', `causes', and `causing', which have a slight negative sentiment, are not included and do not introduce a systematic bias when comparing the two corpora.",
"This sentiment measure works on a per-unigram basis, and is therefore best suited for large bodies of text, not short documents BIBREF5 . Instead of considering individual documents, the distributions of labMT scores over all unigrams for each corpus was used to compare the corpora. In addition, a single sentiment score for each corpus was computed as the average sentiment score over all unigrams in that corpus, weighed by unigram frequency: $\\sum _{w \\in \\tilde{V}} {f(w) s(w)} \\Big / \\sum _{w^{\\prime } \\in \\tilde{V}} f(w^{\\prime })$ .",
"To supplement this sentiment analysis method, we applied a second method capable of estimating with reasonable accuracy the sentiment of individual documents. We used the sentiment classifier BIBREF38 included in the Stanford CoreNLP v3.6.0 toolkit to documents in each corpus. Documents were individually classified into one of five categories: very negative, negative, neutral, positive, very positive. The data used to train this classifier is taken from positive and negative reviews of movies (Stanford Sentiment Treebank v1.0) BIBREF38 ."
],
[
"Lastly, we applied topic modeling to the causal corpus to determine what are the topical foci most discussed in causal statements. Topics were built from the causal corpus using Latent Dirichlet Allocation (LDA) BIBREF39 . Under LDA each document is modeled as a bag-of-words or unordered collection of unigrams. Topics are considered as mixtures of unigrams by estimating conditional distributions over unigrams: $P(w|T)$ , the probability of unigram $w$ given topic $T$ and documents are considered as mixtures of topics via $P(T|d)$ , the probability of topic $T$ given document $d$ . These distributions are then found via statistical inference given the observed distributions of unigrams across documents. The total number of topics is a parameter chosen by the practitioner. For this study we used the MALLET v2.0.8RC3 topic modeling toolkit BIBREF40 for model inference. By inspecting the most probable unigrams per topic (according to $P(w|T)$ ), we found 10 topics provided meaningful and distinct topics."
],
[
"We have collected approximately 1M causal statements made on Twitter over the course of 2013, and for a control we gathered the same number of statements selected at random but controlling for time of year (see Methods). We applied Parts-of-Speech (POS) and Named Entity (NE) taggers to all these texts. Some post-processed and tagged example documents, both causal and control, are shown in Fig. 1 A. We also applied sentiment analysis methods to these documents (Methods) and we have highlighted very positive and very negative words throughout Fig. 1 .",
"In Fig. 1 B we present odds ratios for how frequently unigrams (words), POS, or NE appear in causal documents relative to control documents. The three unigrams most strongly skewed towards causal documents were `stress', `problems', and `trouble', while the three most skewed towards control documents were `photo', `ready', and `cute'. While these are only a small number of the unigrams present, this does imply a negative sentiment bias among causal statements (we return to this point shortly).",
"Figure 1 B also presents odds ratios for POS tags, to help us measure the differences in grammatical structure between causal and control documents (see also the Appendix for the effects of punctuation and casing on these odds ratios). The causal corpus showed greater odds for plural nouns (Penn Treebank tag: NNS), plural proper nouns (NNPS), Wh-determiners/pronouns (WDT, WP$) such as `whichever',`whatever', `whose', or `whosever', and predeterminers (PDT) such as `all' or `both'. Predeterminers quantify noun phrases such as `all' in `after all the events that caused you tears', showing that many causal statements, despite the potential brevity of social media, can encompass or delineate classes of agents and/or patients. On the other hand, the causal corpus has lower odds than the control corpus for list items (LS), proper singular nouns (NNP), and interjections (UH).",
"Lastly, Fig. 1 B contains odds ratios for NE tags, allowing us to quantify the types of proper nouns that are more or less likely to appear in causal statements. Of the four tags, only the “Person” tag is less likely in the causal corpus than the control. (This matches the odds ratio for the proper singular noun discussed above.) Perhaps surprisingly, these results together imply that causal statements are less likely to involve individual persons than non-causal statements. There is considerable celebrity news and gossip on social media BIBREF4 ; discussions of celebrities may not be especially focused on attributing causes to these celebrities. All other NE tags, Organization, Location, and Miscellaneous, occur more frequently in the causal corpus than the control. All the odds ratios in Fig. 1 B were significant at the $\\alpha = 0.05$ level except the List item marker (LS) POS tag.",
"The unigram analysis in Fig. 1 does not incorporate higher-order phrase structure present in written language. To explore these structures specifically in the causal corpus, we constructed “cause-trees”, shown in Fig. 2 . Inspired by association mining BIBREF41 , a cause-tree is a binary tree rooted at either `caused', `causes', or `causing', that illustrates the most frequently occurring $n$ -grams that either begin or end with that root cause word (see Methods for details).",
"The “causes” tree shows the focused writing (sentence segments) that many people use to express either the relationship between their own actions and a cause-and-effect (“even if it causes”), or the uncontrollable effect a cause may have on themselves: “causes me to have” shows a person's inability to control a causal event (“[...] i have central heterochromia which causes me to have dual colors in both eyes”). The `causing' tree reveals our ability to confine causal patterns to specific areas, and also our ability to be affected by others causal decisions. Phrases like “causing a scene in/at” and “causing a ruckus in/at” (from documents like “causing a ruckus in the hotel lobby typical [...]”) show people commonly associate bounds on where causal actions take place. The causing tree also shows people's tendency to emphasize current negativity: Phrases like “pain this is causing” coming from documents like “cant you see the pain you are causing her” supports the sentiment bias that causal attribution is more likely for negative cause-effect associations. Finally, the `caused' tree focuses heavily on negative events and indicates people are more likely to remember negative causal events. Documents with phrases from the caused tree (“[...] appalling tragedy [...] that caused the death”, “[...] live with this pain that you caused when i was so young [...]”) exemplify the negative events that are focused on are large-scale tragedies or very personal negative events in one's life.",
"Taken together, the popularity of negative sentiment unigrams (Fig. 1 ) and $n$ -grams (Fig. 2 ) among causal documents shows that emotional sentiment or “valence” may play a role in how people perform causal attribution BIBREF27 . The “if it bleeds, it leads” mentality among news media, where violent and negative news are more heavily reported, may appeal to this innate causal association mechanism. (On the other hand, many news media themselves use social media for reporting.) The prevalence of negative sentiment also contrasts with the “better angels of our nature” evidence of Pinker BIBREF42 , illustrating one bias that shows why many find the results of Ref. BIBREF42 surprising.",
"Given this apparent sentiment skew, we further studied sentiment (Fig. 3 ). We compared the sentiment between the corpora in four different ways to investigate the observation (Figs. 1 B and 2 ) that people focus more about negative concepts when they discuss causality. First, we computed the mean sentiment score of each corpus using crowdsourced “labMT” scores weighted by unigram frequency (see Methods). We also applied tf-idf filtering (Methods) to exclude very common words, including the three cause-words, from the mean sentiment score. The causal corpus text was slightly negative on average while the control corpus was slightly positive (Fig. 3 A). The difference in mean sentiment score was significant (t-test: $p < 0.01$ ).",
"Second, we moved from the mean score to the distribution of sentiment across all (scored) unigrams in the causal and control corpora (Fig. 3 B). The causal corpus contained a large group of negative sentiment unigrams, with labMT scores in the approximate range $-3 < s < -1/2$ ; the control corpus had significantly fewer unigrams in this score range.",
"Third, in Fig. 3 C we used POS tags to categorize scored unigrams into nouns, verbs, and adjectives. Studying the distributions for each, we found that nouns explain much of the overall difference observed in Fig. 3 B, with verbs showing a similar but smaller difference between the two corpora. Adjectives showed little difference. The distributions in Fig. 3 C account for 87.8% of scored text in the causal corpus and 77.2% of the control corpus. The difference in sentiment between corpora was significant for all distributions (t-test: $p < 0.01$ ).",
"Fourth, to further confirm that the causal documents tend toward negative sentiment, we applied a separate, independent sentiment analysis using the Stanford NLP sentiment toolkit BIBREF38 to classify the sentiment of individual documents not unigrams (see Methods). Instead of a numeric sentiment score, this classifier assigns documents to one of five categories ranging from very negative to very positive. The classifier showed that the causal corpus contains more negative and very negative documents than the control corpus, while the control corpus contains more neutral, positive, and very positive documents (Fig. 3 D).",
"We have found language (Figs. 1 and 2 ) and sentiment (Fig. 3 ) differences between causal statements made on social media compared with other social media statements. But what is being discussed? What are the topical foci of causal statements? To study this, for our last analysis we applied topic models to the causal statements. Topic modeling finds groups of related terms (unigrams) by considering similarities between how those terms co-occur across a set of documents.",
"We used the popular topic modeling method Latent Dirichlet Allocation (LDA) BIBREF39 . We ranked unigrams by how strongly associated they were with the topic. Inspecting these unigrams we found that a 10-topic model discovered meaningful topics. See Methods for full details. The top unigrams for each topic are shown in Tab. 1 .",
"Topics in the causal corpus tend to fall into three main categories: (i) news, covering current events, weather, etc.; (ii) medicine and health, covering cancer, obesity, stress, etc.; and (iii) relationships, covering problems, stress, crisis, drama, sorry, etc.",
"While the topics are quite different, they are all similar in their use of negative sentiment words. The negative/global features in the `news' topic are captured in the most representative words: damage, fire, power, etc. Similar to news, the `accident' topic balances the more frequent day-to-day minor frustrations with the less frequent but more severe impacts of car accidents. The words `traffic' and `delays' are the most probable words for this topic, and are common, low-impact occurrences. On the contrary, `crash', `car', `accident' and `death' are the next most probable words for the accident topic, and generally show a focus on less-common but higher-impact events.",
"The `medical' topic also focused on negative words; highly probable words for this topic included `cancer', `break', `disease', `blood', etc. Meanwhile, the `body' topic contained words like: `stress', `lose', and `weight', giving a focus on on our more personal struggles with body image. Besides body image, the `injuries' topic uses specific pronouns (`his', `him', `her') in references to a person's own injuries or the injuries of others such as athletes.",
"Aside from more factual information, social information is well represented in causal statements. The `problems' topic shows people attribute their problems to many others with terms like: `dont', `people', `they', `them'. The `stress' topic also uses general words such as `more', `than', or `people' to link stress to all people, and in the same vein, the `crisis' topic focuses on problems within organizations such as governments. The `drama' and `sorry' topics tend towards more specific causal statements. Drama used the words: `like', `she', and `her' while documents in the sorry topic tended to address other people.",
"The topics of causal documents discovered by LDA showed that both general and specific statements are made regarding news, medicine, and relationships when individuals make causal attributions online."
],
[
"The power of online communication is the speed and ease with which information can be propagated by potentially any connected users. Yet these strengths come at a cost: rumors and misinformation also spread easily. Causal misattribution is at the heart of many rumors, conspiracy theories, and misinformation campaigns.",
"Given the central role of causal statements, further studies of the interplay of information propagation and online causal attributions are crucial. Are causal statements more likely to spread online and, if so, in which ways? What types of social media users are more or less likely to make causal statements? Will a user be more likely to make a causal statement if they have recently been exposed to one or more causal statements from other users?",
"The topics of causal statements also bring forth important questions to be addressed: how timely are causal statements? Are certain topics always being discussed in causal statements? Are there causal topics that are very popular for only brief periods and then forgotten? Temporal dynamics of causal statements are also interesting: do time-of-day or time-of-year factors play a role in how causal statements are made? Our work here focused on a limited subset of causal statements, but more generally, these results may inform new methods for automatically detecting causal statements from unstructured, natural language text BIBREF17 . Better computational tools focused on causal statements are an important step towards further understanding misinformation campaigns and other online activities. Lastly, an important but deeply challenging open question is how, if it is even possible, to validate the accuracy of causal statements. Can causal statements be ranked by some confidence metric(s)? We hope to pursue these and other questions in future research.",
"Parts-of-speech tagging depends on punctuation and casing, which we filtered in our data, so a study of how robust the POS algorithm is to punctuation and casing removal is important. We computed POS tags for the corpora with and without casing as well as with and without punctuation (which includes hashtags, links and at-symbols). Two tags mentioned in Fig. 1 B, NNPS and LS (which was not significant), were affected by punctuation removal. Otherwise, there is a strong correlation (Fig. 4 ) between Odds Ratios (causal vs. control) with punctuation and without punctuation, including casing and without casing ( $\\rho = 0.71$ and $0.80$ , respectively), indicating the POS differences between the corpora were primarily not due to the removal of punctuation or casing."
],
[
"We thank R. Gallagher for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. ISS-1447634."
]
],
"section_name": [
"Introduction",
"Dataset, filtering, and corpus selection",
"Tagging and corpus comparison",
"Cause-trees",
"Sentiment analysis",
"Topic modeling",
"Results",
"Discussion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"f286d3a109fe0b38fcee6121e231001a4704e9c8"
],
"answer": [
{
"evidence": [
"Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively."
],
"extractive_spans": [],
"free_form_answer": "They identify documents that contain the unigrams 'caused', 'causing', or 'causes'",
"highlighted_evidence": [
"Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"b2733052258dc2ad74edbb76c3f152740e30bdbc"
],
"answer": [
{
"evidence": [
"Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.",
"Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively."
],
"extractive_spans": [],
"free_form_answer": "Randomly selected from a Twitter dump, temporally matched to causal documents",
"highlighted_evidence": [
"Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API.",
"Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"ae22aca6f06a3c10293e77feb2defd1a052ebf47"
],
"answer": [
{
"evidence": [
"Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively."
],
"extractive_spans": [],
"free_form_answer": "Presence of only the exact unigrams 'caused', 'causing', or 'causes'",
"highlighted_evidence": [
"Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"0ce98e42cf869d3feab61c966335792e98d16ad0"
],
"answer": [
{
"evidence": [
"The rest of this paper is organized as follows: In Sec. \"Materials and Methods\" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. \"Results\" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. \"Discussion\" ."
],
"extractive_spans": [],
"free_form_answer": "Only automatic methods",
"highlighted_evidence": [
"In Sec. \"Materials and Methods\" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"34a0794200f1e29c3849bfa03a4f6128de26733b"
],
"answer": [
{
"evidence": [
"Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.",
"Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively."
],
"extractive_spans": [],
"free_form_answer": "Randomly from a Twitter dump",
"highlighted_evidence": [
"Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API.",
"Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"d3219ac0de3157cec4bf78b9f020c264071b86a8"
],
"answer": [
{
"evidence": [
"Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.",
"Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively."
],
"extractive_spans": [],
"free_form_answer": "Randomly from Twitter",
"highlighted_evidence": [
"Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API.",
"Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"yes",
"yes"
],
"question": [
"How do they extract causality from text?",
"What is the source of the \"control\" corpus?",
"What are the selection criteria for \"causal statements\"?",
"Do they use expert annotations, crowdsourcing, or only automatic methods to analyze the corpora?",
"how do they collect the comparable corpus?",
"How do they collect the control corpus?"
],
"question_id": [
"4c822bbb06141433d04bbc472f08c48bc8378865",
"1baf87437b70cc0375b8b7dc2cfc2830279bc8b5",
"0b31eb5bb111770a3aaf8a3931d8613e578e07a8",
"7348e781b2c3755b33df33f4f0cab4b94fcbeb9b",
"f68bd65b5251f86e1ed89f0c858a8bb2a02b233a",
"e111925a82bad50f8e83da274988b9bea8b90005"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"social",
"social",
"social",
"social",
"social",
"social"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Measuring the differences between causal and control documents. (A) Examples of processed documents tagged by Parts-of-Speech (POS) or Named Entities (NEs). Unigrams highlighted in red (yellow) are in the bottom 10% (top 10%) of the labMT sentiment scores. (B) Log Odds ratios with 95% Wald confidence intervals for the most heavily skewed unigrams, POS, and all NEs between the causal and control corpus. POS tags that are plural and use Wh-pronouns (that, what, which, ...) are more common in the causal corpus, while singular nouns and list items are more common in the controls. Finally, the ‘Person’ tag is the only NE less likely in the causal corpus. Certain unigrams were censored for presentation only, not analysis. All shown odds ratios were significant at the α = 0.05 level except LS (List item markers). See also the Appendix.",
"Fig. 2. “Cause-trees” containing the most probable n-grams terminating at (left) or beginning with (right) a chosen root cause-word (see Methods). Line widths are log proportional to their corresponding n-gram frequency and bar plots measure the 4-gram per-document rate N(4-gram)/D. Most trees express negative sentiment consistent with the unigram analysis (Fig. 1). The ‘causes’ tree shows (i) people think in terms of causal probability (“you know what causes [. . . ]”), and (ii) people use causal language when they are directly affected or being affected by another (“causes you”, “causes me”). The ‘causing’ tree is more global (“causing a ruckus/scene”) and ego-centric (“pain you are causing”). The ‘caused’ tree focuses on negative sentiment and alludes to humans retaining negative causal thoughts in the past.",
"Fig. 3. Sentiment analysis revealed differences between the causal and control corpora. (A) The mean unigram sentiment score (see Methods), computed from crowdsourced “labMT” scores [6], was more negative for the causal corpus than for the control. This held whether or not tf-idf filtering was applied. (B) The distribution of unigram sentiment scores for the two corpora showed more negative unigrams (with scores in the approximate range −3 < s < −1/2) in the causal corpus compared with the control corpus. (C) Breaking the sentiment distribution down by Parts-of-Speech, nouns show the most pronounced difference in sentiment between cause and control; verbs and adjectives are also more negative in the causal corpus than the control but with less of a difference than nouns. POS tags corresponding to nouns, verbs, and adjectives together account for 87.8% and 77.2% of the causal and control corpus text, respectively. (D) Applying a different sentiment analysis tool—a trained sentiment classifier [39] that assigns individual documents to one of five categories—the causal corpus had an overabundance of negative sentiment documents and fewer positive sentiment documents than the control. This shift from very positive to very negative documents further supports the tendency for causal statements to be negative.",
"TABLE I TOPICAL FOCI OF CAUSAL DOCUMENTS. EACH COLUMN LISTS THE UNIGRAMS MOST HIGHLY ASSOCIATED (IN DESCENDING ORDER) WITH A TOPIC, COMPUTED FROM A 10-TOPIC LATENT DIRICHLET ALLOCATION MODEL. THE TOPICS GENERALLY FALL INTO THREE BROAD CATEGORIES: NEWS, MEDICINE, AND RELATIONSHIPS. MANY TOPICS PLACE AN EMPHASIS ON NEGATIVE SENTIMENT TERMS. TOPIC NAMES WERE DETERMINED MANUALLY. WORDS ARE HIGHLIGHTED ACCORDING TO SENTIMENT SCORE AS IN FIG. 1.",
"Fig. 4. Comparison of Odds Ratios for all Parts-of-Speech (POS) tags with punctuation retained and removed for documents with and without casing. Tags Cardinal number (CD), List item marker (LS), and Proper noun plural (NNPS) were most affected by removing punctuation."
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"7-TableI-1.png",
"8-Figure4-1.png"
]
} | [
"How do they extract causality from text?",
"What is the source of the \"control\" corpus?",
"What are the selection criteria for \"causal statements\"?",
"Do they use expert annotations, crowdsourcing, or only automatic methods to analyze the corpora?",
"how do they collect the comparable corpus?",
"How do they collect the control corpus?"
] | [
[
"1604.05781-Dataset, filtering, and corpus selection-2"
],
[
"1604.05781-Dataset, filtering, and corpus selection-0",
"1604.05781-Dataset, filtering, and corpus selection-2"
],
[
"1604.05781-Dataset, filtering, and corpus selection-2"
],
[
"1604.05781-Introduction-4"
],
[
"1604.05781-Dataset, filtering, and corpus selection-0",
"1604.05781-Dataset, filtering, and corpus selection-2"
],
[
"1604.05781-Dataset, filtering, and corpus selection-0",
"1604.05781-Dataset, filtering, and corpus selection-2"
]
] | [
"They identify documents that contain the unigrams 'caused', 'causing', or 'causes'",
"Randomly selected from a Twitter dump, temporally matched to causal documents",
"Presence of only the exact unigrams 'caused', 'causing', or 'causes'",
"Only automatic methods",
"Randomly from a Twitter dump",
"Randomly from Twitter"
] | 187 |