paper_id: string (lengths 18–19)
venue: string (2 values)
focused_review: string (lengths 392–7.4k)
point: string (lengths 69–489)
ACL_2017_318_review
ACL_2017
1. Presentation and clarity: important details with respect to the proposed models are left out or poorly described (more details below). Otherwise, the paper generally reads fairly well; however, the manuscript would need to be improved if accepted. 2. The evaluation on the word analogy task seems a bit unfair given that the semantic relations are explicitly encoded by the sememes, as the authors themselves point out (more details below). - General Discussion: 1. The authors stress the importance of accounting for polysemy and learning sense-specific representations. While polysemy is taken into account by calculating sense distributions for words in particular contexts in the learning procedure, the evaluation tasks are entirely context-independent, which means that, ultimately, there is only one vector per word -- or at least this is what is evaluated. Instead, word sense disambiguation and sememe information are used for improving the learning of word representations. This needs to be clarified in the paper. 2. It is not clear how the sememe embeddings are learned and the description of the SSA model seems to assume the pre-existence of sememe embeddings. This is important for understanding the subsequent models. Do the SAC and SAT models require pre-training of sememe embeddings? 3. It is unclear how the proposed models compare to models that only consider different senses but not sememes. Perhaps the MST baseline is an example of such a model? If so, this is not sufficiently described (emphasis is instead put on soft vs. hard word sense disambiguation). The paper would be stronger with the inclusion of more baselines based on related work. 4. A reasonable argument is made that the proposed models are particularly useful for learning representations for low-frequency words (by mapping words to a smaller set of sememes that are shared by sets of words). Unfortunately, no empirical evidence is provided to test the hypothesis. It would have been interesting for the authors to look deeper into this. This aspect also does not seem to explain the improvements much since, e.g., the word similarity data sets contain frequent word pairs. 5. Related to the above point, the improvement gains seem more attributable to the incorporation of sememe information than word sense disambiguation in the learning procedure. As mentioned earlier, the evaluation involves only the use of context-independent word representations. Even if the method allows for learning sememe- and sense-specific representations, they would have to be aggregated to carry out the evaluation task. 6. The example illustrating HowNet (Figure 1) is not entirely clear, especially the modifiers of "computer". 7. It says that the models are trained using their best parameters. How exactly are these determined? It is also unclear how K is set -- is it optimized for each model or is it randomly chosen for each target word observation? Finally, what is the motivation for setting K' to 2?
2. The evaluation on the word analogy task seems a bit unfair given that the semantic relations are explicitly encoded by the sememes, as the authors themselves point out (more details below).
ARR_2022_236_review
ARR_2022
- My main criticism is that the "mismatched" image caption dataset is artificial and may not capture the kind of misinformation that is posted on platforms like Twitter. For instance, someone posting a fake image of a lockdown at a particular place may not just be about a mismatch between the image and the caption, but may rather require fact-checking, etc. Moreover, the in-the-wild datasets on which a complementary evaluation is conducted are also more about mismatched image-caption pairs than about real misinformation (lines 142-143). Therefore, the extent to which this dataset can be used for misinformation detection is limited. I would have liked to see this distinction between misinformation and mismatched image captions made clear in the paper. - Also, since the dataset is artificially created, the dataset itself might have a lot of noise. For instance, the collected "pristine" set of tweets may not be pristine enough and might instead contain misinformation as well as out-of-context images. I would have liked to see more analysis around the quality of the collected dataset and the amount of noise it potentially has. - Since this is a new dataset, I would have liked to see evaluation of more models (other than just CLIP). But given that it is only a short paper, it is probably not critical (the paper makes enough contributions otherwise). - Table 4 and Table 5: Are the differences statistically significant? (especially important because the hNews and Twitter datasets are really small) - Lines 229-240: Are the differences between the topics statistically significant?
- Also, since the dataset is artificially created, the dataset itself might have a lot of noise. For instance, the collected "pristine" set of tweets may not be pristine enough and might instead contain misinformation as well as out-of-context images. I would have liked to see more analysis around the quality of the collected dataset and the amount of noise it potentially has.
ACL_2017_554_review
ACL_2017
1) The paper does not dig into the theory proofs and show the convergence properties of the proposed algorithm. 2) The paper only shows the comparison between SG-MCMC and RMSProp and did not conduct other comparisons. It should explain more about the relation between pSGLD and RMSProp beyond just mentioning that they are counterparts in two families. 3) The paper does not discuss the training speed impact in more detail. - General Discussion:
1) The paper does not dig into the theory proofs and show the convergence properties of the proposed algorithm.
ACL_2017_516_review
ACL_2017
Missing related work on anchor words Evaluation on 20 Newsgroups is not ideal Theoretical contribution itself is small - General Discussion: The authors propose a new method of interactive user specification of topics called Tandem Anchors. The approach leverages the anchor words algorithm, a matrix-factorization approach to learning topic models, by replacing the individual anchors inferred from the Gram-Schmidt algorithm with constructed anchor pseudowords created by combining the sparse vector representations of multiple words that for a topic facet. The authors determine that the use of a harmonic mean function to construct pseudowords is optimal by demonstrating that classification accuracy of document-topic distribution vectors using these anchors produces the most improvement over Gram-Schmidt. They also demonstrate that their work is faster than existing interactive methods, allowing interactive iteration, and show in a user study that the multiword anchors are easier and more effective for users. Generally, I like this contribution a lot: it is a straightforward modification of an existing algorithm that actually produces a sizable benefit in an interactive setting. I appreciated the authors’ efforts to evaluate their method on a variety of scales. While I think the technical contribution in itself is relatively small (a strategy to assemble pseudowords based on topic facets) the thoroughness of the evaluation merited having it be a full paper instead of a short paper. It would have been nice to see more ideas as to how to build these facets in the absence of convenient sources like category titles in 20 Newsgroups or when initializing a topic model for interactive learning. One frustration I had with this paper is that I find evaluation on 20 Newsgroups to not be great for topic modeling: the documents are widely different lengths, preprocessing matters a lot, users have trouble making sense of many of the messages, and naive bag-of-words models beat topic models by a substantial margin. Classification tasks are useful shorthand for how well a topic model corresponds to meaningful distinctions in the text by topic; a task like classifying news articles by section or reviews by the class of the subject of the review might be more appropriate. It would also have been nice to see a use case that better appealed to a common expressed application of topic models, which is the exploration of a corpus. There were a number of comparisons I think were missing, as the paper contains little reference to work since the original proposal of the anchor word model. In addition to comparing against standard Gram-Schmidt, it would have been good to see the method from Lee et. al. (2014), “Low-dimensional Embeddings for Interpretable Anchor-based Topic Inference”. I also would have liked to have seen references to Nguyen et. al. (2013), “Evaluating Regularized Anchor Words” and Nguyen et. al. (2015) “Is Your Anchor Going Up or Down? Fast and Accurate Supervised Topic Models”, both of which provide useful insights into the anchor selection process. I had some smaller notes: - 164: …entire dataset - 164-166: I’m not quite sure what you mean here. I think you are claiming that it takes too long to do one pass? My assumption would have been you would use only a subset of the data to retrain the model instead of a full sweep, so it would be good to clarify what you mean. - 261&272: any reason you did not consider the and operator or element-wise max? 
They seem to correspond to the ideas of union and intersection from the or operator and element-wise min, and it wasn’t clear to me why the ones you chose were better options. - 337: Usenet should be capitalized - 338-340: Why fewer than 100 (as that is a pretty aggressive boundary)? Also, did you remove headers, footers, and/or quotes from the messages? - 436-440: I would have liked to see a bit more explanation of what this tells us about confusion. - 692: using tandem anchors Overall, I think this paper is a meaningful contribution to interactive topic modeling that I would like to see available for people outside the machine learning community to investigate, classify, and test hypotheses about their corpora. POST-RESPONSE: I appreciate the thoughtful responses of the authors to my questions. I would maintain that for some of the complementary related work it's useful to compare to non-interactive work, even if it does something different.
- 261&272: any reason you did not consider the and operator or element-wise max? They seem to correspond to the ideas of union and intersection from the or operator and element-wise min, and it wasn’t clear to me why the ones you chose were better options.
ACL_2017_588_review
ACL_2017
and the evaluation leaves some questions unanswered. - Strengths: The proposed task requires encoding external knowledge, and the associated dataset may serve as a good benchmark for evaluating hybrid NLU systems. - Weaknesses: 1) All the models evaluated, except the best performing model (HIERENC), do not have access to contextual information beyond a sentence. This does not seem sufficient to predict a missing entity. It is unclear whether any attempts at coreference and anaphora resolution have been made. It would generally help to see how well humans perform at the same task. 2) The choice of predictors used in all models is unusual. It is unclear why similarity between context embedding and the definition of the entity is a good indicator of the goodness of the entity as a filler. 3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of context filled by every possible entity in the vocabulary. This does not seem to be a good idea since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise. 4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies, and analyze how the accuracies of the proposed models vary with frequencies of entities. - Questions to the authors: 1) An important assumption being made is that d_e are good replacements for entity embeddings. Was this assumption tested? 2) Have you tried building a classifier that just takes h_i^e as inputs? I have read the authors' responses. I still think the task+dataset could benefit from human evaluation. This task can potentially be a good benchmark for NLU systems, if we know how difficult the task is. The results presented in the paper are not indicative of this due to the reasons stated above. Hence, I am not changing my scores.
3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of context filled by every possible entity in the vocabulary. This does not seem to be a good idea since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise.
ARR_2022_23_review
ARR_2022
The technical novelty is rather lacking, although I believe this doesn't affect the contribution of this paper. - You mention that you only select 10 answers from all correct answers, why do you do this? Does this lead to an underestimation of the performance? - Do you think generative PLMs that are pretrained on biomedical texts could be more suitable for solving the multi-token problem?
- You mention that you only select 10 answers from all correct answers, why do you do this? Does this lead to an underestimation of the performance?
ARR_2022_65_review
ARR_2022
1. The paper covers little qualitative aspects of the domains, so it is hard to understand how they differ in linguistic properties. For example, I think it is vague to say that the fantasy novel is more “canonical” (line 355). Text from a novel may be similar to that from news articles in that sentences tend to be complete and contain fewer omissions, in contrast to product comments which are casually written and may have looser syntactic structures. However, novel text is also very different from news text in that it contains unusual predicates and even imaginary entities as arguments. It seems that the authors are arguing that syntactic factors are more significant in SRL performance, and the experimental results are also consistent with this. Then it would be helpful to show a few examples from each domain to illustrate how they differ structurally. 2. The proposed dataset uses a new annotation scheme that is different from that of previous datasets, which introduces difficulties of comparison with previous results. While I think the frame-free scheme is justified in this paper, the compatibility with other benchmarks is an important issue that needs to be discussed. It may be possible to, for example, convert frame-based annotations to frame-free ones. I believe this is doable because FrameNet also has the core/non-core sets of argument for each frame. It would also be better if the authors can elaborate more on the relationship between this new scheme and previous ones. Besides eliminating the frame annotation, what are the major changes to the semantic role labels? - In Sec. 3, it is a bit confusing why there is a division of source domain and target domain. Thus, it might be useful to mention explicitly that the dataset is designed for domain transfer experiments. - Line 226-238 seem to suggest that the authors selected sentences from raw data of these sources, but line 242-244 say these already have syntactic information. If I understand correctly, the data selected is a subset of Li et al. (2019a)’s dataset. If this is the case, I think this description can be revised, e.g. mentioning Li et al. (2019a) earlier, to make it clear and precise. - More information about the annotators would be needed. Are they all native Chinese speakers? Do they have linguistics background? - Were pred-wise/arg-wise consistencies used in the construction of existing datasets? I think they are not newly invented. It is useful to know where they come from. - In the SRL formulation (Sec. 5), I am not quite sure what is “the concerned word”. Is it the predicate? Does this formulation cover the task of identifying the predicate(s), or are the predicates given by syntactic parsing results? - From Figure 3 it is not clear to me how ZX is the most similar domain to Source. Grouping the bars by domain instead of role might be better (because we can compare the shapes). It may also be helpful to leverage some quantitative measure (e.g. cross entropy). - How was the train/dev/test split determined? This should be noted (even if it is simply done randomly).
- Line 226-238 seem to suggest that the authors selected sentences from raw data of these sources, but line 242-244 say these already have syntactic information. If I understand correctly, the data selected is a subset of Li et al. (2019a)’s dataset. If this is the case, I think this description can be revised, e.g. mentioning Li et al. (2019a) earlier, to make it clear and precise.
ARR_2022_93_review
ARR_2022
1. From an experimental design perspective, the experimental design suggested by the authors has been used widely for open-domain dialogue systems with the caveat of it not being done in live interactive settings. 2. The authors have not referenced those works that use continuous scales in the evaluation and there is a large body of literature missing from the paper. Some of the references are provided in the comments section. 3. Lack of screenshots of the experimental interface. Comments: 1. Please add screenshots of the interface that was designed. 2. Repetition of the word Tables in Line 549. 3. In Appendix A.3, the GLEU metric is referenced as GLUE. Questions: 1. In Table 1, is there any particular reason for the reduction in pass rate % from free run 1 to free run 2? 2. What is the purpose of the average duration reported in Table 1? There is no supporting explanation about it. Does it include time spent by the user waiting for the model to generate a response? 3. With regard to the model section, is there any particular reason that there was an emphasis on choosing retriever-based transformer models over generative models? Even if the models are based on ConvAI2, there are other GPT2-based language modeling techniques that could have been picked. 4. In Figure 6, what are the models in the last two columns lan_model_p and lan_model? Missing References: 1. Howcroft, David M., Anja Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. "Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions." In Proceedings of the 13th International Conference on Natural Language Generation, pp. 169-182. 2020. 2. Santhanam, S. and Shaikh, S., 2019. Towards best experiment design for evaluating dialogue system output. arXiv preprint arXiv:1909.10122. 3. Santhanam, S., Karduni, A. and Shaikh, S., 2020, April. Studying the effects of cognitive biases in evaluation of conversational agents. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-13). 4. Novikova, J., Dušek, O. and Rieser, V., 2018. RankME: Reliable human ratings for natural language generation. arXiv preprint arXiv:1803.05928. 5. Li, M., Weston, J. and Roller, S., 2019. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087.
2. What is the purpose of the average duration reported in Table 1? There is no supporting explanation about it. Does it include time spent by the user waiting for the model to generate a response?
ACL_2017_726_review
ACL_2017
- Claims of being comparable to state of the art when the results on GeoQuery and ATIS do not support it. General Discussion: This is a sound work of research and could have future potential in the way semantic parsing for downstream applications is done. I was a little disappointed with the claims of “near-state-of-the-art accuracies” on ATIS and GeoQuery, which doesn’t seem to be the case (8 points difference from Liang et al., 2011). And I do not necessarily think that getting SOTA numbers should be the focus of the paper; it has its own significant contribution. I would like to see this paper at ACL provided the authors tone down their claims; in addition, I have some questions for the authors. - What do the authors mean by minimal intervention? Does it mean minimal human intervention? That does not seem to be the case. Does it mean no intermediate representation? If so, the latter term should be used, being less ambiguous. - Table 6: what is the breakdown of the score by correctness and incompleteness? What % of incompleteness do these queries exhibit? - What expertise is required from crowd-workers who produce the correct SQL queries? - It would be helpful to see some analysis of the 48% of user questions which could not be generated. - Figure 3 is a little confusing; I could not follow the sharp dips in performance without paraphrasing around the 8th/9th stages. - Table 4 needs a little more clarification, what splits are used for obtaining the ATIS numbers? I thank the authors for their response.
- Table 4 needs a little more clarification, what splits are used for obtaining the ATIS numbers? I thank the authors for their response.
ACL_2017_105_review
ACL_2017
Maybe the model is just an ordinary BiRNN with alignments de-coupled. Only evaluated on morphology, no other monotone Seq2Seq tasks. - General Discussion: The authors propose a novel encoder-decoder neural network architecture with "hard monotonic attention". They evaluate it on three morphology datasets. This paper is a tough one. On the one hand it is well-written, mostly very clear and also presents a novel idea, namely including monotonicity in morphology tasks. The reason for including such monotonicity is pretty obvious: Unlike machine translation, many seq2seq tasks are monotone, and therefore general encoder-decoder models should not be used in the first place. That they still perform reasonably well should be considered a strong argument for neural techniques, in general. The idea of this paper is now to explicitly enforce a monotonic output character generation. They do this by decoupling alignment and transduction and first aligning input-output sequences monotonically and then training to generate outputs in agreement with the monotone alignments. However, the authors are unclear on this point. I have a few questions: 1) What do your alignments look like? On the one hand, the alignments seem to be of the kind 1-to-many (as in the running example, Fig.1), that is, 1 input character can be aligned with zero, 1, or several output characters. However, this seems to contrast with the description given in lines 311-312 where the authors speak of several input characters aligned to 1 output character. That is, do you use 1-to-many, many-to-1 or many-to-many alignments? 2) Actually, there is a quite simple approach to monotone Seq2Seq. In a first stage, align input and output characters monotonically with a 1-to-many constraint (one can use any monotone aligner, such as the toolkit of Jiampojamarn and Kondrak). Then one trains a standard sequence tagger(!) to predict exactly these 1-to-many alignments. For example, flog->fliege (your example on l.613): First align as in "f-l-o-g / f-l-ie-ge". Now use any tagger (could use an LSTM, if you like) to predict "f-l-ie-ge" (sequence of length 4) from "f-l-o-g" (sequence of length 4). Such an approach may have been suggested in multiple papers; one reference could be [*, Section 4.2] below. My two questions here are: 2a) How does your approach differ from this rather simple idea? 2b) Why did you not include it as a baseline? Further issues: 3) It's really a pity that you only tested on morphology, because there are many other interesting monotonic seq2seq tasks, and you could have shown your system's superiority by evaluating on these, given that you explicitly model monotonicity (cf. also [*]). 4) You perform "on par or better" (l.791). There seems to be a general cognitive bias among NLP researchers to map instances where they perform worse to "on par" and all the rest to "better". I think this wording should be corrected, but otherwise I'm fine with the experimental results. 5) You say little about your linguistic features: From Fig. 1, I infer that they include POS, etc. 5a) Where did you take these features from? 5b) Is it possible that these are responsible for your better performance in some cases, rather than the monotonicity constraints? Minor points: 6) Equation (3): please re-write $NN$ as $\text{NN}$ or similar 7) l.231 "Where" should be lower case 8) l.237 and many more: $x_1\ldots x_n$. As far as I know, the math community recommends writing $x_1,\ldots,x_n$ but $x_1\cdots x_n$.
That is, dots should be on the same level as surrounding symbols. 9) Figure 1: is it really necessary to use cyrillic font? I can't even address your example here, because I don't have your fonts. 10) l.437: should be "these" [*] @InProceedings{schnober-EtAl:2016:COLING, author = {Schnober, Carsten and Eger, Steffen and Do Dinh, Erik-L\^{a}n and Gurevych, Iryna}, title = {Still not there? Comparing Traditional Sequence-to-Sequence Models to Encoder-Decoder Neural Networks on Monotone String Translation Tasks}, booktitle = {Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers}, month = {December}, year = {2016}, address = {Osaka, Japan}, publisher = {The COLING 2016 Organizing Committee}, pages = {1703--1714}, url = {http://aclweb.org/anthology/C16-1160} } AFTER AUTHOR RESPONSE Thanks for the clarifications. I think your alignments got mixed up in the response somehow (maybe a coding issue), but I think you're aligning 1-0, 0-1, 1-1, and later make many-to-many alignments from these. I know that you compare to Nicolai, Cherry and Kondrak (2015) but my question would have rather been: why not use 1-x (x in 0,1,2) alignments as in Schnober et al. and then train a neural tagger on these (e.g. BiLSTM). I wonder how much your results would have differed from such a rather simple baseline. ( A tagger is a monotone model to start with and given the monotone alignments, everything stays monotone. In contrast, you start out with a more general model and then put hard monotonicity constraints on this ...) NOTES FROM AC Also quite relevant is Cohn et al. (2016), http://www.aclweb.org/anthology/N16-1102 . Isn't your architecture also related to methods like the Stack LSTM, which similarly predicts a sequence of actions that modify or annotate an input? Do you think you lose anything by using a greedy alignment, in contrast to Rastogi et al. (2016), which also has hard monotonic attention but sums over all alignments?
4) You perform "on par or better" (l.791). There seems to be a general cognitive bias among NLP researchers to map instances where they perform worse to "on par" and all the rest to "better". I think this wording should be corrected, but otherwise I'm fine with the experimental results.
ARR_2022_12_review
ARR_2022
I feel the design of NVSB and some experimental results need more explanation (more information in the section below). 1. In Figure 1, given that the experimental dataset has paired amateur and professional recordings from the same singer, what are the main rationales for (a) having a separate timbre encoder module and (b) having SADTW take the outputs of the content encoder (and not the timbre encoder) as input? 2. For results shown in Table 3, how to interpret: (a) For Chinese MOS-Q, NVSB is comparable to GT Mel A. (b) For Chinese and English MOS-V, Baseline and NVSB have overlapping 95% CI.
2. For results shown in Table 3, how to interpret: (a) For Chinese MOS-Q, NVSB is comparable to GT Mel A. (b) For Chinese and English MOS-V, Baseline and NVSB have overlapping 95% CI.
ARR_2022_311_review
ARR_2022
__1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy among 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing. Why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full datasets, rather than only in the low-resource regime, are also encouraged. __4. Lack of some technical details:__ __4.1__. Is the smoothed representation all calculated based on pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which looks inconsistent. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they correspond to degraded performance. 3. I suggest that Listing 1 reflect the process of sending interpolated_repr into the task model to get the final representation.
1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which looks inconsistent.
ACL_2017_818_review
ACL_2017
1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technical details, without clearly explaining the overall approach and why it is a good idea. 2) The experiments and the discussion need to be finished. In particular, there is no discussion of the results of one of the two tasks tackled (lower half of Table 2), and there is one obvious experiment missing: Variant B of the authors' model gives much better results on the first task than Variant A, but for the second task only Variant A is tested -- and indeed it doesn't improve over the baseline. - General Discussion: The paper needs quite a bit of work before it is ready for publication. - Detailed comments: 026 five dimensions, not six Figure 1, caption: "implies physical relations": how do you know which physical relations it implies? Figure 1 and 113-114: what you are trying to do, it looks to me, is essentially to extract lexical entailments (as defined in formal semantics; see e.g. Dowty 1991) for verbs. Could you please explicit link to that literature? Dowty, David. " Thematic proto-roles and argument selection." Language (1991): 547-619. 135 around here you should explain the key insight of your approach: why and how does doing joint inference over these two pieces of information help overcome reporting bias? 141 "values" ==> "value"? 143 please also consider work on multimodal distributional semantics, here and/or in the related work section. The following two papers are particularly related to your goals: Bruni, Elia, et al. "Distributional semantics in technicolor." Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, 2012. Silberer, Carina, Vittorio Ferrari, and Mirella Lapata. " Models of Semantic Representation with Visual Attributes." ACL (1). 2013. 146 please clarify that your contribution is the specific task and approach -- commonsense knowledge extraction from language is long-standing task. 152 it is not clear what "grounded" means at this point Section 2.1: why these dimensions, and how did you choose them? 177 explain terms "pre-condition" and "post-condition", and how they are relevant here 197-198 an example of the full distribution for an item (obtained by the model, or crowd-sourced, or "ideal") would help. Figure 2. I don't really see the "x is slower than y" part: it seems to me like this is related to the distinction, in formal semantics, between stage-level vs. individual-level predicates: when a person throws a ball, the ball is faster than the person (stage-level) but it's not true in general that balls are faster than people (individual-level). I guess this is related to the pre-condition vs. post-condition issue. Please spell out the type of information that you want to extract. 248 "Above definition": determiner missing Section 3 "Action verbs": Which 50 classes do you pick, and you do you choose them? Are the verbs that you pick all explicitly tagged as action verbs by Levin? 306ff What are "action frames"? How do you pick them? 326 How do you know whether the frame is under- or over-generating? Table 1: are the partitions made by frame, by verb, or how? That is, do you reuse verbs or frames across partitions? 
Also, proportions are given for 2 cases (2/3 and 3/3 agreement), whereas counts are only given for one case; which? 336 "with... PMI": something missing (threshold?) 371 did you do this partitions randomly? 376 "rate *the* general relationship" 378 "knowledge dimension we choose": ? ( how do you choose which dimensions you will annotate for each frame?) Section 4 What is a factor graph? Please give enough background on factor graphs for a CL audience to be able to follow your approach. What are substrates, and what is the role of factors? How is the factor graph different from a standard graph? More generally, at the beginning of section 4 you should give a higher level description of how your model works and why it is a good idea. 420 "both classes of knowledge": antecedent missing. 421 "object first type" 445 so far you have been only talking about object pairs and verbs, and suddenly selectional preference factors pop in. They seem to be a crucial part of your model -- introduce earlier? In any case, I didn't understand their role. 461 "also"? 471 where do you get verb-level similarities from? Figure 3: I find the figure totally unintelligible. Maybe if the text was clearer it would be interpretable, but maybe you can think whether you can find a way to convey your model a bit more intuitively. Also, make sure that it is readable in black-and-white, as per ACL submission instructions. 598 define term "message" and its role in the factor graph. 621 why do you need a "soft 1" instead of a hard 1? 647ff you need to provide more details about the EMB-MAXENT classifier (how did you train it, what was the input data, how was it encoded), and also explain why it is an appropriate baseline. 654 "more skimp seed knowledge": ? 659 here and in 681, problem with table reference (should be Table 2). 664ff I like the thought but I'm not sure the example is the right one: in what sense is the entity larger than the revolution? Also, "larger" is not the same as "stronger". 681 as mentioned above, you should discuss the results for the task of inferring knowledge on objects, and also include results for model (B) (incidentally, it would be better if you used the same terminology for the model in Tables 1 and 2) 778 "latent in verbs": why don't you mention objects here? 781 "both tasks": antecedent missing The references should be checked for format, e.g. Grice, Sorower et al for capitalization, the verbnet reference for bibliographic details.
781 "both tasks": antecedent missing The references should be checked for format, e.g. Grice, Sorower et al for capitalization, the verbnet reference for bibliographic details.
ARR_2022_317_review
ARR_2022
- Lack of novelty: - Adversarial attacks by perturbing text have been done on many NLP models and image-text models. This is nicely summarized in the related work of this paper. The only new effort is to take similar ideas and apply them to video-text models. - Checklist (Ribeiro et al., ACL 2020) has shown many ways to stress test NLP models and evaluate them. Video-text models could also be tested on some of those dimensions, for instance on changing NER. - A type of perturbation that is specific to video-text models (and probably not that important to image-text or text-only models) would be interesting to see. Otherwise, this work just looks like using an already existing method on this new problem (video-text), which is just coming up. - Is there a way to take any clue from the video to create harder negatives?
- Lack of novelty: Adversarial attacks by perturbing text have been done on many NLP models and image-text models. This is nicely summarized in the related work of this paper. The only new effort is to take similar ideas and apply them to video-text models.
ACL_2017_31_review
ACL_2017
] See below for details of the following weaknesses: - Novelties of the paper are relatively unclear. - No detailed error analysis is provided. - A feature comparison with prior work is shallow, missing two relevant papers. - The paper has several obscure descriptions, including typos. [General Discussion:] The paper would be more impactful if it states novelties more explicitly. Is the paper presenting the first neural network based approach for event factuality identification? If this is the case, please state that. The paper would crystallize remaining challenges in event factuality identification and facilitate future research better if it provides detailed error analysis regarding the results of Table 3 and 4. What are dominant sources of errors made by the best system BiLSTM+CNN(Att)? What impacts do errors in basic factor extraction (Table 3) have on the overall performance of factuality identification (Table 4)? The analysis presented in Section 5.4 is more like a feature ablation study to show how useful some additional features are. The paper would be stronger if it compares with prior work in terms of features. Does the paper use any new features which have not been explored before? In other words, it is unclear whether main advantages of the proposed system come purely from deep learning, or from a combination of neural networks and some new unexplored features. As for feature comparison, the paper is missing two relevant papers: - Kenton Lee, Yoav Artzi, Yejin Choi and Luke Zettlemoyer. 2015 Event Detection and Factuality Assessment with Non-Expert Supervision. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1643-1648. - Sandeep Soni, Tanushree Mitra, Eric Gilbert and Jacob Eisenstein. 2014. Modeling Factuality Judgments in Social Media Text. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 415-420. The paper would be more understandable if more examples are given to illustrate the underspecified modality (U) and the underspecified polarity (u). There are two reasons for that. First, the definition of 'underspecified' is relatively unintuitive as compared to other classes such as 'probable' or 'positive'. Second, the examples would be more helpful to understand the difficulties of Uu detection reported in line 690-697. Among the seven examples (S1-S7), only S7 corresponds to Uu, and its explanation is quite limited to illustrate the difficulties. A minor comment is that the paper has several obscure descriptions, including typos, as shown below: - The explanations for features in Section 3.2 are somewhat intertwined and thus confusing. The section would be more coherently organized with more separate paragraphs dedicated to each of lexical features and sentence-level features, by: - (1) stating that the SIP feature comprises two features (i.e., lexical-level and sentence-level) and introduce their corresponding variables (l and c) *at the beginning*; - (2) moving the description of embeddings of the lexical feature in line 280-283 to the first paragraph; and - (3) presenting the last paragraph about relevant source identification in a separate subsection because it is not about SIP detection. - The title of Section 3 ('Baseline') is misleading. A more understandable title would be 'Basic Factor Extraction' or 'Basic Feature Extraction', because the section is about how to extract basic factors (features), not about a baseline end-to-end system for event factuality identification. 
- The presented neural network architectures would be more convincing if it describes how beneficial the attention mechanism is to the task. - Table 2 seems to show factuality statistics only for all sources. The table would be more informative along with Table 4 if it also shows factuality statistics for 'Author' and 'Embed'. - Table 4 would be more effective if the highest system performance with respect to each combination of the source and the factuality value is shown in boldface. - Section 4.1 says, "Aux_Words can describe the *syntactic* structures of sentences," whereas section 5.4 says, "they (auxiliary words) can reflect the *pragmatic* structures of sentences." These two claims do not consort with each other well, and neither of them seems adequate to summarize how useful the dependency relations 'aux' and 'mark' are for the task. - S7 seems to be another example to support the effectiveness of auxiliary words, but the explanation for S7 is thin, as compared to the one for S6. What is the auxiliary word for 'ensure' in S7? - Line 162: 'event go in S1' should be 'event go in S2'. - Line 315: 'in details' should be 'in detail'. - Line 719: 'in Section 4' should be 'in Section 4.1' to make it more specific. - Line 771: 'recent researches' should be 'recent research' or 'recent studies'. 'Research' is an uncountable noun. - Line 903: 'Factbank' should be 'FactBank'.
- The explanations for features in Section 3.2 are somewhat intertwined and thus confusing. The section would be more coherently organized with more separate paragraphs dedicated to each of lexical features and sentence-level features, by:
ACL_2017_779_review
ACL_2017
However, there are many points that need to be addressed before this paper is ready for publication. 1) Crucial information is missing: Can you flesh out more clearly how training and decoding happen in your training framework? I found out that the equations do not completely describe the approach. It might be useful to use a couple of examples to make your approach clearer. Also, how is the Monte Carlo sampling done? 2) Organization: The paper is not very well organized. For example, results are broken into several subsections, while they’d better be presented together. The organization of the tables is very confusing. Table 7 is referred to before Table 6. This made it difficult to read the results. 3) Inconclusive results: After reading the results section, it’s difficult to draw conclusions when, as the authors point out in their comparisons, this can be explained by the total size of the corpus involved in their methods (621). 4) Not so useful information: While I appreciate the fleshing out of the assumptions, I find that dedicating a whole section of the paper plus experimental results is a lot of space. - General Discussion: Other: 578: We observe that word-level models tend to have lower valid loss compared with sentence-level methods…. Is it valid to compare the loss from two different loss functions? Sec 3.2, the notations are not clear. What does script(Y) mean? How do we get p(y|x)? This is never explained. Eq 7 deserves some explanation, or is better removed. 320: What approach did you use? You should talk about that here. 392: Do you mean 2016? Nitty-gritty: 742: import => important 772: inline citation style 778: can significantly outperform 275: Assumption 2 needs to be rewritten … a target sentence y from x should be close to that from its counterpart z.
4) Not so useful information: While I appreciate the fleshing out of the assumptions, I find that dedicating a whole section of the paper plus experimental results is a lot of space.
ARR_2022_82_review
ARR_2022
- In the “Updating Facts” section, although the results seem to show that modifying the neurons using the word embeddings is effective, the paper lacks a discussion on this. It is not intuitive to me that there is a connection between a neuron at a middle layer and the word embeddings (which are used at the input layer). - Using integrated gradients to measure the attribution has been studied in existing papers. The paper also proposes post-processing steps to filter out the “false-positive” neurons, however, the paper doesn’t show how important these post-processing steps are. I think an ablation study may be needed. - The paper lacks details of experimental settings. For example, how are those hyperparameters ($t$, $p$, $\lambda_1$, etc.) tuned? In Table 5, why do “other relations” have a very different scale of perplexity compared to “erased relation” before erasing? Are “other relations” randomly selected? - The baseline method (i.e., using activation values as the attribution score) is widely used in previous studies. Although the paper empirically shows that the baseline is not as effective as the proposed method, I expect more discussion on why using activation values is not a good idea. - One limitation of this study is that the paper only focuses on single-word cloze queries (as discussed in the paper). - Figure 3: The illustration is not clear to me. Why are there two “40%” in the figure? - I was confused about whether the paper targets single-token cloze queries or multi-token ones. I did not see a clear clarification until reading the conclusion.
- Using integrated gradients to measure the attribution has been studied in existing papers. The paper also proposes post-processing steps to filter out the “false-positive” neurons, however, the paper doesn’t show how important these post-processing steps are. I think an ablation study may be needed.
ACL_2017_19_review
ACL_2017
But I have a few questions regarding finding the antecedent of a zero pronoun: 1. How will an antecedent be identified, when the prediction is a pronoun? The authors proposed a method by matching the head of noun phrases. It’s not clear how to handle the situation when the head word is not a pronoun. 2. What if the prediction is a noun that could not be found in the previous contents? 3. The system achieves great results on the standard data set. I’m curious whether it is possible to evaluate the system in two steps. The first step is to evaluate the performance of the model prediction, i.e. to recover the dropped zero pronoun into a word; the second step is to evaluate how well the system works on finding an antecedent. I’m also curious why the authors decided to use an attention-based neural network. A few sentences to provide the reasons would be helpful for other researchers. A minor comment: In Figure 2, should it be s1, s2 … instead of d1, d2 ….? - General Discussion: Overall it is a great paper with innovative ideas and a solid experimental setup.
1. How will an antecedent be identified, when the prediction is a pronoun? The authors proposed a method by matching the head of noun phrases. It’s not clear how to handle the situation when the head word is not a pronoun.
ACL_2017_318_review
ACL_2017
1. Presentation and clarity: important details with respect to the proposed models are left out or poorly described (more details below). Otherwise, the paper generally reads fairly well; however, the manuscript would need to be improved if accepted. 2. The evaluation on the word analogy task seems a bit unfair given that the semantic relations are explicitly encoded by the sememes, as the authors themselves point out (more details below). - General Discussion: 1. The authors stress the importance of accounting for polysemy and learning sense-specific representations. While polysemy is taken into account by calculating sense distributions for words in particular contexts in the learning procedure, the evaluation tasks are entirely context-independent, which means that, ultimately, there is only one vector per word -- or at least this is what is evaluated. Instead, word sense disambiguation and sememe information are used for improving the learning of word representations. This needs to be clarified in the paper. 2. It is not clear how the sememe embeddings are learned and the description of the SSA model seems to assume the pre-existence of sememe embeddings. This is important for understanding the subsequent models. Do the SAC and SAT models require pre-training of sememe embeddings? 3. It is unclear how the proposed models compare to models that only consider different senses but not sememes. Perhaps the MST baseline is an example of such a model? If so, this is not sufficiently described (emphasis is instead put on soft vs. hard word sense disambiguation). The paper would be stronger with the inclusion of more baselines based on related work. 4. A reasonable argument is made that the proposed models are particularly useful for learning representations for low-frequency words (by mapping words to a smaller set of sememes that are shared by sets of words). Unfortunately, no empirical evidence is provided to test the hypothesis. It would have been interesting for the authors to look deeper into this. This aspect also does not seem to explain the improvements much since, e.g., the word similarity data sets contain frequent word pairs. 5. Related to the above point, the improvement gains seem more attributable to the incorporation of sememe information than word sense disambiguation in the learning procedure. As mentioned earlier, the evaluation involves only the use of context-independent word representations. Even if the method allows for learning sememe- and sense-specific representations, they would have to be aggregated to carry out the evaluation task. 6. The example illustrating HowNet (Figure 1) is not entirely clear, especially the modifiers of "computer". 7. It says that the models are trained using their best parameters. How exactly are these determined? It is also unclear how K is set -- is it optimized for each model or is it randomly chosen for each target word observation? Finally, what is the motivation for setting K' to 2?
3. It is unclear how the proposed models compare to models that only consider different senses but not sememes. Perhaps the MST baseline is an example of such a model? If so, this is not sufficiently described (emphasis is instead put on soft vs. hard word sense disambiguation). The paper would be stronger with the inclusion of more baselines based on related work.
ARR_2022_227_review
ARR_2022
1. The case made for adopting the proposed strategy for a new automated evaluation paradigm - auto-rewrite (where the questions that are not valid due to a coreference resolution failure in terms of the previous answer get their entity replaced to be made consistent with the gold conversational history) - seems weak. While the proposed strategy does seem to do better in terms of being closer to how humans evaluated the 4 models (all in the context of one specific English dataset), it is not clear how the proposed strategy - a) does better than the previously proposed strategy of using model-predicted history (auto-pred). Looking at the comparison results for different evaluations - in terms of table 1, there definitely does not seem to be much difference between the two strategies (auto-rewrite and auto-pred). In fig 5, for some (2/6) pairs, the pred-history strategy has higher agreement than the proposed auto-rewrite strategy while they are all at the same agreement for 1/6 pairs. b) gets to the fundamental problem with automated evaluation raised in the paper, which is that "when placed in realistic settings, the models never have access to the ground truth (gold answers) and are only exposed to the conversational history and the passage." The proposed strategy seems to need gold answers as well, which is incompatible with the real-world use case. The previously proposed auto-pred strategy, however, uses only the questions and the model's own predictions to form the conversational history - which seems to be more compatible with the real-world use case. In summary, it is not clear why the proposed new way of automatically evaluating CQA systems is better or should be adopted as opposed to the previously proposed automated evaluation method of using a model's predictions as the conversational history (auto-pred), and the comparison between the results for these two automated strategies seems to be a missing exploration and discussion. Questions to the authors (which also act as suggestions): Q1. - Line 151: "four representative CQA models" - what does representative mean here? representative in what sense? In terms of types or architectures of models? This needs clarification and takes on importance because the discrepancy, in terms of how models get evaluated on human vs automated evaluation, depends on these four models in a sense. Q2. Line 196: "We noticed that the annotators are biased when evaluating the correctness of answers" - are any statistics on this available? Q3. Section 3.1: For Mechanical Turk crowdsourcing work, what was the compensation rate for the annotators? This should be mentioned, if not in the main text, then add to appendix and point to it in the main text. Also, following the work in Card et al (2020) ("With Little Power Comes Great Responsibility.") - were there any steps taken to ensure the human annotation collection study was appropriately powered? ( If not, consider noting or discussing this somewhere in the paper as it helps with understanding the validity of human experiments) Q4. Lines 264-265: "The gap between HAM and ExCorD is significant in Auto-Gold" - how is significance measured here? Q5. Lines 360-364: "We determine whether e∗_j = e_j by checking if F1(s∗_{j,1}, s_{j,1}) > 0 .... .... as long as their first mentions have word overlap." Two questions here - 5a. It is not clear why word overlap was used and not just an exact match here? 
What about cases where there is some word overlap but the two entities are indeed different, and therefore, the question is invalid (in terms of coreference resolution) but deemed valid? 5b. How accurate is this invalid question detection strategy? In case this has not already been measured, perhaps a sample of instances where predicted history invalidates questions via unresolved coreference (marked by humans) can be used to then detect if the automated method catches these instances accurately. Having some idea of how well invalid question detection happens is needed to get a sense of if or how many of the invalid questions will get rewritten. Comments, suggestions, typos: - Line 031: "has the promise to revolutionize" - this should be substantiated further, seems quite vague. - Line 048: "extremely competitive performance of" - what is 'performance' for these systems? ideally be specific since, at this point in the paper, we do not know what is being measured, and 'extremely competitive' is also quite vague. - The abstract is written well and invokes intrigue early - could potentially be made even better if, for "evaluating with gold answers is inconsistent with human evaluation" - an example of the inconsistency, such as models get ranked differently is also given there. - Line 033: "With recent development of large-scale datasets" -> the* recent development, but more importantly - which languages are these datasets in? And for this overall work on CQA, the language which is focused on should be mentioned early on in the introduction and ideally in the abstract itself. - Line 147: "more modeling work has been done than in free-form question answering" - potential typo, maybe it should be "maybe more modeling work has been done 'in that'" - where that refers to extractive QA? - Line 222: "In total, we collected 1,446 human-machine con- versations and 15,059 question-answer pairs" - suggestion: It could be reasserted here that this dataset will be released as this collection of conversations is an important resource and contribution and does not appear to have been highlighted as much as it could. - Figure 2: It is a bit unintuitive and confusing to see the two y-axes with different ranges and interpret what it means for the different model evaluations. Can the same ranges on the y-axes be used at least even if the two metrics are different? Perhaps the F1 can use the same range as Accuracy - it would mean much smaller gold bars but hopefully, still get the point across without trying to keep two different ranges in our head? Still, the two measures are different - consider making two side-by-side plots instead if that is feasible instead of both evaluations represented in the same chart. - Lines 250-252: "the absolute numbers of human evaluation are much higher than those of automatic evaluations" - saying this seems a bit suspect - what does absolute accuracy numbers being higher than F1 scores mean? They are two different metrics altogether and should probably not be compared in this manner. Dropping this, the other implications still hold well and get the point across - the different ranking of certain models, and Auto-Gold conveying a gap between two models where Human Eval does not. - Line 348: "background 4, S∗ - latex styling suggestion, add footnote marker only right after the punctuation for that renders better with latex, so - "background,\footnote{} ..." in latex. - Footnote 4: 'empirically helpful' - should have a cite or something to back that there. 
- Related Work section: a suggestion that could make this section, and perhaps the broader work, stronger and more interesting to a wider audience is to connect this work to research on other NLP tasks that examines failures of popular automated evaluation strategies, i.e., metrics that fail to capture, or differ significantly from, how humans would evaluate systems in a real-world setting.
- The abstract is written well and invokes intrigue early - could potentially be made even better if, for "evaluating with gold answers is inconsistent with human evaluation" - an example of the inconsistency, such as models get ranked differently is also given there.
ACL_2017_818_review
ACL_2017
- I would have liked to see more examples of object pairs, action verbs, and predicted attribute relations. What are some interesting action verbs and corresponding attribute relations? The paper also lacks analysis/discussion of what kinds of mistakes their model makes. - The number of object pairs (3656) in the dataset is very small. How many distinct object categories are there? How scalable is this approach to a larger number of object pairs? - It's a bit unclear how the frame similarity factors and attributes similarity factors are selected. General Discussion/Suggestions: - The authors should discuss the following work and compare against mining attributes/attribute distributions directly and then getting a comparative measure. What are the advantages offered by the proposed method compared to a more direct approach? Extraction and approximation of numerical attributes from the Web, Dmitry Davidov, Ari Rappoport, ACL 2010. Minor typos: 1. In the abstract (line 026), the authors mention 'six' dimensions, but in the paper, there are only five. 2. line 248: Above --> The above 3. line 421: object first --> first 4. line 654: more skimp --> a smaller 5. line 729: selctional --> selectional
- It's a bit unclear how the frame similarity factors and attributes similarity factors are selected.
ACL_2017_699_review
ACL_2017
1. Some discussion is required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand how the stable points in the probabilistic metric space are obtained; otherwise, it may be tough to reproduce the results. 2. The evaluation shows that the current system (which extracts both "present" and "absent" keyphrases) is evaluated against baselines (which contain only "present"-type keyphrases). There is no direct comparison of the performance of the current system w.r.t. other state-of-the-art/benchmark systems on only "present"-type keyphrases. It is important to note that local phrases (keyphrases) are also important for the document, and the experiments do not discuss this explicitly. It would be interesting to see the impact of the RNN- and CopyRNN-based models on the automatic extraction of local or "present"-type keyphrases. 3. The impact of document size on keyphrase extraction is also an important point. The published results of [1] (see reference below) are better than those of the current system, by a sufficiently large margin, on the Inspec (Hulth, 2003) abstracts dataset. 4. It is reported that the current system uses 527,830 documents for training, while 40,000 publications are held out for training baselines. Why are all publications not used in training the baselines? Additionally, the topical details of the dataset (527,830 scientific documents) used in training RNN and CopyRNN are also missing. This may affect the chances of reproducing the results. 5. The current system captures semantics through RNN-based models, so it would be better to compare it against systems that also capture semantics; Ref-[2] could be a strong baseline for comparison. Suggestions to improve: 1. From the example given in Figure 1, it seems that all the "absent"-type keyphrases are actually topical phrases, for example "video search", "video retrieval", "video indexing" and "relevance ranking". These all define the domain/sub-domain/topics of the document. In this case, it would be interesting to see the results (and it would help in evaluating "absent"-type keyphrases) if we identified all the topical phrases of the entire corpus using tf-idf and related the document to the high-ranked extracted topical phrases (using Normalized Google Distance, PMI, etc.), as shown in the sketch below. Similar efforts have already been applied in several query expansion techniques (with the aim of relating the document to the query when matching terms are absent from the document). Reference: 1. Liu, Zhiyuan, Peng Li, Yabin Zheng, and Maosong Sun. 2009b. Clustering to find exemplar terms for keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 257–266. 2. Zhang, Q., Wang, Y., Gong, Y., & Huang, X. (2016). Keyphrase extraction using deep recurrent neural networks on Twitter. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (pp. 836-845).
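To make the PMI part of this suggestion concrete, here is a minimal sketch (the counts and names are illustrative assumptions, not taken from the reviewed paper) of scoring the association between a document term and a candidate topical phrase:

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information from document co-occurrence counts."""
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))

# e.g. 40 documents mention both "video" and "video retrieval",
# 200 mention "video", 60 mention "video retrieval", out of 10,000 documents.
print(pmi(count_xy=40, count_x=200, count_y=60, total=10_000))
```

Normalized Google Distance would play an analogous role as an association measure between the document's terms and corpus-level topical phrases.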
1. Some discussion is required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand how the stable points in the probabilistic metric space are obtained; otherwise, it may be tough to reproduce the results.
ACL_2017_818_review
ACL_2017
1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technical details, without clearly explaining the overall approach and why it is a good idea. 2) The experiments and the discussion need to be finished. In particular, there is no discussion of the results of one of the two tasks tackled (lower half of Table 2), and there is one obvious experiment missing: Variant B of the authors' model gives much better results on the first task than Variant A, but for the second task only Variant A is tested -- and indeed it doesn't improve over the baseline. - General Discussion: The paper needs quite a bit of work before it is ready for publication. - Detailed comments: 026 five dimensions, not six Figure 1, caption: "implies physical relations": how do you know which physical relations it implies? Figure 1 and 113-114: what you are trying to do, it looks to me, is essentially to extract lexical entailments (as defined in formal semantics; see e.g. Dowty 1991) for verbs. Could you please explicit link to that literature? Dowty, David. " Thematic proto-roles and argument selection." Language (1991): 547-619. 135 around here you should explain the key insight of your approach: why and how does doing joint inference over these two pieces of information help overcome reporting bias? 141 "values" ==> "value"? 143 please also consider work on multimodal distributional semantics, here and/or in the related work section. The following two papers are particularly related to your goals: Bruni, Elia, et al. "Distributional semantics in technicolor." Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, 2012. Silberer, Carina, Vittorio Ferrari, and Mirella Lapata. " Models of Semantic Representation with Visual Attributes." ACL (1). 2013. 146 please clarify that your contribution is the specific task and approach -- commonsense knowledge extraction from language is long-standing task. 152 it is not clear what "grounded" means at this point Section 2.1: why these dimensions, and how did you choose them? 177 explain terms "pre-condition" and "post-condition", and how they are relevant here 197-198 an example of the full distribution for an item (obtained by the model, or crowd-sourced, or "ideal") would help. Figure 2. I don't really see the "x is slower than y" part: it seems to me like this is related to the distinction, in formal semantics, between stage-level vs. individual-level predicates: when a person throws a ball, the ball is faster than the person (stage-level) but it's not true in general that balls are faster than people (individual-level). I guess this is related to the pre-condition vs. post-condition issue. Please spell out the type of information that you want to extract. 248 "Above definition": determiner missing Section 3 "Action verbs": Which 50 classes do you pick, and you do you choose them? Are the verbs that you pick all explicitly tagged as action verbs by Levin? 306ff What are "action frames"? How do you pick them? 326 How do you know whether the frame is under- or over-generating? Table 1: are the partitions made by frame, by verb, or how? That is, do you reuse verbs or frames across partitions? 
Also, proportions are given for 2 cases (2/3 and 3/3 agreement), whereas counts are only given for one case; which? 336 "with... PMI": something missing (threshold?) 371 did you do this partitions randomly? 376 "rate *the* general relationship" 378 "knowledge dimension we choose": ? ( how do you choose which dimensions you will annotate for each frame?) Section 4 What is a factor graph? Please give enough background on factor graphs for a CL audience to be able to follow your approach. What are substrates, and what is the role of factors? How is the factor graph different from a standard graph? More generally, at the beginning of section 4 you should give a higher level description of how your model works and why it is a good idea. 420 "both classes of knowledge": antecedent missing. 421 "object first type" 445 so far you have been only talking about object pairs and verbs, and suddenly selectional preference factors pop in. They seem to be a crucial part of your model -- introduce earlier? In any case, I didn't understand their role. 461 "also"? 471 where do you get verb-level similarities from? Figure 3: I find the figure totally unintelligible. Maybe if the text was clearer it would be interpretable, but maybe you can think whether you can find a way to convey your model a bit more intuitively. Also, make sure that it is readable in black-and-white, as per ACL submission instructions. 598 define term "message" and its role in the factor graph. 621 why do you need a "soft 1" instead of a hard 1? 647ff you need to provide more details about the EMB-MAXENT classifier (how did you train it, what was the input data, how was it encoded), and also explain why it is an appropriate baseline. 654 "more skimp seed knowledge": ? 659 here and in 681, problem with table reference (should be Table 2). 664ff I like the thought but I'm not sure the example is the right one: in what sense is the entity larger than the revolution? Also, "larger" is not the same as "stronger". 681 as mentioned above, you should discuss the results for the task of inferring knowledge on objects, and also include results for model (B) (incidentally, it would be better if you used the same terminology for the model in Tables 1 and 2) 778 "latent in verbs": why don't you mention objects here? 781 "both tasks": antecedent missing The references should be checked for format, e.g. Grice, Sorower et al for capitalization, the verbnet reference for bibliographic details.
681 as mentioned above, you should discuss the results for the task of inferring knowledge on objects, and also include results for model (B) (incidentally, it would be better if you used the same terminology for the model in Tables 1 and 2) 778 "latent in verbs": why don't you mention objects here?
ACL_2017_49_review
ACL_2017
As always, more could be done in the experiments section to strengthen the case for chunk-based models. For example, Table 3 indicates good results for Model 2 and Model 3 compared to previous papers, but a careful reader will wonder whether these improvements come from switching from LSTMs to GRUs. In other words, it would be good to see the GRU tree-to-sequence result to verify that the chunk-based approach is still best. Another important aspect is the lack of ensembling results. The authors put a lot of emphasis is claiming that this is the best single NMT model ever published. While this is probably true, in the end the best WAT system for Eng-Jap is at 38.20 (if I'm reading the table correctly) - it's an ensemble of 3. If the authors were able to report that their 3-way chunk-based ensemble comes top of the table, then this paper could have a much stronger impact. Finally, Table 3 would be more interesting if it included decoding times. The authors mention briefly that the character-based model is less time-consuming (presumably based on Eriguchi et al.'16), but no cite is provided, and no numbers from chunk-based decoding are reported either. Is the chunk-based model faster or slower than word-based? Similar? Who know... Adding a column to Table 3 with decoding times would give more value to the paper. - General Discussion: Overall I think the paper is interesting and worth publishing. I have minor comments and suggestions to the authors about how to improve their presentation (in my opinion, of course). - I think they should clearly state early on that the chunks are supplied externally - in other words, that the model does not learn how to chunk. This only became apparent to me when reading about CaboCha on page 6 - I don't think it's mentioned earlier, and it is important. - I don't see why the authors contrast against the char-based baseline so often in the text (at least a couple of times they boast a +4.68 BLEU gain). I don't think readers are bothered... Readers are interested in gains over the best baseline. - It would be good to add a bit more detail about the way UNKs are being handled by the neural decoder, or at least add a citation to the dictionary-based replacement strategy being used here. - The sentence in line 212 ("We train a GRU that encodes a source sentence into a single vector") is not strictly correct. The correct way would be to say that you do a bidirectional encoder that encodes the source sentence into a set of vectors... at least, that's what I see in Figure 2. - The motivating example of lines 69-87 is a bit weird. Does "you" depend on "bite"? Or does it depend on the source side? Because if it doesn't depend on "bite", then the argument that this is a long-dependency problem doesn't really apply.
- The sentence in line 212 ("We train a GRU that encodes a source sentence into a single vector") is not strictly correct. The correct way would be to say that you do a bidirectional encoder that encodes the source sentence into a set of vectors... at least, that's what I see in Figure 2.
ACL_2017_494_review
ACL_2017
- fairly straightforward extension of existing retrofitting work - would be nice to see some additional baselines (e.g. character embeddings) - General Discussion: The paper describes "morph-fitting", a type of retrofitting for vector spaces that focuses specifically on incorporating morphological constraints into the vector space. The framework is based on the idea of "attract" and "repel" constraints, where attract constraints are used to pull morphological variations close together (e.g. look/looking) and repel constraints are used to push derivational antonyms apart (e.g. responsible/irresponsible). They test their algorithm on multiple different vector spaces and several languages, and show consistent improvements on intrinsic evaluation (SimLex-999 and SimVerb-3500). They also test on the extrinsic task of dialogue state tracking, and again demonstrate measurable improvements over using morphologically-unaware word embeddings. I think this is a very nice paper. It is a simple and clean way to incorporate linguistic knowledge into distributional models of semantics, and the empirical results are very convincing. I have some questions/comments below, but nothing that I feel should prevent it from being published. - Comments for Authors 1) I don't really understand the need for the morph-simlex evaluation set. It seems a bit suspect to create a dataset using the same algorithm that you ultimately aim to evaluate. It seems to me a no-brainer that your model will do well on a dataset that was constructed by making the same assumptions the model makes. I don't think you need to include this dataset at all, since it is a potentially erroneous evaluation that can cause confusion, and your results are convincing enough on the standard datasets. 2) I really liked the morph-fix baseline, thank you for including that. I would have liked to see a baseline based on character embeddings, since this seems to be the most fashionable way, currently, to side-step dealing with morphological variation. You mentioned it in the related work, but it would be better to actually compare against it empirically. 3) Ideally, we would have a vector space where morphological variants are just close together, but where we can assign specific semantics to the different inflections. Do you have any evidence that the geometry of the space you end up with is meaningful? E.g. does "looking" - "look" + "walk" = "walking"? It would be nice to have some analysis suggesting that morph-fitting results in a more meaningful space, not just better embeddings.
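To make question 3 concrete, here is a minimal sketch of the kind of geometry check the reviewer is asking for (the toy vectors and word list are purely illustrative; in practice one would load the morph-fitted embeddings from disk):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(emb, a, b, c):
    """Return the word whose vector is most similar to emb[a] - emb[b] + emb[c],
    excluding the three query words themselves."""
    target = emb[a] - emb[b] + emb[c]
    candidates = {w: cosine(target, v) for w, v in emb.items() if w not in {a, b, c}}
    return max(candidates, key=candidates.get)

# Toy random embeddings for illustration only.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["look", "looking", "walk", "walking", "run"]}

print(analogy(emb, "looking", "look", "walk"))  # ideally "walking" in a meaningful space
```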
- fairly straightforward extension of existing retrofitting work - would be nice to see some additional baselines (e.g. character embeddings) -
ARR_2022_68_review
ARR_2022
1. Despite the well-motivated problem formulation, the simulation is not realistic. The author does not really collect feedback from human users but derives it from labeled data. One can imagine that users can find out that returned answers are contrary to commonsense. For instance, one can know that “Tokyo” is definitely a wrong answer to the question “What is the capital of South Africa?”. However, it is not very reasonable to assume that the users are knowledgeable enough to provide both positive and negative feedback. If so, why do they need to ask QA models? And what is the difference between collecting feedback and labeling data? In conclusion, it would be more realistic to assume that only a small portion of the negative feedback is trustworthy, and there may be little or no reliable positive feedback. According to the experimental results, however, 20% feedback perturbation makes the proposed method fail. Therefore, the experimental results cannot support the claim made by the authors. 2. There is a serious issue of missing related work. As mentioned above, Campos et al. (2020) has already investigated using user feedback to fine-tune deployed QA models. They also derive feedback from gold labels and conduct experiments with both in-domain and out-of-domain evaluation. The proposed methods are also similar: upweighting or downweighting the likelihood of the predicted answer according to the feedback. Moreover, Campos et al. (2020) has a more reasonable formulation, where there can be multiple pieces of feedback for a given pair of question and predicted answer. 3. The adopted baseline models are weak. First of all, the author does not compare to Campos et al. (2020), which also uses feedback in QA tasks. Second, they do not compare with other domain adaptation methods, such as the work cited in Section 8. Line 277: “The may be attributed…” -> “This may be attributed…
3. The adopted baseline models are weak. First of all, the author does not compare to Campos et al. (2020), which also uses feedback in QA tasks. Second, they do not compare with other domain adaptation methods, such as the work cited in Section 8. Line 277: “The may be attributed…” -> “This may be attributed…
ARR_2022_113_review
ARR_2022
The methodology part is a little bit unclear. The authors could describe more clearly how the depth-first path completion really works using Figure 3. Also, I'm not sure if the ZIP algorithm is proposed by the authors, and I am also confused about how the ZIP algorithm handles multiple-sequence cases. - Figure 2: it is not clear what "merge target" means. If possible, you may use a shorter sentence. - Line 113 (right column), will the lattice graph size explode? For a larger dataset, it may be impossible to just get the lattice graph, am I right? How should you handle that case? - Algorithm 1, steps 4 and 5: you may need to give the detailed steps of *isRecomb* and *doRecomb* in the appendix. - Line 154 left, "including that it optimizes for the wrong objective". Can you clearly state which objective, and why the beam search algorithm is wrong? Beam search is a greedy algorithm that can recover the best output with high probability. - For the ZIP method, one thing unclear to me is how you combine multiple sequences if they have shared suffixes of different lengths. - Line 377, is BFSZIP an existing work? If so, you need to cite their work. - In figure 5, the y-axis label may use "Exact Match ratio" directly. - Line 409, could you cite the "R2" metric? - Appendix A, the authors state "better model score cannot result in better hypothesis". You should state clearly what ideal hypothesis you want. "a near-optimal model score" - this sentence is unclear to me, could you explain in detail? - In line 906, it is clear from the previous papers that beam search results lack diversity and increasing the beam size does not work. Can you simplify the paragraph?
- In figure 5, the y-axis label may use "Exact Match ratio" directly.
ARR_2022_149_review
ARR_2022
- The attribute-based approach can be useful if the attribute is given. This limits the application of the proposed approach if no attribute is given but the text is implicitly offensive. - It is not described whether the knowledge bases that are incorporated are free from societal biases, or whether the issue is unaffected by such a restriction. Comments - I like attacking implicitly offensive texts with reasoning chains, but I am not yet convinced by the example of Fig. 1. If other context such as 'S1 is fat/poor' is not given, then the conversation between S1 and S2 seems quite natural. The addressee may ask whether book clubs provide free food, without offensive intention. If S2's utterance is to be judged 'implicitly offensive', then one of the reasoning chains, such as 'you are fat/poor', should be provided as context. If I understood correctly, I recommend that the authors change the example to a more clear-cut and unambiguous one. - I found some issues in the provided example: i) AIR is still written as ASR, and ii) there are some empty chains in the full file. I hope this will be checked in the final release. Suggestions - It would be nice if the boundary between explicitly and implicitly offensive texts were stated clearly. Is a statement deemed explicit if it contains specific terms and is offensive? Or specific expressions / sentence structures? - Please be more specific in the 'Chain of Reasoning' section, especially line 276. - Please describe the MNLI corpus in more detail, and the reason why the dataset is used to train the entailment system. Typos - TweetEval << two 'L's in line 349
- It is not described whether the knowledge bases that are incorporated are free from societal biases, or whether the issue is unaffected by such a restriction. Comments - I like attacking implicitly offensive texts with reasoning chains, but I am not yet convinced by the example of Fig.
ACL_2017_365_review
ACL_2017
1) Instead of arguing that the MTL approach replaces the attention mechanism, I think the authors should investigate why attention did not work on MTL, and perhaps modify the attention mechanism so that it would not harm performance. 2) I think the authors should reference past seq2seq MTL work, such as [2] and [3]. The MTL work in [2] also worked on non-attention seq2seq models. 3) This paper only tested on one German historical text data set of 44 documents. It would be interesting if the authors can evaluate the same approach in another language or data set. References: [1] Allen Schmaltz, Yoon Kim, Alexander M. Rush, and Stuart Shieber. 2016. Sentence-level grammatical error identification as sequence-to-sequence correction. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications. [2] Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Lukasz Kaiser. Multi-task Sequence to Sequence Learning. ICLR’16. [3] Dong, Daxiang, Wu, Hua, He, Wei, Yu, Dianhai, and Wang, Haifeng. Multi-task learning for multiple language translation. ACL'15 --------------------------- Here is my reply to the authors' rebuttal: I am keeping my review score of 3, which means I do not object to accepting the paper. However, I am not raising my score for 2 reasons: - the authors did not respond to my questions about other papers on seq2seq MTL, which also avoided using attention mechanism. So in terms of novelty, the main novelty lies in applying it to text normalization. - it is always easier to show something (i.e. attention in seq2seq MTL) is not working, but the value would lie in finding out why it fails and changing the attention mechanism so that it works.
- it is always easier to show something (i.e. attention in seq2seq MTL) is not working, but the value would lie in finding out why it fails and changing the attention mechanism so that it works.
ARR_2022_202_review
ARR_2022
1. The write-up has many typos and some formulas/explanations are confusing. 2. The technical innovation of the proposed method is limited. The proposed objective function is basically a combination of two related works with tiny changes. 3. Reproducibility is not ideal, as some essential parts are not addressed in the paper, such as the training data. 4. More strong baselines should be included/discussed in the experiments. Comments 1. Line 18: it’s better to specify the exact tasks rather than stating several tasks in the abstract 2. Line 69: “current” -> “currently” 3. Line 112: margin-based loss can use one positive with MULTIPLE negatives. 4. Figure 1: the relation types are not explicitly modeled in this work, so the figure is kind of confusing. 5. Eq. 1: what is z_p? 6. Eq. 2: how do you convert BERT token embeddings into event embeddings? Concatenation? Any pooling? 7. Line 236-237: “conciser” -> “consider” 8. Line 209-211: I can understand what you mean but this part should be re-written. For example, “given an anchor event, we generate 3 positive samples with different dropout masks.” 9. Table 1: needs to include a random baseline to show the task difficulty 10. Experiments: what training data do you use for pre-training? 11. Table 1: do the baselines and the proposed method use the same training data for pre-training? How did you get the results for the baselines? Did you have your own implementation, or did you directly use their released embeddings/code? 12. Line 450: L_{pc} or L_{cp} in Eq. 7? 13. Table 2: what’s the difference between “SWCC w/o Prototype-based Clustering” and “BERT(InfoNCE)”? 14. Table 3: MCNC should have many strong baselines that are not compared here, such as the baselines in [1]. Can you justify the reason? 15. Can you provide an analysis on the impact of the number of augmented samples (e.g., z_{a1}, z_{a2}) here?
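For context on comments 3, 8, and 13, here is a minimal sketch of a generic InfoNCE-style contrastive objective with in-batch negatives and several dropout-based positive views (this is the textbook formulation, not necessarily the paper's exact Eq. 1; tensor shapes and names are illustrative):

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.05):
    """anchor, positive: (batch, dim) embeddings; for each anchor, the other
    in-batch positives act as negatives."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature      # (batch, batch) similarity matrix
    labels = torch.arange(a.size(0))      # the i-th positive matches the i-th anchor
    return F.cross_entropy(logits, labels)

# Several positive views per anchor (e.g. different dropout masks) can simply be
# averaged over; whether the paper does exactly this is an open question above.
anchor = torch.randn(8, 768)
views = [torch.randn(8, 768) for _ in range(3)]
loss = sum(info_nce(anchor, v) for v in views) / len(views)
```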
14. Table 3: MCNC should have many strong baselines that are not compared here, such as the baselines in [1]. Can you justify the reason?
ARR_2022_303_review
ARR_2022
- Citation type recognition is limited to two types –– dominant and reference –– which belies the complexity of the citation function, which is a significant line of research by other scholars. However, this is more of a choice of the research team in limiting the scope of the research. - Relies on supplemental space to contain the paper. The paper is not truly independent given this problem (esp. S3.1 reference to Sup. Fig. 6) and again later as noted with the model comparison and other details of the span vs. sentence investigation. - The previously reported SciBERT results were removed, but this somewhat exacerbates the earlier problem in v1 where the analyses of the outcomes of the models were too cursory and unsupported by deeper analyses. However, this isn't very fair to write as a weakness because the current paper simply doesn't mention this. - Only having two annotators for the dataset is a weakness, since it's not clear how the claims might generalise, given such a small sample. - A summary of the demographics is inferable but not mentioned in the text. Table 1's revised caption mentions 2.9K paragraphs as the size. This paper is a differential review given that I previously reviewed the work in the Dec 2021 version submitted to ARR. There are minor changes to the introduction section, lengthening the introduction and moving the related work section to the more traditional position, right after the introduction. There are no rebuttals nor notes from the authors to interpret what has been changed from the previous submission, which could have been furnished to ease the reviewer burden in checking (I had to read both the new and old manuscripts side by side and align them myself). Many figures could be wider given the margins for the column. I understand you want to preserve space to make up for the new additions to your manuscript, but the wider margins would help legibility. Minor changes were made in S3.3 to incorporate more connection to prior work. S4.1 (Model design) was elaborated into subsections, and S5.2.1 adds an introduction to LED. 462 RoBERTa-base
- Relies on supplemental space to contain the paper. The paper is not truly independent given this problem (esp. S3.1 reference to Sup. Fig. 6) and again later as noted with the model comparison and other details of the span vs. sentence investigation.
ACL_2017_71_review
ACL_2017
- The explanation of methods in some paragraphs is too detailed, there is no mention of other work, and it is repeated in the corresponding method sections; the authors committed to address this issue in the final version. - README file for the dataset [Authors committed to add a README file] - General Discussion: - Section 2.2 mentions examples of DBpedia properties that were used as features. Do the authors mean that all the properties have been used, or is there a subset? If the latter, please list them. In the authors' response, the authors explain this point in more detail, and I strongly believe that it is crucial to list all the features in detail in the final version for clarity and replicability of the paper. - In section 2.3 the authors use Lample et al.'s Bi-LSTM-CRF model; it might be beneficial to add that the input is word embeddings (similarly to Lample et al.). - Figure 3, KNs in the source language or in English? (since the mentions have been translated to English). In the authors' response, the authors stated that they will correct the figure. - Based on section 2.4 it seems that topical relatedness implies that some features are domain dependent. It would be helpful to see how much the domain-dependent features affect the performance. In the final version, the authors will add the performance results for the above-mentioned features, as mentioned in their response. - In related work, the authors make a strong connection to Sil and Florian's work, where they emphasize the supervised vs. unsupervised difference. The proposed approach is still supervised in the sense of training; however, the generation of training data doesn't involve human intervention.
- In section 2.3 the authors use Lample et al.'s Bi-LSTM-CRF model; it might be beneficial to add that the input is word embeddings (similarly to Lample et al.). - Figure 3, KNs in the source language or in English? (since the mentions have been translated to English). In the authors' response, the authors stated that they will correct the figure.
ARR_2022_311_review
ARR_2022
- The main weaknesses of the paper are the experiments, which is understandable for a short paper, but I'd still expect them to be stronger. First, the setting covers only the extremely low-resource regime, which is not the only case in which we want to use data augmentation in real-world applications. Also, sentence classification is an easier task. I feel like the proposed augmentation method has the potential to be used on more NLP tasks, which was unfortunately not shown. - The proposed mixup strategy is very simple (Equation 5); I wonder if the authors have tried other ways to interpolate the one-hot vector with the MLM-smoothed vector. - How does \lambda influence the performance? - How does the augmentation method compare to the other baselines with more training data?
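As a point of reference for the mixup question, here is a minimal sketch of one generic way to interpolate a one-hot token vector with an MLM-smoothed distribution (this is an illustrative reading, not necessarily the paper's exact Equation 5; names and values are assumptions):

```python
import numpy as np

def soft_token(token_id, mlm_probs, lam, vocab_size):
    """Interpolate the one-hot vector of the original token with the MLM's
    predicted distribution over the vocabulary at that position."""
    one_hot = np.zeros(vocab_size)
    one_hot[token_id] = 1.0
    return lam * one_hot + (1.0 - lam) * mlm_probs

vocab_size = 10
mlm_probs = np.full(vocab_size, 1.0 / vocab_size)  # stand-in for softmax(MLM logits)
mixed = soft_token(token_id=3, mlm_probs=mlm_probs, lam=0.7, vocab_size=vocab_size)
print(mixed.sum())  # still sums to 1, i.e. a valid soft token/label
```

Other interpolation schemes (e.g. geometric mixing or temperature-scaled MLM probabilities) would be natural alternatives to ask about.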
- The main weaknesses of the paper are the experiments, which is understandable for a short paper, but I'd still expect them to be stronger. First, the setting covers only the extremely low-resource regime, which is not the only case in which we want to use data augmentation in real-world applications. Also, sentence classification is an easier task. I feel like the proposed augmentation method has the potential to be used on more NLP tasks, which was unfortunately not shown.
ACL_2017_331_review
ACL_2017
The document-independent crowdsourcing annotation is unreliable. - General Discussion: This work creates a new benchmark corpus for concept-map-based MDS. It is well organized and written clearly. The supplementary materials are sufficient. I have two questions here. 1) Is it necessary to treat concept map extraction as a separate task? On the one hand, many generic summarization systems build a similar knowledge graph and then generate summaries accordingly. On the other hand, as the number of nodes increases, the concept map becomes increasingly hard to distinguish. Thus, the general summaries should be more readable. 2) How can you determine the importance of a concept independently of the documents? The definition of summarization is to preserve the main concepts of documents. Therefore, the importance of a concept highly depends on the documents. For example, in the given topic of coal mining accidents, assume there are two concepts: A) an instance of coal mining accidents and B) a cause of coal mining accidents. Then, if the document describes a series of coal mining accidents, A is more important than B. In comparison, if the document explores why coal mining accidents happen, B is more significant than A. Therefore, just given the topic and the two concepts A&B, it is impossible to judge their relative importance. I appreciate the great effort spent by the authors to build this dataset. However, this dataset is more like a knowledge graph based on common sense rather than a summary.
1) Is it necessary to treat concept map extraction as a separate task? On the one hand, many generic summarization systems build a similar knowledge graph and then generate summaries accordingly. On the other hand, as the number of nodes increases, the concept map becomes increasingly hard to distinguish. Thus, the general summaries should be more readable.
ARR_2022_112_review
ARR_2022
- The paper does not discuss the linguistic aspects of the dataset much. While their procedures are thoroughly described, the analyses are quite limited in that they do not reveal much about linguistic challenges in the dataset as compared to, for example, information extraction. The benefit of pretraining on the target domain seems to be the only consistent finding in their paper, and I believe this argument holds for the majority of datasets. - Relating to the first point, the authors should describe more about the traits of the experts and justify why annotation must be carried out by the experts, beyond its commercial value. Were the experts linguistic experts or domain experts? Was the annotation any different from what non-experts would do? Did it introduce any linguistic challenges? - The thorough description of the processes is definitely a strength; it might be easier to follow if the authors moved some of the details to an appendix. L23: I was not able to find in the paper that single-task models consistently outperformed multi-task models. Could you elaborate a bit more on this? Table 1: It would be easier if you explained "Type of Skills" in the caption. It might also be worth denoting the number of sentences for your dataset, as Jia et al (2018) looks larger than SkillSpan at first glance. Section 3: This section can be improved to better explain which of "skill", "knowledge" and "attitude" correspond to "hard" and "soft" knowledge. Soft skills are referred to as attitudes (L180), but this work seems to only consider "skill" and "knowledge" (L181-183 and Figure 1)? This contradicts the claim that SkillSpan incorporates both hard and soft knowledge. L403: I don't think it is the field's norm to call it multi-task learning when it is merely solving two sequence labeling problems at the same time. L527: Is there any justification as to why you suspected that domain-adaptive pre-training led to longer spans? L543: What is continuous pretraining? L543: "Pre-training" and "pretraining" are not spelled consistently.
- Relating to the first point, the authors should describe more about the traits of the experts and justify why annotation must be carried out by the experts, beyond its commercial value. Were the experts linguistic experts or domain experts? Was the annotation any different from what non-experts would do? Did it introduce any linguistic challenges?
ACL_2017_503_review
ACL_2017
Reranking use is not mentioned in the introduction. It would be a great news in NLP context if an Earley parser would run in linear time for NLP grammars (unlike special kinds of formal language grammars). Unfortunately, this result involves deep assumptions about the grammar and the kind of input. Linear complexity of parsing of an input graph seem right for a top-down deterministic grammars but the paper does not recognise the fact that an input string in NLP usually gives rise to an exponential number of graphs. In other words, the parsing complexity result must be interpreted in the context of graph validation or where one wants to find out a derivation of the graph, for example, for the purposes of graph transduction via synchronous derivations. To me, the paper should be more clear in this as a random reader may miss the difference between semantic parsing (from strings) and parsing of semantic parses (the current work). There does not seem to be any control of the linear order of 0-arity edges. It might be useful to mention that if the parser is extended to string inputs with the aim to find the (best?) hypergraph for a given external nodes, then the item representations of the subgraphs must also keep track of the covered 0-arity edges. This makes the string-parser variant exponential. - Easily correctable typos or textual problems: 1) Lines 102-106 is misleading. While intersection and probs are true, "such distribution" cannot refer to the discussion in the above. 2) line 173: I think you should rather talk about validation or recognition algorithms than parsing algorithms as "parsing" in NLP means usually completely different thing that is much more challenging due to the lexical and structural ambiguity. 3) lines 195-196 are unclear: what are the elements of att_G; in what sense they are pairwise distinct. Compare Example 1 where ext_G and att_G(e_1) are not disjoint sets. 4) l.206. Move *rank* definition earlier and remove redundancy. 5) l. 267: rather "immediately derives", perhaps. 6) 279: add "be" 7) l. 352: give an example of a nontrivial internal path. 8) l. 472: define a subgraph of a hypergraph 9) l. 417, l.418: since there are two propositions, you may want to tell how they contribute to what is quoted. 10) l. 458: add "for" Table: Axiom: this is only place where this is introduced as an axiom. Link to the text that says it is a trigger. - General Discussion: It might be useful to tell about MSOL graph languages and their yields, which are context-free string languages. What happens if the grammar is ambiguous and not top-down deterministic? What if there are exponential number of parses even for the input graph due to lexical ambiguity or some other reasons. How would the parser behave then? Wouldn't the given Earley recogniser actually be strictly polynomial to m or k ? Even a synchronous derivation of semantic graphs can miss some linguistic phenomena where a semantic distinction is expressed by different linguistic means. E.g. one language may add an affix to a verb when another language may express the same distinction by changing the object. I am suggesting that although AMR increases language independence in parses it may have such cross-lingual challenges. I did not fully understand the role of the marker in subgraphs. It was elided later and not really used. l. 509-510: I already started to miss the remark of lines 644-647 at this point. It seems that the normal order is not unique. Can you confirm this? 
It is nice that def 7, cond 1 introduces lexical anchors to predictions. Compare the anchors in lexicalized grammars. l. 760. Are you sure that non-crossing links do not occur when parsing linearized sentences to semantic graphs? - Significant questions to the Authors: Linear complexity of parsing of an input graph seem right for a top-down deterministic grammars but the paper does not recognise the fact that an input string in NLP usually gives rise to an exponential number of graphs. In other words, the parsing complexity result must be interpreted in the context of graph validation or where one wants to find out a derivation of the graph, for example, for the purposes of graph transduction via synchronous derivations. What would you say about parsing complexity in the case the RGG is a non-deterministic, possibly ambiguous regular tree grammar, but one is interested to use it to assign trees to frontier strings like a context-free grammar? Can one adapt the given Earley algorithm to this purpose (by guessing internal nodes and their edges)? Although this question might seem like a confusion, it is relevant in the NLP context. What prevents the RGGs to generate hypergraphs whose 0-arity edges (~words) are then linearised? What principle determines how they are linearised? Is the linear order determined by the Earley paths (and normal order used in productions) or can one consider an actual word order in strings of a natural language? There is no clear connection to (non)context-free string languages or sets of (non)projective dependency graphs used in semantic parsing. What is written on lines 757-758 is just misleading: Lines 757-758 mention that HRGs can be used to generate non-context-free languages. Are these graph languages or string languages? How an NLP expert should interpret the (implicit) fact that RGGs generate only context-free languages? Does this mean that the graphs are noncrossing graphs in the sense of Kuhlmann & Jonsson (2015)?
1) Lines 102-106 is misleading. While intersection and probs are true, "such distribution" cannot refer to the discussion in the above.
ARR_2022_233_review
ARR_2022
Additional details regarding the creation of the dataset would be helpful to resolve some doubts about its robustness. It is not stated whether the dataset will be publicly released. 1) Additional reference regarding explainable NLP Datasets: "Detecting and explaining unfairness in consumer contracts through memory networks" (Ruggeri et al 2021) 2) Some aspects of the creation of the dataset are unclear and the authors must address them. First of all, will the authors release the dataset or will it remain private? Are the guidelines used to train the annotators publicly available? Having a single person responsible for the check at the end of the first round may introduce biases. A better practice would be to have more than one checker for each problem, at least on a subset of the corpus, to measure the agreement between them and, if needed, adjust the guidelines. It is not clear how many problems are examined during the second round, and the agreement between the authors is not reported. It is not clear what is meant by "accuracy" during the annotation stages. 3) Additional metrics that may be used to evaluate text generation: METEOR (http://dx.doi.org/10.3115/v1/W14-3348), SIM(ile) (http://dx.doi.org/10.18653/v1/P19-1427). 4) Why have the authors decided to use the colon symbol rather than a more original and less common symbol? Since the colon usually has a different meaning in natural language, do they think it may have an impact? 5) How language-dependent are these problems? That is, if these problems were perfectly translated into another language, would they remain valid? What about the R4 category? Additional comments about these aspects would be beneficial for future work, cross-lingual transfer, and multilingual settings. 6) In Table 3, it is not clear whether the line with +epsilon refers to the human performance when the gold explanation is available or to the RoBERTa performance when the gold explanation is available. In any case, both of these settings would be interesting to know, so I suggest including both in the comparison if possible. 7) The explanations that must be generated for the query, the correct answer, and the incorrect answers could be slightly different. Indeed, if I am not mistaken, the explanation for the incorrect answer must highlight the differences w.r.t. the query, while the explanation for the correct answer must highlight the similarity. It would be interesting to analyze these three categories separately and see whether there are differences in the models' performances.
1) Additional reference regarding explainable NLP Datasets: "Detecting and explaining unfairness in consumer contracts through memory networks" (Ruggeri et al 2021)
ARR_2022_98_review
ARR_2022
1. Human evaluations were not performed. Given the weaknesses of SARI (Vásquez-Rodríguez et al. 2021) and FKGL (Tanprasert and Kauchak, 2021), the lack of human evaluations severely limits the potential impact of the results, combined with the variability in the results on different datasets. 2. While the authors explain the need to include text generation models in the framework of (Kumar et al., 2020), it is not clear why only the delete operation was retained from the framework, which used multiple edit operations (reordering, deletion, lexical simplification, etc.). Further, it is not clear how including those other operations would affect the quality and performance of the system. 3. (minor) It is unclear how the authors arrived at the different components of the "scoring function," nor is it clear how they arrived at the different threshold values/ranges. 4. Finally, one might wonder whether the performance gains on Newsela are due to a domain effect, given that the system was explicitly tuned for deletion operations (which abound in Newsela) and that performance is much lower on the ASSET test corpus. It is unclear how the system would generalize to new datasets with varying levels of complexity and peripheral content. 1. Is there any reason why 'Gold Reference' was not reported for Newsela? It makes it hard to assess the performance of the existing system. 2. Similarly, is there a reason why the effect of linguistic acceptability was not analyzed (Table 3 and Section 4.6)? 3. It will be nice to see some examples of the system on actual texts (vs. other components & models). 4. What were the final thresholds that were used for the results? It would also be good for reproducibility if the authors could share the full set of hyperparameters as well.
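As background on the FKGL concern, the metric is a purely surface-level formula, so shorter sentences lower the score regardless of whether meaning is preserved. A rough sketch (the syllable counter is a crude heuristic, not a reference implementation) illustrates this:

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

print(fkgl("The committee deliberated extensively before reaching its decision."))
print(fkgl("The committee talked. It then decided."))  # lower grade level, content aside
```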
3. It will be nice to see some examples of the system on actual texts (vs. other components & models).
ARR_2022_232_review
ARR_2022
- A number of claims from this paper would benefit from more in-depth analysis. - There are still some methodological flaws that should be addressed. ### Main questions/comments Looking at the attached dataset files, I cannot work out whether the data is noisy or if I don't understand the format. The 7th example in the test set has three parts separated by [SEP] which I thought corresponded to the headlines from the three sides (left, center, right). However, the second and third headlines don't make sense as stand-alone texts. Especially the third one which states "Finally, borrowers will receive relief." seems like a continuation of the previous statements. In addition to the previous question, I cannot find the titles, any meta-information as stated in lines 187-189, nor VAD scores which I assumed would be included. Part of the motivation is that scaling the production of neutral (all-sides) summaries is difficult. It would be good to quantify this if possible, as a counterargument to that would be that not all stories are noteworthy enough to require such treatment (and not all stories will appear on all sides). Allsides.com sometimes includes more than one source from the same side -- e.g. https://www.allsides.com/story/jan-6-panel-reportedly-finds-gaps-trump-white-house-phone-records has two stories from Center publishers (CNBC and Reuters) and none from the right. Since the inputs are always one from each side (Section 3.2), are such stories filtered out of the dataset? Of the two major insights that form the basis of this work (polarity is a proxy for framing bias and titles are good indicators of framing bias) only first one is empirically tested with the human evaluation presented in Section 5.1.3. Even then, we are missing any form of analysis of disagreements or low correlation cases that would help solidify the argument. The only evidence we have for the second insight are the results from the NeuS-Title system (compared to the NeuSFT model that doesn't explicitly look at the titles), but again, the comparison is not systematic enough (e.g. no ablation study) to give us concrete evidence to the validity of the claim. Related to the previous point, it's not clear what the case study mentioned in Section 4.2 actually involved. The insights gathered aren't particularly difficult to arrive at by reading the related literature and the examples in Table 1, while indicative of the arguments don't seem causally critical. It would be good to have more details about this study and how it drove the decisions in the rest of the paper. In particular, I would like to know if there were any counterexamples to the main points (e.g. are there titles that aren't representative of the type of bias displayed in the main article?). The example in Table 3 shows that the lexicon-based approach (VAD dataset) suffers from lack of context sensitivity (the word "close" in this example is just a marker of proximity). This is a counterpoint to the advantages of such approaches presented in Section 5.1.1 and it would be interesting to quantify it (e.g. by looking at the human-annotated data from Section 5.1.3) beyond the safeguard introduced in the second paragraph (metric calibration). For the NeuS-Title model, does that order of the input matter? It would be interesting to rerun the same evaluation with different permutations (e.g. center first, or right first). Is there a risk of running into the token limit of the encoder? 
From the examples shared in Table 3, it appears that both the NeuSFT and NeuS-Title models stay close to a single target article. What strikes me as odd is that neither chose the center headline (the former is basically copying the Right headline, the latter has done some paraphrasing but mostly based on the Left headline). Is there a particular reason for this? Is the training objective discouraging true multi-headline summarisation since the partisan headlines will always contain more biased information/tokens? ### Minor issues I was slightly confused by the word "headline" being used to refer to the summary of the article. I think of headlines and titles as fundamentally the same thing: short (one sentence) high-level descriptions of the article to come. It would be helpful to refer to the longer text as a summary or a "headline roundup" as allsides.com calls it. Also, there is a discrepancy between the input to the NeuS-Title model (lines 479-486) and the output shown in Table 3 (HEADLINE vs ARTICLE). Some citations have issues. Allsides.com is cited as (all, 2021) and (Sides, 2018); the year is missing for the citations on lines 110 and 369; Entman (1993) and (2002) are seemingly the same citation (and the 1993 is missing any publication details); various capitalisation errors (e.g. "us" instead of "US" on line 716). It's not clear what the highlighting in Table 1 shows (it is implicitly mentioned later in the text). I would recommend colour-coding the different types of bias and providing the details in the caption.
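To illustrate the context-insensitivity concern about the lexicon-based (VAD) scoring raised above, here is a minimal sketch of how such token-level lookups typically work (the lexicon is a toy stand-in, not the actual VAD resource or the paper's calibrated metric):

```python
def arousal_score(text, vad_arousal):
    """Average arousal of the tokens that appear in the lexicon."""
    tokens = text.lower().split()
    scores = [vad_arousal[t] for t in tokens if t in vad_arousal]
    return sum(scores) / len(scores) if scores else 0.0

# Toy lexicon values purely for illustration.
vad_arousal = {"close": 0.6, "deadly": 0.9, "meeting": 0.3}

print(arousal_score("talks held close to the border", vad_arousal))
# "close" here only marks proximity, but a token-level lookup scores it the same
# as an emotionally loaded use -- the reviewer's context-sensitivity point.
```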
- A number of claims from this paper would benefit from more in-depth analysis.
ACL_2017_483_review
ACL_2017
- 071: This formulation of argumentation mining is just one of several proposed subtask divisions, and this should be mentioned. For example, in [1], claims are detected and classified before any supporting evidence is detected. Furthermore, [2] applied neural networks to this task, so it is inaccurate to say (as is claimed in the abstract of this paper) that this work is the first NN-based approach to argumentation mining. - Two things must be improved in the presentation of the model: (1) What is the pooling method used for embedding features (line 397)? and (2) Equation (7) in line 472 is not clear enough: is E_i the random variable representing the *type* of AC i, or its *identity*? Both are supposedly modeled (the latter by feature representation), and need to be defined. Furthermore, it seems like the LHS of equation (7) should be a conditional probability. - There are several unclear things about Table 2: first, why are the three first baselines evaluated only by macro f1 and the individual f1 scores are missing? This is not explained in the text. Second, why is only the "PN" model presented? Is this the same PN as in Table 1, or actually the Joint Model? What about the other three? - It is not mentioned which dataset the experiment described in Table 4 was performed on. General Discussion: - 132: There has to be a lengthier introduction to pointer networks, mentioning recurrent neural networks in general, for the benefit of readers unfamiliar with "sequence-to-sequence models". Also, the citation of Sutskever et al. (2014) in line 145 should be at the first mention of the term, and the difference with respect to recursive neural networks should be explained before the paragraph starting in line 233 (tree structure etc.). - 348: The elu activation requires an explanation and citation (still not enough well-known). - 501: "MC", "Cl" and "Pr" should be explained in the label. - 577: A sentence about how these hyperparameters were obtained would be appropriate. - 590: The decision to do early stopping only by link prediction accuracy should be explained (i.e. why not average with type accuracy, for example?). - 594: Inference at test time is briefly explained, but would benefit from more details. - 617: Specify what the length of an AC is measured in (words?). - 644: The referent of "these" in "Neither of these" is unclear. - 684: "Minimum" should be "Maximum". - 694: The performance w.r.t. the amount of training data is indeed surprising, but other models have also achieved almost the same results - this is especially surprising because NNs usually need more data. It would be good to say this. - 745: This could alternatively show that structural cues are less important for this task. - Some minor typos should be corrected (e.g. "which is show", line 161). [1] Rinott, Ruty, et al. "Show Me Your Evidence-an Automatic Method for Context Dependent Evidence Detection." EMNLP. 2015. [2] Laha, Anirban, and Vikas Raykar. " An Empirical Evaluation of various Deep Learning Architectures for Bi-Sequence Classification Tasks." COLING. 2016.
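For context on the comment about line 348, the ELU activation the reviewer wants explained is usually defined as ELU(x) = x for x > 0 and α(e^x − 1) otherwise (Clevert et al.); a short sketch:

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU(x) = x for x > 0, alpha * (exp(x) - 1) otherwise.
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * np.expm1(x))

print(elu([-2.0, 0.0, 3.0]))  # approximately [-0.8647, 0.0, 3.0]
```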
- Two things must be improved in the presentation of the model: (1) What is the pooling method used for embedding features (line 397)? and (2) Equation (7) in line 472 is not clear enough: is E_i the random variable representing the *type* of AC i, or its *identity*? Both are supposedly modeled (the latter by feature representation), and need to be defined. Furthermore, it seems like the LHS of equation (7) should be a conditional probability.
ARR_2022_215_review
ARR_2022
1. The paper raises two hypotheses in lines 078-086 about multilinguality and country/language-specific bias. While I don't think the hypotheses are phrased optimally (could they be tested as given?), their underlying ideas are valuable. However, the paper actually does not really study these hypotheses (nor are they even mentioned/discussed again). I found this not only misleading, but I would have also liked the paper to go deeper into the respective topics, at least to some extent. 2. It seemed a little disappointing to me that the 212 new pairs have _not_ been translated to English (if I'm not mistaken). To really make this dataset a bilingual resource, it would be good to have all pairs in both languages. As it stands, it seems that ultimately only the French version was of interest to the study - contrary to what is claimed initially. 3. Almost no information about the reliability of the translations and the annotations is given (except for the result of the translation checking in line 285), which seems unsatisfying to me. To assess the translations, more information about the language/translation expertise of the authors would be helpful (I don't think this violates anonymity). For the annotations, I would expect some measure of inter-annotator agreement. 4. The metrics in Tables 4 and 5 need explanation, in order to make the paper self-contained. Without going to the original paper on CrowS-pairs, the values are barely understandable. Also, information on the value ranges should be given, as well as whether higher or lower values are better. - 066: social contexts >> I find this term misleading here, since the text seems to be about countries/language regions. - 121: Dividing 1508 into 16*90 = 1440 cases cannot be fully correct. What about the remaining 68 cases? - 241: It would also be good to state the maximum number of tasks done by any annotator. - Table 3: Right-align the numeric columns. - Table 4 (1): Always use the same number of decimal places, for example 61.90 instead of 61.9 to match the other values. This would increase readability. - Table 4 (2): The table exceeds the page width; that needs to be fixed. - Tables 4+5 (1): While I understand the layout problem, the different approaches would be much easier to compare if rows and columns were flipped (usually, one approach per row, one metric per column). - Tables 4+5 (2): What's the idea of showing the run-time? I didn't see what this is helpful for. - 305/310: Marie/Mary >> I think these should be written the same. - 357: The text speaks of "53", but I believe the value "52.9" from Table 4 is meant. In my view, such rounding makes understanding harder rather than helping. - 575/577: "1/" and "2/" >> Maybe better use "(1)" and "(2)"; this confused me at first.
1. The paper raises two hypotheses in lines 078-086 about multilinguality and country/language-specific bias. While I don't think the hypotheses are phrased optimally (could they be tested as given?), their underlying ideas are valuable. However, the paper actually does not really study these hypotheses (nor are they even mentioned/discussed again). I found this not only misleading, but I would have also liked the paper to go deeper into the respective topics, at least to some extent.
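Regarding the complaint above that the CrowS-pairs metrics are not self-contained: to my understanding, the core score is the percentage of pairs for which the model assigns a higher (pseudo-)likelihood to the more stereotypical sentence, with 50% as the unbiased ideal. A toy sketch, assuming the per-sentence scores have already been computed by some scoring function (omitted here), with made-up numbers:

```python
def crows_pairs_bias(score_stereo, score_antistereo):
    """Percentage of pairs where the model prefers the more stereotypical
    sentence; 50.0 would indicate no measured preference.
    Both arguments are lists of model scores (e.g. pseudo-log-likelihoods)."""
    assert len(score_stereo) == len(score_antistereo)
    prefers_stereo = sum(s > a for s, a in zip(score_stereo, score_antistereo))
    return 100.0 * prefers_stereo / len(score_stereo)

# toy example with made-up scores for 4 pairs
print(crows_pairs_bias([-12.3, -8.1, -20.4, -5.0],
                       [-13.0, -7.9, -22.1, -5.5]))  # -> 75.0
```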
ARR_2022_121_review
ARR_2022
1. The writing needs to be improved. Structurally, there should be a "Related Work" section which would inform the reader that this is where prior research has been done, as well as what differentiates the current work from earlier work. A clear separation between the "Introduction" and "Related Work" sections would certainly improve the readability of the paper. 2. The paper does not compare the results with some of the earlier research work from 2020. While the authors have explained their reasons for not doing so in the author response along the lines of "Those systems are not state-of-the-art", they have compared the results to a number of earlier systems with worse performances (e.g., Taghipour and Ng (2016)). Comments: 1. Please keep a separate "Related Work" section. Currently, the "Introduction" section of the paper reads as 2-3 paragraphs of introduction, followed by 3 bullet points of related work and again a lot of introduction. I would suggest that you shift those 3 bullet points ("Traditional AES", "Deep Neural AES" and "Pre-training AES") to the Related Work section. 2. Would the use of feature engineering help in improving the performance? Uto et al. (2020)'s system reaches a QWK of 0.801 by using a set of hand-crafted features. Perhaps using Uto et al. (2020)'s same feature set could also improve the results of this work. 3. While the out-of-domain experiment is pre-trained on other prompts, it is still fine-tuned during training on the target prompt essays. Typos: 1. In Table #2, Row 10, the reference for R2BERT is Yang et al. (2020), not Yang et al. (2019). Missing References: 1. Panitan Muangkammuen and Fumiyo Fukumoto. "Multi-task Learning for Automated Essay Scoring with Sentiment Analysis". 2020. In Proceedings of the AACL-IJCNLP 2020 Student Research Workshop. 2. Sandeep Mathias, Rudra Murthy, Diptesh Kanojia, Abhijit Mishra, Pushpak Bhattacharyya. 2020. Happy Are Those Who Grade without Seeing: A Multi-Task Learning Approach to Grade Essays Using Gaze Behaviour. In Proceedings of the 2020 AACL-IJCNLP Main Conference.
2. Would the use of feature engineering help in improving the performance? Uto et al. (2020)'s system reaches a QWK of 0.801 by using a set of hand-crafted features. Perhaps using Uto et al. (2020)'s same feature set could also improve the results of this work.
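Since the comparison above is stated in terms of QWK values such as 0.801, a quick way to compute the metric for integer essay scores is quadratically weighted Cohen's kappa from scikit-learn; the scores below are made up purely for illustration:

```python
from sklearn.metrics import cohen_kappa_score

# toy gold and predicted essay scores on a 0-10 scale
gold = [8, 6, 9, 4, 7, 10, 3, 6]
pred = [7, 6, 9, 5, 8, 9, 3, 7]

qwk = cohen_kappa_score(gold, pred, weights="quadratic")
print(f"QWK = {qwk:.3f}")
```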
ARR_2022_186_review
ARR_2022
- It is not clear what the goal of the paper is. Is it the release of a challenging dataset, or an analysis of augmenting models with expert-guided adversarial examples? If it is the first, ok, but the paper misses a lot of important information, and data analysis to give a sense of the quality and usefulness of such a dataset. If it is the second, it is not clear what the novelty is. - In general, it seems the authors want to propose a way of creating a challenging set. However, what they describe seems very specific and not scalable. - The paper structure and writing are not sufficient. My main concern is that it is not clear what the goal of the paper is. Also, the structure and writing should greatly improve. I believe also that the choice to go for a short paper was penalizing the authors, as it seems clear that they cut out some information that could've been useful to better understand the paper (also given the 5-page appendix). Detailed comments/questions: - Line 107 data, -> data. - Line 161-162: this sentence is not clear. - Table 1: are these all the rules you defined? How is the rule applied? When do you decide to make small changes to the context? For example, when you decide to add "and her team" as in the last example of Table 1? - Also, it seems that all the rules change a one-token entity to a multi-token one or vice-versa. Will models be biased by this? - Line 183-197: not clear what you're doing here. Details cannot only be in the appendix. - What is also not clear to me is how the Challenge Set is used. If I understood correctly, the CS is created by the linguistic experts and is used for evaluation purposes. Is it also used to augment the training material? If yes, what data split did you use? - Line 246-249: this sentence lacks a conclusion - Line 249: What are eligible and not eligible examples? - Line 251: what is p? - Line 253: The formula doesn't depend on p, so why is the premise "if p=100% of the eligible example"? - Line 252: Not clear what is the subject of this sentence.
- What is also not clear to me is how the Challenge Set is used. If I understood correctly, the CS is created by the linguistic experts and is used for evaluation purposes. Is it also used to augment the training material? If yes, what data split did you use?
ACL_2017_128_review
ACL_2017
----- I'm not very convinced by the empirical results, mostly due to the lack of details of the baselines. Comments below are ranked by decreasing importance. - The proposed model has two main parts: sentence embedding and substructure embedding. In Table 1, the baseline models are TreeRNN and DCNN; they are originally used for sentence embedding, but one can easily take the node/substructure embedding from them too. It's not clear how they are used to compute the two parts. - The model uses two RNNs: a chain-based one and a knowledge-guided one. The only difference in the knowledge-guided RNN is the addition of a "knowledge" vector from the memory in the RNN input (Eqn 5 and 8). It seems completely unnecessary to me to have separate weights for the two RNNs. The only advantage of using two is an increase of model capacity, i.e. more parameters. Furthermore, what are the hyper-parameters / size of the baseline neural networks? They should have comparable numbers of parameters. - I also think it is reasonable to include a baseline that just inputs additional knowledge as features to the RNN, e.g. the head of each word, NER results etc. - Any comments / results on the model's sensitivity to parser errors? Comments on the model: - After computing the substructure embeddings, it seems very natural to compute an attention over them at each word. Is there any reason to use a static attention for all words? I guess as it is, the "knowledge" is acting more like a filter to mark important words. Then it is reasonable to include the baseline suggested above, i.e. input additional features. - Since the weight on a word is computed by inner product of the sentence embedding and the substructure embedding, and the two embeddings are computed by the same RNN/CNN, doesn't it mean that nodes / phrases similar to the whole sentence get higher weights, i.e. all leaf nodes? - The paper claims the model generalizes to different knowledge but I think the substructure has to be represented as a sequence of words, e.g. it doesn't seem straightforward for me to use constituent parse as knowledge here. Finally, I'm hesitating to call it "knowledge". This is misleading as usually it is used to refer to world / external knowledge such as a knowledge base of entities, whereas here it is really just syntax, or arguably semantics if AMR parsing is used. -----General Discussion----- This paper proposes a practical model which seems to work well on one dataset, but the main ideas are not very novel (see comments in Strengths). I think as an ACL paper there should be more takeaways. More importantly, the experiments are not convincing as they are presented now. Will need some clarification to better judge the results. -----Post-rebuttal----- The authors did not address my main concern, which is whether the baselines (e.g. TreeRNN) are used to compute substructure embeddings independent of the sentence embedding and the joint tagger. Another major concern is the use of two separate RNNs, which gives the proposed model more parameters than the baselines. Therefore I'm not changing my scores.
- The paper claims the model generalizes to different knowledge but I think the substructure has to be represented as a sequence of words, e.g. it doesn't seem straightforward for me to use constituent parse as knowledge here. Finally, I'm hesitating to call it "knowledge". This is misleading as usually it is used to refer to world / external knowledge such as a knowledge base of entities, whereas here it is really just syntax, or arguably semantics if AMR parsing is used.
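To make the review's suggestion of per-word attention over substructure embeddings (as opposed to a single static weighting) concrete, here is a minimal sketch; the shapes and the plain dot-product scoring are assumptions for illustration, not the reviewed model:

```python
import torch

def per_word_attention(word_states, substructure_embs):
    """word_states: (n_words, d); substructure_embs: (n_sub, d).
    Returns, for every word, an attention-weighted summary of the
    substructure embeddings, instead of one static weight vector
    shared by all words."""
    scores = word_states @ substructure_embs.T   # (n_words, n_sub)
    weights = torch.softmax(scores, dim=-1)      # attention distribution per word
    return weights @ substructure_embs           # (n_words, d)

words = torch.randn(6, 50)   # e.g. hidden states of a 6-word sentence
subs = torch.randn(3, 50)    # e.g. embeddings of 3 dependency substructures
context = per_word_attention(words, subs)
print(context.shape)         # torch.Size([6, 50])
```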
ACL_2017_108_review
ACL_2017
The problem itself is not really well motivated. Why is it important to detect China as an entity within the entity Bank of China, to stay with the example in the introduction? I do see a point for crossing entities but what is the use case for nested entities? This could be much more motivated to make the reader interested. As for the approach itself, some important details are missing in my opinion: What is the decision criterion to include an edge or not? In lines 229--233 several different options for the I^k_t nodes are mentioned but it is never clarified which edges should be present! As for the empirical evaluation, the achieved results are better than some previous approaches but not really by a large margin. I would not really call the slight improvements as "outperformed" as is done in the paper. What is the effect size? Does it really matter to some user that there is some improvement of two percentage points in F_1? What is the actual effect one can observe? How many "important" entities are discovered that have not been discovered by previous methods? Furthermore, what performance would some simplistic dictionary-based method achieve that could also be used to find overlapping things? And in a similar direction: what would some commercial system like Google's NLP cloud, which should also be able to detect and link entities, have achieved on the datasets? Just to put the results into the context of existing "commercial" systems. As for the result discussion, I would have liked to see some more emphasis on actual crossing entities. How is the performance there? This in my opinion is the more interesting subset of overlapping entities than the nested ones. How many more crossing entities are detected than were possible before? Which ones were missed and maybe why? Is the performance improvement due to better nested detection only, or also to detecting crossing entities? Some general error discussion comparing errors made by the suggested system and previous ones would also strengthen that part. General Discussion: I like the problems related to named entity recognition and see a point for recognizing crossing entities. However, why is one interested in nested entities? The paper at hand does not really motivate the scenario and also sheds no light on that point in the evaluation. Discussing errors and maybe advantages with some example cases and an emphasis on the results on crossing entities compared to other approaches would possibly have convinced me more. So, I am only lukewarm about the paper with maybe a slight tendency to rejection. It just seems like yet another attempt without really emphasizing the, in my opinion, important question of crossing entities. Minor remarks: - first mention of multigraph: some readers may benefit if the notion of a multigraph would get a short description - previously noted by ... many previous: sounds a little odd - Solving this task: which one? - e.g.: why in italics? - time linear in n: when n is sentence length, does it really matter whether it is linear or cubic? - spurious structures: in the introduction it is not clear what is meant - regarded as _a_ chunk - NP chunking: noun phrase chunking? - Since they set: who? - pervious -> previous - of Lu and Roth~(2015) - the following five types: in sentences with no large numbers, spell out the small ones, please - types of states: what is a state in a (hyper-)graph? later state seems to be used analogous to node?!
- I would place commas after the enumeration items at the end of page 2 and a period after the last one - what are child nodes in a hypergraph? - in Figure 2 it was not obvious at first glance why this is a hypergraph. colors are not visible in b/w printing. why are some nodes/edges in gray. it is also not obvious how the highlighted edges were selected and why the others are in gray ... - why should both entities be detected in the example of Figure 2? what is the difference to "just" knowing the long one? - denoting ...: sometimes in brackets, sometimes not ... why? - please place footnotes not directly in front of a punctuation mark but afterwards - footnote 2: due to the missing edge: how determined that this one should be missing? - on whether the separator defines ...: how determined? - in _the_ mention hypergraph - last paragraph before 4.1: to represent the entity separator CS: how is the CS-edge chosen algorithmically here? - comma after Equation 1? - to find out: sounds a little odd here - we extract entities_._\footnote - we make two: sounds odd; we conduct or something like that? - nested vs. crossing remark in footnote 3: why is this good? why not favor crossing? examples to clarify? - the combination of states alone do_es_ not? - the simple first order assumption: that is what? - In _the_ previous section - we see that our model: demonstrated? have shown? - used in this experiments: these - each of these distinct interpretation_s_ - published _on_ their website - The statistics of each dataset _are_ shown - allows us to use to make use: omit "to use" - tried to follow as close ... : tried to use the features suggested in previous works as close as possible? - Following (Lu and Roth, 2015): please do not use references as nouns: Following Lu and Roth (2015) - using _the_ BILOU scheme - highlighted in bold: what about the effect size? - significantly better: in what sense? effect size? - In GENIA dataset: On the GENIA dataset - outperforms by about 0.4 point_s_: I would not call that "outperform" - that _the_ GENIA dataset - this low recall: which one? - due to _an_ insufficient - Table 5: all F_1 scores seems rather similar to me ... again, "outperform" seems a bit of a stretch here ... - is more confident: why does this increase recall? - converge _than_ the mention hypergraph - References: some paper titles are lowercased, others not, why?
- first mention of multigraph: some readers may benefit if the notion of a multigraph would get a short description - previously noted by ... many previous: sounds a little odd - Solving this task: which one?
ACL_2017_614_review
ACL_2017
- I don't understand the effectiveness of the multi-view clustering approach. Almost all across the board, the paraphrase similarity view does significantly better than other views and their combination. What, then, do we learn about the usefulness of the other views? There is one empirical example of how the different views help in clustering paraphrases of the word 'slip', but there is no further analysis about how the different clustering techniques differ, except on the task directly. Without a more detailed analysis of differences and similarities between these views, it is hard to draw solid conclusions about the different views. - The paper is not fully clear on a first read. Specifically, it is not immediately clear how the sections connect to each other, reading more like disjoint pieces of work. For instance, I did not understand the connections between section 2.1 and section 4.3, so adding forward/backward pointer references to sections should be useful in clearing things up. Relatedly, the multi-view clustering section (3.1) needs editing, since the subsections seem to be out of order, and citations seem to be missing (lines 392 and 393). - The relatively poor performance on nouns makes me uneasy. While I can expect TWSI to do really well due to its nature, the fact that the oracle GAP for PPDBClus is higher than most clustering approaches is disconcerting, and I would like to understand the gap better. This also directly contradicts the claim that the clustering approach is generalizable to all parts of speech (124-126), since the performance clearly isn't uniform. - General Discussion: The paper is mostly straightforward in terms of techniques used and experiments. Even then, the authors show clear gains on the lexsub task by their two-pronged approach, with potentially more to be gained by using stronger WSD algorithms. Some additional questions for the authors : - Lines 221-222 : Why do you add hypernyms/hyponyms? - Lines 367-368 : Why does X^{P} need to be symmetric? - Lines 387-389 : The weighting scheme seems kind of arbitrary. Was this indeed arbitrary or is this a principled choice? - Is the high performance of SubstClus^{P} ascribable to the fact that the number of clusters was tuned based on this view? Would tuning the number of clusters based on other matrices affect the results and the conclusions? - What other related tasks could this approach possibly generalize to? Or is it only specific to lexsub?
- The relatively poor performance on nouns makes me uneasy. While I can expect TWSI to do really well due to its nature, the fact that the oracle GAP for PPDBClus is higher than most clustering approaches is disconcerting, and I would like to understand the gap better. This also directly contradicts the claim that the clustering approach is generalizable to all parts of speech (124-126), since the performance clearly isn't uniform.
ACL_2017_108_review
ACL_2017
Clarification is needed in several places. 1. In section 3, in addition to the description of the previous model, MH, you need to point out the issues of MH which motivate you to propose a new model. 2. In section 4, I don't see the reason why separators are introduced. What additional info do they convey beyond T/I/O? 3. section 5.1 does not seem to provide useful info regarding why the new model is superior. 4. the discussion in section 5.2 is so abstract that I don't get the insights why the new model is better than MH. can you provide examples of spurious structures? - General Discussion: The paper presents a new model for detecting overlapping entities in text. The new model improves the previous state-of-the-art, MH, in the experiments on a few benchmark datasets. But it is not clear why and how the new model works better.
4. the discussion in section 5.2 is so abstract that I don't get the insights why the new model is better than MH. can you provide examples of spurious structures?
ARR_2022_114_review
ARR_2022
By showing that there is an equivalent graph in the rank space on which message passing is equivalent to message passing in the original joint state and rank space, this work exposes the fact that these large structured prediction models with fully decomposable clique potentials (Chiu et al 2021 being an exception) are equivalent to a smaller structured prediction model (albeit with over-parameterized clique potentials). For example, looking at Figure 5 (c), the original HMM is equivalent to a smaller MRF with state size being the rank size (which is the reason why inference complexity does not depend on the original number of states at all after calculating the equivalent transition and emission matrices). One naturally wonders why not simply train a smaller HMM, and where does the performance gain of this paper come from in Table 3. As another example, looking at Figure 4 (a), the original PCFG is equivalent to a smaller PCFG (with fully decomposable potentials) with state size being the rank size. This smaller PCFG is over-parameterized though, e.g., its potential $H\in \mathcal{R}^{r \times r}$ is parameterized as $V U^T$ where $U,V\in \mathcal{R}^{r \times m}$ and $r < m$, instead of directly being parameterized as a learned matrix of $\mathcal{R}^{r \times r}$. That being said, I don't consider this a problem introduced by this paper since this should be a problem of many previous works as well, and it seems an intriguing question why large state spaces help despite the existence of these equivalent small models. Is it similar to why overparameterizing in neural models help? Is there an equivalent form of the lottery ticket hypothesis here? In regard to weakness #1, I think this work would be strengthened by adding the following baselines: 1. For each PCFG with rank r, add a baseline smaller PCFG with state size being r, but where $H, I, J, K, L$ are directly parameterized as learned matrices of $\mathcal{R}^{r \times r}$, $\mathcal{R}^{r \times o}$, $\mathcal{R}^{r}$, etc. Under this setting, parsing F-1 might not be directly comparable, but perplexity can still be compared. 2. For each HMM with rank r, add a baseline smaller HMM with state size being r.
1. For each PCFG with rank r, add a baseline smaller PCFG with state size being r, but where $H, I, J, K, L$ are directly parameterized as learned matrices of $\mathcal{R}^{r \times r}$, $\mathcal{R}^{r \times o}$, $\mathcal{R}^{r}$, etc. Under this setting, parsing F-1 might not be directly comparable, but perplexity can still be compared.
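To illustrate the contrast the review draws between a directly parameterized small model and the over-parameterized low-rank form, here is a minimal sketch for a single transition matrix; the dimensions and the row-softmax normalization are illustrative assumptions, not the setup of any specific paper:

```python
import torch
import torch.nn as nn

r, m = 16, 64   # rank-sized state space r, larger hidden size m (m > r)

# (1) direct parameterization, as in the proposed baseline: a learned r x r table
direct_logits = nn.Parameter(torch.randn(r, r))
direct_transition = torch.softmax(direct_logits, dim=-1)

# (2) over-parameterized low-rank form: the same r x r matrix built as
#     V @ U.T with U, V in R^{r x m}, so it routes through a wider hidden size
U = nn.Parameter(torch.randn(r, m))
V = nn.Parameter(torch.randn(r, m))
factored_transition = torch.softmax(V @ U.T, dim=-1)

# both objects have the same shape and support the same inference, which is
# why the question of where the empirical gains come from is interesting
print(direct_transition.shape, factored_transition.shape)  # (16, 16) (16, 16)
```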
ARR_2022_215_review
ARR_2022
1. The paper raises two hypotheses in lines 078-086 about multilinguality and country/language-specific bias. While I don't think the hypotheses are phrased optimally (could they be tested as given?), their underlying ideas are valuable. However, the paper actually does not really study these hypotheses (nor are they even mentioned/discussed again). I found this not only misleading, but I would have also liked the paper to go deeper into the respective topics, at least to some extent. 2. It seemed a little disappointing to me that the 212 new pairs have _not_ been translated to English (if I'm not mistaken). To really make this dataset a bilingual resource, it would be good to have all pairs in both languages. In the given way, it seems that ultimately only the French version was of interest to the study - unlike what is claimed initially. 3. Almost no information about the reliability of the translations and the annotations is given (except for the result of the translation checking in line 285), which seems unsatisfying to me. To assess the translations, more information about the language/translation expertise of the authors would be helpful (I don't think this violates anonymity). For the annotations, I would expect some measure of inter-annotator agreement. 4. The metrics in Tables 4 and 5 need explanation, in order to make the paper self-contained. Without going to the original paper on CrowS-pairs, the values are barely understandable. Also, information on the value ranges should be given as well as whether higher or lower values are better. - 066: social contexts >> I find this term misleading here, since the text seems to be about countries/language regions. - 121: Dividing 1508 into 16*90 = 1440 cases cannot be fully correct. What about the remaining 68 cases? - 241: It would also be good to state the maximum number of tasks done by any annotator. - Table 3: Right-align the numeric columns. - Table 4 (1): Always use the same number of decimal places, for example 61.90 instead of 61.9 to match the other values. This would increase readability. - Table 4 (2): The table exceeds the page width; that needs to be fixed. - Tables 4+5 (1): While I understand the layout problem, the different approaches would be much easier to compare if tables and columns were flipped (usually, one approach per row, one metric per column). - Tables 4+5 (2): What's the idea of showing the run-time? I didn't see for what this is helpful. - 305/310: Marie/Mary >> I think these should be written the same. - 357: The text speaks of "53", but I believe the value "52.9" from Table 4 is meant. In my view, such rounding makes understanding harder rather than helping. - 575/577: "1/" and "2/" >> Maybe better use "(1)" and "(2)"; confused me at first.
- 241: It would also be good to state the maximum number of tasks done by any annotator.
ACL_2017_489_review
ACL_2017
1) The main weakness for me is the statement of the specific hypothesis, within the general research line, that the paper is probing: I found it very confusing. As a result, it is also hard to make sense of the kind of feedback that the results give to the initial hypothesis, especially because there are a lot of them and they don't all point in the same direction. The paper says: "This paper pursues the hypothesis that an accurate model of referential word meaning does not need to fully integrate visual and lexical knowledge (e.g. as expressed in a distributional vector space), but at the same time, has to go beyond treating words as independent labels." The first part of the hypothesis I don't understand: What is it to fully integrate (or not to fully integrate) visual and lexical knowledge? Is the goal simply to show that using generic distributional representation yields worse results than using specific, word-adapted classifiers trained on the dataset? If so, then the authors should explicitly discuss the bounds of what they are showing: Specifically, word classifiers must be trained on the dataset itself and only word classifiers with a sufficient amount of items in the dataset can be obtained, whereas word vectors are available for many other words and are obtained from an independent source (even if the cross-modal mapping itself is trained on the dataset); moreover, they use the simplest Ridge Regression, instead of the best method from Lazaridou et al. 2014, so any conclusion as to which method is better should be taken with a grain of salt. However, I'm hoping that the research goal is both more constructive and broader. Please clarify. 2) The paper uses three previously developed methods on a previously available dataset. The problem itself has been defined before (in Schlangen et al.). In this sense, the originality of the paper is not high. 3) As the paper itself also points out, the authors select a very limited subset of the ReferIt dataset, with quite a small vocabulary (159 words). I'm not even sure why they limited it this way (see detailed comments below). 4) Some aspects could have been clearer (see detailed comments). 5) The paper contains many empirical results and analyses, and it makes a concerted effort to put them together; but I still found it difficult to get the whole picture: What is it exactly that the experiments in the paper tell us about the underlying research question in general, and the specific hypothesis tested in particular? How do the different pieces of the puzzle that they present fit together? - General Discussion: [Added after author response] Despite the weaknesses, I find the topic of the paper very relevant and also novel enough, with an interesting use of current techniques to address an "old" problem, REG and reference more generally, in a way that allows aspects to be explored that have not received enough attention. The experiments and analyses are a substantial contribution, even though, as mentioned above, I'd like the paper to present a more coherent overall picture of how the many experiments and analyses fit together and address the question pursued. - Detailed comments: Section 2 is missing the following work in computational semantic approaches to reference: Abhijeet Gupta, Gemma Boleda, Marco Baroni, and Sebastian Pado. 2015. Distributional vectors encode referential attributes. Proceedings of EMNLP, 12-21 Aurelie Herbelot and Eva Maria Vecchi. 2015. 
Building a shared world: mapping distributional to model-theoretic semantic spaces. Proceedings of EMNLP, 22–32. 142 how does Roy's work go beyond early REG work? 155 focusses links 184 flat "hit @k metric": "flat"? Section 3: please put the numbers related to the dataset in a table, specifying the image regions, number of REs, overall number of words, and number of object names in the original ReferIt dataset and in the version you use. By the way, will you release your data? I put a "3" for data because in the reviewing form you marked "Yes" for data, but I can't find the information in the paper. 229 "cannot be considered to be names" ==> "image object names" 230 what is "the semantically annotated portion" of ReferIt? 247 why don't you just keep "girl" in this example, and more generally the head nouns of non-relational REs? More generally, could you motivate your choices a bit more so we understand why you ended up with such a restricted subset of ReferIt? 258 which 7 features? ( list) How did you extract them? 383 "suggest that lexical or at least distributional knowledge is detrimental when learning what a word refers to in the world": How does this follow from the results of Frome et al. 2013 and Norouzi et al. 2013? Why should cross-modal projection give better results? It's a very different type of task/setup than object labeling. 394-395 these numbers belong in the data section Table 1: Are the differences between the methods statistically significant? They are really numerically so small that any other conclusion to "the methods perform similarly" seems unwarranted to me. Especially the "This suggests..." part (407). Table 1: Also, the sim-wap method has the highest accuracy for hit @5 (almost identical to wac); this is counter-intuitive given the @1 and @2 results. Any idea of what's going on? Section 5.2: Why did you define your ensemble classifier by hand instead of learning it? Also, your method amounts to majority voting, right? Table 2: the order of the models is not the same as in the other tables + text. Table 3: you report cosine distances but discuss the results in terms of similarity. It would be clearer (and more in accordance with standard practice in CL imo) if you reported cosine similarities. Table 3: you don't comment on the results reported in the right columns. I found it very curious that the gold-top k data similarities are higher for transfer+sim-wap, whereas the results on the task are the same. I think that you could squeeze more information wrt the phenomenon and the models out of these results. 496 format of "wac" Section 6 I like the idea of the task a lot, but I was very confused as to how you did and why: I don't understand lines 550-553. What is the task exactly? An example would help. 558 "Testsets" 574ff Why not mix in the train set examples with hypernyms and non-hypernyms? 697 "more even": more wrt what? 774ff "Previous cross-modal mapping models ... force...": I don't understand this claim. 792 "larger test sets": I think that you could even exploit ReferIt more (using more of its data) before moving on to other datasets.
5) The paper contains many empirical results and analyses, and it makes a concerted effort to put them together; but I still found it difficult to get the whole picture: What is it exactly that the experiments in the paper tell us about the underlying research question in general, and the specific hypothesis tested in particular? How do the different pieces of the puzzle that they present fit together?
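As a concrete picture of the "simplest Ridge Regression" cross-modal mapping the review refers to, here is a minimal sketch that learns a linear map from visual features to distributional word vectors; all data below is random placeholder data, and the feature sizes are assumptions for illustration only:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
visual = rng.normal(size=(500, 4096))     # e.g. CNN features of image regions
word_vecs = rng.normal(size=(500, 300))   # distributional vectors of the object names

mapping = Ridge(alpha=1.0).fit(visual, word_vecs)   # linear cross-modal map
projected = mapping.predict(rng.normal(size=(1, 4096)))

# at test time one would rank the vocabulary by cosine similarity to `projected`
print(projected.shape)   # (1, 300)
```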
ACL_2017_792_review
ACL_2017
1. Unfortunately, the results are rather inconsistent and one is not left entirely convinced that the proposed models are better than the alternatives, especially given the added complexity. Negative results are fine, but there is insufficient analysis to learn from them. Moreover, no results are reported on the word analogy task, besides being told that the proposed models were not competitive - this could have been interesting and analyzed further. 2. Some aspects of the experimental setup were unclear or poorly motivated, for instance w.r.t. corpora and datasets (see details below). 3. Unfortunately, the quality of the paper deteriorates towards the end and the reader is left a little disappointed, not only w.r.t. the results but with the quality of the presentation and the argumentation. - General Discussion: 1. The authors aim "to learn representations for both words and senses in a shared emerging space". This is only done in the LSTMEmbed_SW version, which rather consistently performs worse than the alternatives. In any case, what is the motivation for learning representations for words and senses in a shared semantic space? This is not entirely clear and never really discussed in the paper. 2. The motivation for, or intuition behind, predicting pre-trained embeddings is not explicitly stated. Also, are the pre-trained embeddings in the LSTMEmbed_SW model representations for words or senses, or is a sum of these used again? If different alternatives are possible, which setup is used in the experiments? 3. The importance of learning sense embeddings is well recognized and also stressed by the authors. Unfortunately, however, it seems that these are never really evaluated; if they are, this remains unclear. Most or all of the word similarity datasets consider words independent of context. 4. What is the size of the training corpora? For instance, using different proportions of BabelWiki and SEW is shown in Figure 4; however, the comparison is somewhat problematic if the sizes are substantially different. The size of SemCor is moreover really small and one would typically not use such a small corpus for learning embeddings with, e.g., word2vec. If the proposed models favor small corpora, this should be stated and evaluated. 5. Some of the test sets are not independent, i.e. WS353, WSSim and WSRel, which makes comparisons problematic, in this case giving three "wins" as opposed to one. 6. The proposed models are said to be faster to train by using pre-trained embeddings in the output layer. However, no evidence is provided to support this claim. Such evidence would strengthen the paper. 7. Table 4: why not use the same dimensionality for a fair(er) comparison? 8. A section on synonym identification is missing under similarity measurement that would describe how the multiple-choice task is approached. 9. A reference to Table 2 is missing. 10. There is no description of any training for the word analogy task, which is mentioned when describing the corresponding dataset.
4. What is the size of the training corpora? For instance, using different proportions of BabelWiki and SEW is shown in Figure 4; however, the comparison is somewhat problematic if the sizes are substantially different. The size of SemCor is moreover really small and one would typically not use such a small corpus for learning embeddings with, e.g., word2vec. If the proposed models favor small corpora, this should be stated and evaluated.
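To make point 2 above concrete (what "predicting pre-trained embeddings" in the output layer can look like), here is a minimal sketch of an output layer that regresses onto frozen pre-trained vectors with a cosine loss instead of a vocabulary softmax; this illustrates the general idea only and is not the LSTMEmbed architecture, and all sizes are assumptions:

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden = 1000, 100, 128
pretrained = torch.randn(vocab_size, emb_dim)   # frozen pre-trained embeddings

proj = nn.Linear(hidden, emb_dim)               # maps an encoder state to embedding space
hidden_state = torch.randn(4, hidden)           # states at 4 prediction positions
target_ids = torch.tensor([5, 42, 7, 900])      # the words to be predicted

pred = proj(hidden_state)
loss = 1 - nn.functional.cosine_similarity(pred, pretrained[target_ids]).mean()
loss.backward()   # only `proj` (and the encoder, if attached) receives gradients

# the claimed speed-up would come from avoiding a vocab_size-way softmax here,
# which is exactly the kind of evidence the review asks the authors to report
print(float(loss))
```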