id | sentence1 | sentence2 | label |
---|---|---|---|
train_200 | This points to the fact that rich context encodings with a wide range of dependency relations are promising for capturing lexical semantic distinctions. | the performance for maximum context specification was lower, which indicates that collapsing all dependency relations is not the optimal method, at least for the tasks attempted here. | contrasting |
train_201 | There have been rigorous studies of Boolean operators for information retrieval, including the pnorms of and the matrix forms of Turtle and Croft (1989), which have focussed particularly on mathematical expressions for conjunction and disjunction. | typical forms of negation (such as NOT p = 1−p) have not taken into account the relationship between the negated argument and the rest of the query. | contrasting |
train_202 | If arbitrary word-reorderings are permitted, the search problem is NP-hard. | if we restrict the possible word-reorderings in an appropriate way, we obtain a polynomial-time search algorithm. | contrasting |
train_203 | Because of the difficulties a pronoun resolution algorithm encounters in spoken dialogue, previous approaches were applied only to tiny domains; they needed deep semantic analysis and discourse processing and relied on hand-crafted knowledge bases. | we build on our existing anaphora resolution system and incrementally add new features specifically devised for spoken dialogue. | contrasting |
train_204 | Spoken dialogue contains more pronouns with non-NP-antecedents than written text does. | pronouns with NP-antecedents (like 3rd pers. | contrasting |
train_205 | Moreover, a history of just two previous embeddings (as we get it with ) is too limited in a heavily recursive setting like word formation: recursive embeddings of depth four occur in realistic text. | we can exploit more effectively the "mildly context-free" characteristics of morphological grammars (at least of German) discussed in sec. | contrasting |
train_206 | As the size results show, the non-deterministic FSAs constructed by the selective method are more complex (and hence resource-intensive in minimization) than the ones produced by the "plain" parameterized version. | the difference in exactness of the approximations has to be taken into account. | contrasting |
train_207 | These machine learning approaches have been successful for these tasks, achieving accuracy comparable to the knowledge-engineering approach. | for the full-scale ST task of generic IE from free texts, the best reported method to date is still the knowledge-engineering approach. | contrasting |
train_208 | The maximum entropy (ME) framework is a recent learning approach which has been successfully used in various NLP tasks such as sentence segmentation, part-of-speech tagging, and parsing (Ratnaparkhi, 1998). | to our knowledge, ours is the first research effort to have applied ME learning to the full-scale ST task. | contrasting |
train_209 | In this case, only the first occurrence of OQUELI COLINDRES should be used as a positive example for the human target slot. | ALICE does not have access to such information, since the MUC-4 training documents are not annotated (i.e., only templates are provided, but the text strings in a document are not marked). | contrasting |
train_210 | Learning approaches have been shown to perform on par or even outperform knowledge-engineering approaches in many NLP tasks. | the full-scale scenario template IE task was still dominated by knowledge-engineering approaches. | contrasting |
train_211 | For example, in Figure 1(c), (triggered ( C-DATE -ADV)) is needed to extract the date entity. | the same pattern is likely to be applied to texts in other domains as well, such as "The Mexican peso was devalued and triggered a national financial crisis last week." | contrasting |
train_212 | A story about a plane crash and another story about the funeral of the crash victims are considered to be linked. | a story about hurricane Andrew and a story about hurricane Agnes are not linked because they are two different events. | contrasting |
train_213 | In the LNK task, incorrectly flagging two stories as being on the same event is considered a false alarm. | in the NED task, incorrectly flagging two stories as being on the same event will cause the true first story to be missed. | contrasting |
train_214 | Most conventional systems uniquely determine the result of the discourse understanding, i.e., the dialogue state, after each user utterance. | multiple dialogue states are created from the current dialogue state and the speech understanding results corresponding to the user utterance, which leads to ambiguity. | contrasting |
train_215 | The confusion matrix has been employed effectively in spoken document retrieval (Singhal et al., 1999; Srinivasan et al., 2000) and to minimize speech recognition errors (Shen et al., 1998). | when such a method is used directly to correct speech recognition errors, it tends to bring in too many irrelevant terms (Ng, 2000). | contrasting |
train_216 | Many spoken document retrieval (SDR) systems took advantage of this fact in reducing the speech recognition and matching errors. | to SDR, very little work has been done on Chinese spoken query processing (SQP), which is the use of spoken queries to retrieve textual documents. | contrasting |
train_217 | Suppose the user issues a speech query: " ¢ © ¤ ¥ £ ¦ §¨ " (please help me to collect some information about Bin Laden). | the result of speech recognition with errors is: " (please) (help) Note that there are 4 mis-recognized characters which are underlined. | contrasting |
train_218 | Moreover, they assumed that the input is text only, which does not contain errors. | spoken utterances include various information such as the interval between utterances, the presence of barge-in and so on, which can be utilized to judge the user's character. | contrasting |
train_219 | If a user is identified as having a high skill level, the dialogue management is carried out in a user-initiated manner; namely, the system generates only open-ended prompts. | when the user's skill level is detected as low, the system takes the initiative and prompts necessary items in order. | contrasting |
train_220 | Then the original discourse marker may not be appropriate in the revised sentence plan. | for example, consider how the application of the following revision types requires different lexicalizations for the initial discourse markers: Clause Aggregation: The merging of two main clauses into one main clause and one subordinate clause: Clause Demotion: Two main clauses are merged where one of them no longer has a clause structure: The happy man went home.¨the man was poor. | contrasting |
train_221 | This is because there is a five-year statute of limitations on that crime. | there is no statute of limitations in murder cases." | contrasting |
train_222 | In other words, almost one of every four clause revisions potentially forces a change in discourse marker lexicalizations and one in every two discourse markers occurs near a clause revision boundary. | the "penalty" associated with incorrectly selecting discourse markers is fairly high, leading to confusing sentences, although there is no cognitive science evidence that states exactly how high for a typical reader, despite recent work in this direction (Tree and Schrock, 1999). | contrasting |
train_223 | Morphemes are the smallest meaning-bearing elements of language and could be used as lexical units instead of entire words. | the construction of a comprehensive morphological lexicon or analyzer based on linguistic theory requires a considerable amount of work by experts. | contrasting |
train_224 | While the former is a rather intuitive measure, the latter may not appear as intuitive. | the proportion of hapax legomena may be interpreted as a measure of the richness of the text. | contrasting |
train_225 | This is reminiscent of Church's statement that '[t]he first mention of a word obviously depends on frequency, but surprisingly, the second does not.' | (Church, 2000) Church was concerned with language modeling, and in particular cache-based models that overcome some of the limitations introduced by a Markov assumption. | contrasting |
train_226 | Comparing different probability models in terms of their effects on classification under a Naive Bayes assumption is likely to yield very conservative results, since the Naive Bayes classifier can perform accurate classifications under many kinds of adverse conditions and even when highly inaccurate probability estimates are used (Domingos and Pazzani, 1996; Garg and Roth, 2001). | an evaluation in terms of document classification has the advantages, compared with language modeling, of computational simplicity and the ability to benefit from information about non-occurrences of words. | contrasting |
train_227 | For example, the natural overdispersed variant of the multinomial model is the Dirichlet-multinomial mixture, which adds just a single parameter that globally controls the overall variation of the entire vocabulary. | Church, Gale, and others have demonstrated repeatedly (Church and Gale, 1995; Church, 2000) that adaptation or "burstiness" are clearly properties of individual words (word types). | contrasting |
train_228 | al, 1996) is another well-known method that can perform homogeneous context extension. | Figure 2 illustrates heterogeneous context extension; in other words, this type of extension involves taking more information about other types of contextual features. | contrasting |
train_229 | This insertion/deletion scheme contributed to the simplicity of this representation of the translation processes, allowing a sophisticated application to run on an enormous bilingual sentence collection. | it is apparent that the weak modeling of those phenomena will lead to inferior performance for language pairs such as Japanese and English. | contrasting |
train_230 | All of these methods bias the training and/or decoding with phrase-level examples obtained by preprocessing a corpus (Och et al., 1999;Watanabe et al., 2002) or by allowing a lexicon model to hold phrases (Marcu and Wong, 2002). | the chunk-based translation model holds the knowledge of how to construct a sequence of chunks from a sequence of words. | contrasting |
train_231 | We observed that after iteration 103 only 10% of the patterns are "good", the rest are secondary. | in the first 103 iterations, over 90% of the patterns are good Management Succession patterns. | contrasting |
train_232 | (Thelen and Riloff, 2002) presents a very similar technique, in the same application as the one described in (Yangarber et al., 2002). | (Thelen and Riloff, 2002) did not focus on the issue of convergence, and on leveraging negative categories to achieve or improve convergence. | contrasting |
train_233 | This is a very aggressive strategy, and it is likely to adversely affect parsing accuracy. | more lenient strategies were found to require too much space for the chart to be held in memory. | contrasting |
train_234 | In all cases, the CCG derivation includes all long-range dependencies. | with the models that exclude certain kinds of dependencies, it is possible that a word is conditioned on no dependencies. | contrasting |
train_235 | For instance, the dependency between the head of a noun phrase and the head of a reduced relative clause (the shares bought by John) is captured by the SD model, since shares and bought are both heads of the local trees that are combined to form the complex noun phrase. | in the SD model the probability of this dependency can only be estimated from occurrences of the same construction, since dependency relations are defined in terms of local trees and not in terms of the underlying predicate-argument structure. | contrasting |
train_236 | Based on the relevance score, we can produce a full ranking of all the summaries in the corpus. | to (Brandow et al., 1995) who run 12 Boolean queries on a corpus of 21,000 documents and compare three types of documents (full documents, lead extracts, and ANES extracts), we measure retrieval performance under more than 300 conditions (by language, summary length, retrieval policy for 8 summarizers or baselines). | contrasting |
train_237 | Parent annotation lets us indicate an important feature of the external environment of a node which influences the internal expansion of that node. | lexicalization is a (radical) method of marking a distinctive aspect of the otherwise hidden internal contents of a node which influence the external distribution. | contrasting |
train_238 | Clearly, information about long-distance relationships is vital for semantic interpretation. | such constructions prove to be difficult for stochastic parsers (Collins et al., 1999) and they either avoid tackling the problem (Charniak, 2000;Bod, 2003) or only deal with a subset of the problematic cases (Collins, 1997). | contrasting |
train_239 | Theoretically, the 'best' way to combine the trace tagger and the parsing algorithm would be to build a unified probabilistic model. | the nature of the models is quite different: the finite-state model is conditional, taking the words as given. | contrasting |
train_240 | Conditional parsing algorithms do exist, but they are difficult to train using large corpora (Johnson, 2001). | we show that it is quite effective if the parser simply treats the output of the tagger as a certainty. | contrasting |
train_241 | The idea of threading EEs to their antecedents in a stochastic parser was proposed by Collins (1997), following the GPSG tradition (Gazdar et al., 1985). | we extend it to capture all types of EEs. | contrasting |
train_242 | Assuming that five rules on average are applied to translate a sentence, the number of sentence translations becomes 5 × C + C = 60,000 for testing all rules. | to add a rule, the entire corpus must be re-translated because it is unknown which MT results will change by adding a rule. | contrasting |
train_243 | This table shows that the test corpus BLEU score and the subjective Focusing on the subjective quality of the proposed methods, some MT results were degraded from the baseline due to the removal of rules. | the subjective quality levels were relatively improved because our methods aim to increase the portion of correct MT results. | contrasting |
train_244 | (ii) While we can obtain large parallel corpora in the long run, to have them manually word-aligned would be too time-consuming and would defeat the original purpose of getting a sense-tagged corpus without manual annotation. | are current word alignment algorithms accurate enough for our purpose? | contrasting |
train_245 | Domain Dependence The accuracy figure of M1 for each noun is obtained by training a WSD classifier on the manually sense-tagged training data (with lumped senses) provided by SENSEVAL-2 organizers, and testing on the corresponding official test data (also with lumped senses), both of which come from similar domains. | the P1 score of each noun is obtained by training the WSD classifier on a mixture of six parallel corpora, and tested on the official SENSEVAL-2 test set, and hence the training and test data come from dissimilar domains in this case. | contrasting |
train_246 | Therefore, unknown words were apt to be either concatenated as one word or divided into both a combination of known words and a single word that consisted of more than one character. | this model has the potential to correctly detect any length of unknown words. | contrasting |
train_247 | In general, the higher the OOV is, the more difficult detecting word segments and their POS categories is. | the difference between accuracies for short and long words was about 1% in recall and 2% in precision, which is not significant when we consider that the difference between OOVs for short and long words was 4%. | contrasting |
train_248 | Since the position of a word plays an important role as a syntactic constraint in English, the methods are successful even with local information. | these methods are not appropriate for chunking Korean and Japanese, because such languages have a characteristic of partially free word order. | contrasting |
train_249 | Firstly, supertags encode much more syntactic information than POS tags, which makes supertagging a useful pre-parsing tool, so-called 'almost parsing' (Srinivas and Joshi, 1999). | as the term 'supertagging' suggests, the time complexity of supertagging is similar to that of POS tagging, which is linear in the length of the input sentence. | contrasting |
train_250 | As far as supertagging is concerned, word context forms a very large space. | for each word in a given sentence, only a small part of features in the space are related to the decision on supertag. | contrasting |
train_251 | Our algorithm, which is coded in Java, takes about 10 minutes to supertag the test data with a P3 1.13GHz processor. | in (Chen, 2001), the accuracy of ¢ ¤ £ ¦ ¥ £ ¤ was achieved by a Viterbi search program that took about 5 days to supertag the test data. | contrasting |
train_252 | Based on the local context of R v ' ( P , Three can be the subject of leading. | the supertag of leading is B An, which represents a modifier of a noun. | contrasting |
train_253 | It is important to note that the accuracy of supertagging itself is much lower than that of POS tagging, while the use of supertags helps to improve the overall performance. | since the accuracy of supertagging is rather low, there is more room left for improvement. | contrasting |
train_254 | According to the parsing algorithm in Figure 2, we should theoretically arrive at the same dependency structure no matter whether we parse the sentence left to right or right to left. | this is not the case with the approximation algorithm. | contrasting |
train_255 | Inter-annotator agreement on the frames shown in Table 1 is very high. | the lemmas we considered so far were only moderately ambiguous, and we might see lower figures for frame agreement for highly polysemous FEEs like laufen (to run). | contrasting |
train_256 | The Prague Treebank reported a disagreement of about 10% for manual thematic role assignment (Žabokrtský, 2000). | in contrast to our study, they also annotated temporal and local modifiers, which are easier to mark than other roles. | contrasting |
train_257 | We suspect that annotators saw too few instances of these elements to build up a reliable intuition. | the elements may also be inherently difficult to distinguish. | contrasting |
train_258 | It is unfortunate that annotators use underspecification only infrequently, since it can indicate interesting cases of relatedness between different frames and frame elements. | underspecification may well find its main use during the merging of independent annotations of the same corpus. | contrasting |
train_259 | Although these and other researchers have suggested that nonverbal behaviors undoubtedly play a role in grounding, previous literature does not characterize their precise role with respect to dialogue state. | a number of studies on these particular nonverbal behaviors do exist. | contrasting |
train_260 | <Assertion> In both conditions, listeners look at the map most of the time, and sometimes nod. | speakers' nonverbal behavior is very different across conditions. | contrasting |
train_261 | Moreover, when a listener keeps looking at the speaker, the speaker's next UU is go-ahead only 27% of the time. | when a listener keeps looking at the map, the speaker's next UU is go-ahead 52% of the time (z = -2.049, p<.05). | contrasting |
train_262 | If we apply this model to the example in Figure 1, none of the UUs have been grounded because the listener has not returned any spoken grounding clues. | our results suggest that considering the role of nonverbal behavior, especially eye-gaze, allows a more fine-grained model of grounding, employing the UU as a unit of grounding. | contrasting |
train_263 | Consistent with previous findings (Oviatt et al., 1997), in most cases (85% of the time), gestures occurred before the referring expressions were uttered. | in 15% of the cases the speech referring expressions were uttered before the gesture occurred. | contrasting |
train_264 | In this case, the overall similarity function will return zero. | because of the iterative updating nature of the matching algorithm, the system will still find the optimal match as a result of the matching process even if some constraints are violated. | contrasting |
train_265 | The speaker first attempts to use constructions from his existing inventory to express whatever he wants to express. | when that fails or is judged unsatisfactory, the speaker may extend his existing repertoire by inventing new constructions. | contrasting |
train_266 | We assume that X is a substring of Y , i.e., that the source sentence can be obtained by deleting words from Y , so for a fixed observed sentence there are only a finite number of possible source sentences. | the number of source sentences grows exponentially with the length of Y , so exhaustive search is probably infeasible. | contrasting |
train_267 | Finally, auxiliary trees of the form (β5) generate a reparandum word Mi is inserted; the weight of such a tree is The TAG just described is not probabilistic; informally, it does not include the probability costs for generating the source words. | it is easy to modify the TAG so it does include a bigram model that does generate the source words, since each nonterminal encodes the preceding source word. | contrasting |
train_268 | That approach may do so well because many speech repairs are very short, involving only one or two words (Shriberg and Stolcke, 1998), so the reparandum, interregnum and repair are all contained in the surrounding word window used as features by the classifier. | the probabilistic model of repairs explored here seems to be most successful in identifying long repairs in which the reparandum and repair are similar enough to be unlikely to have been generated independently. | contrasting |
train_269 | It is likely that edges recently discovered by the attention shifting procedure are pruned. | the true PCFG probability model is used to prune these edges rather than the approximation used in the FOM. | contrasting |
train_270 | To produce the word-lattices, each training utterance was processed by the baseline ASR system. | these same utterances are what the acoustic and language models are built from, which leads to better performance on the training utterances than can be expected when the ASR system processes unseen utterances. | contrasting |
train_271 | Traditional concatenative speech synthesis systems use a number of heuristics to define the target and concatenation costs, essential for the design of the unit selection component. | to these approaches, we introduce a general statistical modeling framework for unit selection inspired by automatic speech recognition. | contrasting |
train_272 | This would provide improved feedback to the user about the available choices so far, guard against stilted conversations with a fixed number of dialog turns for every interaction, and mitigate repeated scenarios where user queries return no items. | much effort is then required in configuring the numerous scenarios for users to make sequences of queries in various orders. | contrasting |
train_273 | Our classifier performs well if the utterance is short and falls into one of the selected categories (86% accuracy on the British data); and it has the advantages of automatic training, domain independence, and the ability to capture a great variety of expressions. | it can be inaccurate when applied to longer utterances, and it is not yet equipped to handle domain-specific assertions, questions, or queries about a transaction. | contrasting |
train_274 | For instance, already (Smith, 1993) observed that it is safer for beginners to be closely guided by the system, while experienced users like to take the initiative which results in more efficient dialogues in terms of decreased average completion time and a decreased average number of utterances. | being able to decide when to switch from guiding a novice to facilitating an expert requires the system to be able to keep track of the user's expertise level. | contrasting |
train_275 | As mentioned above, one of the goals of the Cooperativity model is to facilitate more natural interaction by allowing the system to adapt its utterances according to the perceived expertise level. | we also want to validate and assess the usability of the three-level model of user expertise. | contrasting |
train_276 | This should contribute to a feeling of consistency and dependability. | Paris (1988) argued that the user's expertise level affects not only the amount but also the kind of information given to the user. | contrasting |
train_277 | In particular, the probability distribution over sentences can be derived from the joint probability distribution, but not from the conditional one. | the unbounded nature of the parsing problem means that the individual parameters of the discriminative model are much harder to estimate than those of the generative model. | contrasting |
train_278 | The estimation process has to guess about the future role of an unbounded number of words, which makes the estimate quite difficult. | the parameters of the generative model only include words which are either already incorporated into the structure, or are the immediate next word to be incorporated. | contrasting |
train_279 | In we argued for the use of log-linear parsing models for CCG. | estimating a log-linear model for a widecoverage CCG grammar is very computationally expensive. | contrasting |
train_280 | To solve this issue, we generally eliminate large sub-structures from the set of features used. | the main reason for using convolution kernels is that we aim to use structural features easily and efficiently. | contrasting |
train_281 | We have insufficient space to discuss this subject in detail in relation to other convolution kernels. | our proposals can be easily applied to tree kernels (Collins and Duffy, 2001) by using string encoding for trees. | contrasting |
train_282 | The coreferential chain length of a candidate, or its variants such as occurrence frequency and TFIDF, has been used as a salience factor in some learning-based reference resolution systems (Iida et al., 2003;Mitkov, 1998;Paul et al., 1999;Strube and Muller, 2003). | for an entity, the coreferential length only reflects its global salience in the whole text(s), instead of the local salience in a discourse segment which is nevertheless more informative for pronoun resolution. | contrasting |
train_283 | The model is appealing in that it can potentially overcome the limitation of mention-pair model in which dependency among mentions other than the two in question is ignored. | models in (McCallum and Wellner, 2003) compute directly the probability of an entity configuration conditioned on mentions, and it is not clear how the models can be factored to do the incremental search, as it is impractical to enumerate all possible entities even for documents with a moderate number of mentions. | contrasting |
train_284 | For training coreference classifiers and locally-optimized anaphoricity models, we use both RIPPER and MaxEnt as the underlying learning algorithms. | for training globally-optimized anaphoricity models, RIPPER is always used in conjunction with Method 1 and MaxEnt with Method 2, as described in Section 2.2. | contrasting |
train_285 | To illustrate the importance of contextual information in transliteration, let's take the name /Minahan/ as an example: the correct segmentation should be /Mi-na-han/, to be transliterated as 米-纳-汉 (Pinyin: Mi-Na-Han). | a possible segmentation /Min-ah-an/ could lead to an undesirable syllabication of 明-阿-安 (Pinyin: Min-A-An). | contrasting |
train_286 | In other words, on average, for each English unit, we have 1.53 = 5,640/3,683 Chinese correspondences. | for each Chinese unit, we have 15.1 = 5,640/374 English back-transliteration units! | contrasting |
train_287 | The segmentation step correctly segmented the romanji to "matsu-da". | in the Unihan database, 14 At this rate, checking the 21 million combinations remaining after filtering with bigrams using the Web (without the corpus filtering step) would take more than a year. | contrasting |
train_288 | From a biological point of view, there are two problems with such approaches: 1) the meaning of the extracted events will depend strongly on the selectional restrictions and 2) the same meaning can be expressed using a number of different verbs. | and, like (Friedman et al., 2001), we instead set out to handle only one specific biological problem and, in return, extract the related events with their whole range of syntactic variations. | contrasting |
train_289 | to the number of standard deviations the score differs from the mean of the scores of the unattributed samples. | this renormalization only makes sense in the situation that we have a fixed set of authors who each produced one text for each topic. | contrasting |
train_290 | The combination of the best two individual systems leads to an FAR at FRR=0 of 10.3%, a solid improvement over lexical features by themselves. | the best individual systems are not necessarily the best combiners. | contrasting |
train_291 | Above, we focused on the authorship verification task, since it is the harder problem, given that the potential group of authors is unknown. | as mentioned in Section 2, previous work with this data has focused on the authorship recognition problem, to be exact on selecting the correct author out of two potential authors. | contrasting |
train_292 | Formal aspects of a summary (or report), such as legibility, grammatical correctness, informativeness, etc., can only be evaluated manually. | automatic evaluation metrics can play a useful role in the evaluation of how well the information from the original sources is preserved (Mani, 2001). | contrasting |
train_293 | While the ROUGE measure remains stable (0.53 versus 0.54), the key concept similarity is much worse with IE topics (0.52 versus 0.77). | all baselines improve, and some of them (SentenceSim precision and perplexity) give better results than both ROUGE and NICOS. | contrasting |
train_294 | By reducing the tag-set considered by the parsing model, we reduce the search space and increase the speed. | the simple tagger used to narrow the search also introduces tagging error. | contrasting |
train_295 | Further improvement is seen with the combination of acoustic, parsing, and trigram scores (α = 1/16, β = 1). | the combination of the parsing model (trained on 1M words) with the lattice trigram (trained on 40M words) resulted in a higher WER than the lattice trigram alone. | contrasting |
train_296 | This translation clarified the precise relationship between these two related formalisms, and made the powerful meta-theory of dominance constraints accessible to MRS. Their goal was to also make the large grammars for MRS and the efficient constraint solvers for dominance constraints available to the other formalism. | Niehren and Thater made three technical assumptions: 1. that EP-conjunction can be resolved in a preprocessing step; 2. that the qeq relation in MRS is simply dominance; 3. and (most importantly) that all linguistically correct and relevant MRS expressions belong to a certain class of constraints called nets. | contrasting |
train_297 | In unsupervised models, these coefficients are assumed to be known. | when labeled documents are available, it may be advantageous to estimate them. | contrasting |
train_298 | As with document-level polarity classification, we could perform subjectivity detection on individual sentences by applying a standard classification algorithm on each sentence in isolation. | modeling proximity relationships between sentences would enable us to leverage coherence: text spans occurring near each other (within discourse boundaries) may share the same subjectivity status, other things being equal (Wiebe, 1994). | contrasting |
train_299 | Assuming that one had an accurate WSD system then one could obtain frequency counts for senses and rank them with these counts. | the most accurate WSD systems are those which require manually sense tagged data in the first place, and their accuracy depends on the quantity of training examples (Yarowsky and Florian, 2002) available. | contrasting |
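Rows in this table can be loaded programmatically. A minimal sketch, assuming the table is saved as plain text in the same pipe-delimited layout shown above (the inline row list is illustrative, not part of the dataset; cells containing a literal `|` would need escaping, which this sketch does not handle):

```python
# Parse pipe-delimited rows of the form
#   id | sentence1 | sentence2 | label |
# into dicts, skipping the header and the ---|--- separator line.

COLUMNS = ["id", "sentence1", "sentence2", "label"]

def parse_rows(lines):
    records = []
    for line in lines:
        line = line.strip()
        # Skip blank lines and the separator row (made only of '-' and '|').
        if not line or set(line) <= {"-", "|"}:
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        # Keep only well-formed data rows; this filters out the header too.
        if len(cells) != len(COLUMNS) or not cells[0].startswith("train_"):
            continue
        records.append(dict(zip(COLUMNS, cells)))
    return records

rows = parse_rows([
    "id | sentence1 | sentence2 | label |",
    "---|---|---|---|",
    "train_200 | sent A | sent B | contrasting |",
])
print(rows[0]["label"])  # contrasting
```

Filtering `records` by `label` then yields, for example, all "contrasting" pairs for training a discourse-relation classifier.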