|
{ |
|
"paper_id": "I17-1040", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:38:12.888336Z" |
|
}, |
|
"title": "Identifying Usage Expression Sentences in Consumer Product Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Shibamouli", |
|
"middle": [], |
|
"last": "Lahiri", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Michigan", |
|
"location": {} |
|
}, |
|
"email": "lahiri@umich.edu" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"G Vinod" |
|
], |
|
"last": "Vydiswaran", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Michigan", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Michigan", |
|
"location": {} |
|
}, |
|
"email": "mihalcea@umich.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we introduce the problem of identifying usage expression sentences in a consumer product review. We create a human-annotated gold standard dataset of 565 reviews spanning five distinct product categories. Our dataset consists of more than 3, 000 annotated sentences. We further introduce a classification system to label sentences according to whether or not they describe some \"usage.\" The system combines lexical, syntactic, and semantic features in a product-agnostic fashion to yield good classification performance. We show the effectiveness of our approach using importance ranking of features, error analysis, and cross-product classification experiments.", |
|
"pdf_parse": { |
|
"paper_id": "I17-1040", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we introduce the problem of identifying usage expression sentences in a consumer product review. We create a human-annotated gold standard dataset of 565 reviews spanning five distinct product categories. Our dataset consists of more than 3, 000 annotated sentences. We further introduce a classification system to label sentences according to whether or not they describe some \"usage.\" The system combines lexical, syntactic, and semantic features in a product-agnostic fashion to yield good classification performance. We show the effectiveness of our approach using importance ranking of features, error analysis, and cross-product classification experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Identification of usage expressions -phrases or sentence snippets describing product use in reviews -is an important problem in mining consumer product reviews. Identifying such usage expressions accurately allows us to view the relationship between consumers and products more clearly (e.g., by indicating how frequently a consumer uses a product). Further, the language and style employed in describing product use bring relevant and unseen aspects of the products to the fore (e.g., describing usage of a product in nontraditional and unique ways).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Usage expressions can take several forms, such as which aspects of the product are used, why the product is used, where it is used, how it is used, when it is used, and so forth (c.f. Section 3 for specific examples). The product could be used by a consumer in a number of ways, sometimes in unique ways not intended for originally. Hence enumerating all possible uses of a product is computationally intractable. In this paper, therefore, we focus on four specific cases of product usage: why the product is used, where it is used, how it is used, and if there are any non-standard or nontraditional use (cf. Section 3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While the relationship between product usage and consumer behavior has mostly been discussed by marketing researchers and psychologists, the question of whether the phenomenon of usage has any detectable signature in terms of the language used by consumers has not been addressed thus far. In this paper, we introduce the task of identifying usage expressions from consumer product reviews. In particular, we focus on classifying review sentences as to whether they contain a usage expression or not. We create our own humanannotated corpus of 565 reviews on five distinct product categories containing more than 3000 sentences. We introduce a system that classifies sentences according to whether they contain a usage expression or not with 87.2% accuracy. We also show that an appropriate combination of lexical, syntactic, and semantic features performs better than individual feature categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Existing research could be organized into six self-consistent psycho-sociological theories, namely psycho-analysis, social theories, stimulusresponse theories, trait and factor theories, self-theories, and life style theories. Kassarjian (1971) offers a comprehensive review of the literature on consumer behavior and psychological traits. Robertson and Myers (1969) found weak relationships between opinion leadership and innovative buying behavior, but observed that the relationship strength varied by product category. Painter (1961), and Sparks and Tucker (1971) showed that there were correlations between personality traits and the types of products used. Dolich (1969) posited that products as symbols were organized into congruent relationships with the consumer's self-image. More recently, Govers and Schoormans (2005) found that people preferred products with a product personality that matched their self-image, and the positive effect of product-personality congruence was independent of user-image congruence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 244, |
|
"text": "Kassarjian (1971)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 340, |
|
"end": 366, |
|
"text": "Robertson and Myers (1969)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 542, |
|
"text": "Painter (1961), and", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 553, |
|
"text": "Sparks and", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 554, |
|
"end": 567, |
|
"text": "Tucker (1971)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 663, |
|
"end": 676, |
|
"text": "Dolich (1969)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 801, |
|
"end": 829, |
|
"text": "Govers and Schoormans (2005)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In natural language processing research, the closest problem to usage expressions is perhaps that of opinion mining from product reviews and product aspects. Dave et al. (2003) classified reviews as expressing positive or negative sentiment. They identified four problems with review classification, including rating inconsistency, ambivalence, data sparseness, and skewed distribution. Hu and Liu (2004) extracted product features from the reviews of a single product, taking user opinion into account. Opinion/product features were mined if a reviewer had commented on them. Popescu and Etzioni (2005) presented OPINE, an unsupervised information extraction system that mined reviews in order to build a model of important product features, their evaluation by reviewers, and their relative quality across products. OPINE's use of relaxation labeling led to strong performance on the tasks of finding opinion phrases and their polarity. Ding et al. (2008) presented a \"holistic lexicon-based approach\" for mining context-dependent opinion words. The proposed method used an aggregating function for multiple conflicting opinion words in a sentence. The authors further implemented a system called \"Opinion Observer\" based on their method. Lastly, Wu et al. (2009) implemented a special dependency parser for opinion mining that used phrases (rather than words) as the primitive building blocks. Since many product features are in fact phrases, this approach led to good results for extracting relations between product features and opinion expressions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 176, |
|
"text": "Dave et al. (2003)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 404, |
|
"text": "Hu and Liu (2004)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 577, |
|
"end": 603, |
|
"text": "Popescu and Etzioni (2005)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 939, |
|
"end": 957, |
|
"text": "Ding et al. (2008)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1249, |
|
"end": 1265, |
|
"text": "Wu et al. (2009)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Yet another related task is that of mining semantic affordances (Chao et al., 2015) . In this task, \"usage\" of a product can be viewed as an action performed on an object with the help of the product. Relationships between such actions and objects are known as \"semantic affordances\". As Chao et al. showed, text mining can be very effective at ascertaining affordance relationships between verb and noun classes. Similar verbnoun relationships have also been formulated in the problem of learning selectional preferences from text (Resnik, 1997; Brockmann and Lapata, 2003; Erk, 2007; Pantel et al., 2007; Bergsma et al., 2008; Van de Cruys, 2014) , and more generally, in the problem of probabilistic frame induction (Chambers and Jurafsky, 2011; Cheung et al., 2013; Chen et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 83, |
|
"text": "(Chao et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 299, |
|
"text": "Chao et al.", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 546, |
|
"text": "(Resnik, 1997;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 547, |
|
"end": 574, |
|
"text": "Brockmann and Lapata, 2003;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 585, |
|
"text": "Erk, 2007;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 586, |
|
"end": 606, |
|
"text": "Pantel et al., 2007;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 607, |
|
"end": 628, |
|
"text": "Bergsma et al., 2008;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 629, |
|
"end": 648, |
|
"text": "Van de Cruys, 2014)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 719, |
|
"end": 748, |
|
"text": "(Chambers and Jurafsky, 2011;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 749, |
|
"end": 769, |
|
"text": "Cheung et al., 2013;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 770, |
|
"end": 788, |
|
"text": "Chen et al., 2013)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Another topic of research related to our work is the problem of research idea extraction from academic papers. Gupta and Manning (2011) took the first stab at this problem by implementing a bootstrapping algorithm on dependency tree kernels. Gupta and Manning's method was later refined by Tsai et al. (2013) who worked with a more crisp set of idea categories. We view this problem as conceptually parallel to ours; however, a key difference is that usage expressions are typically more obscure in text as compared to research ideas.", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 135, |
|
"text": "Gupta and Manning (2011)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 308, |
|
"text": "Tsai et al. (2013)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Product reviews often contain usage information. Specifically, in addition to opinions on product quality, reviewers often share how, where, or why they use the product. We therefore build our dataset of product usage expressions starting with a collection of product reviews.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We collect Amazon product reviews for five different product categories, as shown in Table 1 With the help of three linguistics undergraduate students, each sentence in the dataset was annotated as containing a usage expression or not. Initially, as an early trial, we asked the annota- tors to indicate if a sentence contained a usage expression. This approach led to low inter-annotator agreement, so we refined the annotation process to a two-step process as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 92, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the first step, we instructed the annotators to read each product review carefully, identify all usage expressions in the review (examples below), and write them in a given textbox, one usage expression per line. Annotators were requested to write the usage expressions in their own words. This component was employed to make sure annotators carefully read and understood the review.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The second step involved answering the following four questions on usage types:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(A) Does the sentence describe why the product was being used? (usage reason/purpose) E.g., \"I used unstopables to freshen my room.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(B) Does the sentence describe where the product was used? E.g., \"I used unstopables with my cat litter.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(C) Does the sentence describe how the product was used? E.g., \"I use three cups of Downy Unstopables in every wash.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(D) Does the sentence describe any non-traditional or non-standard usage of the product? E.g., \"I always love to add some hot water to unstopables and make my own DIY air freshener !\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "If a sentence had a positive answer to one or more of these four questions, then it was labeled as containing a usage expression. 1 Additionally, several specific instructions were added to deal with potentially difficult or complex cases, by asking annotators to (1) consider the context (one sentence before and after the target sentence) before deciding whether to mark a sentence or not. (2) determine if a sentence contains an opinion (\"Love it\", \"Hate it\", etc.) or a recommendation (\"I'd recommend this product to all aspiring gardeners\"), and if so, pairing it with an explicit usage expression in some form. 3determine if a sentence talks about usage of another product that is not the primary focus of the review (i.e., a secondary product), then mark the sentence only if the primary product is being used in addition to the secondary product. (4) determine if the secondary product is used instead of the primary product: \"Unstopables were not good, so I used sheets instead.\", or if only the secondary product was used: \"I used sheets, they are better.\" then do not label the sentence. (5) focus only on products, and ignore other (named) entities like persons, organizations, locations, and dates. Table 2 shows an example product review, and sentences that were agreed upon by all annotators to contain, or not, a usage expression. We also show sentences on which there was no consensus. Note that such sentences have a fair amount of ambiguity. For example, the sentence \"I do recommend this for times when you may want extra freshness for your clothes or towels.\" does not seem to contain an explicit usage expression, but it does indicate that the consumer used the product to obtain extra freshness for clothes or towels. Sentences like this demonstrate the difficulty of identifying usage expressions in product reviews.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 131, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1212, |
|
"end": 1219, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Inter-annotator agreement values, shown in Table 3, indicate that the task is moderately difficult. We can see that different products have different difficulty levels, with Vinegar being the least difficult (highest A 3 agreement as well as highest \u03ba), while for the other four products, \u03ba was between 0.43 and 0.48. This is presumably owing to the fact that Vinegar is a cooking agent and used in many different ways, thus providing more opportunity to find a usage sentence (by several people) in a product review.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To construct a gold standard, we took the majority of the three votes assigned by the three anno-Sample Review I used this recently when I washed my blankets and towels, and I was definitely impressed. Just a small amount (half a capful) was necessary to give my blankets and towels an extra burst of freshness. The scent is a little bit floral and lasts for a few days. I put the Downy booster directly into the washer. (Instructions say NOT to put in your dispenser) And it does work fine with high efficiency washers. I do recommend this for times when you may want extra freshness for your clothes or towels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Usage annotations (agreed by all) I used this recently when I washed my blankets and towels, and I was definitely impressed. Just a small amount (half a capful) was necessary to give my blankets and towels an extra burst of freshness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Non-usage annotations (agreed by all) The scent is a little bit floral and lasts for a few days.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Mixed usage/non-usage annotations I put the Downy booster directly into the washer. (Instructions say NOT to put in your dispenser) And it does work fine with high efficiency washers. I do recommend this for times when you may want extra freshness for your clothes or towels. tators to each sentence. There were 36 sentences (1.19% of all sentences) that did not have a majority. One of the authors manually arbitrated these sentences into \"usage\" (n = 22) and \"not usage\" (n = 14) classes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building a Usage Expression Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Once the annotated dataset was finalized, our primary goal was to build a classifier to predict if a given sentence contains usage expressions or not. We learn the classifier over five categories of features extracted from the sentence and neighboring context. In this paper, we show the performance using a logistic regression classifier, chosen based on its performance on a small development dataset of usage-annotated sentences drawn from 20 product reviews. The following features are included: (A) Lexical features: As n-grams are usually very helpful in document classification, we explore their utility on the task of usage expression sentence classification. We use word unigrams and bigrams, part-of-speech (POS) bigrams, and character trigrams. We use the CRFTagger (Phan, 2006) for POS tagging. (B) Embeddings: Embeddings encode latent semantics and could reflect usage patterns. We train a word embedding using word2vec (Mikolov et al., 2013 ) over a large corpus of 55, 463 product reviews. This corpus is constructed from all Amazon reviews associated with any product that has \"Unstopables\", \"Olive oil\", \"Vinegar\", \"Aspirin\", or \"Toothpaste\" in its title. Once the word embedding is trained, a sentence is represented by the weighted average of the embeddings of all the unique words in it. (C) Syntax: We use bags of constituency and dependency production rules, obtained from the output of the Stanford parser (Klein and Manning, 2003; Chen and Manning, 2014) . For constituency grammar, we use terminal and non-terminal rules separately as well as together. For the dependency grammar, we use the (collapsed) dependency types (amod, nsubj, etc.), and the lexicalized dependencies (e.g., (nsubj, Kirkland, seems)) as separate features. (D) Style: We extract thirteen shallow surfacelevel and style features to encode the stylistic properties of a sentence, in the hope that they would be predictive of whether the sentence contains a usage expression. These features are: sentence position, average word length (in chars), sentence length (in words and characters), typetoken ratio, Flesch Reading Ease (Flesch, 1948; Farr et al., 1951) , Automated Readability Index (Senter and Smith, 1967) , Flesch-Kincaid Grade Level (Kincaid et al., 1975) , Coleman-Liau Index (Coleman and Liau, 1975 ), Gunning Fog Index (Gunning, 1968) , SMOG Score (McLaughlin, 1969) , Formality (Heylighen and Dewaele, 1999) , and Lexical Density (Ure, 1971) . (E) Semantics: Since usage is above all a semantic phenomenon, a semantic space should be able to capture the dominant properties of the usage expression. We use the following feature sets to capture a semantic space for a sentence. Each feature set effectively describes a lexicon, and we turn \"on\" the features in the lexicon that are present in the target sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 777, |
|
"end": 789, |
|
"text": "(Phan, 2006)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 933, |
|
"end": 954, |
|
"text": "(Mikolov et al., 2013", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1429, |
|
"end": 1454, |
|
"text": "(Klein and Manning, 2003;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1455, |
|
"end": 1478, |
|
"text": "Chen and Manning, 2014)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 2122, |
|
"end": 2136, |
|
"text": "(Flesch, 1948;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 2137, |
|
"end": 2155, |
|
"text": "Farr et al., 1951)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 2186, |
|
"end": 2210, |
|
"text": "(Senter and Smith, 1967)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 2213, |
|
"end": 2262, |
|
"text": "Flesch-Kincaid Grade Level (Kincaid et al., 1975)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2284, |
|
"end": 2307, |
|
"text": "(Coleman and Liau, 1975", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 2329, |
|
"end": 2344, |
|
"text": "(Gunning, 1968)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 2358, |
|
"end": 2376, |
|
"text": "(McLaughlin, 1969)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 2389, |
|
"end": 2418, |
|
"text": "(Heylighen and Dewaele, 1999)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 2441, |
|
"end": 2452, |
|
"text": "(Ure, 1971)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finding Usage Expression Sentences", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "1. Product categories: This feature set consists of the list of product categories obtained from the Walmart API. 2 We use both main categories and sub-categories.", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 115, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finding Usage Expression Sentences", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The set of words, along with their concreteness scores, available as part of the Free Association Norms Database (Nelson et al., 1998) . There are more than 3,000 words available as part of the database.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 134, |
|
"text": "(Nelson et al., 1998)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concreteness:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "3. Table 3 : Majority label statistics, and three-way inter-annotator agreement. A 3 is the % of sentences where all three annotators agreed. \u03ba is the Fleiss' kappa among three annotators (Fleiss, 1971) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 202, |
|
"text": "(Fleiss, 1971)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Concreteness:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "verb alternations, leading to four types of features (Levin, 1993) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 66, |
|
"text": "(Levin, 1993)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concreteness:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Like Levin classes, we included another set of features derived from the LIWC dictionary of psychological word categories (Tausczik and Pennebaker, 2010).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LIWC:", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "5. Semantic lexicons: Like Levin classes, we use the Roget thesaurus and WordNet Affect word categories, with a binary feature representation. If a word falls under any of the Roget word categories, the corresponding feature is set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LIWC:", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "We use the Stanford NER (Finkel et al., 2005) to identify named entities in our corpus, and then use these entities as bag-of-features. We use the terms, the entity types, and the lexicalized entity types (terms + entities) as our bags. Standard tf, tfidf, and binary representations are used. We use the seven-class typology of named entities (Location, Person, Organization, Money, Percent, Date, Time).", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 45, |
|
"text": "(Finkel et al., 2005)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entities:", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Recent studies have shown prepositions to be a precious source of semantic information (Srikumar and Roth, 2013; Schneider et al., 2015 Schneider et al., , 2016 . We use a lexicon of spatial prepositions 3 as a bagof-words feature. The rationale was to observe if spatial properties of usage of objects (\"use olive oil with celery\", \"put detergent in washer\") can be captured in terms of prepositions such as on, in, by, with, etc.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 112, |
|
"text": "(Srikumar and Roth, 2013;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 113, |
|
"end": 135, |
|
"text": "Schneider et al., 2015", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 160, |
|
"text": "Schneider et al., , 2016", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Spatial Prepositions:", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "8. Semantic Distance: Finally, we added the 3 Obtained by combining the two lists at https://owl. english.purdue.edu/owl/resource/594/04/ and http://www.firstschoolyears.com/ literacy/sentence/grammar/prepositions/ resources/Spatial%20Prepositions%20word% 20bank.pdf.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Spatial Prepositions:", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "(weighted) WordNet distance 4 between all words and the verb use, where weights are set as binary, tf, and tfidf, as before. The rationale behind this feature is that it captures words similar to the verb use in the sentence, and their relative importance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Spatial Prepositions:", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "We use the dataset introduced in Section 3 to evaluate the accuracy of the usage detection classifier. 20% of the data for each product is held out as test data, and the remaining 80% is used for training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We start by evaluating each individual feature using a ten-fold cross-validation on the training data. We then explore three combination methods, applied on a subset of seven feature sets, selected based on their performance and diversity: word unigrams, POS bigrams, character trigrams, embeddings, constituency rules, product categories, and concreteness. We combine these features through: classifier voting, where we assign the class predicted by the majority of the classifiers; feature fusion, where we join all the individual features into one feature vector used in the classification; and meta-learning, where we use the output of the individual classifiers as input into another classifier (again using logistic regression for the meta-learner). Table 4 shows the results of these evaluations. As seen in the table, while simple features, such as word n-grams and character trigrams, lead to the best performance among the individual features, better performance is obtained when they are combined with other features (bottom rows of Table 4 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 756, |
|
"end": 763, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1044, |
|
"end": 1051, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The meta-learner based combination strategy resulted in the best performing classifier during the cross-validation experiments on training data. We next evaluate this classifier on the test data consisting of 20% reviews of all five products. Table 5 : Micro-averaged sentence-level results (%) on the test set (20% of all products). Maximum value in each column is boldfaced.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 250, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "comparison, the table also shows the performance of the word unigram classifier, as well as a majority class baseline that labels every sentence as \"non-usage.\" As before, the meta-learner significantly improves over the unigram classifier, 5 and also over the majority class baseline. 6 We also report the performance of the metalearner classifier on individual products in Table 6 . Across all the products, vinegar appears to have the highest F-score. This can be partly explained by the high inter-annotator agreement: the same product had the highest three-way agreement in the manual annotations, as shown in Table 3 , likely an indication of a less difficult dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 286, |
|
"end": 287, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 375, |
|
"end": 383, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 616, |
|
"end": 623, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To gain further insights, we perform several additional analyses, to determine: the role played by different features; the relation between classifier performance and amount of training data; the role of in-domain vs. cross-domain classification; and Table 6 : Micro-averaged sentence-level results (%) per product using the meta learner. finally the types of errors produced by the system. Table 7 shows the top features (ranked by their Gini importance (Breiman et al., 1984) ) for three prominent individual feature-based classifiersviz. word unigrams, category words, and concreteness -and the meta-learner. Note that topranking words include product properties (smell), secondary objects on which the product was used (clothes), how the product was used (day, daily, drink, water), usage verbs (use), prepositions and conjunctions (and, for, with), pronouns (i, it, this), and articles (a, the). For the meta learner, lexical features (character trigrams and word unigrams) and embedding features (Word2vec) are among the top-ranked feature classes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 455, |
|
"end": 477, |
|
"text": "(Breiman et al., 1984)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 258, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 398, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Additional Analyses", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Next, we experiment with varying the size of the training data to understand the learning curve. We gradually increased the amount of training data from 10% to 80%, in steps of 5%; and evaluated on the full test data. Figure 1 shows the variation of F-score achieved by the meta-learner as Table 8 : Cross-domain classification: Microaveraged sentence-level results (%), where test set is an individual product, and training set is four other products. Maximum value in each column is boldfaced.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 226, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 297, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning Curve", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "the training data is increased, smoothed over three consecutive data points. The test performance was the highest when trained on 60% of training data and then decreased gradually, which suggests that the system might not benefit from additional training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Curve", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "To understand the role played by in-domain data, we further experiment with two different configurations of training and test sets. In one configuration, we train on four products, and test on the remaining product (cross-domain training). As can be seen from Table 8 , this results in lower F-scores than Table 5 . This suggests that identifying usage expressions of a product is intimately related to the identity of the product, echoing the findings by Govers and Schoormans (2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 456, |
|
"end": 484, |
|
"text": "Govers and Schoormans (2005)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 267, |
|
"text": "Table 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 313, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Role of In-Domain Data", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "In the second configuration, we train on 80% of a product, and test on 20% of the same product (in-domain training). The results, averaged over the five products, are shown in Table 9 . Note that the F-score values are much improved compared to the previous configuration, and are comparable to the results shown in Table 5 . This suggests that when storage/memory might be a concern, we could simply use training data from within the do- Table 9 : In-domain classification: Micro-averaged sentence-level results (%), where test set is 20% of an individual product, and training set is 80% of the same product. Maximum value in each column is boldfaced.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 183, |
|
"text": "Table 9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 323, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 439, |
|
"end": 446, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Role of In-Domain Data", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "main to achieve comparable performance. This strategy also results in a faster training time and a smaller model, similar to the findings in (Bucilu\u01ce et al., 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 163, |
|
"text": "(Bucilu\u01ce et al., 2006)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Role of In-Domain Data", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Finally, we also conducted a manual inspection of two broad categories of errors -false positives, i.e. \"not usage\" sentences marked as \"usage\" (n = 25), and false negatives, i.e. \"usage\" sentences marked as \"not usage\" (n = 56). This analysis revealed the following sub-categories for the false positives:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "\u2022 Number expressions: Seven instances (29.17%) of errors can be attributed to numeric expressions occurring within sentences (\"two years\", \"3am\", \"third bottle\", etc.).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "\u2022 Erroneous gold labels: Six instances (25%) were actually correctly labeled as \"usage\" by the system, whereas the gold label was wrong (\"I really love the smell of fresh laundry, and the smell of Downy.\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "\u2022 Shortcomings: Six examples (25%) talk about actual or perceived shortcoming(s) of a product. \"Olive oil used for healthy properties doesn't keep well in plastic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.4" |
|
}, |
|
|
{ |
|
"text": "\u2022 Others: Five instances (20.83%) were not captured by the above categories: \"I used to drink a small shot each day, but haven't for a while.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "False negatives have the following subcategories:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "\u2022 Positive adjectives and adverbs: 21 instances (37.5%) can be attributed to positive adjectives (\"good\", \"great\", \"excellent\"), and/or positive adverbs (\"really\", \"impressively\", \"well\"). \"It smells amazing and lasts forever.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "\u2022 Use-related verb in primary clause: Eleven examples (19.64%) contain a use-related verb (\"use\", \"help\", \"need\") in the primary clause: \"I use this to eat, not to cook with.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "\u2022 Erroneous gold labels: Nine instances (16.07%) are actually correctly labeled as \"not usage\" by the system, but the gold label was wrong (\"When I have to hang dry clothes, they get this horrible egg water odor.\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "\u2022 Non-traditional usage: There are three instances (5.36%) that talk about nontraditional or innovative usage of a product: \"I have since made small sachet bags for my closets, car and as gifts.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "\u2022 Others: Twelve instances (21.43%) were not captured by the above categories: \"I actually saw results after the first use.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "In this paper, we introduced the task of identifying usage expression sentences in consumer product reviews. A dataset comprising more than 3, 000 annotated sentences was created from reviews of five products. We also trained a binary classifier to identify sentences that talk about the usage of a product. Extensive feature tuning and fusion experiments resulted in performance values comparable to the inter-annotator agreement. Detailed feature ranking, error analysis, and per-product performance numbers have been reported. Directions for future research include: experiments on a larger dataset of reviews with more diverse product types, expanding to other genres of reviews such as product blogs, and identifying types of usage expressions (how, where, why, and nontraditional uses). The work can also be extended to model the \"personality\" of a product with the \"personality\" of users -perhaps measured by the average personality of all people using the target product. The annotated dataset is publicly available for research use from http://lit.eecs. umich.edu/downloads.html.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Note that in this paper, we ignore the different ways of product usage (why, where, how, non-traditional), but we plan to utilize the detailed annotations in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://developer.walmartlabs.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the Wu-Palmer similarity(Wu and Palmer, 1994).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Charles Welch, Aparna Garimella, Erin Donahue, Katie Cox, and Michelle Huang for their help with the annotations; Srayan Datta and Soumik Mandal for many helpful discussions and ideas. This material is based in part upon work supported by the Michigan Institute for Data Science. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the Michigan Institute for Data Science.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Discriminative Learning of Selectional Preference from Unlabeled Text", |
|
"authors": [ |
|
{ |
|
"first": "Shane", |
|
"middle": [], |
|
"last": "Bergsma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Randy", |
|
"middle": [], |
|
"last": "Goebel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "59--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shane Bergsma, Dekang Lin, and Randy Goebel. 2008. Discriminative Learning of Selectional Preference from Unlabeled Text. In Proceedings of the 2008 Conference on Empirical Methods in Natural Lan- guage Processing, pages 59-68, Honolulu, Hawaii. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Classification and Regression Trees", |
|
"authors": [ |
|
{ |
|
"first": "Leo", |
|
"middle": [], |
|
"last": "Breiman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jerome", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Friedman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Olshen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Stone", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. 1984. Classification and Re- gression Trees. Wadsworth and Brooks, Monterey, CA.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Evaluating and Combining Approaches to Selectional Preference Acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Carsten", |
|
"middle": [], |
|
"last": "Brockmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "27--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carsten Brockmann and Mirella Lapata. 2003. Eval- uating and Combining Approaches to Selectional Preference Acquisition. In Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics -Volume 1, EACL '03, pages 27-34, Stroudsburg, PA, USA. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Model Compression", |
|
"authors": [ |
|
{ |
|
"first": "Cristian", |
|
"middle": [], |
|
"last": "Bucilu\u01ce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Caruana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandru", |
|
"middle": [], |
|
"last": "Niculescu-Mizil", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "535--541", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cristian Bucilu\u01ce, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model Compression. In Proceedings of the 12th ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining, KDD '06, pages 535-541, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Template-Based Information Extraction without the Templates", |
|
"authors": [ |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "976--986", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2011. Template-Based Information Extraction without the Templates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 976-986, Portland, Oregon, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Mining Semantic Affordances of Visual Object Categories", |
|
"authors": [ |
|
{ |
|
"first": "Yu-Wei", |
|
"middle": [], |
|
"last": "Chao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4259--4267", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu-Wei Chao, Zhan Wang, Rada Mihalcea, and Jia Deng. 2015. Mining Semantic Affordances of Vi- sual Object Categories. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 4259-4267.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A Fast and Accurate Dependency Parser using Neural Networks", |
|
"authors": [ |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "740--750", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danqi Chen and Christopher Manning. 2014. A Fast and Accurate Dependency Parser using Neural Net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Unsupervised induction and filling of semantic slots for spoken dialogue systems using frame-semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Yun-Nung", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"Yang" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Rudnicky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "IEEE Workshop on Automatic Speech Recognition and Understanding", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "120--125", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yun-Nung Chen, William Yang Wang, and Alexan- der I. Rudnicky. 2013. Unsupervised induction and filling of semantic slots for spoken dialogue systems using frame-semantic parsing. In 2013 IEEE Work- shop on Automatic Speech Recognition and Under- standing, Olomouc, Czech Republic, December 8- 12, 2013, pages 120-125.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Probabilistic Frame Induction", |
|
"authors": [ |
|
{ |
|
"first": "Jackie Chi Kit", |
|
"middle": [], |
|
"last": "Cheung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hoifung", |
|
"middle": [], |
|
"last": "Poon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucy", |
|
"middle": [], |
|
"last": "Vanderwende", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "837--846", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jackie Chi Kit Cheung, Hoifung Poon, and Lucy Van- derwende. 2013. Probabilistic Frame Induction. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 837-846, Atlanta, Georgia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A computer readability formula designed for machine scoring", |
|
"authors": [ |
|
{ |
|
"first": "Meri", |
|
"middle": [], |
|
"last": "Coleman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Liau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "Journal of Applied Psychology", |
|
"volume": "60", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meri Coleman and T. L. Liau. 1975. A computer read- ability formula designed for machine scoring. Jour- nal of Applied Psychology, 60(2):283.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A Neural Network Approach to Selectional Preference Acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Van De Cruys", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "26--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tim Van de Cruys. 2014. A Neural Network Approach to Selectional Preference Acquisition. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 26-35, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Mining the Peanut Gallery: Opinion Extraction and Semantic Classification of Product Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Kushal", |
|
"middle": [], |
|
"last": "Dave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Lawrence", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Pennock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Twelfth International World Wide Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "519--528", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kushal Dave, Steve Lawrence, and David M. Pennock. 2003. Mining the Peanut Gallery: Opinion Ex- traction and Semantic Classification of Product Re- views. In Proceedings of the Twelfth International World Wide Web Conference, WWW 2003, Budapest, Hungary, May 20-24, 2003, pages 519-528.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A Holistic Lexicon-Based Approach to Opinion Mining", |
|
"authors": [ |
|
{ |
|
"first": "Xiaowen", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 International Conference on Web Search and Data Mining, WSDM '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "231--240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaowen Ding, Bing Liu, and Philip S. Yu. 2008. A Holistic Lexicon-Based Approach to Opinion Min- ing. In Proceedings of the 2008 International Con- ference on Web Search and Data Mining, WSDM '08, pages 231-240, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Congruence Relationships Between Self Images and Product Brands", |
|
"authors": [ |
|
{ |
|
"first": "Ira", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Dolich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1969, |
|
"venue": "Journal of Marketing Research", |
|
"volume": "6", |
|
"issue": "1", |
|
"pages": "80--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ira J. Dolich. 1969. Congruence Relationships Be- tween Self Images and Product Brands. Journal of Marketing Research, 6(1):80-84.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A Simple, Similarity-based Model for Selectional Preferences", |
|
"authors": [ |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "216--223", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katrin Erk. 2007. A Simple, Similarity-based Model for Selectional Preferences. In Proceedings of the 45th Annual Meeting of the Association of Compu- tational Linguistics, pages 216-223, Prague, Czech Republic. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Simplification of Flesch Reading Ease Formula", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Farr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Jenkins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1951, |
|
"venue": "Journal of applied psychology", |
|
"volume": "35", |
|
"issue": "5", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James N. Farr, James J. Jenkins, and Donald G. Pater- son. 1951. Simplification of Flesch Reading Ease Formula. Journal of applied psychology, 35(5):333.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling", |
|
"authors": [ |
|
{ |
|
"first": "Jenny", |
|
"middle": [ |
|
"Rose" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "363--370", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating Non-local Informa- tion into Information Extraction Systems by Gibbs Sampling. In Proceedings of the 43rd Annual Meet- ing on Association for Computational Linguistics, ACL '05, pages 363-370, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Measuring Nominal Scale Agreement Among Many Raters", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Fleiss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1971, |
|
"venue": "Psychological Bulletin", |
|
"volume": "76", |
|
"issue": "5", |
|
"pages": "378--382", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph L. Fleiss. 1971. Measuring Nominal Scale Agreement Among Many Raters. Psychological Bulletin, 76(5):378-382.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A new readability yardstick", |
|
"authors": [ |
|
{ |
|
"first": "Rudolph", |
|
"middle": [], |
|
"last": "Flesch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1948, |
|
"venue": "Journal of applied psychology", |
|
"volume": "32", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rudolph Flesch. 1948. A new readability yardstick. Journal of applied psychology, 32(3):221.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Product personality and its influence on consumer preference", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"C M" |
|
], |
|
"last": "Govers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"P L" |
|
], |
|
"last": "Schoormans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Journal of Consumer Marketing", |
|
"volume": "22", |
|
"issue": "4", |
|
"pages": "189--197", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. C. M. Govers and J. P. L. Schoormans. 2005. Prod- uct personality and its influence on consumer prefer- ence. Journal of Consumer Marketing, 22(4):189- 197.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The technique of clear writing", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Gunning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1968, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Gunning. 1968. The technique of clear writing. McGraw-Hill New York.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Analyzing the Dynamics of Research by Extracting Key Aspects of Scientific Papers", |
|
"authors": [ |
|
{ |
|
"first": "Sonal", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sonal Gupta and Christopher Manning. 2011. Analyz- ing the Dynamics of Research by Extracting Key As- pects of Scientific Papers. In Proceedings of 5th In- ternational Joint Conference on Natural Language Processing, pages 1-9, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Formality of Language: definition, measurement and behavioral determinants. Interner Bericht, Center", |
|
"authors": [ |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Heylighen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean-Marc", |
|
"middle": [], |
|
"last": "Dewaele", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Leo Apostel", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francis Heylighen and Jean-Marc Dewaele. 1999. For- mality of Language: definition, measurement and behavioral determinants. Interner Bericht, Center \"Leo Apostel\", Vrije Universiteit Br\u00fcssel.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Mining Opinion Features in Customer Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Minqing", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Nineteenth National Conference on Artificial Intelligence, Sixteenth Conference on Innovative Applications of Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "755--760", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining Opinion Fea- tures in Customer Reviews. In Proceedings of the Nineteenth National Conference on Artificial Intelli- gence, Sixteenth Conference on Innovative Applica- tions of Artificial Intelligence, July 25-29, 2004, San Jose, California, USA, pages 755-760.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Personality and Consumer Behavior: A Review", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Harold", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kassarjian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1971, |
|
"venue": "Journal of marketing Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "409--418", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harold H. Kassarjian. 1971. Personality and Consumer Behavior: A Review. Journal of marketing Re- search, pages 409-418.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Kincaid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Fishburne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Richard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brad", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Rogers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chissom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Peter Kincaid, Robert P. Fishburne Jr., Richard L. Rogers, and Brad S. Chissom. 1975. Derivation of new readability formulas (automated readability in- dex, fog count and flesch reading ease formula) for navy enlisted personnel. Technical report, DTIC Document.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Accurate Unlexicalized Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "423--430", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accu- rate Unlexicalized Parsing. In Proceedings of the 41st Annual Meeting of the Association for Com- putational Linguistics, pages 423-430, Sapporo, Japan. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "English Verb Classes And Alternations: A Preliminary Investigation", |
|
"authors": [ |
|
{ |
|
"first": "Beth", |
|
"middle": [], |
|
"last": "Levin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beth Levin. 1993. English Verb Classes And Alterna- tions: A Preliminary Investigation. The University of Chicago Press.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "SMOG grading: A new readability formula", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Mclaughlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1969, |
|
"venue": "Journal of reading", |
|
"volume": "12", |
|
"issue": "8", |
|
"pages": "639--646", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Harry McLaughlin. 1969. SMOG grading: A new readability formula. Journal of reading, 12(8):639- 646.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Distributed Representations of Words and Phrases and their Compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed Rep- resentations of Words and Phrases and their Com- positionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Pro- ceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 3111- 3119.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "The University of South Florida word association, rhyme, and word fragment norms", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Nelson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mcevoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Schreiber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. L. Nelson, C. L. McEvoy, and T. A. Schreiber. 1998. The University of South Florida word association, rhyme, and word fragment norms.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "ISP: Learning Inferential Selectional Preferences", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Bhagat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonaventura", |
|
"middle": [], |
|
"last": "Coppola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Chklovski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "564--571", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patrick Pantel, Rahul Bhagat, Bonaventura Coppola, Timothy Chklovski, and Eduard Hovy. 2007. ISP: Learning Inferential Selectional Preferences. In Hu- man Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 564-571, Rochester, New York. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "CRFTagger: CRF English POS Tagger", |
|
"authors": [ |
|
{ |
|
"first": "Xuan-Hieu", |
|
"middle": [], |
|
"last": "Phan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuan-Hieu Phan. 2006. CRFTagger: CRF English POS Tagger.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Extracting Product Features and Opinions from Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Ana-Maria", |
|
"middle": [], |
|
"last": "Popescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "339--346", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ana-Maria Popescu and Oren Etzioni. 2005. Extract- ing Product Features and Opinions from Reviews. In Proceedings of the Conference on Human Lan- guage Technology and Empirical Methods in Natu- ral Language Processing, HLT '05, pages 339-346, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Selectional Preference and Sense Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip Resnik. 1997. Selectional Preference and Sense Disambiguation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Se- mantics: Why, What, and How, pages 52-57. Wash- ington, DC.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Personality Correlates of Opinion Leadership and Innovative Buying Behavior", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Robertson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Myers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1969, |
|
"venue": "Journal of Marketing Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "164--168", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas S. Robertson and James H. Myers. 1969. Per- sonality Correlates of Opinion Leadership and Inno- vative Buying Behavior. Journal of Marketing Re- search, pages 164-168.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "A Corpus of Preposition Supersenses", |
|
"authors": [ |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jena", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Srikumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Meredith", |
|
"middle": [], |
|
"last": "Green", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhijit", |
|
"middle": [], |
|
"last": "Suresh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathryn", |
|
"middle": [], |
|
"last": "Conger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O'", |
|
"middle": [], |
|
"last": "Tim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Gorman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th Linguistic Annotation Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Meredith Green, Abhijit Suresh, Kathryn Conger, Tim O'Gorman, and Martha Palmer. 2016. A Cor- pus of Preposition Supersenses. In Proceedings of the 10th Linguistic Annotation Workshop, Berlin, Germany. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A Hierarchy with, of, and for Preposition Supersenses", |
|
"authors": [ |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Srikumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jena", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of The 9th Linguistic Annotation Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "112--123", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathan Schneider, Vivek Srikumar, Jena D. Hwang, and Martha Palmer. 2015. A Hierarchy with, of, and for Preposition Supersenses. In Proceedings of The 9th Linguistic Annotation Workshop, pages 112-123.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Automated readability index", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Senter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. J. Senter and E. A. Smith. 1967. Automated read- ability index. Technical report, DTIC Document.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "A Multivariate Analysis of Personality and Product Use", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Sparks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tucker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1971, |
|
"venue": "Journal of Marketing Research", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "67--70", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David L. Sparks and William T. Tucker. 1971. A Mul- tivariate Analysis of Personality and Product Use. Journal of Marketing Research, 8(1):67-70.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Modeling Semantic Relations Expressed by Prepositions", |
|
"authors": [ |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Srikumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "231--242", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vivek Srikumar and Dan Roth. 2013. Modeling Se- mantic Relations Expressed by Prepositions. 1:231- 242.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Yla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Tausczik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Pennebaker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yla R. Tausczik and James W. Pennebaker. 2010. The Psychological Meaning of Words: LIWC and Com- puterized Text Analysis Methods.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Concept-Based Analysis of Scientific Literature", |
|
"authors": [ |
|
{ |
|
"first": "Chen-Tse", |
|
"middle": [], |
|
"last": "Tsai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gourab", |
|
"middle": [], |
|
"last": "Kundu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, CIKM '13", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1733--1738", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen-Tse Tsai, Gourab Kundu, and Dan Roth. 2013. Concept-Based Analysis of Scientific Literature. In Proceedings of the 22nd ACM international confer- ence on Conference on information & knowl- edge management, CIKM '13, pages 1733-1738, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Personality and product use", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "William", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Tucker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Painter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1961, |
|
"venue": "Journal of Applied Psychology", |
|
"volume": "45", |
|
"issue": "5", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William T. Tucker and John J. Painter. 1961. Personal- ity and product use. Journal of Applied Psychology, 45(5):325.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Lexical density and register differentiation. Applications of Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Ure", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1971, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "443--452", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean Ure. 1971. Lexical density and register differenti- ation. Applications of Linguistics, pages 443-452.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Phrase Dependency Parsing for Opinion Mining", |
|
"authors": [ |
|
{ |
|
"first": "Yuanbin", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuangjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lide", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1533--1541", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuanbin Wu, Qi Zhang, Xuangjing Huang, and Lide Wu. 2009. Phrase Dependency Parsing for Opinion Mining. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Process- ing, pages 1533-1541, Singapore. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Verb Semantics and Lexical Selection", |
|
"authors": [ |
|
{ |
|
"first": "Zhibiao", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the 32Nd Annual Meeting on Association for Computational Linguistics, ACL '94", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "133--138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhibiao Wu and Martha Palmer. 1994. Verb Semantics and Lexical Selection. In Proceedings of the 32Nd Annual Meeting on Association for Computational Linguistics, ACL '94, pages 133-138, Stroudsburg, PA, USA. Association for Computational Linguis- tics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Learning curve using micro-averaged sentence-level results for the meta-learner classifier.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"text": "Product categories in our dataset.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"text": "An example review and its annotations.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"text": "shows the results obtained on the test data. For", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>Feature Type</td><td>Prec.</td><td>Rec.</td><td colspan=\"2\">F-score Accu.</td></tr><tr><td>Word unigrams</td><td colspan=\"2\">71.56 54.94</td><td>62.16</td><td>83.88</td></tr><tr><td>Word bigrams</td><td colspan=\"2\">77.06 30.85</td><td>44.06</td><td>81.13</td></tr><tr><td>Character trigrams</td><td colspan=\"2\">70.06 57.19</td><td>62.98</td><td>83.80</td></tr><tr><td>POS bigrams</td><td colspan=\"2\">55.72 39.69</td><td>46.36</td><td>77.87</td></tr><tr><td>Embeddings</td><td colspan=\"2\">71.92 47.49</td><td>57.20</td><td>82.88</td></tr><tr><td>Constituency</td><td colspan=\"2\">70.49 52.17</td><td>59.96</td><td>83.22</td></tr><tr><td>Dependency</td><td colspan=\"2\">57.53 33.10</td><td>42.02</td><td>78.00</td></tr><tr><td>Style</td><td colspan=\"2\">54.17 11.27</td><td>18.65</td><td>76.33</td></tr><tr><td>Product categories</td><td colspan=\"2\">67.19 44.37</td><td>53.44</td><td>81.38</td></tr><tr><td>Concreteness</td><td colspan=\"2\">59.61 53.21</td><td>56.23</td><td>80.04</td></tr><tr><td>Levin classes</td><td colspan=\"2\">59.72 37.26</td><td>45.89</td><td>78.83</td></tr><tr><td>LIWC</td><td colspan=\"2\">57.14 38.13</td><td>45.74</td><td>78.20</td></tr><tr><td>Semantic lexicons</td><td colspan=\"2\">56.02 50.78</td><td>53.27</td><td>78.54</td></tr><tr><td colspan=\"2\">Spatial prepositions 41.67</td><td>3.47</td><td>6.40</td><td>75.57</td></tr><tr><td>Semantic distance</td><td colspan=\"2\">66.29 20.45</td><td>31.26</td><td>78.33</td></tr><tr><td>Classifier voting</td><td colspan=\"2\">66.84 67.76</td><td>67.30</td><td>84.13</td></tr><tr><td>Feature fusion</td><td colspan=\"2\">63.92 60.49</td><td>62.15</td><td>82.25</td></tr><tr><td>Meta learner</td><td colspan=\"2\">73.61 59.45</td><td>65.77</td><td>85.09</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"5\">: Micro-averaged sentence-level results</td></tr><tr><td colspan=\"5\">(%) under 10-fold cross-validation on the training</td></tr><tr><td colspan=\"5\">data. Maximum value in each column (within each</td></tr><tr><td colspan=\"2\">section) is boldfaced.</td><td/><td/><td/></tr><tr><td>Feature Type</td><td>Prec.</td><td>Rec.</td><td colspan=\"2\">F-score Accu.</td></tr><tr><td>Majority</td><td>0.00</td><td>0.00</td><td>0.00</td><td>76.13</td></tr><tr><td colspan=\"3\">Word unigrams 71.82 58.09</td><td>64.23</td><td>85.92</td></tr><tr><td>Meta learner</td><td colspan=\"2\">76.92 58.82</td><td>66.67</td><td>87.20</td></tr></table>" |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"text": "Feature importance ranking for four feature types. We show ten top-ranked features along with their importance scores. For the meta-learner, we show the ranking over the subset of seven feature sets used in this classifier.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>Feature Type</td><td>Prec.</td><td>Rec.</td><td colspan=\"2\">F-score Accu.</td></tr><tr><td>Baseline</td><td>0.00</td><td>0.00</td><td>0.00</td><td>76.39</td></tr><tr><td colspan=\"3\">Word unigrams 69.15 35.20</td><td>46.65</td><td>80.99</td></tr><tr><td>Meta-learner</td><td colspan=\"2\">70.62 38.43</td><td>49.77</td><td>81.69</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |