{ "paper_id": "I05-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:26:39.835521Z" }, "title": "Assigning Polarity Scores to Reviews Using Machine Learning Techniques", "authors": [ { "first": "Daisuke", "middle": [], "last": "Okanohara", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": { "addrLine": "7-3-1, Bunkyo-ku", "postCode": "113-0013", "settlement": "Hongo, Tokyo" } }, "email": "" }, { "first": "Jun", "middle": [ "'" ], "last": "Ichi Tsujii", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": { "addrLine": "7-3-1, Bunkyo-ku", "postCode": "113-0013", "settlement": "Hongo, Tokyo" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a novel type of document classification task that quantifies how much a given document (review) appreciates the target object using not binary polarity (good or bad) but a continuous measure called sentiment polarity score (sp-score). An sp-score gives a very concise summary of a review and provides more information than binary classification. The difficulty of this task lies in the quantification of polarity. In this paper we use support vector regression (SVR) to tackle the problem. Experiments on book reviews with five-point scales show that SVR outperforms a multi-class classification method using support vector machines and the results are close to human performance.", "pdf_parse": { "paper_id": "I05-1028", "_pdf_hash": "", "abstract": [ { "text": "We propose a novel type of document classification task that quantifies how much a given document (review) appreciates the target object using not binary polarity (good or bad) but a continuous measure called sentiment polarity score (sp-score). An sp-score gives a very concise summary of a review and provides more information than binary classification. The difficulty of this task lies in the quantification of polarity. In this paper we use support vector regression (SVR) to tackle the problem. Experiments on book reviews with five-point scales show that SVR outperforms a multi-class classification method using support vector machines and the results are close to human performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, discussion groups, online shops, and blog systems on the Internet have gained popularity and the number of documents, such as reviews, is growing dramatically. Sentiment classification refers to classifying reviews not by their topics but by the polarity of their sentiment (e.g, positive or negative). It is useful for recommendation systems, fine-grained information retrieval systems, and business applications that collect opinions about a commercial product.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, sentiment classification has been actively studied and experimental results have shown that machine learning approaches perform well [13, 11, 10, 20] . We argue, however, that we can estimate the polarity of a review more finely. For example, both reviews A and B in Table 1 would be classified simply as positive in binary classification. 
Obviously, this classification loses the information about the difference in the degree of polarity apparent in the review text.", "cite_spans": [ { "start": 143, "end": 147, "text": "[13,", "ref_id": "BIBREF12" }, { "start": 148, "end": 151, "text": "11,", "ref_id": "BIBREF10" }, { "start": 152, "end": 155, "text": "10,", "ref_id": "BIBREF9" }, { "start": 156, "end": 159, "text": "20]", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 277, "end": 284, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a novel type of document classification task where we evaluate reviews with scores like five stars. We call this score the sentiment polarity score (sp-score). If, for example, the range of the score is from one to five, we could give five to review A and four to review B. This task, namely, ordered multi-class classification, can be considered an extension of binary sentiment classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we describe a machine learning method for this task. Our system uses support vector regression (SVR) [21] to determine the sp-scores (1, ..., 5) of reviews. This method enables us to annotate sp-scores for arbitrary reviews such as comments in bulletin board systems or blog systems. We explore several types of features beyond a bag-of-words to capture key phrases that determine sp-scores: n-grams and references (the words around the reviewed object).", "cite_spans": [ { "start": 116, "end": 120, "text": "[21]", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conducted experiments with book reviews from amazon.com, each of which had a five-point scale rating along with text. We compared pairwise support vector machines (pSVMs) and SVR and found that SVR outperformed pSVMs by about 30% in terms of the square error, a result that is close to human performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent studies on sentiment classification have focused on machine learning approaches. Pang [13] represents a review as a feature vector and estimates the polarity with SVM, which is almost the same method as that used for topic classification [1] . This paper basically follows this work, but we extend the task to ordered multi-class classification.", "cite_spans": [ { "start": 88, "end": 92, "text": "[13]", "ref_id": "BIBREF12" }, { "start": 236, "end": 239, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There have been many attempts to analyze reviews deeply to improve accuracy. Mullen [10] used features from various information sources such as references to the "work" or "artist", which were annotated by hand, and showed that these features have the potential to improve the accuracy. 
We use reference features, which are the words around the fixed review target word (book), while Mullen annotated the references by hand.", "cite_spans": [ { "start": 84, "end": 88, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Turney [20] used semantic orientation, which measures the distance from phrases to "excellent" or "poor" by using search engine results and gives the word polarity. Kudo [8] developed decision stumps, which can capture substructures embedded in text (such as word-based dependency), and suggested that subtree features are important for opinion/modality classification.", "cite_spans": [ { "start": 7, "end": 11, "text": "[20]", "ref_id": "BIBREF19" }, { "start": 170, "end": 173, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Independently of and in parallel with our work, two other papers consider the degree of polarity for sentiment classification. Koppel [6] exploited a neutral class and applied a regression method similar to ours. Pang [12] applied a metric labeling method to the task. Our work differs from theirs in several respects. We used the square error instead of precision for the evaluation, and we used five distinct scores in our experiments, while Koppel used three and Pang used three/four distinct scores in their experiments.", "cite_spans": [ { "start": 134, "end": 137, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 210, "end": 214, "text": "[12]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section we present a novel task setting where we predict the degree of sentiment polarity of a review. We first present the definition of sp-scores and the task of assigning them to review documents. We then explain an evaluation data set. Using this data set, we examined the human performance for this task to clarify the difficulty of quantifying polarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analyzing Reviews with Polarity Scores", "sec_num": "3" }, { "text": "We extend the sentiment classification task to the more challenging task of assigning rating scores to reviews. We call this score the sp-score. Examples of sp-scores include five-star ratings and scores out of 100. Let sp-scores take discrete values 1 in a closed interval [min...max]. The task is to assign correct sp-scores to unseen reviews as accurately as possible. Let \u0177 be the predicted sp-score and y be the sp-score assigned by the reviewer. We measure the performance of an estimator with the mean square error:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Polarity Scores", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\frac{1}{n} \\sum_{i=1}^{n} (\\hat{y}_i - y_i)^2,", "eq_num": "(1)" } ], "section": "Sentiment Polarity Scores", "sec_num": "3.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Polarity Scores", "sec_num": "3.1" }, { "text": "(x_1, y_1), ..., (x_n, y_n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Polarity Scores", "sec_num": "3.1" }, { "text": "is the test set of reviews. 
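As an illustration (ours, not part of the paper), Eq. (1) is the ordinary mean square error over predicted and reviewer-assigned sp-scores; a minimal numpy sketch:

```python
import numpy as np

def mean_square_error(y_pred, y_gold):
    """Mean square error of Eq. (1): (1/n) * sum_i (yhat_i - y_i)^2."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_gold = np.asarray(y_gold, dtype=float)
    return float(np.mean((y_pred - y_gold) ** 2))

# A near miss costs little, a distant miss costs a lot:
print(mean_square_error([4], [5]))  # 1.0
print(mean_square_error([1], [5]))  # 16.0
```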
This measure gives a large penalty to large mistakes, while a standard multi-class classification measure gives an equal penalty to every type of mistake.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Polarity Scores", "sec_num": "3.1" }, { "text": "We used book reviews on amazon.com as evaluation data 2 3 . Each review has stars assigned by the reviewer. The number of stars ranges from one to five:", "cite_spans": [ { "start": 55, "end": 58, "text": "2 3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Data", "sec_num": "3.2" }, { "text": "one indicates the worst and five indicates the best. We converted the number of stars into sp-scores {1, 2, 3, 4, 5} 4 . Although each review may include several paragraphs, we did not exploit paragraph information. From these data, we made two data sets. The first was a set of reviews for books in the Harry Potter series (Corpus A). The second was a set of reviews for books of arbitrary kinds (Corpus B). It was easier to predict sp-scores for Corpus A than for Corpus B because the Corpus A reviews have a smaller vocabulary and each review is about twice as long. To create a data set with a uniform score distribution (the effect of skewed class distributions is outside the scope of this paper), we selected 330 reviews per sp-score for Corpus A and 280 reviews per sp-score for Corpus B 5 . Table 2 shows the number of words and sentences in the corpora. There is no significant difference in the average number of words/sentences among different sp-scores. ", "cite_spans": [ { "start": 117, "end": 118, "text": "4", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 791, "end": 798, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Evaluation Data", "sec_num": "3.2" }, { "text": "We treat the sp-scores assigned by the reviewers as correct answers. However, the content of a review and its sp-score may not be related. Moreover, sp-scores may vary depending on the reviewers. We therefore examined the universality of the sp-score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Experiments: Human Performance for Assigning Sp-scores", "sec_num": "3.3" }, { "text": "We asked two computational linguistics researchers to independently assign an sp-score to each review from Corpus A. We first had them learn the relationship between reviews and sp-scores using 20 reviews. We then gave them 100 reviews with a uniform sp-score distribution as test data. Table 3 shows the results in terms of the square error. The Random row shows the performance achieved by random assignment, and the All3 row shows the performance achieved by assigning 3 to all the reviews. These results suggest that sp-scores can be estimated with a square error of about 0.78 from the contents of reviews alone. Table 4 shows the distribution of the estimated sp-scores and the correct sp-scores. In the table we can observe the difficulty of this task: the precise quantification of sp-scores. For example, human B tended to overuse the extreme sp-scores 1 and 5. 
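As a sanity check (our sketch, not part of the paper), the All3 row of Table 3 follows directly from the uniform sp-score distribution of the 100-review test set:

```python
import numpy as np

# Uniform distribution over sp-scores 1..5, as in the 100-review test set
gold = np.repeat(np.arange(1, 6), 20)   # 20 reviews per sp-score

all3 = np.full_like(gold, 3)            # always predict the neutral score 3
print(np.mean((all3 - gold) ** 2))      # 2.0, matching the All3 row of Table 3
```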
We should note that if we consider this task as binary classification by treating the reviews whose sp-scores are 4 and 5 as positive examples and those with 1 and 2 as negative examples (ignoring the reviews whose sp-scores are 3), the classification precisions achieved by humans A and B are 95% and 96% respectively.", "cite_spans": [], "ref_spans": [ { "start": 288, "end": 295, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 609, "end": 616, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Preliminary Experiments: Human Performance for Assigning Sp-scores", "sec_num": "3.3" }, { "text": "This section describes a machine learning approach to predicting the sp-scores of review documents. Our method consists of the following two steps: extraction of feature vectors from reviews and estimation of sp-scores from the feature vectors. The first step basically uses existing techniques for document classification. The prediction of sp-scores, on the other hand, differs from previous studies because we consider ordered multi-class classification, that is, each sp-score has its own class and the classes are ordered. Unlike usual multi-class classification, large mistakes in terms of the order should incur large penalties. In this paper, we discuss two methods of estimating sp-scores: pSVMs and SVR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assigning Sp-scores to Reviews", "sec_num": "4" }, { "text": "We represent a review as a feature vector. Although this representation ignores the syntactic structure, word positions, and the order of words, it is known to work reasonably well for many tasks such as information retrieval and document classification. We use binary, tf, and tf-idf as feature weighting methods [15] . The feature vectors are normalized to have L2 norm 1.", "cite_spans": [ { "start": 314, "end": 318, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Review Representation", "sec_num": "4.1" }, { "text": "Support vector regression (SVR) is a regression method that shares its underlying idea with SVM [3, 16] . SVR predicts the sp-score of a review by the following regression:", "cite_spans": [ { "start": 99, "end": 102, "text": "[3,", "ref_id": "BIBREF2" }, { "start": 103, "end": 106, "text": "16]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Support Vector Regression", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f : \\mathbb{R}^n \\rightarrow \\mathbb{R}, \\quad y = f(x) = w \\cdot x + b.", "eq_num": "(2)" } ], "section": "Support Vector Regression", "sec_num": "4.2" }, { "text": "SVR uses an \u03b5-insensitive loss function, under which all errors inside the \u03b5-tube are ignored. This allows SVR to use few support vectors and gives it generalization ability. 
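A small sketch (ours) of the ε-insensitive loss just described; eps = 0.1 is an arbitrary illustrative value:

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Errors inside the eps-tube cost nothing; larger errors grow linearly."""
    return np.maximum(np.abs(y_true - y_pred) - eps, 0.0)

print(eps_insensitive_loss(np.array([3.0, 3.0]), np.array([3.05, 4.0])))
# [0.  0.9] -- the first error lies inside the tube and is ignored
```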
Given a training set (x_1, y_1), ..., (x_n, y_n), the parameters w and b are determined by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Support Vector Regression", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\text{minimize} \\quad \\frac{1}{2} w \\cdot w + C \\sum_{i=1}^{n} (\\xi_i + \\xi_i^*) \\quad \\text{subject to} \\quad (w \\cdot x_i + b) - y_i \\le \\varepsilon + \\xi_i, \\quad y_i - (w \\cdot x_i + b) \\le \\varepsilon + \\xi_i^*, \\quad \\xi_i^{(*)} \\ge 0, \\quad i = 1, \\ldots, n.", "eq_num": "(3)" } ], "section": "Support Vector Regression", "sec_num": "4.2" }, { "text": "The factor C > 0 is a parameter that controls the trade-off between training error minimization and margin maximization. The training loss increases as C becomes smaller, and generalization is lost as C becomes larger. Moreover, we can apply the kernel trick to SVR, as with SVMs, by using a kernel function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Support Vector Regression", "sec_num": "4.2" }, { "text": "This approach captures the order of classes and does not suffer from data sparseness. We could use conventional linear regression instead of SVR [4] , but we use SVR because it can exploit the kernel trick and avoids over-training. Another good characteristic of SVR is that we can identify the features that contribute to determining the sp-scores by examining the coefficients (w in (2)), while pSVMs do not give such information because multiple classifiers are involved in determining the final result. A problem with this approach is that SVR cannot learn non-linear regression directly. For example, when the given training data are (x = 1, y = 1), (x = 2, y = 2), (x = 3, y = 8), SVR cannot perform the regression correctly without adjusting the feature values.", "cite_spans": [ { "start": 145, "end": 148, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Support Vector Regression", "sec_num": "4.2" }, { "text": "We apply a multi-class classification approach to estimating sp-scores. pSVMs [7] consider each sp-score as a unique class and ignore the order among the classes. Given reviews with sp-scores {1, 2, ..., m}, we construct m \u2022 (m \u2212 1)/2 SVM classifiers, one for each pair of possible sp-score values. The classifier for an sp-score pair (a vs. b) assigns either a or b to a review. The class label of a document is determined by majority voting among the classifiers. Ties in the voting are broken by choosing the class that is closest to the neutral sp-score (i.e., (1 + m)/2).", "cite_spans": [ { "start": 78, "end": 81, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Pairwise Support Vector Machines", "sec_num": "4.3" }, { "text": "This approach ignores the fact that sp-scores are ordered, which causes the following two problems. First, it allows large mistakes. Second, when the number of possible values of the sp-score is large (e.g., n > 100), this approach suffers from the data sparseness problem: pSVMs cannot employ examples that have close sp-scores (e.g., sp-score = 50) when training the classifier for another sp-score pair (e.g., 51 vs. 100). 
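For concreteness, a sketch (ours) of this pairwise voting scheme; `classifiers` is a hypothetical mapping from a score pair (a, b) to a trained binary classifier that returns a or b for a feature vector x:

```python
from itertools import combinations

def psvm_predict(x, classifiers, scores=(1, 2, 3, 4, 5)):
    """Majority voting over the m(m-1)/2 pairwise SVM classifiers.

    classifiers[(a, b)] is assumed to be a callable returning a or b for x.
    """
    votes = {s: 0 for s in scores}
    for a, b in combinations(scores, 2):
        votes[classifiers[(a, b)](x)] += 1
    neutral = (1 + max(scores)) / 2.0
    # Most votes wins; ties are broken toward the neutral sp-score (1 + m) / 2
    return max(scores, key=lambda s: (votes[s], -abs(s - neutral)))
```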
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pairwise Support Vector Machines", "sec_num": "4.3" }, { "text": "Previous studies [9, 2] suggested that complex features do not work as expected because data become sparse when such features are used, and that a bag-of-words approach is enough to capture the information in most reviews. Nevertheless, we observed that reviews include many chunks of words such as "very good" or "must buy" that are useful for estimating the degree of polarity. We confirmed this observation by using n-grams. Since the words around the review target can be expected to influence the overall sp-score more than other words, we also use these words as features. We call these features reference. We assume the review target is only the word "book", and we use "inbook" and "aroundbook" features. The "inbook" features are the words that appear in sentences that include the word "book". The "aroundbook" features are the words within two words of the word "book". Table 5 summarizes the feature list for the experiments.", "cite_spans": [ { "start": 17, "end": 20, "text": "[9,", "ref_id": "BIBREF8" }, { "start": 21, "end": 23, "text": "2]", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 999, "end": 1006, "text": "Table 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Features Beyond Bag-of-Words", "sec_num": "4.4" }, { "text": "We performed two series of experiments. First, we compared pSVMs and SVR. Second, we examined the performance of various features and weighting methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We used Corpus A/B introduced in Sec. 3.2 as experimental data. We removed all HTML tags and punctuation marks beforehand. We also applied the Porter stemming method [14] to the reviews.", "cite_spans": [ { "start": 165, "end": 169, "text": "[14]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We divided these data into ten disjoint subsets, maintaining the uniform class distribution. All the results reported below are the average of ten-fold cross-validation. In SVMs and SVR, we used SVMlight 6 with the quadratic polynomial kernel K(x, z) = (x \u2022 z + 1)^2 and set the control parameter C to 100 in all the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We compared pSVMs and SVR to see how the properties of the regression approach differ from those of the classification approach. Both pSVMs and SVR used unigram/tf-idf to represent reviews. Table 6 shows the square error results for pSVMs, SVR, and a simple regression (least squares) method on Corpus A/B. These results indicate that SVR outperformed pSVMs in terms of the square error and suggest that regression methods avoid large mistakes by taking account of the fact that sp-scores are ordered, while pSVMs do not. We also note that the result of the simple regression method is close to that of SVR with a linear kernel. 
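The paper ran SVMlight; a rough scikit-learn equivalent of this experimental setup (a sketch under our assumptions; `reviews` and `stars` are hypothetical lists of stemmed review texts and their sp-scores):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# reviews: list of (stemmed) review strings; stars: their sp-scores in 1..5
model = make_pipeline(
    TfidfVectorizer(),  # unigram tf-idf; L2-normalized by default
    SVR(kernel="poly", degree=2, gamma=1.0, coef0=1.0, C=100),  # (x.z + 1)^2
)
scores = cross_val_score(model, reviews, stars, cv=10,
                         scoring="neg_mean_squared_error")
print(-scores.mean())  # average square error over ten folds
```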
Figure 1 shows the distribution of estimation results for humans (top left: human A, top right: human B), pSVMs (below left), and SVR (below right). The horizontal axis shows the estimated sp-scores and the vertical axis shows the correct sp-scores. Color density indicates the number of reviews. These figures suggest that pSVMs and SVR could capture the gradualness of sp-scores better than humans could. They also show that pSVMs cannot predict neutral sp-scores well, while SVR can.", "cite_spans": [], "ref_spans": [ { "start": 203, "end": 210, "text": "Table 6", "ref_id": "TABREF4" }, { "start": 650, "end": 658, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Comparison of pSVMs and SVR", "sec_num": "5.1" }, { "text": "We compared the different features presented in Section 4.4 and different feature weighting methods. First we compared the weighting methods, using only unigram features. We then compared the different features, using only tf-idf weighting. Table 7 summarizes the comparison of the feature weighting methods. The results show that tf-idf performed well on both test corpora. We should note that simple representation methods, such as binary or tf, give results comparable to tf-idf, which indicates that we can add more complex features without worrying about the scale of feature values. For example, when we add word-based dependency features, we have some difficulty in adjusting these feature values to those of unigrams, but we could use these features together under binary weighting. Table 8 summarizes the comparison of the different features. For Corpus A, unigram + bigram and unigram + trigram achieved high performance. The performance of unigram + inbook was not good, which is contrary to our intuition that the words that appear around the target object are more important than others. For Corpus B, the results were different: n-gram features could not predict the sp-scores well. This is because the variety of words/phrases was much larger than in Corpus A, so the n-gram features may have suffered from the data sparseness problem. We should note that these feature settings are quite simple, so we cannot draw firm conclusions about the reference or target-object features (inbook/aroundbook) from these results alone.", "cite_spans": [], "ref_spans": [ { "start": 285, "end": 292, "text": "Table 7", "ref_id": null }, { "start": 851, "end": 858, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Comparison of Different Features", "sec_num": "5.2" }, { "text": "Note that the data used in the preliminary experiments described in Section 3.3 are a part of Corpus A. Therefore we can compare the results for humans with those for Corpus A in this experiment. The best result by the machine learning approach (0.89) was close to the human result (0.78).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Different Features", "sec_num": "5.2" }, { "text": "To analyze the influence of n-gram features, we used the linear kernel k(x, z) := x \u2022 z in SVR training, with tf-idf feature weighting, and examined each regression coefficient. Since the kernel is linear, each coefficient of SVR shows the polarity of a single feature, that is, how much the occurrence of that feature affects the sp-score. Table 9 shows the coefficients resulting from the training of SVR. 
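A sketch (ours) of how such coefficients can be read off with scikit-learn, using LinearSVR as a stand-in for SVR with a linear kernel; `reviews` and `stars` are the hypothetical data from the previous sketch:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVR

vec = TfidfVectorizer(ngram_range=(2, 2))   # bigram features, tf-idf weighted
X = vec.fit_transform(reviews)
w = LinearSVR(C=100).fit(X, stars).coef_    # one signed weight per bigram

names = vec.get_feature_names_out()
order = np.argsort(w)
print([(names[i], round(w[i], 2)) for i in order[::-1][:10]])  # best polarity
print([(names[i], round(w[i], 2)) for i in order[:10]])        # worst polarity
```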
These results show that words that are neutral in polarity by themselves, such as "all" and "age", can still affect the overall sp-score of a review when they combine with other neutral words into polar bigrams such as "all ag (age)", "can't wait", "on (one) star", and "not interest".", "cite_spans": [], "ref_spans": [ { "start": 383, "end": 391, "text": "Tables 9", "ref_id": null } ], "eq_spans": [], "section": "Comparison of Different Features", "sec_num": "5.2" }, { "text": "We generated learning curves to examine the effect of the training data size on performance. Figure 2 shows the results of a classification task using unigram/tf-idf to represent reviews. The results suggest that the performance can still be improved by increasing the training data. ", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 108, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Learning Curve", "sec_num": "5.3" }, { "text": "In this paper, we described a novel task setting in which we predicted the sp-scores (degrees of polarity) of reviews. We proposed a machine learning method using SVR to predict sp-scores. We compared two methods for estimating sp-scores: pSVMs and SVR. Experimental results with book reviews showed that SVR performed better than pSVMs by about 30% in terms of the square error. This result agrees with our intuition that pSVMs do not consider the order of sp-scores, while SVR captures the order of sp-scores and avoids high-penalty mistakes. With SVR, sp-scores can be estimated with a square error of 0.89, which is very close to the square error achieved by humans (0.78).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We examined the effectiveness of features beyond a bag-of-words, namely n-gram features and reference features (the words around the reviewed object). The results suggest that these features contribute to improving the accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "As the next step in our research, we plan to exploit parsing results such as predicate-argument structures for detecting precise reference information. We will also capture types of polarity other than attitude, such as modality and writing position [8] , and we will consider estimating these types of polarity.", "cite_spans": [ { "start": 250, "end": 253, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We plan to develop a classifier specialized for ordered multi-class classification using recent studies on machine learning for structured output spaces [19, 18] or ordinal regression [5] , because our experiments suggest that both pSVMs and SVR have advantages and disadvantages. We will develop a more efficient classifier that outperforms pSVMs and SVR by combining these ideas.", "cite_spans": [ { "start": 152, "end": 156, "text": "[19,", "ref_id": "BIBREF18" }, { "start": 157, "end": 160, "text": "18]", "ref_id": "BIBREF17" }, { "start": 183, "end": 186, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "R. Dale et al. (Eds.): IJCNLP 2005, LNAI 3651, pp. 314-325, 2005. \u00a9 Springer-Verlag Berlin Heidelberg 2005", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We could allow sp-scores to have continuous values. 
However, in this paper we assume sp-scores take only discrete values since the evaluation data set was annotated with only discrete values. 2 http://www.amazon.com 3 These data were gathered from the Google cache using the Google API.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "One must be aware that different rating scales may reflect different reactions, not just rescaled judgments, as Keller indicated [17]. 5 We actually collected 25,000 reviews. However, we used only 2,900 reviews since the number of reviews with 1 star is very small. The effect of the amount of training data is discussed in Section 5.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://svmlight.joachims.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning to Classify Text Using Support Vector Machines", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Joachims. Learning to Classify Text Using Support Vector Machines. Kluwer, 2002.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Automated learning of decision rules for text categorization", "authors": [ { "first": "C", "middle": [], "last": "Apte", "suffix": "" }, { "first": "F", "middle": [], "last": "Damerau", "suffix": "" }, { "first": "S", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 1994, "venue": "Information Systems", "volume": "12", "issue": "3", "pages": "233--251", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Apte, F. Damerau, and S. Weiss. Automated learning of decision rules for text categorization. Information Systems, 12(3):233-251, 1994.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An Introduction to Support Vector Machines and other Kernel-based Learning Methods", "authors": [ { "first": "N", "middle": [], "last": "Cristianini", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Taylor", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Cristianini and J. S. Taylor. An Introduction to Support Vector Machines and other Kernel-based Learning Methods. Cambridge University Press, 2000.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The Elements of Statistical Learning", "authors": [ { "first": "T", "middle": [], "last": "Hastie", "suffix": "" }, { "first": "R", "middle": [], "last": "Tibshirani", "suffix": "" }, { "first": "J", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Large margin rank boundaries for ordinal regression", "authors": [ { "first": "Ralf", "middle": [], "last": "Herbrich", "suffix": "" }, { "first": "Thore", "middle": [], "last": "Graepel", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Obermayer", "suffix": "" } ], "year": 2000, "venue": "Advances in Large Margin Classifiers", "volume": "", "issue": "", "pages": "115--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralf Herbrich, Thore Graepel, and Klaus Obermayer. 
Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers, pages 115-132. MIT Press, 2000.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The importance of neutral examples for learning sentiment", "authors": [ { "first": "Moshe", "middle": [], "last": "Koppel", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Schler", "suffix": "" } ], "year": 2005, "venue": "Workshop on the Analysis of Informal and Formal Information Exchange during Negotiations (FINEXIN)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moshe Koppel and Jonathan Schler. The importance of neutral examples for learning sentiment. In Workshop on the Analysis of Informal and Formal Information Exchange during Negotiations (FINEXIN), 2005.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Pairwise Classification and Support Vector Machines Methods", "authors": [ { "first": "U", "middle": [], "last": "Kresel", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "U. Kresel. Pairwise Classification and Support Vector Machines Methods. MIT Press, 1999.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A boosting algorithm for classification of semistructured text", "authors": [ { "first": "T", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "301--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Kudo and Y. Matsumoto. A boosting algorithm for classification of semi-structured text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 301-308, 2004.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An evaluation of phrasal and clustered representations on a text categorization task", "authors": [ { "first": "D", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 1992, "venue": "Proceedings of SIGIR-92, 15th ACM International Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "37--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Lewis. An evaluation of phrasal and clustered representations on a text categorization task. In Proceedings of SIGIR-92, 15th ACM International Conference on Research and Development in Information Retrieval, pages 37-50, 1992.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Sentiment analysis using Support Vector Machines with diverse information sources", "authors": [ { "first": "A", "middle": [], "last": "Mullen", "suffix": "" }, { "first": "N", "middle": [], "last": "Collier", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Mullen and N. Collier. Sentiment analysis using Support Vector Machines with diverse information sources. 
In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL), 2004.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "271--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Pang and L. Lee. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL), pages 271-278, 2004.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Pang and L. Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Meeting of the Association for Computational Linguistics (ACL), 2005.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Thumbs up? sentiment classification using machine learning techniques", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Pang, L. Lee, and S. Vaithyanathan. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 79-86, 2002.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An algorithm for suffix stripping", "authors": [ { "first": "M", "middle": [ "F" ], "last": "Porter", "suffix": "" } ], "year": 1980, "venue": "Program", "volume": "14", "issue": "3", "pages": "130--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "M.F. Porter. An algorithm for suffix stripping. Program, 14(3):130-137, 1980.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Machine learning in automated text categorization", "authors": [ { "first": "F", "middle": [], "last": "Sebastiani", "suffix": "" } ], "year": 2002, "venue": "ACM Computing Surveys", "volume": "34", "issue": "1", "pages": "1--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Sebastiani. Machine learning in automated text categorization. 
ACM Computing Surveys, 34(1):1-47, 2002.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A tutorial on Support Vector Regression", "authors": [ { "first": "A", "middle": [], "last": "Smola", "suffix": "" }, { "first": "B", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Smola and B. Sch\u00f6lkopf. A tutorial on Support Vector Regression. Technical report, NeuroCOLT2 Technical Report NC2-TR-1998-030, 1998.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Gradience in linguistic data", "authors": [ { "first": "Antonella", "middle": [], "last": "Sorace", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Keller", "suffix": "" } ], "year": 2005, "venue": "Lingua", "volume": "115", "issue": "11", "pages": "1497--1524", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antonella Sorace and Frank Keller. Gradience in linguistic data. Lingua, 115(11):1497-1524, 2005.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning Structured Prediction Models: A Large Margin Approach", "authors": [ { "first": "B", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Taskar. Learning Structured Prediction Models: A Large Margin Approach. PhD thesis, Stanford University, 2004.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Support vector machine learning for interdependent and structured output spaces", "authors": [ { "first": "I", "middle": [], "last": "Tsochantaridis", "suffix": "" }, { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" }, { "first": "T", "middle": [], "last": "Joachims", "suffix": "" }, { "first": "Y", "middle": [], "last": "Altun", "suffix": "" } ], "year": 2004, "venue": "Machine Learning, Proceedings of the Twenty-first International Conference (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In Machine Learning, Proceedings of the Twenty-first International Conference (ICML), 2004.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews", "authors": [ { "first": "P", "middle": [ "D" ], "last": "Turney", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "417--424", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. D. Turney. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Meeting of the Association for Computational Linguistics (ACL), pages 417-424, 2002.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The Nature of Statistical Learning Theory", "authors": [ { "first": "V", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Vapnik. The Nature of Statistical Learning Theory. 
Springer, 1995.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Results of sp-score estimation: Human 1 (left) and Human 2 (right)", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "Distribution of estimation results. Color density indicates the number of reviews. Top left: Human A, top right: Human B, below left: pSVMs, below right: SVR.", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "Learning curve for our task setting for Corpus A and Corpus B. We used SVR as the classifier and unigram/tf-idf to represent reviews.", "num": null }, "TABREF0": { "num": null, "type_str": "table", "text": "Examples of book reviews", "html": null, "content": "
Example of Review | binary | sp-score
Review A: I believe this is very good and a "must read". I can't wait to read the next book in the series. | plus | 5
Review B: This book is not so bad. You may find some interesting points in the book. | plus | 4
" }, "TABREF1": { "num": null, "type_str": "table", "text": "Corpus A: reviews for Harry Potter series books. Corpus B: reviews for all kinds of books. The reviews column shows the number of reviews, the words column shows the average number of words in a review, and the sentences column shows the average number of sentences in a review.", "html": null, "content": "
sp-score | Corpus A: reviews | words | sentences | Corpus B: reviews | words | sentences
1 | 330 | 160.0 | 9.1 | 250 | 91.9 | 5.1
2 | 330 | 196.0 | 11.0 | 250 | 105.2 | 5.2
3 | 330 | 169.1 | 9.2 | 250 | 118.6 | 6.0
4 | 330 | 150.2 | 8.6 | 250 | 123.2 | 6.1
5 | 330 | 153.8 | 8.9 | 250 | 124.8 | 6.1
" }, "TABREF2": { "num": null, "type_str": "table", "text": "Human performance of sp-score estimation. Test data: 100 reviews of Corpus A with 1,2,3,4,5 sp-score.", "html": null, "content": "
Square error
Human 1 | 0.77
Human 2 | 0.79
Human average | 0.78
cf. Random | 3.20
All3 | 2.00
" }, "TABREF3": { "num": null, "type_str": "table", "text": "Feature list for experiments", "html": null, "content": "
Features | Description | Example in Fig.1 review 1
unigram | single word | (I) (believe) ... (series)
bigram | pair of two adjacent words | (I believe) ... (the series)
trigram | adjacent three words | (I believe this) ... (in the series)
inbook | words in a sentence including "book" | (I) (can't) ... (series)
aroundbook | words near "book" within two words | (the) (next) (in) (the)
" }, "TABREF4": { "num": null, "type_str": "table", "text": "Comparison of multi-class SVM and SVR. Both use unigram/tf-idf.", "html": null, "content": "
Square error
Methods | Corpus A | Corpus B
pSVMs | 1.32 | 2.13
simple regression | 1.05 | 1.49
SVR (linear kernel) | 1.01 | 1.46
SVR (polynomial kernel (x \u2022 z + 1)^2) | 0.94 | 1.38
" }, "TABREF5": { "num": null, "type_str": "table", "text": "Comparison results of different feature weighting methods. We used unigrams as features of reviews. Comparison results of different features. For comparison of different features we tf-idf as weighting methods. List of bigram features that have ten best/worst polarity values estimated by SVR in Corpus A/B. The column of pol expresses the estimated sp-score of a feature, i.e., only this feature is fired in a feature vector. (word stemming was applied)", "html": null, "content": "
Square error
Weighting methods (unigram) | Corpus A | Corpus B
tf | 1.03 | 1.49
tf-idf | 0.94 | 1.38
binary | 1.04 | 1.47
Square error
Feature (tf-idf) | Corpus A | Corpus B
unigram (baseline) | 0.94 | 1.38
unigram + bigram | 0.89 | 1.41
unigram + trigram | 0.90 | 1.42
unigram + inbook | 0.97 | 1.36
unigram + aroundbook | 0.93 | 1.37
Corpus A (best) | Corpus B (best) | Corpus A (worst) | Corpus B (worst)
pol | bigram | pol | bigram | pol | bigram | pol | bigram
1.73 | best book | 1.64 | the best | -1.61 | at all | -1.19 | veri disappoint
1.69 | is a | 1.60 | read it | -1.50 | wast of | -1.13 | wast of
1.49 | read it | 1.37 | a great | -1.38 | potter book | -0.98 | the worst
1.44 | all ag | 1.34 | on of | -1.36 | out of | -0.97 | is veri
1.30 | can't wait | 1.31 | fast food | -1.28 | not interest | -0.96 | ! !
1.20 | it is | 1.22 | harri potter | -1.18 | on star | -0.85 | i am
1.14 | the sorcer's | 1.19 | highli recommend | -1.14 | the worst | -0.81 | the exampl
1.14 | great ! | 1.14 | an excel | -1.13 | first four | -0.79 | bui it
1.13 | sorcer's stone | 1.12 | to read | -1.11 | a wast | -0.76 | veri littl
1.11 | come out | 1.01 | in the | -1.08 | no on | -0.74 | onli to
" } } } }