{ "paper_id": "I05-1001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:25:01.269499Z" }, "title": "A New Method for Sentiment Classification in Text Retrieval", "authors": [ { "first": "Yi", "middle": [], "last": "Hu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "postCode": "200030", "settlement": "Shanghai", "country": "China" } }, "email": "huyi@cs.sjtu.edu.cn" }, { "first": "Jianyong", "middle": [], "last": "Duan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "postCode": "200030", "settlement": "Shanghai", "country": "China" } }, "email": "duan_jy@cs.sjtu.edu.cn" }, { "first": "Xiaoming", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "postCode": "200030", "settlement": "Shanghai", "country": "China" } }, "email": "chen-xm@cs.sjtu.edu.cn" }, { "first": "Bingzhen", "middle": [], "last": "Pei", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "postCode": "200030", "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Ruzhan", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "postCode": "200030", "settlement": "Shanghai", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Traditional text categorization is usually a topic-based task, but a subtle demand on information retrieval is to distinguish between positive and negative view on text topic. In this paper, a new method is explored to solve this problem. Firstly, a batch of Concerned Concepts in the researched domain is predefined. 
Secondly, the special knowledge representing the positive or negative context of these concepts within sentences is built up. At last, an evaluating function based on the knowledge is defined for sentiment classification of free text. We introduce some linguistic knowledge in these procedures to make our method effective. As a result, the new method proves better compared with SVM when experimenting on Chinese texts about a certain topic.", "pdf_parse": { "paper_id": "I05-1001", "_pdf_hash": "", "abstract": [ { "text": "Traditional text categorization is usually a topic-based task, but a subtle demand on information retrieval is to distinguish between positive and negative view on text topic. In this paper, a new method is explored to solve this problem. Firstly, a batch of Concerned Concepts in the researched domain is predefined. Secondly, the special knowledge representing the positive or negative context of these concepts within sentences is built up. At last, an evaluating function based on the knowledge is defined for sentiment classification of free text. We introduce some linguistic knowledge in these procedures to make our method effective. As a result, the new method proves better compared with SVM when experimenting on Chinese texts about a certain topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Classical technology in text categorization pays much attention to determining whether a text is related to a given topic [1] , such as sports and finance. However, as research goes on, a subtle problem focuses on how to classify the semantic orientation of the text. For instance, texts can be for or against \"racism\", and not all the texts are bad. There exist two possible semantic orientations: positive and negative (the neutral view is not considered in this paper). 
Labeling texts by their semantic orientation would provide readers with succinct summaries and be greatly useful in intelligent information retrieval systems.", "cite_spans": [ { "start": 122, "end": 125, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditional text categorization algorithms, including Na\u00efve Bayes, ANN, SVM, etc., depend on a feature vector representing a text. They usually utilize words or n-grams as features and weight them according to their presence/absence or frequencies. It is a convenient way to formalize the text for calculation. On the other hand, employing one vector may be unsuitable for sentiment classification. See the following simple sentence in English:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "-Seen from history, the great segregation is a pioneering work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here, \"segregation\" is very helpful to determine that the text is about the topic of racism, but the terms \"great\" and \"pioneering work\" may be the important hints for semantic orientation (supporting racism). These two terms probably contribute less to sentiment classification if they are dispersed into the text vector, because the relations between them and \"segregation\" are lost. Intuitively, these terms can contribute more if they are considered as a whole within the sentence. We explore a new idea for sentiment classification by focusing on sentences rather than the entire text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\"Segregation\" is called a Concerned Concept in our work. 
These Concerned Concepts are typically sensitive nouns or noun phrases in the researched domain, such as \"race riot\", \"color line\" and \"government\". If the sentiment classifying knowledge about how to comment on these concepts can be acquired, it will be helpful for sentiment classification when these concepts are met in free texts again. In other words, the task of sentiment classification of an entire text is transformed into recognizing the semantic orientation of the contexts of all Concerned Concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We attempt to build up this kind of knowledge to describe different sentiment contexts by integrating extended part of speech (EPOS), modified triggered bi-grams and position information within sentences. Finally, we experiment on Chinese texts about \"racism\" and draw some conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A lot of past work has been done about text categorization besides topic-based classification. Biber [2] concentrated on sorting texts in terms of their source or source style with stylistic variation such as author, publisher, and native-language background.", "cite_spans": [ { "start": 101, "end": 104, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Some other related work focused on classifying the semantic orientation of individual words or phrases by employing linguistic heuristics [3] [4] . 
Hatzivassiloglou et al. worked on predicting the semantic orientation of adjectives rather than phrases containing adjectives, and they noted that there are linguistic constraints on the orientations of adjectives in conjunctions.", "cite_spans": [ { "start": 138, "end": 141, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 142, "end": 145, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Past work on sentiment-based categorization of entire texts often involved using cognitive linguistics [5] [11] or manually constructing discriminated lexicons [7] [12] . All this work enlightened our research on Concerned Concepts in a given domain.", "cite_spans": [ { "start": 103, "end": 106, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 107, "end": 111, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 160, "end": 163, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 164, "end": 168, "text": "[12]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Turney's work [9] applied an unsupervised learning algorithm based on the mutual information between phrases and the two words \"excellent\" and \"poor\". The mutual information was computed using statistics gathered by a search engine and was simple to work with, which encourages further work on sentiment classification.", "cite_spans": [ { "start": 14, "end": 17, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Pang et al [10] utilized several prior-knowledge-free supervised machine learning methods for the sentiment classification task in the movie review domain, and they also analyzed the problem to better understand how difficult it is. They experimented with three standard algorithms: Na\u00efve Bayes, Maximum Entropy and Support Vector Machines, then compared the results. 
Their work showed that, generally, these algorithms were not able to achieve accuracies on the sentiment classification problem comparable to those reported for standard topic-based categorization.", "cite_spans": [ { "start": 11, "end": 15, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "As mentioned above, terms in a text vector are usually separated from the Concerned Concepts (CC for short), which means the relations between these terms and the CCs are lost. To avoid the coarse granularity of the text vector for sentiment classification, we study the context of each CC. We attempt to determine the semantic orientation of a free text by evaluating the context of the CCs contained in its sentences. Our work is based on the following two hypotheses: \u2666 H 1 . A sentence holds its own sentiment context, and it is the processing unit for sentiment classification. \u2666 H 2 . A sentence with obvious semantic orientation contains at least one Concerned Concept. H 1 allows us to research the classification task within sentences, and H 2 means that a sentence worth learning from or evaluating should contain at least one described CC. A sentence can be formed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "word_{-m}\\, word_{-(m-1)} \\cdots word_{-1} \\; CC_i \\; word_{1} \\cdots word_{n-1}\\, word_{n} .", "eq_num": "(1)" } ], "section": "Basic Idea", "sec_num": "3.1" }, { "text": "CC i (given as an example in this paper) is a noun or noun phrase occupying position 0 in a sentence that is automatically tagged with extended parts of speech (EPOS for short) (see section 3.2). 
A word and its tagged EPOS combine to make a 2-tuple, and all these 2-tuples on both sides of CC i can form a sequence as follows: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea", "sec_num": "3.1" }, { "text": "\\langle word_{-m}, epos_{-m} \\rangle \\cdots \\langle word_{-1}, epos_{-1} \\rangle \\; CC_i \\; \\langle word_{1}, epos_{1} \\rangle \\cdots \\langle word_{n}, epos_{n} \\rangle . (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea", "sec_num": "3.1" }, { "text": "All the words and corresponding EPOSes are divided into two parts: m 2-tuples on the left side of CC i (from -m to -1) and n 2-tuples on the right (from 1 to n). These 2-tuples construct the context of the Concerned Concept CC i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea", "sec_num": "3.1" }, { "text": "The sentiment classifying knowledge (see sections 3.3 and 3.4) is the contribution of all the 2-tuples to sentiment classification. That is to say, if a 2-tuple often co-occurs with CC i in the training corpus with positive view, it contributes more to the positive orientation than to the negative one. On the other hand, if the 2-tuple often co-occurs with CC i in the training corpus with negative view, it contributes more to the negative orientation. This kind of knowledge can be acquired by statistical techniques from the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea", "sec_num": "3.1" }, { "text": "When judging a free text, the context of CC i met in a sentence is compared with the positive and negative sentiment classifying knowledge of the same CC i trained from the corpus, respectively. 
Thus, an evaluating function E (see section 3.5) is defined to evaluate the semantic orientation of the free text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea", "sec_num": "3.1" }, { "text": "An ordinary part of speech (POS) tag carries little sentiment information, so it cannot distinguish between positive and negative semantic orientation. For example, \"hearty\" and \"felonious\" are both tagged as \"adjective\", but for sentiment classification, the tag \"adjective\" alone cannot separate their sentiments. This means different adjectives have different effects on sentiment classification. So we try to extend words' POS (EPOS) according to their semantic orientation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extended Part of Speech", "sec_num": "3.2" }, { "text": "Generally speaking, empty (function) words have only a structural function, without sentiment meaning. Therefore, we just consider substantives in context, which mainly include nouns/noun phrases, verbs, adjectives and adverbs. We define the EPOS of substantives in a subtler manner. Their EPOSes are classified as positive orientation (PosO) or negative orientation (NegO). Thus, \"hearty\" is labeled with \"pos-adj\", which means PosO of adjective; \"felonious\" is labeled with \"neg-adj\", which means NegO of adjective. Similarly, nouns, verbs and adverbs tagged with their EPOS construct a new word list. In our work, 12,743 Chinese entries in a machine-readable dictionary are extended by the following principles:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extended Part of Speech", "sec_num": "3.2" }, { "text": "\u2666 To nouns, their PosO or NegO is labeled according to their semantic orientation toward the entities or events they denote (pos-n or neg-n). \u2666 To adjectives, their common syntax structure is {Adj.+Noun*}. If adjectives favor or oppose their headwords (Noun*), they will be defined as PosO or NegO (pos-adj or neg-adj). 
\u2666 To adverbs, their common syntax structure is {Adv.+Verb*/Adj*.}, and Verb*/Adj*. is the headword. Their PosO or NegO is determined in the same way as for adjectives (pos-adv or neg-adv). \u2666 To transitive verbs, their common syntax structure is {TVerb+Object*}, and Object* is the headword. Their PosO or NegO is determined in the same way as for adjectives (pos-tv or neg-tv). \u2666 To intransitive verbs, their common syntax structure is {Subject*+InTVerb}, and Subject* is the headword. Their PosO or NegO is determined in the same way as for adjectives (pos-iv or neg-iv).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extended Part of Speech", "sec_num": "3.2" }, { "text": "Sentiment classifying knowledge is defined as the importance of all 2-tuples that compose the context of CC i (given as an example) to sentiment classification, and every Concerned Concept like CC i has its own positive and negative sentiment classifying knowledge that can be formalized as a 3-tuple K:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Classifying Knowledge Framework", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K = ( CC, S^{pos}, S^{neg} ) .", "eq_num": "(3)" } ], "section": "Sentiment Classifying Knowledge Framework", "sec_num": "3.3" }, { "text": "For CC i , its S i pos has a concrete form described as a set of 5-tuples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Classifying Knowledge Framework", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S_i^{pos} : \\{ ( \\langle word_{\\xi}, epos_{\\xi} \\rangle , wordval_{\\xi}, eposval_{\\xi}, \\alpha_i^{left}, \\alpha_i^{right} ) \\} .", "eq_num": "(4)" } ], "section": "Sentiment Classifying Knowledge Framework", "sec_num": "3.3" }, { "text": "Here S i pos represents the positive sentiment classifying knowledge of CC i ; it is a data set over all 2-tuples appearing in the sentences containing CC i in training texts with positive view. In contrast, S i neg is acquired from the training texts with negative view. In other words, S i pos and S i neg respectively preserve the features for positive and negative classification of CC i in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Classifying Knowledge Framework", "sec_num": "3.3" }, { "text": "In terms of S i pos , the importance of < word \u03be , epos \u03be > is divided into wordval \u03be and eposval \u03be (see section 4.1), which are estimated by modified triggered bi-grams to capture the long-distance dependence. If < word \u03be , epos \u03be > appears on the left side of CC i , the \"side\" adjusting factor is \u03b1 i left ; if it appears on the right, the \"side\" adjusting factor is \u03b1 i right . We also define another factor \u03b2 (see section 4.3) that denotes dynamic \"positional\" adjusting information during the processing of a sentence in a free text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Classifying Knowledge Framework", "sec_num": "3.3" }, { "text": "If a 2-tuple < word \u03be , epos \u03be > often co-occurs with CC i in sentences in the training corpus with positive view, it may contribute more to the positive orientation than to the negative one, and if it often co-occurs with CC i in the negative corpus, it may contribute more to the negative orientation. We modify the classical bi-gram language model to introduce a long-distance trigger mechanism for such 2-tuples. It has been mentioned that \u03b1 and \u03b2 are adjusting factors for the sentiment contribution of the pair < word \u03be , epos \u03be > . 
\u03b1 rectifies the effect of the 2-tuple according to which side of CC i it appears on, and \u03b2 rectifies the effect of the 2-tuple according to its distance from CC i . They embody the effects of \"side\" and \"position\". Thus, it can be inferred that even the same 2-tuple will contribute differently depending on its side and position.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contribution of <word, epos>", "sec_num": "3.4" }, { "text": "We propose a function E (equation (6)) to evaluate a free text by comparing the context of every appearing CC with the two sorts of sentiment context of the same CC trained from the corpus, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Function E", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E = \\frac{1}{N} \\sum_{i=1}^{N} \\big( Sim( S_i^{'}, S_i^{pos} ) - Sim( S_i^{'}, S_i^{neg} ) \\big) .", "eq_num": "(6)" } ], "section": "Evaluation Function E", "sec_num": "3.5" }, { "text": "N is the total number of Concerned Concepts in the free text, and i denotes a certain CC i . E is the semantic orientation of the whole text. Obviously, if E \u2265 0 , the text is regarded as positive; otherwise, negative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Function E", "sec_num": "3.5" }, { "text": "To clearly explain the function E, we just give the similarity between the context of CC i (S i ' ) in a free text and the positive sentiment context of the same CC i trained from the corpus. 
The function Sim is defined as follows: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Function E", "sec_num": "3.5" }, { "text": "Sim( S_i^{'}, S_i^{pos} ) = \\sum_{\\xi} c( \\langle word_{\\xi}, epos_{\\xi} \\rangle \\mid S_i^{pos} ) . (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Function E", "sec_num": "3.5" }, { "text": "Equation (7) means that the sentiment contributions c of each < word \u03be , epos \u03be > in the context of CC i within a sentence in a free text (which is S i ' ), calculated by (5), together construct the overall semantic orientation of the sentence. On the other hand, Sim( S i ' , S i neg ) can be thought about in the same way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Function E", "sec_num": "3.5" }, { "text": "In terms of CC i , its sentiment classifying knowledge is depicted by (3) and (4), and the parameters wordval and eposval need to be learnt from the corpus. 
Every calculation of Pr( < word \u03be , epos \u03be > | CC i , Pos_Neg ) is divided into two parts as in (8), according to statistical theory: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating Wordval and Eposval", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr( word_{\\xi}, epos_{\\xi} \\mid CC_i , Pos\\_Neg ) = Pr( epos_{\\xi} \\mid CC_i , Pos\\_Neg ) \\cdot Pr( word_{\\xi} \\mid CC_i , Pos\\_Neg , epos_{\\xi} ) .", "eq_num": "(8)" } ], "section": "Estimating Wordval and Eposval", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "eposval_{\\xi} = Pr( epos_{\\xi} \\mid CC_i , Pos\\_Neg ) = \\frac{ \\#( epos_{\\xi}, CC_i , Pos\\_Neg ) + 1 }{ \\sum_{\\eta} \\#( epos_{\\eta}, CC_i , Pos\\_Neg ) + 1 } .", "eq_num": "(9)" } ], "section": "Estimating Wordval and Eposval", "sec_num": "4.1" }, { "text": "The numerator in (9) is the co-occurring frequency between epos \u03be and CC i within sentences in training texts with Pos_Neg (one of {positive, negative}) view, and the denominator is the frequency of co-occurrence between all EPOSes appearing in CC i 's context with Pos_Neg view. 
The \"wordval\"is the conditional probability of \u03be word given CC i and epos \u03be which can also be estimated by MLE: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating Wordval and Eposval", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "#( , ,", "eq_num": ") 1 Pr" } ], "section": "Estimating Wordval and Eposval", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ,", "eq_num": "_ , ) #( , , ) 1 i" } ], "section": "Estimating Wordval and Eposval", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "+ = + \u2211 \u2211 .", "eq_num": "(10)" } ], "section": "Estimating Wordval and Eposval", "sec_num": "4.1" }, { "text": "The numerator in (10) is the frequency of co-occurrence between < \u03be word , epos \u03be > and CC i , and the denominator is the frequency of co-occurrence between all possible words corresponding to epos \u03be appearing in CC i 's context with Pos_Neg view.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating Wordval and Eposval", "sec_num": "4.1" }, { "text": "For smoothing, we adopt add-one method in (9) and (10).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating Wordval and Eposval", "sec_num": "4.1" }, { "text": "The \u03be \u03b1 is the adjusting factor representing the different effect of the ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating \u03b1", "sec_num": "4.2" }, { "text": "4.3 Calculating \u03b2 \u03b2 is positional adjusting factor, which means different position to some CC will be assigned different weight. 
This is based on the linguistic hypothesis that the further a word is from the word under study, the weaker their relation is. That is to say, \u03b2 ought to be inversely proportional to position.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating \u03b1", "sec_num": "4.2" }, { "text": "Unlike wordval, eposval and \u03b1 , which are all private knowledge of some CC, \u03b2 is a dynamic positional factor that is independent of the semantic orientation of the training texts and depends only on the position relative to the CC. For the example CC i , the \u03b2 of < word \u00b5 , epos \u00b5 > occupying the \u00b5 -th position on its left side is \u03b2 \u00b5 left , which can be defined as: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating \u03b1", "sec_num": "4.2" }, { "text": "\\beta_{\\mu}^{left} = (1/2)^{ |\\mu| - 1 } / ( 2 - (1/2)^{ m-1 } )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating \u03b1", "sec_num": "4.2" }, { "text": "Our research topic is about \"Racism\" in Chinese texts. The training corpus is built up from Chinese web pages and emails. As mentioned above, all these extracted texts in the corpus have obvious semantic orientations toward racism: in favor of it or opposed to it. There are 1137 texts with positive view and 1085 texts with negative view. All the Chinese texts are segmented and tagged with the defined EPOS in advance. They are also marked positive/negative for supervised learning. The two sorts of texts with different views are each divided into 10 folds; nine of them are used for training and the remaining one for testing. For this particular domain, there are no comparable results that can be consulted. So, we compare the new method with a traditional classification algorithm, i.e. the popular SVM that uses bi-grams as features. 
Our experiment includes two parts: one part experiments on relatively \"long\" texts that contain more than 15 sentences, and the other on \"short\" texts that contain fewer than 15 sentences. We choose \"15\" as the threshold to distinguish long from short texts because it is the mathematical expectation of the text-length variable in our test corpus. The recall, precision and F1-score are listed in the following Experiment Result Table. The experiment shows that our method is useful for sentiment classification, especially for short texts. As seen from the table, when evaluating texts that have more than 15 sentences, SVM, having enough features, obtains better results, while ours are close to it on average. However, when evaluating texts containing fewer than 15 sentences, our method is obviously superior to SVM in either the positive or the negative view. That means our method has more potential value for sentiment classification of short texts, such as emails, short news, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test and Conclusions", "sec_num": "5" }, { "text": "The better result owes to the fine-grained description within sentences and the introduction of linguistic knowledge into sentiment classification (such as EPOS, \u03b1 and \u03b2 ), which suggests the two hypotheses are reasonable. 
We use modified triggered bi-grams to describe the importance between the features ({< word \u03be , epos \u03be >}) and the Concerned Concepts, and then construct sentiment classifying knowledge rather than depending on statistical algorithms only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test and Conclusions", "sec_num": "5" }, { "text": "To sum up, we draw the following conclusions from our work: \u2666 Introducing more linguistic knowledge is helpful for improving statistical sentiment classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test and Conclusions", "sec_num": "5" }, { "text": "\u2666 Sentiment classification is a hard task, and it needs subtly describing capability of language model. Maybe the intensional logic of words will be helpful in this field in future. \u2666 Chinese is a language of concept combination and the usage of words is more flexible than Indo-European language, which makes it more difficult to acquire statistic information than English [10] . \u2666 So far, we have assumed independence among sentences. We should introduce a suitable mathematical model to group closely related sentences.", "cite_spans": [ { "start": 374, "end": 378, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Test and Conclusions", "sec_num": "5" }, { "text": "Our experiment also shows that the algorithm becomes weak when no CC appears in the sentences, but the method still deserves further exploration. In the future, we will integrate more linguistic knowledge and expand our method to a suitable sentence group to improve its performance. 
Constructing a larger sentiment area may balance the capability of our method between long and short text sentiment classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test and Conclusions", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Text-Based Intelligent Systems: Current Research and Practice in Information Extraction and Retrieval", "authors": [ { "first": "M", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hearst, M.A.: Direction-based text interpretation as an information access refinement. In P. Jacobs (Ed.), Text-Based Intelligent Systems: Current Research and Practice in Infor- mation Extraction and Retrieval. Mahwah, NJ: Lawrence Erlbaum Associates (1992)", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Variation across Speech and Writing", "authors": [ { "first": "Douglas", "middle": [], "last": "Biber", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas Biber: Variation across Speech and Writing. Cambridge University Press (1988)", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Predicting the semantic orientation of adjectives", "authors": [ { "first": "Vasileios", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 1997, "venue": "Proc. of the 35th ACL/8th EACL", "volume": "", "issue": "", "pages": "174--181", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasileios Hatzivassiloglou and Kathleen McKeown: Predicting the semantic orientation of adjectives. In Proc. 
of the 35th ACL/8th EACL (1997) 174-181", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised learning of semantic orientation from a hundred-billion-word corpus", "authors": [ { "first": "D", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Michael", "middle": [ "L" ], "last": "Turney", "suffix": "" }, { "first": "", "middle": [], "last": "Littman", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney and Michael L. Littman: Unsupervised learning of semantic orientation from a hundred-billion-word corpus. Technical Report EGB-1094, National Research Council Canada (2002)", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Direction-based text interpretation as an information access refinement", "authors": [ { "first": "Marti", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti Hearst: Direction-based text interpretation as an information access refinement. In Paul Jacobs, editor, Text-Based Intelligent Systems. Lawrence Erlbaum Associates (1992)", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42 nd ACL", "volume": "", "issue": "", "pages": "271--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee: A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts. Proceedings of the 42 nd ACL (2004) 271--278", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Yahoo! 
for Amazon: Extracting market sentiment from stock message boards", "authors": [ { "first": "Sanjiv", "middle": [], "last": "Das", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2001, "venue": "Proc. of the 8th Asia Pacific Finance Association Annual Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanjiv Das and Mike Chen: Yahoo! for Amazon: Extracting market sentiment from stock message boards. In Proc. of the 8th Asia Pacific Finance Association Annual Conference (2001)", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Effects of Adjective Orientation and Gradability on Sentence Subjectivity", "authors": [ { "first": "Vasileios", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2000, "venue": "COLING", "volume": "", "issue": "", "pages": "299--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasileios Hatzivassiloglou, Janyce Wiebe: Effects of Adjective Orientation and Gradability on Sentence Subjectivity. COLING (2000) 299-305", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews", "authors": [ { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2002, "venue": "Proc. of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Turney: Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proc. of the ACL (2002)", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Thumbs up? 
Sentiment Classification using Machine Learning Techniques", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shivakumar", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proc. Conf. on EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lillian Lee and Shivakumar Vaithyanathan: Thumbs up? Sentiment Classification using Machine Learning Techniques. In Proc. Conf. on EMNLP (2002)", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On the computation of point of view", "authors": [ { "first": "Warren", "middle": [], "last": "Sack", "suffix": "" } ], "year": 1994, "venue": "Proc. of the Twelfth AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Warren Sack: On the computation of point of view. In Proc. of the Twelfth AAAI, page 1488. Student abstract (1994)", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An operational system for detecting and tracking opinions in on-line discussion. Workshop note, SIGIR Workshop on Operational Text Classification", "authors": [ { "first": "Richard", "middle": [ "M" ], "last": "Tong", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard M. Tong: An operational system for detecting and tracking opinions in on-line discussion. Workshop note, SIGIR Workshop on Operational Text Classification (2001)", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "type_str": "figure", "text": "The contribution c of each 2-tuple in a positive or negative context (denoted by Pos_Neg) is calculated by (5). This is an analyzing measure that uses multi-feature resources: the contribution of the 2-tuple to sentiment classification in the sentence containing CC i . 
Obviously, when \u03b1 and \u03b2 are fixed, the larger Pr(<w, \u03c5> | CC i , Pos_Neg) is, the larger the contribution c of the 2-tuple to the semantic orientation Pos_Neg (one of the {positive, negative} views).", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "occupying the \u03c5-th position on the right side of CC i", "uris": null }, "TABREF4": { "num": null, "type_str": "table", "text": "", "content": "
Experiment Results

                 Texts with Positive View        Texts with Negative View
                 (more than 15 sentences)        (more than 15 sentences)
                 SVM          Our Method         SVM          Our Method
Recall (%)       80.6         73.2               68.4         76.1
Precision (%)    74.1         75.3               75.6         73.8
F1-score (%)     77.2         74.2               71.8         74.9

                 Texts with Positive View        Texts with Negative View
                 (less than 15 sentences)        (less than 15 sentences)
                 SVM          Our Method         SVM          Our Method
Recall (%)       62.1         63.0               62.1         69.5
Precision (%)    65.1         70.1               59.0         62.3
F1-score (%)     63.6         66.4               60.5         65.7
", "html": null } } } }