{ "paper_id": "I05-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:25:12.860752Z" }, "title": "Classifying Chinese Texts in Two Steps", "authors": [ { "first": "Xinghua", "middle": [], "last": "Fan", "suffix": "", "affiliation": { "laboratory": "State Key Laboratory of Intelligent Technology and Systems", "institution": "Tsinghua University", "location": { "postCode": "100084", "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "", "affiliation": { "laboratory": "State Key Laboratory of Intelligent Technology and Systems", "institution": "Tsinghua University", "location": { "postCode": "100084", "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Key-Sun", "middle": [], "last": "Choi", "suffix": "", "affiliation": {}, "email": "kschoi@cs.kaist.ac.kr" }, { "first": "Qin", "middle": [], "last": "Zhang", "suffix": "", "affiliation": {}, "email": "zhangqin@sipo.gov.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes a two-step method for Chinese text categorization (TC). In the first step, a Na\u00efve Bayesian classifier is used to fix the fuzzy area between two categories, and, in the second step, the classifier with more subtle and powerful features is used to deal with documents in the fuzzy area, which are thought of being unreliable in the first step. The preliminary experiment validated the soundness of this method. Then, the method is extended from two-class TC to multi-class TC. In this two-step framework, we try to further improve the classifier by taking the dependences among features into consideration in the second step, resulting in a Causality Na\u00efve Bayesian Classifier.", "pdf_parse": { "paper_id": "I05-1027", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes a two-step method for Chinese text categorization (TC). In the first step, a Na\u00efve Bayesian classifier is used to fix the fuzzy area between two categories, and, in the second step, the classifier with more subtle and powerful features is used to deal with documents in the fuzzy area, which are thought of being unreliable in the first step. The preliminary experiment validated the soundness of this method. Then, the method is extended from two-class TC to multi-class TC. In this two-step framework, we try to further improve the classifier by taking the dependences among features into consideration in the second step, resulting in a Causality Na\u00efve Bayesian Classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Text categorization (TC) is a task of assigning one or multiple predefined category labels to natural language texts. 
To deal with this sophisticated task, a variety of statistical classification methods and machine learning techniques have been exploited intensively [1] , including the Na\u00efve Bayesian (NB) classifier [2] , the Vector Space Model (VSM)-based classifier [3] , the example-based classifier [4] , and the Support Vector Machine [5] .", "cite_spans": [ { "start": 268, "end": 271, "text": "[1]", "ref_id": null }, { "start": 319, "end": 322, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 371, "end": 374, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 406, "end": 409, "text": "[4]", "ref_id": null }, { "start": 443, "end": 446, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Text filtering is a basic type of text categorization (two-class TC). It can find many real-life applications [6] , a typical one is the ill information filtering, such as erotic information and garbage information filtering on the web, in e-mails and in short messages of mobile phone. It is obvious that this sort of information should be carefully controlled. On the other hand, the filtering performance using the existing methodologies is still not satisfactory in general. The reason lies in that there exist a number of documents with high degree of ambiguity, from the TC point of view, in a document collection, that is, there is a fuzzy area across the border of two classes (for the sake of expression, we call the class consisting of the ill information-related texts, or, the negative samples, the category of TARGET, and, the class consisting of the ill information-not-related texts, or, the positive samples, the category of Non-TARGET). Some documents in one category may have great similarities with some other documents in the other category, for example, a lot of words concerning love story and sex are likely appear in both negative samples and positive samples if the filtering target is erotic information. We observe that most of the classification errors come from the documents falling into the fuzzy area between two categories.", "cite_spans": [ { "start": 110, "end": 113, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The idea of this paper is inspired by the fuzzy area between categories. A two-step TC method is thus proposed: in the first step, a classifier is used to fix the fuzzy area between categories; in the second step, a classifier (probably the same as that in the first step) with more subtle and powerful features is used to deal with documents in the fuzzy area which are thought of being unreliable in the first step. Experimental results validate the soundness of this method. Then we extend it from two-class TC to multi-class TC. 
Furthermore, in this two-step framework, we try to improve the classifier by taking the dependences among features into consideration in the second step, resulting in a Causality Na\u00efve Bayesian Classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows: Section 2 describes the two-step method in the context of two-class Chinese TC; Section 3 extends it to multi-class TC; Section 4 introduces the Causality Na\u00efve Bayesian Classifier; and Section 5 is conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use the Na\u00efve Bayesian Classifier to fix the fuzzy area in the first step. For a document represented by a binary-valued vector d = (W 1 , W 2 , \u2026, W |D| ), the two-class Na\u00efve Bayesian Classifier is given as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "\u2211 \u2211 \u2211 = = = \u2212 + + = = 1 log 1 log 1 1 log } Pr{ } Pr{ log } Pr{ } Pr{ log ) ( D k k k k D k k k k D k k k -p p W -p p W -p -p c c |d c |d c d f (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "where Pr{ \u2022 } is the probability that event { \u2022 } occurs, c i is category i, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "p ki =Pr{W k =1|c i } (i=1,2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "). If f(d) \u22650, the document d will be assigned the category label c 1 , otherwise, c 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "Let:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "\u2211 = + = | | 1 2 1 2 1 1 1 log } Pr{ } Pr{ log D k k k -p -p c c Con (2) \u2211 = = | | 1 1 1 1 log D k k k k -p p W X (3) \u2211 = = | | 1 2 2 1 log D k k k k -p p W Y (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "where Con is a constant relevant only to the training set, X and Y are the measures that the document d belongs to categories c 1 and c 2 respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "We rewrite (1) as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "Con Y X d f + \u2212 = ) ( (5) Apparently, f(d)=0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "is the separate line in a two-dimensional space with X and Y being X-coordinate and Y-coordinate. 
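To make this concrete, the following minimal sketch (Python; the vocabulary, the class-conditional probabilities p_k1, p_k2 and the priors are made-up placeholders, not values from the paper's data) computes Con, X and Y of equations (2)-(4) and applies the decision rule f(d) = X - Y + Con of equation (5):

```python
import math

# Illustration of equations (2)-(5).  The vocabulary, the class-conditional
# probabilities p_k1 = Pr{W_k=1|c_1} and p_k2 = Pr{W_k=1|c_2}, and the priors
# are made-up placeholders, not values estimated from the paper's data.
vocab = ["love", "story", "chapter", "price"]
p1 = {"love": 0.60, "story": 0.55, "chapter": 0.30, "price": 0.05}
p2 = {"love": 0.20, "story": 0.25, "chapter": 0.35, "price": 0.40}
prior1, prior2 = 1.0 / 7.0, 6.0 / 7.0          # e.g. a 1:6 class split

def features(doc_words):
    """Binary vector d = (W_1, ..., W_|D|): W_k = 1 iff word k occurs in d."""
    return {w: int(w in doc_words) for w in vocab}

def con():
    # Con = log(Pr{c_1}/Pr{c_2}) + sum_k log((1 - p_k1)/(1 - p_k2))      -- eq. (2)
    return math.log(prior1 / prior2) + sum(
        math.log((1 - p1[w]) / (1 - p2[w])) for w in vocab)

def point(d):
    # X = sum_k W_k log(p_k1/(1 - p_k1)), Y = sum_k W_k log(p_k2/(1 - p_k2))  -- eqs. (3), (4)
    x = sum(d[w] * math.log(p1[w] / (1 - p1[w])) for w in vocab)
    y = sum(d[w] * math.log(p2[w] / (1 - p2[w])) for w in vocab)
    return x, y

d = features({"love", "story"})
x, y = point(d)
f = x - y + con()          # eq. (5): f >= 0 assigns c_1, otherwise c_2
```
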
In this space, a given document d can be viewed as a point (x, y), in which the values of x and y are calculated according to (3) and (4) .", "cite_spans": [ { "start": 224, "end": 227, "text": "(3)", "ref_id": "BIBREF2" }, { "start": 232, "end": 235, "text": "(4)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "As shown in Fig.1 , the distance from the point (x, y) to the separate line will be: 2) regarding Dist in the two-dimensional space, with the curve on the left for the negative samples, and the curve on the right for the positive samples. As can be seen in the figure, most of the misclassified documents, which unexpectedly across the separate line, are near the line. The error rate of the classifier is heavily influenced by this area, though the documents falling into this area only constitute a small portion of the training set. Fig. 2 . Distribution of the training set in the two-dimensional space Thus, the space can be partitioned into reliable area and unreliable area: (7) where Dist 1 and Dist 2 are constants determined by experiments, Dist 1 is positive real number and Dist 2 is negative real number.", "cite_spans": [ { "start": 682, "end": 685, "text": "(7)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 12, "end": 17, "text": "Fig.1", "ref_id": "FIGREF0" }, { "start": 536, "end": 542, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": ") ( 2 1 Con y x Dist + \u2212 = (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "\u23aa \u23a9 \u23aa \u23a8 \u23a7 < > \u2264", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "In the second step, more subtle and powerful features will be designed in particular to tackle the unreliable area identified in the first step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Na\u00efve Bayesian Classifier", "sec_num": "2.1" }, { "text": "The dataset used here is composed of 12,600 documents with 1,800 negative samples of TARGET and 10,800 positive samples of Non-TARGET. It is split into 4 parts randomly, with three parts as training set and one part as test set. All experiments in this section are performed in 4-fold cross validation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on the Two-Class TC", "sec_num": "2.2" }, { "text": "CSeg&Tag3.0, a Chinese word segmentation and POS tagging system developed by Tsinghua University, is used to perform the morphological analysis for Chinese texts. In the first step, Chinese words with parts-of-speech verb, noun, adjective and adverb are considered as features. The original feature set is further reduced to a much smaller one according to formula (8) or (9) . A Na\u00efve Bayesian Classifier is then applied to the test set. In the second step, only the documents that are identified unreliable in terms of (7) in the first step are concerned. 
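In code, the routing rule of equations (6) and (7) can be sketched as follows (a minimal illustration continuing the sketch above; the thresholds Dist_1 and Dist_2 are placeholders for the experimentally determined constants):

```python
import math

# Routing rule of equations (6) and (7).  DIST_1 > 0 and DIST_2 < 0 stand in
# for the experimentally determined constants Dist_1 and Dist_2.
DIST_1, DIST_2 = 2.0, -2.0

def distance(x, y, con):
    # Dist = (1/sqrt(2)) * (x - y + Con)                                 -- eq. (6)
    return (x - y + con) / math.sqrt(2)

def first_step(x, y, con):
    """Return a reliable label, or defer the document to the second step."""
    dist = distance(x, y, con)
    if dist > DIST_1:
        return "c1"            # far on the c_1 side of the line: reliable
    if dist < DIST_2:
        return "c2"            # far on the c_2 side of the line: reliable
    return "second step"       # fuzzy area of eq. (7): decision is unreliable

print(first_step(x=1.5, y=4.0, con=0.3))   # Dist is about -1.56: "second step"
```
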
This time, bigrams of Chinese words with parts-of-speech verb and noun are used as features, and the Na\u00efve Bayesian Classifier is re-trained and applied again.", "cite_spans": [ { "start": 372, "end": 375, "text": "(9)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on the Two-Class TC", "sec_num": "2.2" }, { "text": "\u2211 = = n i i k i k i k k c t ,c t ,c t ,c t MI 1 1 } Pr{ } Pr{ } Pr{ log } Pr{ ) ( (8) \u2211 = = n i i k i k k c t ,c t ,c t MI 1 2 } Pr{ } Pr{ } Pr{ log ) ( (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on the Two-Class TC", "sec_num": "2.2" }, { "text": "where t k stands for the kth feature, which may be a Chinese word or a word bigram, and c i is the ith predefined category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on the Two-Class TC", "sec_num": "2.2" }, { "text": "We try five methods as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on the Two-Class TC", "sec_num": "2.2" }, { "text": "Method-1: Use Chinese words as features, reduce features with (9), and classify documents directly without exploring the two-step strategy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on the Two-Class TC", "sec_num": "2.2" }, { "text": "Method-2: same as Method-1 except feature reduction with (8) . Method-3: same as Method-1 except Chinese word bigrams as features. Method-4: Use the mixture of Chinese words and Chinese word bigrams as features, reduce features with (8) , and classify documents directly. Method-5: (i.e., the proposed method): Use Chinese words as features in the first step and then use word bigrams as features in the second step, reduce features with (8) , and classify the documents in two steps.", "cite_spans": [ { "start": 57, "end": 60, "text": "(8)", "ref_id": "BIBREF7" }, { "start": 233, "end": 236, "text": "(8)", "ref_id": "BIBREF7" }, { "start": 438, "end": 441, "text": "(8)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on the Two-Class TC", "sec_num": "2.2" }, { "text": "Note that the proportion of negative samples and positive samples is 1:6. Thus if all the documents in the test set is arbitrarily set to positive, the precision will reach 85.7%. For this reason, only the experimental results for negative samples are considered in evaluation, as given in Table 1 . For each method, the number of features is set by the highest point in the curve of the classifier performance with respect to the number of features (For the limitation of space, we omit all the curves here). The numbers of features set in five methods are 4000, 500, 15000, 800 and 500+3000 (the first step + the second step) respectively. Table 1 . Performance comparisons of the five methods in two-class TC Comparing Method-1 and Method-2, we can see that feature reduction formula (8) is superior to (9) . Moreover, the number of features determined in the former is less than that in the latter (500 vs. 4000). Comparing Method-2, Method-3 and Method-4, we can see that Chinese word bigrams as features have better discriminating capability meanwhile with more serious data sparseness: the performances of Method-3 and Method-4 are higher than that of Method-2, but the number of features used in Method-3 is more than those used in Method-2 and Method-4 (15000 vs. 500 and 800). Table 1 shows that the proposed method (Methond-5) has the best performance (95.54% F1) and good efficiency. 
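As an aside on the feature-reduction step shared by all five methods, the two scoring criteria of formulas (8) and (9) can be sketched as follows (a toy illustration over a made-up corpus, not the authors' implementation):

```python
import math

# Toy illustration of the feature-scoring criteria (8) and (9).  `docs` is a
# made-up corpus of (token set, category) pairs; in the paper the tokens are
# Chinese words or word bigrams.
docs = [({"love", "story"}, "c1"), ({"price", "chapter"}, "c2"),
        ({"love", "price"}, "c2"), ({"story", "love"}, "c1")]
categories = ["c1", "c2"]
N = len(docs)

def pr_joint(t, c):   # Pr{t_k, c_i}
    return sum(1 for toks, lab in docs if t in toks and lab == c) / N

def pr_t(t):          # Pr{t_k}
    return sum(1 for toks, _ in docs if t in toks) / N

def pr_c(c):          # Pr{c_i}
    return sum(1 for _, lab in docs if lab == c) / N

def mi1(t):
    # MI_1(t_k) = sum_i Pr{t_k, c_i} log( Pr{t_k, c_i} / (Pr{t_k} Pr{c_i}) )   -- eq. (8)
    return sum(pr_joint(t, c) * math.log(pr_joint(t, c) / (pr_t(t) * pr_c(c)))
               for c in categories if pr_joint(t, c) > 0)

def mi2(t):
    # MI_2(t_k) = sum_i log( Pr{t_k, c_i} / (Pr{t_k} Pr{c_i}) )                -- eq. (9)
    return sum(math.log(pr_joint(t, c) / (pr_t(t) * pr_c(c)))
               for c in categories if pr_joint(t, c) > 0)

# Feature reduction keeps the top-n features under the chosen criterion.
top_features = sorted({t for toks, _ in docs for t in toks}, key=mi1, reverse=True)[:2]
```
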
It integrates the merit of words and word bigrams. Using words as features in the first step aims at its better statistical coverage, --the 500 selected features in the first step can treat a majority of documents, constituting 63.13% of the test set. On the other hand, using word bigrams as features in the second step aims at its better discriminating capability, although the number of features becomes comparatively large (3000). Comparing Method-5 with Method-2, Method-3 and Method-4, we find that the two-step approach is superior to either using only one kind of features (word or word bigram) in the classifier, or using the mixture of two kinds of features in one step.", "cite_spans": [ { "start": 806, "end": 809, "text": "(9)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 290, "end": 297, "text": "Table 1", "ref_id": null }, { "start": 642, "end": 649, "text": "Table 1", "ref_id": null }, { "start": 1287, "end": 1294, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experiments on the Two-Class TC", "sec_num": "2.2" }, { "text": "We extend the two-step method presented in Section 2 to handle the multi-class TC now. The idea is to transfer the multi-class TC to the two-class TC. Similar to twoclass TC, the emphasis is still on the misclassified documents given by a classifier, though we use a modified multi-class Na\u00efve Bayesian Classifier here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extending the Two-Step Approach to the Multi-class TC", "sec_num": "3" }, { "text": "For a document represented by a binary-valued vector d = (W 1 , W 2 , \u2026, W |D| ), the multi-class Na\u00efve Bayesian Classifier can be re-written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Multi-class Bayesian Classifier", "sec_num": "3.1" }, { "text": "\u2211 \u2211 = = \u2208 * + + = | | 1 | | 1 ) 1 log ) 1 ( log } { Pr log ( max arg D k ki ki k D k ki i C c -p p W -p c c i (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Multi-class Bayesian Classifier", "sec_num": "3.1" }, { "text": "where Pr{ \u2022 } is the probability that event { \u2022 } occurs, p ki =Pr{W k =1|c i }, (i=1,2, \u2026, |C|), C is the number of predefined categories. Let: (13) where MV i stands for the likelihood of assigning a label c i \u2208 C to the document d, MV max_F and MV max_S are the maximum and the second maximum over all MV i (i\u2208|C|) respectively. We approximately rewrite (10) as:", "cite_spans": [ { "start": 145, "end": 149, "text": "(13)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Multi-class Bayesian Classifier", "sec_num": "3.1" }, { "text": "\u2211 \u2211 = = + + = | | 1 | | 1 1 log ) 1 ( log } { Pr log D k ki ki k D k ki i i -p p W -p c MV (11) ) ( maximum max_ i C c F MV MV i \u2208 = (12) C c i S i MV MV \u2208 = ) imum( second_max max_", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Multi-class Bayesian Classifier", "sec_num": "3.1" }, { "text": "S F MV MV d f max_ max_ ) ( \u2212 = (14)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Multi-class Bayesian Classifier", "sec_num": "3.1" }, { "text": "We try to transfer the multi-class TC described by (10) into a two-class TC described by (14) . 
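A minimal sketch of this transfer, covering equations (10)-(14), is given below; the priors and the class-conditional probabilities p_ki are hypothetical placeholders:

```python
import math

# Sketch of equations (10)-(14) for the multi-class case.  The priors and the
# class-conditional probabilities p_ki = Pr{W_k=1|c_i} are hypothetical.
vocab = ["w1", "w2", "w3"]
priors = {"c1": 0.40, "c2": 0.35, "c3": 0.25}
p = {"c1": {"w1": 0.7, "w2": 0.2, "w3": 0.1},
     "c2": {"w1": 0.3, "w2": 0.6, "w3": 0.2},
     "c3": {"w1": 0.2, "w2": 0.3, "w3": 0.6}}

def mv(d, c):
    # MV_i = log Pr{c_i} + sum_k W_k log(p_ki/(1 - p_ki)) + sum_k log(1 - p_ki)   -- eq. (11)
    return (math.log(priors[c])
            + sum(d[w] * math.log(p[c][w] / (1 - p[c][w])) for w in vocab)
            + sum(math.log(1 - p[c][w]) for w in vocab))

def top_two(d):
    scores = sorted((mv(d, c), c) for c in priors)
    (mv_max_s, _), (mv_max_f, label) = scores[-2], scores[-1]   # eqs. (12), (13)
    return label, mv_max_f, mv_max_s

d = {"w1": 1, "w2": 0, "w3": 1}
label, mv_f, mv_s = top_two(d)
f = mv_f - mv_s       # eq. (14): a small margin signals the fuzzy area
```
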
Formula (14) means that the binary-valued multi-class Na\u00efve Bayesian Classifier can be approximately regarded as searching a separate line in a twodimensional space with MV max_F being the X-coordinate and MV max_S being the Ycoordinate. The distance from a given document, represented as a point (x, y) with the values of x and y calculated according to (12) and (13) respectively, to the separate line in this two-dimensional space will be:", "cite_spans": [ { "start": 89, "end": 93, "text": "(14)", "ref_id": "BIBREF13" }, { "start": 104, "end": 108, "text": "(14)", "ref_id": "BIBREF13" }, { "start": 451, "end": 455, "text": "(12)", "ref_id": "BIBREF11" }, { "start": 460, "end": 464, "text": "(13)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Multi-class Bayesian Classifier", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y) (x Dist \u2212 = 2 1", "eq_num": "(15)" } ], "section": "Fix the Fuzzy Area Between Categories by the Multi-class Bayesian Classifier", "sec_num": "3.1" }, { "text": "The value of Dist directly reflects the degree of confidence of assigning the label c * to the document d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Multi-class Bayesian Classifier", "sec_num": "3.1" }, { "text": "The distribution of a training set (refer to Section 3.2) regarding Dist in this twodimensional space, and, consequently, the fuzzy area for the Na\u00efve Bayesian Classifier, are observed and identified, similar to its counterpart in Section 2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fix the Fuzzy Area Between Categories by the Multi-class Bayesian Classifier", "sec_num": "3.1" }, { "text": "We construct a dataset, including 5 categories and the total of 17756 Chinese documents. The document numbers of five categories are 4192, 6968, 2080, 3175 and 1800 respectively, among which the last three categories have the high degree of ambiguity each other. The dataset is split into four parts randomly, one as the test set and the other three as the training set. We again run the five methods described in Section 2.2 on this dataset. The strategy of determining the number of features also follows that used in Section 2.2. The experimentally determined numbers of features regarding the five methods are 8000, 400, 5000, 800 and 400 + 9000 (the first step + the second step) respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on the Multi-class TC", "sec_num": "3.2" }, { "text": "The average precision, average recall and average F 1 over the five categories are used to evaluate the experimental results, as shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 138, "end": 145, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments on the Multi-class TC", "sec_num": "3.2" }, { "text": "We can see from Table 2 that the very similar conclusions as that in the two-class TC in Section 2.2 can be obtained here: 1) Formula (8) is superior to (9) in feature reduction. This comes from the performance comparison between Method-2 and Method-1: the former has higher performance and higher efficiency that the latter (the average F 1 , 97.20% vs. 91.48%, and the number of features used, 400 vs. 
8000).", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 23, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Table 2. Performance comparisons of the five methods in multi-class TC", "sec_num": null }, { "text": "2) Word bigrams as features have better discriminating capability than words as features, along with more serious data sparseness. The performances of Method-3 and Method-4, which use Chinese word bigrams and the mixture of words and word bigrams as features respectively, are higher than that of Method-2, which only uses Chinese words as features. But the number of features used in Method-3 is much more than those used in Method-2 and Method-4 (5000 vs. 400 and 800).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Performance comparisons of the five methods in multi-class TC", "sec_num": null }, { "text": "3) The proposed method (Methond-5) has the best performances and acceptable efficiency. In term of the average F 1 , the performance is improved from the baseline 91.48% (Method-1) to 98.56% (Method-5). In the first step in Method-5, the number of feature set is small (only 400), but a majority of documents can be treated by it. The number of features exploited in Method-5 is the highest among the five methods (9000), but it is still acceptable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Performance comparisons of the five methods in multi-class TC", "sec_num": null }, { "text": "In this section, a two-step text categorization method taking the dependences among features into account is presented. We do the same task with the Na\u00efve Bayesian Classifier in the first step, exactly same as what we did in Section 2 and Section 3. In the second step, each document identified unreliable in the first step are further processed by exploring the dependences among features. This is realized by a model named the Causality Na\u00efve Bayesian Classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Dependences Among Features in Two-Step Categorization", "sec_num": "4" }, { "text": "The Causality Na\u00efve Bayesian Classifier (CNB) is an improved Na\u00efve Bayesian Classifier. It contains two additional parts, i.e., the k-dependence feature list and the feature causality diagram. The former is used to represent the dependence relation among features, and the latter is used to estimate the probability distribution of a feature dynamically while taking its dependences into account. Obviously, there exists a 0-dependence feature list for every feature in the Na\u00efve Bayesian Classifier, from the definition of K-DFL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Causality Na\u00efve Bayesian Classifier (CNB)", "sec_num": "4.1" }, { "text": "The algorithm of constructing K-DFL is as follows: Given the maximum dependence number k, mutual information threshold \u03b8 and the class ct. For each feature Y, repeat the follow steps. 1) Compute class conditional mutual information MI(Y i , Y j | c t ), for every pair of features Y i and Y j , where i\u2260j. 2) Construct the set Si={ Y j | MI(Y i , Y j | c t ) > \u03b8}. 
3) Let m= min (k, | S i |), select the top m features as K-DFL from S i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Causality Na\u00efve Bayesian Classifier (CNB)", "sec_num": "4.1" }, { "text": "Feature Causality Diagram (FCD): CNB allows each feature Y, which occurs in a given document, to have a Feature Causality Diagram (FCD). FCD is a double-layer directed diagram, in which the first layer has only the feature node Y, and the second layer allows to have multiple nodes that include the class node C and the corresponding dependence node set S of Y. Here, S=S d \u2229S F , S d is the K-DFL node set of Y and S F ={X i | X i is a feature node that occurs in the given document. There exists a directed arc from every node X i at the second layer to the node Y at the first layer. The arc is called causality link event L i which represents the causality intensity between node Y and X i , and the probability of L i is p i =Pr{L i }=Pr{Y=1|X i =1}. The relation among all arcs is logical OR. The Feature Causality Diagram can be considered as a sort of simplified causality diagram [9] [10] .", "cite_spans": [ { "start": 889, "end": 892, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 893, "end": 897, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The Causality Na\u00efve Bayesian Classifier (CNB)", "sec_num": "4.1" }, { "text": "Suppose feature Y's FCD is G, and it parent node set S={X 1 , X 2 ,\u2026,X m } (m\u22651) in G, we can estimate the conditional probability as follows while considering the de- ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Causality Na\u00efve Bayesian Classifier (CNB)", "sec_num": "4.1" }, { "text": "= = = m i i j j i i i p p p 2 1 1 1 m 1 m 1 ) 1 ( } L Pr{ G} | 1 Pr{Y 1} X , 1, X | 1 Pr{Y U L (16)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Causality Na\u00efve Bayesian Classifier (CNB)", "sec_num": "4.1" }, { "text": "Note that when m=1,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Causality Na\u00efve Bayesian Classifier (CNB)", "sec_num": "4.1" }, { "text": "C} | 1 Pr{Y G} | 1 Pr{Y 1} X | 1 Pr{Y 1 = = = = = = .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Causality Na\u00efve Bayesian Classifier (CNB)", "sec_num": "4.1" }, { "text": "Causality Na\u00efve Bayesian Classifier (CNB): For a document represented by a binary-valued vector d=(X 1 ,X 2 , \u2026,X |d| ), divide the features into two sets X 1 and X 2 , X 1 = {X i | X i =1} and X 2 = {X j | X j =0}. The Causality Na\u00efve Bayesian Classifier can be written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Causality Na\u00efve Bayesian Classifier (CNB)", "sec_num": "4.1" }, { "text": "})) c | {X Pr log(1 } G | logPr{X } (logPr{c max arg c* | | 1 | | 1 t j j i i i t C c t \u2211 \u2211 = = \u2208 \u2212 + + = 2 1 X X (17)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Causality Na\u00efve Bayesian Classifier (CNB)", "sec_num": "4.1" }, { "text": "As mentioned earlier, the first step remains unchanged as that in Section 2 and Section 3. The difference is in the second step: for the documents identified unreliable in the first step, we apply the Causality Na\u00efve Bayesian Classifier to handle them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on CNB", "sec_num": "4.2" }, { "text": "We use two datasets in the experiments. 
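Before describing the two datasets, the following sketch illustrates the noisy-OR estimate of equation (16) and the CNB decision rule of equation (17); the feature probabilities and the K-DFL neighbour lists are hypothetical placeholders rather than values learned from the corpora:

```python
import math

# Sketch of the CNB pieces of Section 4.1.  All probabilities and the K-DFL
# neighbour lists are hypothetical; in the paper they are estimated from the
# training set via class-conditional mutual information.
categories = ["c1", "c2"]
priors = {"c1": 0.3, "c2": 0.7}
# 0-dependence probabilities Pr{X=1|c} for each feature and class.
p_cond = {"c1": {"x1": 0.6, "x2": 0.5, "x3": 0.1},
          "c2": {"x1": 0.2, "x2": 0.3, "x3": 0.4}}
# K-DFL: up to k dependence features per feature, with causality-link
# probabilities p_i = Pr{Y=1|X_i=1}.
kdfl = {"x1": {"x2": 0.8}, "x2": {"x1": 0.7}, "x3": {}}

def noisy_or(p_list):
    # Pr{Y=1|G} = p_1 + sum_{i>=2} p_i * prod_{j<i} (1 - p_j)            -- eq. (16)
    total, remaining = 0.0, 1.0
    for p in p_list:
        total += p * remaining
        remaining *= (1.0 - p)
    return total

def pr_feature_given_fcd(y, present, c):
    # Second-layer parents of Y: the class node C plus the K-DFL features
    # that actually occur in the document (S = S_d intersected with S_F).
    parents = [p_cond[c][y]] + [kdfl[y][x] for x in kdfl[y] if x in present]
    return noisy_or(parents)

def cnb(present, absent):
    # c* = argmax_c [ log Pr{c} + sum_{X in X_1} log Pr{X|G}
    #                + sum_{X in X_2} log(1 - Pr{X|c}) ]                 -- eq. (17)
    def score(c):
        return (math.log(priors[c])
                + sum(math.log(pr_feature_given_fcd(x, present, c)) for x in present)
                + sum(math.log(1.0 - p_cond[c][x]) for x in absent))
    return max(categories, key=score)

print(cnb(present={"x1", "x2"}, absent={"x3"}))
```

When a feature has no dependence features present in the document, the parent list reduces to the class node alone and noisy_or returns the ordinary class-conditional probability, matching the m=1 case noted above.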
One is the two-class dataset described in Section 2.2, called Dataset-I, and the other is the multi-class dataset described in Section 3.2, called Dataset-II.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on CNB", "sec_num": "4.2" }, { "text": "To evaluate CNB and compare all the methods presented in this paper, we experiment with the following methods: 1) Na\u00efve Bayesian Classifier (NB), i.e., Method-2 in Section 2.2; 2) CNB without exploring the two-step strategy;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on CNB", "sec_num": "4.2" }, { "text": "3) The two-step strategy: NB and CNB in the first and second steps (TS-CNB); 4) Limited Dependence Bayesian Classifier (DNB) [11] ; 5) Method-5 in Section 2.2 and Section 3.2 (denoted TS-DF here).", "cite_spans": [ { "start": 124, "end": 128, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on CNB", "sec_num": "4.2" }, { "text": "Experimental results for the two-class Dataset-I and the multi-class Dataset-II are listed in Table 3 and Table 4 . The data for NB and TS-DF are derived from the corresponding columns of Table 1 and Table 2 . The parameters in CNB and TS-CNB are the dependence number k=1 and 5, and the threshold \u03b8 = 0.0545 and 0.0045, for Dataset-I and Dataset-II respectively. The parameters in DNB are the dependence number k=1 and 3, and the threshold \u03b8 = 0.0545 and 0.0045, for Dataset-I and Dataset-II respectively. Table 3 . Performance comparisons in two-class Dataset-I Table 3 and Table 4 demonstrate that: 1) The performance of the Na\u00efve Bayesian Classifier can be improved by taking the dependences among features into account, as evidenced by the fact that CNB, TS-CNB and DNB all outperform NB. By tracing the experiment, we find an interesting phenomenon, as expected: for the documents identified as reliable by NB, CNB cannot improve on NB, but for those identified as unreliable, it can. The reason should be that although NB and CNB use the same features, CNB additionally uses the dependences among them. 2) CNB and TS-CNB are equal in effectiveness, but TS-CNB has a higher computational efficiency. As stated earlier, TS-CNB uses NB to classify documents in the reliable area and then uses CNB to classify documents in the unreliable area. At first glance, the efficiency of TS-CNB seems lower than that of using CNB alone, because the former additionally runs NB in the first step; but in fact a majority of documents (e.g., 63.13% of the total documents in Dataset-I) fall into the reliable area and are treated by NB successfully (obviously, NB is more efficient than CNB) in the first step, so they never go to the second step, resulting in a higher computational efficiency of TS-CNB than of CNB.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 104, "text": "Table 3 and Table 4", "ref_id": "TABREF2" }, { "start": 179, "end": 198, "text": "Table 1 and Table 2", "ref_id": null }, { "start": 489, "end": 496, "text": "Table 3", "ref_id": null }, { "start": 546, "end": 553, "text": "Table 3", "ref_id": null }, { "start": 558, "end": 565, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments on CNB", "sec_num": "4.2" }, { "text": "3) The performances of CNB, TS-CNB and DNB are almost identical; among them, the efficiency of TS-CNB is the highest. 
And, the efficiency of CNB is higher than that of DNB, because CNB uses a simpler network structure than DNB, with the same learning and inference formalism. 4) TS-DF has the highest performance among the all. Meanwhile, the ranking of computational efficiency (in descending order) is NB, TS-DF, TS-CNB, CNB, and DNB. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on CNB", "sec_num": "4.2" }, { "text": "Combining multiple methodologies or representations has been studied in several areas of information retrieval so far, for example, retrieval effectiveness can be improved by using multiple representations [12] . In the area of text categorization in particular, many methods of combining different classifiers have been developed. For example, Yang et al. [13] used simple equal weights for normalized score of each classifier output so as to integrate multiple classifiers linearly in the domain of Topic Detection and Tracking; Hull at al. [14] used linear combination for probabilities or log odds scores of multiple classifier output in the context of document filtering. Larkey et al. [15] used weighted linear combination for system ranks and scores of multiple classifier output in the medical document domain; Li and Jain [16] used voting and classifier selection technique including dynamic classifier selection and adaptive classifier. Lam and Lai [17] automatically selected a classifier for each category based on the category-specific statistical characteristics. Bennett et al. [18] used voting, classifier-selection techniques and a hierarchical combination method with reliability indicators.", "cite_spans": [ { "start": 206, "end": 210, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 357, "end": 361, "text": "[13]", "ref_id": "BIBREF12" }, { "start": 543, "end": 547, "text": "[14]", "ref_id": "BIBREF13" }, { "start": 691, "end": 695, "text": "[15]", "ref_id": "BIBREF14" }, { "start": 831, "end": 835, "text": "[16]", "ref_id": "BIBREF15" }, { "start": 959, "end": 963, "text": "[17]", "ref_id": "BIBREF16" }, { "start": 1093, "end": 1097, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "5" }, { "text": "The issue of how to classify Chinese documents characterized by high degree ambiguity from text categorization's point of view is a challenge. For this issue, this paper presents two solutions in a uniform two-step framework, which makes use of the distributional characteristics of misclassified documents, that is, most of the misclassified documents are near to the separate line between categories. The first solution is a two-step TC approach based on the Na\u00efve Bayesian Classifier. The second solution is to further introduce the dependences among features into the model, resulting in a two-step approach based on the so-called Causality Na\u00efve Bayesian Classifier. Experiments show that the second solution is superior to the Na\u00efve Bayesian Classifier, and is equal to CNB without exploring two-step strategy in performance, but has a higher computational efficiency than the latter. 
The first solution has the best performance in all the experiments, outperforming all other methods (including the second solution): in the two-class experiments, its F 1 increases from the baseline 82.67% to the final 95.54%, and in the multi-class experiments, its average F 1 increases from the baseline 91.48% to the final 98.56%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "In addition, the other two conclusions can be drawn from the experiments: 1) Using Chinese word bigrams as features has a better discriminating capability than using words as features, but more serious data sparseness will be faced; 2) formula (8) is superior to (9) in feature reduction in both the two-class and multi-class Chinese text categorization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "It is worth point out that we believe the proposed method is in principle language independent, though all the experiments are performed on Chinese datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" } ], "back_matter": [], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Naive Bayes at Forty: The Independence Assumption in Information Retrieval", "authors": [ { "first": "D", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ECML-98", "volume": "", "issue": "", "pages": "4--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lewis, D. Naive Bayes at Forty: The Independence Assumption in Information Retrieval. In Proceedings of ECML-98, 4-15, 1998.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer", "authors": [ { "first": "G", "middle": [], "last": "Salton", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salton, G. Automatic Text Processing: The Transformation, Analysis, and Retrieval of In- formation by Computer. Addison-Wesley, Reading, MA, 1989.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A Re-examination of Text Categorization Methods", "authors": [ { "first": "Y", "middle": [], "last": "Yang", "suffix": "" }, { "first": "X", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1999, "venue": "Proceedings of SIGIR-99", "volume": "", "issue": "", "pages": "42--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, Y., and Liu, X. A Re-examination of Text Categorization Methods. In Proceedings of SIGIR-99, 42-49,1999.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Causality Reasoning and Text Categorization", "authors": [ { "first": "Xinghua", "middle": [], "last": "Fan", "suffix": "" } ], "year": 2004, "venue": "P.R. China", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinghua Fan. Causality Reasoning and Text Categorization, Postdoctoral Research Report of Tsinghua University, P.R. China, April 2004. 
(In Chinese)", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Inductive Learning Algorithms and Representation for Text Categorization", "authors": [ { "first": "S", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "J", "middle": [], "last": "Platt", "suffix": "" }, { "first": "D", "middle": [], "last": "Hecherman", "suffix": "" }, { "first": "M", "middle": [], "last": "Sahami", "suffix": "" } ], "year": 1998, "venue": "Proceedings of CIKM-98", "volume": "", "issue": "", "pages": "148--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dumais, S.T., Platt, J., Hecherman, D., and Sahami, M. Inductive Learning Algorithms and Representation for Text Categorization. In Proceedings of CIKM-98, Bethesda, MD, 148-155, 1998.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bayesian Approach to Filtering Junk E-Mail", "authors": [ { "first": "M", "middle": [], "last": "Sahami", "suffix": "" }, { "first": "S", "middle": [], "last": "Dumais", "suffix": "" }, { "first": "D", "middle": [], "last": "Hecherman", "suffix": "" }, { "first": "E", "middle": [ "A" ], "last": "Horvitz", "suffix": "" } ], "year": 1998, "venue": "Learning for Text Categorization: Papers from the AAAI Workshop", "volume": "", "issue": "", "pages": "55--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sahami, M., Dumais, S., Hecherman, D., and Horvitz, E. A. Bayesian Approach to Filter- ing Junk E-Mail. In Learning for Text Categorization: Papers from the AAAI Workshop, 55-62, Madison Wisconsin. AAAI Technical Report WS-98-05, 1998.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Causality Diagram Theory Research and Applying It to Fault Diagnosis of Complexity System", "authors": [ { "first": "Xinghua", "middle": [], "last": "Fan", "suffix": "" } ], "year": 2002, "venue": "P.R. China", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinghua Fan. Causality Diagram Theory Research and Applying It to Fault Diagnosis of Complexity System, Ph.D. Dissertation of Chongqing University, P.R. China, April 2002. (In Chinese)", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Reasoning Algorithm in Multi-Valued Causality Diagram", "authors": [ { "first": "Xinghua", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Zhang", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Sun", "middle": [], "last": "Maosong", "suffix": "" }, { "first": "Huang", "middle": [], "last": "Xiyue", "suffix": "" } ], "year": 2003, "venue": "Chinese Journal of Computers", "volume": "26", "issue": "3", "pages": "310--322", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinghua Fan, Zhang Qin, Sun Maosong, and Huang Xiyue. Reasoning Algorithm in Multi-Valued Causality Diagram, Chinese Journal of Computers, 26(3), 310-322, 2003. (In Chinese)", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning Limited Dependence Bayesian Classifiers", "authors": [ { "first": "M", "middle": [], "last": "Sahami", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Second International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "335--338", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sahami, M. Learning Limited Dependence Bayesian Classifiers. 
In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Portland, 335-338, 1996.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Combining Automatic and Manual Index Representations in Probabilistic Retrieval", "authors": [ { "first": "T", "middle": [ "B" ], "last": "Rajashekar", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 1995, "venue": "Journal of the American society for information science", "volume": "6", "issue": "4", "pages": "272--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajashekar, T. B. and Croft, W. B. Combining Automatic and Manual Index Representa- tions in Probabilistic Retrieval. Journal of the American society for information science, 6(4): 272-283,1995.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Combining Multiple Learning Strategies for Effective Cross Validation", "authors": [ { "first": "Y", "middle": [], "last": "Yang", "suffix": "" }, { "first": "T", "middle": [], "last": "Ault", "suffix": "" }, { "first": "T", "middle": [], "last": "Pierce", "suffix": "" } ], "year": 2000, "venue": "Proceedings of ICML 2000", "volume": "", "issue": "", "pages": "1167--1174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, Y., Ault, T. and Pierce, T. Combining Multiple Learning Strategies for Effective Cross Validation. In Proceedings of ICML 2000, 1167-1174, 2000.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Method Combination for Document Filtering", "authors": [ { "first": "D", "middle": [ "A" ], "last": "Hull", "suffix": "" }, { "first": "J", "middle": [ "O" ], "last": "Pedersen", "suffix": "" }, { "first": "H", "middle": [], "last": "Schutze", "suffix": "" } ], "year": 1996, "venue": "Proceedings of SIGIR-96", "volume": "", "issue": "", "pages": "279--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hull, D. A., Pedersen, J. O. and H. Schutze. Method Combination for Document Filtering. In Proceedings of SIGIR-96, 279-287, 1996.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Combining Classifiers in Text Categorization", "authors": [ { "first": "L", "middle": [ "S" ], "last": "Larkey", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 1996, "venue": "Proceedings of SIGIR-96", "volume": "", "issue": "", "pages": "289--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Larkey, L. S. and Croft, W. B. Combining Classifiers in Text Categorization. In Proceed- ings of SIGIR-96, 289-297, 1996.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Classification of Text Documents", "authors": [ { "first": "Y", "middle": [ "H" ], "last": "Li", "suffix": "" }, { "first": "A", "middle": [ "K" ], "last": "Jain", "suffix": "" } ], "year": 1998, "venue": "The Computer Journal", "volume": "41", "issue": "8", "pages": "537--546", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Y. H., and Jain, A. K. Classification of Text Documents. The Computer Journal, 41(8): 537-546, 1998.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A Meta-learning Approach for Text Categorization", "authors": [ { "first": "W", "middle": [], "last": "Lam", "suffix": "" }, { "first": "K", "middle": [ "Y" ], "last": "Lai", "suffix": "" } ], "year": 2001, "venue": "Proceedings of SIGIR-2001", "volume": "", "issue": "", "pages": "303--309", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lam, W., and Lai, K.Y. 
A Meta-learning Approach for Text Categorization. In Proceed- ings of SIGIR-2001, 303-309, 2001.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Probabilistic Combination of Text Classifiers Using Reliability Indicators: Models and Results", "authors": [ { "first": "P", "middle": [ "N" ], "last": "Bennett", "suffix": "" }, { "first": "S", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "E", "middle": [], "last": "Horvitz", "suffix": "" } ], "year": 2002, "venue": "Proceedings of SIGIR-2002", "volume": "", "issue": "", "pages": "11--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bennett, P. N., Dumais, S. T., and Horvitz, E. Probabilistic Combination of Text Classifi- ers Using Reliability Indicators: Models and Results. In Proceedings of SIGIR-2002, 11- 15, 2002.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Distance from point (x, y) to the separate line", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "illustrates the distribution of a training set (refer to Section 2.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "text": "CNB allows each feature node Y to have a maximum of k features nodes as parents that constitute the k-dependence feature list representing the dependences among features. In other words, \u220f(Y) = {Y d , C}, where Y", "num": null, "type_str": "table", "content": "", "html": null }, "TABREF2": { "text": "Performance comparisons in multi-class Dataset-II", "num": null, "type_str": "table", "content": "
", "html": null } } } }