{
"paper_id": "N06-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:46:17.614218Z"
},
"title": "Exploiting Domain Structure for Named Entity Recognition",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign Urbana",
"location": {
"postCode": "61801",
"region": "IL"
}
},
"email": "jiang4@cs.uiuc.edu"
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign Urbana",
"location": {
"postCode": "61801",
"region": "IL"
}
},
"email": "czhai@cs.uiuc.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Named Entity Recognition (NER) is a fundamental task in text mining and natural language understanding. Current approaches to NER (mostly based on supervised learning) perform well on domains similar to the training domain, but they tend to adapt poorly to slightly different domains. We present several strategies for exploiting the domain structure in the training data to learn a more robust named entity recognizer that can perform well on a new domain. First, we propose a simple yet effective way to automatically rank features based on their generalizabilities across domains. We then train a classifier with strong emphasis on the most generalizable features. This emphasis is imposed by putting a rank-based prior on a logistic regression model. We further propose a domain-aware cross validation strategy to help choose an appropriate parameter for the rank-based prior. We evaluated the proposed method with a task of recognizing named entities (genes) in biology text involving three species. The experiment results show that the new domainaware approach outperforms a state-ofthe-art baseline method in adapting to new domains, especially when there is a great difference between the new domain and the training domain.",
"pdf_parse": {
"paper_id": "N06-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Named Entity Recognition (NER) is a fundamental task in text mining and natural language understanding. Current approaches to NER (mostly based on supervised learning) perform well on domains similar to the training domain, but they tend to adapt poorly to slightly different domains. We present several strategies for exploiting the domain structure in the training data to learn a more robust named entity recognizer that can perform well on a new domain. First, we propose a simple yet effective way to automatically rank features based on their generalizabilities across domains. We then train a classifier with strong emphasis on the most generalizable features. This emphasis is imposed by putting a rank-based prior on a logistic regression model. We further propose a domain-aware cross validation strategy to help choose an appropriate parameter for the rank-based prior. We evaluated the proposed method with a task of recognizing named entities (genes) in biology text involving three species. The experiment results show that the new domainaware approach outperforms a state-ofthe-art baseline method in adapting to new domains, especially when there is a great difference between the new domain and the training domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named Entity Recognition (NER) is the task of identifying and classifying phrases that denote certain types of named entities (NEs), such as persons, organizations and locations in news articles, and genes, proteins and chemicals in biomedical literature. NER is a fundamental task in many natural language processing applications, such as question answering, machine translation, text mining, and information retrieval (Srihari and Li, 1999; Huang and Vogel, 2002) .",
"cite_spans": [
{
"start": 420,
"end": 442,
"text": "(Srihari and Li, 1999;",
"ref_id": "BIBREF14"
},
{
"start": 443,
"end": 465,
"text": "Huang and Vogel, 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing approaches to NER are mostly based on supervised learning. They can often achieve high accuracy provided that a large annotated training set similar to the test data is available (Borthwick, 1999; Zhou and Su, 2002; Florian et al., 2003; Klein et al., 2003; Finkel et al., 2005) . Unfortunately, when the test data has some difference from the training data, these approaches tend to not perform well. For example, Ciaramita and Altun (2005) reported a performance degradation of a named entity recognizer trained on CoNLL 2003 Reuters corpus, where the F1 measure dropped from 0.908 when tested on a similar Reuters set to 0.643 when tested on a Wall Street Journal set. The degradation can be expected to be worse if the training data and the test data are more different.",
"cite_spans": [
{
"start": 188,
"end": 205,
"text": "(Borthwick, 1999;",
"ref_id": "BIBREF2"
},
{
"start": 206,
"end": 224,
"text": "Zhou and Su, 2002;",
"ref_id": "BIBREF16"
},
{
"start": 225,
"end": 246,
"text": "Florian et al., 2003;",
"ref_id": "BIBREF7"
},
{
"start": 247,
"end": 266,
"text": "Klein et al., 2003;",
"ref_id": "BIBREF10"
},
{
"start": 267,
"end": 287,
"text": "Finkel et al., 2005)",
"ref_id": "BIBREF6"
},
{
"start": 424,
"end": 450,
"text": "Ciaramita and Altun (2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The performance degradation indicates that existing approaches adapt poorly to new domains. We believe one reason for this poor adaptability is that these approaches have not considered the fact that, depending on the genre or domain of the text, the entities to be recognized may have different mor-phological properties or occur in different contexts. Indeed, since most existing learning-based NER approaches explore a large feature space, without regularization, a learned NE recognizer can easily overfit the training domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Domain overfitting is a serious problem in NER because we often need to tag entities in completely new domains. Given any new test domain, it is generally quite expensive to obtain a large amount of labeled entity examples in that domain. As a result, in many real applications, we must train on data that do not fully resemble the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This problem is especially serious in recognizing entities, in particular gene names, from biomedical literature. Gene names of one species can be quite different from those of another species syntactically due to their different naming conventions. For example, some biological species such as yeast use symbolic gene names like tL(CAA)G3, while some other species such as fly use descriptive gene names like wingless.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present several strategies for exploiting the domain structure in the training data to learn a more robust named entity recognizer that can perform well on a new domain. Our work is motivated by the fact that in many real applications, the training data available to us naturally falls into several domains that are similar in some aspects but different in others. For example, in biomedical literature, the training data can be naturally grouped by the biological species being discussed, while for news articles, the training data can be divided by the genre, the time, or the news agency of the articles. Our main idea is to exploit such domain structure in the training data to identify generalizable features which, presumably, are more useful for recognizing named entities in a new domain. Indeed, named entities across different domains often share certain common features, and it is these common features that are suitable for adaptation to new domains; features that only work for a particular domain would not be as useful as those working for multiple domains. In biomedical literature, for example, surrounding words such as expression and encode are strong indicators of gene mentions, regardless of the specific biological species being discussed, whereas species-specific name characteristics (e.g., prefix = \"-less\") would clearly not gener-alize well, and may even hurt the performance on a new domain. Similarly, in news articles, the part-ofspeeches of surrounding words such as \"followed by a verb\" are more generalizable indicators of name mentions than capitalization, which might be misleading if the genre of the new domain is different; an extreme case is when every letter in the new domain is capitalized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on these intuitions, we regard a feature as generalizable if it is useful for NER in all training domains, and propose a generalizability-based feature ranking method, in which we first rank the features within each training domain, and then combine the rankings to promote the features that are ranked high in all domains. We further propose a rankbased prior on logistic regression models, which puts more emphasis on the more generalizable features during the learning stage in a principled way. Finally, we present a domain-aware validation strategy for setting an appropriate parameter value for the rank-based prior. We evaluated our method on a biomedical literature data set with annotated gene names from three species, fly, mouse, and yeast, by treating one species as the new domain and the other two as the training domains. The experiment results show that the proposed method outperforms a baseline method that represents the state-of-the-art NER techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: In Section 2, we introduce a feature ranking method based on the generalizability of features across domains. In Section 3, we briefly introduce the logistic regression models for NER. We then propose a rankbased prior on logistic regression models and describe the domain-aware validation strategy in Section 4. The experiment results are presented in Section 5. Finally we discuss related work in Section 6 and conclude our work in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We take a commonly used approach and treat NER as a sequential tagging problem (Borthwick, 1999; Zhou and Su, 2002; Finkel et al., 2005) . Each token is assigned the tag I if it is part of an NE and the tag O otherwise. Let x denote the feature vector for a token, and let y denote the tag for x. We first compute the probability p(y|x) for each token, using a learned classifier. We then apply Viterbi algorithm to assign the most likely tag sequence to a sequence of tokens, i.e., a sentence. The features we use follow the common practice in NER, including surface word features, orthographic features, POS tags, substrings, and contextual features in a local window of size 5 around the target token (Finkel et al., 2005) .",
"cite_spans": [
{
"start": 79,
"end": 96,
"text": "(Borthwick, 1999;",
"ref_id": "BIBREF2"
},
{
"start": 97,
"end": 115,
"text": "Zhou and Su, 2002;",
"ref_id": "BIBREF16"
},
{
"start": 116,
"end": 136,
"text": "Finkel et al., 2005)",
"ref_id": "BIBREF6"
},
{
"start": 704,
"end": 725,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
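As a concrete illustration of the tagging scheme above, the sketch below runs Viterbi decoding over the two tags I and O. It assumes the per-token probabilities p(y|x) come from an already-trained classifier; the function name, data layout, and toy transition table are our own, not from the paper.

```python
import math

def viterbi(token_probs, trans):
    """Most likely I/O tag sequence for one sentence.

    token_probs: list of dicts {tag: p(tag|x)} from a classifier (all > 0).
    trans: dict mapping (prev_tag, cur_tag) -> transition probability.
    """
    tags = ["I", "O"]
    # best[i][tag] = (log score of best path ending at token i in tag, backpointer)
    best = [{t: (math.log(token_probs[0][t]), None) for t in tags}]
    for i in range(1, len(token_probs)):
        row = {}
        for cur in tags:
            score, prev = max(
                (best[i - 1][p][0] + math.log(trans[(p, cur)])
                 + math.log(token_probs[i][cur]), p)
                for p in tags
            )
            row[cur] = (score, prev)
        best.append(row)
    # Backtrack from the best final tag.
    tag = max(tags, key=lambda t: best[-1][t][0])
    path = [tag]
    for i in range(len(token_probs) - 1, 0, -1):
        tag = best[i][tag][1]
        path.append(tag)
    return path[::-1]
```

A real system would add transition smoothing and handle the classifier's feature extraction; this only shows the decoding step.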
{
"text": "As in any learning problem, feature selection may affect the NER performance significantly. Indeed, a very likely cause of the domain overfitting problem may be that the learned NE recognizer has picked up some non-generalizable features, which are not useful for a new domain. Below, we present a generalizability-based feature ranking method, which favors more generalizable features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
{
"text": "Formally, we assume that the training examples are divided into",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
{
"text": "m subsets T 1 , T 2 , . . . , T m , corre- sponding to m different domains D 1 , D 2 , . . . , D m .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
{
"text": "We further assume that the test set T m+1 is from a new domain D m+1 , and this new domain shares some common features of the m training domains. Note that these are reasonable assumptions that reflect the situation in real problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
{
"text": "We use generalizability to denote the amount of contribution a feature can make to the classification accuracy on any domain. Thus, a feature with high generalizability should be useful for classification on any domain. To identify the highly generalizable features, we must then compare their contributions to classification among different domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
{
"text": "Suppose in each individual domain, the features can be ranked by their contributions to the classification accuracy. There are different feature ranking methods based on different criteria. Without loss of generality, let us use r T : F \u2192 {1, 2, . . . , |F |} to denote a ranking function that maps a feature f \u2208 F to a rank r T (f ) based on a set of training examples T , where F is the set of all features, and the rank denotes the position of the feature in the final ranked list. The smaller the rank r T (f ) is, the more important the feature f is in the training set T . For the m training domains, we thus have m ranking functions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
{
"text": "r T 1 , r T 2 , . . . , r Tm .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
{
"text": "To identify the generalizable features across the m different domains, we propose to combine the m individual domain ranking functions in the following way. The idea is to give high ranks to features that are useful in all training domains . To achieve this goal, we first define a scoring function s : F \u2192 R as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(f ) = m min i=1 1 r T i (f ) .",
"eq_num": "(1)"
}
],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
{
"text": "We then rank the features in decreasing order of their scores using the above scoring function. This is essentially to rank features according to their maximum rank max i r T i (f ) among the m domains. Let function r gen return the rank of a feature in this combined, generalizability-based ranked list. The original ranking function r T used for individual domain feature ranking can use different criteria such as information gain or \u03c7 2 statistic (Yang and Pedersen, 1997) . In our experiments, we used a ranking function based on the model parameters of the classifier, which we will explain in Section 5.2.",
"cite_spans": [
{
"start": 451,
"end": 476,
"text": "(Yang and Pedersen, 1997)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
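The combination of per-domain rankings via s(f) = min_i 1/r_{T_i}(f) described above can be sketched as follows. This is a minimal illustration with a hypothetical function name; it assumes every domain ranks the same feature set, and breaks ties by input order.

```python
def combine_rankings(domain_ranks):
    """Generalizability-based combined ranking r_gen.

    domain_ranks: list of dicts mapping feature -> rank (1 = best) in one
    domain; all dicts are assumed to cover the same feature set F.
    Returns a dict feature -> combined rank (1 = most generalizable).
    """
    features = list(domain_ranks[0].keys())
    # s(f) = min over domains of 1 / r_{T_i}(f); equivalently, features
    # are ordered by increasing maximum per-domain rank.
    scores = {f: min(1.0 / r[f] for r in domain_ranks) for f in features}
    ranked = sorted(features, key=lambda f: -scores[f])
    return {f: i + 1 for i, f in enumerate(ranked)}
```

A feature ranked first in one domain but last in another thus ends up near the bottom, which is exactly the intended penalty for domain-specific features.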
{
"text": "Next, we need to incorporate this preference for generalizable features into the classifier. Note that because this generalizability-based feature ranking method is independent of the learning algorithm, it can be applied on top of any classifier. In this work, we choose the logistic regression classifier. One way to incorporate the feature ranking into the classifier is to select the top-k features, where k is chosen by cross validation. There are two potential problems with this hard feature selection approach. First, once k features are selected, they are treated equally during the learning stage, resulting in a loss of the preference among these k features. Second, this incremental feature selection approach does not consider the correlation among features. We propose an alternative way to incorporate the feature ranking into the classifier, where the preference for generalizable features is transformed into a non-uniform prior over the feature parameters in the model. This can be regarded as a soft feature selection approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability-Based Feature Ranking",
"sec_num": "2"
},
{
"text": "In binary logistic regression models, the probability of an observation x being classified as I is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression for NER",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(I|x, \u03b2) = exp(\u03b2 0 + |F | i=1 \u03b2 i x i ) 1 + exp(\u03b2 0 + |F | i=1 \u03b2 i x i ) (2) = exp(\u03b2 \u2022 x ) 1 + exp(\u03b2 \u2022 x ) ,",
"eq_num": "(3)"
}
],
"section": "Logistic Regression for NER",
"sec_num": "3"
},
{
"text": "where \u03b2 0 is the bias weight, \u03b2 i (1 \u2264 i \u2264 |F |) are the weights for the features, and x is the augmented feature vector with x 0 = 1. The weight vector \u03b2 can be learned from the training examples by a maximum likelihood estimator. It is worth pointing out that logistic regression has a close relation with maximum entropy models. Indeed, when the features in a maximum entropy model are defined as conjunctions of a feature on observations only and a Kronecker delta of a class label, which is a common practice in NER, the maximum entropy model is equivalent to a logistic regression model (Finkel et al., 2005) . Thus the logistic regression method we use for NER is essentially the same as the maximum entropy models used for NER in previous work.",
"cite_spans": [
{
"start": 593,
"end": 614,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression for NER",
"sec_num": "3"
},
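Equations (2)-(3) above amount to a sigmoid applied to the bias plus the weighted feature sum. A minimal sketch (the function name and explicit bias argument are ours):

```python
import math

def p_of_I(x, beta0, beta):
    """p(I | x, beta) for binary logistic regression.

    x: feature values; beta0: bias weight; beta: per-feature weights.
    exp(z) / (1 + exp(z)) is computed as 1 / (1 + exp(-z)), the same
    quantity in a numerically friendlier form.
    """
    z = beta0 + sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))
```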
{
"text": "To avoid overfitting, a zero mean Gaussian prior on the weights is usually used (Chen and Rosenfeld, 1999; Bender et al., 2003) , and a maximum a posterior (MAP) estimator is used to maximize the posterior probability:",
"cite_spans": [
{
"start": 80,
"end": 106,
"text": "(Chen and Rosenfeld, 1999;",
"ref_id": "BIBREF3"
},
{
"start": 107,
"end": 127,
"text": "Bender et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression for NER",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b2 = arg max \u03b2 p(\u03b2) N j=1 p(y j |x j , \u03b2),",
"eq_num": "(4)"
}
],
"section": "Logistic Regression for NER",
"sec_num": "3"
},
{
"text": "where y j is the true class label for x j , N is the number of training examples, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression for NER",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(\u03b2) = |F | i=1 1 2\u03c0\u03c3 2 i exp(\u2212 \u03b2 2 i 2\u03c3 2 i ).",
"eq_num": "(5)"
}
],
"section": "Logistic Regression for NER",
"sec_num": "3"
},
{
"text": "In previous work, \u03c3 i are set uniformly to the same value for all features, because there is in general no additional prior knowledge about the features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression for NER",
"sec_num": "3"
},
{
"text": "Instead of using the same \u03c3 i for all features, we propose a rank-based non-uniform Gaussian prior on the weights of the features so that more generalizable features get higher prior variances (i.e., low prior strength) and features on the bottom of the list get low prior variances (i.e., high prior strength).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank-Based Prior",
"sec_num": "4"
},
{
"text": "Since the prior has a zero mean, such a prior would force features on the bottom of the ranked list, which have the least generalizability, to have near-zero weights, but allow more generalizable features to be assigned higher weights during the training stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank-Based Prior",
"sec_num": "4"
},
{
"text": "We need to find a transformation function h : {1, 2, . . . , |F |} \u2192 R + so that we can set \u03c3 2 i = h(r gen (f i )), where r gen (f i ) is the rank of feature f i in the generalizability-based ranked feature list, as defined in Section 2. We choose the following h function because it has the desired properties as described above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation Function",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h(r) = a r 1/b ,",
"eq_num": "(6)"
}
],
"section": "Transformation Function",
"sec_num": "4.1"
},
{
"text": "where a and b (a, b > 0) are parameters that control the degree of the confidence in the generalizabilitybased ranked feature list. Note that a corresponds to the prior variance assigned to the top-most feature in the ranked list. When b is small, the prior variance drops rapidly as the rank r increases, giving only a small number of top features high prior variances. When b is larger, there will be less discrimination among the features. When b approaches infinity, the prior becomes a uniform prior with the variance set to a for all features. If we set a small threshold \u03c4 on the variance, then we can derive that at least m = a \u03c4 b features have a prior variance greater than \u03c4 . Thus b is proportional to the logarithm of the number of features that are assigned a variance greater than the threshold \u03c4 when a is fixed. Figure 1 shows the h function when a is set to 20 and b is set to a set of different values. ",
"cite_spans": [],
"ref_spans": [
{
"start": 829,
"end": 837,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Transformation Function",
"sec_num": "4.1"
},
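The transformation h(r) = a / r^(1/b) can be written down directly; this sketch (hypothetical function name) shows how the prior variance decays with rank:

```python
def h(r, a, b):
    """Prior variance for the feature ranked r (1 = most generalizable).

    a is the variance given to the top-ranked feature; a small b makes the
    variance drop quickly with rank, concentrating weight on a few
    generalizable features, while a large b flattens the prior.
    """
    return a / (r ** (1.0 / b))
```

With a = 20 and b = 2, for instance, the variance halves between rank 1 and rank 4, matching the threshold analysis above: exactly (a/τ)^b features sit at or above variance τ.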
{
"text": "We need to set the appropriate values for the parameters a and b. For parameter a, we use the following simple strategy to obtain an estimation. We first train a logistic regression model on all the training data using a Gaussian prior with a fixed variance (set to 1 in our experiments). We then find the maximum weight",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setting using Domain-Aware Validation",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b2 max = |F | max i=1 |\u03b2 i |",
"eq_num": "(7)"
}
],
"section": "Parameter Setting using Domain-Aware Validation",
"sec_num": "4.2"
},
{
"text": "in this trained model. Finally we set a = \u03b2 2 max . Our reasoning is that since a is the variance of the prior for the best feature, a is related to the \"permissible range\" of \u03b2 for the best feature, and \u03b2 max gives us a way for adjusting a according to the empirical range of \u03b2 i 's.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setting using Domain-Aware Validation",
"sec_num": "4.2"
},
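The estimate a = \u03b2_max^2 from a model trained with a fixed-variance prior can be sketched as follows (the helper name is hypothetical; the trained weights are assumed given):

```python
def estimate_a(weights):
    """Set a to the squared largest absolute feature weight, taken from a
    logistic regression model trained with a fixed-variance Gaussian prior."""
    return max(abs(w) for w in weights) ** 2
```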
{
"text": "As we pointed out in Section 4.1, when a is fixed, parameter b controls the number of top features that are given a relatively high prior variance, and hence implicitly controls the number of top features to choose for the classifier to put the most weights on. To select an appropriate value of b, we can use a held-out validation set to tune the parameter value b. Here we present a validation strategy that exploits the domain structure in the training data to set the parameter b for a new domain. Note that in regular validation, both the training set and the validation set contain examples from all training domains. As a result, the average performance on the validation set may be dominated by domains in which the NEs are easy to classify. Since our goal is to build a classifier that performs well on new domains, we should pay more attention to hard domains that have lower classification accuracy. We should therefore examine the performance of the classifier on each training domain individually in the validation stage to gain an insight into the appropriate value of b for a new domain, which has an equal chance of being similar to any of the training domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setting using Domain-Aware Validation",
"sec_num": "4.2"
},
{
"text": "Our domain-aware validation strategy first finds the optimal value of b for each training domain. For each subset T i of the training data belonging to domain D i , we divide it into a training set T t i and a validation set T v i . Then for each domain D i , we train a classifier on the training sets of all domains, that is, we train on m j=1 T t j . We then test the classifier on",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setting using Domain-Aware Validation",
"sec_num": "4.2"
},
{
"text": "T_i^v. We try a set of different values of b with a fixed value of a, and choose the optimal b that gives the best performance on T_i^v. Let this optimal value of b for domain D_i be b_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setting using Domain-Aware Validation",
"sec_num": "4.2"
},
{
"text": "Given b i (1 \u2264 i \u2264 m), we can choose an appropriate value of b m+1 for an unknown test domain D m+1 based on the assumption that D m+1 is a mixture of all the training domains. b m+1 is then chosen to be a weighted average of b i , (1 \u2264 i \u2264 m):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setting using Domain-Aware Validation",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "b m+1 = m i=1 \u03bb i b i ,",
"eq_num": "(8)"
}
],
"section": "Parameter Setting using Domain-Aware Validation",
"sec_num": "4.2"
},
{
"text": "where \u03bb i indicates how similar D m+1 is to D i . In many cases, the test domain D m+1 is completely unknown. In this case, the best we can do is to set \u03bb i = 1/m for all i, that is, to assume that D m+1 is an even mixture of all training domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setting using Domain-Aware Validation",
"sec_num": "4.2"
},
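Equation (8) is a weighted average of the per-domain optima. A minimal sketch (hypothetical function name), defaulting to the even mixture \u03bb_i = 1/m when nothing is known about the test domain:

```python
def choose_b(b_per_domain, weights=None):
    """b for a new domain as a weighted average of per-domain optimal b_i.

    weights: similarity weights lambda_i summing to 1; if None, assume the
    new domain is an even mixture of the training domains.
    """
    m = len(b_per_domain)
    if weights is None:
        weights = [1.0 / m] * m
    return sum(w * b for w, b in zip(weights, b_per_domain))
```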
{
"text": "5 Empirical Evaluation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setting using Domain-Aware Validation",
"sec_num": "4.2"
},
{
"text": "We evaluated our domain-aware approach to NER on the problem of gene recognition in biomedical literature. The data we used is from BioCreAtIvE Task 1B (Hirschman et al., 2005) . We chose this data set because it contains three subsets of MED-LINE abstracts with gene names from three species (fly, mouse, and yeast), while no other existing annotated NER data set has such explicit domain structure. The original BioCreAtIvE 1B data was not provided with every gene annotated, but for each abstract, a list of genes that were mentioned in the abstract was given. A gene synonym list was also given for each species. We used a simple string matching method with slight relaxation to tag the gene mentions in the abstracts. We took 7500 sentences from each species for our experiments, where half of the sentences contain gene mentions. We further split the 7500 sentences of each species into two sets, 5000 for training and 2500 for testing. We conducted three sets of experiments, each combining the 5000-sentence training data of two species as training data, and the 2500-sentence test data of the third species as test data. The 2500sentence test data of the training species was used for validation. We call these three sets of experiments F+M\u21d2Y, F+Y\u21d2M, and M+Y\u21d2F.",
"cite_spans": [
{
"start": 152,
"end": 176,
"text": "(Hirschman et al., 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "we use FEX 1 for feature extraction and BBR 2 for logistic regression in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "Because the data set was generated by our automatic tagging procedure using the given gene lists, there is no previously reported performance on this data set for us to compare with. Therefore, to see whether using the domain structure in the training data can really help the adaptation to new domains, we compared our method with a state-of-the-art baseline method based on logistic regression. It uses a Gaussian prior with zero mean and uniform variance on all model parameters. It also employs 5-fold regular cross validation to pick the optimal variance for the prior. Regular feature selection is also considered in the baseline method, where the features are first ranked according to some criterion, and then cross validation is used to select the top-k features. We tested three popular regular feature ranking methods: feature frequency (F), information gain (IG), and \u03c7 2 statistic (CHI). These methods were discussed in (Yang and Pedersen, 1997) . However, with any of the three feature ranking criteria, cross validation showed that selecting all features gave the best average validation performance. Therefore, the best baseline method which we compare our method with uses all features. We call the baseline method BL.",
"cite_spans": [
{
"start": 933,
"end": 958,
"text": "(Yang and Pedersen, 1997)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Baseline Method",
"sec_num": "5.2"
},
{
"text": "In our method, the generalizability-based feature ranking requires a first step of feature ranking within each training domain. While we could also use F, IG or CHI to rank features in each domain, to make our method self-contained, we used the following strategy. We first train a logistic regression model on each domain using a zero-mean Gaussian prior with variance set to 1. Then, features are ranked in decreasing order of the absolute values of their weights. The rationale is that, in general, features with higher weights in the logistic regression model are more important. With this ranking within each training domain, we then use the generalizabilitybased feature ranking method to combine the m domain-specific rankings. The obtained ranked feature list is used to construct the rank-based prior, where the parameters a and b are set in the way as discussed in Section 4.2. We call our method DOM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Baseline Method",
"sec_num": "5.2"
},
{
"text": "In Table 1 , we show the precision, recall, and F1 measures of our domain-aware method (DOM) and the baseline method (BL) in all three sets of experiments. We see that the domain-aware method out-performs the baseline method in all three cases when F1 is used as the primary performance measure. In F+Y\u21d2M and M+Y\u21d2F, both precision and recall are also improved over the baseline method. Note that the absolute performance shown in Table 1 is lower than the state-of-the-art performance of gene recognition (Finkel et al., 2005 ). 3 One reason is that we explicitly excluded the test domain from the training data, while most previous work on gene recognition was conducted on a test set drawn from the same collection as the training data. Another reason is that we used simple string matching to generate the data set, which introduced noise to the data because gene names often have irregular lexical variants. To further understand how our method improved the performance, we compared the generalizabilitybased feature ranking method with the three regular feature ranking methods, F, IG, and CHI, that were used in the baseline method. To make fair comparison, for the regular feature ranking methods, we also used the rank-based prior transformation as described in Section 4 to incorporate the preference for top-ranked features. Figure 2 , Figure 3 and Figure 4 show the performance of different feature ranking methods in the three sets of experiments as the parameter b for the rank-based prior changes. As we pointed out in Section 4, b is proportional to the logarithm of the number of \"effective features\".",
"cite_spans": [
{
"start": 505,
"end": 525,
"text": "(Finkel et al., 2005",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1335,
"end": 1343,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 1346,
"end": 1354,
"text": "Figure 3",
"ref_id": "FIGREF4"
},
{
"start": 1359,
"end": 1367,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Comparison with Baseline Method",
"sec_num": "5.2"
},
{
"text": "From the figures, we clearly see that the curve for the generalizability-based ranking method DOM is always above the curves of the other methods, indicating that when the same amount of top features are being emphasized by the prior, the features selected by DOM give better performance on a new domain than the features selected by the other methods. This suggests that the top-ranked features in DOM are indeed more suitable for adaptation to new domains than the top features ranked by the other methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with",
"sec_num": "5.3"
},
{
"text": "The figures also show that the ranking method DOM achieved better performance than the baseline over a wide range of b values, especially in F+Y\u21d2M and M+Y\u21d2F, whereas for methods F, IG and CHI, the performance quickly converged to the baseline performance as b increased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with",
"sec_num": "5.3"
},
{
"text": "It is interesting to note the comparison between F and IG (or CHI). In general, when the test data is similar to the training data, IG (or CHI) is advantageous over F (Yang and Pedersen, 1997) . However, in this case when the test domain is different from the training domains, F shows advantages for adaptation. A possible explanation is that frequent features are in general less likely to be domain-specific, and therefore feature frequency can also be used as a criterion to select generalizable features and to filter out domain-specific features, although it is still not as effective as the method we proposed.",
"cite_spans": [
{
"start": 167,
"end": 192,
"text": "(Yang and Pedersen, 1997)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with",
"sec_num": "5.3"
},
{
"text": "The NER problem has been extensively studied in the NLP community. Most existing work has focused on supervised learning approaches, employing models such as HMMs (Zhou and Su, 2002) , MEMMs (Bender et al., 2003; Finkel et al., 2005) , and CRFs (McCallum and Li, 2003) . Collins and Singer (1999) proposed an unsupervised method for named entity classification based on the idea of cotraining. Ando and Zhang (2005) proposed a semisupervised learning method to exploit unlabeled data for building more robust NER systems. In all these studies, the evaluation is conducted on unlabeled data similar to the labeled data.",
"cite_spans": [
{
"start": 163,
"end": 182,
"text": "(Zhou and Su, 2002)",
"ref_id": "BIBREF16"
},
{
"start": 191,
"end": 212,
"text": "(Bender et al., 2003;",
"ref_id": "BIBREF1"
},
{
"start": 213,
"end": 233,
"text": "Finkel et al., 2005)",
"ref_id": "BIBREF6"
},
{
"start": 245,
"end": 268,
"text": "(McCallum and Li, 2003)",
"ref_id": "BIBREF11"
},
{
"start": 271,
"end": 296,
"text": "Collins and Singer (1999)",
"ref_id": "BIBREF5"
},
{
"start": 394,
"end": 415,
"text": "Ando and Zhang (2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Recently there have been some studies on adapting NER systems to new domains employing techniques such as active learning and semi-supervised learning (Shen et al., 2004; Mohit and Hwa, 2005) , or incorporating external lexical knowledge (Ciaramita and Altun, 2005) . However, there has not been any study on exploiting the domain structure contained in the training examples themselves to build generalizable NER systems. We focus on the domain structure in the training data to build a classifier that relies more on features generalizable across different domains to avoid overfitting the training domains. As our method is orthogonal to most of the aforementioned work, they can be combined to further improve the performance.",
"cite_spans": [
{
"start": 151,
"end": 170,
"text": "(Shen et al., 2004;",
"ref_id": "BIBREF13"
},
{
"start": 171,
"end": 191,
"text": "Mohit and Hwa, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 238,
"end": 265,
"text": "(Ciaramita and Altun, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Named entity recognition is an important problem that can help many text mining and natural language processing tasks such as information extraction and question answering. Currently NER faces a poor domain adaptability problem when the test data is not from the same domain as the training data. We present several strategies to exploit the domain structure in the training data to improve the performance of the learned NER classifier on a new domain. Our results show that the domain-aware strategies we proposed improved the performance over a baseline method that represents the state-ofthe-art NER techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "http://l2r.cs.uiuc.edu/ cogcomp/asoftware.php?skey=FEX 2 http://www.stat.rutgers.edu/ madigan/BBR/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our baseline method performed comparably to the state-ofthe-art systems on the standard BioCreAtIvE 1A data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was in part supported by the National Science Foundation under award numbers 0425852, 0347933, and 0428472. We would like to thank Bruce Schatz, Xin He, Qiaozhu Mei, Xu Ling, and some other BeeSpace project members for useful discussions. We would like to thank Mark Sammons for his help with FEX. We would also like to thank the anonymous reviewers for their comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A highperformance semi-supervised learning method for text chunking",
"authors": [
{
"first": "Rie",
"middle": [
"Kubota"
],
"last": "Ando",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL-2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Kubota Ando and Tong Zhang. 2005. A high- performance semi-supervised learning method for text chunking. In Proceedings of ACL-2005.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Maximum entropy models for named entity recognition",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of CoNLL-2003",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Bender, Franz Josef Och, and Hermann Ney. 2003. Maximum entropy models for named entity recognition. In Proceedings of CoNLL-2003.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Maximum Entropy Approach to Named Entity Recognition",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Borthwick",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Borthwick. 1999. A Maximum Entropy Ap- proach to Named Entity Recognition. Ph.D. thesis, New York University.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Gaussian prior for smoothing maximum entropy models",
"authors": [
{
"first": "Stanley",
"middle": [
"F."
],
"last": "Chen",
"suffix": ""
},
{
"first": "Ronald",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F. Chen and Ronald Rosenfeld. 1999. A Gaus- sian prior for smoothing maximum entropy models. Technical Report CMU-CS-99-108, School of Com- puter Science, Carnegie Mellon University.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Named-entity recognition in novel domains with external lexical knowledge",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Yasemin",
"middle": [],
"last": "Altun",
"suffix": ""
}
],
"year": 2005,
"venue": "Workshop on Advances in Structured Learning for Text and Speech Processing (NIPS-2005)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimiliano Ciaramita and Yasemin Altun. 2005. Named-entity recognition in novel domains with ex- ternal lexical knowledge. In Workshop on Advances in Structured Learning for Text and Speech Processing (NIPS-2005).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised models for named entity classification",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of EMNLP/VLC-1999",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Yoram Singer. 1999. Unsupervised models for named entity classification. In Proceedings of EMNLP/VLC-1999.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Exploring the boundaries: Gene and protein identification in biomedical text",
"authors": [
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Shipra",
"middle": [],
"last": "Dingare",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Alex",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
}
],
"year": 2005,
"venue": "BMC Bioinformatics",
"volume": "",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Finkel, Shipra Dingare, Christopher D. Manning, Malvina Nissim, Beatrice Alex, and Claire Grover. 2005. Exploring the boundaries: Gene and protein identification in biomedical text. BMC Bioinformat- ics, 6(Suppl 1):S5.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Named entity recognition through classifier combination",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of CoNLL-2003",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recognition through clas- sifier combination. In Proceedings of CoNLL-2003.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Overview of BioCreAtIvE task 1B: normailized gene lists",
"authors": [
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Colosimo",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Morgan",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yeh",
"suffix": ""
}
],
"year": 2005,
"venue": "BMC Bioinformatics",
"volume": "",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynette Hirschman, Marc Colosimo, Alexander Morgan, and Alexander Yeh. 2005. Overview of BioCreAtIvE task 1B: normailized gene lists. BMC Bioinformatics, 6(Suppl 1):S11.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improved named entity translation and bilingual named entity extraction",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Huang and Stephan Vogel. 2002. Improved named entity translation and bilingual named entity extrac- tion. In ICMI-2002.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Named entity recognition with character-level models",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Smarr",
"suffix": ""
},
{
"first": "Huy",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of CoNLL-2003",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein, Joseph Smarr, Huy Nguyen, and Christo- pher D. Manning. 2003. Named entity recogni- tion with character-level models. In Proceedings of CoNLL-2003.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of CoNLL-2003",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew McCallum and Wei Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proceedings of CoNLL-2003.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Syntax-based semi-supervised named entity tagging",
"authors": [
{
"first": "Behrang",
"middle": [],
"last": "Mohit",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL-2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Behrang Mohit and Rebecca Hwa. 2005. Syntax-based semi-supervised named entity tagging. In Proceedings of ACL-2005.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multi-criteria-based active learning for named entity recognition",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chew-Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Shen, Jie Zhang, Jian Su, Guodong Zhou, and Chew- Lim Tan. 2004. Multi-criteria-based active learning for named entity recognition. In Proceedings of ACL- 2004.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Information extraction supported question answering",
"authors": [
{
"first": "Rohini",
"middle": [],
"last": "Srihari",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 1999,
"venue": "TREC-8",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohini Srihari and Wei Li. 1999. Information extraction supported question answering. In TREC-8.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A comparative study on feature selection in text categorization",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"O"
],
"last": "Pedersen",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ICML-1997",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiming Yang and Jan O. Pedersen. 1997. A comparative study on feature selection in text categorization. In Proceedings of ICML-1997.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Named entity recognition using an HMM-based chunk tagger",
"authors": [
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guodong Zhou and Jian Su. 2002. Named entity recog- nition using an HMM-based chunk tagger. In Proceed- ings of ACL-2002.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Transformation Function h(r) = 20 r 1/b",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Comparison between regular feature ranking and generalizability-based feature ranking on F+M\u21d2Y",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "Comparison between regular feature ranking and generalizability-based feature ranking on F+Y\u21d2M",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "Comparison between regular feature ranking and generalizability-based feature ranking on M+Y\u21d2F",
"num": null
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Comparison of the domain-aware method</td></tr><tr><td>and the baseline method, where in the domain-aware method, b = 0.5b 1 + 0.5b 2</td></tr></table>"
}
}
}
}