{
"paper_id": "I05-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:25:03.259476Z"
},
"title": "Mining Inter-Entity Semantic Relations Using Improved Transductive Learning",
"authors": [
{
"first": "Zhu",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Michigan",
"location": {
"postCode": "48105",
"settlement": "Ann Arbor",
"region": "MI",
"country": "U.S.A"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper studies the problem of mining relational data hidden in natural language text. In particular, it approaches the relation classification problem with the strategy of transductive learning. Different algorithms are presented and empirically evaluated on the ACE corpus. We show that transductive learners exploiting various lexical and syntactic features can achieve promising classification performance. More importantly, transductive learning performance can be significantly improved by using an induced similarity function.",
"pdf_parse": {
"paper_id": "I05-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper studies the problem of mining relational data hidden in natural language text. In particular, it approaches the relation classification problem with the strategy of transductive learning. Different algorithms are presented and empirically evaluated on the ACE corpus. We show that transductive learners exploiting various lexical and syntactic features can achieve promising classification performance. More importantly, transductive learning performance can be significantly improved by using an induced similarity function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The world today is full of various information sources, with different ways of representing the same information. One common problem that arises in the data management community is that data existing in one format may be needed in a different format for another purpose. An instance of this general problem is that relational data don't always exist in the form of relational tables; lots of them are hidden in natural language text. For example, (author, book) pairs can be instantiated as \". . . Shakespeare's famous work Hamlet . . . \" or \". . . A Brief History of Time was written by Stephen Hawking . . . \" in text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, within the information retrieval and natural language processing community, Information Extraction (IE) systems are understood as techniques for automatically extracting information from text, specifically, identifying relevant information (usually of pre-defined types) from text documents in a certain domain. Once extracted, the information can be used for purposes such as database population and text indexing. While significant progress has been made in IE research, stimulated in particular by the Message Understanding Conferences (MUC) 1 and the recent ACE (Automatic Content Extraction) program 2 organized by the LDC (Linguistic Data Consortium), it is generally agreed that many barriers exist to the wider use of IE technologies due to the difficulties in adapting systems to new applications and domains. Keeping track of dynamic information sources (e.g., web pages) is challenging as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address these challenges, there has been a recent trend shift in the research community from knowledge-based approaches to machine learning techniques. Moreover, due to the cost related to acquiring large amount of labeled training data, researchers have been looking at various learning algorithms exploiting cheaply available unlabeled data (usually in much larger amounts), which aim at minimizing the need for labeled data while still achieving comparable results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "According to the scope of ACE, current IE research has three main objectives: Entity Detection and Tracking (EDT), Relation Detection and Characterization (RDC), and Event Detection and Characterization (EDC). This study focuses on the second subproblem, RDC. In particular, the goal is to automatically classify binary relations between entities, i.e., to decide in which relational table to put each entity pair, using transductive learning algorithms. We propose an improved transductive learner and empirically compare it with the baseline learner on the ACE corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The current paper draws upon previous work in NLP and machine learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Within the realm of information extraction, there are several representative systems that use machine learning for extracting relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction and Classification",
"sec_num": "2.1"
},
{
"text": "Snowball [1] is a bootstrapping-based system that requires only a handful of training examples of tuples of interest. These examples are used to generate extraction patterns, which in turn result in new tuples being extracted from the document collection. At each iteration of the extraction process, Snowball evaluates the quality of these patterns and tuples without human intervention, and keeps only the most reliable ones for the next iteration. A scalable evaluation methodology is also developed for the task. The approach was illustrated on the problem of extracting (organization, headquarter location) pairs from a collection of more than 300, 000 newspaper documents. DIPRE (Dual Iterative Pattern Relation Expansion) [2] is another technique that exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. The technique was used to extract (author, title) pairs from the World Wide Web.",
"cite_spans": [
{
"start": 9,
"end": 12,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 729,
"end": 732,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction and Classification",
"sec_num": "2.1"
},
{
"text": "In [3] , an application of kernel methods to extracting relations from natural language text is presented. The authors introduce kernels defined over shallow parse representations of text, and design efficient algorithms for computing the kernels. The devised kernels are used in conjunction with SVM and Voted Perceptron learning algorithms for the task of extracting person-affiliation and organization-location relations from text. The proposed methods are compared with feature-based learning algorithms, with promising results.",
"cite_spans": [
{
"start": 3,
"end": 6,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction and Classification",
"sec_num": "2.1"
},
{
"text": "More recently, Zhang [4] investigates the relation classification problem by bootstrapping from a small amount of labeled data. Bootstrapping procedures are built on top of SVM classifiers and evaluated on the ACE corpus.",
"cite_spans": [
{
"start": 21,
"end": 24,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction and Classification",
"sec_num": "2.1"
},
{
"text": "Rosario and Hearst [5] examine the problem of distinguishing among seven relation types that can occur between the entities \"treatment\" and \"disease\" in bioscience text, and the problem of identifying such entities. Five different generative graphical models and a neural network model using lexical, syntactic, and semantic features are compared. The authors find that the neural network helps achieve high classification accuracy.",
"cite_spans": [
{
"start": 19,
"end": 22,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction and Classification",
"sec_num": "2.1"
},
{
"text": "Almost all work above falls into the realm of \"inductive learning\", in the sense that a \"model\" is first induced from the labeled (training) data and then used to predict unseen data. The beauty of this approach is that once the classification function (model) is generalized (assuming a \"good\" generalization algorithm), it can be used for prediction independently of the labeled data on which it was trained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transductive Learning",
"sec_num": "2.2"
},
{
"text": "In many domains, including NLP, there is usually a large amount of unlabeled data but only limited amount of labeled training data. If a generalized model is preferred, one can still follow the inductive learning paradigm, which entails work such as bootstrapping [6] . On the other hand, we might encounter the following situation:",
"cite_spans": [
{
"start": 264,
"end": 267,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transductive Learning",
"sec_num": "2.2"
},
{
"text": "we are only concerned about performance on a particular pool of data, -and we don't care about generalizability, -and data points can be effectively queried/accessed If all the conditions above are true, the learner can observe the test data and potentially exploit structures in their distribution. In other words, there is really no difference between \"unlabeled data\" and \"test data\", and the research question is: \"given some labeled data and a large set of (unlabeled) test data, can properties of the entire data set be used to make predictions?\" This is the motivation behind transductive learning. The setting itself, specifically, transductive SVMs, was first introduced by Vapnik [7] , and then later refined by [8] and [9] . Other approaches are based on s \u2212 t cuts [10, 11] or multi-way cuts [12] . Joachims [13] presents Spectral Graph Transducer (SGT), which is a transductive version of the k nearest-neighbor classifier.",
"cite_spans": [
{
"start": 690,
"end": 693,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 722,
"end": 725,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 730,
"end": 733,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 777,
"end": 781,
"text": "[10,",
"ref_id": "BIBREF9"
},
{
"start": 782,
"end": 785,
"text": "11]",
"ref_id": "BIBREF10"
},
{
"start": 804,
"end": 808,
"text": "[12]",
"ref_id": "BIBREF11"
},
{
"start": 820,
"end": 824,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transductive Learning",
"sec_num": "2.2"
},
{
"text": "The research problem of this paper is classification of relations between entities. In other words, the task is to determine the appropriate relational table into which one should put a given pair of related entities. To be more precise, -We only focus on binary relations, i.e., ones between pairs of entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "-We only deal with intra-sentence explicit relations in this study. In other words, the (two) EDT mentions of the entity arguments of a relation must occur within a common syntactic construction, in this case a sentence. The relations also have to be \"explicit\" in the sense that they should have explicit textual support and don't require further reasoning based on understanding of the context's meaning. -We don't actually \"detect\" relations. Rather, the goal is to classify the type of relation between two entities (or, in other words, to put the entity pair into the correct relational table), given that they are known to be related.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "-It is also assumed that entity recognition already takes place beforehand, hence all entity-related information is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "We use the five high-level relations defined in ACE RDC Annotation Guidelines V3.6 as the target set of classes of the classification task (in other words, they define the five candidate relational tables into which the entity pairs will be dispatched). These are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "ROLE affiliation between people and organizations, facilities, and GPEs (Geo-Political Entities). This includes employment, office holder, ownership, founder, member, and nationality relationships, etc. PART part-whole relationships between organizations, facilities and GPEs. AT location of a Person, Organization, GPE, or Facility entity. For example, a person is at a Location, GPE or Facility if the context indicates that the person was, is or will be there. An Organization is in a Location/GPE if it has a branch there. NEAR indicates that an entity is explicitly near a location, but not actually in that location or part of that location. SOC personal or professional relationships between people, such as relative, associate, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "First, for each relation r in the list above, we learn the following classifier: \"Disney\" and \"ABC\" are the two \"ORGANIZATION\" entities, and they divide the whole sentence into three context windows (the pre-context before \"Disney\", the post-context after \"ABC\", and the mid-context between the two entities). With regard to the \"PART\" relation, the label is \"1\", and \"0\" for other relations. Then we combine the multiple binary classifiers and get a single classifier",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "C r : (c pr , e 1 , c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "C(c pr , e 1 , c m , e 2 , c pt ) = arg max ri C ri (c pr , e 1 , c m , e 2 , c pt )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "In the example above, a label \"PART\" is eventually assigned to the tuple.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
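The per-relation binary classifiers and their argmax combination described above can be sketched directly. A minimal illustration in Python; the `binary_classifiers` mapping and the `score` interface are hypothetical stand-ins for whatever learner is plugged in, not part of the paper:

```python
# Sketch: combine the five per-relation binary classifiers by picking the
# label whose classifier is most confident about the given instance.
RELATION_TYPES = ["ROLE", "PART", "AT", "NEAR", "SOC"]

def combine(binary_classifiers, instance):
    """binary_classifiers maps a relation type to an object with a
    score(instance) method returning a confidence for the positive label;
    instance encodes the five-tuple (c_pr, e1, c_m, e2, c_pt) as features."""
    return max(RELATION_TYPES, key=lambda r: binary_classifiers[r].score(instance))
```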
{
"text": "Assuming we have -Input (instance) space X and output (label) space Y -Labeled data set L and unlabeled data set U (as mentioned before, no distinction is made between \"unlabeled\" and \"test\" data in the transductive learning setting)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalization of Different Learning Paradigms",
"sec_num": "4.1"
},
{
"text": "One could distinguish three types of learning paradigms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalization of Different Learning Paradigms",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "-Induction (X L , Y L ) \u2192 f",
"eq_num": "(1)"
}
],
"section": "Formalization of Different Learning Paradigms",
"sec_num": "4.1"
},
{
"text": "where f represents the induced model -Induction with unlabeled data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalization of Different Learning Paradigms",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(X L , Y L ) \u222a X U \u2192 f (2) -Transduction (X L , Y L ) \u222a X U \u2192 Y U",
"eq_num": "(3)"
}
],
"section": "Formalization of Different Learning Paradigms",
"sec_num": "4.1"
},
{
"text": "The three learning paradigms clearly have different advantages and different application scenarios. However, when it comes to exploiting unlabeled data, the tradeoff between the last two is not yet well understood. In this paper, we focus on the last learning paradigm, i.e., transductive learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalization of Different Learning Paradigms",
"sec_num": "4.1"
},
{
"text": "A general approach to transductive learning is to construct a graph of all data points based on distance or similarity among them, and then to use the \"known\" labels to perform some type of graph partitioning or label propagation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transductive Learning with Learned Similarity Function",
"sec_num": "4.2"
},
{
"text": "In this study, we use the Spectral Graph Transducer (implemented in SGTlight) [13] as our baseline transductive learner, which exactly follows the transducitve learning paradigm defined by Equation (3). The basic idea of SGT is to construct a similarity weighted undirected k nearest-neighbor (kNN) graph G on X with adjacency matrix A (defined below), and then run spectral partitioning on it.",
"cite_spans": [
{
"start": 78,
"end": 82,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transductive Learning with Learned Similarity Function",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A ij = similarity(xi ,xj) x k \u2208knn(x i ) similarity(xi ,x k ) x j \u2208 knn(x i ) 0 otherwise",
"eq_num": "(4)"
}
],
"section": "Transductive Learning with Learned Similarity Function",
"sec_num": "4.2"
},
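As a concrete reading of Equation (4), the sketch below builds the row-normalized kNN adjacency matrix from a precomputed pairwise similarity matrix. The function and variable names are illustrative only and are not taken from the SGTlight implementation:

```python
import numpy as np

def knn_adjacency(similarity: np.ndarray, k: int = 100) -> np.ndarray:
    """Adjacency matrix A of Equation (4): A[i, j] is the similarity of x_j to
    x_i normalized over the k nearest neighbors of x_i, and 0 when x_j is not
    among those neighbors."""
    n = similarity.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        sims = similarity[i].copy()
        sims[i] = -np.inf                      # do not count x_i as its own neighbor
        neighbors = np.argsort(sims)[-k:]      # indices of the k most similar points
        denom = similarity[i, neighbors].sum()
        if denom > 0:
            A[i, neighbors] = similarity[i, neighbors] / denom
    return A
```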
{
"text": "Notice that what takes a crucial role in shaping the structure of graph is the similarity function, as which SGT uses the cosine value between feature vectors. However, there might exist other choices for similarity functions. Our hypothesis is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transductive Learning with Learned Similarity Function",
"sec_num": "4.2"
},
{
"text": "This defines the following modified version of the transductive learning paradigm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "If we can learn (induce) a similarity function from part of the labeled data and use it to construct a new weighted graph G over the unlabeled data and the remaining labeled data, a transductive learner on G will outperform the baseline transductive learner that works on G.",
"sec_num": null
},
{
"text": "(X L1 , Y L1 ) \u2192 f L1 f L1 (X L2 \u222a X U ) \u2192 G ((X L2 , Y L2 ) \u222a X U ) G \u2192 Y U (5) in which L 1 \u222a L 2 = L and L 1 \u2229 L 2 = \u03c6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "If we can learn (induce) a similarity function from part of the labeled data and use it to construct a new weighted graph G over the unlabeled data and the remaining labeled data, a transductive learner on G will outperform the baseline transductive learner that works on G.",
"sec_num": null
},
{
"text": "Below is a very straightforward (yet effective, as the readers will see from experimental results) way of defining the \"learned\" similarity function. Suppose the induced model f L1 assigns a confidence score conf idence fL 1 (x i ) to each data points based on its model trained on the the labeled data, then the similarity function in G can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "If we can learn (induce) a similarity function from part of the labeled data and use it to construct a new weighted graph G over the unlabeled data and the remaining labeled data, a transductive learner on G will outperform the baseline transductive learner that works on G.",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "similarity(x i , x j ) = e \u2212distance(xi,xj)",
"eq_num": "(6)"
}
],
"section": "If we can learn (induce) a similarity function from part of the labeled data and use it to construct a new weighted graph G over the unlabeled data and the remaining labeled data, a transductive learner on G will outperform the baseline transductive learner that works on G.",
"sec_num": null
},
{
"text": "where the \"distance\" between two data points is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "If we can learn (induce) a similarity function from part of the labeled data and use it to construct a new weighted graph G over the unlabeled data and the remaining labeled data, a transductive learner on G will outperform the baseline transductive learner that works on G.",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "distance(x i , x j ) = |conf idence fL 1 (x i ) \u2212 conf idence fL 1 (x j )|",
"eq_num": "(7)"
}
],
"section": "If we can learn (induce) a similarity function from part of the labeled data and use it to construct a new weighted graph G over the unlabeled data and the remaining labeled data, a transductive learner on G will outperform the baseline transductive learner that works on G.",
"sec_num": null
},
{
"text": "Simply put: the more different the confidence scores, the further away two instances are from each other; the further away, the less similar they are.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "If we can learn (induce) a similarity function from part of the labeled data and use it to construct a new weighted graph G over the unlabeled data and the remaining labeled data, a transductive learner on G will outperform the baseline transductive learner that works on G.",
"sec_num": null
},
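Equations (6) and (7) make the induced similarity a simple function of the one-dimensional confidence scores. A small sketch under that reading, where the `confidence` array stands for the scores confidence_{f_L1}(x) produced by whichever inductive learner is used (an assumed representation for illustration, not the paper's code):

```python
import numpy as np

def induced_similarity(confidence: np.ndarray) -> np.ndarray:
    """Pairwise similarity of Equations (6)-(7): the distance between two
    instances is the absolute difference of their confidence scores, and the
    similarity decays exponentially with that distance."""
    distance = np.abs(confidence[:, None] - confidence[None, :])
    return np.exp(-distance)
```

The resulting matrix can then replace the cosine similarity when constructing the kNN graph of Equation (4).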
{
"text": "We extract the following lexical and syntactic features (all categorical features are binarized) from the linguistic context in which the two entities co-occur:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.3"
},
{
"text": "Lexical features. Surface tokens of the two entities and three context windows. Shallow-syntactic features. Part-Of-Speech tags (e.g., \"noun\", etc.) corresponding to all tokens in the two entities and three context windows. Deep-syntactic features. To capture the syntactic dependencies between entities, the following features are extracted from the chunklink representation (flattened parse trees): -Chunk tags of the two entities and three context windows. This information is not explicitly present in the treebank format. For example, the \"O\" tag means that the current word is outside of any chunk; the \"I-XP\" tag means that this word is inside an XP chunk; the \"B-XP\" by default means that the word is at the beginning of an XP chunk. -Grammatical function tags of the two entities and three context windows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.3"
},
{
"text": "The last word in each chunk is its head, and the function of the head is the function of the whole chunk. For example, \"NP-SBJ\" means an NP chunk as the subject of the sentence. The other words in a chunk that are not the head have \"NOFUNC\" as their function. -IOB-chains of the heads of the two entities, each of which is a lexicalized path, in other words, a concatenation of the syntactic categories of all the constituents on the path from the root node to this leaf node of the tree (e.g., \"S/VP/NP/NN\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.3"
},
{
"text": "-An ordering flag that indicates the relative position of the two entity arguments of a relation. -Types of the two entities, such as \"PERSON\" or \"GPE\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other features. Miscellaneous information including:",
"sec_num": null
},
{
"text": "The context windows are defined as the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other features. Miscellaneous information including:",
"sec_num": null
},
{
"text": "-Mid-context: everything between the two entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other features. Miscellaneous information including:",
"sec_num": null
},
{
"text": "-Pre-(post-) context: up to two words before (after) the corresponding entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other features. Miscellaneous information including:",
"sec_num": null
},
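To make the window definitions concrete, here is a small sketch that slices a tokenized sentence into the five parts (c_pr, e_1, c_m, e_2, c_pt). The function name and the (start, end) span representation are assumptions made for illustration only:

```python
def context_windows(tokens, e1_span, e2_span, width=2):
    """Split a tokenized sentence into (c_pr, e1, c_m, e2, c_pt), where e1_span
    and e2_span are (start, end) token offsets of the two entity mentions
    (e1 occurring first). Pre- and post-contexts keep at most `width` tokens;
    the mid-context keeps everything between the two entities."""
    (s1, t1), (s2, t2) = e1_span, e2_span
    c_pr = tokens[max(0, s1 - width):s1]   # up to `width` words before entity 1
    e1 = tokens[s1:t1]
    c_m = tokens[t1:s2]                    # everything between the two entities
    e2 = tokens[s2:t2]
    c_pt = tokens[t2:t2 + width]           # up to `width` words after entity 2
    return c_pr, e1, c_m, e2, c_pt
```

On the earlier Disney/ABC example this would yield roughly c_pr = ["Shares", "of"], c_m = [",", "parent", "company", "of"], and c_pt = [",", "are"], depending on tokenization.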
{
"text": "We use the ACE corpus for our task. Specifically, ACE-2 version 1.0 is used, which contains 519 files from sources including broadcast, newswire, and newspaper. The corpus contains 5, 260 manually tagged relations (a small number of additional relations are dropped out due to data preprocessing errors). A breakdown of the data by different relation type is given in Table 1 . We treat the \"training\" and \"devtest\" portions of the corpus as a whole and perform our split on the data in the experiments. The following steps are taken to process the data:",
"cite_spans": [],
"ref_spans": [
{
"start": 368,
"end": 375,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "1. Parse the ACE data in XML format; extract and index entities and relations. 2. Segment the text into sentences using the sentence segmenter provided by the DUC competition 3 . 3. Parse the sentences using the Charniak parser [14] . 4. Convert the parse trees into chunklink format using chunklink.pl [15] . 5. Extract and compute features from the chunklink format.",
"cite_spans": [
{
"start": 228,
"end": 232,
"text": "[14]",
"ref_id": "BIBREF13"
},
{
"start": 303,
"end": 307,
"text": "[15]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "To test the superiority of the learned similarity function in the transductive setting, we experiment the following three scenarios:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup and Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "-A vanilla SGT learner that uses a labeled set of size 2, 000 and an unlabeled set (by hiding the labels) of size 3, 260. -A modified SGT learner (SVM-SGT) that uses SVM-light [16] as the inductive learner for similarity functions. (In this case, the confidence score for each data point is the value of the decision function.) -Another modified SGT learner (SNoW-SGT) that uses SNoW [17] with the Winnow updating rule [18] as the inductive learner for similarity functions. (In this case, the confidence score for each data point is the softmax normalized activation for the positive label.)",
"cite_spans": [
{
"start": 176,
"end": 180,
"text": "[16]",
"ref_id": "BIBREF15"
},
{
"start": 384,
"end": 388,
"text": "[17]",
"ref_id": "BIBREF16"
},
{
"start": 419,
"end": 423,
"text": "[18]",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup and Evaluation Metrics",
"sec_num": "5.2"
},
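The two modified learners differ only in how the confidence score is obtained. A rough sketch of both options, using scikit-learn's LinearSVC in place of SVM-light for the SVM case; this substitution and the function names are assumptions for illustration, since the paper itself uses SVM-light and SNoW:

```python
import numpy as np
from sklearn.svm import LinearSVC

def svm_confidences(X_l1, y_l1, X_rest):
    """SVM-SGT style: train a linear SVM on the L1 split and use the value of
    its decision function as the confidence score for every remaining point."""
    svm = LinearSVC().fit(X_l1, y_l1)
    return svm.decision_function(X_rest)

def snow_style_confidence(neg_activation, pos_activation):
    """SNoW-SGT style: softmax-normalize the two label activations and return
    the probability mass assigned to the positive label."""
    a = np.array([neg_activation, pos_activation], dtype=float)
    a -= a.max()                              # numerical stabilization
    p = np.exp(a) / np.exp(a).sum()
    return p[1]
```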
{
"text": "For both the SVM-SGT and SNoW-SGT learners, we use the same amount of labeled and unlabeled data as for the vanilla SGT learner, with half of the labeled data (1, 000 data points) used for inducing the similarity function, and the other half used for SGT learning on the modified graph/matrix. All three experiments are run with 10 random splits of the whole data set, which contains 5, 260 data points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup and Evaluation Metrics",
"sec_num": "5.2"
},
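The splitting scheme just described (2,000 labeled points per run, half for inducing the similarity function and half for SGT, with the remaining 3,260 points treated as unlabeled/test data, over 10 random runs) could be generated as follows; the helper is illustrative, not the paper's code:

```python
import numpy as np

def random_splits(n=5260, n_labeled=2000, n_runs=10, seed=0):
    """Yield (L1, L2, U) index arrays: L1 for inducing the similarity function,
    L2 for SGT learning on the modified graph, U for the unlabeled/test pool."""
    rng = np.random.default_rng(seed)
    half = n_labeled // 2
    for _ in range(n_runs):
        perm = rng.permutation(n)
        yield perm[:half], perm[half:n_labeled], perm[n_labeled:]
```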
{
"text": "In all three scenarios, the final combination of multiple classifiers is done by assigning the label for which the corresponding binary classifier has the highest confidence score (i.e., the solution of the spectral optimization problem in SGT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup and Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "To evaluate the performance of learning algorithms, we compute overall classification accuracy, and for each class, the precision, recall, and F-measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup and Evaluation Metrics",
"sec_num": "5.2"
},
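These metrics can be computed with standard tooling; a sketch using scikit-learn, offered as an assumed convenience rather than the evaluation script actually used in the paper:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

RELATION_TYPES = ["ROLE", "PART", "AT", "NEAR", "SOC"]

def evaluate(y_true, y_pred):
    """Return overall accuracy and, for each relation class, the
    (precision, recall, F-measure) triple as reported in the result tables."""
    accuracy = accuracy_score(y_true, y_pred)
    p, r, f, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=RELATION_TYPES, zero_division=0)
    return accuracy, {c: (p[i], r[i], f[i]) for i, c in enumerate(RELATION_TYPES)}
```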
{
"text": "We experimented different values of k, ranging from 20 to 120, for kNN graph. Empirically, they do not seem to make a lot of difference. All the performance numbers reported below are based on 100-NN graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results: Effect of Induced Similarity Measure",
"sec_num": "5.3"
},
{
"text": "With the vanilla SGT learner, we get a 70.34% accuracy, and the class-specific performance is summarized in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Results: Effect of Induced Similarity Measure",
"sec_num": "5.3"
},
{
"text": "With the SVM-SGT learner, we get a 78.04% accuracy, and the class-specific performance is summarized in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results: Effect of Induced Similarity Measure",
"sec_num": "5.3"
},
{
"text": "With the SNoW-SGT learner, we get a 76.02% accuracy, and the class-specific performance is summarized in Table 4 . The most important result of interest is that both modified SGT learners consistently outperforms the vanilla SGT learner across all random runs, and the differences are statically significant (p << 0.01). This justifies our hypothesis that a learned similarity function between data points, as opposed to naive cosine similarity, can significantly improve the performance of transductive learners.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 112,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results: Effect of Induced Similarity Measure",
"sec_num": "5.3"
},
{
"text": "To get a sense of the empirical difference between transductive, improved transductive, and inductive learning algorithms, we also present the performance of a few supervised inducitve learners on the same number of training examples (2, 000). Results are also averaged over 10 random runs. With the supervised SVM learner, we get a 82.31% accuracy, and the classspecific performance is summarized in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 401,
"end": 408,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results: Comparison with Supervised Inductive Learners",
"sec_num": "5.4"
},
{
"text": "With the supervised SNoW learner, we get a 77.37% accuracy, and the classspecific performance is summarized in Table 6 . With the supervised Naive Bayes learner, we get a 56.10% accuracy, and the class-specific performance is summarized in Table 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 6",
"ref_id": null
},
{
"start": 240,
"end": 247,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results: Comparison with Supervised Inductive Learners",
"sec_num": "5.4"
},
{
"text": "If we compare the performance presented in this subsection with those of the corresponding transductive learners in the previous subsection, we observe the following pattern: NB < SGT < SNoW-SGT < SNoW < SVM-SGT < SVM With regard to the purpose of this study, again, it is most important to notice that the induction-aided transductive learners significantly outperform the \"pure\" transductive learner. On the other hand, it is reasonable to expect that with improvement of the fundamental algorithm (e.g., spectral partioning), the transductive learners (with or without induced similarity measures) may outperform the best inductive learners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results: Comparison with Supervised Inductive Learners",
"sec_num": "5.4"
},
{
"text": "This paper approaches the relation classification problem with improved transductive learning. Specifically, we learned the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "-Application of transductive learning on NLP problems, including information extraction, has been under-explored. This paper makes the attempt to show that binary relations hidden in natural language text can be effectively classified by using transductive learning. -It is shown that an improved transductive learner using similarity functions induced from a small amount of labeled data outperforms its naive transductive counterpart.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "-Further more, the general idea of inducing similarity functions for transductive learning are potentially applicable to other classification problems, since it doesn't have any specific characteristics tied to the current relation classification problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "In the future, we are interested in pursuing the following directions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "-The current work only deals with binary relations. The algorithms presented should be generalized so that they can work on higher-order relations. -In this study, we only used a randomly selected portion of the labeled data available as the seed labeled set for inducing similarity functions. It is conceivable that if we anchor the seed data points more intelligently (e.g., using clustering or in other unsupervised fashion), better classification performance of the modified transductive learner can be expected. -This chapter presents one particular way of inducing the similarity function for transductive learning, which is simple yet effective. However, it may be worth the effort to investigate other alternatives. -In the machine learning community, how to exploit unlabeled data remains largely an open question. In the long run, it would be very interesting and useful to investigate, both theoretically and empirically, the tradeoff between induction with unlabeled data vs. transduction (including \"induction-aided\" transduction discussed in this paper).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "http://duc.nist.gov/past duc/duc2003/software/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Snowball: Extracting relations from large plain-text collections",
"authors": [
{
"first": "E",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Gravano",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Fifth ACM International Conference on Digital Libraries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agichtein, E., Gravano, L.: Snowball: Extracting relations from large plain-text collections. In: Proceedings of the Fifth ACM International Conference on Digital Libraries. (2000)",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Extracting patterns and relations from the world wide web",
"authors": [
{
"first": "S",
"middle": [],
"last": "Brin",
"suffix": ""
}
],
"year": 1998,
"venue": "WebDB Workshop at 6th International Conference on Extending Database Technology, EDBT'98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brin, S.: Extracting patterns and relations from the world wide web. In: WebDB Workshop at 6th International Conference on Extending Database Technology, EDBT'98. (1998)",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Kernel methods for relation extraction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Zelenko",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Richardella",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "1083--1106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zelenko, D., Aone, C., Richardella, A.: Kernel methods for relation extraction. J. Mach. Learn. Res. 3 (2003) 1083-1106",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Weakly-supervised relation classification for information extraction",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 13th International Conference on Information and Knowledge Management CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Z.: Weakly-supervised relation classification for information extraction. In: Proceedings of the 13th International Conference on Information and Knowledge Management CIKM 2004, Washington DC (2004)",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Classifying semantic relations in bioscience text",
"authors": [
{
"first": "B",
"middle": [],
"last": "Rosario",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosario, B., Hearst, M.: Classifying semantic relations in bioscience text. In: Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics. (2004)",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Understanding the Yarowsky algorithm",
"authors": [
{
"first": "S",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abney, S.: Understanding the Yarowsky algorithm. Computational Linguistics 30 (2004)",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Statistical learning theory",
"authors": [
{
"first": "V",
"middle": [
"N"
],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vapnik, V.N.: Statistical learning theory. John Wiley, NY (1998)",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Transductive inference for text classification using support vector machines",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of ICML-99, 16th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "200--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, T.: Transductive inference for text classification using support vector machines. In Bratko, I., Dzeroski, S., eds.: Proceedings of ICML-99, 16th Interna- tional Conference on Machine Learning, Bled, SL, Morgan Kaufmann Publishers, San Francisco, US (1999) 200-209",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Combining support vector and mathematical programming methods for classification",
"authors": [
{
"first": "K",
"middle": [],
"last": "Bennett",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bennett, K.: Combining support vector and mathematical programming methods for classification. In Sch?lkopf, B., Burges, C., Smola, A., eds.: Advances in Kernel Methods -Support Vector Learning. MIT-Press (1999)",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning from labeled and unlabeled data using graph mincuts",
"authors": [
{
"first": "A",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chawla",
"suffix": ""
}
],
"year": 2001,
"venue": "ICML '01: Proceedings of the Eighteenth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blum, A., Chawla, S.: Learning from labeled and unlabeled data using graph mincuts. In: ICML '01: Proceedings of the Eighteenth International Conference on Machine Learning, San Francisco, CA, USA, Morgan Kaufmann Publishers Inc. (2001) 19-26",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semi-supervised learning using randomized mincuts",
"authors": [
{
"first": "A",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Rwebangira",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reddy",
"suffix": ""
}
],
"year": 2004,
"venue": "ICML '04: Twenty-first international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blum, A., Lafferty, J., Rwebangira, M.R., Reddy, R.: Semi-supervised learning using randomized mincuts. In: ICML '04: Twenty-first international conference on Machine learning, New York, NY, USA, ACM Press (2004)",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Approximation algorithms for classification problems with pairwise relationships: metric labeling and Markov random fields",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kleinberg",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Tardos",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 40th Annual Symposium on Foundations of Computer Science",
"volume": "",
"issue": "",
"pages": "14--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kleinberg, J., Tardos, E.: Approximation algorithms for classification problems with pairwise relationships: metric labeling and Markov random fields. In: Pro- ceedings of the 40th Annual Symposium on Foundations of Computer Science. (1999) 14-23",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Transductive learning via spectral graph partitioning",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of The Twentieth International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, T.: Transductive learning via spectral graph partitioning. In: Proceed- ings of The Twentieth International Conference on Machine Learning (ICML). (2003)",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A maximum-entropy-inspired parser",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charniak, E.: A maximum-entropy-inspired parser. Technical Report CS-99-12, Computer Scicence Department, Brown University (1999)",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The chunklink script",
"authors": [
{
"first": "S",
"middle": [],
"last": "Buchholz",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Buchholz, S.: The chunklink script. (2000) Software available at http://ilk.uvt.nl/~sabine/chunklink/.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Making large-scale support vector machine learning practical",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in kernel methods: support vector learning",
"volume": "",
"issue": "",
"pages": "169--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, T.: Making large-scale support vector machine learning practical. In: Advances in kernel methods: support vector learning. MIT Press, Cambridge, MA, USA (1999) 169-184",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The SNoW learning architecture",
"authors": [
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cumby",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rosen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 1999,
"venue": "UIUC Computer Science Department",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlson, A., Cumby, C., Rosen, J., Roth, D.: The SNoW learning architec- ture. Technical Report UIUCDCS-R-99-2101, UIUC Computer Science Depart- ment (1999)",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning quickly when irrelevant attributes abound: A new linearthreshold algorithm",
"authors": [
{
"first": "N",
"middle": [],
"last": "Littlestone",
"suffix": ""
}
],
"year": 1988,
"venue": "Mach. Learn",
"volume": "2",
"issue": "",
"pages": "285--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Littlestone, N.: Learning quickly when irrelevant attributes abound: A new linear- threshold algorithm. Mach. Learn. 2 (1988) 285-318",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "m , e 2 , c pt ) \u2192 l where a sentence is a concatenation of five parts, with e 1 and e 2 representing the entities, and c pr , c m , and c pt representing the pre-, mid-, and post-context respectively. A label l \u2208 {0, 1} is assigned to the five-tuple. For example, in the following sentence, Shares of Disney, parent company of ABC, are up five eighths.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"text": "Number of relations: break-down by relation type",
"content": "<table><tr><td colspan=\"3\">Relation type Training Devtest</td></tr><tr><td>ROLE</td><td>1964</td><td>472</td></tr><tr><td>PART</td><td>549</td><td>123</td></tr><tr><td>AT</td><td>1249</td><td>328</td></tr><tr><td>NEAR</td><td>78</td><td>31</td></tr><tr><td>SOC</td><td>398</td><td>68</td></tr><tr><td>Total</td><td>4238</td><td>1022</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"text": "Performance of vanilla SGT learner (full)",
"content": "<table><tr><td colspan=\"3\">Relation type Precision Recall F-measure</td></tr><tr><td>ROLE</td><td colspan=\"2\">73.72% 83.31% 78.19%</td></tr><tr><td>PART</td><td colspan=\"2\">63.34% 42.32% 49.93%</td></tr><tr><td>AT</td><td colspan=\"2\">67.43% 72.88% 69.95%</td></tr><tr><td>NEAR</td><td>65.92% 7.36%</td><td>12.71%</td></tr><tr><td>SOC</td><td colspan=\"2\">71.87% 47.81% 56.96%</td></tr></table>"
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"text": "Performance of supervised SVM learner (full) Performance of supervised SNoW learner (full) Performance of supervised naive bayes learner (full)",
"content": "<table><tr><td colspan=\"3\">Relation type Precision Recall F-measure</td></tr><tr><td>ROLE</td><td colspan=\"2\">86.27% 85.96% 86.11%</td></tr><tr><td>PART</td><td colspan=\"2\">75.90% 58.36% 65.89%</td></tr><tr><td>AT</td><td colspan=\"2\">78.87% 88.65% 83.46%</td></tr><tr><td>NEAR</td><td>83.96% 3.57%</td><td>8.41%</td></tr><tr><td>SOC</td><td colspan=\"2\">82.13% 94.29% 87.74%</td></tr></table>"
}
}
}
}