{
"paper_id": "I05-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:26:47.653615Z"
},
"title": "A Method of Recognizing Entity and Relation",
"authors": [
{
"first": "Xinghua",
"middle": [],
"last": "Fan",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Intelligent Technology and Systems",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Intelligent Technology and Systems",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Entity and relation recognition, i.e. (1) assigning semantic classes to entities in a sentence, and (2) determining the relations that hold between entities, is an important task in areas such as information extraction. Subtasks (1) and (2) are typically carried out sequentially, but this approach is problematic: the errors made in subtask (1) are propagated to subtask (2) with an accumulative effect, and the information available only in subtask (2) cannot be used in subtask (1). To address this problem, we propose a method that associates subtasks (1) and (2) more closely with each other. The process is performed in three stages: firstly, two classifiers perform subtasks (1) and (2) independently; secondly, each entity is recognized by taking all the entities and relations into account, using a model called the Entity Relation Propagation Diagram; thirdly, each relation is recognized based on the results of the preceding stage. The experiments show that the proposed method can improve entity and relation recognition to some degree.",
"pdf_parse": {
"paper_id": "I05-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "Entity and relation recognition, i.e. (1) assigning semantic classes to entities in a sentence, and (2) determining the relations that hold between entities, is an important task in areas such as information extraction. Subtasks (1) and (2) are typically carried out sequentially, but this approach is problematic: the errors made in subtask (1) are propagated to subtask (2) with an accumulative effect, and the information available only in subtask (2) cannot be used in subtask (1). To address this problem, we propose a method that associates subtasks (1) and (2) more closely with each other. The process is performed in three stages: firstly, two classifiers perform subtasks (1) and (2) independently; secondly, each entity is recognized by taking all the entities and relations into account, using a model called the Entity Relation Propagation Diagram; thirdly, each relation is recognized based on the results of the preceding stage. The experiments show that the proposed method can improve entity and relation recognition to some degree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Entity and relation recognition, i.e. assigning semantic classes (e.g., person, organization and location) to entities in a sentence and determining the relations (e.g., born-in and employee-of) that hold between entities, is an important task in areas such as information extraction (IE) [1] [2] [3] [4] , question answering (QA) [5] and story comprehension [6] . In a QA system, many questions concern specific entities in certain relations. For example, the question \"Where was Poe born?\" in TREC-9 asks for the location entity in which Poe was born. In a typical IE task of constructing a job database from unstructured texts, the system is required to extract many meaningful entities like titles and salary from the texts and to determine how these entities are associated with job positions.",
"cite_spans": [
{
"start": 293,
"end": 296,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 297,
"end": 300,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 305,
"end": 308,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 335,
"end": 338,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 363,
"end": 366,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task of recognizing entities and relations is usually treated as two separate subtasks carried out sequentially: (1) to recognize entities using an entity recognizer, and (2) to determine the relations that hold between them. This approach has two shortcomings. Firstly, the errors made in subtask (1) are propagated to subtask (2) with an accumulative effect, leading to a loss in the performance of relation recognition. For example, if \"Boston\" is mislabeled as a person, it will never have a chance to be classified as the location of Poe's birthplace. Secondly, the information available only in subtask (2) cannot be used for subtask (1) . For example, if it is difficult to determine whether an entity X is a person, but easy to determine that the relation born-in holds between X and China, then we can conclude that X must be a person.",
"cite_spans": [
{
"start": 114,
"end": 117,
"text": "(1)",
"ref_id": "BIBREF0"
},
{
"start": 172,
"end": 175,
"text": "(2)",
"ref_id": "BIBREF1"
},
{
"start": 328,
"end": 331,
"text": "(2)",
"ref_id": "BIBREF1"
},
{
"start": 634,
"end": 637,
"text": "(1)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address the problems described above, this paper presents a novel approach that links subtasks (1) and (2) more closely together. The process is separated into three stages. Firstly, two classifiers perform subtasks (1) and (2) independently. Secondly, each entity is recognized by taking all the entities and relations into account, using a specially designed model called the Entity Relation Propagation Diagram. Thirdly, each relation is recognized based on the results of the preceding step.",
"cite_spans": [
{
"start": 100,
"end": 103,
"text": "(1)",
"ref_id": "BIBREF0"
},
{
"start": 108,
"end": 111,
"text": "(2)",
"ref_id": "BIBREF1"
},
{
"start": 247,
"end": 250,
"text": "(1)",
"ref_id": "BIBREF0"
},
{
"start": 255,
"end": 258,
"text": "(2)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 defines the problem of entity and relation recognition in a formal way. Section 3 describes the proposed method of recognizing entities and relations. Section 4 gives the experimental results. Section 5 discusses related work and comparisons. Section 6 concludes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Conceptually, the entities and relations in a sentence, together with the mutual dependencies among them, can be viewed as a labeled graph, as shown in Fig. 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 158,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Problem of Entity and Relation Recognition",
"sec_num": "2"
},
{
"text": "In Fig.1 , a node represents an entity and a link denotes the relation held between two entities. The arrowhead of a link represents the direction of the relation. Each entity or relation has several attributes, which are attached as a list to the corresponding node or edge. These attributes fall into two classes. Those that are easy to acquire, such as the words in an entity and the parts of speech of words in its context, are called local attributes; the others, which are difficult to acquire, such as the semantic classes of phrases and the relations among them, are called decision attributes. The issue of entity and relation recognition is to determine a unique value for each decision attribute of all entities and relations, based on their local attributes. To describe the problem in a formal way, we first give some basic definitions as follows.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 8,
"text": "Fig.1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fig. 1. Concept view of the entities and relations among them",
"sec_num": null
},
{
"text": "An entity can be a single word or a set of consecutive words with a predefined boundary. A sentence is a linked list, which consists of words and entities. Entities in a sentence are denoted as E 1 , E 2 \u2026 according to their order, with values ranging over a set of entity classes C E . For example, the sentence in Fig. 2 has three entities: E 1 = \"Dole\", E 2 = \"Elizabeth\" and E 3 = \"Salisbury, N.C.\". Note that it is not easy to determine entity boundaries [7] . Here we assume that this problem has been solved, and that its output serves as the input to our model.",
"cite_spans": [
{
"start": 462,
"end": 465,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 314,
"end": 320,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition 1 (Entity).",
"sec_num": null
},
{
"text": "In this paper, we only consider relations between two entities. An entity pair (E i , E j ) represents a relation R ij from entity E i to entity E j , where E i is the first argument and E j is the second argument. Relation R ij takes a value ranging over a set of relation classes C R . Note that (E i , E j ) is an ordered pair, and there exist two relations R ij =(E i , E j ) and R ji =(E j , E i ) between entities E i and E j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 2 (Relation).",
"sec_num": null
},
{
"text": "The class of an entity or relation is its decision attribute, which is one of a predefined class set and is unknown before being recognized. We denote the sets of predefined entity classes and relation classes as C E and C R respectively. C E has one special element, other-ent, which represents any unlisted entity class. For algorithmic reasons, we suppose all elements in C E are mutually exclusive. Similarly, C R also has one special element, other-rel, which represents that the two involved entities are irrelevant or that their relation class is undefined. For algorithmic reasons, we suppose all elements in C R are mutually exclusive as well. In fact, because the class of an entity or a relation is only a label that we want to predict, if an entity or a relation has more than one label simultaneously, we can separate it into several cases and construct several predefined entity class sets and relation class sets, so as to satisfy the constraint that all elements in C E or C R are mutually exclusive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3 (Class).",
"sec_num": null
},
{
"text": "The classes of entities and relations in a sentence must satisfy some constraints. For example, if the class of entity E 1 , which is the first argument of relation R 12 , is a location, then the class of relation R 12 cannot be born-in because the class of the first argument in relation R 12 has to be a person. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3 (Class).",
"sec_num": null
},
{
"text": "(\u03b5 1 , \u03b5 2 , R, \u03b1 R , \u03b1 \u03b5 ), where \u03b5 1 , \u03b5 2 \u2208 C E , R \u2208 C R , and \u03b1 R \u2208 [0, 1] is a real number that represents a conditional probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 4 (Constraint). A constraint is a 5-tuple",
"sec_num": null
},
{
"text": "distribution \u03b1 R = Pr{\u03b5 1 , \u03b5 2 | R}. \u03b1 \u03b5 \u2208 [0, 1] is a real number that represents a conditional probability distribution \u03b1 \u03b5 = Pr{R | \u03b5 1 , \u03b5 2 }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 4 (Constraint). A constraint is a 5-tuple",
"sec_num": null
},
{
"text": "Note that \u03b1 R and \u03b1 \u03b5 need not be specified manually and can easily be learned from an annotated training dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 4 (Constraint). A constraint is a 5-tuple",
"sec_num": null
},
{
"text": "We denote the observations of an entity and a relation in a sentence as O E and O R respectively. O E and O R represent all the \"known\" local attributes of an entity or a relation, e.g., the spelling of a word, parts of speech, and semantically related attributes acquired from external resources such as WordNet. The observations O E and O R can be viewed as random events, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 5 (Observation).",
"sec_num": null
},
{
"text": "Pr{O E } = Pr{O R } \u2261 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 5 (Observation).",
"sec_num": null
},
{
"text": "because O E and O R in a sentence are known.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 5 (Observation).",
"sec_num": null
},
{
"text": "Based on the above definitions, the issue of entity and relation recognition can be described in a formal way as follows. Suppose that in a sentence, the set of entities is {E 1 , E 2 \u2026 E n }, the set of relations is {R 12 , R 21 , R 13 , R 31 , \u2026, R 1n , R n1 , \u2026, R n-1,n , R n,n-1 }, and the predefined sets of entity classes and relation classes are C E ={e 1 , e 2 , \u2026 e m } and C R ={ r 1 , r 2 , \u2026 r k } respectively; the observation of entity E i is O E i , and the observation of relation R ij is O R ij . Here n, m and k represent the number of entities, the number of predefined entity classes and the number of predefined relation classes respectively. The problem is to search for the most probable class assignment for each entity and each relation of interest, given the observations of all entities and relations. In other words, the problem is to solve the following two equations, using the two kinds of constraint knowledge \u03b1 R and \u03b1 \u03b5 and the interactions among entities and relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 5 (Observation).",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e_d = \\arg\\max_d \\Pr\\{E_i = e_d \\mid O^E_1, O^E_2, \\ldots, O^E_n, O^R_{12}, O^R_{21}, \\ldots, O^R_{1n}, O^R_{n1}, \\ldots, O^R_{n-1,n}, O^R_{n,n-1}\\} \\quad (1) \\qquad r_d = \\arg\\max_d \\Pr\\{R_{ij} = r_d \\mid O^E_1, O^E_2, \\ldots, O^E_n, O^R_{12}, O^R_{21}, \\ldots, O^R_{1n}, O^R_{n1}, \\ldots, O^R_{n-1,n}, O^R_{n,n-1}\\}",
"eq_num": "(2)"
}
],
"section": "Definition 5 (Observation).",
"sec_num": null
},
{
"text": "In (1), d =1, 2, \u2026, m, and in (2), d=1, 2, \u2026, k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 5 (Observation).",
"sec_num": null
},
{
"text": "Because the class assignment of a single entity or relation depends not only on its own local attributes, but also on those of all other entities and relations, equations (1) and (2) cannot be solved directly. To simplify the problem, we present the following method consisting of three stages. Firstly, employ two classifiers to perform entity recognition and relation recognition independently. Their outputs are the conditional probability distributions Pr{E| O E } and Pr{R|O R }, given the corresponding observations. Secondly, recognize an entity by taking account of all entities and relations, as computed in the previous step. This is achieved by using the model Entity Relation Propagation Diagram (ERPD). Lastly, recognize a relation based on the results of the second stage. In this paper, we concentrate on the processes at the second and the third stages, assuming that the process at the first stage is solved and its outputs are given to us as input. At the second stage, the aim of introducing the ERPD is to estimate the conditional probability distribution Pr{E | ERPD} given the constraint \u03b1 R in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Method",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\mathrm{RV} = \\frac{V_1 - V_2}{V_1 + V_2}",
"eq_num": "(4)"
}
],
"section": "The Proposed Method",
"sec_num": "3"
},
{
"text": "RV is introduced because taking the classes of all entities in a sentence into account is effective only for ambiguous entities; the \"Reliable Value\" measures whether an entity is ambiguous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Method",
"sec_num": "3"
},
{
"text": "At the third stage, the basic idea of recognizing a relation is to search for the most probable relation given its observation, subject to the constraints imposed by the results of entity recognition at the second stage. The relation recognition equation (2) then becomes equation (5). In the following sections, we present the ERPD and two algorithms to estimate the conditional probability distribution Pr{E | ERPD}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Method",
"sec_num": "3"
},
{
"text": "r = arg max k Pr{R = r k | O R } \u00d7 W R , where W R = 1 if Pr{r k | \u03b5 1 , \u03b5 2 } > 0, and W R = 0 if Pr{r k | \u03b5 1 , \u03b5 2 } = 0 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Method",
"sec_num": "3"
},
{
"text": "To represent the mutual dependencies among entities and relations, a model named the Entity Relation Propagation Diagram, which can deal with cycles and is similar to the Causality Diagram [8] [9] for complex system fault diagnosis, is developed for entity and relation recognition. The classes of any two entities depend on each other through the relations between them. For example, the class of entity E i in Fig. 3 (a) depends on the classes of the relations R ji between entities E i and E j , and the classes of relations R ij and R ji depend on the classes of entities E i and E j . This means that we can predict the class of a target entity according to the class of its neighboring entity, making use of the relations between them. We further introduce the relation reaction intensity to describe this kind of prediction ability.",
"cite_spans": [
{
"start": 181,
"end": 184,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 465,
"end": 475,
"text": "Fig. 3 (a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Entity Relation Propagation Diagram",
"sec_num": "3.1"
},
{
"text": "Definition 6 (Relation Reaction Intensity). We denote the relation reaction intensity from entity E i to entity E j as P ij , which represents the ability to guess the class of E j if we know the class of its neighboring entity E i and the relation R ij between them. The relation reaction intensity can be modeled using a conditional probability distribution P ij =Pr {E j |E i }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fig. 3. Illustration of relation reaction",
"sec_num": null
},
{
"text": "Definition 7 (Observation Reaction Intensity). We denote the observation reaction intensity as the conditional probability distribution Pr{E | O E } of an entity class given the observation, which is the output of the first stage. In Fig. 4 , the symbols used in the ERPD are defined as follows. A circle node represents an event variable that can be any one of a set of mutually exclusive events, which together cover the whole sample space. Here, an event variable represents an entity, an event represents a predefined entity class, and the whole sample space represents the set of predefined entity classes. A box node represents a basic event, which is one of the independent sources of the associated event variable. Here, a basic event represents the observation of an entity. A directed arc represents a linkage event variable that may or may not enable an input event to cause the corresponding output event. The linkage event variable from one event variable to another represents the relation reaction intensity in Definition 6, and the linkage event variable from a basic event to the corresponding event variable represents the observation reaction intensity in Definition 7. All arcs pointing to a node are in a logical OR relationship.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 5,
"text": ")",
"ref_id": null
},
{
"start": 193,
"end": 199,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fig. 3. Illustration of relation reaction",
"sec_num": null
},
{
"text": "Now, we present two algorithms to compute the conditional probability distribution Pr{E | ERPD}: one is based on the entity relation propagation tree, and the other is the directed iteration algorithm on the ERPD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fig. 4. Illustration of the Entity Relation Propagation Diagram",
"sec_num": null
},
{
"text": "The Entity Relation Propagation Tree (ERPT) is a tree decomposed from an ERPD, which logically represents the relation reaction propagation from all basic events to each event variable. Each event variable in the ERPD corresponds to an ERPT. For example, the ERPT of X 1 in Fig. 4 is illustrated in Fig. 5 . The symbols used in the ERPT are defined as follows. The root of the tree, denoted as Circle, is an event variable corresponding to the event variable in the ERPD. A leaf of the tree, denoted as Box, is a basic event corresponding to the basic event in the ERPD. A middle node of the tree, denoted as Diamond, is a logical OR gate variable, which is made from an event variable that has been expanded in the ERPD; the label in the Diamond corresponds to the label of the expanded event variable. The directed arcs of the tree correspond to the linkage event variables in the ERPD. All arcs pointing to a node are in a logical OR relationship, and the relation between a directed arc and the node linked to it is a logical AND relationship.",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 281,
"text": "Fig. 4",
"ref_id": null
},
{
"start": 300,
"end": 306,
"text": "Fig. 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "The Entity Relation Propagation Tree",
"sec_num": "3.2"
},
{
"text": "To decompose an ERPD into entity relation propagation trees, we first decompose the ERPD into mini node trees. Each event variable in the ERPD corresponds to a mini node tree, in which the root is the event variable currently in concern, and the leaves are composed of all neighboring basic events and event variables that are connected to the linkage event variables pointing to the top event variable. Secondly, we expand a mini node tree into an entity relation propagation tree, i.e., the neighboring event variables in the mini node tree are replaced with their corresponding mini trees. While expanding an event variable, if loops arise, Rule BreakLoop is applied to break them. When an event variable X i has more than one input, these inputs are in a logical OR relationship, as defined in the ERPD. Since these inputs are independent, there exists a case in which one input causes X i to be an instance X i k while another input causes X i to be an instance X i l ; this is impossible because X i k and X i l are mutually exclusive. In the real world, the mechanism by which X i can respond properly to more than one independent input is very complicated and may vary from one case to another. To avoid this difficulty, a basic assumption is introduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Entity Relation Propagation Tree",
"sec_num": "3.2"
},
{
"text": "Assumption. When there is more than one input to X i , each input contributes a possibility to X i . For each input, its contribution equals the probability that it causes X i directly, as if the other inputs did not exist. The final possibility that X i occurs is the sum of the possibilities from all inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Entity Relation Propagation Tree",
"sec_num": "3.2"
},
{
"text": "Suppose an event variable X has m inputs, and the probability distributions of all linkage event variables, linked basic events or event variables are P i and Pr {X i } respectively, i=1,2\u2026m. Based on the above assumption, the formula for computing the probability distribution of X can be derived as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Entity Relation Propagation Tree",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\begin{bmatrix} \\Pr\\{X^1\\} \\\\ \\vdots \\\\ \\Pr\\{X^n\\} \\end{bmatrix} = \\mathrm{Norm}\\left( \\sum_{i=1}^{m} P_i \\times \\begin{bmatrix} \\Pr\\{X_i^1\\} \\\\ \\vdots \\\\ \\Pr\\{X_i^n\\} \\end{bmatrix} \\right)",
"eq_num": "(7)"
}
],
"section": "The Entity Relation Propagation Tree",
"sec_num": "3.2"
},
{
"text": "where Norm() is a function that normalizes the vector in braces, and n is the number of states of X. The probability distribution Pr{E | ERPT} of the variable X in the corresponding ERPT can thus be computed in the following steps. Firstly, find the middle node sequence in the corresponding ERPT by depth-first search; secondly, following that sequence, apply equation (7) to compute the probability distribution of each middle node. In this procedure, earlier results can be reused in later computations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Entity Relation Propagation Tree",
"sec_num": "3.2"
},
{
"text": "The idea is to compute the probability distribution of each event variable on the ERPD directly, without decomposing the ERPD into ERPTs. The aim is to avoid the computational complexity of using ERPTs. This is achieved by adopting an iteration strategy, which is the same as that used in the loopy belief network [10] .",
"cite_spans": [
{
"start": 315,
"end": 319,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Directed Iteration Algorithm on ERPD",
"sec_num": "3.3"
},
{
"text": "The Directed Iteration Algorithm is as follows. Firstly, take only the basic events as input, and initialize each event variable according to formula (7), i.e., assign an initialized probability distribution to each event variable. Secondly, take the basic events and the probability distributions of all neighboring nodes computed in the previous step as input, and iteratively update the probability distributions of all nodes in the ERPD in parallel according to formula (7) . Thirdly, if no probability distribution of any node in the ERPD changes by more than a small threshold between successive iterations, the iteration is said to converge and then stops.",
"cite_spans": [
{
"start": 471,
"end": 474,
"text": "(7)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Directed Iteration Algorithm on ERPD",
"sec_num": "3.3"
},
{
"text": "Dataset. The dataset in our experiments is the same as Roth's dataset \"all\" [11] , which consists of 245 sentences that contain the relation kill, 179 sentences that contain the relation born-in and 502 sentences that contain no relations. The predefined entity classes are other-ent, person and location, and the predefined relation classes are other-rel, kill and born-in. As input, we use the results of the first stage of our method, which were provided by W. Yih. Experiment Design. We compare five approaches in the experiments: Basic, Omniscient, ERPD, ERPD* and BN. The Basic approach, which is the baseline, tests the performance of the two classifiers of the first stage, which are learned from local attributes independently. The Omniscient approach is similar to Basic; the only difference is that the true classes of entities are exposed to the relation classifier and vice versa. Note that it is certainly impossible to know the true classes of an entity and a relation in advance. BN is the method based on the belief network; we follow the BN method according to the description in [11] . ERPD is the proposed method based on the ERPT, and ERPD* is the proposed method based on the directed iteration algorithm. The threshold of RV is 0.4.",
"cite_spans": [
{
"start": 80,
"end": 84,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 1104,
"end": 1108,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The experimental results are shown in Table 1 . It can be seen from the table that 1) it is very difficult to improve the entity recognition because BN and Omniscient almost do not improve the performance of Basic; 2) the proposed method can improve the precision, which is thought of being more important than the recall for the task of recognizing entity; 3) the relation recognition can be improved if we can improve the entity recognition, as indicated by the comparisons of Basic, ERPD and Omniscient; 4) the proposed method can improve the relation recognition, and it performance is almost equal to that of BN; 5) the performance of ERPD and ERPD* is almost equal, so the directly iteration algorithm is effective. ",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results.",
"sec_num": null
},
{
"text": "Targeting at the problems mentioned above, a method based on the belief network has been presented in [11] , in which two subtasks are carried out simultaneously. Its procedure is as follows: firstly, two classifiers are trained for recognizing entities and relations independently and their outputs are treated as the conditional probability distributions for each entity and relation, given the observed data; secondly, this information together with the constraint knowledge among relations and entities are represented in a belief network [12] and are used to make global inferences for all entities and relations of interest. This method is denoted BN in our experiments.",
"cite_spans": [
{
"start": 102,
"end": 106,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 543,
"end": 547,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Comparison",
"sec_num": "5"
},
{
"text": "Although BN can block the error propagation from the entity recognizer to the relation classifier as well as improve the relation recognition, it cannot make use of the information, which is only available in relation recognition, to help entity recognition. Experiments show that BN cannot improve entity recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Comparison",
"sec_num": "5"
},
{
"text": "Comparing to BN, the proposed method in this paper can overcome the two shortcomings of it. Experiments show that it can not only improve the relation recognition, but also improve the precision of entity recognition. Moreover, the model ERPD could be more expressive enough than the belief network for the task of recognizing entity and relation. It can represent the mutually dependences between entities and relations by introducing relation reaction intensity, and can deal with a loop without the limitation of directed acyclic diagram (DAG) in the belief network. At the same time, the proposed method can merge two kinds of constraint knowledge (i.e. \u03b5 \u03b1 \u03b1 and R in Definition 4), but the method based on belief network can only use \u03b5 \u03b1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Comparison",
"sec_num": "5"
},
{
"text": "Finally, the proposed method has a high computation efficiency while using the directed iteration algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Comparison",
"sec_num": "5"
},
{
"text": "The subtasks of entity recognition and relation recognition are typically carried out sequentially. This paper proposed an integrated approach that allows the two subtasks to be performed in a much closer way. Experimental results show that this method can improve the entity and relation recognition in some degree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In addition, the Entity Relation Propagation Diagram (ERPD) is used to figure out the dependencies among entities and relations. It can also merge some constraint knowledge. Regarding to ERPD, two algorithms are further designed, one is based on the entity relation propagation tree, the other is the directed iteration algorithm on ERPD. The latter can be regarded as an approximation of the former with a higher computational efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "We would like to express our deepest gratitude to Roth D. and Yih W. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "MUC-7 Information Extraction Task Definition",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chinchor",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceeding of the Seventh Message Understanding Conference (MUC-7), Appendices",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinchor, N. MUC-7 Information Extraction Task Definition. In Proceeding of the Sev- enth Message Understanding Conference (MUC-7), Appendices, 1998.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Relational Learning of Pattern-match Rules for Information Extraction",
"authors": [
{
"first": "M",
"middle": [],
"last": "Califf",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Sixteenth National Conference on Artificial Intelligence and Eleventh Conference on Innovative Applications of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "328--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Califf, M. and Mooney, R. Relational Learning of Pattern-match Rules for Information Extraction. In Proceedings of the Sixteenth National Conference on Artificial Intelligence and Eleventh Conference on Innovative Applications of Artificial Intelligence, 328-334, Orlando, Florida, USA, AAAI Press, 1999.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Machine Learning for Information Extraction in Informal Domains",
"authors": [
{
"first": "D",
"middle": [],
"last": "Freitag",
"suffix": ""
}
],
"year": 2000,
"venue": "Machine learning",
"volume": "39",
"issue": "2/3",
"pages": "169--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freitag, D. Machine Learning for Information Extraction in Informal Domains. Machine learning, 39(2/3): 169-202, 2000.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Relational Learning via Prepositional Algorithms: An Information Extraction Case Study",
"authors": [
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1257--1263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roth, D. and Yih, W. Relational Learning via Prepositional Algorithms: An Information Extraction Case Study. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, 1257-1263, Seattle, Washington, USA, Morgan Kaufmann, 2001.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Overview of the Trec-9 Question Answering Track",
"authors": [
{
"first": "E",
"middle": [],
"last": "Voorhees",
"suffix": ""
}
],
"year": 2000,
"venue": "The Ninth Text Retrieval Conference (TREC-9",
"volume": "",
"issue": "",
"pages": "71--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Voorhees, E. Overview of the Trec-9 Question Answering Track. In The Ninth Text Re- trieval Conference (TREC-9), 71-80, 2000.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deep Read: A Reading Comprehension System",
"authors": [
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Light",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Breck",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Burger",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirschman, L., Light, M., Breck, E. and Burger, J. Deep Read: A Reading Comprehension System. In Proceedings of the 37th Annual Meeting of Association for Computational Lin- guistics, 1999.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Parsing by Chunks",
"authors": [
{
"first": "S",
"middle": [
"P"
],
"last": "Abney",
"suffix": ""
}
],
"year": 1991,
"venue": "Principle-based parsing: Computation and Psycholinguistics",
"volume": "",
"issue": "",
"pages": "257--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abney, S.P. Parsing by Chunks. In S. P. Abney, R. C. Berwick, and C. Tenny, editors, Principle-based parsing: Computation and Psycholinguistics, 257-278. Kluwer, Dordrecht, 1991.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Causality Diagram Theory Research and Applying it to Fault Diagnosis of Complexity System",
"authors": [
{
"first": "Xinghua",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": 2002,
"venue": "P.R. China",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinghua Fan. Causality Diagram Theory Research and Applying it to Fault Diagnosis of Complexity System, Ph.D. Dissertation of Chongqing University, P.R. China, 2002.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Reasoning Algorithm in Multi-Valued Causality Diagram",
"authors": [
{
"first": "Xinghua",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Zhang",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Sun",
"middle": [],
"last": "Maosong",
"suffix": ""
},
{
"first": "Huang",
"middle": [],
"last": "Xiyue",
"suffix": ""
}
],
"year": 2003,
"venue": "Chinese Journal of Computers",
"volume": "26",
"issue": "3",
"pages": "310--322",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinghua Fan, Zhang Qin, Sun Maosong, Huang Xiyue. Reasoning Algorithm in Multi- Valued Causality Diagram, Chinese Journal of Computers, 26(3), 310-322, 2003.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Loopy Belief Propagation for Approximate Inference: An empirical study",
"authors": [
{
"first": "K",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceeding of Uncertainty in AI",
"volume": "",
"issue": "",
"pages": "467--475",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murphy, K., Weiss, Y., and Jordan, M. Loopy Belief Propagation for Approximate Infer- ence: An empirical study. In Proceeding of Uncertainty in AI, 467-475, 1999.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Probability Reasoning for Entity & Relation Recognition",
"authors": [
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 20th International Conference on Computational Linguistics (COLING-02)",
"volume": "",
"issue": "",
"pages": "835--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roth, D. and Yih, W. Probability Reasoning for Entity & Relation Recognition. In Pro- ceedings of 20th International Conference on Computational Linguistics (COLING-02), 835-841, 2002.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Probability Reasoning in Intelligence Systems",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pearl",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pearl, J. Probability Reasoning in Intelligence Systems. Morgan Kaufmann, 1988.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "in Definition 4, and W R is the weight of the constraint knowledge.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "element kl ij p of P ij represents the conditional probability Pr {E j =e l |E i =e k }: the number of relations in relation class set. In equation (",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Illustration of the entity relation propagation treeRule BreakLoop. An event variable cannot propagate the relation reaction to itself. Rule 1 is derived from a law commonsense -one can attest that he is sinless. When such a loop is encountered, the descendant event variable, which is same as the head event variable of the loop, is treated as a null event variable, together with its connected linkage event variable to be deleted.Compute the Conditional Probability Distribution in an ERPT.After an ERPD is decomposed into entity relation propagation trees, the conditional probability distribu-",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "is a threshold determined by the experiment. RV\u2208 [0, 1] is a real number, called the reliable value, representing the belief degree of the output of the entity recognizer at the first stage. Suppose the maximum value of the conditional probabil-",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td/><td/><td/><td>e</td><td colspan=\"2\">=</td><td colspan=\"2\">\u23aa \u23a9 \u23aa \u23a8 \u23a7 arg arg</td><td>max max</td><td colspan=\"2\">d d</td><td colspan=\"2\">Pr{E Pr{E</td><td>i i</td><td>= =</td><td>e e</td><td>d d</td><td>| |</td><td>\u2264 \u03b8 RV > RV ERPD} } O E i</td><td>\u03b8</td><td>(3)</td></tr><tr><td colspan=\"2\">where \u03b8 ity distribution</td><td colspan=\"14\">} Pr{E E is V m and the second value is V s , RV is defined as: O |</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>m</td><td>s</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>m</td><td>s</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Definition 5 and</td></tr><tr><td>the sets {</td><td>Pr{E</td><td>i</td><td>|</td><td>O</td><td>E i</td><td colspan=\"2\">}</td><td colspan=\"3\">} and {</td><td colspan=\"2\">Pr{R</td><td colspan=\"2\">ij</td><td>|</td><td>O</td><td>R ij</td><td>}</td><td>} (i, j=1,\u2026,n), as computed at the first</td></tr><tr><td colspan=\"13\">stage. For the readability, suppose</td><td/><td colspan=\"2\">Pr{E</td><td>|</td><td>ERPD}</td><td>is given, the entity recognition</td></tr><tr><td colspan=\"15\">equation (1) becomes the equation (3).</td></tr></table>"
},
"TABREF2": {
"text": "Experimental results",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}