{ "paper_id": "O12-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:03:07.606395Z" }, "title": "A possibilistic approach for automatic word sense disambiguation", "authors": [ { "first": "Oussama", "middle": [ "Ben" ], "last": "Khiroun", "suffix": "", "affiliation": { "laboratory": "RIADI Research Laboratory", "institution": "ENSI Manouba University", "location": { "postCode": "2010", "country": "Tunisia" } }, "email": "oussama.ben.khiroun@gmail.com" }, { "first": "Bilel", "middle": [], "last": "Elayeb", "suffix": "", "affiliation": { "laboratory": "RIADI Research Laboratory", "institution": "ENSI Manouba University", "location": { "postCode": "2010", "country": "Tunisia" } }, "email": "bilel.elayeb@riadi.rnu.tn" }, { "first": "Ibrahim", "middle": [], "last": "Bounhas", "suffix": "", "affiliation": {}, "email": "bounhas.ibrahim@yahoo.fr" }, { "first": "Fabrice", "middle": [], "last": "Evrard", "suffix": "", "affiliation": {}, "email": "fabrice.evrard@enseeiht.fr" }, { "first": "Narj\u00e8s", "middle": [], "last": "Bellamine", "suffix": "", "affiliation": {}, "email": "narjes.bellamine@ensi.rnu.tn" }, { "first": "Ben", "middle": [], "last": "Saoud", "suffix": "", "affiliation": { "laboratory": "RIADI Research Laboratory", "institution": "ENSI Manouba University", "location": { "postCode": "2010", "country": "Tunisia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents and experiments a new approach for automatic word sense disambiguation (WSD) applied for French texts. First, we are inspired from possibility theory by taking advantage of a double relevance measure (possibility and necessity) between words and their contexts. Second, we propose, analyze and compare two different training methods: judgment and dictionary based training. Third, we summarize and discuss the overall performance of the various performed tests in a global analysis way. In order to assess and compare our approach with similar WSD systems we performed experiments on the standard ROMANSEVAL test collection.", "pdf_parse": { "paper_id": "O12-1025", "_pdf_hash": "", "abstract": [ { "text": "This paper presents and experiments a new approach for automatic word sense disambiguation (WSD) applied for French texts. First, we are inspired from possibility theory by taking advantage of a double relevance measure (possibility and necessity) between words and their contexts. Second, we propose, analyze and compare two different training methods: judgment and dictionary based training. Third, we summarize and discuss the overall performance of the various performed tests in a global analysis way. In order to assess and compare our approach with similar WSD systems we performed experiments on the standard ROMANSEVAL test collection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word Sense Disambiguation (WSD) is the ability to identify the meaning of a word in its context in a computational manner. A lexical semantic disambiguation allows to select in a predefined list the significance of a word given its context. In fact, the task of semantic disambiguation requires enormous resources such as labeled corpora, dictionaries, semantic networks or ontologies. 
This task is important in many fields such as optical character recognition, lexicography, speech recognition, natural language comprehension, accent restoration, content analysis, content categorization, information retrieval and computer-aided translation [13] [14] . WSD has long been considered a difficult task in the field of Natural Language Processing. In fact, a reader is frequently faced with problems of ambiguity in information retrieval or automatic translation tasks. Indeed, the main idea on which much research in this field is based is to find relations between an occurrence of a word and its context which help identify the most probable sense of this occurrence [1] [2] . We discuss in this paper the contribution of a new approach to WSD. We presuppose that combining knowledge extracted from corpora and traditional dictionaries will improve disambiguation rates. We also show that this approach may achieve satisfactory results even without using manually labeled corpora for training. We further propose possibility theory as an efficient framework to solve the WSD problem, seen as a case of imprecision. Indeed, WSD approaches need training and matching models which compute the similarities (or the relevance) between senses and contexts. Existing models for WSD are based on poor, uncertain and imprecise data. In contrast, possibility theory is naturally suited to this kind of application, because it makes it possible to express ignorance and to account for imprecision and uncertainty at the same time. For example, recent work [23] [24] proposing a possibilistic approach for the morphological disambiguation of Arabic texts showed the contribution of possibilistic models compared to probabilistic ones. Concretely, we evaluate the relevance of a word sense given a polysemous sentence using two types of relevance: possible relevance and necessary relevance. This paper is structured as follows. First, we give an overview of the main existing WSD approaches in section 2. Section 3 briefly recalls possibility theory. Our approach is detailed in section 4. Subsequently, a set of experiments and comparison results are discussed in section 5. Finally, we summarize our findings in the conclusion and propose some directions for future research.", "cite_spans": [ { "start": 644, "end": 648, "text": "[13]", "ref_id": "BIBREF12" }, { "start": 649, "end": 653, "text": "[14]", "ref_id": "BIBREF13" }, { "start": 1089, "end": 1092, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 1093, "end": 1096, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 1984, "end": 1988, "text": "[23]", "ref_id": "BIBREF22" }, { "start": 1989, "end": 1993, "text": "[24]", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this literature review, we briefly cite the most important methods which helped clarify the main issues in WSD. We mainly focus on the limits of traditional dictionaries in the WSD process. In fact, the most popular WSD approaches are based on traditional dictionaries or thesauruses (such as WordNet), which are quite similar in terms of sense organization. Indeed, dictionaries were made for human use and are not suitable for automatic processing; they thus lack accurate information useful for WSD.
This fact is confirmed by V\u00e9ronis [16] [17] , who argues that no progress is possible in WSD as long as dictionaries do not include distributional criteria or surface cues (syntax, collocations, etc.) in their definitions. In addition, the inconsistency of dictionaries is well known to lexicographers. For these multiple reasons, many researchers proposed to build new types of dictionaries or to restructure traditional ones. For example, Reymond [22] proposed to build a \"distributional\" dictionary based on differential criteria. The idea is to organize words in lexical items having coherent distributional properties. This dictionary initially contained the detailed description of 20 common nouns, 20 verbs and 20 adjectives. It enabled him to manually label each of the 53,000 occurrences of these 60 terms in the corpus of the SyntSem project (a corpus of approximately 5.5 million words, composed of texts of various kinds). This corpus is a starting resource for studying the criteria of automatic semantic disambiguation, since it helps to implement and evaluate WSD algorithms. Audibert [15] worked on Reymond's dictionary to study different criteria of disambiguation (co-occurrence, domain information, synonyms of co-occurring words and so on). In the same perspective, V\u00e9ronis [17] used a co-occurrence graph to automatically determine the various usages of a word in a textual base. His algorithm searches for high-density zones in the co-occurrence graph and isolates infrequent usages. Thus, V\u00e9ronis applied the advice of Wittgenstein: \"Don't look for the meaning, but for the use\". In fact, co-occurrence-based approaches generate much noise, since unrelated words may occur in the same sentence. We also find that none of these methods sufficiently addressed the problem of lexicon organization. Even the methods based on computing similarities do not seek to represent the semantic distances between senses and do not manage to correctly organize the obtained senses. However, several research works tried to resolve the problem of polysemy at the dictionary level. Gaume et al. (2004) [18] used a dictionary as an information source to discover relations between lexical items. Their work is based on an algorithm which computes the semantic distance between the words of the dictionary by taking into account its complete topology, which gives it greater robustness. This algorithm makes it possible to address the polysemy which exists in the definitions of the dictionary. They first tested this approach on the disambiguation of the definitions of the dictionaries themselves. But this work is limited to disambiguating nouns, using only nouns, or nouns and verbs. Our approach is supported by a semantic space where the various senses of a word are organized and exploited. Indeed, computing the sense of a sentence is a dynamic process during which the senses of the various words influence each other and which leads simultaneously to the determination of the sense of each word and the global sense of the sentence. A distance between contexts and word senses is used to find the correct sense in a given sentence. Our work uses possibilistic networks to compute a preliminary ambiguity rate for each sentence and to match senses to contexts.
We therefore start by recalling the principles of possibility theory in the following section.", "cite_spans": [ { "start": 540, "end": 544, "text": "[16]", "ref_id": "BIBREF15" }, { "start": 545, "end": 549, "text": "[17]", "ref_id": "BIBREF16" }, { "start": 970, "end": 974, "text": "[22]", "ref_id": "BIBREF21" }, { "start": 1613, "end": 1617, "text": "[15]", "ref_id": "BIBREF14" }, { "start": 1808, "end": 1812, "text": "[17]", "ref_id": "BIBREF16" }, { "start": 2631, "end": 2650, "text": "Gaume et al. (2004)", "ref_id": "BIBREF17" }, { "start": 2651, "end": 2655, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2." }, { "text": "Possibility theory, introduced by Zadeh (1978) [10] and developed by several authors, handles uncertainty on the interval [0,1], called the possibility scale, in a qualitative or quantitative way. This section briefly reviews the basic elements of possibility theory; for more details see [3] [4] [21].", "cite_spans": [ { "start": 50, "end": 54, "text": "[10]", "ref_id": "BIBREF9" }, { "start": 287, "end": 290, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Possibility Theory", "sec_num": "3." }, { "text": "Possibility theory is based on possibility distributions. The latter, denoted by \u03c0, are mappings from \u03a9 (the universe of discourse) to the scale [0,1] encoding partial knowledge about the world. The possibility scale is interpreted in two ways. In the ordinal case, possibility values only reflect an ordering between possible states; in the numerical case, possibility values often account for upper probability bounds [3] [4] [21] . A probability distribution mainly differs from a possibility distribution in that it requires the probabilities of the elements of the universe of discourse to sum to 1; this restriction does not apply in possibility theory. Moreover, in probability theory the probability of an event determines the probability of its complement; this is not the case in possibility theory, which involves non-additive measures. When probabilities are used to represent uncertainty, an exhaustive set of mutually exclusive alternatives must be listed. This is the fundamental difficulty of using probabilities here: in reality, an expert cannot provide exhaustive and mutually exclusive events, since his/her knowledge grows over time and uncertainty about the situation decreases accordingly. Furthermore, possibility distributions may be more expressive in some situations, being able to distinguish between ambiguity and ignorance, whereas probability distributions can only represent ambiguity. In particular, the distribution \u03c0(\u03c9) = 1; \u2200 \u03c9 \u2208 \u03a9 expresses total ignorance, reflecting the absence of any relevant information. In probability theory, by contrast, complete ignorance is modeled by a uniform distribution which assigns equal weights p(\u03c9) = 1/n; \u2200 \u03c9 \u2208 \u03a9 to all events, although no justification supports this arbitrary assignment.
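To make this contrast concrete, the short Python sketch below is illustrative only (the universe and the event are hypothetical): it encodes total ignorance as a vacuous possibility distribution and compares it with the uniform probability assignment.

```python
# A minimal sketch, assuming a finite universe of four worlds.
universe = ['w1', 'w2', 'w3', 'w4']

pi = {w: 1.0 for w in universe}                 # possibility: total ignorance
p = {w: 1.0 / len(universe) for w in universe}  # probability: uniform prior

A = {'w1', 'w2'}                                # an arbitrary event

possibility_A = max(pi[w] for w in A)                           # Pi(A)
necessity_A = 1.0 - max(pi[w] for w in universe if w not in A)  # N(A) = 1 - Pi(not A)
print(possibility_A, necessity_A)  # 1.0 0.0 -> A fully possible, not at all certain

print(sum(p[w] for w in A))  # 0.5 -> the uniform prior asserts a definite degree of belief
```

Under ignorance, every non-empty event is fully possible and not at all necessary, while the uniform probability forces the arbitrary commitment discussed above.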
For further reading, we refer to [3] [4].", "cite_spans": [ { "start": 418, "end": 421, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 426, "end": 430, "text": "[21]", "ref_id": "BIBREF20" }, { "start": 1907, "end": 1910, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Possibility distribution", "sec_num": "3.1" }, { "text": "While other approaches provide a unique relevance value, possibility theory defines two measures. A possibility distribution \u03c0 on \u03a9 enables events to be qualified in terms of their plausibility and their certainty, through possibility and necessity measures respectively. In our context of WSD, the possible relevance allows us to reject irrelevant senses, while the necessary relevance reinforces possibly relevant senses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possibility and necessity measures", "sec_num": "3.2" }, { "text": "\u2022 The possibility of an event A relies on the most normal situation in which A is true.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possibility and necessity measures", "sec_num": "3.2" }, { "text": "\u03a0(A) = max x \u2208 A \u03c0(x) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possibility and necessity measures", "sec_num": "3.2" }, { "text": "\u2022 The necessity of an event A reflects the most normal situation in which A is false.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possibility and necessity measures", "sec_num": "3.2" }, { "text": "N(A) = min x \u2209 A (1 - \u03c0(x)) = 1 - \u03a0(\u00acA) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possibility and necessity measures", "sec_num": "3.2" }, { "text": "The width of the gap between N(A) and \u03a0(A) evaluates the amount of ignorance about A. Note that N(A) > 0 implies \u03a0(A) = 1. When A is a fuzzy set this property no longer holds, but the inequality N(A) \u2264 \u03a0(A) remains valid [3] [4] [21] .
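A minimal Python sketch of equations (1) and (2), assuming a finite universe and a dict-based possibility distribution (all names are illustrative), may help fix the definitions:

```python
# A minimal sketch of the possibility and necessity measures, eqs. (1)-(2).
def possibility(A, pi):
    # Eq. (1): Pi(A) = max of pi(x) over the worlds x in A.
    return max((pi[x] for x in A), default=0.0)

def necessity(A, pi):
    # Eq. (2): N(A) = min over x outside A of (1 - pi(x)) = 1 - Pi(complement of A).
    complement = set(pi) - set(A)
    return 1.0 - possibility(complement, pi)

pi = {'w1': 1.0, 'w2': 0.6, 'w3': 0.2}  # normalized: some world has possibility 1
A = {'w1', 'w2'}
print(possibility(A, pi), necessity(A, pi))  # 1.0 0.8
# The gap Pi(A) - N(A) = 0.2 measures the remaining ignorance about A,
# and N(A) > 0 indeed comes with Pi(A) = 1.
```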
", "cite_spans": [ { "start": 220, "end": 223, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 228, "end": 232, "text": "[21]", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Possibility and necessity measures", "sec_num": "3.2" }, { "text": "1 ) ( max ) ( = \u03a0 \u2208 v V dom v ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possibilistic Networks", "sec_num": "3.3" }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possibilistic Networks", "sec_num": "3.3" }, { "text": "\u2022 If V is not a root node, the conditional distribution of V in the context of its parents context satisfy:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possibilistic Networks", "sec_num": "3.3" }, { "text": "1 ) ( max ) ( = \u03a0 \u2208 V V dom v Par v ; ) ( V V Par dom Par \u2208 (4) Where: dom(V): domain of V; Par V : value of parents of V; dom(Par V ): domain of parent set of V.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possibilistic Networks", "sec_num": "3.3" }, { "text": "In this paper, possibilistic networks are exploited to compute relevance of a correct sense of a polysemous word given the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possibilistic Networks", "sec_num": "3.3" }, { "text": "Our approach tries to avoid the limits of traditional dictionaries by combining them with knowledge extracted from corpora and organized as a Semantic Dictionary of Contexts (SDC). Thus, the richness of traditional dictionaries is improved by contextual knowledge linking words to their contexts. WSD is also seen as a classification task where we have training and testing steps. In the training step, we need to learn dependencies between senses of words and contexts. This may be performed in labeled corpora (Judgment-based training) leading to a semi-automatic approach. We may also weight these dependencies directly from a traditional dictionary (Dictionary-based training), what may be considered as an automatic approach. In this case, we need to organize all the instances in such a way that improves classification rates. In this paper, we propose to sort the instances by computing an ambiguity rate (sf. section 4.2). In the testing step, the distance between the context of an occurrence of a word and its senses is computed in order to select the best sense. We present in the next sections the formulae for computing the DPR and the ambiguity rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed approach", "sec_num": "4." }, { "text": "Supposing that we have only one polysemous word in a sentence ph, let us note DPR(S i |ph) the Degree of Possibilistic Relevance of a word sense S i given ph. Let us consider that ph is composed of T words: ph = (t 1 , t 2 ,\u2026,t T ). We evaluate the relevance of a word sense S i given a sentence ph by a possibilistic matching model of Information Retrieval (IR) used in [5] [21] . In this case, the goal is to compute a matching score between a query and a document. In the case of WSD, the relevance of a sense given a polysemous sentence is modeled by a double measurement. The possible relevance makes it possible to reject the irrelevant senses. But, the necessary relevance makes it possible to reinforce relevance of the restored word senses, which have not been rejected by the possibility. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possibilistic Networks", "sec_num": "3.3" }, { "text": "Our approach tries to avoid the limits of traditional dictionaries by combining them with knowledge extracted from corpora and organized as a Semantic Dictionary of Contexts (SDC). Thus, the richness of traditional dictionaries is improved by contextual knowledge linking words to their contexts. WSD is also seen as a classification task with training and testing steps. In the training step, we need to learn dependencies between senses of words and contexts. This may be performed on labeled corpora (judgment-based training), leading to a semi-automatic approach. We may also weight these dependencies directly from a traditional dictionary (dictionary-based training), which may be considered an automatic approach. In this case, we need to organize all the instances in such a way that classification rates improve. In this paper, we propose to sort the instances by computing an ambiguity rate (cf. section 4.2). In the testing step, the distance between the context of an occurrence of a word and its senses is computed in order to select the best sense. We present in the next sections the formulae for computing the DPR and the ambiguity rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed approach", "sec_num": "4." }, { "text": "Supposing that we have only one polysemous word in a sentence ph, let us note DPR(S i |ph) the Degree of Possibilistic Relevance of a word sense S i given ph. Let us consider that ph is composed of T words: ph = (t 1 , t 2 ,\u2026,t T ). We evaluate the relevance of a word sense S i given a sentence ph by a possibilistic matching model of Information Retrieval (IR) used in [5] [21]. In this case, the goal is to compute a matching score between a query and a document. In the case of WSD, the relevance of a sense given a polysemous sentence is modeled by a double measurement. The possible relevance makes it possible to reject the irrelevant senses, while the necessary relevance reinforces the relevance of the retained word senses, i.e., those which have not been rejected by the possibility. In our case, a possibilistic network links the word sense (S i ) to the words of a given polysemous sentence (ph = (t 1 , t 2 ,\u2026,t T )), as presented in figure 1. The relevance of each word sense (S j ), given the polysemous sentence (ph), is calculated as follows. According to Elayeb et al. (2009) [5] , the possibility \u03a0(S j |ph) is proportional to:", "cite_spans": [ { "start": 371, "end": 374, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 375, "end": 379, "text": "[21]", "ref_id": "BIBREF20" }, { "start": 976, "end": 996, "text": "Elayeb et al. (2009)", "ref_id": "BIBREF4" }, { "start": 997, "end": 1000, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03a0'(S j |ph) = \u03a0(t 1 | S j )*\u2026* \u03a0(t T | S j ) = nft 1j *\u2026* nft Tj", "eq_num": "(5)" } ], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "With nft ij = tf ij /max k (tf kj ): the normalized frequency of the term t i in the sense S j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "And tf ij = (number of occurrences of the term t i in S j /number of terms in S j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "The necessity to retain a relevant sense S j for the sentence ph, denoted N(S j |ph), is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "N(S j | ph) = 1-\u03a0 (\u00acS j | ph) (6) Where: \u03a0(\u00acS j | ph) = (\u03a0(ph| \u00acS j )* \u03a0(\u00acS j ))/\u03a0(ph)", "eq_num": "(7)" } ], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "In the same way, \u03a0(\u00acS j | ph) is proportional to:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03a0'(\u00acS j | ph) = \u03a0(t 1 | \u00acS j )* \u2026*\u03a0(t T | \u00acS j )", "eq_num": "(8)" } ], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "This numerator can be expressed by: \u03a0'(\u00acS j | ph) = (1-\u03c6S 1j )*\u2026* (1-\u03c6S Tj ) (9) Where: \u03c6S ij = Log 10 (nCS/nS i )*(nft ij ) (10) With: nCS = the number of senses of the word in the dictionary, and nS i = the number of senses of the word containing the term t i . This includes only senses which are in the SDC and does not cover all the senses of t i which are in the traditional dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "We define the Degree of Possibilistic Relevance (DPR) of each word sense S j , given a polysemous sentence ph, by the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "DPR(S j | ph) = \u03a0( S j | ph) + N(S j | ph)", "eq_num": "(11)" } ], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "The preferred senses are those which have a high value of DPR(S j | ph).
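The matching model of equations (5) to (11) can be sketched in a few lines of Python. The sketch below is an illustration under the conventions of the worked example in section 4.3 (in particular, it uses the raw term frequency tf as nft, as that example does); the function and variable names are ours, not the system's:

```python
from math import log10

def tf(term, sense_terms):
    # Term frequency of the term in the sense gloss (the tf_ij above).
    return sense_terms.count(term) / len(sense_terms)

def possibility(sentence, sense_terms):
    # Eq. (5): product over the sentence words (tf used as nft, as in sec. 4.3).
    prod = 1.0
    for t in sentence:
        prod *= tf(t, sense_terms)
    return prod

def necessity(sentence, sense_terms, senses):
    # Eqs. (6)-(10): N(S|ph) = 1 - prod_i (1 - phi(S, t_i)), with
    # phi(S, t) = log10(nCS / nS_t) * nft(t, S).
    nCS = len(senses)
    prod = 1.0
    for t in sentence:
        nS_t = sum(1 for terms in senses.values() if t in terms) or 1  # guard: unseen term
        prod *= 1.0 - log10(nCS / nS_t) * tf(t, sense_terms)
    return 1.0 - prod

def dpr(sentence, sense_terms, senses):
    # Eq. (11): DPR = possibility + necessity.
    return possibility(sentence, sense_terms) + necessity(sentence, sense_terms, senses)
```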
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Degree of Possibilistic Relevance (DPR)", "sec_num": "4.1" }, { "text": "We compute the ambiguity rate of a polysemous sentence ph using the possibility and necessity values as follows: (i) we index the definitions of all the possible senses of the ambiguous word; (ii) we use the index of each sense as a query; (iii) we evaluate the relevance of the sentence given this query using a possibilistic matching model; and (iv) a sentence is considered very ambiguous if it is relevant for many senses or if it is not relevant for any of them. In other words, the relevance degrees of the sentence for all the senses are almost equal. Therefore, the ambiguity rate is inversely proportional to the standard deviation value:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Ambiguity rate of a polysemous sentence", "sec_num": "4.2" }, { "text": "Ambiguity_rate(ph) = 1 - \u03c3(ph) (12)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Ambiguity rate of a polysemous sentence", "sec_num": "4.2" }, { "text": "Where \u03c3(ph): the standard deviation of the DPR(S i |ph) values corresponding to each sense of the ambiguous word contained in the polysemous sentence ph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Ambiguity rate of a polysemous sentence", "sec_num": "4.2" }, { "text": "\u03c3(ph) = (1/N * \u03a3 i (DPR(S i | ph) - S) 2 ) 1/2 (13)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Ambiguity rate of a polysemous sentence", "sec_num": "4.2" }, { "text": "Where S is the average of the DPR(S i |ph) values and N is the number of possible senses in the dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Ambiguity rate of a polysemous sentence", "sec_num": "4.2" }, { "text": "Let us consider the polysemous word M, which has two senses S 1 and S 2 such that: S 1 is indexed by the three terms {t 1 , t 2 , t 3 } and S 2 is indexed by {t 1 , t 4 , t 5 }. Let us also consider the polysemous sentence ph = (M, t 2 , t 4 , t 5 ), which contains only one polysemous word (M) in order to simplify the calculation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Illustrative example", "sec_num": "4.3" }, { "text": "We have: \u03a0(S 1 |ph) = nf (M, S1) * nf (t2, S1) * nf (t4, S1) * nf (t5, S1) = 0*(1/3)*0*0 = 0, where nf (M, S1) is the normalized frequency of M in the first sense S 1 . Similarly, \u03a0(S 2 |ph) = nf (M, S2) * nf (t2, S2) * nf (t4, S2) * nf (t5, S2) = 0*0*(1/3)*(1/3) = 0. We frequently have \u03a0(S j |ph) = 0, except if all the words of the sentence exist in the index of the sense. On the other hand, we have non-null values of N(S j |ph): N(S 1 |ph) = 1-[(1-\u03c6(S 1 , M))* (1-\u03c6(S 1 , t 2 ))* (1-\u03c6(S 1 , t 4 ))* (1-\u03c6(S 1 , t 5 ))]. Since nf (M, S1) = 0, \u03c6(S 1 , M) = 0; \u03c6(S 1 , t 2 ) = log 10 (2/1)*1/3 = 0.1; \u03c6(S 1 , t 4 ) = log 10 (2/1)*0 = 0; \u03c6(S 1 , t 5 ) = 0. So: N(S 1 |ph) = 1-[(1-0)* (1-0.1)* (1-0)* (1-0)] = 1-[1* 0.9* 1* 1] = 0.1, and DPR(S 1 |ph) = 0.1. N(S 2 |ph) = 1-[(1-\u03c6(S 2 , M))* (1-\u03c6(S 2 , t 2 ))* (1-\u03c6(S 2 , t 4 ))* (1-\u03c6(S 2 , t 5 ))]. With: \u03c6(S 2 , M) = 0 because nf (M, S2) = 0; \u03c6(S 2 , t 2 ) = 0; \u03c6(S 2 , t 4 ) = log 10 (2/1)*1/3 = 0.1; \u03c6(S 2 , t 5 ) = 0.1. So: N(S 2 |ph) = 1-[(1-0)* (1-0)* (1-0.1)* (1-0.1)] = 1-[1* 0.9* 0.9* 1] = 0.19, and DPR(S 2 |ph) = 0.19 > DPR(S 1 |ph). We remark that the polysemous sentence ph is more relevant for S 2 than for S 1 , because it contains two terms of the second sense S 2 (t 4 , t 5 ) and only one term of the sense S 1 (t 2 ). The average is S = (0.1 + 0.19)/2 = 0.145. The standard deviation = (1/2 *((0.1 - 0.145) 2 + (0.19 - 0.145) 2 )) 1/2 = 0.045 and the ambiguity rate = (1 - standard deviation) = 0.955.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Illustrative example", "sec_num": "4.3" }, { "text": "Let us notice in this example that the polysemous sentence ph is very ambiguous, because the two values 0.1 and 0.19 are quite close.
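Continuing the sketch given after equation (11) (reusing tf, possibility, necessity and dpr from there), the ambiguity rate of equations (12)-(13) reproduces the figures of this example up to rounding:

```python
from math import sqrt

def ambiguity_rate(sentence, senses):
    # Eqs. (12)-(13): 1 minus the standard deviation of the DPR values.
    scores = [dpr(sentence, terms, senses) for terms in senses.values()]
    mean = sum(scores) / len(scores)
    sigma = sqrt(sum((x - mean) ** 2 for x in scores) / len(scores))
    return 1.0 - sigma

senses = {'S1': ['t1', 't2', 't3'], 'S2': ['t1', 't4', 't5']}
ph = ['M', 't2', 't4', 't5']
print({s: round(dpr(ph, terms, senses), 2) for s, terms in senses.items()})
# -> {'S1': 0.1, 'S2': 0.19}: the possibility is 0 for both senses,
#    so the necessity component alone ranks S2 above S1.
print(round(ambiguity_rate(ph, senses), 3))  # -> 0.955, a highly ambiguous sentence
```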
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Illustrative example", "sec_num": "4.3" }, { "text": "This section introduces the test collection used in our experiments (cf. section 5.1). To improve our assessment, we performed two types of evaluation in the training step: judgment-based training and dictionary-based training (cf. sections 5.3 and 5.4 respectively). We analyze and interpret our results in section 5.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimentation and results", "sec_num": "5." }, { "text": "We used in our experiments the ROMANSEVAL standard test collection, which provides the necessary tools for WSD, including: (1) a set of documents (taken from the Official Journal of the European Commission); and (2) a list of test sentences including ambiguous words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROMANSEVAL test collection", "sec_num": "5.1" }, { "text": "The set of documents consists of parallel texts in 9 languages from the Official Journal of the European Commission (Series C, 1993). The texts (numbering several thousand) consist of written questions on a wide range of topics and the corresponding responses from the European Commission. The total size of the corpus is approximately 10.2 million words (about 1.1 million words per language), which were collected and prepared within the MULTEXT-MLCC projects [6] . These texts were prepared in order to obtain a standard test collection. The corpus was split into words labeled with, in particular, categorical labels distinguishing nouns N, adjectives A and verbs V. Then the 600 most frequent words (200 N, 200 A, 200 V) were extracted, together with their contexts of occurrence. These words were annotated in parallel by 6 linguistics students, according to the senses of the French dictionary \"Le Petit Larousse\"; each occurrence of a word could receive one sense label, several, or none.
After this first step, the 60 most polysemous words were retained (20 N, 20 A, 20 V).", "cite_spans": [ { "start": 453, "end": 456, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "ROMANSEVAL test collection", "sec_num": "5.1" }, { "text": "The corpus offered to participants in the experiment was therefore made up of 60 words and the 3624 contexts in which they appear, i.e., about 60 occurrences per word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROMANSEVAL test collection", "sec_num": "5.1" }, { "text": "We performed three stages of tests, as explained below. For each test, we prepared an XML Semantic Dictionary of Contexts (SDC). It is built from a training subset of the sentences to be evaluated in the ROMANSEVAL corpus. For each parsed sentence S and given a polysemous word W, we link the words of S with the correct sense of W. The \"correct sense\" may be identified from the tags of the corpus or using context-independent knowledge from the traditional dictionary. Thus, two subset selection methods for building the SDC are described in the following (cf. section 5.3 and section 5.4). To assess our system, we compute the accuracy rate for each word by using the agree and kappa measures [11] [12] . The agree measure is the proportion of test occurrences for which the sense selected by the system matches the sense attributed by the judges: Agree = |{o \u2208 \u03bf : S sys (o) = S jud (o)}| / |\u03bf| (14)", "cite_spans": [ { "start": 677, "end": 681, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 682, "end": 686, "text": "[12]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental scenarios", "sec_num": "5.2" }, { "text": "Where: \u03bf : the set of judged senses corresponding to the test sentences; S sys : the sense selected by the DPR measure (computed by the system); S jud : the sense attributed by the judges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental scenarios", "sec_num": "5.2" }, { "text": "The Kappa measure is based on the difference between how much agreement is actually present (\"observed\" agreement) and how much agreement would be expected by chance alone (\"expected\" agreement), as follows [7] :", "cite_spans": [ { "start": 227, "end": 230, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental scenarios", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Kappa = (P obs - P exp ) / (1 - P exp )", "eq_num": "(15)" } ], "section": "Experimental scenarios", "sec_num": "5.2" }, { "text": "The Kappa measure takes into account the agreement occurring by chance and is considered a more refined value. According to Landis and Koch [8] , Kappa values between 0-0.2 are considered slight, 0.21-0.40 fair, 0.41-0.60 moderate, 0.61-0.80 substantial, and 0.81-1 almost perfect agreement.
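A minimal Python sketch of the agree and kappa computations of equations (14)-(15), assuming one system sense and one judged sense per test occurrence (the data below is illustrative, not taken from ROMANSEVAL):

```python
from collections import Counter

def agree(sys_senses, jud_senses):
    # Eq. (14): proportion of occurrences where system and judges match.
    hits = sum(1 for s, j in zip(sys_senses, jud_senses) if s == j)
    return hits / len(sys_senses)

def kappa(sys_senses, jud_senses):
    # Eq. (15): (P_obs - P_exp) / (1 - P_exp), with the expected agreement
    # P_exp derived from the marginal frequencies of each sense label.
    n = len(sys_senses)
    p_obs = agree(sys_senses, jud_senses)
    c_sys, c_jud = Counter(sys_senses), Counter(jud_senses)
    p_exp = sum(c_sys[s] * c_jud[s] for s in c_sys) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

system = ['s1', 's1', 's2', 's2', 's1']
judges = ['s1', 's2', 's2', 's2', 's1']
print(agree(system, judges))            # 0.8
print(round(kappa(system, judges), 2))  # 0.62: agreement well beyond chance
```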
", "cite_spans": [ { "start": 134, "end": 137, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental scenarios", "sec_num": "5.2" }, { "text": "To fill the XML SDC, we applied ten-fold cross-validation. In each of the 10 iterations, we randomly select 90% of the sentences and enlarge the training semantic dictionary with the voted contexts. The remaining 10% are used for testing, by searching for the most suitable context in the trained data. We apply there the DPR measure described in section 4. Average agree values are presented in the following figures 2, 3 and 4. As a first interpretation of these histograms, we conclude that the more frequent a word is in the corpus and the fewer senses it has, the higher its accuracy rate. Thus, verbs represent the most ambiguous words, because they have fewer occurrences in the corpus. On the other hand, nouns (with some exceptions) are less ambiguous, because they are more frequent. The accuracy rate also depends on the characteristics of the corpus. For example, we discuss the case of \"constitution\", which has a low accuracy rate compared to other nouns. This word has many meanings (\"constitution\" has 6 different meanings: (1) constitution (constitution),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Judgment-based training", "sec_num": "5.3" }, { "text": "(2) mise en place (establishment), (3) incorporation (incorporation), (4) r\u00e8gle (rule), (5) habitude (habit) and (6) code (code)). The legal discussion subjects in ROMANSEVAL articles contribute to increasing the ambiguity of such words (the same interpretation applies to the word \"\u00e9conomie\", meaning: \u00e9conomie (economy), finances (economics), \u00e9pargne (saving, thrift) and \u00e9levage (husbandry)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Judgment-based training", "sec_num": "5.3" }, { "text": "In this training method, senses are associated by the system (no longer relying on human judgments as in the previous method). For each sentence to be evaluated that contains an ambiguous word, one sense is attributed after computing the DPR values of each definition entry in the dictionary \"Le Petit Larousse\". The sense having the greatest DPR is considered the best fit for the sentence. Figure 5 . Adjectives mean agrees for dictionary-based training WSD methods (descending and ascending sentence ambiguity) Figure 6 . Nouns mean agrees for dictionary-based training WSD methods (descending and ascending sentence ambiguity) Figure 7 . Verbs mean agrees for dictionary-based training WSD methods (descending and ascending sentence ambiguity) These experiments confirm that training should start from the most ambiguous sentences and move to the least ambiguous ones (descending ambiguity rate order). We should notice that the low accuracy rates are caused by the system's selection of senses while building the SDC in the training step. However, this constitutes a first attempt at fully automatic WSD. ", "cite_spans": [], "ref_spans": [ { "start": 381, "end": 389, "text": "Figure 5", "ref_id": null }, { "start": 504, "end": 512, "text": "Figure 6", "ref_id": null }, { "start": 622, "end": 630, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Dictionary-based training", "sec_num": "5.4" }, { "text": "This section summarizes and discusses the overall performance of the various performed tests. Figure 8 shows the mean agree rates of the three methods by Part-Of-Speech. We remark that the judgment-based approach performed better than the dictionary-based approaches, because it exploits human knowledge to build the SDC. However, the dictionary-based method is a fully automatic approach which may be used when labeled corpora are unavailable. In this case, it is more suitable to start from the most ambiguous sentences. Then, we compare the performance of the best possibilistic method (judgment-based training) with five other WSD systems participating in the French exercise [6] .
These systems were developed respectively by EPFL (Ecole Polytechnique F\u00e9d\u00e9rale de Lausanne), IRISA (Institut de Recherche en Informatique et Syst\u00e8mes Al\u00e9atoires, Rennes), LIA-BERTIN (Laboratoire d'Informatique, Universit\u00e9 d'Avignon, and BERTIN, Paris), and XRCE (Xerox Research Centre Europe, Grenoble). A comparative study of these systems is available in [6] . Figure 9 shows the values of the agree and Kappa metrics (often used to evaluate WSD approaches) for these five systems and our approach (POSS). According to figure 9, the agree performance of POSS (especially for verbs) is lower than that of the other systems. We should also recognize that the agree metric alone does not provide an accurate evaluation of WSD systems. Studying the agreement between two or more observers should include a statistic that takes into account the fact that observers will sometimes agree or disagree simply by chance [12] . The kappa statistic is the most commonly used statistic for this purpose. When focusing on the results over all Parts-Of-Speech (cf. Figure 10) , our system is distinguished from the other systems by its Kappa value: in spite of having a medium mean agree in comparison with the other systems, the agreement between our system and the judges is not a stroke of chance, according to a moderate Kappa value (0.45). According to the Kappa results, the good agreement performance of the probabilistic WSD systems is due to chance for many words: for example, the mean agree of the word \"pied\" (foot) is about 0.68 while the Kappa measure is under 0.2. Thus, we notice that the possibilistic approach is finer than the probabilistic state-of-the-art systems. This is explained by the fact that the possibility and necessity measures increase the relevance of correct senses and penalize the scores of the remaining ones.", "cite_spans": [ { "start": 664, "end": 667, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 1031, "end": 1034, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 1574, "end": 1578, "text": "[12]", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 94, "end": 102, "text": "Figure 8", "ref_id": "FIGREF9" }, { "start": 1037, "end": 1045, "text": "Figure 9", "ref_id": "FIGREF10" }, { "start": 1713, "end": 1723, "text": "Figure 10)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Discussion and interpretation", "sec_num": "5.5" }, { "text": "We should notice here that disagreement among the human judges who prepared the sense tagging of the ROMANSEVAL benchmark is considerable, according to [9] : Kappa ranges between 0.92 (noun \"detention\") and 0.007 (adjective \"correct\"). In other words, there is no more agreement than chance for some words. If human annotators do not agree much more than chance on many words, it seems that systems producing random sense tags for these words should be considered satisfactory.", "cite_spans": [ { "start": 147, "end": 150, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and interpretation", "sec_num": "5.5" }, { "text": "In this paper, we proposed and evaluated a new possibilistic approach for word sense disambiguation. In fact, in spite of their advantages, traditional dictionaries suffer from a lack of accurate information useful for WSD. Moreover, semantically labeled corpora on which learning methods could be trained are scarce. For these reasons, it became important to use a semantic dictionary of contexts supporting machine learning in a semantic WSD platform.
Our approach combines traditional dictionaries and labeled corpora to build a semantic dictionary, and identifies the sense of a word by using a possibilistic matching model. To evaluate our approach, we used the ROMANSEVAL collection and compared our results to some existing systems. Experiments showed an encouraging improvement in the disambiguation rates of French words. The disambiguation performed better on nouns, as they are the most frequent among the words occurring in the context. These results reveal the contribution of possibility theory, as it provided good accuracy rates in this first experiment. However, our WSD approach needs to be investigated in a practical case of application. Indeed, the long-term goal of our work is to improve the performance of a cross-lingual information retrieval system by introducing a step of query and document disambiguation in a multilingual context. Thus, this work will be extended towards other languages such as English and Arabic. Moreover, our tools and data structures are reusable components that may be integrated in other fields such as information extraction, machine translation, content analysis, word processing, lexicography and semantic Web applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future works", "sec_num": "6." } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Survey of word sense disambiguation approaches", "authors": [ { "first": "X", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "H", "middle": [], "last": "Han", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 18 th International Florida AI Research Society Conference", "volume": "", "issue": "", "pages": "307--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Zhou and H. Han, \"Survey of word sense disambiguation approaches,\" in Proceedings of the 18 th International Florida AI Research Society Conference, Clearwater Beach, Florida, USA, pp. 307-313, 2005.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Word sense disambiguation: A survey", "authors": [ { "first": "R", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2009, "venue": "ACM Computing Surveys (CSUR)", "volume": "41", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Navigli, \"Word sense disambiguation: A survey,\" ACM Computing Surveys (CSUR), vol. 41, no. 2, p. 10, 2009.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Th\u00e9orie des Possibilit\u00e9s : Application \u00e0 la Repr\u00e9sentation des Connaissances en Informatique", "authors": [ { "first": "D", "middle": [], "last": "Dubois", "suffix": "" }, { "first": "H", "middle": [], "last": "Prade", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Dubois and H. Prade, Th\u00e9orie des Possibilit\u00e9s : Application \u00e0 la Repr\u00e9sentation des Connaissances en Informatique.
Paris: MASSON, 1987.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Possibility Theory: An Approach to Computerized Processing", "authors": [ { "first": "D", "middle": [], "last": "Dubois", "suffix": "" }, { "first": "H", "middle": [], "last": "Prade", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Dubois and H. Prade, Possibility Theory: An Approach to Computerized Processing. New York, USA: Plenum Press, 2004.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Towards An Intelligent Possibilistic Web Information Retrieval using Multiagent System", "authors": [ { "first": "B", "middle": [], "last": "Elayeb", "suffix": "" }, { "first": "F", "middle": [], "last": "Evrard", "suffix": "" }, { "first": "M", "middle": [], "last": "Zaghdoud", "suffix": "" }, { "first": "M. Ben", "middle": [], "last": "Ahmed", "suffix": "" } ], "year": 2009, "venue": "International Journal of Interactive Technology and Smart Education", "volume": "6", "issue": "1", "pages": "40--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Elayeb, F. Evrard, M. Zaghdoud, and M. Ben Ahmed, \"Towards An Intelligent Possibilistic Web Information Retrieval using Multiagent System,\" International Journal of Interactive Technology and Smart Education, vol. 6, no. 1, pp. 40-59, 2009.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Framework and Results for French", "authors": [ { "first": "F", "middle": [], "last": "Segond", "suffix": "" } ], "year": 2000, "venue": "Computers and the Humanities", "volume": "34", "issue": "1", "pages": "49--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Segond, \"Framework and Results for French,\" Computers and the Humanities, vol. 34, no. 1, pp. 49-60, 2000.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Understanding interobserver agreement: the kappa statistic", "authors": [ { "first": "A", "middle": [ "J" ], "last": "Viera", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Garrett", "suffix": "" } ], "year": 2005, "venue": "Family Medicine", "volume": "37", "issue": "5", "pages": "360--363", "other_ids": {}, "num": null, "urls": [], "raw_text": "A.J. Viera and J.M. Garrett, \"Understanding interobserver agreement: the kappa statistic,\" Family Medicine, vol. 37, no. 5, pp. 360-363, 2005.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The measurement of observer agreement for categorical data", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Landis", "suffix": "" }, { "first": "G", "middle": [ "G" ], "last": "Koch", "suffix": "" } ], "year": 1977, "venue": "Biometrics", "volume": "33", "issue": "1", "pages": "159--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.R. Landis and G.G. Koch, \"The measurement of observer agreement for categorical data,\" Biometrics, vol. 33, no. 1, pp. 159-174, 1977.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A study of polysemy judgements and inter-annotator agreement", "authors": [ { "first": "J", "middle": [], "last": "V\u00e9ronis", "suffix": "" } ], "year": 1998, "venue": "Programme and advanced papers of the Senseval workshop", "volume": "", "issue": "", "pages": "2--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. V\u00e9ronis, \"A study of polysemy judgements and inter-annotator agreement,\" in Programme and advanced papers of the Senseval workshop, Sussex, England, pp.
2-4, 1998.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Fuzzy Sets as a basis for a theory of Possibility", "authors": [ { "first": "L", "middle": [ "A" ], "last": "Zadeh", "suffix": "" } ], "year": 1978, "venue": "Fuzzy Sets and Systems", "volume": "1", "issue": "1", "pages": "3--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. A. Zadeh, \"Fuzzy Sets as a basis for a theory of Possibility\", Fuzzy Sets and Systems, vol. 1, no. 1, pp. 3-28, 1978.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit", "authors": [ { "first": "J", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1968, "venue": "Psychological Bulletin", "volume": "70", "issue": "4", "pages": "213--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Cohen, \"Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit\", Psychological Bulletin, vol. 70, no. 4, pp. 213-220, 1968.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "On the usage of Kappa to evaluate agreement on coding tasks", "authors": [ { "first": "B", "middle": [], "last": "Di Eugenio", "suffix": "" } ], "year": 2000, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "441--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Di Eugenio, \"On the usage of Kappa to evaluate agreement on coding tasks\", In Proceedings of LREC, Athens, Greece, pp. 441-444, 2000.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French", "authors": [ { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1994, "venue": "The 32 nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "88--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Yarowsky, \"Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French\". In The 32 nd Annual Meeting of the Association for Computational Linguistics, New Mexico, USA, pp. 88-95, 1994.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Word sense disambiguation: The state of the art", "authors": [ { "first": "N", "middle": [], "last": "Ide", "suffix": "" }, { "first": "J", "middle": [], "last": "V\u00e9ronis", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics: Special Issue on Word Sense Disambiguation", "volume": "24", "issue": "1", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Ide and J. V\u00e9ronis, \"Word sense disambiguation: The state of the art\", Computational Linguistics: Special Issue on Word Sense Disambiguation, vol. 24, no. 1, pp. 1-40, 1998.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Outils d'exploration de corpus et d\u00e9sambigu\u00efsation lexicale automatique", "authors": [ { "first": "L", "middle": [], "last": "Audibert", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L.
Audibert, \"Outils d'exploration de corpus et d\u00e9sambigu\u00efsation lexicale automatique\", Ph.D. Thesis, Universit\u00e9 d'Aix-Marseille I - Universit\u00e9 de Provence, 2003.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Sense tagging: Does it make sense?", "authors": [ { "first": "J", "middle": [], "last": "V\u00e9ronis", "suffix": "" } ], "year": 2001, "venue": "Corpus Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. V\u00e9ronis, \"Sense tagging: Does it make sense?\", Corpus Linguistics, Lancaster, United Kingdom, p. 599, 2001.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Les dictionnaires traditionnels sont-ils adapt\u00e9s au traitement du sens en T.A.L. ?", "authors": [ { "first": "J", "middle": [], "last": "V\u00e9ronis", "suffix": "" } ], "year": 2002, "venue": "Journ\u00e9e d'\u00e9tude de l'ATALA, Les dictionnaires \u00e9lectroniques", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. V\u00e9ronis, \"Les dictionnaires traditionnels sont-ils adapt\u00e9s au traitement du sens en T.A.L. ?\", Journ\u00e9e d'\u00e9tude de l'ATALA, Les dictionnaires \u00e9lectroniques, Paris, 2002.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Word sense disambiguation using a dictionary for sense similarity measure", "authors": [ { "first": "B", "middle": [], "last": "Gaume", "suffix": "" }, { "first": "N", "middle": [], "last": "Hathout", "suffix": "" }, { "first": "P", "middle": [], "last": "Muller", "suffix": "" } ], "year": 2004, "venue": "The 20 th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1194--1200", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Gaume, N. Hathout and P. Muller, \"Word sense disambiguation using a dictionary for sense similarity measure\", In The 20 th International Conference on Computational Linguistics, Stroudsburg, PA, USA, pp. 1194-1200, 2004.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Possibilistic logic bases and possibilistic graphs", "authors": [ { "first": "S", "middle": [], "last": "Benferhat", "suffix": "" }, { "first": "D", "middle": [], "last": "Dubois", "suffix": "" }, { "first": "L", "middle": [], "last": "Garcia", "suffix": "" }, { "first": "H", "middle": [], "last": "Prade", "suffix": "" } ], "year": 1999, "venue": "the 15 th Conference on Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "57--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Benferhat, D. Dubois, L. Garcia, and H. Prade, \"Possibilistic logic bases and possibilistic graphs\", In the 15 th Conference on Uncertainty in Artificial Intelligence, Stockholm, Sweden, pp. 57-64, 1999.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Possibilistic Graphical Models", "authors": [ { "first": "C", "middle": [], "last": "Borgelt", "suffix": "" }, { "first": "J", "middle": [], "last": "Gebhardt", "suffix": "" }, { "first": "R", "middle": [], "last": "Kruse", "suffix": "" } ], "year": 1998, "venue": "Computational Intelligence in Data Mining, Proceedings of the 3 rd International Workshop", "volume": "408", "issue": "", "pages": "51--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Borgelt, J. Gebhardt and R.
Kruse, \"Possibilistic Graphical Models\", Computational Intelligence in Data Mining (Proceedings of the 3 rd International Workshop, Udine, Italy, 1998), CISM Courses and Lectures 408, Springer, Wien, Austria, pp. 51-68, 2000.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Towards a Possibilistic Approach for Information Retrieval", "authors": [ { "first": "A", "middle": [], "last": "Brini", "suffix": "" }, { "first": "M", "middle": [], "last": "Boughanem", "suffix": "" }, { "first": "D", "middle": [], "last": "Dubois", "suffix": "" } ], "year": 2004, "venue": "Data and Knowledge Engineering Proceedings EUROFUSE", "volume": "", "issue": "", "pages": "92--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Brini, M. Boughanem, and D. Dubois, \"Towards a Possibilistic Approach for Information Retrieval\", In Data and Knowledge Engineering Proceedings EUROFUSE, Warszawa, Poland, pp. 92-102, 2004.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "M\u00e9thodologie pour la cr\u00e9ation d'un dictionnaire distributionnel dans une perspective d'\u00e9tiquetage lexical semi-automatique", "authors": [ { "first": "D", "middle": [], "last": "Reymond", "suffix": "" } ], "year": 2002, "venue": "Traitement Automatique des Langues", "volume": "1", "issue": "", "pages": "405--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Reymond, \"M\u00e9thodologie pour la cr\u00e9ation d'un dictionnaire distributionnel dans une perspective d'\u00e9tiquetage lexical semi-automatique\", In 6 \u00e8me Rencontre des \u00e9tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues, vol. 1, pp. 405-414, 2002.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Arabic Morphological Analysis and Disambiguation Using a Possibilistic Classifier", "authors": [ { "first": "R", "middle": [], "last": "Ayed", "suffix": "" }, { "first": "I", "middle": [], "last": "Bounhas", "suffix": "" }, { "first": "B", "middle": [], "last": "Elayeb", "suffix": "" }, { "first": "F", "middle": [], "last": "Evrard", "suffix": "" }, { "first": "N", "middle": [], "last": "Bellamine Ben", "suffix": "" }, { "first": "", "middle": [], "last": "Saoud", "suffix": "" } ], "year": 2012, "venue": "The 8 th International Conference on Intelligent Computing", "volume": "7390", "issue": "", "pages": "274--279", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Ayed, I. Bounhas, B. Elayeb, F. Evrard, and N. Bellamine Ben Saoud, \"Arabic Morphological Analysis and Disambiguation Using a Possibilistic Classifier\", In The 8 th International Conference on Intelligent Computing, Huangshan, China, Springer-Verlag Berlin Heidelberg, LNAI 7390, pp. 274-279, 2012.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A Possibilistic Approach for the Automatic Morphological Disambiguation of Arabic Texts", "authors": [ { "first": "R", "middle": [], "last": "Ayed", "suffix": "" }, { "first": "I", "middle": [], "last": "Bounhas", "suffix": "" }, { "first": "B", "middle": [], "last": "Elayeb", "suffix": "" }, { "first": "F", "middle": [], "last": "Evrard", "suffix": "" }, { "first": "N", "middle": [], "last": "Bellamine Ben", "suffix": "" }, { "first": "", "middle": [], "last": "Saoud", "suffix": "" } ], "year": 2012, "venue": "The 13 th International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Ayed, I. Bounhas, B. Elayeb, F.
Evrard, and N. Bellamine Ben Saoud, \"A Possibilistic Approach for the Automatic Morphological Disambiguation of Arabic Texts\", In The 13 th International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, August 08-10, 2012, Kyoto, Japan, IEEE Computer Society, 2012 (to appear).", "links": null } }, "ref_entries": { "FIGREF2": { "type_str": "figure", "uris": null, "text": "Possibilistic network of the WSD approach", "num": null }, "FIGREF6": { "type_str": "figure", "uris": null, "text": "Adjectives mean agrees for the judgment-based training WSD method", "num": null }, "FIGREF7": { "type_str": "figure", "uris": null, "text": "Nouns mean agrees for the judgment-based training WSD method", "num": null }, "FIGREF8": { "type_str": "figure", "uris": null, "text": "Verbs mean agrees for the judgment-based training WSD method", "num": null }, "FIGREF9": { "type_str": "figure", "uris": null, "text": "Mean agree rates of the three possibilistic WSD methods by Part-Of-Speech", "num": null }, "FIGREF10": { "type_str": "figure", "uris": null, "text": "Mean agree and Kappa results by Part-Of-Speech", "num": null }, "FIGREF12": { "type_str": "figure", "uris": null, "text": "Mean agree and Kappa results for all Parts-Of-Speech", "num": null } } } }