[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16" ], "table_ref": [], "text": "Case-based reasoning (CBR [Riesbeck and Schank, 1989]) aims at solving a target problem thanks to a case base. A case represents a previously solved problem and may be seen as a pair (problem, solution). A CBR system selects a case from the case base and then adapts the associated solution, requiring domain-dependent knowledge for adaptation. The goal of adaptation knowledge acquisition (AKA) is to detect and extract this knowledge. This is the function of the semiautomatic system CABAMAKA, which applies principles of knowledge discovery from databases (KDD) to AKA, in particular frequent itemset extraction. This paper presents the system CABAMAKA: its principles, its implementation and an example of adaptation rule discovered in the framework of an application to breast cancer treatment. The originality of CABAMAKA lies essentially in the approach of AKA that uses a powerful learning technique that is guided by a domain expert, according to the spirit of KDD. This paper proposes an original and working approach to AKA, based on KDD techniques. In addition, the KDD process is performed on a knowledge base itself, leading to the extraction of meta-knowledge, i.e. knowledge units for manipulating other knowledge units. This is also one of the rare papers trying to build an effective bridge between knowledge discovery and case-based reasoning.\nThe paper is organized as follows. Section 2 presents basic notions about CBR and adaptation. Section 3 summarizes researches on AKA. Section 4 describes the system CABA-MAKA: its main principles, its implementation and examples of adaptation knowledge acquired from it. Finally, section 5 draws some conclusions and points out future work." }, { "figure_ref": [], "heading": "CBR and Adaptation", "publication_ref": [ "b2" ], "table_ref": [], "text": "A case in a given CBR application encodes a problem-solving episode that is represented by a problem statement pb and an associated solution Sol(pb). The case is denoted by the pair (pb, Sol(pb)) in the following. Let Problems and Solutions be the set of problems and the set of solutions of the application domain, and \"is a solution of\" be a binary relation on Problems × Solutions. In general, this relation is not known in the whole but at least a finite number of its instances (pb, Sol(pb)) is known and constitutes the case base CB. An element of CB is called a source case and is denoted by srce-case = (srce, Sol(srce)), where srce is a source problem. In a particular CBR session, the problem to be solved is called target problem, denoted by tgt.\nA case-based inference associates to tgt a solution Sol(tgt), with respect to the case base CB and to additional knowledge bases, in particular O, the domain ontology (also known as domain theory or domain knowledge) that usually introduces the concepts and terms used to represent the cases. It can be noticed that the research work presented in this paper is based on the assumption that there exists a domain ontology associated with the case base, in the spirit of knowledgeintensive CBR [Aamodt, 1990].\nA classical decomposition of CBR relies on the steps of retrieval and adaptation.\nRetrieval selects (srce, Sol(srce)) ∈ CB such that srce is similar to tgt according to some similarity criterion. The goal of adaptation is to solve tgt by modifying Sol(srce) accordingly. 
Thus, the profile of the adaptation function is

    Adaptation : ((srce, Sol(srce)), tgt) → Sol(tgt)

The work presented hereafter is based on the following model of adaptation, similar to transformational analogy [Carbonell, 1983]:

➀ (srce, tgt) → ∆pb, where ∆pb encodes the similarities and dissimilarities of the problems srce and tgt.

➁ (∆pb, AK) → ∆sol, where AK is the adaptation knowledge and where ∆sol encodes the similarities and dissimilarities of Sol(srce) and the forthcoming Sol(tgt).

➂ (Sol(srce), ∆sol) → Sol(tgt): Sol(srce) is modified into Sol(tgt) according to ∆sol.

Adaptation is generally supposed to be domain-dependent in the sense that it relies on domain-specific adaptation knowledge. Therefore, this knowledge has to be acquired. This is the purpose of adaptation knowledge acquisition (AKA).

3. Related Work in AKA

The notion of adaptation case is introduced in [Leake et al., 1996]. The system DIAL is a case-based planner in the domain of disaster response planning. Disaster response planning is the initial strategic planning used to determine how to assess damage, evacuate victims, etc. in response to natural and man-made disasters such as earthquakes and chemical spills. To adapt a case, the DIAL system performs either a case-based adaptation or a rule-based adaptation. The case-based adaptation attempts to retrieve an adaptation case describing the successful adaptation of a similar previous adaptation problem. An adaptation case represents an adaptation as the combination of transformations (e.g. addition, deletion, substitution) plus memory search for the knowledge needed to operationalize the transformation (e.g. to find what to add or substitute), thus reifying the principle: adaptation = transformations + memory search. An adaptation case in DIAL packages information about the context of an adaptation, the derivation of its solution, and the effort involved in the derivation process. The context information includes characteristics of the problem for which the adaptation was generated, such as the type of problem, the value being adapted, and the roles that value fills in the response plan. The derivation records the operations needed to find appropriate values in memory, e.g. operations to extract role-fillers or other information to guide the memory search process. Finally, the effort records the actual effort expended to find the solution path. It can be noticed that the core idea of "transformation" is also present in our own adaptation knowledge extraction.

In [Jarmulak et al., 2001], an approach to AKA is presented that produces a set of adaptation cases, where an adaptation case is the representation of a particular adaptation process. The adaptation case base, CB_A, is then used for further adaptation steps: an adaptation step is itself based on CBR, reusing the adaptation cases of CB_A. CB_A is built as follows. For each (srce_1, Sol(srce_1)) ∈ CB, the retrieval step of the CBR system, using the case base CB without (srce_1, Sol(srce_1)), returns a case (srce_2, Sol(srce_2)). Then, an adaptation case is built based on both source cases and is added to CB_A. This adaptation case encodes srce_1, Sol(srce_1), the difference between srce_1 and srce_2 (∆pb, with the notations of this paper) and the difference between Sol(srce_1) and Sol(srce_2) (∆sol).
This approach to AKA and CBR has been successfully tested for an application to the design of tablet formulation.

The idea of the research presented in [Hanney and Keane, 1996; Hanney, 1997] is to exploit the variations between source cases to learn adaptation rules. These rules compute variations on solutions from variations on problems. More precisely, ordered pairs (srce-case_1, srce-case_2) of similar source cases are formed. Then, for each of these pairs, the variations between the problems srce_1 and srce_2 and the solutions Sol(srce_1) and Sol(srce_2) are represented (∆pb and ∆sol). Finally, the adaptation rules are learned, using as training set the set of the input-output pairs (∆pb, ∆sol). This approach has been tested in two domains: the estimation of the price of flats and houses, and the prediction of the rise time of a servo mechanism. The experiments have shown that the CBR system using the adaptation knowledge acquired from the automatic AKA system performs better than the CBR system working without adaptation. This research has influenced our work, which is globally based on similar ideas.

[Shiu et al., 2001] proposes a method for case base maintenance that reduces the case base to a set of representative cases together with a set of general adaptation rules. These rules handle the perturbation between representative cases and the other ones. They are generated by a fuzzy decision tree algorithm using the pairs of similar source cases as a training set.

In [Wiratunga et al., 2002], the idea of [Hanney and Keane, 1996] is reused to extend the approach of [Jarmulak et al., 2001]: some learning algorithms (in particular, C4.5) are applied to the adaptation cases of CB_A, to induce general adaptation knowledge.

These approaches to AKA share the idea of exploiting adaptation cases. For some of them ([Jarmulak et al., 2001; Leake et al., 1996]), the adaptation cases themselves constitute the adaptation knowledge (and adaptation is itself a CBR process). For the other ones ([Hanney and Keane, 1996; Shiu et al., 2001; Wiratunga et al., 2002]), as for the approach presented in this paper, the adaptation cases are the input of a learning process.

4. CABAMAKA

We now present the CABAMAKA system for acquiring adaptation knowledge. The CABAMAKA system is at present working in the medical domain of cancer treatment, but it may be reused in other application domains where there exist problems to be solved by a CBR system.

4.1 Principles

CABAMAKA deals with case base mining for AKA. Although the main ideas underlying CABAMAKA are shared with those presented in [Hanney and Keane, 1996], the following features are original. The adaptation knowledge that is mined has to be validated by experts and has to be associated with explanations making it understandable by the user. In this way, CABAMAKA may be considered as a semi-automatic (or interactive) learning system. This is a necessary requirement for the medical domain for which CABAMAKA has been initially designed. Moreover, the system takes into account every ordered pair (srce-case_1, srce-case_2) with srce-case_1 ≠ srce-case_2, leading it to examine n(n − 1) pairs of cases for a case base CB with |CB| = n. In practice, this number may be rather large: in the present application n ≃ 650, so n(n − 1) ≃ 4·10^5.
This is one reason for choosing for this system efficient KDD techniques such as CHARM [Zaki and Hsiao, 2002]. This is different from the approach of [Hanney and Keane, 1996], where only pairs of similar source cases are considered, according to a fixed criterion. In CABAMAKA, there is no similarity criterion on which a selection of pairs of cases to be compared could be carried out. Indeed, the CBR process in CABAMAKA relies on the adaptation-guided retrieval principle [Smyth and Keane, 1996], where only adaptable cases are retrieved. Thus, every pair of cases may be of interest, and two cases may appear to be similar w.r.t. a given point of view and dissimilar w.r.t. another one.

Principles of KDD. The goal of KDD is to discover knowledge from databases, under the supervision of an analyst (an expert of the domain). A KDD session usually relies on three main steps: data preparation, data mining, and interpretation of the extracted pieces of information.

Data preparation is mainly based on formatting and filtering operations. The formatting operations are used to transform the data into a form allowing the application of the chosen data-mining operations. The filtering operations are used for removing noisy data and for focusing the data-mining operation on special subsets of objects and/or attributes.

Data-mining algorithms are applied to extract from the data information units showing some regularities [Hand et al., 2001]. In the present experiment, the CHARM data-mining algorithm, which efficiently performs the extraction of frequent closed itemsets (FCIs), has been used [Zaki and Hsiao, 2002]. CHARM inputs a formal database, i.e. a set of binary transactions, where each transaction T is a set of binary items. An itemset I is a set of items, and the support of I, support(I), is the proportion of transactions T of the database possessing I (I ⊆ T). I is frequent, with respect to a threshold σ ∈ [0; 1], whenever support(I) ≥ σ. I is closed if it has no proper superset J (I ⊊ J) with the same support.

The interpretation step aims at interpreting the extracted pieces of information, i.e. the FCIs in the present case, with the help of the analyst. In this way, the interpretation step produces new knowledge units (e.g. rules).

The CABAMAKA system relies on these main KDD steps as explained below.

Formatting. The formatting step of CABAMAKA inputs the case base CB and outputs a set of transactions obtained from the pairs (srce-case_1, srce-case_2). It is composed of two substeps. During the first substep, each srce-case = (srce, Sol(srce)) ∈ CB is formatted into two sets of boolean properties: Φ(srce) and Φ(Sol(srce)). The computation of Φ(srce) consists in translating srce from the problem representation formalism to 2^P, P being a set of boolean properties. Some information may be lost during this translation, for example when translating a continuous property into a set of boolean properties, but this loss has to be minimized. This translation formats an expression srce, expressed in the framework of the domain ontology O, into an expression Φ(srce) that will be manipulated as data, i.e. without the use of a reasoning process. Therefore, in order to minimize the translation loss, it is assumed that

    if p ∈ Φ(srce) and p ⊨_O q then q ∈ Φ(srce)    (1)

for each p, q ∈ P (where p ⊨_O q stands for "q is a consequence of p in the ontology O").
In other words, Φ(srce) is assumed to be deductively closed, given O, in the set P. The same assumption is made for Φ(Sol(srce)). How this first substep of formatting is computed in practice depends heavily on the representation formalism of the cases and is presented, for our application, in section 4.2.

The second substep of formatting produces a transaction T = Φ((srce-case_1, srce-case_2)) for each ordered pair of distinct source cases, based on the sets of items Φ(srce_1), Φ(srce_2), Φ(Sol(srce_1)) and Φ(Sol(srce_2)). Following the model of adaptation presented in section 2 (items ➀, ➁ and ➂), T has to encode the properties of ∆pb and ∆sol. ∆pb encodes the similarities and dissimilarities of srce_1 and srce_2, i.e.:

• the properties common to srce_1 and srce_2 (marked by "=");
• the properties of srce_1 that srce_2 does not share (marked by "−"); and
• the properties of srce_2 that srce_1 does not share (marked by "+").

All these properties are related to problems and thus are marked by "pb". ∆sol is computed in a similar way, and T = ∆pb ∪ ∆sol. For example, if

    Φ(srce_1) = {a, b, c}    Φ(Sol(srce_1)) = {A, B}
    Φ(srce_2) = {b, c, d}    Φ(Sol(srce_2)) = {B, C}

then

    T = {a^-_pb, b^=_pb, c^=_pb, d^+_pb, A^-_sol, B^=_sol, C^+_sol}    (2)

More generally:

    T = {p^-_pb | p ∈ Φ(srce_1) \ Φ(srce_2)}
      ∪ {p^=_pb | p ∈ Φ(srce_1) ∩ Φ(srce_2)}
      ∪ {p^+_pb | p ∈ Φ(srce_2) \ Φ(srce_1)}
      ∪ {p^-_sol | p ∈ Φ(Sol(srce_1)) \ Φ(Sol(srce_2))}
      ∪ {p^=_sol | p ∈ Φ(Sol(srce_1)) ∩ Φ(Sol(srce_2))}
      ∪ {p^+_sol | p ∈ Φ(Sol(srce_2)) \ Φ(Sol(srce_1))}

Filtering. The filtering operations may take place before, between and after the formatting substeps, and also after the mining step. They are guided by the analyst.

Mining. The extraction of FCIs is computed thanks to CHARM (in fact, thanks to a tool based on a CHARM-like algorithm) from the set of transactions. A transaction T = Φ((srce-case_1, srce-case_2)) encodes a specific adaptation ((srce_1, Sol(srce_1)), srce_2) → Sol(srce_2). For example, consider the following FCI:

    I = {a^-_pb, c^=_pb, d^+_pb, A^-_sol, B^=_sol, C^+_sol}    (3)

I can be considered as a generalization of a subset of the transactions, including the transaction T of equation (2): I ⊆ T. The interpretation of this FCI as an adaptation rule is explained below.

Interpretation. The interpretation step is supervised by the analyst. The CABAMAKA system provides the analyst with the extracted FCIs and facilities for navigating among them. The analyst may select an FCI, say I, and interpret I as an adaptation rule. For example, the FCI in equation (3) may be interpreted in the following terms:

    if a is a property of srce but is not a property of tgt,
       c is a property of both srce and tgt,
       d is not a property of srce but is a property of tgt,
       A and B are properties of Sol(srce), and
       C is not a property of Sol(srce)
    then the properties of Sol(tgt) are Φ(Sol(tgt)) = (Φ(Sol(srce)) \ {A}) ∪ {C}.

This rule has to be translated from the formalism 2^P (sets of boolean properties) to the formalism of the adaptation rules of the CBR system. The result is an adaptation rule, i.e. a rule whose left part represents conditions on srce, Sol(srce) and tgt and whose right part represents a way to compute Sol(tgt).
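To make the two formatting substeps and the mining step concrete, here is a small sketch in Python. It is an illustration only: the function names and the encoding of items as (property, mark, role) triples are ours, and the closure-by-intersection miner is a deliberately naive stand-in for CHARM, which computes the same collection of FCIs far more efficiently.

```python
def make_transaction(phi_pb1, phi_sol1, phi_pb2, phi_sol2):
    """Encode an ordered pair of source cases as a transaction, as in
    equation (2). Items are (property, mark, role) triples."""
    def diff(s1, s2, role):
        return ({(p, '-', role) for p in s1 - s2}
                | {(p, '=', role) for p in s1 & s2}
                | {(p, '+', role) for p in s2 - s1})
    return frozenset(diff(phi_pb1, phi_pb2, 'pb')
                     | diff(phi_sol1, phi_sol2, 'sol'))

def frequent_closed_itemsets(transactions, sigma):
    """Naive FCI extraction: the nonempty closed itemsets are exactly the
    intersections of nonempty sets of transactions, i.e. the closure of the
    transaction set under pairwise intersection. We enumerate them all and
    keep the frequent ones; CHARM avoids this exponential blow-up."""
    n = len(transactions)
    closed = {frozenset(t) for t in transactions}
    while True:
        new = {a & b for a in closed for b in closed} - closed - {frozenset()}
        if not new:
            break
        closed |= new
    result = {}
    for c in closed:
        support = sum(1 for t in transactions if c <= t) / n
        if support >= sigma:
            result[c] = support
    return result

# The pair of cases behind equations (2) and (3):
t = make_transaction({'a', 'b', 'c'}, {'A', 'B'}, {'b', 'c', 'd'}, {'B', 'C'})
```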
The role of the analyst is then to correct and validate the adaptation rule obtained this way and to associate an explanation with it. The analyst is helped in this task by the domain ontology O, which is useful for organizing the FCIs, and by the already available adaptation knowledge, which is useful for pruning from the FCIs those that express already known adaptation knowledge.

4.2 Implementation

The CABAMAKA discovery process relies on the steps described in the previous section: (s_1) input the case base, (s_2) select a subset of it (or take the whole case base): first filtering step, (s_3) first formatting substep, (s_4) second filtering step, (s_5) second formatting substep, (s_6) third filtering step, (s_7) data mining (CHARM), (s_8) last filtering step and (s_9) interpretation. This process is interactive and iterative: the analyst runs each of the steps (s_i) (and can interrupt it), and can go back to a previous step at any moment. Among these steps, only the first ones ((s_1) to (s_3)) and the last one depend on the representation formalism. In the following, the step (s_3) is illustrated in the context of an application. First, some elements on the application itself and the associated knowledge representation formalism are introduced.

Application domain. The application domain of the CBR system we are developing is breast cancer treatment: in this application, a problem pb describes a class of patients with a set of attributes and associated constraints (holding on the age of the patient, the size and the localization of the tumor, etc.). A solution Sol(pb) of pb is a set of therapeutic decisions (in surgery, chemotherapy, radiotherapy, etc.).

Two features of this application must be pointed out. First, the source cases are general cases (or ossified cases, according to the terminology of [Riesbeck and Schank, 1989]): a source case corresponds to a class of patients and not to a single one. These source cases are obtained from statistical studies in the cancer domain. Second, the requested behavior of the CBR system is to provide a treatment and explanations of this treatment proposal. This is why the analyst is required to associate an explanation with a discovered adaptation rule.

Representation of cases and of the domain ontology O. The problems, the solutions, and the domain ontology of the application are represented in a light extension of OWL DL (the Web Ontology Language recommended by the W3C [Staab and Studer, 2004]). The parts of the underlying description logic that are useful for this paper are presented below (other elements on description logics, DLs, may be found in [Staab and Studer, 2004]). Let us consider the following example:

    srce ≡ Patient ⊓ ∃age.≥_45 ⊓ ∃age.<_70
           ⊓ ∃tumor.(∃size.≥_4 ⊓ ∃localization.Left-Breast)    (4)

srce represents the class of patients with an age a ∈ [45; 70[ and a tumor of size S ≥ 4 centimeters localized in the left breast. The DL representation entities used here are atomic and defined concepts (e.g. srce, Patient and ∃age.≥_45), roles (e.g. tumor and localization), concrete roles (e.g. age and size) and constraints (e.g. ≥_45 and <_70). A concept C is an expression representing a class of objects. A role r is a name representing a binary relation between objects. A concrete role g is a name representing a function associating a real number to an object (for this simplified presentation, the only concrete domain that is considered is (ℝ, ≤), the ordered set of real numbers).
A constraint c represents a subset of ℝ denoted by c^ℝ. For example, intervals such as ≥_45^ℝ = [45; +∞[ and <_70^ℝ = ]−∞; 70[ introduce constraints that are used in the application.

A concept is either atomic (a concept name) or defined. A defined concept is an expression of one of the following forms: C ⊓ D, ∃r.C or ∃g.c, where C and D are concepts, r is a role, g is a concrete role and c is a constraint (many other constructions exist in the DL, but only these three are used here). Following classical DL presentations [Staab and Studer, 2004], an ontology O is a set of axioms, where an axiom is a formula of the form C ⊑ D (general concept inclusion) or of the form C ≡ D, where C and D are two concepts.

The semantics of the DL expressions used hereafter can be read as follows. An interpretation is a pair I = (∆^I, ·^I) where ∆^I is a non-empty set (the interpretation domain) and ·^I is the interpretation function, which maps a concept C to a set C^I ⊆ ∆^I, a role r to a binary relation r^I ⊆ ∆^I × ∆^I, and a concrete role g to a function g^I : ∆^I → ℝ. In the following, all roles r are assumed to be functional: ·^I maps r to a function r^I : ∆^I → ∆^I. The interpretation of the defined concepts, for an interpretation I, is as follows: (C ⊓ D)^I = C^I ∩ D^I, (∃r.C)^I is the set of objects x ∈ ∆^I such that r^I(x) ∈ C^I, and (∃g.c)^I is the set of objects x ∈ ∆^I such that g^I(x) ∈ c^ℝ. An interpretation I is a model of an axiom C ⊑ D (resp. C ≡ D) if C^I ⊆ D^I (resp. C^I = D^I). I is a model of an ontology O if it is a model of each axiom of O. The inference associated with this representation formalism that is used below is the subsumption test: given an ontology O, a concept C is subsumed by a concept D, denoted by O ⊨ C ⊑ D, if for every model I of O, C^I ⊆ D^I.

More practically, the problems of the CBR application are represented by concepts (as srce in (4)). A therapeutic decision dec is also represented by a concept. A solution is a finite set {dec_1, dec_2, ..., dec_k} of decisions. The decisions of the system are represented by atomic concepts. The knowledge associated with atomic concepts (and hence, with therapeutic decisions) is given by axioms of the domain ontology O. For example, the decision in surgery dec = Partial-Mastectomy, which represents a partial ablation of the breast, is described by:

    Partial-Mastectomy ⊑ Mastectomy
    Mastectomy ⊑ Surgery    (5)
    Surgery ⊑ Therapeutic-Decision

Implementation of the first formatting substep (s_3). Both problems and the decisions constituting solutions are represented by concepts. Thus, computing Φ(srce) and Φ(Sol(srce)) amounts to the computation of Φ(C), C being a concept. A property p is an element of the finite set P (see section 4.1). In the DL formalism, p is represented by a concept P. A concept C has the property p if O ⊨ C ⊑ P. The set of boolean properties and the set of the corresponding concepts are both denoted by P in the following. Given P, Φ(C) is simply defined as the set of properties P ∈ P that C has:

    Φ(C) = {P ∈ P | O ⊨ C ⊑ P}    (6)

As a consequence, if P ∈ Φ(C), Q ∈ P and O ⊨ P ⊑ Q, then Q ∈ Φ(C); thus the implication (1) is satisfied. The algorithm implemented for the first formatting substep first computes the sets Φ(C) for each C among the source problems and the decisions occurring in their solutions, and then computes P as the union of the Φ(C)'s.
This algorithm relies on the following set of equations:

    Φ(A) = {B | B is an atomic concept occurring in KB and O ⊨ A ⊑ B}
    Φ(C ⊓ D) = Φ(C) ∪ Φ(D)
    Φ(∃r.C) = {∃r.P | P ∈ Φ(C)}
    Φ(∃g.c) = {∃g.d | d ∈ Cstraints_g and c^ℝ ⊆ d^ℝ}
    Cstraints_g = {c | the expression ∃g.c occurs in KB}

where A is an atomic concept, C and D are (either atomic or defined) concepts, r is a role, g is a concrete role, c is a constraint and KB, the knowledge base, is the union of the case base and of the domain ontology.

It can be proven that the algorithm for the first formatting substep (computing the Φ(C)'s and the set of properties P) respects (6) under the following hypotheses. First, the constructions used in the DL are the ones that have been introduced above (C ⊓ D, ∃r.C and ∃g.c, where r is functional). Second, no defined concept may strictly subsume an atomic concept (for every atomic concept A, there is no defined concept C such that O ⊨ A ⊑ C and O ⊭ A ≡ C). Under these hypotheses, (6) can be proven by recursion on the size of C (this size being the number of constructions that C contains). These hypotheses hold for our application. However, an ongoing study aims at finding an algorithm for computing the Φ(C)'s and P in a more expressive DL, including in particular negation and disjunction of concepts.

For example, let srce be the problem introduced by the axiom (4). It is assumed that the constraints associated with the concrete role age in KB are <_30, ≥_30, <_45, ≥_45, <_70 and ≥_70, that the constraints associated with the concrete role size in KB are <_4 and ≥_4, that there is no concept A ≠ Patient in KB such that O ⊨ Patient ⊑ A, and that the only concept A ≠ Left-Breast of KB such that O ⊨ Left-Breast ⊑ A is A = Breast. Then, the implemented algorithm returns:

    Φ(srce) = {Patient, ∃age.≥_30, ∃age.≥_45, ∃age.<_70,
               ∃tumor.∃size.≥_4, ∃tumor.∃localization.Left-Breast,
               ∃tumor.∃localization.Breast}

and the 7 elements of Φ(srce) are added to P. Another example, based on the set of axioms (5), is:

    Φ(Partial-Mastectomy) = {Partial-Mastectomy, Mastectomy,
                             Surgery, Therapeutic-Decision}

4.3 Results

The CABAMAKA process piloted by the analyst produces a set of FCIs. With n = 647 cases and σ = 10%, CABAMAKA has produced 2344 FCIs in about 2 minutes (on a current PC). Only the FCIs with at least a "+" or a "−" item in both the problem properties and the solution properties were kept, which corresponds to 208 FCIs. Each of these FCIs I is presented to the analyst for interpretation in a simplified form, obtained by removing some of the items that can be deduced from the ontology. In particular, if P^=_pb ∈ I, Q^=_pb ∈ I and O ⊨ P ⊑ Q, then Q^=_pb is removed from I. For example, if P = (∃age.≥_45) ∈ P, Q = (∃age.≥_30) ∈ P and (∃age.≥_45)^=_pb ∈ I then, necessarily, (∃age.≥_30)^=_pb ∈ I, which is a redundant piece of information.

The following FCI has been extracted by CABAMAKA:

    I = {(∃age.<_70)^=_pb,
         (∃tumor.∃size.<_4)^-_pb, (∃tumor.∃size.≥_4)^+_pb,
         Curettage^=_sol, Mastectomy^=_sol,
         Partial-Mastectomy^-_sol, Radical-Mastectomy^+_sol}

It has been interpreted in the following way: if srce and tgt both represent classes of patients of less than 70 years old, if the difference between srce and tgt lies in the tumor size of the patients (less than 4 cm for those of srce and more than 4 cm for those of tgt), and if a partial mastectomy and a curettage of the lymph nodes are proposed for srce, then Sol(tgt) is obtained by substituting, in Sol(srce), the partial mastectomy by a radical one.

It must be noticed that this example has been chosen for its simplicity: other adaptation rules have been extracted that are less easy to understand. More substantial experiments have to be carried out for an effective evaluation.

The choice of considering every pair of distinct source cases can be discussed. Another version of CABAMAKA has been tested that considers only similar source cases, as in [Hanney and Keane, 1996]: only the pairs of source cases such that |Φ(srce_1) ∩ Φ(srce_2)| ≥ k were considered (experimented with k = 1 to k = 10). These first experiments have not yet shown any improvement in the results compared to the version without this constraint (k = +∞), and they involve the necessity of fixing the threshold k.
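As an illustration of the simplification applied before an FCI is presented to the analyst, here is a small sketch in the same conventions as the earlier one. The predicate subsumes(p, q) is a hypothetical oracle for the subsumption test O ⊨ P ⊑ Q; in CABAMAKA this would be delegated to a DL reasoner over the ontology, and the function name is ours.

```python
def simplify_fci(fci, subsumes):
    """Remove redundant items from an FCI before presentation: if P and Q
    both occur marked '=' on the problem side and O |= P ⊑ Q (i.e.
    subsumes(p, q) holds), then Q's item is implied by P's and is dropped."""
    redundant = {(q, '=', 'pb')
                 for (p, mp, rp) in fci
                 for (q, mq, rq) in fci
                 if mp == mq == '=' and rp == rq == 'pb'
                 and p != q and subsumes(p, q)}
    return frozenset(fci) - redundant

# Example: with subsumes('age>=45', 'age>=30') returning True, the item
# ('age>=30', '=', 'pb') is dropped from an FCI containing both items.
```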
5. Conclusion and Future Work

The CABAMAKA system presented in this paper is inspired by the research of Kathleen Hanney and Mark T. Keane [Hanney and Keane, 1996] and by the principles of KDD, for the purpose of semi-automatic adaptation knowledge acquisition. It reuses an FCI extraction tool developed in our team and based on a CHARM-like algorithm. Although implemented for a specific application to breast cancer treatment decision support, it has been designed to be reusable for other CBR applications: only a few modules of CABAMAKA depend on the formalism of the cases and of the domain ontology, and this formalism, OWL DL, is a well-known standard.

One element of future work consists in searching for ways of simplifying the presentation of the numerous extracted FCIs to the analyst. This involves an organization of these FCIs for the purpose of navigation among them. Such an organization can be a hierarchy of FCIs according to their specificities, or a clustering of the FCIs into themes.

A second piece of future work, still for the purpose of helping the analyst, is to study the algebraic structure of the set of all possible adaptation rules equipped with the operation of composition: r is a composition of r_1 and r_2 if adapting (srce, Sol(srce)) to solve tgt thanks to r gives the same solution Sol(tgt) as (1) solving a problem pb by adaptation of (srce, Sol(srce)) thanks to r_1 and then (2) solving tgt by adaptation of (pb, Sol(pb)) thanks to r_2. The idea is to find a smallest family F of adaptation rules such that the closure of F under composition contains the set S of the extracted adaptation rules expressed in the form of FCIs. It is hoped that F is much smaller than S, and so requires less effort from the analyst while corresponding to the same adaptation knowledge.

Another study on AKA for our CBR system was AKA from experts (based on the analysis of the adaptations performed by the experts). This AKA has led to a few adaptation rules and also to adaptation patterns, i.e. general strategies for case-based decision support that are associated with explanations but that need to be instantiated to become operational. A third piece of future work is mixed AKA, that is, a combined use of the adaptation patterns and of the adaptation rules extracted by CABAMAKA: the idea is to try to instantiate the former by the latter in order to obtain a set of human-understandable and operational adaptation rules.
[ { "authors": " Aamodt", "journal": "", "ref_id": "b0", "title": "", "year": "1990" }, { "authors": "A Aamodt", "journal": "", "ref_id": "b1", "title": "Knowledge-Intensive Case-Based Reasoning and Sustained Learning", "year": "1990-08" }, { "authors": " Carbonell", "journal": "", "ref_id": "b2", "title": "", "year": "1983" }, { "authors": "J G Carbonell", "journal": "", "ref_id": "b3", "title": "Learning by analogy: Formulating and generalizing plans from past experience", "year": "" }, { "authors": "Inc Morgan Kaufmann", "journal": "", "ref_id": "b4", "title": "", "year": "1983" }, { "authors": " Hand", "journal": "", "ref_id": "b5", "title": "", "year": "2001" }, { "authors": "D Hand; H Mannila; P Smyth", "journal": "The MIT Press", "ref_id": "b6", "title": "Principles of Data Mining", "year": "2001" }, { "authors": "Keane Hanney", "journal": "", "ref_id": "b7", "title": "", "year": "1996" }, { "authors": "K Hanney; M T Keane", "journal": "Springer Verlag", "ref_id": "b8", "title": "Learning Adaptation Rules From a Case-Base", "year": "1996" }, { "authors": " Hanney", "journal": "", "ref_id": "b9", "title": "", "year": "1997" }, { "authors": "K Hanney", "journal": "", "ref_id": "b10", "title": "Learning Adaptation Rules from Cases", "year": "1997" }, { "authors": " Jarmulak", "journal": "", "ref_id": "b11", "title": "", "year": "2001" }, { "authors": "J Jarmulak; S Craw; R Rowe", "journal": "", "ref_id": "b12", "title": "Using Case-Base Data to Learn Adaptation Knowledge for Design", "year": "" }, { "authors": "Inc Morgan Kaufmann", "journal": "", "ref_id": "b13", "title": "", "year": "2001" }, { "authors": " Leake", "journal": "", "ref_id": "b14", "title": "", "year": "1996" }, { "authors": "D B Leake; A Kinley; D C Wilson", "journal": "", "ref_id": "b15", "title": "Acquiring Case Adaptation Knowledge: A Hybrid Approach", "year": "1996" }, { "authors": "Schank Riesbeck", "journal": "", "ref_id": "b16", "title": "", "year": "1989" }, { "authors": "C K Riesbeck; R C Schank", "journal": "Lawrence Erlbaum Associates, Inc", "ref_id": "b17", "title": "Inside Case-Based Reasoning", "year": "1989" }, { "authors": " Shiu", "journal": "", "ref_id": "b18", "title": "", "year": "2001" }, { "authors": "S C K Shiu; Daniel S Yeung; C Hung Sun; X Wang", "journal": "Computational Intelligence", "ref_id": "b19", "title": "Transferring Case Knowledge to Adaptation Knowledge: An Approach for Case-Base Maintenance", "year": "2001" }, { "authors": "Keane Smyth", "journal": "", "ref_id": "b20", "title": "", "year": "1996" }, { "authors": "B Smyth; M T Keane", "journal": "Knowledge-Based Systems", "ref_id": "b21", "title": "Using adaptation knowledge to retrieve and adapt design cases", "year": "1996" }, { "authors": "Studer Staab", "journal": "", "ref_id": "b22", "title": "", "year": "2004" }, { "authors": "", "journal": "Springer", "ref_id": "b23", "title": "Handbook on Ontologies", "year": "2004" }, { "authors": " Wiratunga", "journal": "", "ref_id": "b24", "title": "", "year": "2002" }, { "authors": "N Wiratunga; S Craw; R Rowe", "journal": "", "ref_id": "b25", "title": "Learning to Adapt for Case-Based Design", "year": "2002" }, { "authors": "Hsiao Zaki", "journal": "", "ref_id": "b26", "title": "", "year": "2002" }, { "authors": "M J Zaki; C.-J Hsiao", "journal": "", "ref_id": "b27", "title": "CHARM: An Efficient Algorithm for Closed Itemset Mining", "year": "2002-04" } ]
[ { "formula_coordinates": [ 1, 326.16, 627.93, 220.58, 9.96 ], "formula_id": "formula_0", "formula_text": "Adaptation : ((srce, Sol(srce)), tgt) → Sol(tgt)" }, { "formula_coordinates": [ 3, 315, 431.37, 243.08, 61.5 ], "formula_id": "formula_1", "formula_text": "Φ(T ) = ∆pb ∪ ∆sol. For example, if Φ(srce 1 ) = {a, b, c} Φ(Sol(srce 1 )) = {A, B} Φ(srce 2 ) = {b, c, d} Φ(Sol(srce 2 )) = {B, C} then T = a - pb , b = pb , c = pb , d + pb , A - sol , B = sol , C + sol (2)" }, { "formula_coordinates": [ 3, 331.08, 517.87, 210.78, 88.88 ], "formula_id": "formula_2", "formula_text": "T = {p - pb | p ∈ Φ(srce 1 )\\Φ(srce 2 )} ∪ {p = pb | p ∈ Φ(srce 1 ) ∩ Φ(srce 2 )} ∪ {p + pb | p ∈ Φ(srce 2 )\\Φ(srce 1 )} ∪ {p - sol | p ∈ Φ(Sol(srce 1 ))\\Φ(Sol(srce 2 ))} ∪ {p = sol | p ∈ Φ(Sol(srce 1 )) ∩ Φ(Sol(srce 2 ))} ∪ {p + sol | p ∈ Φ(Sol(srce 2 ))\\Φ(Sol(srce 1 ))}" }, { "formula_coordinates": [ 4, 101.64, 95.59, 195.44, 12.56 ], "formula_id": "formula_3", "formula_text": "I = a - pb , c = pb , d + pb , A - sol , B = sol , C + sol(3)" }, { "formula_coordinates": [ 5, 54, 77.3, 243.13, 56.11 ], "formula_id": "formula_4", "formula_text": "(C ⊓ D) I = C I ∩ D I , (∃r.C) I is the set of objects x ∈ ∆ I such that r I (x) ∈ C I and (∃g.c) I is the set of objects x ∈ ∆ I such that g I (x) ∈ c R . An interpretation I is a model of an axiom C ⊑ D (resp. C ≡ D) if C I ⊆ D I (resp. C I = D I ). I is a model of an ontology O if it is a model" }, { "formula_coordinates": [ 5, 69.48, 292.29, 227.6, 37.8 ], "formula_id": "formula_5", "formula_text": "Partial-Mastectomy ⊑ Mastectomy Mastectomy ⊑ Surgery (5) Surgery ⊑ Therapeutic-Decision" }, { "formula_coordinates": [ 5, 116.16, 470.25, 180.92, 9.96 ], "formula_id": "formula_6", "formula_text": "Φ(C) = {P ∈ P | O C ⊑ P}(6)" }, { "formula_coordinates": [ 5, 91.32, 568.48, 72.28, 15.05 ], "formula_id": "formula_7", "formula_text": "Φ(A) = B B is" }, { "formula_coordinates": [ 5, 315, 319.53, 243.04, 20.88 ], "formula_id": "formula_8", "formula_text": "A = Left-Breast of KB such that O Left-Breast ⊑ A is A = Breast." }, { "formula_coordinates": [ 5, 315, 646.87, 77.74, 11.06 ], "formula_id": "formula_9", "formula_text": "I = {(∃age. < 70 ) =" } ]
Case Base Mining for Adaptation Knowledge Acquisition
In case-based reasoning, the adaptation of a source case in order to solve the target problem is at the same time crucial and difficult to implement. The reason for this difficulty is that, in general, adaptation strongly depends on domain-dependent knowledge. This fact motivates research on adaptation knowledge acquisition (AKA). This paper presents an approach to AKA based on the principles and techniques of knowledge discovery from databases and data-mining. It is implemented in CABAMA-KA, a system that explores the variations within the case base to elicit adaptation knowledge. This system has been successfully tested in an application of case-based reasoning to decision support in the domain of breast cancer treatment.
M D'aquin; F Badra; S Lafrogne; J Lieber; A Napoli; L Szathmary
[ { "figure_caption": "srce ≡ Patient ⊓ ∃age.≥ 45 ⊓ ∃age.< 70 ⊓ ∃tumor.(∃size.≥ 4 ⊓ ∃localization.Left-Breast) (4) srce represents the class of patients with an age a ∈ [45; 70[, and a tumor of size S ≥ 4 centimeters localized in the left breast.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "of each axiom of O. The inference associated with this representation formalism that is used below is the subsumption test: given an ontology O, a concept C is subsumed by a concept D, denoted by O C ⊑ D, if for every model I of O, C I ⊆ D I .", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "an atomic concept occurring in KB and O A ⊑ B Φ(C ⊓ D) = Φ(C) ∪ Φ(D) Φ(∃r.C) = {∃r.P | P ∈ Φ(C)} Φ(∃g.c) = ∃g.d d ∈ Cstraints g and c R ⊆ d R Cstraints g = {c | the expression ∃g.c occurs in KB}", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Then, the implemented algorithm returns: Φ(srce) = {Patient, ∃age.≥ 30 , ∃age.≥ 45 , ∃age.< 70 , ∃tumor.∃size.≥ 4 , ∃tumor.∃localization.Left-Breast ∃tumor.∃localization.Breast} And the 7 elements of Φ(srce) are added to P. Another example, based on the set of axioms (5) is: Φ(Partial-Mastectomy) = {Partial-Mastectomy, Mastectomy, Surgery, Therapeutic-Decision}", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "It has been interpreted in the following way: if srce and tgt both represent classes of patients of less than 70 years old, if the difference between srce and tgt lies in the tumor size of the patients-less than 4 cm for the ones of srce and more than 4 cm for the ones of tgt-and if a partial mastectomy and a curettage of the lymph nodes are proposed for the srce, then Sol(tgt) is obtained by substituting in Sol(srce) the partial mastectomy by a radical one.", "figure_data": "pb ,(∃tumor.∃size. < 4 ) -pb , (∃tumor.∃size. ≥ 4 ) + pb ,Curettage = sol , Mastectomy = sol ,Partial-Mastectomy -sol , Radical-Mastectomy + sol }", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b3" ], "table_ref": [], "text": "Imagine that you are trying to solve some constraint-satisfaction problem, or csp. In the interests of de niteness, I will suppose that the csp in question involves coloring a map of the United States subject to the restriction that adjacent states be colored di erently.\nImagine we begin by coloring the states along the Mississippi, thereby splitting the remaining problem in two. We now begin to color the states in the western half of the country, coloring perhaps half a dozen of them before deciding that we are likely to be able to color the rest. Suppose also that the last state colored was Arizona.\nAt this point, we change our focus to the eastern half of the country. After all, if we can't color the eastern half because of our coloring choices for the states along the Mississippi, there is no point in wasting time completing the coloring of the western states.\nWe successfully color the eastern states and then return to the west. Unfortunately, we color New Mexico and Utah and then get stuck, unable to color (say) Nevada. What's more, backtracking doesn't help, at least in the sense that changing the colors for New Mexico and Utah alone does not allow us to proceed farther. Depth-rst search would now have us backtrack to the eastern states, trying a new color for (say) New York in the vain hope that this would solve our problems out West. This is obviously pointless; the blockade along the Mississippi makes it impossible for New York to have any impact on our attempt to color Nevada or other western states. What's more, we are likely to examine every possible coloring of the eastern states before addressing the problem that is actually the source of our di culties.\nThe solutions that have been proposed to this involve nding ways to backtrack directly to some state that might actually allow us to make progress, in this case Arizona or earlier. Dependency-directed backtracking (Stallman & Sussman, 1977) involves a direct backtrack to the source of the di culty; backjumping (Gaschnig, 1979) avoids the computational overhead of this technique by using syntactic methods to estimate the point to which backtrack is necessary. c 1993 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.\nIn both cases, however, note that although we backtrack to the source of the problem, we backtrack over our successful solution to half of the original problem, discarding our solution to the problem of coloring the states in the East. And once again, the problem is worse than this { after we recolor Arizona, we are in danger of solving the East yet again before realizing that our new choice for Arizona needs to be changed after all. We won't examine every possible coloring of the eastern states, but we are in danger of rediscovering our successful coloring an exponential number of times. This hardly seems sensible; a human problem solver working on this problem would simply ignore the East if possible, returning directly to Arizona and proceeding. 
Only if the states along the Mississippi needed new colors would the East be reconsidered, and even then only if no new coloring could be found for the Mississippi that was consistent with the eastern solution.

In this paper we formalize this technique, presenting a modification to conventional search techniques that is capable of backtracking not only to the most recently expanded node, but also directly to a node elsewhere in the search tree. Because of the dynamic way in which the search is structured, we refer to this technique as dynamic backtracking.

A more specific outline is as follows: We begin in the next section by introducing a variety of notational conventions that allow us to cast both existing work and our new ideas in a uniform computational setting. Section 3 discusses backjumping, an intermediate between simple chronological backtracking and our ideas, which are themselves presented in Section 4. An example of the dynamic backtracking algorithm in use appears in Section 5 and an experimental analysis of the technique in Section 6. A summary of our results and suggestions for future work are in Section 7. All proofs have been deferred to an appendix in the interests of continuity of exposition.

2. Preliminaries

Definition 2.1 By a constraint satisfaction problem (I, V, Γ) we will mean a set I of variables; for each i ∈ I, there is a set V_i of possible values for the variable i. Γ is a set of constraints, each a pair (J, P) where J = (j_1, ..., j_k) is an ordered subset of I and P is a subset of V_{j_1} × ... × V_{j_k}. A solution to the csp is a set of values v_i for each of the variables in I such that v_i ∈ V_i for each i and, for every constraint (J, P) of the above form in Γ, (v_{j_1}, ..., v_{j_k}) ∈ P.

In the example of the introduction, I is the set of states and V_i is the set of possible colors for the state i. For each constraint, the first part of the constraint is a pair of adjacent states and the second part is a set of allowable color combinations for these states.

Our basic plan in this paper is to present formal versions of the search algorithms described in the introduction, beginning with simple depth-first search and proceeding to backjumping and dynamic backtracking. As a start, we make the following definition of a partial solution to a csp:

Definition 2.2 Let (I, V, Γ) be a csp. By a partial solution to the csp we mean an ordered subset J ⊆ I and an assignment of a value to each variable in J.

We will denote a partial solution by a tuple of ordered pairs, where each ordered pair (i, v) assigns the value v to the variable i. For a partial solution P, we will denote by P̄ the set of variables assigned values by P.

Constraint-satisfaction problems are solved in practice by taking partial solutions and extending them by assigning values to new variables. In general, of course, not any value can be assigned to a variable because some are inconsistent with the constraints. We therefore make the following definition:

Definition 2.3 Given a partial solution P to a csp, an eliminating explanation for a variable i is a pair (v, S) where v ∈ V_i and S ⊆ P̄. The intended meaning is that i cannot take the value v because of the values already assigned by P to the variables in S. An elimination mechanism ε for a csp is a function that accepts as arguments a partial solution P and a variable i ∉ P̄. The function returns a (possibly empty) set ε(P, i) of eliminating explanations for i.

For a set E of eliminating explanations, we will denote by Ê the set of values that have been identified as eliminated, ignoring the reasons given. We therefore denote by ε̂(P, i) the set of values eliminated by elements of ε(P, i).
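Definitions 2.1 to 2.3 translate almost directly into code. The following Python sketch (our own rendering; the paper prescribes no implementation, and all names are ours) fixes the representations used by the algorithm sketches below, together with one simple elimination mechanism:

```python
from dataclasses import dataclass

@dataclass
class CSP:
    """A constraint satisfaction problem (I, V, Γ) as in Definition 2.1."""
    variables: list   # I, in some fixed order
    values: dict      # V: maps each variable i to the set V_i
    constraints: list # Γ: pairs (J, P) with J a tuple of variables and
                      #    P the set of allowed value tuples over J

def assigned(partial):
    """P̄: the variables assigned values by the partial solution, which we
    represent as an ordered list of (variable, value) pairs."""
    return {i for i, _ in partial}

def eliminate(csp, partial, i):
    """A simple elimination mechanism ε(P, i): eliminate every value v of i
    that violates some constraint completed by assigning v to i, blaming the
    constraint's other (already assigned) variables. Keeping one explanation
    per value makes the mechanism concise in the sense defined below."""
    assignment = dict(partial)
    explanations = {}
    for v in csp.values[i]:
        for J, allowed in csp.constraints:
            if i in J and all(j == i or j in assignment for j in J):
                tup = tuple(v if j == i else assignment[j] for j in J)
                if tup not in allowed and v not in explanations:
                    explanations[v] = frozenset(j for j in J if j != i)
    return set(explanations.items())
```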
Note that the above definition is somewhat flexible with regard to the amount of work done by the elimination mechanism: all values that violate completed constraints might be eliminated, or some amount of lookahead might be done. We will, however, make the following assumptions about all elimination mechanisms:

1. They are correct. For a partial solution P, if the value v_i ∉ ε̂(P, i), then every constraint (S, T) in Γ with S ⊆ P̄ ∪ {i} is satisfied by the values in the partial solution and the value v_i for i. These are the constraints that are complete after the value v_i is assigned to i.

2. They are complete. Suppose that P is a partial solution to a csp, and there is some solution that extends P while assigning the value v to i. If P′ is an extension of P with (v, E) ∈ ε(P′, i), then

    E ∩ (P̄′ − P̄) ≠ ∅    (1)

In other words, whenever P can be successfully extended after assigning v to i but P′ cannot be, at least one element of P̄′ − P̄ is identified as a possible reason for the problem.

3. They are concise. For a partial solution P, variable i and eliminated value v, there is at most a single element of the form (v, E) ∈ ε(P, i). Only one reason is given why the variable i cannot have the value v.

Lemma 2.4 Let ε be a complete elimination mechanism for a csp, let P be a partial solution to this csp and let i ∉ P̄. Now if P can be successfully extended to a complete solution after assigning i the value v, then v ∉ ε̂(P, i).

I apologize for the swarm of definitions, but they allow us to give a clean description of depth-first search:

Algorithm 2.5 (Depth-first search) Given as inputs a constraint-satisfaction problem and an elimination mechanism ε:

1. Set P = ∅. P is a partial solution to the csp. Set E_i = ∅ for each i ∈ I; E_i is the set of values that have been eliminated for the variable i.

2. If P̄ = I, so that P assigns a value to every element in I, it is a solution to the original problem. Return it. Otherwise, select a variable i ∈ I − P̄. Set E_i = ε̂(P, i), the values that have been eliminated as possible choices for i.

3. Set S = V_i − E_i, the set of remaining possibilities for i. If S is nonempty, choose an element v ∈ S. Add (i, v) to P, thereby setting i's value to v, and return to step 2.

4. If S is empty, let (j, v_j) be the last entry in P; if there is no such entry, return failure. Remove (j, v_j) from P, add v_j to E_j, set i = j and return to step 3.

We have written the algorithm so that it returns a single answer to the csp; the modification to accumulate all such answers is straightforward.

The problem with Algorithm 2.5 is that it looks very little like conventional depth-first search, since instead of recording the unexpanded children of any particular node, we are keeping track of the failed siblings of that node.
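For concreteness, here is a direct transcription of Algorithm 2.5 using the representations sketched above; the choice of the next variable and of the value v ∈ S is arbitrary, as in the pseudocode:

```python
def depth_first_search(csp, eliminate):
    """Algorithm 2.5: chronological backtracking driven by elimination
    sets. Returns a solution as a dict, or None on failure."""
    P = []                                    # the partial solution
    E = {i: set() for i in csp.variables}     # eliminated values E_i
    while True:
        done = {i for i, _ in P}
        if done == set(csp.variables):        # step 2: P̄ = I
            return dict(P)
        i = next(x for x in csp.variables if x not in done)
        E[i] = {v for v, _ in eliminate(csp, P, i)}   # ε̂(P, i)
        while True:
            S = csp.values[i] - E[i]          # step 3
            if S:
                P.append((i, next(iter(S))))  # choose any v ∈ S
                break
            if not P:                         # step 4: nothing to undo
                return None
            j, vj = P.pop()                   # chronological backtrack
            E[j].add(vj)
            i = j
```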
But we have the following:

Lemma 2.6 At any point in the execution of Algorithm 2.5, if the last element of the partial solution P assigns a value to the variable i, then the unexplored siblings of the current node are those that assign to i the values in V_i − E_i.

Proposition 2.7 Algorithm 2.5 is equivalent to depth-first search and therefore complete.

As we have remarked, the basic difference between Algorithm 2.5 and a more conventional description of depth-first search is the inclusion of the elimination sets E_i. The conventional description expects nodes to include pointers back to their parents; the siblings of a given node are found by examining the children of that node's parent. Since we will be reorganizing the space as we search, this is impractical in our framework.

It might seem that a more natural solution to this difficulty would be to record not the values that have been eliminated for a variable i, but those that remain to be considered. The technical reason that we have not done this is that it is much easier to maintain elimination information as the search progresses. To understand this at an intuitive level, note that when the search backtracks, the conclusion that has implicitly been drawn is that a particular node fails to expand to a solution, as opposed to a conclusion about the currently unexplored portion of the search space. It should be little surprise that the most efficient way to manipulate this information is by recording it in approximately this form.

3. Backjumping

How are we to describe dependency-directed backtracking or backjumping in this setting? In these cases, we have a partial solution and have been forced to backtrack; these more sophisticated backtracking mechanisms use information about the reason for the failure to identify backtrack points that might allow the problem to be addressed. As a start, we need to modify Algorithm 2.5 to maintain the explanations for the eliminated values:

Algorithm 3.1 Given as inputs a constraint-satisfaction problem and an elimination mechanism ε:

1. Set P = ∅ and E_i = ∅ for each i ∈ I. E_i is a set of eliminating explanations for i.

2. If P̄ = I, return P. Otherwise, select a variable i ∈ I − P̄. Set E_i = ε(P, i).

3. Set S = V_i − Ê_i. If S is nonempty, choose an element v ∈ S. Add (i, v) to P and return to step 2.

4. If S is empty, let (j, v_j) be the last entry in P; if there is no such entry, return failure. Remove (j, v_j) from P. We must have Ê_i = V_i, so that every value for i has been eliminated; let E be the set of all variables appearing in the explanations for each eliminated value. Add (v_j, E − {j}) to E_j, set i = j and return to step 3.

Lemma 3.2 Let P be a partial solution obtained during the execution of Algorithm 3.1, and let i ∈ P̄ be a variable assigned a value by P.
Now if P′ ⊆ P can be successfully extended to a complete solution after assigning i the value v, but (v, E) ∈ E_i, we must have

    E ∩ (P̄ − P̄′) ≠ ∅

In other words, the assignment of a value to some variable in P̄ − P̄′ is correctly identified as the source of the problem.

Note that in step 4 of the algorithm, we could have added (v_j, E ∩ P̄) instead of (v_j, E − {j}) to E_j; either way, the idea is to remove from E any variables that are no longer assigned values by P.

In backjumping, we now simply change our backtrack method; instead of removing a single entry from P and returning to the variable assigned a value prior to the problematic variable i, we return to a variable that has actually had an impact on i. In other words, we return to some variable in the set E.

Algorithm 3.3 (Backjumping) Given as inputs a constraint-satisfaction problem and an elimination mechanism ε:

1. Set P = ∅ and E_i = ∅ for each i ∈ I.

2. If P̄ = I, return P. Otherwise, select a variable i ∈ I − P̄. Set E_i = ε(P, i).

3. Set S = V_i − Ê_i. If S is nonempty, choose an element v ∈ S. Add (i, v) to P and return to step 2.

4. If S is empty, we must have Ê_i = V_i. Let E be the set of all variables appearing in the explanations for each eliminated value.

5. If E = ∅, return failure. Otherwise, let (j, v_j) be the last entry in P such that j ∈ E. Remove from P this entry and any entry following it. Add (v_j, E ∩ P̄) to E_j, set i = j and return to step 3.

In step 5, we add (v_j, E ∩ P̄) to E_j, removing from E any variables that are no longer assigned values by P.

Proposition 3.4 Backjumping is complete and always expands fewer nodes than does depth-first search.

Let us have a look at this in our map-coloring example. If we have a partial coloring P and are looking at a specific state i, suppose that we denote by C the set of colors that are obviously illegal for i because they conflict with a color already assigned to one of i's neighbors.

One possible elimination mechanism returns as ε(P, i) a list of pairs (c, P̄) for each color c ∈ C that has been used to color a neighbor of i. This reproduces depth-first search, since we gradually try all possible colors but have no idea what went wrong when we need to backtrack, since every colored state is included in P̄. A far more sensible choice would take ε(P, i) to be a list of pairs (c, {n}), where n is a neighbor that is already colored c. This would ensure that we backjump to a neighbor of i if no coloring for i can be found.

If this causes us to backjump to another state j, we will add i's neighbors to the eliminating explanation for j's original color, so that if we need to backtrack still further, we consider neighbors of either i or j. This is as it should be, since changing the color of one of i's other neighbors might allow us to solve the coloring problem by reverting to our original choice of color for the state j.

We also have:

Proposition 3.5 The amount of space needed by backjumping is o(i²v), where i = |I| is the number of variables in the problem and v is the number of values for the variable with the largest value set V_i.

This result contrasts sharply with an approach to csps that relies on truth-maintenance techniques to maintain a list of nogoods (de Kleer, 1986). There, the number of nogoods found can grow linearly with the time taken for the analysis, and this will typically be exponential in the size of the problem. Backjumping avoids this problem by resetting the set E_i of eliminating explanations in step 2 of Algorithm 3.3.
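A transcription of Algorithm 3.3 in the same style as the earlier sketch (again ours, not the paper's; explanations are stored per value so that conciseness is preserved):

```python
def backjumping(csp, eliminate):
    """Algorithm 3.3: on a dead end, jump back to the most recent variable
    actually implicated in the failure, discarding everything after it."""
    P = []
    E = {i: {} for i in csp.variables}        # value -> explanation set
    while True:
        done = {i for i, _ in P}
        if done == set(csp.variables):
            return dict(P)
        i = next(x for x in csp.variables if x not in done)
        E[i] = dict(eliminate(csp, P, i))     # step 2
        while True:
            S = csp.values[i] - set(E[i])     # step 3: V_i - Ê_i
            if S:
                P.append((i, next(iter(S))))
                break
            # step 4: every value eliminated; collect the culprit set E
            culprits = set().union(*E[i].values()) if E[i] else set()
            if not culprits:
                return None                   # step 5: E = ∅
            idx = max(k for k, (j, _) in enumerate(P) if j in culprits)
            j, vj = P[idx]
            del P[idx:]                       # drop the entry and its successors
            E[j][vj] = culprits & {x for x, _ in P}   # add (v_j, E ∩ P̄)
            i = j
```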
The description that we have given is quite similar to that developed in (Bruynooghe, 1981). The explanations there are somewhat coarser than ours, listing all of the variables that have been involved in any eliminating explanation for a particular variable in the csp, but the idea is essentially the same. Bruynooghe's eliminating explanations can be stored in o(i²) space (instead of o(i²v)), but the associated loss of information makes the technique less effective in practice. This earlier work is also a description of backjumping only, since intermediate information is erased as the search proceeds.

4. Dynamic backtracking

We finally turn to new results. The basic problem with Algorithm 3.3 is not that it backjumps to the wrong place, but that it needlessly erases a great deal of the work that has been done thus far. At the very least, we can retain the values selected for variables that are backjumped over, in some sense moving the backjump variable to the end of the partial solution in order to replace its value without modifying the values of the variables that followed it.

There is an additional modification that will probably be clearest if we return to the example of the introduction. Suppose that in this example, we color only some of the eastern states before returning to the western half of the country. We reorder the variables in order to backtrack to Arizona and eventually succeed in coloring the West without disturbing the colors used in the East.

Unfortunately, when we return East backtracking is required and we find ourselves needing to change the coloring on some of the eastern states with which we dealt earlier. The ideas that we have presented will allow us to avoid erasing our solution to the problems out West, but if the search through the eastern states is to be efficient, we will need to retain the information we have about the portion of the East's search space that has been eliminated. After all, if we have determined that New York cannot be colored yellow, our changes in the West will not reverse this conclusion: the Mississippi really does isolate one section of the country from the other.

The machinery needed to capture this sort of reasoning is already in place. When we backjump over a variable k, we should retain not only the choice of value for k, but also k's elimination set. We do, however, need to remove from this elimination set any entry that involves the eventual backtrack variable j, since these entries are no longer valid: they depend on the assumption that j takes its old value, and this assumption is now false.

Algorithm 4.1 (Dynamic backtracking I) Given as inputs a constraint-satisfaction problem and an elimination mechanism ε:

1. Set P = ∅ and E_i = ∅ for each i ∈ I.

2. If P̄ = I, return P. Otherwise, select a variable i ∈ I − P̄. Set E_i = E_i ∪ ε(P, i).

3. Set S = V_i − Ê_i. If S is nonempty, choose an element v ∈ S. Add (i, v) to P and return to step 2.

4. If S is empty, we must have Ê_i = V_i; let E be the set of all variables appearing in the explanations for each eliminated value.

5. If E = ∅, return failure. Otherwise, let (j, v_j) be the last entry in P such that j ∈ E. Remove (j, v_j) from P and, for each variable k assigned a value after j, remove from E_k any eliminating explanation that involves j.
Set

E_j = E_j ∪ ε(P, j) ∪ {(v_j, E ∩ P)}   (2)

so that v_j is eliminated as a value for j because of the values taken by variables in E ∩ P. The inclusion of the term ε(P, j) incorporates new information from variables that have been assigned values since the original assignment of v_j to j. Now set i = j and return to step 3.

Theorem 4.2 Dynamic backtracking always terminates and is complete. It continues to satisfy Proposition 3.5 and can be expected to expand fewer nodes than backjumping provided that the goal nodes are distributed randomly in the search space.

The essential difference between dynamic and dependency-directed backtracking is that the structure of our eliminating explanations means that we only save nogood information based on the current values of assigned variables; if a nogood depends on outdated information, we drop it. By doing this, we avoid the need to retain an exponential amount of nogood information. What makes this technique valuable is that (as stated in the theorem) termination is still guaranteed.

There is one trivial modification that we can make to Algorithm 4.1 that is quite useful in practice. After removing the current value for the backtrack variable j, Algorithm 4.1 immediately replaces it with another. But there is no real reason to do this; we could instead pick a value for an entirely different variable:

Algorithm 4.3 (Dynamic backtracking) Given as inputs a constraint-satisfaction problem and an elimination mechanism ε:

1. Set P = ∅ and E_i = ∅ for each i ∈ I.
2. If P = I, return P. Otherwise, select a variable i ∈ I − P. Set E_i = E_i ∪ ε(P, i).
3. Set S = V_i − Ê_i. If S is nonempty, choose an element v ∈ S. Add (i, v) to P and return to step 2.
4. If S is empty, we must have Ê_i = V_i; let E be the set of all variables appearing in the explanations for each eliminated value.
5. If E = ∅, return failure. Otherwise, let (j, v_j) be the last entry in P that binds a variable appearing in E. Remove (j, v_j) from P and, for each variable k assigned a value after j, remove from E_k any eliminating explanation that involves j. Add (v_j, E ∩ P) to E_j and return to step 2.

5. An example

In order to make Algorithm 4.3 a bit clearer, suppose that we consider a small map-coloring problem in detail. The map is shown in Figure 1 and consists of five countries: Albania, Bulgaria, Czechoslovakia, Denmark and England. We will assume (wrongly!) that the countries border each other as shown in the figure, where countries are denoted by nodes and border one another if and only if there is an arc connecting them.

In coloring the map, we can use the three colors red, yellow and blue. We will typically abbreviate the country names to single letters in the obvious way.

We begin our search with Albania, deciding (say) to color it red. When we now look at Bulgaria, no colors are eliminated because Albania and Bulgaria do not share a border; we decide to color Bulgaria yellow. (This is a mistake.)

We now go on to consider Czechoslovakia; since it borders Albania, the color red is eliminated. We decide to color Czechoslovakia blue and the situation is now this:

[Diagram: the map of Figure 1, annotated with the current colors (Albania red, Bulgaria yellow, Czechoslovakia blue) and the one eliminating explanation so far: red is eliminated for Czechoslovakia by {A}.]

For each country, we indicate its current color and the eliminating explanations that mean it cannot be colored each of the three colors (when such explanations exist). We now look at Denmark.
Denmark cannot be colored red because of its border with Albania and cannot be colored yellow because of its border with Bulgaria; it must therefore be colored blue. But now England cannot be colored any color at all because of its borders with Albania, Bulgaria and Denmark, and we therefore need to backtrack to one of these three countries. At this point, the elimination lists are as follows:

country          color    red    yellow    blue
Albania          red
Bulgaria         yellow
Czechoslovakia   blue     {A}
Denmark          blue     {A}    {B}
England                   {A}    {B}       {D}

We backtrack to Denmark because it is the most recent of the three possibilities, and begin by removing any eliminating explanation involving Denmark from the above table to get:

country          color    red    yellow    blue
Albania          red
Bulgaria         yellow
Czechoslovakia   blue     {A}
Denmark                   {A}    {B}
England                   {A}    {B}

Next, we add to Denmark's elimination list the pair (blue, {A, B}). The elimination lists are now:

country          color    red    yellow    blue
Albania          red
Bulgaria         yellow
Czechoslovakia   blue     {A}
Denmark                   {A}    {B}       {A, B}
England                   {A}    {B}

This indicates correctly that because of the current colors for Albania and Bulgaria, Denmark cannot be colored blue (because of the subsequent dead end at England). Since every color is now eliminated for Denmark, we must backtrack to a country in the set {A, B}. Changing Czechoslovakia's color won't help and we must deal with Bulgaria instead. We remove the eliminating explanations involving Bulgaria and also add to Bulgaria's elimination list the pair (yellow, {A}), indicating correctly that Bulgaria cannot be colored yellow because of the current choice of color for Albania (red). The situation is now:

country          color    red    yellow    blue
Albania          red
Czechoslovakia   blue     {A}
Bulgaria                         {A}
Denmark                   {A}
England                   {A}

We have moved Bulgaria past Czechoslovakia to reflect the search reordering in the algorithm. We can now complete the problem by coloring Bulgaria red, Denmark either yellow or blue, and England the color not used for Denmark.

This example is almost trivially simple, of course; the thing to note is that when we changed the color for Bulgaria, we retained both the blue color for Czechoslovakia and the information indicating that none of Czechoslovakia, Denmark and England could be red. In more complex examples, this information may be very hard-won and retaining it may save us a great deal of subsequent search effort.

Another feature of this specific example (and of the example of the introduction as well) is that the computational benefits of dynamic backtracking are a consequence of the automatic realization that the problem splits into disjoint subproblems. Other authors have also discussed the idea of applying divide-and-conquer techniques to CSPs (Seidel, 1981; Zabih, 1990), but their methods suffer from the disadvantage that they constrain the order in which unassigned variables are assigned values, perhaps at odds with the common heuristic of assigning values first to those variables that are most tightly constrained. Dynamic backtracking can also be expected to be of use in situations where the problem in question does not split into two or more disjoint subproblems.¹

1. I am indebted to David McAllester for these observations.

6. Experimentation

Dynamic backtracking has been incorporated into the crossword-puzzle generation program described in (Ginsberg, Frank, Halpin, & Torrance, 1990), and leads to significant performance improvements in that restricted domain. More specifically, the method was tested on the problem of generating 19 puzzles of sizes ranging from 2 × 2 to 13 × 13; each puzzle was attempted 100 times using both dynamic backtracking and simple backjumping. The dictionary was shuffled between solution attempts and a maximum of 1000 backtracks was permitted before the program was deemed to have failed.

In both cases, the algorithms were extended to include iterative broadening (Ginsberg & Harvey, 1992), the cheapest-first heuristic and forward checking. Cheapest-first has also been called "most constrained first" and selects for instantiation that variable with the fewest number of remaining possibilities (i.e., that variable for which it is cheapest to enumerate the possible values (Smith & Genesereth, 1985)). Forward checking prunes the set of possibilities for crossing words whenever a new word is entered and constitutes our experimental choice of elimination mechanism: at any point, words for which there is no legal crossing word are eliminated. This ensures that no word will be entered into the crossword if the word has no potential crossing words at some point. The cheapest-first heuristic would identify the problem at the next step in the search, but forward checking reduces the number of backtracks substantially. The "least-constraining" heuristic (Ginsberg et al., 1990) was not used; this heuristic suggests that each word slot be filled with the word that minimally constrains the subsequent search.
The heuristic was not used because it would invalidate the technique of shuffling the dictionary between solution attempts in order to gather useful statistics.

The table in Figure 2 indicates the number of successful solution attempts (out of 100) for each of the two methods on each of the 19 crossword frames. Dynamic backtracking is more successful in six cases and less successful in none.

With regard to the number of nodes expanded by the two methods, consider the data presented in Figure 3, where we graph the average number of backtracks needed by the two methods.² Although initially comparable, dynamic backtracking provides increasing computational savings as the problems become more difficult. A somewhat broader set of experiments is described in (Jonsson & Ginsberg, 1993) and leads to similar conclusions.

2. Only 17 points are shown because no point is plotted where backjumping was unable to solve the problem.

There are some examples in (Jonsson & Ginsberg, 1993) where dynamic backtracking leads to performance degradation, however; a typical case appears in Figure 4.³

[Figure 4: A difficult problem for dynamic backtracking; regions 1 and 2 are linked only through the countries A and B.]

In the figure, we first color A, then B, then the countries in region 1, and then get stuck in region 2. We now presumably backtrack directly to B, leaving the coloring of region 1 alone. But this may well be a mistake: the colors in region 1 will restrict our choices for B, perhaps making the subproblem consisting of A, B and region 2 more difficult than it might be. If region 1 were easy to color, we would have been better off erasing it even though we didn't need to.

3. The worst performance degradation observed was a factor of approximately 4.

This analysis suggests that dependency-directed backtracking should also fare worse on those coloring problems where dynamic backtracking has trouble, and we are currently extending the experiments of (Jonsson & Ginsberg, 1993) to confirm this. If this conjecture is borne out, a variety of solutions come to mind. We might, for example, record how many backtracks are made to a node such as B in the above figure, and then use this to determine that flexibility at B is more important than retaining the choices made in region 1. The difficulty of finding a coloring for region 1 can also be determined from the number of backtracks involved in the search.

7. Summary

7.1 Why it works

There are two separate ideas that we have exploited in the development of Algorithm 4.3 and the others leading up to it. The first, and easily the most important, is the notion that it is possible to modify variable order on the fly in a way that allows us to retain the results of earlier work when backtracking to a variable that was assigned a value early in the search. This reordering should not be confused with the work of authors who have suggested a dynamic choice among the variables that remain to be assigned values (Dechter & Meiri, 1989; Ginsberg et al., 1990; Purdom, Brown, & Robertson, 1981; Zabih & McAllester, 1988); we are instead reordering the variables that have been assigned values in the search thus far.

Another way to look at this idea is that we have found a way to "erase" the value given to a variable directly as opposed to backtracking to it. This idea has also been explored by Minton et al. in (Minton, Johnston, Philips, & Laird, 1990) and by Selman et al.
in (Selman, Levesque, & Mitchell, 1992); these authors also directly replace values assigned to variables in satisfiability problems. Unfortunately, the heuristic repair method used is incomplete because no dependency information is retained from one state of the problem solver to the next.

There is a third way to view this as well. The space that we are examining is really a graph, as opposed to a tree; we reach the same point by coloring Albania blue and then Bulgaria red as if we color them in the opposite order. When we decide to backjump from a particular node in the search space, we know that we need to back up until some particular property of that node ceases to hold; the key idea is that by backtracking along a path other than the one by which the node was generated, we may be able to backtrack only slightly when we would otherwise need to retreat a great deal. This observation is interesting because it may well apply to problems other than CSPs. Unfortunately, it is not clear how to guarantee completeness for a search that discovers a node using one path and backtracks using another.

The other idea is less novel. As we have already remarked, our use of eliminating explanations is quite similar to the use of nogoods in the ATMS community; the principal difference is that we attach the explanations to the variables they impact and drop them when they cease to be relevant. (They might become relevant again later, of course.) This avoids the prohibitive space requirements of systems that permanently cache the results of their nogood calculations; this observation also may be extensible beyond the domain of CSPs specifically. Again, there are other ways to view this: Gaschnig's notion of backmarking (Gaschnig, 1979) records similar information about the reason that particular portions of a search space are known not to contain solutions.

7.2 Future work

There are a variety of ways in which the techniques we have presented can be extended; in this section, we sketch a few of the more obvious ones.

7.2.1 Backtracking to older culprits

One extension to our work involves lifting the restriction in Algorithm 4.3 that the variable erased always be the most recently assigned member of the set E.

In general, we cannot do this while retaining the completeness of the search. Consider the following example:

Imagine that our CSP involves three variables, x, y and z, that can each take the value 0 or 1. Further, suppose that this CSP has no solutions, in that after we pick any two values for x and for y, we realize that there is no suitable choice for z.

We begin by taking x = y = 0; when we realize the need to backtrack, we introduce the nogood

x = 0 → y ≠ 0   (3)

and replace the value for y with y = 1. This fails, too, but now suppose that we were to decide to backtrack to x, introducing the new nogood

y = 1 → x ≠ 0   (4)

We change x's value to 1 and erase (3).

This also fails. We decide that y is the problem and change its value to 0, introducing the nogood

x = 1 → y ≠ 1

but erasing (4). And when this fails, we are in danger of returning to x = y = 0, which we eliminated at the beginning of the example.
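Laid out step by step, the cycle looks as follows. This is a restatement of the trace just given; the step numbers are ours, and the nogood shown for step 4 is the one that would be introduced if we again backtrack to x.

```latex
% The three-variable example above, one row per failed assignment.
\begin{array}{llll}
\text{step} & \text{assignment (fails)} & \text{nogood introduced} & \text{nogood erased}\\
1 & x=0,\ y=0 & (3)\ x=0 \rightarrow y \neq 0 & \text{(none)}\\
2 & x=0,\ y=1 & (4)\ y=1 \rightarrow x \neq 0 & (3)\\
3 & x=1,\ y=1 & x=1 \rightarrow y \neq 1 & (4)\\
4 & x=1,\ y=0 & y=0 \rightarrow x \neq 1 & x=1 \rightarrow y \neq 1\\
5 & x=0,\ y=0 & \multicolumn{2}{l}{\text{the state of step 1 recurs}}
\end{array}
```

After step 4 the only surviving nogood is y = 0 → x ≠ 1, which permits x = y = 0, so the search can revisit the very assignment that began the trace.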
This loop may cause a modified version of the dynamic backtracking algorithm to fail to terminate.

In terms of the proof of Theorem 4.2, the nogoods discovered already include information about all assigned variables, so there is no difference between (7) and (8). When we drop (3) in favor of (4), we are no longer in a position to recover (3).

We can deal with this by placing conditions on the variables to which we choose to backtrack; the conditions need to be defined so that the proof of Theorem 4.2 continues to hold.⁴ Experimentation indicates that loops of the form we have described are extremely rare in practice; it may also be possible to detect them directly and thereby retain more substantial freedom in the choice of backtrack point.

This freedom of backtrack raises an important question that has not yet been addressed in the literature: When backtracking to avoid a difficulty of some sort, to where should one backtrack?

Previous work has been constrained to backtrack no further than the most recent choice that might impact the problem in question; any other decision would be both incomplete and inefficient. Although an extension of Algorithm 4.3 need not operate under this restriction, we have given no indication of how the backtrack point should be selected.

There are several easily identified factors that can be expected to bear on this choice. The first is that there remains a reason to expect backtracking to chronologically recent choices to be the most effective: these choices can be expected to have contributed to the fewest eliminating explanations, and there is obvious advantage to retaining as many eliminating explanations as possible from one point in the search to the next. It is possible, however, to simply identify the backtrack point that affects the fewest number of eliminating explanations and to use that.

Alternatively, it might be important to backtrack to the choice point for which there will be as many new choices as possible; as an extreme example, if there is a variable i for which every value other than its current one has already been eliminated for other reasons, backtracking to i is guaranteed to generate another backtrack immediately and should probably be avoided if possible.

Finally, there is some measure of the "directness" with which a variable bears on a problem. If we are unable to find a value for a particular variable i, it is probably sensible to backtrack to a second variable that shares a constraint with i itself, as opposed to some variable that affects i only indirectly.

How are these competing considerations to be weighed? I have no idea. But the framework we have developed is interesting because it allows us to work on this question. In more basic terms, we can now "debug" partial solutions to CSPs directly, moving laterally through the search space in an attempt to remain as close to a solution as possible. This sort of lateral movement seems central to human solution of difficult search problems, and it is encouraging to begin to understand it in a formal way.

7.2.2 Dependency pruning

It is often the case that when one value for a variable is eliminated while solving a CSP, others are eliminated as well.
As an example, in solving a scheduling problem a particular choice of time (say t = 16) may be eliminated for a task A because there then isn't enough time between A and a subsequent task B; in this case, all later times can obviously be eliminated for A as well.

Formalizing this can be subtle; after all, a later time for A isn't uniformly worse than an earlier time, because there may be other tasks that need to precede A and making A later makes that part of the schedule easier. It's the problem with B alone that forces A to be earlier; once again, the analysis depends on the ability to maintain dependency information as the search proceeds.

We can formalize this as follows. Given a CSP (I, V, γ), suppose that the value v has been assigned to some i ∈ I. Now we can construct a new CSP (I′, V′, γ′) involving the remaining variables I′ = I − {i}, where the new set V′ need not mention the possible values V_i for i, and where γ′ is generated from γ by modifying the constraints to indicate that i has been assigned the value v. We also make the following definition:

Definition 7.1 Given a CSP, suppose that i is a variable that has two possible values u and v. We will say that v is stricter than u if every constraint in the CSP induced by assigning u to i is also a constraint in the CSP induced by assigning i the value v.

The point, of course, is that if v is stricter than u is, there is no point to trying a solution involving v once u has been eliminated. After all, finding such a solution would involve satisfying all of the constraints in the v restriction, these are a superset of those in the u restriction, and we were unable to satisfy the constraints in the u restriction originally.

The example with which we began this section now generalizes to the following:

Proposition 7.2 Suppose that a CSP involves a set S of variables, and that we have a partial solution that assigns values to the variables in some subset P ⊆ S. Suppose further that if we extend this partial solution by assigning the value u to a variable i ∉ P, there is no further extension to a solution of the entire CSP. Now consider the CSP involving the variables in S − P that is induced by the choices of values for variables in P. If v is stricter than u as a choice of value for i in this problem, the original CSP has no solution that both assigns v to i and extends the given partial solution on P.

This proposition isn't quite enough; in the earlier example, the choice of t = 17 for A will not be stricter than t = 16 if there is any task that needs to be scheduled before A is. We need to record the fact that B (which is no longer assigned a value) is the source of the difficulty. To do this, we need to augment the dependency information with which we are working.

More precisely, when we say that a set of variables {x_i} eliminates a value v for a variable x, we mean that our search to date has allowed us to conclude that

(x_1 = v_1) ∧ ⋯ ∧ (x_k = v_k) → x ≠ v

where the v_i are the current choices for the x_i.
We can obviously rewrite this as

(x_1 = v_1) ∧ ⋯ ∧ (x_k = v_k) ∧ (x = v) → F   (5)

where F indicates that the CSP in question has no solution.

Let's be more specific still, indicating in (5) exactly which CSP has no solution:

(x_1 = v_1) ∧ ⋯ ∧ (x_k = v_k) ∧ (x = v) → F(I)   (6)

where I is the set of variables in the complete CSP.

Now we can address the example with which we began this section; the CSP that is known to fail in an expression such as (6) is not the entire problem, but only a subset of it. In the example we are considering, the subproblem involves only the two tasks A and B. In general, we can augment our nogoods to include information about the subproblems on which they fail, and then measure strictness with respect to these restricted subproblems only. In our example, this will indeed allow us to eliminate t = 17 from consideration as a possible time for A.

The additional information stored with the nogoods doubles their size (we have to store a second subset of the variables in the CSP), and the variable sets involved can be manipulated easily as the search proceeds. The cost involved in employing this technique is therefore that of the strictness computation. This may be substantial given the data structures currently used to represent CSPs (which typically support the need to check if a constraint has been violated but little more), but it seems likely that compile-time modifications to these data structures can be used to make the strictness question easier to answer. In scheduling problems, preliminary experimental work shows that the idea is an important one; here, too, there is much to be done.

The basic lesson of dynamic backtracking is that by retaining only those nogoods that are still relevant given the partial solution with which we are working, the storage difficulties encountered by full dependency-directed methods can be alleviated. This is what makes all of the ideas we have proposed possible: erasing values, selecting alternate backtrack points, and dependency pruning. There are surely many other effective uses for a practical dependency maintenance system as well.

Acknowledgements

This work has been supported by the Air Force Office of Scientific Research under grant number 92-0693 and by DARPA/Rome Labs under grant number F30602-91-C-0036. I would like to thank Rina Dechter, Mark Fox, Don Geddis, Will Harvey, Vipin Kumar, Scott Roy and Narinder Singh for helpful comments on these ideas. Ari Jonsson and David McAllester provided me invaluable assistance with the experimentation and proofs respectively.

A. Proofs

Lemma 2.4 Let ε be a complete elimination mechanism for a CSP, let P be a partial solution to this CSP and let i ∉ P. Now if P can be successfully extended to a complete solution after assigning i the value v, then v ∉ ε̂(P, i).

Proof. Suppose otherwise, so that (v, E) ∈ ε(P, i). It follows directly from the completeness of ε (taking P′ = P) that E ∩ (P − P) ≠ ∅, a contradiction since P − P = ∅.

Lemma 2.6 At any point in the execution of Algorithm 2.5, if the last element of the partial solution P assigns a value to the variable i, then the unexplored siblings of the current node are those that assign to i the values in V_i − Ê_i.

Proof. We first note that when we decide to assign a value to a new variable i in step 2 of the algorithm, we take E_i = ε(P, i), so that V_i − Ê_i is the set of allowed values for this variable. The lemma therefore holds in this case. The fact that it continues to hold through each repetition of the loop in steps 3 and 4 is now a simple induction; at each point, we add to E_i the node that has just failed as a possible value to be assigned to i.

Proposition 2.7 Algorithm 2.5 is equivalent to depth-first search and therefore complete.

Proof. This is an easy consequence of the lemma. Partial solutions correspond to nodes in the search space.

Lemma 3.2 Let P be a partial solution obtained during the execution of Algorithm 3.1, and let i ∈ P be a variable assigned a value by P. Now if P′ ⊆ P can be successfully extended to a complete solution after assigning i the value v but (v, E) ∈ E_i, we must have E ∩ (P − P′) ≠ ∅.

Proof. As in the proof of Lemma 2.6, we show that no step of Algorithm 3.1 can cause Lemma 3.2 to become false. That the lemma holds after step 2, where the search is extended to consider a new variable, is an immediate consequence of the assumption that the elimination mechanism ε is complete. In step 4, when we add (v_j, E − {j}) to the set of eliminating explanations for j, we are simply recording the fact that the search for a solution with j set to v_j failed because we were unable to extend the solution to i. It is a consequence of the inductive hypothesis that as long as no variable in E − {j} changes, this conclusion will remain valid.

Proposition 3.4 Backjumping is complete and always expands fewer nodes than does depth-first search.

Proof. That fewer nodes are examined is clear; for completeness, it follows from Lemma 3.2 that the backtrack to some element of E in step 5 will always be necessary if a solution is to be found.

Proposition 3.5 The amount of space needed by backjumping is o(i²v), where i = |I| is the number of variables in the problem and v is the number of values for that variable with the largest value set V_i.

Proof. The amount of space needed is dominated by the storage requirements of the elimination sets E_j; there are i of these. Each one might refer to each of the possible values for a particular variable j; the space needed to store the reason that a value is eliminated is at most |I|, since the reason is simply a list of variables that have been assigned values. There will never be two eliminating explanations for the same value, since ε is concise and we never rebind a variable to a value that has been eliminated.

Theorem 4.2 Dynamic backtracking always terminates and is complete. It continues to satisfy Proposition 3.5 and can be expected to expand fewer nodes than backjumping provided that the goal nodes are distributed randomly in the search space.

Proof. There are four things we need to show: that dynamic backtracking needs o(i²v) space, that it is complete, that it can be expected to expand fewer nodes than backjumping, and that it terminates. We prove things in this order.
Space This is clear; the amount of space needed continues to be bounded by the structure of the eliminating explanations.

Completeness This is also clear, since by Lemma 3.2, all of the eliminating explanations retained in the algorithm are obviously still valid. The new explanations added in (2) are also obviously correct, since they indicate that j cannot take the value v_j as in backjumping and that j also cannot take any values that are eliminated by the variables being backjumped over.

Efficiency To see that we expect to expand fewer nodes, suppose that the subproblem involving only the variables being jumped over has s solutions in total, one of which is given by the existing variable assignments. Assuming that the solutions are distributed randomly in the search space, there is at least a 1/s chance that this particular solution leads to a solution of the entire CSP; if so, the reordered search, which considers this solution earlier than the other, will save the expense of either assigning new values to these variables or repeating the search that led to the existing choices. The reordered search will also benefit from the information in the nogoods that have been retained for the variables being jumped over.

Termination This is the most difficult part of the proof.

As we work through the algorithm, we will be generating (and then discarding) a variety of eliminating explanations. Suppose that e is such an explanation, saying that j cannot take the value v_j because of the values currently taken by the variables in some set V_e. We will denote the variables in V_e by x_1, …, x_k and their current values by v_1, …, v_k. In declarative terms, the eliminating explanation is telling us that

(x_1 = v_1) ∧ ⋯ ∧ (x_k = v_k) → j ≠ v_j   (7)

Dependency-directed backtracking would have us accumulate all of these nogoods; dynamic backtracking allows us to drop any particular instance of (7) for which the antecedent is no longer valid.

The reason that dependency-directed backtracking is guaranteed to terminate is that the set of accumulated nogoods eliminates a monotonically increasing amount of the search space. Each nogood eliminates a new section of the search space because the nature of the search process is such that any node examined is consistent with the nogoods that have been accumulated thus far; the process is monotonic because all nogoods are retained throughout the search. These arguments cannot be applied to dynamic backtracking, since nogoods are forgotten as the search proceeds. But we can make an analogous argument.

To do this, suppose that when we discover a nogood like (7), we record with it all of the variables that precede the variable j in the partial order, together with the values currently assigned to these variables. Thus an eliminating explanation becomes essentially a nogood n of the form (7) together with a set S of variable/value pairs.

We now define a mapping μ(n, S) that changes the antecedent of (7) to include assumptions about all the variables bound in S, so that if S = {(s_1, v_1), …, (s_l, v_l)},

μ(n, S) = [(s_1 = v_1) ∧ ⋯ ∧ (s_l = v_l) → j ≠ v_j]   (8)

At any point in the execution of the algorithm, we denote by N the conjunction of the modified nogoods of the form (8).

We now make the following claims:

1. For any eliminating explanation (n, S), n ⊨ μ(n, S), so that μ(n, S) is valid for the problem at hand.

2. For any new eliminating explanation (n, S), μ(n, S) is not a consequence of N.

3.
The deductive consequences of N grow monotonically as the dynamic backtracking algorithm proceeds.

The theorem will follow from these three observations, since we will know that N is a valid set of conclusions for our search problem and that we are once again making monotonic progress toward eliminating the entire search space and concluding that the problem is unsolvable.

That μ(n, S) is a consequence of (n, S) is clear, since the modification used to obtain (8) from (7) involves strengthening the antecedent of (7). It is also clear that μ(n, S) is not a consequence of the nogoods already obtained, since we have added to the antecedent only conditions that hold for the node of the search space currently under examination. If μ(n, S) were a consequence of the nogoods we had obtained thus far, this node would not be being considered.

The last observation depends on the following lemma:

Lemma A.1 Suppose that x is a variable assigned a value by our partial solution and that x appears in the antecedent of the nogood n in the pair (n, S). Then if S′ is the set of variables assigned values no later than x, S′ ⊆ S.
Proof. Consider a y ∈ S′, and suppose that it were not in S. We cannot have y = x, since y would then be mentioned in the nogood n and therefore in S. So we can suppose that y is actually assigned a value earlier than x is. Now when (n, S) was added to the set of eliminating explanations, it must have been the case that x was assigned a value (since it appears in the antecedent of n) but that y was not. But we also know that there was a later time when y was assigned a value but x was not, since y precedes x in the current partial solution. This means that x must have changed value at some point after (n, S) was added to the set of eliminating explanations; but (n, S) would have been deleted when this happened. This contradiction completes the proof.

Returning to the proof of Theorem 4.2, suppose that we eventually drop (n, S) from our collection of nogoods and that when we do so, the new nogood being added is (n′, S′). It follows from the lemma that S′ ⊆ S. Since x_i = v_i is a clause in the antecedent of μ(n, S), it follows that μ(n′, S′) will imply the negation of the antecedent of μ(n, S) and will therefore imply μ(n, S) itself. Although we drop μ(n, S) when we drop the nogood (n, S), μ(n, S) continues to be entailed by the modified set N, the consequences of which are seen to be growing monotonically.
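To complement the formal development, here is a minimal executable sketch of Algorithm 4.3 for the map-coloring setting, reusing the neighbor-based elimination mechanism from the backjumping sketch earlier. All names are ours, the value and culprit choices are the simplest possible ones (first untried color, most recently bound culprit), and the adjacencies in the demonstration are our reading of the five-country example of Section 5; this is an illustration, not the author's implementation.

```python
def dynamic_backtracking(nodes, colors, neighbors):
    P = {}                         # partial solution: node -> color
    order = []                     # nodes of P, in order of assignment
    E = {i: set() for i in nodes}  # explanations: (color, frozenset of culprits)
    while len(P) < len(nodes):
        i = next(n for n in nodes if n not in P)
        # step 2: augment E_i with values ruled out by colored neighbors
        E[i] |= {(P[n], frozenset([n])) for n in neighbors[i] if n in P}
        while True:
            S = [c for c in colors if c not in {v for v, _ in E[i]}]  # step 3
            if S:
                P[i] = S[0]
                order.append(i)
                break
            culprits = set().union(*(ex for _, ex in E[i]))           # step 4
            if not culprits:
                return None                                           # failure
            j = next(n for n in reversed(order) if n in culprits)     # step 5
            v_j = P.pop(j)
            order.remove(j)        # j is unbound; every other node keeps
            for k in nodes:        # its value and its position
                if k != j:
                    # conservatively drop every explanation that cites j
                    E[k] = {(v, ex) for v, ex in E[k] if j not in ex}
            E[j].add((v_j, frozenset(culprits & set(P))))  # (v_j, E ∩ P)
            break                  # return to step 2
    return P

if __name__ == "__main__":
    nbrs = {"A": {"C", "D", "E"}, "B": {"D", "E"}, "C": {"A"},
            "D": {"A", "B", "E"}, "E": {"A", "B", "D"}}
    print(dynamic_backtracking(list(nbrs), ["red", "yellow", "blue"], nbrs))
```

Note how step 5 unbinds j without disturbing any later assignments, and how every explanation that cites j is discarded at that moment; this relevance-based forgetting is exactly what keeps storage polynomial while the termination argument above still goes through.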
References

Bruynooghe, M. (1981). Solving combinatorial search problems by intelligent backtracking. Information Processing Letters.

de Kleer, J. (1986). An assumption-based truth maintenance system. Artificial Intelligence.

Dechter, R., & Meiri, I. (1989). Experimental evaluation of preprocessing techniques in constraint satisfaction problems.

Gaschnig, J. (1979). Performance measurement and analysis of certain search algorithms.

Ginsberg, M. L., Frank, M., Halpin, M. P., & Torrance, M. C. (1990). Search lessons learned from crossword puzzles.

Ginsberg, M. L., & Harvey, W. D. (1992). Iterative broadening. Artificial Intelligence.

Jonsson, A. K., & Ginsberg, M. L. (1993). Experimenting with new systematic and nonsystematic search techniques.

McAllester, D. A. (1993). Partial order backtracking. Journal of Artificial Intelligence Research.

Minton, S., Johnston, M. D., Philips, A. B., & Laird, P. (1990). Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method.

Purdom, P., Brown, C., & Robertson, E. (1981). Backtracking with multi-level dynamic search rearrangement. Acta Informatica.

Seidel, R. (1981). A new method for solving constraint satisfaction problems.

Selman, B., Levesque, H., & Mitchell, D. (1992). A new method for solving hard satisfiability problems.

Smith, D. E., & Genesereth, M. R. (1985). Ordering conjunctive queries. Artificial Intelligence.

Stallman, R. M., & Sussman, G. J. (1977). Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis. Artificial Intelligence.

Zabih, R. (1990). Some applications of graph bandwidth to constraint satisfaction problems.

Zabih, R., & McAllester, D. A. (1988). A rearrangement search strategy for determining propositional satisfiability.
Dynamic Backtracking

Matthew L. Ginsberg

Abstract: Because of their occasional need to return to shallow points in a search tree, existing backtracking methods can sometimes erase meaningful progress toward solving a search problem. In this paper, we present a method by which backtrack points can be moved deeper in the search space, thereby avoiding this difficulty. The technique developed is a variant of dependency-directed backtracking that uses only polynomial space while still providing useful control information and retaining the completeness guarantees provided by earlier approaches.
Figure 1: A small map-coloring problem.

Figure 2: Number of problems solved successfully.

Frame   Dynamic backtracking   Backjumping
  1            100                 100
  2            100                 100
  3            100                 100
  4            100                 100
  5            100                 100
  6            100                 100
  7            100                 100
  8            100                 100
  9            100                 100
 10            100                 100
 11            100                  98
 12            100                 100
 13            100                 100
 14            100                 100
 15             99                  14
 16            100                  26
 17            100                  30
 18             61                   0
 19             10                   0
Distributed Planning and Economics

In a distributed or multiagent planning system, the plan for the system as a whole is a composite of plans produced by its constituent agents. These plans may interact significantly in both the resources required by each of the agents' activities (preconditions) and the products resulting from these activities (postconditions). Despite these interactions, it is often advantageous or necessary to distribute the planning process because agents are separated geographically, have different information, possess distinct capabilities or authority, or have been designed and implemented separately. In any case, because each agent has limited competence and awareness of the decisions produced by others, some sort of coordination is required to maximize the performance of the overall system. However, allocating resources via central control or extensive communication is deemed infeasible, as it violates whatever constraints dictated distribution of the planning task in the first place.

The task facing the designer of a distributed planning system is to define a computationally efficient coordination mechanism and its realization for a collection of agents. The agent configuration may be given, or may itself be a design parameter. By the term agent, I refer to a module that acts within the mechanism according to its own knowledge and interests. The capabilities of the agents and their organization in an overall decision-making structure determine the behavior of the system as a whole. Because it concerns the collective behavior of self-interested decision makers, the design of this decentralized structure is fundamentally an exercise in economics or incentive engineering. The problem of developing architectures for distributed planning fits within the framework of mechanism design (Hurwicz, 1977; Reiter, 1986), and many ideas and results from economics are directly applicable. In particular, the class of mechanisms based on price systems and competition has been deeply investigated by economists, who have characterized the conditions for its efficiency and compatibility with other features of the economy. When applicable, the competitive mechanism achieves coordination with minimal communication requirements (in a precise sense related to the dimensionality of messages transmitted among agents (Reiter, 1986)).

The theory of general equilibrium (Hildenbrand & Kirman, 1976) provides the foundation for a general approach to the construction of distributed planning systems based on price mechanisms. In this approach, we regard the constituent planning agents as consumers and producers in an artificial economy, and define their individual activities in terms of production and consumption of commodities. Interactions among agents are cast as exchanges, the terms of which are mediated by the underlying economic mechanism, or protocol. By specifying the universe of commodities, the configuration of agents, and the interaction protocol, we can achieve a variety of interesting and often effective decentralized behaviors.
Furthermore, we can apply economic theory to the analysis of alternative architectures, and thus exploit a wealth of existing knowledge in the design of distributed planners.

I use the phrase market-oriented programming to refer to the general approach of deriving solutions to distributed resource allocation problems by computing the competitive equilibrium of an artificial economy.¹ In the following, I describe this general approach and a primitive programming environment supporting the specification of computational markets and derivation of equilibrium prices. An example problem in distributed transportation planning demonstrates the feasibility of decentralizing a problem with nontrivial interactions, and the applicability of economic principles to distributed problem solving.

WALRAS: A Market-Oriented Programming Environment

To explore the use of market mechanisms for the coordination of distributed planning modules, I have developed a prototype environment for specifying and simulating computational markets. The system is called walras, after the 19th-century French economist Léon Walras, who was the first to envision a system of interconnected markets in price equilibrium.

Walras provides basic mechanisms implementing various sorts of agents, auctions, and bidding protocols. To specify a computational economy, one defines a set of goods and instantiates a collection of agents that produce or consume those goods. Depending on the context, some of the goods or agents may be fixed exogenously; for example, they could correspond to real-world goods or agents participating in the planning process. Others might be completely artificial ones invented by the designer to decentralize the problem-solving process in a particular way. Given a market configuration, walras then runs these agents to determine an equilibrium allocation of goods and activities. This distribution of goods and activities constitutes the market solution to the planning problem.

1. The name was inspired by Shoham's use of agent-oriented programming to refer to a specialization of object-oriented programming where the entities are described in terms of agent concepts and interact via speech acts (Shoham, 1993). Market-oriented programming is an analogous specialization, where the entities are economic agents that interact according to market concepts of production and exchange. The phrase has also been invoked by Lavoie, Baetjer, and Tulloh (1991) to refer to real markets in software components.

General Equilibrium

The walras framework is patterned directly after general-equilibrium theory. A brief exposition, glossing over many fine points, follows; for elaboration see any text on microeconomic theory (e.g., (Varian, 1984)).

We start with k goods and n agents. Agents fall in two general classes. Consumers can buy, sell, and consume goods, and their preferences for consuming various combinations or bundles of goods are specified by their utility function. If agent i is a consumer, then its utility function, u_i: ℜ^k₊ → ℜ, ranks the various bundles of goods according to preference.

Consumers may also start with an initial allocation of some goods, termed their endowment. Let e_{i,j} denote agent i's endowment of good j, and x_{i,j} the amount of good j that i ultimately consumes.
The objective of consumer i is to choose a feasible bundle of goods, (x_{i,1}, …, x_{i,k}) (rendered in vector notation as x_i), so as to maximize its utility. A bundle is feasible for consumer i if its total cost at the going prices does not exceed the value of i's endowment at these prices. The consumer's choice can be expressed as the following constrained optimization problem:

max_{x_i} u_i(x_i)   subject to   p · x_i ≤ p · e_i,   (1)

where p = (p_1, …, p_k) is the vector of prices for the k goods.

Agents of the second type, producers, can transform some sorts of goods into some others, according to their technology. The technology specifies the feasible combinations of inputs and outputs for the producer. Let us consider the special case where there is one output good, indexed j, and the remaining goods are potential inputs. In that case, the technology for producer i can be described by a production function,

y_i = x_{i,j} = f_i(x_{i,1}, …, x_{i,j−1}, x_{i,j+1}, …, x_{i,k}),

specifying the maximum output producible from the given inputs. (When a good is an input in its own production, the production function characterizes net output.) In this case, the producer's objective is to choose a production plan that maximizes profits subject to its technology and the going prices of its output and input goods. This involves choosing a production level, y_i, along with the levels of inputs that can produce y_i at the minimum cost:

max_{y_i, x_i} p_j y_i − Σ_{l≠j} p_l x_{i,l}   subject to   y_i ≤ f_i(x_{i,1}, …, x_{i,j−1}, x_{i,j+1}, …, x_{i,k}).   (2)

An agent acts competitively when it takes prices as given, neglecting any impact of its own behavior on prices. The above formulation implicitly assumes perfect competition, in that the prices are parameters of the agents' constrained optimization problems. Perfect competition realistically reflects individual rationality when there are numerous agents, each small with respect to the entire economy. Even when this is not the case, however, we can implement competitive behavior in individual agents if we so choose. The implications of the restriction to perfect competition are discussed further below.

A pair (p, x) of a price vector and vector of demands for each agent constitutes a competitive equilibrium for the economy if and only if:

1. For each agent i, x_i is a solution to its constrained optimization problem, (1) or (2), at prices p, and
2. the net amount of each good produced and consumed equals the total endowment,

Σ_{i=1}^{n} x_{i,j} = Σ_{i=1}^{n} e_{i,j},   for j = 1, …, k.   (3)

In other words, the total amount consumed equals the total amount produced (counted as negative quantities in the consumption bundles of producers), plus the total amount the economy started out with (the endowments).

Under certain "classical" assumptions (essentially continuity, monotonicity, and concavity of the utility and production functions; see, e.g., (Hildenbrand & Kirman, 1976; Varian, 1984)), competitive equilibria exist, and are unique given strictness of these conditions. From the perspective of mechanism design, competitive equilibria possess several desirable properties, in particular, the two fundamental welfare theorems of general equilibrium theory: (1) all competitive equilibria are Pareto optimal (no agent can do better without some other doing worse), and (2) any feasible Pareto optimum is a competitive equilibrium for some initial allocation of the endowments.
These properties seem to offer exactly what we need: a bound on the quality of the solution, plus the prospect that we can achieve the most desired behavior by carefully engineering the configuration of the computational market. Moreover, in equilibrium, the prices reflect exactly the information required for distributed agents to optimally evaluate perturbations in their behavior without resorting to communication or reconsideration of their full set of possibilities (Koopmans, 1970).

Computing Competitive Equilibria

Competitive equilibria are also computable, and algorithms based on fixed-point methods (Scarf, 1984) and optimization techniques (Nagurney, 1993) have been developed. Both sorts of algorithms in effect operate by collecting and solving the simultaneous equilibrium equations (1), (2), and (3). Without an expressly distributed formulation, however, these techniques may violate the decentralization considerations underlying our distributed problem-solving context. This is quite acceptable for the purposes for which these algorithms were originally designed, namely to analyze existing decentralized structures, such as transportation industries or even entire economies (Shoven & Whalley, 1992). But because our purpose is to implement a distributed system, we must obey computational distributivity constraints not relevant to the usual purposes of applied general-equilibrium analysis. In general, explicitly examining the space of commodity bundle allocations in the search for equilibrium undercuts our original motive for decomposing complex activities into consumption and production of separate goods.

Another important constraint is that internal details of the agents' state (such as utility or production functions and bidding policy) should be considered private in order to maximize modularity and permit inclusion of agents not under the designers' direct control. A consequence of this is that computationally exploiting global properties arising from special features of agents would not generally be permissible for our purposes. For example, the constraint that profits be zero is a consequence of competitive behavior and constant-returns production technology. Since information about the form of the technology and bidding policy is private to producer agents, it could be considered cheating to embed the zero-profit condition into the equilibrium derivation procedure.

Walras's procedure is a decentralized relaxation method, akin to the mechanism of tatonnement originally sketched by Léon Walras to explain how prices might be derived. In the basic tatonnement method, we begin with an initial vector of prices, p⁰. The agents determine their demands at those prices (by solving their corresponding constrained optimization problems), and report the quantities demanded to the "auctioneer". Based on these reports, the auctioneer iteratively adjusts the prices up or down as there is an excess of demand or supply, respectively. For instance, an adjustment proportional to the excess could be modeled by the difference equation

p^{t+1} = p^t + λ (Σ_{i=1}^{n} x_i − Σ_{i=1}^{n} e_i).

If the sequence p⁰, p¹, … converges, then the excess demand in each market approaches zero, and the result is a competitive equilibrium. It is well known, however, that tatonnement processes do not converge to equilibrium in general (Scarf, 1984).
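A minimal sketch of the tatonnement iteration just described follows. The demand oracle `demand(i, p)` is assumed to solve consumer i's problem (1) at prices p, and `rate` plays the role of the proportionality constant λ; the Cobb-Douglas demands in the demonstration satisfy gross substitutability, so the iteration happens to converge here. All names are illustrative, not part of walras.

```python
import numpy as np

def tatonnement(demand, e, p0, rate=0.1, tol=1e-6, max_iter=100_000):
    """Adjust prices in proportion to aggregate excess demand."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        z = sum(demand(i, p) for i in range(len(e))) - sum(e)  # excess demand
        if np.max(np.abs(z)) < tol:
            return p                     # markets (approximately) clear
        p = p + rate * z                 # p_{t+1} = p_t + lambda * z_t
    return None                          # convergence is not guaranteed

if __name__ == "__main__":
    # Two consumers, two goods, Cobb-Douglas utilities.
    e = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # endowments
    a = [np.array([0.6, 0.4]), np.array([0.3, 0.7])]   # utility exponents
    def demand(i, p):
        return a[i] * (p @ e[i]) / p     # spend share a_j of wealth on good j
    print(tatonnement(demand, e, np.array([1.0, 1.0])))
```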
The class of economies in which tatonnement works are those with so-called stable equilibria (Hicks, 1948). A sufficient condition for stability is gross substitutability (Arrow & Hurwicz, 1977): that if the price for one good rises, then the net demands for the other goods do not decrease. Intuitively, gross substitutability will be violated when there are complementarities in preferences or technologies such that reduced consumption for one good will cause reduced consumption in others as well (Samuelson, 1974).

WALRAS Bidding Protocol

The method employed by walras successively computes an equilibrium price in each separate market, in a manner detailed below. Like tatonnement, it involves an iterative adjustment of prices based on reactions of the agents in the market. However, it differs from traditional tatonnement procedures in that (1) agents submit supply and demand curves rather than single point quantities for a particular price, and (2) the auction adjusts individual prices to clear, rather than adjusting the entire price vector by some increment (usually a function of summary statistics such as excess demand).²

2. This general approach is called progressive equilibration by Dafermos and Nagurney (1989), who applied it to a particular transportation network equilibrium problem. Although this model of market dynamics does not appear to have been investigated very extensively in general-equilibrium theory, it does seem to match the kind of price adjustment process envisioned by Hicks in his pioneering study of dynamics and stability (Hicks, 1948).

Walras associates an auction with each distinct good. Agents act in the market by submitting bids to auctions. In walras, bids specify a correspondence between prices and quantities of the good that the agent offers to demand or supply. The bid for a particular good corresponds to one dimension of the agent's optimal demand, which is parametrized by the prices for all relevant goods. Let x_i(p) be the solution to equation (1) or (2), as appropriate, for prices p. A walras agent bids for good j under the assumption that prices for the remaining goods are fixed at their current values, p_{−j}. Formally, agent i's bid for good j is a function x_{i,j}: ℜ₊ → ℜ, from prices to quantities, satisfying

x_{i,j}(p_j) = x_i(p_j, p_{−j})_j,

where the subscript j on the right-hand side selects the quantity demanded of good j from the overall demand vector. The agent computes and sends this function (encoded in any of a variety of formats) to the auction for good j.

Given bids from all interested agents, the auction derives a market-clearing price, at which the quantity demanded balances that supplied, within some prespecified tolerance. This clearing price is simply the zero crossing of the aggregate demand function, which is the sum of the demands from all agents. Such a zero crossing will exist as long as the aggregate demand is sufficiently well-behaved, in particular, if it is continuous and decreasing in price.
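As the text goes on to note, walras locates this zero crossing by binary search. A minimal sketch of such an auction step, with illustrative names and a toy pair of bids (one demand curve, one supply curve expressed as negative demand):

```python
def clearing_price(bids, lo=1e-6, hi=1.0, tol=1e-6):
    """Zero crossing of aggregate demand, assuming it decreases in price
    and eventually turns negative as the price rises."""
    agg = lambda p: sum(bid(p) for bid in bids)
    while agg(hi) > 0:                   # grow the bracket upward
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if agg(mid) > 0:                 # excess demand: price too low
            lo = mid
        else:                            # excess supply: price too high
            hi = mid
    return (lo + hi) / 2.0

if __name__ == "__main__":
    bids = [lambda p: 4.0 - p,           # a consumer's demand curve
            lambda p: -p]                # a producer's supply (negative demand)
    print(clearing_price(bids))          # approximately 2.0
```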
Gross substitutability, along with the classical conditions for existence of equilibrium, is sufficient to ensure the existence of a clearing price at any stage of the bidding protocol. Walras calculates the zero crossing of the aggregate demand function via binary search. If aggregate demand is not well-behaved, the result of the auction may be a non-clearing price.

When the current price is clearing with respect to the current bids, we say the market for that commodity is in equilibrium. We say that an agent is in equilibrium if its set of outstanding bids corresponds to the solution of its optimization problem at the going prices. If all the agents and commodity markets are in equilibrium, the allocation of goods dictated by the auction results is a competitive equilibrium.

Figure 1 presents a schematic view of the walras bidding process. There is an auction for each distinct good, and for each agent, a link to all auctions in which it has an interest. There is also a "tote board" of current prices, kept up-to-date by the various auctions. In the current implementation the tote board is a global data structure; however, since price change notifications are explicitly transmitted to interested agents, this central information could be easily dispensed with.

Each agent maintains an agenda of bid tasks, specifying the markets in which it must update its bid or compute a new one. In Figure 1, agent A_i has pending tasks to submit bids to auctions G_1, G_7, and G_4. The bidding process is highly distributed, in that each agent need communicate directly only with the auctions for the goods of interest (those in the domain of its utility or production function, or for which it has nonzero endowments). Each of these interactions concerns only a single good; auctions never coordinate with each other. Agents need not negotiate directly with other agents, nor even know of each other's existence.

As new bids are received at auction, the previously computed clearing price becomes obsolete. Periodically, each auction computes a new clearing price (if any new or updated bids have been received) and posts it on the tote board. When a price is updated, this may invalidate some of an agent's outstanding bids, since these were computed under the assumption that prices for remaining goods were fixed at previous values. On finding out about a price change, an agent augments its task agenda to include the potentially affected bids.

[Figure 1: Walras's bidding process. G_j denotes the auction for the jth good, and A_i the ith trading agent. An item [j] on the task agenda denotes a pending task to compute and submit a bid for good j.]

At all times, walras maintains a vector of going prices and quantities that would be exchanged at those prices. While the agents have nonempty bid agendas or the auctions have new bids, some or all goods may be in disequilibrium. When all auctions clear and all agendas are exhausted, however, the economy is in competitive equilibrium (up to some numeric tolerance). Using a recent result of Milgrom and Roberts (1991, Theorem 12), it can be shown that the condition sufficient for convergence of tatonnement, gross substitutability, is also sufficient for convergence of walras's price-adjustment process.
The key observation behind this convergence result is that in progressive equilibration (synchronous or not) the price at each time is based on some set of previous supply and demand bids. Although I have no precise results to this effect, the computational effort required for convergence to a fixed tolerance seems highly sensitive to the number of goods, and much less so to the number of agents. Eydeland and Nagurney (1989) have analyzed in detail the convergence pattern of progressive equilibration algorithms related to walras for particular special cases, and found roughly linear growth in the number of agents. However, general conclusions are difficult to draw, as the cost of computing the equilibrium for a particular computational economy may well depend on the interconnectedness and strength of interactions among agents and goods." }, { "figure_ref": [], "heading": "Market-Oriented Programming", "publication_ref": [], "table_ref": [], "text": "As described above, walras provides facilities for specifying market configurations and computing their competitive equilibrium. We can also view walras as a programming environment for decentralized resource allocation procedures. The environment provides constructs for specifying various sorts of agents and defining their interactions via their relations to common commodities. After setting up the initial configuration, the market can be run to determine the equilibrium level of activities and distribution of resources throughout the economy.

To cast a distributed planning problem as a market, one needs to identify (1) the goods traded, (2) the agents trading, and (3) the agents' bidding behavior. These design steps are serially dependent, as the definition of what constitutes an exchangeable or producible commodity severely restricts the type of agents that it makes sense to include. And as mentioned above, sometimes we have to take as fixed some real-world agents and goods presented as part of the problem specification. Once the configuration is determined, it might be advantageous to adjust some general parameters of the bidding protocol. Below, I illustrate the design task with a walras formulation of the multicommodity flow problem." }, { "figure_ref": [], "heading": "Implementation", "publication_ref": [ "b38", "b31", "b34", "b13", "b15" ], "table_ref": [], "text": "Walras is implemented in Common Lisp and the Common Lisp Object System (CLOS). The current version provides basic infrastructure for running computational economies, including the underlying bidding protocol and a library of CLOS classes implementing a variety of agent types. The object-oriented implementation supports incremental development of market configurations. In particular, new types of agents can often be defined as slight variations on existing types, for example by modifying isolated features of the demand behavior, bidding strategies (e.g., management of the task agenda), or bid format. Wang and Slagle (1993) present a detailed case for the use of object-oriented languages to represent general-equilibrium models. Their proposed system is similar to walras with respect to formulation, although it is designed as an interface to conventional model-solving packages, rather than to support a decentralized computation of equilibrium directly.

Although it models a distributed system, walras runs serially on a single processor. Distribution constraints on information and communication are enforced by programming and specification conventions rather than by fundamental mechanisms of the software environment.
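To give the flavor of these constructs as code, the following is a deliberately skeletal Python analogue of configuring and running a computational market. The real environment is CLOS-based, so every name here is an assumption; agents are assumed to expose name, goods, agenda, and make_bid, and the clearing_price sketch above is reused.

import random

class Auction:
    def __init__(self, good):
        self.good, self.bids, self.price = good, {}, 1.0

def run_market(agents, auctions, max_steps=10000, tol=1e-6):
    for _ in range(max_steps):
        active = [a for a in agents if a.agenda]
        if not active:
            return True                    # all agendas exhausted: equilibrium
        agent = random.choice(active)      # randomized, unpredictable ordering
        good = agent.agenda.pop(0)
        prices = {g: auc.price for g, auc in auctions.items()}  # "tote board"
        auctions[good].bids[agent.name] = agent.make_bid(good, prices)
        auction = auctions[good]
        old = auction.price
        auction.price = clearing_price(list(auction.bids.values()))
        if abs(auction.price - old) > tol:
            # A price change may invalidate bids computed under the old price,
            # so interested agents re-agenda their other markets.
            for a in agents:
                if good in a.goods:
                    a.agenda.extend(g for g in a.goods
                                    if g != good and g not in a.agenda)
    return False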
Asynchrony is simulated by randomizing the bidding sequences so that agents are called on unpredictably. Indeed, artificial synchronization can lead to an undesirable oscillation in the clearing prices, as agents collectively overcompensate for imbalances in the preceding iteration.3 The current experimental system runs transportation models of the sort described below, as well as some abstract exchange and production economies with parametrized utility and production functions (including the expository examples of Scarf (1984) and Shoven and Whalley (1984)). Customized tuning of the basic bidding protocol has not been necessary. In the process of getting walras to run on these examples, I have added some generically useful building blocks to the class libraries, but much more is required to fill out a comprehensive taxonomy of agents, bidding strategies, and auction policies.

3. In some formal dynamic models (Huberman, 1988; Kephart, Hogg, & Huberman, 1989), homogeneous agents choose instantaneously optimal policies without accounting for others that are simultaneously making the same choice. Since the value of a particular choice varies inversely with the number of agents choosing it, this delayed feedback about the others' decisions leads to systematic errors, and hence oscillation. I have also observed this phenomenon empirically in a synchronized version of WALRAS. By eliminating the synchronization, agents tend to work on different markets at any one time, and hence do not suffer as much from delayed feedback about prices." }, { "figure_ref": [ "fig_0" ], "heading": "Example: Multicommodity Flow", "publication_ref": [ "b9", "b7", "b6" ], "table_ref": [], "text": "In a simple version of the multicommodity flow problem, the task is to allocate a given set of cargo movements over a given transportation network. The transportation network is a collection of locations, with links (directed edges) identifying feasible transportation operations. Associated with each link is a specification of the cost of moving cargo along it. We suppose further that the cargo is homogeneous, and that amounts of cargo are arbitrarily divisible. A movement requirement associates an amount of cargo with an origin-destination pair. The planning problem is to determine the amount to transport on each link in order to move all the cargo at the minimum cost. This simplification ignores salient aspects of real transportation planning. For instance, this model is completely atemporal, and is hence more suitable for planning steady-state flows than for planning dynamic movements.

A distributed version of the problem would decentralize the responsibility for transporting separate cargo elements. For example, planning modules corresponding to geographically or organizationally disparate units might arrange the transportation for cargo within their respective spheres of authority. Or decision-making activity might be decomposed along hierarchical levels of abstraction, gross functional characteristics, or according to any other relevant distinction. This decentralization might result from real distribution of authority within a human organization, from inherent informational asymmetries and communication barriers, or from modularity imposed to facilitate software engineering.

Consider, for example, the abstract transportation network of Figure 2, taken from Harker (1988). There are four locations, with directed links as shown. Consider two movement requirements.
The first is to transport cargo from location 1 to location 4, and the second is in the reverse direction. Suppose we wish to decentralize authority so that separate agents (called shippers) decide how to allocate the cargo for each movement. The first shipper decides how to split its cargo units between the paths 1→2→4 and 1→2→3→4, while the second figures the split between paths 4→2→1 and 4→2→3→1. Note that the latter paths for each shipper share a common resource: the link 2→3. Because of their overlapping resource demands, the shippers' decisions appear to be necessarily intertwined. In a congested network, for example, the cost for transporting a unit of cargo over a link is increasing in the overall usage of the link. A shipper planning its cargo movements as if it were the only user on a network would thus underestimate its costs and potentially misallocate transportation resources.

For the analysis of networks such as this, transportation researchers have developed equilibrium concepts describing the collective behavior of the shippers. In a system equilibrium, the overall transportation of cargo proceeds as if there were an omniscient central planner directing the movement of each shipment so as to minimize the total aggregate cost of meeting the requirements. In a user equilibrium, the overall allocation of cargo movements is such that each shipper minimizes its own total cost, sharing proportionately the cost of shared resources. The system equilibrium is thus a global optimum, while the user equilibrium corresponds to a composition of locally optimal solutions to subproblems. There are also some intermediate possibilities, corresponding to game-theoretic equilibrium concepts such as the Nash equilibrium, where each shipper behaves optimally given the transportation policies of the remaining shippers (Harker, 1986).4

From our perspective as designers of the distributed planner, we seek a decentralization mechanism that will reach the system equilibrium, or come as close as possible given the distributed decision-making structure. In general, however, we cannot expect to derive a system equilibrium or globally optimal solution without central control. Limits on coordination and communication may prevent the distributed resource allocation from exploiting all opportunities and from inhibiting agents acting at cross purposes. But under certain conditions decision making can indeed be decentralized effectively via market mechanisms. General-equilibrium analysis can help us to recognize and take advantage of these opportunities.

Note that for the multicommodity flow problem, there is an effective distributed solution due to Gallager (1977). One of the market structures described below effectively mimics this solution, even though Gallager's algorithm was not formulated expressly in market terms. The point here is not to crack a hitherto unsolved distributed optimization problem (though that would be nice), but rather to illustrate a general approach on a simply described yet nontrivial task." }, { "figure_ref": [], "heading": "WALRAS Transportation Market", "publication_ref": [], "table_ref": [], "text": "In this section, I present a series of three transportation market structures implemented in walras. The first and simplest model comprises the basic transportation goods and shipper agents, which are augmented in the succeeding models to include other agent types.
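To ground the three market structures that follow, the example network can be written down as data, together with a Dijkstra-style helper of the kind a shipper's bidding rule can use once links carry prices. This is a sketch; the representation choices are assumptions.

import heapq

LINKS = [(1, 2), (2, 1), (2, 3), (2, 4), (4, 2), (3, 1), (3, 4)]
MOVEMENTS = [(1, 4, 10.0), (4, 1, 10.0)]   # (origin, destination, amount)

def shortest_path_cost(prices, src, dst):
    # prices: {(i, j): price per unit on link i->j}; returns the cheapest
    # cost of moving one unit from src to dst (Dijkstra, nonnegative prices).
    dist = {src: 0.0}
    frontier = [(0.0, src)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue                        # stale queue entry
        for (i, j) in LINKS:
            if i == node and (i, j) in prices:
                nd = d + prices[(i, j)]
                if nd < dist.get(j, float("inf")):
                    dist[j] = nd
                    heapq.heappush(frontier, (nd, j))
    return float("inf")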
Comparative analysis of the three market structures reveals the qualitatively distinct economic and computational behaviors realized by alternate walras configurations." }, { "figure_ref": [ "fig_0", "fig_1", "fig_0" ], "heading": "Basic Shipper Model", "publication_ref": [], "table_ref": [], "text": "The resource of primary interest in the multicommodity flow problem is movement of cargo. Because the value and cost of a cargo movement depend on location, we designate as a distinct good the capacity on each origin-destination pair in the network (see Figure 2). To capture the cost or input required to move cargo, we define another good denoting generic transportation resources. In a more concrete model, these might consist of vehicles, fuel, labor, or other factors contributing to transportation.

To decentralize the decision making, we identify each movement requirement with a distinct shipper agent. These shippers, or consumers, have an interest in moving various units of cargo between specified origins and destinations.

The interconnectedness of agents and goods defines the market configuration. Figure 3 depicts the walras configuration for the basic shipper model corresponding to the example network of Figure 2. In this model there are two shippers, S_{1,4} and S_{4,1}, where S_{i,j} denotes a shipper with a requirement to move goods from origin i to destination j. Shippers connect to goods that might serve their objectives: in this case, movement along links that belong to some simple path from the shipper's origin to its destination. In the diagram, G_{i,j} denotes the good representing an amount of cargo moved over the link i→j. G_0 denotes the special transportation resource good. Notice that the only goods of interest to both shippers are G_0, for which they both have endowments, and G_{2,3}, transportation on the link serving both origin-destination pairs.

The model we employ for transportation costs is based on a network with congestion, thus exhibiting diseconomies of scale. In other words, the marginal and average costs (in terms of transportation resources required) are both increasing in the level of service on a link. Using Harker's data, we take costs to be quadratic. The quadratic cost model is posed simply for concreteness, and does not represent any substantive claim about transportation networks. The important qualitative feature of this model (and the only one necessary for the example to work) is that it exhibits decreasing returns, a defining characteristic of congested networks. Note also that Harker's model is in terms of monetary costs, whereas we introduce an abstract input good. Let c_{i,j}(x) denote the cost in transportation resources (good G_0) required to transport x units of cargo on the link from i to j. The complete cost functions are:

$c_{1,2}(x) = c_{2,1}(x) = c_{2,4}(x) = c_{4,2}(x) = x^2 + 20x$
$c_{3,1}(x) = c_{2,3}(x) = c_{3,4}(x) = 2x^2 + 5x$

Figure 3: Walras basic shipper market configuration for the example transportation network.

Finally, each shipper's objective is to transport 10 units of cargo from its origin to its destination.

In the basic shipper model, we assume that the shippers pay proportionately (in units of G_0) for the total cost on each link. This amounts to a policy of average-cost pricing. We take the shipper's objective to be to ship as much as possible (up to its movement requirement) in the least costly manner.
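The cost model is simple enough to state directly in code. The sketch below (Python; all names are assumptions) encodes the link-cost coefficients and the average-cost price shippers face in this model, and checks that the user-equilibrium flows quoted later in the text do total roughly 1143 resource units.

COEFF = {(1, 2): (1, 20), (2, 1): (1, 20), (2, 4): (1, 20), (4, 2): (1, 20),
         (3, 1): (2, 5), (2, 3): (2, 5), (3, 4): (2, 5)}

def link_cost(link, x):
    a, b = COEFF[link]
    return a * x * x + b * x     # resources (G_0) consumed at flow x

def average_cost_price(link, x):
    a, b = COEFF[link]
    return a * x + b             # c(x)/x for the quadratic cost model

# User-equilibrium flows: each shipper sends 2.86 units via the shared link
# 2->3 and the remaining 7.14 directly (figures quoted later in the text).
ue_flows = {(1, 2): 10.0, (4, 2): 10.0, (2, 4): 7.14, (2, 1): 7.14,
            (2, 3): 5.72, (3, 4): 2.86, (3, 1): 2.86}
total = sum(link_cost(l, x) for l, x in ue_flows.items())
assert abs(total - 1143) < 1     # vs. the system optimum of about 1136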
Notice that this shipping objective is not expressible in terms of the consumer's optimization problem, equation (1), and hence this model is not technically an instance of the general-equilibrium framework.

Given a network with prices on each link, the cheapest cargo movement corresponds to the shortest path in the graph, where distances are equated with prices. Thus, for a given link, a shipper would prefer to ship its entire quota on the link if it is on the shortest path, and zero otherwise. In the case of ties, it is indifferent among the possible allocations. To bid on link (i, j), the shipper can derive the threshold price that determines whether the link is on a shortest path by taking the difference in shortest-path distance between the networks where link (i, j)'s distance is set to zero and to infinity, respectively.

In incrementally changing its bids, the shipper should also consider its outstanding bids and the current prices. The value of reserving capacity on a particular link is zero if it cannot get service on the other links on the path. Similarly, if it is already committed to shipping cargo on a parallel path, it does not gain by obtaining more capacity (even at a lower price) until it withdraws these other bids.5 Therefore, the actual demand policy of a shipper is to spend its uncommitted income on the potential flow increase (derived from maximum-flow calculations) it could obtain by purchasing capacity on the given link. It is willing to spend up to the threshold value of the link, as described above. This determines one point on its demand curve. If it has some unsatisfied requirement and uncommitted income, it also indicates a willingness to pay a lower price for a greater amount of capacity. Boundary points such as this serve to bootstrap the economy; from the initial conditions it is typically the case that no individual link contributes to overall flow between the shipper's origin and destination. Finally, the demand curve is completed by a smoothing operation on these points.

Details of the boundary points and smoothing operation are rather arbitrary, and I make no claim that this particular bidding policy is ideal or guaranteed to work for a broad class of problems. This crude approach appears sufficient for the present example and some similar ones, as long as the shippers' policies become more accurate as the prices approach equilibrium.

Walras successfully computes the competitive equilibrium for this example, which in the case of the basic shipper model corresponds to a user equilibrium (UE) for the transportation network. In the UE for the example network, each shipper sends 2.86 units of cargo over the shared link 2→3, and the remaining cargo over the direct link from location 2 to the destination. This allocation is inefficient, as its total cost is 1143 resource units, which is somewhat greater than the global minimum-cost solution of 1136 units.

5. Even if a shipper could simultaneously update its bids in all markets, it would not be a good idea to do so here. A competitive shipper would send all its cargo on the least costly path, neglecting the possibility that this demand may increase the prices so that it is no longer cheapest. The outstanding bids provide some sensitivity to this effect, as they are functions of price. But they cannot respond to changes in many prices at once, and thus the policy of updating all bids simultaneously can lead to perpetual oscillation. For example, in the network considered here, the unique competitive equilibrium has each shipper splitting its cargo between two different paths.
Policies allocating all cargo to one path can never lead to this result, and hence convergence to competitive equilibrium depends on the incrementality of bidding behavior.

In economic terms, the cause of the inefficiency is an externality with respect to usage of the shared link. Because the shippers are effectively charged average cost, which in the case of decreasing returns is below marginal cost, the price they face does not reflect the full incremental social cost of additional usage of the resource. In effect, incremental usage of the resource by one agent is subsidized by the other. The steeper the decreasing returns, the more the agents have an incentive to overutilize the resource.6 This is a simple example of the classic tragedy of the commons.

The classical remedy to such problems is to internalize the externality by allocating ownership of the shared resource to some decision maker who has the proper incentives to use it efficiently. We can implement such a solution in walras by augmenting the market structure with another type of agent." }, { "figure_ref": [], "heading": "Carrier Agents", "publication_ref": [ "b32" ], "table_ref": [ "tab_1", "tab_1", "tab_1" ], "text": "We extend the basic shipper model by introducing carriers, agents of type producer who have the capability to transport cargo units over specified links, given varying amounts of transportation resources. In the model described here, we associate one carrier with each available link. The production function for each carrier is simply the inverse of the cost function described above. To achieve a global movement of cargo, shippers obtain transportation services from carriers in exchange for the necessary transportation resources.

Let C_{i,j} denote the carrier that transports cargo from location i to location j. Each carrier C_{i,j} is connected to the auction for G_{i,j}, its output good, along with G_0, its input in the production process. Shipper agents are also connected to G_0, as they are endowed with transportation resources to exchange for transportation services. Figure 4 depicts the walras market structure when carriers are included in the economy.

Figure 4: Walras market configuration for the example transportation network in an economy with shippers and carriers.

6. Average-cost pricing is perhaps the most common mechanism for allocating costs of a shared resource. Shenker (1991) points out problems with this scheme, with respect to both efficiency and strategic behavior, in the context of allocating access to congested computer networks, a problem analogous to our transportation task.

In the case of a decreasing-returns technology, the producer's (carrier's) optimization problem has a unique solution. The optimal level of activity maximizes revenues minus costs, which occurs at the point where the output price equals marginal cost. Using this result, carriers submit supply bids specifying transportation services as a function of link prices (with the resource price fixed), and demand bids specifying required resources as a function of input prices (for the activity level computed with the output price fixed).

For example, consider carrier C_{1,2}. At output price p_{1,2} and input price p_0, the carrier's profit is $p_{1,2}\,y - p_0\,c_{1,2}(y)$, where y is the level of service it chooses to supply.
Given the cost function above, this expression is maximized at $y = (p_{1,2} - 20p_0)/(2p_0)$. Taking p_0 as fixed, the carrier submits a supply bid with y a function of p_{1,2}. On the demand side, the carrier takes p_{1,2} as fixed and submits a demand bid for enough of good G_0 to produce y, where y is treated as a function of p_0.

With the revised configuration and agent behaviors described, walras derives the system equilibrium (SE), that is, the cargo allocation minimizing overall transportation costs. The derived cargo movements are correct to within 10% in 36 bidding cycles, and to 1% in 72, where in each cycle every agent submits an average of one bid to one auction. The total cost (in units of G_0), its division between shippers' expenditures and carriers' profits, and the equilibrium prices are presented in Table 1. Data for the UE solution of the basic shipper model are included for comparison. That the decentralized process produces a global optimum is perfectly consistent with competitive behavior: the carriers price their outputs at marginal cost, and the technologies are convex.

Table 1 appears here (values not recoverable): equilibria derived by walras for the transportation example, giving for each pricing regime the total cost (TC), its division between shipper expense and carrier profit, and the equilibrium prices p_{1,2}, p_{2,1}, p_{2,3}, p_{2,4}, p_{3,1}, p_{3,4}, p_{4,2}.

As a simple check on the prices of Table 1, we can verify that $p_{2,3} + p_{3,4} = p_{2,4}$ and $p_{2,3} + p_{3,1} = p_{2,1}$. Both these relationships must hold in equilibrium (assuming all links have nonzero movements), else a shipper could reduce its cost by rerouting some cargo. Indeed, for a simple (small and symmetric) example such as this, it is easy to derive the equilibrium analytically using global equations such as these. But as argued above, it would be improper to exploit these relationships in the implementation of a truly distributed decision process.

The lesson from this exercise is that we can achieve qualitatively distinct results by simple variations in the market configuration or agent policies. From our designers' perspective, we prefer the configuration that leads to the more transportation-efficient SE. Examination of Table 1 reveals that we can achieve this result by allowing the carriers to earn nonzero profits (economically speaking, these are really rents on the fixed factor represented by the congested channel) and redistributing these profits to the shippers to cover their increased expenditures. (In the model of general equilibrium with production, consumers own shares in the producers' profits. This closes the loop so that all value is ultimately realized in consumption. We can specify these shares as part of the initial configuration, just like the endowment.) In this example, we distribute the profits evenly between the two shippers." }, { "figure_ref": [], "heading": "Arbitrageur Agents", "publication_ref": [ "b28", "b6" ], "table_ref": [], "text": "The preceding results demonstrate that walras can indeed implement a decentralized solution to the multicommodity flow problem. But the market structure in Figure 4 is not as distributed as it might be, in that (1) all agents are connected to G_0, and (2) shippers need to know about all links potentially serving their origin-destination pair. The first of these concerns is easily remedied, as the choice of a single transportation resource good was completely arbitrary. For example, it would be straightforward to consider some collection of resources (e.g., fuel, labor, vehicles), and endow each shipper with only subsets of these.

The second concern can also be addressed within walras, by introducing yet another sort of producer agent, described next.
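Before turning to those new agents, the carrier bidding rule derived above can be put into code. In this Python sketch the function names and the floor at zero are assumptions; the closed form is exactly the marginal-cost condition from the text, with a = 1 and b = 20 for link 1→2.

def carrier_supply(p_out, p_in, a=1.0, b=20.0):
    # Profit p_out*y - p_in*(a*y^2 + b*y) is maximal where output price
    # equals marginal cost: y = (p_out - b*p_in) / (2*a*p_in).
    return max(0.0, (p_out - b * p_in) / (2.0 * a * p_in))

def carrier_input_demand(p_out, p_in, a=1.0, b=20.0):
    # Demand bid for the input good G_0: resources needed to produce the
    # chosen supply level, with the output price held fixed.
    y = carrier_supply(p_out, p_in, a, b)
    return a * y * y + b * y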
The new agents, called arbitrageurs, act as specialized middlemen, monitoring isolated pieces of the network for inefficiencies. An arbitrageur A_{i,j,k} produces transportation from i to k by buying capacity from i to j and from j to k. Its production function simply specifies that the amount of its output good, G_{i,k}, is equal to the minimum of its two inputs, G_{i,j} and G_{j,k}. If $p_{i,j} + p_{j,k} < p_{i,k}$, then its production is profitable. Its bidding policy in walras is to increment its level of activity at each iteration by an amount proportional to its current profitability (or to decrement it in proportion to the loss). Such incremental behavior is necessary for all constant-returns producers in walras, as the profit-maximization problem has no interior solution in the linear case.7

To incorporate arbitrageurs into the transportation market structure, we first create new goods corresponding to the transitive closure of the transportation network. In the example network, this leads to goods for every location pair. Next, we add an arbitrageur A_{i,j,k} for every triple of locations such that (1) i→j is in the original network, and (2) there exists a path from j to k that does not traverse location i. These two conditions ensure that there is an arbitrageur A_{i,j,k} for every pair (i, k) connected by a path with more than one link, and eliminate some combinations that are either redundant or clearly unprofitable.

The revised market structure for the running example is depicted in Figure 5, with new goods and agents shaded. Some goods and agents that are inactive in the market solution have been omitted from the diagram to avoid clutter. Notice that in Figure 5 the connectivity of the shippers has been significantly decreased, as the shippers now need be aware of only the good directly serving their origin-destination pair. This dramatically simplifies their bidding problem, as they can avoid all analysis of the price network. The structure as a whole seems more distributed, as no agent is concerned with more than three goods.

7. Without such a restriction on its bidding behavior, the competitive constant-returns producer would choose to operate at a level of infinity or zero, depending on whether its activity were profitable or unprofitable at the going prices (at break-even, the producer is indifferent among all levels). This would lead to perpetual oscillation, a problem noticed (and solved) by Paul Samuelson in 1949 when he considered the use of market mechanisms to solve linear programming problems (Samuelson, 1966).

Figure 5: The revised walras market configuration with arbitrageurs.

Despite the simplified shipper behavior, walras still converges to the SE, or optimal solution, in this configuration. Although the resulting allocation of resources is identical, a qualitative change in market structure here corresponds to a qualitative change in the degree of decentralization.

In fact, the behavior of walras on the market configuration with arbitrageurs is virtually identical to a standard distributed algorithm (Gallager, 1977) for multicommodity flow (minimum delay on communication networks). In Gallager's algorithm, distributed modules expressly differentiate the cost function to derive the marginal cost of increasing flow on a communication link. Flows are adjusted up or down so as to equate the marginal costs along competing subpaths.
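The arbitrageur's incremental policy is equally compact. In this minimal Python sketch the step size eta and the floor at zero are assumptions; the text requires only that adjustment be sufficiently slow.

def arbitrageur_step(level, p_ij, p_jk, p_ik, eta=0.1):
    # Output G_{i,k} is the minimum of inputs G_{i,j} and G_{j,k}, so the
    # unit profit is p_ik - (p_ij + p_jk); adjust activity proportionally.
    unit_profit = p_ik - (p_ij + p_jk)
    return max(0.0, level + eta * unit_profit)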
Gallager's procedure provably converges to the optimal solution as long as the iterative adjustment parameter is sufficiently small. Similarly, convergence in walras for this model requires that the arbitrageurs do not adjust their activity levels too quickly in response to profit opportunities or loss situations." }, { "figure_ref": [], "heading": "Summary", "publication_ref": [], "table_ref": [], "text": "The preceding sections have developed three progressively elaborate market configurations for the multicommodity flow problem. Table 2 summarizes the size and shape of the configuration for a transportation network with V locations, E links, and M movement requirements. The basic shipper model results in the user equilibrium, while both of the augmented models produce the globally optimal system equilibrium. The carrier model requires E new producer agents to produce the superior result. The arbitrageur model adds O(VE) more producers and potentially some new goods as well, but reduces the number of goods of interest to any individual agent from O(E) to a small constant.

model                  goods    shippers   carriers  arbitrageurs
Basic shipper          E + 1    M [O(E)]   --        --
...plus carriers       E + 1    M [O(E)]   E [2]     --
...plus arbitrageurs   O(V^2)   M [2]      E [2]     O(VE) [3]

Table 2: Numbers of goods and agents for the three market configurations. For each type of agent, the figure in brackets indicates the number of goods on which each individual bids.

These market models represent three qualitatively distinct points on the spectrum of potential configurations. Hybrid models are also conceivable, for example, where a partial set of arbitrageurs is included, perhaps arranged in a hierarchy or some other regular structure. I would expect such configurations to exhibit behaviors intermediate to the specific models studied here, with respect to both equilibrium produced and degree of decentralization." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b22", "b8", "b8" ], "table_ref": [], "text": "One serious limitation of walras is the assumption that agents act competitively. As mentioned above, this behavior is rational when there are many agents, each small with respect to the overall economy. However, when an individual agent is large enough to affect prices significantly (i.e., possesses market power), it forfeits utility or profits by failing to take this into account. There are two approaches toward alleviating the restriction of perfect competition in a computational economy. First, we could simply adopt models of imperfect competition, perhaps based on specific forms of imperfection (e.g., spatial monopolistic competition) or on general game-theoretic models. Second, as architects we can configure the markets to promote competitive behavior. For example, decreasing the agents' grain size and enabling free entry of agents should enhance the degree of competition. Perhaps most interestingly, by controlling the agents' knowledge of the market structure (via standard information-encapsulation techniques), we can degrade their ability to exploit whatever market power they possess. Uncertainty has been shown to increase competitiveness among risk-averse agents in some formal bidding models (McAfee & McMillan, 1987), and in a computational environment we have substantial control over this uncertainty. The existence of competitive equilibria and efficient market allocations also depends critically on the assumption of nonincreasing returns to scale.
Congestion is a real factor in transportation networks, for example, but for many modes of transport there are often other economies of scale and density that may lead to returns that are increasing overall (Harker, 1987). Note that strategic interactions, increasing returns, and other factors degrading the effectiveness of market mechanisms also inhibit decentralization in general, and so would need to be addressed directly in any approach.

Having cast walras as a general environment for distributed planning, it is natural to ask how universal \"market-oriented programming\" is as a computational paradigm. We can characterize the computational power of this model easily enough, by correspondence to the class of convex programming problems represented by economies satisfying the classical conditions. However, the more interesting issue is how well the conceptual framework of market equilibrium corresponds to the salient features of distributed planning problems. Although it is too early to make a definitive assertion about this, it seems clear that many planning tasks are fundamentally problems in resource allocation, and that the units of distribution often correspond well with units of agency. Economics has been the most prominent (and arguably the most successful) approach to modeling resource allocation with decentralized decision making, and it is reasonable to suppose that the concepts economists find useful in the social context will prove similarly useful in our analogous computational context. Of course, just as economics is not ideal for analyzing all aspects of social interaction, we should expect that many issues in the organization of distributed planning will not be well accounted for in this framework.

Finally, the transportation network model presented here is a highly simplified version of the actual planning problem for this domain. A more realistic treatment would cover multiple commodity types, discrete movements, temporal extent, hierarchical network structure, and other critical features of the problem. Some of these may be captured by incremental extensions to the simple model, perhaps applying elaborations developed by the transportation science community. For example, many transportation models (including Harker's more elaborate formulation (Harker, 1987)) allow for variable supply and demand of the commodities and more complex shipper-carrier relationships. Concepts of spatial price equilibrium, based on markets for commodities in each location, seem to offer the most direct approach toward extending the transportation model within walras." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Distributed Optimization", "publication_ref": [ "b1", "b6" ], "table_ref": [], "text": "The techniques and models described here obviously build on much work in economics, transportation science, and operations research. The intended research contribution here is not to these fields, but rather in their application to the construction of a computational framework for decentralized decision making in general.
Nevertheless, a few words are in order regarding the relation of the approach described here to extant methods for distributed optimization.

Although the most elaborate walras model is essentially equivalent to existing algorithms for distributed multicommodity flow (Bertsekas & Tsitsiklis, 1989; Gallager, 1977), the market framework offers an approach toward extensions beyond the strict scope of this particular optimization problem. For example, we could reduce the number of arbitrageurs, and while this would eliminate the guarantees of optimality, we might still have a reasonable expectation of graceful degradation. Similarly, we could realize conceptual extensions to the structure of the problem, such as distributed production of goods in addition to transportation, by adding new types of agents. For any given extension, there may very well be a customized distributed optimization algorithm that would outperform the computational market, but coming up with this algorithm would likely involve a completely new analysis. Nevertheless, it must be stated that speculations regarding the methodological advantages of the market-oriented framework are indeed just speculations at this point, and the relative flexibility of applications programming in this paradigm must ultimately be demonstrated empirically.

Finally, there is a large literature on decomposition methods for mathematical programming problems, which is perhaps the most common approach to distributed optimization. Many of these techniques can themselves be interpreted in economic terms, using the close relationship between prices and Lagrange multipliers. Again, the main distinction of the approach advocated here is conceptual. Rather than taking a global optimization problem and decentralizing it, our aim is to provide a framework for formulating a task in a distributed manner in the first place." }, { "figure_ref": [], "heading": "Market-Based Computation", "publication_ref": [ "b3", "b20", "b30", "b24", "b37", "b17", "b39", "b18", "b21", "b33", "b26" ], "table_ref": [], "text": "The basic idea of applying economic mechanisms to coordinate distributed problem solving is not new to the AI community. Starting with the contract net (Davis & Smith, 1983), many have found the metaphor of markets appealing, and have built systems organized around markets or market-like mechanisms (Malone, Fikes, Grant, & Howard, 1988). The original contract net actually did not include any economic notions at all in its bidding mechanism; however, recent work by Sandholm (1993) has shown how cost and price can be incorporated in the contract net protocol to make it more like a true market mechanism. Miller and Drexler (Drexler & Miller, 1988; Miller & Drexler, 1988) have examined the market-based approach in depth, presenting some underlying rationale and addressing specific issues salient in a computational environment. Waldspurger, Hogg, Huberman, Kephart, and Stornetta (1992) investigated the concepts further by actually implementing market mechanisms to allocate computational resources in a distributed operating system. Researchers in distributed computing (Kurose & Simha, 1989) have also applied specialized algorithms based on economic analyses to specific resource-allocation problems arising in distributed systems. For further remarks on this line of work, see (Wellman, 1991).

Recently, Kuwabara and Ishida (1992) have experimented with demand adjustment methods for a task very similar to the multicommodity flow problem considered here.
One significant difference is that their method would consider each path in the network as a separate resource, whereas the market structures here manipulate only links or location pairs. Although they do not cast their system in a competitive-equilibrium framework, the results are congruent with those obtained by walras.

Walras is distinct from these prior efforts in two primary respects. First, it is constructed expressly in terms of concepts from general-equilibrium theory, to promote mathematical analysis of the system and facilitate the application of economic principles to architectural design. Second, walras is designed to serve as a general programming environment for implementing computational economies. Although not developed specifically to allocate computational resources, there is no reason these could not be included in market structures configured for particular application domains. Indeed, the idea of grounding measures of the value of computation in real-world values (e.g., cargo movements) follows naturally from the general-equilibrium view of interconnected markets, and is one of the more exciting prospects for future applications of walras to distributed problem solving.

Organizational theorists have studied markets as mechanisms for coordinating activities and allocating resources within firms. For example, Malone (1987) models information requirements, flexibility, and other performance characteristics of a variety of market and non-market structures. In his terminology, walras implements a centralized market, where the allocation of each good is mediated by an auction. Using such models, we can determine whether this gross form of organization is advantageous, given information about the cost of communication, the flexibility of individual modules, and other related features. In this paper, we examine in greater detail the coordination process in computational markets, elaborating on the criteria for designing decentralized allocation mechanisms. We take the distributivity constraint as exogenously imposed; when the constraint is relaxable, both organizational and economic analysis illuminate the tradeoffs underlying the mechanism design problem.

Finally, market-oriented programming shares with Shoham's agent-oriented programming (Shoham, 1993) the view that distributed problem-solving modules are best designed and understood as rational agents. The two approaches support different agent operations (transactions versus speech acts), adopt different rationality criteria, and emphasize different agent descriptors, but are ultimately aimed at achieving the same goal of specifying complex behavior in terms of agent concepts (e.g., belief, desire, capability) and social organizations. Combining individual rationality with laws of social interaction provides perhaps the most natural approach to generalizing Newell's \"knowledge level analysis\" idea (Newell, 1982) to distributed computation." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In summary, walras represents a general approach to the construction and analysis of distributed planning systems, based on general-equilibrium theory and competitive mechanisms. The approach works by deriving the competitive equilibrium corresponding to a particular configuration of agents and commodities, specified using walras's basic constructs for defining computational market structures.
In a particular realization of this approach for a simplified form of distributed transportation planning, we see that qualitative differences in economic structure (e.g., cost-sharing among shippers versus ownership of shared resources by profit-maximizing carriers) correspond to qualitatively distinct behaviors (user versus system equilibrium). This exercise demonstrates that careful design of the distributed decision structure according to economic principles can sometimes lead to effective decentralization, and that the behaviors of alternative systems can be meaningfully analyzed in economic terms.

The contribution of the work reported here lies in the idea of market-oriented programming, an algorithm for distributed computation of competitive equilibria of computational economies, and an initial illustration of the approach on a simple problem in distributed resource allocation. A great deal of additional work will be required to understand the precise capabilities and limitations of the approach, and to establish a broader methodology for configuration of computational economies." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [ "b40" ], "table_ref": [], "text": "This paper is a revised and extended version of (Wellman, 1992). I have benefited from discussions of computational economies with many colleagues, and would like to thank in particular Jon Doyle, Ed Durfee, Eli Gafni, Daphne Koller, Tracy Mullen, Anna Nagurney, Scott Shenker, Yoav Shoham, Hal Varian, Carl Waldspurger, Martin Weitzman, and the anonymous reviewers for helpful comments and suggestions." } ]
[ { "authors": "", "journal": "Cambridge University Press", "ref_id": "b0", "title": "Studies in Resource Allocation Processes", "year": "1977" }, { "authors": "D P Bertsekas; J N Tsitsiklis", "journal": "Prentice-Hall", "ref_id": "b1", "title": "Parallel and Distributed Computation", "year": "1989" }, { "authors": "S Dafermos; A Nagurney", "journal": "Transportation Science", "ref_id": "b2", "title": "Supply and demand equilibration algorithms for a class of market equilibrium problems", "year": "1989" }, { "authors": "R Davis; R G Smith", "journal": "Arti cial Intelligence", "ref_id": "b3", "title": "Negotiation as a metaphor for distributed problem solving", "year": "1983" }, { "authors": "K E Drexler; M S Miller", "journal": "Huberman", "ref_id": "b4", "title": "Incentive engineering for computational resource management", "year": "1988" }, { "authors": "A Eydeland; A Nagurney", "journal": "Computer Science in Economics and Management", "ref_id": "b5", "title": "Progressive equilibration algorithms: The case of linear transaction costs", "year": "1989" }, { "authors": "R G Gallager", "journal": "IEEE Transactions on Communications", "ref_id": "b6", "title": "A minimum delay routing algorithm using distributed computation", "year": "1977" }, { "authors": "P T Harker", "journal": "Operations Research", "ref_id": "b7", "title": "Alternative models of spatial competition", "year": "1986" }, { "authors": "P T Harker", "journal": "VNU Science Press", "ref_id": "b8", "title": "Predicting Intercity Freight Flows", "year": "1987" }, { "authors": "P T Harker", "journal": "Transportation Science", "ref_id": "b9", "title": "Multiple equilibrium behaviors on networks", "year": "1988" }, { "authors": "A Haurie; P Marcotte", "journal": "Networks", "ref_id": "b10", "title": "On the relationship between Nash-Cournot and Wardrop equilibria", "year": "1985" }, { "authors": "J R Hicks", "journal": "Oxford University Press", "ref_id": "b11", "title": "Value and Capital", "year": "1948" }, { "authors": "W Hildenbrand; A P Kirman", "journal": "North-Holland Publishing Company", "ref_id": "b12", "title": "Introduction to Equilibrium Analysis: Variations on Themes by Edgeworth and Walras", "year": "1976" }, { "authors": "B A Huberman", "journal": "North-Holland", "ref_id": "b13", "title": "The Ecology of Computation", "year": "1988" }, { "authors": "L Hurwicz", "journal": "", "ref_id": "b14", "title": "The design of resource allocation mechanisms", "year": "1973" }, { "authors": "J O Wellman Kephart; T Hogg; B A Huberman", "journal": "Physical Review A", "ref_id": "b15", "title": "Dynamics of computational ecosystems", "year": "1989" }, { "authors": "T C Koopmans", "journal": "Springer-Verlag", "ref_id": "b16", "title": "Uses of prices", "year": "1954" }, { "authors": "J F Kurose; R Simha", "journal": "IEEE Transactions on Computers", "ref_id": "b17", "title": "A microeconomic approach to optimal resource allocation in distributed computer systems", "year": "1989" }, { "authors": "K Kuwabara; T Ishida", "journal": "", "ref_id": "b18", "title": "Symbiotic approach to distributed resource allocation: Toward coordinated balancing", "year": "1992" }, { "authors": "D Lavoie; H Baetjer; W Tulloh", "journal": "Hotline on Object-Oriented Technology", "ref_id": "b19", "title": "Coping with complexity: OOPS and the economists' critique of central planning", "year": "1991" }, { "authors": "T W Malone; R E Fikes; K R Grant; M T Howard", "journal": "Huberman", "ref_id": "b20", "title": "Enterprise: A marketlike task 
scheduler for distributed computing environments", "year": "1988" }, { "authors": "T W Malone", "journal": "Management Science", "ref_id": "b21", "title": "Modeling coordination in organizations and markets", "year": "1987" }, { "authors": "R P McAfee; J McMillan", "journal": "Journal of Economic Literature", "ref_id": "b22", "title": "Auctions and bidding", "year": "1987" }, { "authors": "P Milgrom; J Roberts", "journal": "Games and Economic Behavior", "ref_id": "b23", "title": "Adaptive and sophisticated learning in normal form games", "year": "1991" }, { "authors": "M S Miller; K E Drexler", "journal": "Huberman", "ref_id": "b24", "title": "Markets and computation: Agoric open systems", "year": "1988" }, { "authors": "A Nagurney", "journal": "Kluwer Academic Publishers", "ref_id": "b25", "title": "Network Economics: A Variational Inequality Approach", "year": "1993" }, { "authors": "A Newell", "journal": "Artificial Intelligence", "ref_id": "b26", "title": "The knowledge level", "year": "1982" }, { "authors": "S Reiter", "journal": "", "ref_id": "b27", "title": "Information incentive and performance in the (new)^2 welfare economics", "year": "1986" }, { "authors": "P A Samuelson", "journal": "MIT Press", "ref_id": "b28", "title": "Market mechanisms and maximization", "year": "1949" }, { "authors": "P A Samuelson", "journal": "Journal of Economic Literature", "ref_id": "b29", "title": "Complementarity: An essay on the 40th anniversary of the Hicks-Allen revolution in demand theory", "year": "1974" }, { "authors": "T Sandholm", "journal": "AAAI", "ref_id": "b30", "title": "An implementation of the contract net protocol based on marginal cost calculations", "year": "1993" }, { "authors": "H E Scarf", "journal": "Cambridge University Press", "ref_id": "b31", "title": "The computation of equilibrium prices", "year": "1984" }, { "authors": "S Shenker", "journal": "", "ref_id": "b32", "title": "Congestion control in computer networks: An exercise in cost-sharing", "year": "1991" }, { "authors": "Y Shoham", "journal": "Artificial Intelligence", "ref_id": "b33", "title": "Agent-oriented programming", "year": "1993" }, { "authors": "J B Shoven; J Whalley", "journal": "Journal of Economic Literature", "ref_id": "b34", "title": "Applied general-equilibrium models of taxation and international trade: An introduction and survey", "year": "1984" }, { "authors": "J B Shoven; J Whalley", "journal": "Cambridge University Press", "ref_id": "b35", "title": "Applying General Equilibrium", "year": "1992" }, { "authors": "H R Varian", "journal": "W. W. Norton & Company", "ref_id": "b36", "title": "Microeconomic Analysis", "year": "1984" }, { "authors": "C A Waldspurger; T Hogg; B A Huberman; J O Kephart; S Stornetta", "journal": "IEEE Transactions on Software Engineering", "ref_id": "b37", "title": "Spawn: A distributed computational economy", "year": "1992" }, { "authors": "Z Wang; J Slagle", "journal": "", "ref_id": "b38", "title": "An object-oriented knowledge-based approach for formulating applied general equilibrium models", "year": "1993" }, { "authors": "M P Wellman", "journal": "Artificial Intelligence", "ref_id": "b39", "title": "Review of Huberman", "year": "1988" }, { "authors": "M P Wellman", "journal": "AAAI", "ref_id": "b40", "title": "A general-equilibrium approach to distributed transportation planning", "year": "1992" } ]
[ { "formula_coordinates": [ 5, 234.48, 328.56, 143.28, 30.96 ], "formula_id": "formula_0", "formula_text": "p t+1 = p t + ( n X i=1 x i n X i=1 e i ):" }, { "formula_coordinates": [ 7, 90, 93.43, 401.58, 206.77 ], "formula_id": "formula_1", "formula_text": "G 1 G 2 G k A 1 A 2 A i A n Task Agenda [1], [7], [4] p 1 p 2 p k tote board } } Figure 1:" }, { "formula_coordinates": [ 11, 186.32, 289.91, 241.2, 116.74 ], "formula_id": "formula_2", "formula_text": "G 0 G 4,2 G 2,1 G 3,1 S 4,1 G 2,4 G 1,2 S 1,4 G 2,3 G 3,4" }, { "formula_coordinates": [ 13, 144.8, 469.76, 322.76, 111.22 ], "formula_id": "formula_3", "formula_text": "C 1,2 G 0 C G 4,2 4,2 G 2,1 G 3,1 S 4,1 C 3,1 C 2,3 G 2,4 G 1,2 S 1,4 C 3,4 G 2,3 C 2,1 G 3,4 C 2,4" }, { "formula_coordinates": [ 16, 146.13, 102.04, 323.9, 182.47 ], "formula_id": "formula_4", "formula_text": "G 0 C 2,3 G 2,3 C 1,2 G 2,4 G 1,2 3,4 G 3,4 1,2,4 A S 1,4 C C 2,4 A 2,3,4 G 1,4 A 2,3,1 C C C S G G G A G 2,1 3,1 4,2 4,1 2,1 3,1 4,2 4,2,1 4,1" }, { "formula_coordinates": [ 17, 143.04, 114, 310.32, 43.44 ], "formula_id": "formula_5", "formula_text": "E + 1 M O(E)] | | : : :plus carriers E + 1 M O(E)] E 2] | : : :plus arbitrageurs O(V 2 ) M 2] E 2] O(V E) 3]" } ]
A Market-Oriented Programming Environment and its Application to Distributed Multicommodity Flow Problems
Market price systems constitute a well-understood class of mechanisms that under certain conditions provide effective decentralization of decision making with minimal communication overhead. In a market-oriented programming approach to distributed problem solving, we derive the activities and resource allocations for a set of computational agents by computing the competitive equilibrium of an artificial economy. Walras provides basic constructs for defining computational market structures, and protocols for deriving their corresponding price equilibria. In a particular realization of this approach for a form of multicommodity flow problem, we see that careful construction of the decision process according to economic principles can lead to efficient distributed resource allocation, and that the behavior of the system can be meaningfully analyzed in economic terms.
Michael P Wellman
[ { "figure_caption": "Figure 2 :2Figure 2: A simple network (from Harker (1988)).", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Walras basic shipper market con guration for the example transportation network.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Equilibria derived by walras for the transportation example. TC, MC, and AC stand for total, marginal, and average cost, respectively. TC = shipper expense carrier pro t.", "figure_data": "3;4 p 4;2", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b9", "b11", "b6", "b16", "b3" ], "table_ref": [], "text": "Mathematicians are increasingly recognizing the usefulness of experiments with computers to help advance mathematical theory. It is surprising therefore that one area of mathematics which has bene tted little from empirical results is the theory of algorithms, especially those used in AI. Since the objects of this theory are abstract descriptions of computer programs, we should in principle be able to reason about programs entirely deductively. However, such theoretical analysis is often too complex for our current mathematical tools. Where theoretical analysis is practical, it is often limited to (unrealistically) simple cases. For example, results presented in (Koutsoupias & Papadimitriou, 1992) for the greedy algorithm for satis ability do not apply to interesting and hard region of problems as described in x3.\nIn addition, actual behaviour on real problems is sometimes quite di erent to worst and average case analyses. We therefore support the calls of McGeoch (McGeoch, 1986), Hooker (Hooker, 1993) and others for the development of an empirical science of algorithms. In such a science, experiments as well as theory are used to advance our understanding of the properties of algorithms. One of the aims of this paper is to demonstrate the bene ts of such an empirical approach. We will present some surprising experimental results and demonstrate how such results can direct future e orts for a theoretical analysis.\nThe algorithm studied in this paper is GSAT, a randomized hill-climbing procedure for propositional satis ability (or SAT) (Selman, Levesque, & Mitchell, 1992;Selman & Kautz, 1993a). Propositional satis ability is the problem of deciding if there is an assignment for c 1993 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.\nthe variables in a propositional formula that makes the formula true. Recently, there has been considerable interest in GSAT as it appears to be able to solve large and di cult satisability problems beyond the range of conventional procedures like Davis-Putnam (Selman et al., 1992). We believe that the results we give here will actually apply to a larger family of procedures for satis ability called GenSAT (Gent & Walsh, 1993). Understanding such procedures more fully is of considerable practical interest since SAT is, in many ways, the archetypical (and intractable) NP-hard problem. In addition, many AI problems can be encoded quite naturally in SAT (eg. constraint satisfaction, diagnosis and vision interpretation, refutational theorem proving, planning). This paper is structured as follows. In x2 we introduce GSAT, the algorithm studied in the rest of the paper. In x3 we de ne and motivate the choice of problems used in our experiments. The experiments themselves are described in x4. These experiments provide a more complete picture of GSAT's search than previous informal accounts. The results of these experiments are analysed more closely in x5 using some powerful statistical tools. This analysis allow us to make various experimentally veri able conjectures about GSAT's search. For example, we are able to conjecture: the length of GSAT's initial hill-climbing phase; the average gradient of this phase; the simple scaling of various important features like the score (on which hill-climbing is performed) and the branching rate. 
In §6 we suggest how such results can be used to direct future theoretical analysis. Finally, in §7 we describe related work and end with some brief conclusions in §8." }, { "figure_ref": [], "heading": "GSAT", "publication_ref": [ "b16", "b3" ], "table_ref": [], "text": "GSAT is a random greedy hill-climbing procedure. GSAT deals with formulae in conjunctive normal form (CNF); a formula is in CNF iff it is a conjunction of clauses, where a clause is a disjunction of literals. GSAT starts with a randomly generated truth assignment, then hill-climbs by \"flipping\" the variable assignment which gives the largest increase in the number of clauses satisfied (called the \"score\" from now on). Given the choice between several equally good flips, GSAT picks one at random. If no flip can increase the score, then a variable is flipped which does not change the score or (failing that) which decreases the score the least. Thus GSAT starts in a random part of the search space and searches for a global solution using only local information. Despite its simplicity, this procedure has been shown to give good performance on hard satisfiability problems (Selman et al., 1992).

procedure GSAT(Σ)
  for i := 1 to Max-tries
    T := a random truth assignment
    for j := 1 to Max-flips
      if T satisfies Σ then return T
      else
        Poss-flips := set of vars which increase satisfiability most
        V := a random element of Poss-flips
        T := T with V's truth assignment flipped
    end
  end
  return \"no satisfying assignment found\"

In (Gent & Walsh, 1993) we describe a large number of experiments which suggest that neither greediness nor randomness is important for the performance of this procedure. These experiments also suggest various other conjectures. For instance, for random 3-SAT problems (see §3) the log of the runtime appears to scale with a less than linear dependency on the problem size. Conjectures such as these could, as we noted in the introduction, be very profitably used to direct future efforts to analyse GSAT theoretically. Indeed, we believe that the experiments reported here suggest various conjectures which would be useful in a proof of the relationship between runtime and problem size (see §6 for more details)." }, { "figure_ref": [], "heading": "Problem Space", "publication_ref": [ "b10", "b2", "b1" ], "table_ref": [], "text": "To be able to perform experiments on an algorithm, you need a source of problems on which to run the algorithm. Ideally the problems should come from a probability distribution with some well-defined properties, contain a few simple parameters, and be representative of problems which occur in real situations. Unfortunately, it is often difficult to meet all these criteria. In practice, one is usually forced to accept either problems from a well-defined distribution with a few simple parameters or a benchmark set of real problems, necessarily from some unknown distribution. In these experiments we adopt the former approach and use CNF formulae randomly generated according to the random k-SAT model.

Problems in random k-SAT with N variables and L clauses are generated as follows: a random subset of size k of the N variables is selected for each clause, and each variable is made positive or negative with probability 1/2. For random 3-SAT, there is a phase transition from satisfiable to unsatisfiable when L is approximately 4.3N (Mitchell, Selman, & Levesque, 1992; Larrabee & Tsuji, 1992; Crawford & Auton, 1993).
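The generation model is easy to state in code. A minimal Python sketch follows, in which the representation of clauses as tuples of signed integers is an assumption.

import random

def random_ksat(n_vars, n_clauses, k=3):
    clauses = []
    for _ in range(n_clauses):
        variables = random.sample(range(1, n_vars + 1), k)  # k distinct vars
        clause = tuple(v if random.random() < 0.5 else -v for v in variables)
        clauses.append(clause)
    return clauses

# Problems near the phase transition, e.g. N = 500 and L = 4.3N:
hard_problem = random_ksat(500, int(4.3 * 500))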
At lower L, most problems generated are under-constrained and are thus satisfiable; at higher L, most problems generated are over-constrained and are thus unsatisfiable. As with many NP-complete problems, problems in the phase transition are typically much more difficult to solve than problems away from the transition (Cheeseman, Kanefsky, & Taylor, 1991). The region L = 4.3N is thus generally considered to be a good source of hard SAT problems and has been the focus of much recent experimental effort." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_2", "fig_2", "fig_2", "fig_2", "fig_2", "fig_2" ], "heading": "GSAT's search", "publication_ref": [ "b11", "b16", "b3" ], "table_ref": [], "text": "When GSAT was first introduced, it was noted that search in each try is divided into two phases. In the first phase of a try, each flip increases the score. However, this phase is relatively short and is followed by a second phase in which most flips do not increase the score, but are instead sideways moves which leave the same number of clauses satisfied. This phase is a search of a \"plateau\" for the occasional flip that can increase the score. (Footnote 1: these observations were first reported at AAAI-92 and enlarged upon in (Gent & Walsh, 1992).) One of the aims of this paper is to improve upon such informal observations by making quantitative measurements of GSAT's search, and by using these measurements to make several experimentally testable predictions.
In our experiments, we followed three methodological principles from (McGeoch, 1986). First, we performed experiments with large problem sizes and many repetitions, to reduce variance and allow for emergent properties. Second, we sought good views of the data. That is, we looked for features of performance which are meaningful and which are as predictable as possible. Third, we analysed our results closely. Suitable analysis of data may show features which are not clear from a simple presentation. In the rest of this paper we show how these principles enabled us to make very detailed conjectures about GSAT's search.
Many features of GSAT's search space can be graphically illustrated by plotting how they vary during a try. The most obvious feature to plot is the score, the number of satisfied clauses. In our quest for a good view of GSAT's search space, we also decided to plot \"poss-flips\" at each flip: that is, the number of equally good flips between which GSAT randomly picks. This is an interesting measure since it indicates the branching rate of GSAT's search space.
We begin with one try of GSAT on a 500 variable random 3-SAT problem in the difficult region of L = 4.3N (Figure 1a). Although there is considerable variation between tries, this graph illustrates features common to all tries. Both score (in Figure 1a) and poss-flips (in Figure 1b) are plotted as percentages of their maximal values, that is L and N respectively. The percentage score starts just above 87.5%, which might seem surprisingly high. Theoretically, however, we expect a random truth assignment in k-SAT to satisfy (2^k - 1)/2^k of all clauses (in this instance, 7/8). As expected from the earlier informal description, the score climbs rapidly at first, and then flattens off as we mount the plateau. The graph is discrete since positive moves increase the score by a fixed amount, but some of this discreteness is lost due to the small scale. To illustrate the discreteness, in Figure 1b we plot the change in the number of satisfied clauses made by each flip (as its exact value, unscaled). 
Note that the x-axis for both plots in Figure 1b is the same. The behaviour of poss-flips is considerably more complicated than that of the score. It is easiest first to consider poss-flips once on the plateau. The start of plateau search, after 115 flips, coincides with a very large increase in poss-flips, corresponding to a change from the region where a small number of flips can increase the score by 1 to a region where a large number of flips can be made which leave the score unchanged. Once on the plateau, there are several sharp dips in poss-flips. These correspond to flips where an increase by 1 in the score was effected, as can be seen from Figure 1b. It seems that if you can increase the score on the plateau, you only have a very small number of ways to do it. Also, the dominance of flips which make no change in score graphically illustrates the need for such \"sideways\" flips, a need that has been noted before (Selman et al., 1992; Gent & Walsh, 1993).
Perhaps the most fascinating feature is the initial behaviour of poss-flips. There are four well defined wedges starting at 5, 16, 26, and 57 flips, with occasional sharp dips. These wedges demonstrate behaviour analogous to that of poss-flips on the plateau. The plateau spans the region where flips typically do not change the score: we call this region H_0 since hill-climbing typically makes zero change to the score. The last wedge spans the region H_1 where hill-climbing typically increases the score by 1, as can be seen very clearly from Figure 1b. Again Figure 1b shows that the next three wedges (reading right to left) span regions H_2, H_3, and H_4. As with the transition onto the plateau, the transition between each region is marked by a sharp increase in poss-flips. Dips in the wedges represent unusual flips which increase the score by more than the characteristic value for that region, just as the dips in poss-flips on the plateau represent flips where an increase in score was possible. This exact correlation can be seen clearly in Figure 1b. Note that in this experiment, in no region H_j did a change in score of j + 2 occur, and that there was no change in score of -1 at all. In addition, each wedge in poss-flips appears to decay close to linearly. This is explained by the facts that once a variable is flipped it no longer appears in poss-flips (flipping it back would decrease the score), that most of the variables in poss-flips can be flipped independently of each other, and that new variables are rarely added to poss-flips as a consequence of an earlier flip. On the plateau, however, when a variable is flipped which does not change the score, it remains in poss-flips since flipping it back also does not change the score.
To determine if this behaviour is typical, we generated 500 random 3-SAT problems with N=500 and L=4.3N, and ran 10 tries of GSAT on each problem. Figure 2a shows the mean percentage score while Figure 2b presents the mean percentage poss-flips together with the mean change in score at each flip. (The small discreteness in this figure is due to the discreteness of PostScript's plotting.)
The average percentage score is very similar to the behaviour on the individual run of Figure 1, naturally being somewhat smoother. The graph of average poss-flips seems quite different, but it is to be expected that you will neither observe the sharply defined dips in poss-flips from Figure 1b, nor the very sharply defined start to the wedges, since these happen at varying times. 
It is remarkable that the wedges are consistent enough to be visible when averaged over 5,000 tries; the smoothing in the wedges and the start of the plateau is caused by the regions not starting at exactly the same time in each try.
Figure 2 does not distinguish between satisfiable and unsatisfiable problems. There is no current technique for determining the satisfiability of 500 variable 3-SAT problems in feasible time. From instances we have been able to test, we do not believe that large differences from Figure 2 will be seen when it is possible to plot satisfiable and unsatisfiable problems separately, but this remains an interesting topic to investigate in the future.
Experiments with other values of N with the same ratio of clauses to variables demonstrated qualitatively similar behaviour. More careful analysis shows the remarkable fact that the behaviour is not only qualitatively similar, but quantitatively similar, with a simple linear dependency on N. If graphs similar to Figure 2 are plotted for each N with the x-axis scaled by N, behaviour is almost identical. To illustrate this, Figure 3 shows the mean percentage score, percentage poss-flips, and change in score, for N = 500, 750, and 1000, for L = 4.3N and for the first 0.5N flips (250 flips at N = 500). Both Figure 3a and Figure 3b demonstrate the closeness of the scaling, to the extent that they may appear to contain just one thick line. In Figure 3b there is a slight tendency for the different regions of hill-climbing to become better defined with increasing N.
The figures we have presented only reach a very early stage of plateau search. To investigate further along the plateau, we performed experiments with 100, 200, 300, 400, and 500 variables from 0 to 2.5N flips. Figure 4a shows the mean percentage score in each case, while Figure 4b shows the mean percentage poss-flips, magnified on the y-axis for clarity. Both these figures demonstrate the closeness of the scaling on the plateau. In Figure 4b the graphs are not quite so close together as in Figure 4a. The phases of hill-climbing become much better defined with increasing N. During plateau search, although separate lines are distinguishable, the difference is always considerably less than 1% of the total number of variables.
The problems used in these experiments (random 3-SAT with L=4.3N) are believed to be unusually hard and are satisfiable with probability approximately 1/2. Neither of these facts appears to be relevant to the scaling of GSAT's search. To check this we performed a similar range of experiments with a ratio of clauses to variables of 6. Although almost all such problems are unsatisfiable, we observed exactly the same scaling behaviour. The score does not reach such a high value as in Figure 4a, as is to be expected, but nevertheless shows the same linear scaling. On the plateau, the mean value of poss-flips is lower than before. We again observed this behaviour for L = 3N, where almost all problems are satisfiable. The score approaches 100% faster than before, and a higher value of poss-flips is reached on the plateau, but the decay in the value of poss-flips seen in Figure 4b does not seem to be present.
To summarise, we have shown that GSAT's hill-climbing goes through several distinct phases, and that the average behaviour of certain important features scales linearly with N. These results provide a considerable advance on previous informal descriptions of GSAT's search."
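The per-flip measurements used throughout this section (the score before each flip, the number of poss-flips, and the change in score) can be collected by instrumenting a direct implementation of the GSAT pseudocode of Section 2. The following sketch is our own, deliberately naive rendering: the O(N·L) rescoring of every variable at every flip is only for clarity, and a serious implementation would maintain per-variable score changes incrementally.

import random

def num_satisfied(clauses, assign):
    # assign maps variable -> bool; a literal +v is true iff assign[v],
    # a literal -v is true iff not assign[v]
    return sum(any((lit > 0) == assign[abs(lit)] for lit in cl)
               for cl in clauses)

def gsat_try(clauses, n_vars, max_flips, rng=random):
    # One GSAT try; trace records (score, poss-flips, best delta) per flip.
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    trace = []
    for _ in range(max_flips):
        s = num_satisfied(clauses, assign)
        if s == len(clauses):
            return assign, trace
        deltas = {}
        for v in assign:                     # score change of flipping v
            assign[v] = not assign[v]
            deltas[v] = num_satisfied(clauses, assign) - s
            assign[v] = not assign[v]
        best = max(deltas.values())          # may be 0 (sideways) or < 0
        poss = [v for v, d in deltas.items() if d == best]
        trace.append((s, len(poss), best))
        v = rng.choice(poss)                 # random choice among ties
        assign[v] = not assign[v]
    return None, trace

Averaging the traces of many tries on instances from the generator sketched in Section 3 reproduces plots of the kind shown in Figures 1 to 4.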
}, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Numerical Conjectures", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "In this section, we will show that detailed numerical conjectures can be made if the data presented graphically in Section 4 is analysed numerically. We divide our analysis into two parts: first we deal with the plateau search, where behaviour is relatively simple; then we analyse the hill-climbing search.
On the plateau, both average score and poss-flips seem to decay exponentially with a simple linear dependency on problem size. To test this, we performed regression analysis on our experimental data, using the models
S(x) = N (B - C e^(-x/(AN)))   (1)
P(x) = N (E + F e^(-x/(DN)))   (2)
where x represents the number of flips, S(x) the average score at flip x and P(x) the average number of possible flips. To determine GSAT's behaviour just on the plateau, we analysed data on mean score, starting from 0.4N flips, a time when plateau search always appears to have started (see Section 5). Our experimental data fitted the model very well. Detailed results for N = 500 are given in Table 1 to three significant figures. The values of A, B, and C change only slightly with N, providing further evidence for the scaling of GSAT's behaviour. For L = 3N the asymptotic mean percentage score is very close to 100% of clauses being satisfied, while for L = 4.3N it is approximately 99.3% of clauses and for L = 6N it is approximately 98.2% of clauses. A good fit was also found for mean poss-flips behaviour (see Table 2 for N = 500), except for L = 3N, where the mean value of poss-flips on the plateau may be constant. It seems that for L = 4.3N the asymptotic value of poss-flips is about 10% of N and that for L = 6N it is about 5% of N.
It is important to note that the behaviour we analysed was the mean behaviour over both satisfiable and unsatisfiable problems. It is likely that individual problems will exhibit similar behaviour with different asymptotes, but we do not expect even satisfiable problems to yield a mean score of 100% asymptotically. Note that as N increases a small error in percentage terms may correspond to a large error in the actual score. As a result, our predictions of asymptotic score may be inaccurate for large N, or for very large numbers of flips. Further experimentation is necessary to examine these issues in detail.
L/N | N   | A     | B     | C      | R^2
3   | 500 | 0.511 | 2.997 | 0.0428 | 0.995
4.3 | 500 | 0.566 | 4.27  | 0.0772 | 0.995
6   | 500 | 0.492 | 5.89  | 0.112  | 0.993
Table 1: Regression results for average score of GSAT. (Footnote 4: The value of R^2 is a number in the interval [0, 1] indicating how well the variance in the data is explained by the regression formula. 1 - R^2 is the ratio between the variance of the data from its predicted value, and the variance of the data from the mean of all the data. A value of R^2 close to 1 indicates that the regression formula fits the data very well.)
L/N | N   | D     | E      | F      | R^2
4.3 | 500 | 0.838 | 0.100  | 0.0348 | 0.996
6   | 500 | 0.789 | 0.0502 | 0.0373 | 0.999
Table 2: Regression results on average poss-flips of GSAT.
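Fits of this kind can be reproduced with standard nonlinear least squares. The sketch below uses SciPy's curve_fit for model (1); since no measured data is bundled here, it fabricates noisy data from the model itself purely so that the fragment runs — in practice x and y would be the flip numbers from 0.4N onwards and the mean score over many instrumented tries. The last line computes the R^2 statistic of footnote 4.

import numpy as np
from scipy.optimize import curve_fit

N = 500

def plateau_score(x, A, B, C):
    # model (1): S(x) = N * (B - C * exp(-x / (A * N)))
    return N * (B - C * np.exp(-x / (A * N)))

x = np.arange(int(0.4 * N), 1250, dtype=float)
y = plateau_score(x, 0.566, 4.27, 0.0772) + np.random.normal(0.0, 0.5, x.size)

(A, B, C), _ = curve_fit(plateau_score, x, y, p0=(0.5, 4.0, 0.1))
residual = y - plateau_score(x, A, B, C)
r2 = 1.0 - residual.var() / y.var()   # R^2 as defined in footnote 4

Model (2) is fitted in exactly the same way, with parameters (D, E, F) in place of (A, B, C).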
We have also analysed GSAT's behaviour during its hill-climbing phase. Figure 1b shows regions where most flips increase the score by 4, then by 3, then by 2, then by 1. Analysis of our data suggested that each phase lasts roughly twice the length of the previous one. This motivates the following conjecture: GSAT moves through a sequence of regions H_j for j = ..., 3, 2, 1 in which the majority of flips increase the score by j, and where the length of each region H_j is proportional to 2^(-j) (except for the region H_0, which represents plateau search).
To investigate this conjecture, we analysed 50 tries each on 20 different random 3-SAT problems at N=500 and L=4.3N. We very rarely observe flips in H_j that increase the score by less than j, and so define H_j as the region between the first flip which increases the score by exactly j and the first flip which increases the score by less than j (unless the latter actually appears before the former, in which case H_j is empty). One simple test of our conjecture is to compare the total time spent in H_j with the total time up to the end of H_j; we predict that this ratio will be 1/2. For j = 1 to 4 the mean and standard deviations of this ratio, and the length of each region, are shown in Table 3. This data supports our conjecture, although as j increases each region is slightly longer than predicted. The total length of hill-climbing at N=500 is 0.22N flips, while at N=100 it is 0.23N. This is consistent with the scaling behaviour observed in Section 4. Our conjecture has an appealing corollary: namely, that if there are i non-empty hill-climbing regions, the average change in score per flip during hill-climbing is
(1/2)·1 + (1/4)·2 + (1/8)·3 + (1/16)·4 + ... + (1/2^i)·i ≈ 2.   (3)
It follows from this that the mean gradient of the entire hill-climbing phase is approximately 2. At N=500, we observed a mean ratio of change in score per flip during hill-climbing of 1.94 with a standard deviation of 0.1. At N=100, the ratio is 1.95 with a standard deviation of 0.2.
The model presented above ignores flips in H_j which increase the score by more than j. Such flips were seen in Figure 1b in regions H_3 to H_1. In our experiment 9.8% of flips in H_1 were of size 2 and 6.3% of flips in H_2 were of size 3. However, flips of size j + 2 were very rare, forming only about 0.02% of all flips in H_1 and H_2. We conjectured that an exponential decay similar to that in H_0 occurs in each H_j. That is, we conjecture that the average change in the number of satisfied clauses from flip x to flip x + 1 in H_j is given by:
j + E_j e^(-x/(D_j N))   (4)
This might correspond to a model of GSAT's search in which there are a certain number of flips of size j + 1 in each region H_j, and the probability of making a j + 1 flip is merely dependent on the number of such flips left; the rest of the time, GSAT is obliged to make a flip of size j. Our data from 1000 tries fitted this model well, giving values of R^2 of 96.8% for H_1 and 97.5% for H_2. The regression gave estimates for the parameters of: D_1 = 0.045, E_1 = 0.25, D_2 = 0.025, E_2 = 0.15. Not surprisingly, since the region H_3 is very short, data was too noisy to obtain a better fit with the model (4) than with one of linear decay. These results support our conjecture, but more experiments on larger problems are needed to lengthen the region H_j for j ≥ 3." }, { "figure_ref": [], "heading": "Theoretical Conjectures", "publication_ref": [ "b0" ], "table_ref": [], "text": "Empirical results like those given in Section 5 can be used to direct efforts to analyse algorithms theoretically. For example, consider the plateau region of GSAT's search. If the model (1) applies also to successful tries, the asymptotic score is L, giving
S(x) = L - C N e^(-x/(AN)).
Differentiating with respect to x we get
dS(x)/dx = (C/A) e^(-x/(AN)) = (L - S(x))/(AN).
The gradient is a good approximation for D_x, the average size of a flip at x. Hence, D_x ≈ (L - S(x))/(AN). Our experiments suggest that downward flips and those of more than +1 are very rare on the plateau. 
Thus, a good (first order) approximation for D_x is as follows, where prob(D_x = j) is the probability that a flip at x is of size j:
D_x = Σ_{j=-L}^{L} j · prob(D_x = j) ≈ prob(D_x = 1).
Hence,
prob(D_x = 1) ≈ (L - S(x))/(AN).
That is, on the plateau the probability of making a flip of size +1 may be directly proportional to L - S(x), the average number of clauses remaining unsatisfied, and inversely proportional to N, the number of variables. A similar analysis and result can be given for prob(D_x = j+1) in the hill-climbing region H_j, which would explain the model (4) proposed in Section 5.
If our theoretical conjecture is correct, it can be used to show that the mean number of flips on successful tries will be proportional to N ln N. Further investigation, both experimental and theoretical, will be needed to determine the accuracy of this prediction. Our conjectures in this section should be seen as conjectures as to what a formal theory of GSAT's search might look like, and should be useful in determining results such as average runtime and the optimal setting for a parameter like Max-flips. In addition, if we can develop a model of GSAT's search in which prob(D_x = j) is related to the number of unsatisfied clauses and to N as in the above equation, then the experimentally observed exponential behaviour and linear scaling of the score will follow immediately." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b16", "b5", "b9", "b3", "b7", "b8", "b15", "b17", "b12" ], "table_ref": [], "text": "Prior to the introduction of GSAT in (Selman et al., 1992), a closely related set of procedures was studied by Gu (Gu, 1992). These procedures have a different control structure to GSAT which allows them, for instance, to make sideways moves when upwards moves are possible. This makes it difficult to compare our results directly. Nevertheless, we are confident that the approach taken here would apply equally well to these procedures, and that similar results could be expected. Another \"greedy algorithm for satisfiability\" has been analysed in (Koutsoupias & Papadimitriou, 1992), but our results are not directly applicable to it because, unlike GSAT, it disallows sideways flips.
In (Gent & Walsh, 1993) we describe an empirical study of GenSAT, a family of procedures related to GSAT. This study focuses on the importance of randomness, greediness and hill-climbing for the effectiveness of these procedures. In addition, we determine how performance depends on parameters like Max-tries and Max-flips. We also showed that certain variants of GenSAT could outperform GSAT on random problems. It would be very interesting to perform a similar analysis to that given here of these closely related procedures.
GSAT is closely related to simulated annealing (van Laarhoven & Aarts, 1987) and the Metropolis algorithm, which both use greedy local search with a randomised method of allowing non-optimal flips. Theoretical work on these algorithms has not applied to SAT problems, for example (Jerrum, 1992; Jerrum & Sorkin, 1993), while experimental studies of the relationship between GSAT and simulated annealing have as yet only reached tentative conclusions (Selman & Kautz, 1993b; Spears, 1993).
Procedures like GSAT have also been successfully applied to constraint satisfaction problems other than satisfiability. 
For example, (Minton, Johnston, Philips, & Laird, 1990) proposed a greedy local search procedure which performed well in scheduling observations on the Hubble Space Telescope, and on other constraint problems like the million-queens and 3-colourability. It would be very interesting to see how the results given here map across to these new problem domains." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We have described an empirical study of search in GSAT, an approximation procedure for satisfiability. We performed detailed analysis of the two basic phases of GSAT's search, an initial period of fast hill-climbing followed by a longer period of plateau search. We have shown that the hill-climbing phase can be broken down further into a number of distinct phases, each corresponding to progressively slower climbing, and each phase lasting twice as long as the last. We have also shown that, in certain well defined problem classes, the average behaviour of certain important features of GSAT's search (the average score and the average branching rate at a given point) scales in a remarkably simple way with the problem size. We have also demonstrated that the behaviour of these features can be modelled very well by simple exponential decay, both in the plateau and in the hill-climbing phase. Finally, we used our experiments to conjecture various properties (e.g. the probability of making a flip of a certain size) that will be useful in a theoretical analysis of GSAT. These results illustrate how carefully performed experiments can be used to guide theory, and how computers have an increasingly important rôle to play in the analysis of algorithms." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research is supported by a SERC Postdoctoral Fellowship to the first author and a HCM Postdoctoral Fellowship to the second. We thank Alan Bundy, Ian Green, and the members of the Mathematical Reasoning Group for their constructive comments and for the quadrillion CPU cycles donated to these and other experiments from SERC grant GR/H/23610. We also thank Andrew Bremner, Judith Underwood, and the reviewers of this journal for other help." } ]
[ { "authors": "", "journal": "Gent & Walsh", "ref_id": "b0", "title": "AAAI-92. These observations were enlarged upon", "year": "1992" }, { "authors": "P Cheeseman; B Kanefsky; W Taylor", "journal": "International Joint Conference on Arti cial Intelligence", "ref_id": "b1", "title": "Where the really hard problems are", "year": "1991" }, { "authors": "J Crawford; L Auton", "journal": "AAAI Press/The MIT Press", "ref_id": "b2", "title": "Experimental results on the crossover point in satisability problems", "year": "1993" }, { "authors": "I P Gent; T Walsh", "journal": "AAAI Press/The MIT Press", "ref_id": "b3", "title": "Towards an Understanding of Hill-climbing Procedures for SAT", "year": "1993" }, { "authors": "I P Gent; T Walsh", "journal": "", "ref_id": "b4", "title": "The enigma of SAT hill-climbing procedures", "year": "1992" }, { "authors": "J Gu", "journal": "SIGART Bulletin", "ref_id": "b5", "title": "E cient local search for very large-scale satis ability problems", "year": "1992" }, { "authors": "J N Hooker", "journal": "", "ref_id": "b6", "title": "Needed: An empirical science of algorithms", "year": "1993" }, { "authors": "M Jerrum", "journal": "Random Structures and Algorithms", "ref_id": "b7", "title": "Large cliques elude the Metropolis process", "year": "1992" }, { "authors": "M Jerrum; G Sorkin", "journal": "", "ref_id": "b8", "title": "Simulated annealing for graph bisection", "year": "1993" }, { "authors": "E Koutsoupias; C H Papadimitriou", "journal": "Information Processing Letters", "ref_id": "b9", "title": "On the greedy algorithm for satis ability", "year": "1992" }, { "authors": "T Larrabee; Y Tsuji", "journal": "", "ref_id": "b10", "title": "Evidence for a Satis ability Threshold for Random 3CNF Formulas", "year": "1992" }, { "authors": "C Mcgeoch", "journal": "", "ref_id": "b11", "title": "Experimental Analysis of Algorithms", "year": "1986" }, { "authors": "S Minton; M D Johnston; A B Philips; P Laird", "journal": "AAAI Press/MIT Press", "ref_id": "b12", "title": "Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method", "year": "1990" }, { "authors": "D Mitchell; B Selman; H Levesque", "journal": "AAAI Press/The MIT Press", "ref_id": "b13", "title": "Hard and easy distributions of SAT problems", "year": "1992" }, { "authors": "B Selman; H Kautz", "journal": "", "ref_id": "b14", "title": "Domain-independent extensions to GSAT: Solving large structured satis ability problems", "year": "1993" }, { "authors": "B Selman; H Kautz", "journal": "AAAI Press/The MIT Press", "ref_id": "b15", "title": "An empirical study of greedy local search for satis ability testing", "year": "1993" }, { "authors": "B Selman; H Levesque; D Mitchell", "journal": "AAAI Press/The MIT Press", "ref_id": "b16", "title": "A new method for solving hard satis ability problems", "year": "1992" }, { "authors": "W M Spears", "journal": "", "ref_id": "b17", "title": "Simulated annealing for hard satis ability problems", "year": "1993" }, { "authors": "P Van Laarhoven; E Aarts", "journal": "D. Reidel Publishing Company", "ref_id": "b18", "title": "Simulated Annealing: Theory and Applications", "year": "1987" } ]
[ { "formula_coordinates": [ 8, 234.48, 231.48, 287.76, 20.28 ], "formula_id": "formula_0", "formula_text": "P(x) = N (E + F e x D N ) (2)" }, { "formula_coordinates": [ 8, 205.2, 543.24, 195.36, 17.16 ], "formula_id": "formula_1", "formula_text": "L/N N A B C R 2" }, { "formula_coordinates": [ 10, 182.88, 608.4, 246.48, 46.68 ], "formula_id": "formula_2", "formula_text": "D x = L X j= L j prob(D x = j) = prob(D x = 1)" } ]
An Empirical Analysis of Search in GSAT
We describe an extensive study of search in GSAT, an approximation procedure for propositional satisfiability. GSAT performs greedy hill-climbing on the number of satisfied clauses in a truth assignment. Our experiments provide a more complete picture of GSAT's search than previous accounts. We describe in detail the two phases of search: rapid hill-climbing followed by a long plateau search. We demonstrate that when applied to randomly generated 3-SAT problems, there is a very simple scaling with problem size for both the mean number of satisfied clauses and the mean branching rate. Our results allow us to make detailed numerical conjectures about the length of the hill-climbing phase and the average gradient of this phase, and to conjecture that both the average score and average branching rate decay exponentially during plateau search. We end by showing how these results can be used to direct future theoretical analysis. This work provides a case study of how computer experiments can be used to improve understanding of the theoretical properties of algorithms.
Ian P Gent; Toby Walsh
[ { "figure_caption": "Figure 1 :1Figure 1: GSAT's behaviour during one try, N = 500, L = 2150, rst 250 ips", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2: Mean GSAT behaviour, N = 500, L = 4.3N, rst 250 ips", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 3: Scaling of mean GSAT behaviour, N = 500, 750, 1000, rst 0.5N ips", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Comparative and Absolute Lengths of hill-climbing phasesOur conjecture has an appealing corollary. Namely, that if there are i non-empty hill-", "figure_data": "Region All climbing H 1 H 2 H 3 H 4mean ratio s.d. mean length s.d. | | 112 7.59 0.486 0.0510 54.7 7.69 0.513 0.0672 29.5 5.12 0.564 0.0959 15.7 3.61 0.574 0.0161 7.00 2.48climbing regions, the average change in score per ip during hill-climbing is:1 2 1 + 1 4 2 + 1 8 3 + 1 16 4 + + 1 2 i i2:", "figure_id": "tab_0", "figure_label": "3", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b14" ], "table_ref": [], "text": "Much recent research in AI and Machine Learning is addressing the problem of learning relations from examples, especially under the title of Inductive Logic Programming (Muggleton, 1991). One goal of this line of research, although certainly not the only one, is the inductive synthesis of logic programs. More generally, we are interested in the construction of program development tools based on Machine Learning techniques. Such techniques now include e cient algorithms for the induction of logical descriptions of recursive relations. However, real logic programs contain features that are not purely logical, most notably the cut (!) predicate. The problem of learning programs with cut has not been studied before in Inductive Logic Programming, and this paper analyzes the di culties involved." }, { "figure_ref": [], "heading": "Why Learn Programs with Cut?", "publication_ref": [ "b6", "b16", "b15", "b19", "b7", "b10" ], "table_ref": [], "text": "There are two main motivations for learning logic programs with cut:\n1. ILP should provide practical tools for developing logic programs, in the context of some general program development methodology (e.g., (Bergadano, 1993b)); as real size logic programs normally contain cut, learning cut will be important for creating an integrated Software Engineering framework.\nc 1993 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.\n2. Extensive use of cut can make programs sensibly shorter, and the di culty of learning a given logic program is very much related to its length.\nFor both of these objectives, we need not only cuts that make the programs more e cient without changing their input-output behavior (\\green cuts\"), but also cuts that eliminate some possible computed results (\\red cuts\"). Red cuts are sometimes considered bad programming style, but are often useful. Moreover, only the red cuts are e ective in making programs shorter. Green cuts are also important, and less controversial. Once a correct program has been inferred via inductive methods, it could be made more e cient through the insertion of green cuts, either manually or by means of automated program transformation techniques (Lau & Clement, 1993).\n1.2 Why Standard Approaches Cannot be Used?\nMost Machine Learning algorithms generate rules or clauses one at a time and independently of each other: if a rule is useful (it covers some positive example) and correct (it does not cover any negative example), then it is added to the description or program which is being generated, until all positive examples have been covered. This means that we are searching a space of possible clauses, without backtracking. This is obviously a great advantage, as programs are sets of clauses, and therefore the space of possible programs is exponentially larger.\nThe one principle which allows this simpli cation of the problem is the extensional evaluation of possible clauses, used to determine whether a clause C covers an example e. The fact that a clause C covers an example e is then used as an approximation of the fact that a logic program containing C derives e. Consider, for instance, the clause C = \\p(X,Y) \", and suppose the example e is p(a,b). In order to see whether C covers e, the extensionality principle makes us evaluate any literal in as true if and only if it matches some given positive example. 
For instance, if α = q(X,Z) ∧ p(Z,Y), then the example p(a,b) is extensionally covered iff there is a ground term c such that q(a,c) and p(c,b) are given as positive examples. In particular, in order to obtain the truth value of p(c,b), we will not need to call other clauses that were learned previously. For this reason, determining whether C covers e only depends on C and on the positive examples. Therefore, the learning system will decide whether to accept C as part of the final program P independently of the other clauses P will contain.
The extensionality principle is found in Foil (Quinlan, 1990) and its derivatives, but is also used in bottom-up methods such as Golem (Muggleton & Feng, 1990). Shapiro's MIS system (Shapiro, 1983) uses it when refining clauses, although it does not when backtracing inconsistencies. We have also used an extensional evaluation of clauses in the FILP system (Bergadano & Gunetti, 1993).
When learning programs with cut, clauses are no longer independent and their standalone extensional evaluation is meaningless. When a cut predicate is evaluated, other possible clauses for proving the same goal will be ignored. This changes the meaning of these other clauses. Even if a clause extensionally covers some example e, it may be the case that the final program does not derive e, because some derivation paths have been eliminated by the evaluation of a cut predicate.
However, an exhaustive search in a space of programs is prohibitive. Learning methods, even if based on extensionality, are often considered inefficient if sufficient prior information is not available; searching for sets of clauses will be exponentially worse. This would amount to a brute-force enumeration of all possible logic programs containing cut, until a program that is consistent with the given examples is found.
1.3 Is there an Alternative Method?
Cut will only eliminate some computed results, i.e., after adding cut to some program, it may be the case that some example is no longer derived. This observation suggests a general learning strategy: a base program P is induced with standard techniques, given the positive and maybe some of the negative examples; then the remaining negative examples are ruled out by inserting cut in some clause of P. Obviously, after inserting cut, we must make sure that the positive examples may still be derived.
Given the present technology and the discussion above, this seems to be the only viable path to a possible solution. Using standard techniques, the base program P would be generated one clause at a time, so that the positive examples are extensionally covered. However, we think this view is too restrictive, as there are programs which derive all given positive examples, although they do not cover them extensionally (Bergadano, 1993a; DeRaedt, Lavrac, & Dzeroski, 1993). More generally, we consider traces of the positive examples:
Definition 1. Given a hypothesis space S of possible clauses, and an example e such that S ⊢ e, the set of clauses T ⊆ S which is used during the derivation of e is called a trace for e.
We will use as a candidate base program P any subset of S which is the union of some traces for the positive examples. If P ⊆ S extensionally covers the positive examples, then it will also be the union of such traces, but the converse is not always true. After a candidate program has been generated, an attempt is made to insert cuts so that the negative examples are not derived. 
If this is successful, we have a solution; otherwise, we backtrack to another candidate base program. We will analyze the many problems inherent in learning cut with this class of trace-based learning methods, but, as we discuss later (Section 4), the same problems need to be faced in the more restrictive framework of extensional evaluation. In other words, even if we choose to learn the base program P extensionally, and then we try to make it consistent by using cut, the same computational problems would still arise. The main difference is that standard approaches based on extensionality do not allow for backtracking and do not guarantee that a correct solution is found (Bergadano, 1993a).
As far as computational complexity is concerned, trace-based methods have a complexity standing between the search in a space of independent clauses (for the extensional methods) and the exhaustive search in a space of possible programs. We need the following:
Definition 2. Given a hypothesis space S, the depth of an example e is the maximum number of clauses in S successfully used in the derivation of e.
For example, if we are in a list processing domain, and S only contains recursive calls of the type \"P([H|T]) :- ..., P(T), ...\", then the depth of an example P(L) is the length of L. For practical program induction tasks, it is often the case that the depth of an example is related to its complexity, and not to the hypothesis space S. If d is the maximum depth for the given m positive examples, then the complexity of trace-based methods is of the order of |S|^(md), while extensional methods will just enumerate possible clauses with a complexity which is linear in |S|, and enumerating all possible programs is exponential in |S|." }, { "figure_ref": [], "heading": "A Simple Induction Procedure", "publication_ref": [], "table_ref": [], "text": "The trace-based induction procedure we analyze here takes as input a finite set of clauses S and a set of positive and negative examples E+ and E-, and tries to find a subset T of S such that T derives all the positive examples and none of the negative examples. For every positive example e+ ∈ E+, we assume that S is large enough to derive it. Moreover, we assume that all clauses in S are flattened.¹ If this is not the case, clauses are flattened as a preprocessing step.
We consider one possible proof for S ⊢ e+, and we build an intermediate program T ⊆ S containing a trace of the derivation. The same is done for the other positive examples, and the corresponding traces T are merged. Every time T is updated, it is checked against the negative examples. If some of them are derived from T, cut (!) is inserted in the antecedents of the clauses in T, so that a consistent program is found, if it exists. If this is not the case, the procedure backtracks to a different proof for S ⊢ e+. The algorithm can be informally described as sketched below. The complexity of adding cut somewhere in the trace T, so that the negative example e- is no longer derived, obviously depends only on the size of T. But this size depends on the depth of the positive examples, not on the size of the hypothesis space S. Although more clever ways of doing this can be devised, based on the particular example e-, we propose a simple enumerative technique in the implementation described in the Appendix.
(¹ A clause is flattened if it does not contain any functional symbol. Given an unflattened clause, it is always possible to flatten it, by turning functions into new predicates with an additional argument representing the result of the function, and vice versa (Rouveirol, in press).)" }
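One possible rendering of this procedure, in Python, is sketched below. It is ours, not the authors' implementation: the helpers proofs, derives and with_cuts stand in for the underlying Prolog machinery (SLD proof enumeration, derivation testing, and cut insertion by enumeration) and are assumptions; only the control structure — backtracking over proofs, merging traces, checking negatives, inserting cuts — is taken from the text.

from itertools import product

def induce(S, pos, neg, proofs, derives, with_cuts):
    # proofs(S, e): iterable of traces (ordered clause tuples) proving e from S
    # derives(P, e): True iff program P derives example e
    # with_cuts(T): iterable of variants of T with cuts inserted in antecedents
    for choice in product(*(list(proofs(S, e)) for e in pos)):
        T = []                                  # merge traces, keeping order
        for trace in choice:
            T.extend(c for c in trace if c not in T)
        if not any(derives(T, e) for e in neg):
            return T                            # already consistent
        for P in with_cuts(T):                  # enumerate cut positions
            if all(derives(P, e) for e in pos) and \
               not any(derives(P, e) for e in neg):
                return P
    return None                                 # no solution in this space

Exhausting with_cuts(T) for one merged trace T corresponds to the backtracking step of the prose: the outer loop then moves on to a different combination of proofs.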
, { "figure_ref": [], "heading": "Example: Simplifying a List", "publication_ref": [ "b8", "b8" ], "table_ref": [], "text": "In this section we show an example of the use of the induction procedure to learn the logic program \"simplify\". Simplify takes as input a list whose members may be lists, and transforms it into a \"flattened\" list of single members, containing no repetitions and no lists as members. This program appears as exercise number 25 in (Coelho & Cotta, 1988); it is composed of nine clauses (plus the clauses for append and member), six of them are recursive, one is doubly recursive, and cut is extensively used. Even if simplify is not a very complex logic program, it is more complex than the usual ILP test cases. For instance, the quicksort and partition program, which is very often used, is composed of only five clauses (plus those for append), and three of them are recursive. Moreover, note that the conciseness of simplify is essentially due to the extensive use of cut. Without cut, this program would be much longer. In general, the longer a logic program, the more difficult it is to learn.
As a consequence, we start with a relatively strong bias; suppose that the following hypothesis space of N=8449 possible clauses is defined by the user:
The clause \"simplify(L,NL) :- flatten(L,L1), remove(L1,NL).\"
All clauses whose head is \"flatten(X,L)\" and whose body is composed of a conjunction of any of the following literals: head(X,H), tail(X,L1), equal(X,[L1|T]), null(T), null(H), null(L1), equal(X,[L1]), flatten(H,X1), flatten(L1,X2), append(X1,X2,L), assign(X1,L), assign(X2,L), list(X,L).
All clauses whose head is \"remove(IL,OL)\" and whose body is composed of a conjunction of any of the following literals: cons(X,N,OL), null(IL), assign([],OL), head(IL,X), tail(IL,L), member(X,L), remove(L,OL), remove(L,N).
The correct clauses for null, head, tail, equal, assign, member, append are given:
null([]). head([H|_],H). tail([_|T],T). equal(X,X). assign(X,X). member(X,[X|_]). member(X,[_|T]) :- member(X,T).
append([],Z,Z). append([H|X],Y,[H|Z]) :- append(X,Y,Z).
By using various kinds of constraints, the initial number of clauses can be strongly reduced. Possible constraints are the following:
Once an output is produced it must not be instantiated again. This means that any variable cannot occur as output in the antecedent more than once.
Inputs must be used: all input variables in the head of a clause must also occur in its antecedent.
Some conjunctions of literals are ruled out because they can never be true, e.g. null(IL) ∧ head(IL,X).
By applying various combinations of these constraints it is possible to strongly restrict the initial hypothesis space, which is then given in input to the learning procedure. The set of positive and negative examples used in the learning task is:
simplify pos([[],[b,[a,a]],[]], [b,a]). remove pos([a,a], [a]). (simplify neg([[],[b,[a,a]],[]], X), not equal(X,[b,a])). simplify neg([[a,b,a],[]], [a,[b,a]]). remove neg([a,a], [a,a]).
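Before looking at the learned clauses, it may help to fix the intended input-output behaviour of the target predicates. The following small Python model is our own rendering of that behaviour (it is not the learned logic program, and the bracketings of the examples above are our reconstruction); it reproduces the positive examples:

def flatten(x):
    # nested lists -> flat list of atoms; empty lists simply vanish
    if not isinstance(x, list):
        return [x]
    return [a for item in x for a in flatten(item)]

def remove(xs):
    # drop duplicates, keeping the last occurrence of each element
    # (an element is skipped while it still occurs later in the list)
    return [x for i, x in enumerate(xs) if x not in xs[i + 1:]]

def simplify(l):
    return remove(flatten(l))

assert simplify([[], ['b', ['a', 'a']], []]) == ['b', 'a']
assert remove(['a', 'a']) == ['a']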
Note that we define some negative examples of simplify to be all the examples with the same input as a given positive example and a different output, for instance simplify neg([[],[b,[a,a]],[]], [a,b]). Obviously, it is also possible to give negative examples as normal ground literals. The learning procedure outputs the program for simplify reported below, which turns out to be substantially equivalent to the one described in (Coelho & Cotta, 1988) (we have kept clauses unflattened).
simplify(L,NL) :- flatten(L,L1), remove(L1,NL).
flatten(X,L) :- equal(X,[L1|T]), null(T), !, flatten(L1,X2), assign(X2,L).
flatten(X,L) :- head(X,H), tail(X,L1), null(H), !, flatten(L1,X2), assign(X2,L).
flatten(X,L) :- equal(X,[L1]), !, flatten(L1,X2), assign(X2,L).
flatten(X,L) :- head(X,H), tail(X,L1), !, flatten(H,X1), !, flatten(L1,X2), append(X1,X2,L).
flatten(X,L) :- list(X,L).
remove(IL,OL) :- head(IL,X), tail(IL,L), member(X,L), !, remove(L,OL).
remove(IL,OL) :- head(IL,X), tail(IL,L), remove(L,N), cons(X,N,OL).
remove(IL,OL) :- null(IL), assign([],OL).
The learning task takes about 44 seconds on our implementation. However, this is obtained under some special conditions, which are thoroughly discussed in the next sections:
All the constraints listed above are applied, so that the final hypothesis space is reduced to fewer than one hundred clauses.
Clauses in the hypothesis space are generated in the correct order, as they must appear in the final program. Moreover, literals in each clause are in the correct position. This is important, since in a logic program with cut the relative position of clauses and literals is significant. As a consequence, we can learn simplify without having to test for different clause and literal orderings (see subsections 4.2 and 4.5).
We tell the learning procedure to use at most two cuts per clause. This seems to be quite an intuitive constraint since, in fact, many classical logic programs have no more than one cut per clause (see subsections 4.1 and 5.4)." }, { "figure_ref": [], "heading": "Problems", "publication_ref": [], "table_ref": [], "text": "Experiments with the above induction procedure have shown that many problems arise when learning logic programs containing cut. In the following, we analyze these problems, and this is a major contribution of the present paper. As cut cannot be evaluated extensionally, this analysis is general, and does not depend on the specific induction method adopted. Some possible partial solutions will be discussed in Section 5." }, { "figure_ref": [], "heading": "Problem 1: Intensional Evaluation, Backtracking and Cut", "publication_ref": [], "table_ref": [], "text": "The learning procedure of Section 2 is very simple, but it can be inefficient. However, we believe this is common to every intensional method, because clauses cannot be learned independently of one another. As a consequence, backtracking cannot be avoided and this can have some impact on the complexity of the learning process. Moreover, cut must be added to every trace covering negative examples. If no constraints are in force, we can range from only one cut in the whole trace to a cut between each two literals of each clause in the trace. Clearly, the number of possibilities is exponential in the number of literals in the trace. Fortunately, this number is usually much smaller than the size of the hypothesis space, as it depends on the depth of the positive examples. However, backtracking also has some advantages; in particular, it can be useful to search for alternative solutions. 
These alternative programs can then be compared on the basis of any required characteristic, such as simplicity or efficiency. For example, using backtracking we discovered a version of simplify equivalent to the one given but without the cut predicate between the two recursive calls of the fourth clause of flatten." }, { "figure_ref": [], "heading": "Problem 2: Ordering of Clauses in the Trace", "publication_ref": [], "table_ref": [], "text": "In a logic program containing cut, the mutual position of clauses is significant, and a different ordering can lead to a different (perhaps wrong) behavior of the program. For example, the following program for intersection:
c1) int(X,S2,Y) :- null(X), null(Y).
c2) int(X,S2,Y) :- head(X,H), tail(X,Tail), member(H,S2), !, int(Tail,S2,S), cons(H,S,Y).
c3) int(X,S2,Y) :- head(X,H), tail(X,Tail), int(Tail,S2,Y).
behaves correctly only if c2 comes before c3. Suppose the hypothesis space given in input to the induction procedure consists of the same three clauses as above, but with c3 before c2. If ¬int([a],[a],[]) is given as a negative example, then the learning task fails, because clauses c1 and c3 derive that example.
In other words, learning a program containing cut means learning not only a set of clauses, but also a specific ordering for those clauses. In terms of our induction procedure this means that for every trace T covering some negative example, we must check not only every position for inserting cuts, but also every possible clause ordering in the trace. This \"generate and test\" behavior is not difficult to implement, but it can dramatically decrease the performance of the learning task. In the worst case all possible permutations must be generated and checked, and this requires a time proportional to (md)! for a trace of md clauses. (Footnote 2: it must be noted that if we are learning programs for two different predicates, of j and k clauses respectively (that is, md = j+k), then we have to consider not (j+k)! different programs, but only j!+k!. We can do better if, inside a program, it is known that non-recursive clauses have a fixed position, and can be put before or after all the recursive clauses.)
The necessity to test for different permutations of clauses in a trace is a primary source of inefficiency when learning programs with cut, and probably the most difficult problem to solve." }, { "figure_ref": [], "heading": "Problem 3: Kinds of Given Examples", "publication_ref": [], "table_ref": [], "text": "Our induction procedure is only able to learn programs which are traces, i.e. programs where every clause is used to derive at least one positive example. When learning definite clauses, this is not a problem, because derivation is monotone, and for every program P, complete and consistent w.r.t. the given examples, there is a program P′ ⊆ P which is also complete and consistent and is a trace. On the other hand, when learning clauses containing cut, it may happen that the only complete and consistent program(s) in the hypothesis space neither is a trace, nor contains one that can be extended to it. This is because derivation is no longer monotone and it can be the case that a negative example is derived by a set of clauses, but not by a superset of them, as in the following simple example:
S = { sum(A,B,C) :- A>0, !, M is A-1, sum(M,B,N), C is N+1.
sum(A,B,C) :- C is B. }
sum pos(0,2,2), sum neg(2,2,2).
The two clauses in the hypothesis space represent a complete and consistent program for the given examples, but our procedure is unable to learn it. Observe that the negative example is derived by the second clause, which is a trace for the positive example, but not by the first and the second together.
This problem can be avoided if we require that, for every negative example, a corresponding positive example with the same input be given (in the above case, the example required is sum pos(2,2,4)).
In this way, if a complete program exists in the hypothesis space, then it is also a trace, and can be learned. Then it can be made consistent using cut, in order to rule out the derivation of negative examples. The constraint on positive and negative examples seems to be quite intuitive. In fact, when writing a program, a corresponding positive example is normally easy to provide for any input on which a negative example is stated, since the intended output for that input is known." }, { "figure_ref": [], "heading": "Problem 4: Ordering of Given Examples", "publication_ref": [], "table_ref": [], "text": "When learning clauses with cut, even the order of the positive examples may be significant.
In the example above, if sum pos(2,2,4) comes after sum pos(0,2,2) then the learning task fails to learn a correct program for sum, because it cannot find a program consistent w.r.t. the first positive example and the negative one(s).
In general, for a given set of m positive examples this problem can be remedied by testing different example orderings. Again, in the worst case k! different orderings of a set of k positive examples must be checked. Moreover, in some situations a favorable ordering does not exist. Consider the following hypothesis space:
c1) int(X,Y,W) :- head(X,A), tail(X,B), notmember(A,Y), int(B,Y,W).
c2) int(X,Y,W) :- head(X,A), tail(X,B), notmember(A,Y), !, int(B,Y,W).
c3) int(X,Y,Z) :- head(X,A), tail(X,B), int(B,Y,W), cons(A,W,Z).
c4) int(X,Y,Z) :- head(X,A), tail(X,B), !, int(B,Y,W), cons(A,W,Z).
c5) int(X,Y,Z) :- null(Z).
together with the set of examples:
e1) int pos([a],[b],[]). e2) int pos([a],[a],[a]). e3) int neg([a],[b],[a]). e4) int neg([a],[a],[]).
Our induction procedure will not be able to find a correct program for any ordering of the two positive examples, even if such a program does exist ([c2,c4,c5]). This program is the union of two traces: [c2,c5], which covers e1, and [c4,c5], which covers e2. Both of these traces are inconsistent, because the first covers e4, and the second covers e3. This problem can be remedied only if all the positive examples are derived before the check against negative examples is done.
In fact, the rst partial trace is responsible for this inconsistency, and hence the time used to learn c 0 3 ] is totally wasted.\nHere it is also possible to understand why we need attened clauses. Consider the following program for intersection, which is equivalent to c 2 ,c 4 ,c 5 ], but with the three clauses un attened: ,]). Now, this program covers int neg( a], a], ]), i.e. u 2 ,u 4 ,u 5 ] `int( a], a], ]). In fact, clause u 2 fails on this example because a is a member of a]. Clause u 4 fails because the empty list cannot be matched with AjW]. But clause u 5 succeeds because its arguments match those of the negative example. As a consequence, this program would be rejected by the induction procedure.\nu 2 ) int( AjB],Y,W) :-notmember(A,Y), !, int(B,Y,W). u 4 ) int( AjB],Y, AjW]) :-!, int(B,Y,W). u 5 ) int( ,\nThe problem is that, if we use un attened clauses, it may happen that a clause body is not evaluated because an example does not match the head of the clause. As a consequence, possible cuts in that clause are not evaluated and cannot in uence the behavior of the entire program. In our example, the cut in clause u 4 has no e ect because the output argument of int( a], a], ]) does not match AjW], and the body of u 4 is not evaluated at all. Then u 5 is red and the negative example is covered. In the attened version, clause c 4 fails only when cons(a, ], ]) is reached, but at that point a cut is in force and clause c 5 cannot be activated. Note that program u 2 ,u 4 ,u 5 ] behaves correctly on the query int( a], a],X), and gives X= a] as the only output." }, { "figure_ref": [], "heading": "Problem 5: Ordering of Literals", "publication_ref": [], "table_ref": [], "text": "Even the relative position of literals and cut in a clause is signi cant. Consider again the correct program for intersection as above ( c 2 ,c 4 ,c 5 ]), but with c 4 modi ed by putting the cons literal in front of the antecedent: c 0 4 ) int(X,Y,Z) :-cons(A,W,Z), head(X,A), tail(X,B), int(B,Y,W).\nThen, there is no way to get a correct program for intersection using this clause. To rule out the negative example int neg( a], a], ]) we must put a cut before the cons predicate, in order to prevent the activation of c 5 . But, then, some positive examples are no longer covered, such as int pos( a], ], ]). In fact, we have a wrong behavior every time clause c 0 4 is called and fails, since it prevents the activation on c 5 . In general, this problem cannot be avoided even by reordering clauses: if we put c 0 4 after c 2 and c 5 , then int neg( a], a], ]) will be covered. As a consequence, we should also test for every possible permutation of literals in every clause of a candidate program." }, { "figure_ref": [], "heading": "Situations where Learning Cut is still Practical", "publication_ref": [], "table_ref": [], "text": "From the above analysis, learning cut appears to be di cult since, in general, a learning procedure should be able to backtrack on the candidate base programs (e.g., traces), on the position of cut(s) in the program, on the order of the clauses in the program, on the order of literals in the clauses and on the order of given positive examples. However, we have spotted some general conditions at which learning cut could still be practical. Clearly, these conditions cannot be a nal solution to learning cut, but, if applicable, can alleviate the computational problems of the task." 
}, { "figure_ref": [], "heading": "Small Hypothesis Space", "publication_ref": [ "b15", "b11", "b7", "b18", "b9", "b7" ], "table_ref": [], "text": "First of all, a restricted hypothesis space is necessary. If clauses cannot be learned independently of one another, a small hypothesis space would help to limit the backtracking required on candidate traces (problem 1). Moreover, even the number of clauses in a trace would be probably smaller, and hence also the number of di erent permutations and the number of di erent positions for inserted cuts (problems 2 and 1). A small trace would also have a slight positive impact on the need to test for di erent literal orderings in clauses (problem 5).\nIn general, many kinds of constraints can be applied to keep a hypothesis space small, such as ij-determinism (Muggleton & Feng, 1990), rule sets and schemata (Kietz & Wrobel, 1991;Bergadano & Gunetti, 1993), determinations (Russell, 1988), locality (Cohen, 1993), etc (in fact, some of these restrictions and others, such as those listed in Section 3, are available in the actual implementation of our procedure -see the Appendix4 ). Moreover, candidate recursive clauses must be designed so that no in nite chains of recursive calls can take place (Bergadano & Gunetti, 1993) (otherwise the learning task itself could be non-terminating). In general, the number of possible recursive calls must be kept small, in order to avoid too much backtracking when searching for possible traces. However, general constraints may not be su cient. The hypothesis space must be designed carefully from the very beginning, and this can be di cult. In the example of learning simplify an initial hypothesis space of \\only\" 8449 clauses was obtained specifying not only the set of required predicates, but even the variables occurring in every literal.\nIf clauses cannot be learned independently, experiments have shown to us that a dramatic improvement of the learning task can be obtained by generating the clauses in the hypothesis space so that recursive clauses, and in general more complex clauses, are taken into consideration after the simpler and non-recursive ones. Since simpler and non recursive clauses require less time to be evaluated, they will have a small impact on the learning time. Moreover, learning simpler clauses (i.e. shorter) also alleviates problem 5.\nFinally, it must be noted that our induction procedure does not necessarily require that the hypothesis space S of possible clauses be represented explicitly. The learning task could start with an empty set S and an implicit description of the hypothesis space, for example the one given in Section 3. When a positive example cannot be derived from S, a new clause is asked for to a clause generator and added to S. This step is repeated until the example is derivable from the updated S, and then the learning task can proceed normally." }, { "figure_ref": [], "heading": "Simple Examples", "publication_ref": [], "table_ref": [], "text": "Another improvement can be achieved by using examples that are as simple as possible. In fact, each example which may involve a recursive call is potentially responsible for the activation of all the corresponding clauses in the hypothesis space. The more complex the example, the larger the number of consecutive recursive activations of clauses and the larger the number of traces to be considered for backtracking (problem 1). 
For instance, to learn the append relation, it may be sufficient to use an example like append([a],[b],[a,b]) instead of one like append([a,b,c,d],[b],[a,b,c,d,b]). Since simple examples would probably require a smaller number of different clauses to be derived, this would result in smaller traces, alleviating the problem of permutation of clauses and literals in a trace (problems 2 and 5) and decreasing the number of positions for cuts (problem 1). Since all possible orderings of the set of positive examples may have to be checked, a small number of examples is also a solution to problem 4. Fortunately, experiments have shown that normally very few positive examples are needed to learn a program, and hence the corresponding number of different orderings is, in any case, small. Moreover, since in our method a positive example is sufficient to learn all the clauses necessary to derive it, most of the time a complete program can be learned using only one well chosen example. If such an example can be found (as in the case of the learning task of Section 3, where only one example of simplify and one of remove are given), the computational problem of testing different example orderings is automatically solved." }, { "figure_ref": [], "heading": "Small Number of Examples", "publication_ref": [ "b13" ], "table_ref": [], "text": "However, it must be noted that, in general, a small number of examples may not be sufficient, except for very simple programs. In fact, if we want to learn logic programs such as member, append, reverse and so on, then any example involving recursion will be sufficient. But for more complex programs the choice may not be trivial. For example, our procedure is able to learn the quicksort (plus partition) program with only one \"good\" example. But if one does not know how quicksort and partition work, it is likely that she or he will provide an example allowing to learn only a partial description of partition. This is particularly clear in the example of simplify. Had we used the positive example simplify_pos([[],[b,[a,a]]],[b,a]) (which is very close to the one effectively used), the first clause of flatten would not have been learned. In other words, to give few examples we must give good examples, and often this is possible only by having in mind (at least partially and in an informal way) the target program. Moreover, for complex programs, good examples can mean complex examples, and this is in contrast with the previous requirement. For further studies of learning from good examples we refer the reader to the work of Ling (1991) and Aha, Ling, Matwin and Lapointe (1993)." }, { "figure_ref": [], "heading": "Constrained Positions for Cut and Literals", "publication_ref": [], "table_ref": [], "text": "Experiments have shown that it is not practical to allow the learning procedure to test all possible positions of cut in a trace, even if we are able to keep the number of clauses in a trace small. The user must be able to indicate the positions where a cut is allowed to occur, e.g., at the beginning of a clause body, or before a recursive call. In this case, many alternative programs with cut are automatically ruled out and thus do not have to be tested against the negative examples. It may also be useful to limit the maximum number of cuts per clause or per trace. For example, most of the time one cut per clause can be sufficient to learn a correct program. In the actual implementation of our procedure, it is in fact possible to specify the exact position of cut w.r.t.
a literal or a group of literals within each clause of the hypothesis space, when this information is known.

To eliminate the need to test for different orderings of literals (problem 5), we may also impose a particular global order, which must be maintained in every clause of the hypothesis space. However, this requires a deep knowledge of the program we want; otherwise some (or even all) solutions will be lost. Moreover, this solution can be in contrast with the use of constrained positions for cut, since a solution program for a particular literal ordering and for particular positions of cuts may not exist." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Our induction procedure is based on an intensional evaluation of clauses. Since the cut predicate has no declarative meaning, we believe that intensional evaluation of clauses cannot be abandoned, independently of the kind of learning method adopted. This can decrease the performance of the learning task, compared with extensional methods, which examine clauses one at a time without backtracking. However, the computational problems outlined in Section 4 remain even if we choose to learn a complete program extensionally, and then try to make it consistent by inserting cut. The only difference is that we do not have backtracking (problem 1), but the situation is probably worse, since extensional methods can fail to learn a complete program even if it exists in the hypothesis space (Bergadano, 1993a).

Even if the ability to learn clauses containing procedural predicates like cut seems to be fundamental to learning \"real\" logic programs, in particular short and efficient programs, many problems influencing the complexity of the learning task must be faced. These include the number and the relative ordering of clauses and literals in the hypothesis space, and the kind and the relative ordering of given examples. Such problems seem to be related to the need for an intensional evaluation of clauses in general, and not to the particular learning method adopted. Even just to alleviate these problems, it seems necessary to know a lot about the target program. An alternative solution is simply to ignore some of the problems, that is, to avoid testing for different clause and/or literal and/or example orderings. Clearly, in this way the learning process can become feasible, but it can fail to find a solution even when one exists. However, many ILP systems (such as Foil) adopt such an \"incomplete-but-fast\" approach, which is guided by heuristic information.

As a consequence, we view the results presented in this paper as, at least partially, negative. The problems we raised appear computationally difficult, and suggest that attention should be restricted to purely declarative logic languages, which are, in any case, sufficiently expressive." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was in part supported by BRA ESPRIT project 6020 on Inductive Logic Programming." }, { "figure_ref": [], "heading": "Appendix A", "publication_ref": [], "table_ref": [], "text": "The induction procedure of Section 2 is written in C-Prolog (interpreted) and runs on a SUN SPARCstation 1. We are planning to translate it into QUINTUS Prolog. This Appendix contains a simplified description of its implementation.
As a preliminary step, in order to record a trace of the clauses deriving a positive example e+, every clause in the hypothesis space S must be numbered and modified by adding two literals to its body. The first one, allowed(n,m), is used to activate only the clauses which must be checked against the negative examples. The second one, marker(n), is used to remember that clause number n has been successfully used while deriving e+. Hence, in general, a clause in the hypothesis space S takes the following form:

P(X1,...,Xm) :- allowed(n,m), Body, marker(n).

where Body is the actual body of the clause, n is the number of the clause in the set and m is a number used to deal with cuts. For every clause n, the one without cut is augmented with allowed(n,0), while those containing a cut somewhere in their body are augmented with allowed(n,1), allowed(n,2), and so on. Moreover, for every augmented clause as above, a fact \"alt(n,m).\" is inserted in S, in order to implement an enumeration mechanism. (We assume clauses in the hypothesis space to be flattened.)

A simplified (but running) version of the learning algorithm is reported below. In the algorithm, the output, if any, is the variable Trace containing the list of the (numbers of the) clauses representing the learned program P. By using the backtracking mechanism of Prolog, more than one solution (trace) can be found. We assume the two predicates listpositive and listnegative build a list of the given positive and negative examples, respectively. The file containing the set of clauses S is consulted, a list of the positive examples is built, and the tracer procedure is called on that list. For every positive example, tracer calls the example itself, firing all the clauses in S that may be resolved against that example.

Observe that, initially, an allowed(X,0) predicate is asserted in the database: in this way only clauses not containing a cut are allowed to be used (this is because clauses with cut are employed only if some negative example is derived). Then, a trace, if any, of (the numbers associated to) the clauses successfully used in the derivation of that example is built, using the setof predicate. The notneg procedure works as follows. First, only the clauses in the trace are allowed to be checked against the negative examples, by retracting the allowed(X,0) clause and asserting an allowed(n,0) if the n-th clause (without cut) is in the trace. This is done with the prep and assertem predicates. Then a list of the negative examples is formed and we check whether they can be derived from the clauses in the trace. If at least one negative example is covered (i.e., if trynegs fails), then we backtrack to the prep procedure (backtracking point 2), where a clause of the trace is substituted with an equivalent one but with cut inserted somewhere (or in a different position). If no correct program can be found in such a way by trying all possible alternatives (i.e., by using cut in all possible ways), notneg fails, and backtracking to backtracking point 1 occurs, where another trace is searched for. Otherwise, all clauses in S without cut are reactivated by asserting allowed(X,0) again, and the next positive example is considered. Note that trypos is used in notneg to verify whether a modified trace still derives the set of positive examples derived initially. The possibility to substitute clauses in the current trace with others having cut inserted somewhere is achieved through the alt predicate in the assertem procedure.
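To make the clause augmentation concrete, here is a minimal sketch in our own simplified form; the clause numbers, the member/2 example, and the record-keeping via assertz are all our assumptions (the actual system is more elaborate):

:- dynamic allowed/2, used/1.

% Two hypothetical augmented clauses (numbers 3 and 4, cut-free variants m = 0):
member(X, [X|_]) :- allowed(3, 0), marker(3).
member(X, [_|T]) :- allowed(4, 0), member(X, T), marker(4).
alt(3, 0).
alt(4, 0).

marker(N) :- used(N), !.            % remember that clause N was used
marker(N) :- assertz(used(N)).

% Tracing one positive example:
% ?- assertz(allowed(_, 0)),        % activate all cut-free clauses
%    member(a, [b, a]),
%    setof(N, used(N), Trace).      % Trace = [3, 4]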
Finally, note that this simplified version of the learning procedure is not able to generate and test different orderings of clauses in a trace or different orderings of literals in each clause, nor to use different orderings for the set of positive examples.

In order to derive all the positive examples before the check against the negative ones (see Subsection 4.4), we must change the first clause of the tracer procedure into:

tracer([Pos1, ..., Posn]) :- Pos1, ..., Posn, setof(L,trace(L),T), notneg(T).

The actual implementation of the above induction procedure is available through ftp. For further information contact gunetti@di.unito.it." } ]
[ { "authors": "F. Bergadano", "journal": "IEEE Transactions on Data and Knowledge Engineering", "ref_id": "b5", "title": "Inductive database relations", "year": "1993" }, { "authors": "F. Bergadano", "journal": "", "ref_id": "b6", "title": "Test Case Generation by Means of Learning Techniques", "year": "1993" }, { "authors": "F. Bergadano; D. Gunetti", "journal": "", "ref_id": "b7", "title": "An interactive system to learn functional logic programs", "year": "1993" }, { "authors": "H. Coelho; J. C. Cotta", "journal": "Springer-Verlag", "ref_id": "b8", "title": "Prolog by Example: How to Learn, Teach and Use It", "year": "1988" }, { "authors": "W. Cohen", "journal": "", "ref_id": "b9", "title": "Rapid Prototyping of ILP Systems Using Explicit Bias", "year": "1993" }, { "authors": "L. De Raedt; N. Lavrac; S. Dzeroski", "journal": "", "ref_id": "b10", "title": "Multiple predicate learning", "year": "1993" }, { "authors": "J. U. Kietz; S. Wrobel", "journal": "Academic Press", "ref_id": "b11", "title": "Controlling the Complexity of Learning in Logic through Syntactic and Task-Oriented Models", "year": "1991" }, { "authors": "", "journal": "Springer-Verlag", "ref_id": "b12", "title": "Logic Program Synthesis and Transformation", "year": "1993" }, { "authors": "X. C. Ling", "journal": "", "ref_id": "b13", "title": "Learning from Good Examples", "year": "1991" }, { "authors": "S. Muggleton", "journal": "Academic Press", "ref_id": "b14", "title": "Inductive Logic Programming", "year": "1991" }, { "authors": "S. Muggleton; C. Feng", "journal": "", "ref_id": "b15", "title": "Efficient Induction of Logic Programs", "year": "1990" }, { "authors": "R. Quinlan", "journal": "Machine Learning", "ref_id": "b16", "title": "Learning Logical Definitions from Relations", "year": "1990" }, { "authors": "C. Rouveirol", "journal": "Machine Learning", "ref_id": "b17", "title": "Flattening: a representation change for generalization", "year": "" }, { "authors": "S. Russell", "journal": "", "ref_id": "b18", "title": "Tree-structured bias", "year": "1988" }, { "authors": "E. Y. Shapiro", "journal": "MIT Press", "ref_id": "b19", "title": "Algorithmic Program Debugging", "year": "1983" } ]
[ { "formula_coordinates": [ 6, 90, 317.88, 265.92, 42.32 ], "formula_id": "formula_0", "formula_text": "simplify_pos([[],[b,[a,a]],[]], [b,a]). remove_pos([a,a],[a]). (simplify_neg([[],[b,[a,a]],[]],X), not equal(X,[b,a])). simplify_neg([a,[b,a],[]], [a,[b,a]]). remove_neg([a,a],[a,a])." }, { "formula_coordinates": [ 10, 90, 283.68, 261.12, 43.8 ], "formula_id": "formula_1", "formula_text": "u2) int([A|B],Y,W) :- notmember(A,Y), !, int(B,Y,W). u4) int([A|B],Y,[A|W]) :- !, int(B,Y,W). u5) int(_,_,[])." } ]
The Difficulties of Learning Logic Programs with Cut
As real logic programmers normally use cut (!), an effective learning procedure for logic programs should be able to deal with it. Because the cut predicate has only a procedural meaning, clauses containing cut cannot be learned using an extensional evaluation method, as is done in most learning systems. On the other hand, searching a space of possible programs (instead of a space of independent clauses) is unfeasible. An alternative solution is to generate first a candidate base program which covers the positive examples, and then make it consistent by inserting cut where appropriate. The problem of learning programs with cut has not been investigated before and this seems to be a natural and reasonable approach. We generalize this scheme and investigate the difficulties that arise. Some of the major shortcomings are actually caused, in general, by the need for intensional evaluation. As a conclusion, the analysis of this paper suggests, on precise and technical grounds, that learning cut is difficult, and current induction techniques should probably be restricted to purely declarative logic languages.
Francesco Bergadano; Daniele Gunetti
[ { "figure_caption": "input: a set of clauses S, a set of positive examples E+, a set of negative examples E-. S := flatten(S); T := ∅; for each positive example e+ ∈ E+: find T1 ⊆ S such that T1 ⊢SLD e+ (backtracking point 1); T := T ∪ T1; if T derives some negative example e- then trycut(T,e-); if trycut(T,e-) fails then backtrack; output the clauses listed in T. trycut(T,e-): insert ! somewhere in T (backtracking point 2) so that 1. all previously covered positive examples are still derived from T, and 2. T ⊬SLD e-.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Since a candidate program is formed by taking the union of partial traces learned for single examples, if we want a small trace (problems 2 and 5) we must use as few examples as possible, while still completely describing the required concept. In other words, we should avoid redundant information. For example, if we want to learn the program for append, it will normally be sufficient to use only one of the two positive examples append([a],[b],[a,b]) and append([c],[d],[c,d]). Obviously it may happen that different examples are derived by the same set of clauses, and in this case the final program does not change.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "A program P is complete if it derives all the given positive examples, and it is consistent if it does not derive any of the given negative examples. A programmer usually thinks in terms of what a program should compute on given inputs, and then tries to avoid wrong computations for those inputs.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction and Motivation", "publication_ref": [], "table_ref": [], "text": "People like to record information for later consultation. For many, the media of choice is paper. It is easy to use, inexpensive, and durable. To its disadvantage, paper records do not scale well. As the amount of information grows, retrieval becomes inefficient, physical storage becomes excessive, and duplication and distribution become expensive. Digital media offers better scaling capabilities. With indexing and sub-linear algorithms, retrieval is efficient; using high density devices, storage space is minimal; and with electronic storage and high-speed networks, duplication and distribution is fast and inexpensive. It is clear that our computing environments are evolving as several vendors are beginning to market inexpensive, hand-held, highly portable computers that can convert handwriting into text. We view this as the start of a new paradigm shift in how traditional digital information will be gathered and used. One obvious change is that these computers embrace the paper metaphor, eliminating the need for typing. It is in this paradigm that our research is inspired, and one of our primary goals is to combine the best of both worlds by making digital media as convenient as paper.\nThis document describes an interactive note-taking software system for computers with pen-based input devices. Our software has two distinctive features: first, it actively predicts what the user is going to write and provides a default that the user may select; second, the software automatically constructs a graphical interface at the user's request. The purpose of these features is to speed up information entry and reduce user errors. Viewed in a larger context, the interactive note-taking system is a type of self-customizing software.\nTo clarify this notion, consider a pair of dimensions for characterizing software. As Figure 1 depicts, one dimension is task specificity. Software that addresses a generic task (e.g., a spreadsheet) lies between task independent software (e.g., a compiler) and task specific software (e.g., a particular company's accounting software). Another dimension is the amount of user customization required to make the software useful. Task generic software lies between the two extremes, requiring modest programming in a specialized language. Self-customizing software uses machine learning techniques to automatically customize task generic software to a specific user. Because the software learns to assist the user by watching them complete tasks, the software is also a learning apprentice. Similarly, because the user does not explicitly program the defaults or the user interface for the note taking system, it is a type of software agent. Agents are a new user interface paradigm that free the user from having to explicitly command the computer. The user can record information directly and in a free-form manner. Behind the interface, the software is acting on behalf of the user, helping to capture and organize the information.\nNext we will introduce the performance component of the note-taking software in more detail, then describe the representations and algorithms used by the learning methods. 
We also present empirical results, comparing the performance of seven alternate methods on nine realistic note-taking domains, and finally, we describe related research and identify some of the system's limitations.

Figure 1: Continuum of software development depicting the traditional trade-off between the development cost per user and the amount of user customization required. Self-customizing software eliminates the need for user customization by starting with partially-specified software and applying machine learning methods to complete any remaining customization." }, { "figure_ref": [ "fig_1" ], "heading": "Performance Task", "publication_ref": [], "table_ref": [], "text": "The primary function of the note-taking software is to improve the user's speed and accuracy as they enter notes about various domains of interest. A note is a short sequence of descriptive terms that describe a single object of interest. Example 1 shows a note describing a particular personal computer (recorded by the first author from a Usenet newsgroup during 1992):

4096K PowerBook 170, 1.4MB and 40MB Int. Drives, 2400/9600 Baud FAX Modem (Example 1)

Example 2 is a note describing a fabric pattern (recorded by the first author's wife):

Butterick 3611 Size 10 dress, top (Example 2)

Tables 5 through 11 later in the paper list sample notes drawn from seven other domains. The user may enter notes from different domains at their convenience and may use whatever syntactic style comes naturally.

From the user's point of view, the software operates in one of two modes: a contextual prompting mode, and an interactive graphical interface mode. In the first mode, the software continuously predicts a likely completion as the user writes out a note. It offers this as a default for the user. The location and presentation of this default must balance conflicting requirements to be convenient yet unobtrusive. For example, the hand should not hide the indicated default while the user is writing. Our solution is to have a small, colored completion button follow to the left and below where the user is writing. In this location, it is visible to either right- or left-handed people as they write out notes. The user can reposition the button to another location if they prefer. The default text is displayed to the immediate right of this button in a smaller font. The completion button is green; the text is black. The completion button saturation ranges from 1 (appearing green), when the software is highly confident of the predicted value, to 0 (appearing white), when the software lacks confidence. The button has a light gray frame, so it is visible even when the software has no prediction (Figure 2).

The software's second mode presents an interactive graphical interface. Instead of requiring the user to write out the text of a note, the software presents a radio-button and check-box interface (what we call a button-box interface). With this, the user may select from text fragments, portions of notes called descriptive terms, by tapping on radio-buttons or check-boxes with the pen interface device. Each selection from the button-box interface is added to the current note. Intuitively, check boxes are generated to depict optional descriptive terms, whereas radio-button panels are generated to depict alternate, exclusive descriptive terms.
For user convenience, the radio-buttons are clustered into panels and are sorted alphabetically in ascending order from top to bottom. To allow the user to add new descriptive terms to a button-box panel, an additional blank button is included at the bottom of each. When the user selects a radio button item, the graphical interface is expanded to depict additional choices corresponding to descriptive terms that follow syntactically. The software indicates its predictions by preselecting the corresponding buttons and highlighting them in green. The user may easily override the default selection by tapping the desired button. Figure 3 portrays a screen snapshot of the software operating in the interactive graphical interface mode for a PowerBook note.

The software is in prompting mode when a user begins to write a note. If the learned syntax for the domain of the note is sufficiently mature (see Section 6, Constructing a Button-Box Interface), then the software can switch into the button-box mode. To indicate this to the user, a mode switch depicted as a radio button is presented for the user's notice. A convenient and unobtrusive location for this switch is just below the completion button. In keeping with the color theme, the mode switch also has a green hue. If the user taps this switch, the written text is removed, and the appropriate radio buttons and check boxes are inserted. The system automatically selects buttons that match the user-written text. As the user makes additional selections, the interface expands to include additional buttons. When the user finishes a note, in either mode, the software returns to prompting mode in anticipation of another note. Because the interface is constructed from a learned syntax, as the software refines its representation of the domains of the notes, the button-box interface also improves. On-line Appendix 1 is a demonstration of the system's operation in each of its two modes." }, { "figure_ref": [], "heading": "Learning a Syntax", "publication_ref": [], "table_ref": [], "text": "To implement the two modes of the note taking software, the system internally learns two structures. To characterize the syntax of the user's notes, it learns finite-state machines (FSMs). To generate predictions, it learns decision tree classifiers situated at states within the FSMs. In order to construct a graphical user interface, the system converts a FSM into a set of buttons. This section describes the representation and method for learning FSMs. The next section discusses learning of the embedded classifiers." }, { "figure_ref": [], "heading": "Tokenization", "publication_ref": [], "table_ref": [], "text": "Prior to learning a finite-state machine, the user's note must first be converted into a sequence of tokens. Useful tokenizers can be domain independent. However, handcrafted domain-specific tokenizers lead to more useful representations. The generic tokenizer used for the results reported here uses normal punctuation, whitespace, and alpha-numeric character boundaries as token delimiters. For example, our generic tokenizer splits the sample PowerBook note in Example 1 into the following 16 tokens: :NULL, \"4096\", \"K\", \"PowerBook\", \"170\", \", 1.4\", \"MB\", \"and\", \"40\", \"MB\", \"Int.\", \"Drives\", \", 2400/9600\", \"Baud\", \"FAX\", \"Modem\".

The token :NULL is prepended by the tokenizer. This convention simplifies the code for constructing a FSM."
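As a rough Prolog sketch (ours, not the paper's code) of a generic tokenizer in this spirit: runs of digits or letters become tokens, whitespace is dropped, and other characters stand alone. It differs from the paper's tokenizer, which attaches leading punctuation to tokens (e.g., ", 1.4"); we also write the :NULL marker as the plain atom null. char_type/2 is SWI-Prolog's character classifier.

tokenize(Atom, [null|Tokens]) :-
    atom_chars(Atom, Chars),
    chars_tokens(Chars, Tokens).

chars_tokens([], []).
chars_tokens([C|Cs], Tokens) :-
    class(C, Class),
    (   Class == space
    ->  chars_tokens(Cs, Tokens)            % whitespace delimits tokens
    ;   span(Class, [C|Cs], Run, Rest),     % longest run of one class
        atom_chars(Token, Run),
        Tokens = [Token|More],
        chars_tokens(Rest, More)
    ).

span(Class, [C|Cs], [C|Run], Rest) :-
    class(C, Class), !,
    span(Class, Cs, Run, Rest).
span(_, Cs, [], Cs).

class(C, digit) :- char_type(C, digit(_)), !.
class(C, alpha) :- char_type(C, alpha), !.
class(C, space) :- char_type(C, space), !.
class(_, punct).

% ?- tokenize('4096K PowerBook 170', T).
% T = [null, '4096', 'K', 'PowerBook', '170']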
}, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_4", "fig_5", "fig_5", "fig_10" ], "heading": "Learning a Finite-State Machine", "publication_ref": [ "b0", "b1", "b0", "b0", "b4" ], "table_ref": [ "tab_1", "tab_1", "tab_13", "tab_1" ], "text": "Deterministic finite-state machines (FSMs) are one candidate approach for describing the syntax of a user's notes because they are well understood and relatively expressive. Moreover, Angluin (1982) and Berwick and Pilato (1987) present a straightforward algorithm for learning a specific subclass of FSMs called k-reversible FSMs. The algorithm is incremental and does not suffer from presentation order effects. Berwick and Pilato define a k-reversible FSM as: \"A regular language is k-reversible , where k is a non-negative integer, if whenever two prefixes whose last k words [tokens] match have a tail in common, then the two prefixes have all tails in common. In other words, a deterministic finite-state automaton (DFA) [FSM] is k -reversible if it is deterministic with lookahead k when its sets of initial and final states are swapped and all of its arcs [transitions] are reversed.\"\nGiven a list of tokens, the k-reversible FSM algorithm first constructs a prefix tree, where all token sequences with common k-leaders share a k-length path through the FSM. For example, Figure 4a depicts a simple FSM constructed for a single fabric pattern note. The text of the user's note was converted into a sequence of tokens. Then a transition was created for each token and a sequence of states was created to link them together. One state serves as the initial state, and another indicates the completion of the sequence. For convenience, this latter, terminal state is depicted with a double circle. If the FSM is able to find a transition for each token in the sequence, and it arrives at the terminal state, then the FSM accepts the token sequence as an instance of the language it defines. Figure 4b depicts the same FSM after another path has been added corresponding to a second fabric pattern note (Example 2). Now the FSM will accept either note if expressed as a sequence of tokens. This FSM is a trivial prefix tree because only the first state is shared between the two paths. A prefix tree is minimal for observed token sequences, but it may not be general enough for use in prediction. (The prefix tree is, in essence, an expensive method for memorizing token sequences-which is not the desired result.) For the sake of prediction, it is desirable to have a FSM that can accept new, previously unseen combinations of tokens. The prefix tree automaton can be converted into a more general FSM by merging some of its states. A particular method for doing this converts a prefix tree into a k-reversible FSM via Angluin's (1982) algorithm. The algorithm merges states that have similar transitions, and it creates a FSM that accepts all token sequences in the prefix tree, as well as other candidate sequences. Table 1 lists the three rules for deciding when to merge a pair of states in a prefix tree to form a k-reversible FSM. 
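Before the merging step, the prefix tree itself is straightforward to build. The following is a sketch under an assumed fact-based encoding; trans/3, final/1, and the state counter are our own representation, and merging per Table 1 would then operate over these facts:

:- dynamic trans/3, final/1, counter/1.
counter(1).

fresh(S) :- retract(counter(N)), S = N, N2 is N + 1, assertz(counter(N2)).

% Add one note (a token list) to the prefix tree; state 0 is the initial state.
add_note(Tokens) :- add_path(0, Tokens).

add_path(S, []) :-
    ( final(S) -> true ; assertz(final(S)) ).      % mark an accepting state
add_path(S, [T|Ts]) :-
    (   trans(S, T, S2) -> true                    % share the existing prefix
    ;   fresh(S2), assertz(trans(S, T, S2))
    ),
    add_path(S2, Ts).

% ?- add_note([null, 'Butterick', '3611', 'Size', '10', dress]).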
Returning to the merging rules: in the special case where k equals zero, all states have a common k-leader, and Rule 2a ensures that there will be only one accepting state.

Because the rules in Table 1 must be applied to each pair of states in the FSM, and because each time a pair of states is merged the process must be repeated, the asymptotic complexity of the process is O(n^3), where n is the number of states in the FSM.

Applying these rules to the prefix tree in Figure 4b with k equal to zero results in the FSM depicted in Figure 5a. Notice that the first two states have been merged to make the FSM deterministic (Rule 1). The accepting states have also been merged in compliance with Rule 2a. The resulting FSM has fewer states but is not more general. It only accepts the two token sequences originally seen. Extending this example, Figure 5b illustrates the addition of a third fabric pattern note as a prefix tree path to the FSM. Reapplying the rules results in the FSM shown in Figure 6. The first two states have been merged as before through the action of the determinism Rule 1. Note that a pair of latter states have also been merged because they share a common zero-leader (true of all pairs of states) and because they transition to the common terminal state on the token \"dress\".

Figure 7 depicts a more sophisticated result; it shows a learned zero-reversible FSM for notes about PowerBook computers. This example shows that the model number \"100\" is never followed by a specification for an internal floppy drive, but that other model numbers are. Any model may have an external floppy drive. Note that there is a single terminal state. Whitespace and punctuation have been eliminated for clarity in the figure.

The rules listed in Table 1 are generalization operators that allow the FSM to accept previously unobserved sequences. Whenever two or more states are merged into one, the FSM will accept more sequences than before if the new state is at the tail end of more transitions than one of the previous states and if the new state is at the head end of at least one transition. For example, the state just after State 1 in Figure 7 was merged from several previous states and generalizes memory sizes for PowerBook models. These rules comprise a heuristic bias and may be too conservative. For example, Figure 8 depicts a FSM for notes about fabric patterns. Many of the states prior to the accepting state could be usefully merged, but using only the rules listed in Table 1, many more notes will have to be processed before this happens. If the FSM in Figure 8 were rendered as a button-box interface, it would reflect little of the true structure of the domain of fabric patterns.

Table 1: FSM state merging rules from (Angluin, 1982). A k-leader is defined as a path of length k that accepts in the given state. Merge any two states if either of the following is true: 1. Another state transitions to both states on the same token (this enforces determinism); or 2. Both states have a common k-leader and a. both states are accepting states, or b. both states transition to a common state via the same token.

Table 2 lists specializations of Rules 2a and 2b and an additional pair of rules we developed to make the FSM generalize more readily. Note that the parameter k has been set to zero in Rule 2 and to one in Rule 3. Effectively, two states are merged by Rules 3a or 2b' if they share an incoming or outgoing transition.
Rule 3b is a Kleene rule that encourages the FSM to generalize the number of times a token may appear in a sequence. If one state has a transition to another, then merging them will result in a transition that loops from and to the newly merged state. Figure 9 depicts a FSM for notes about fabric patterns learned using all three generalization rules in Table 2. The resulting FSM accurately captures the syntax of the user's fabric pattern notes and correctly indicates the syntactically optional tokens that may appear at the end of a note. When rendered as a button-box interface, it clearly depicts the user's syntax (as illustrated later by Figure 12). The added generalization rules may have only marginal effects on the system's ability to accurately predict a completion as the user writes out a note (as Table 14 below indicates). Their purpose is to improve the quality of the custom interface.

Cohen (1988) uses an interesting alternative representation for learning a syntactic form. The goal in his work is to guide the generation of proof structures. Intuitively, the representation is a finite-state machine that accepts a tree rather than a sequence, and for this reason it is termed a tree automaton. Like the rules in Tables 1 and 2, tree automatons are generalized by merging states that share similar transitions. Oddly enough, one motivation for using tree automatons is that they are less likely to introduce extraneous loops, the opposite of the problem with the original FSM merging rules in Table 1. It is not clear how to map the sequence of tokens in the user's notes into a tree structure, but the less sequential nature of the tree automaton may help alleviate sequencing problems in rendering the custom user interface (see Section 9, Observations/Limitations).

Figure 5: (a) Finite-state machine after processing two fabric pattern notes and applying state merging rules in Table 1, and (b) prefix tree finite-state machine after adding a third fabric pattern note." }, { "figure_ref": [], "heading": "Parsing", "publication_ref": [], "table_ref": [], "text": "To use the finite-state machine for prediction, the software needs a strategy for dealing with novel tokens. For example, when the user takes a note about a PowerBook computer with a new memory configuration, the FSM will not have a transition for the first token. If the software is to prompt the user, then it must have a means for deciding where novel tokens lie in a note's syntax, that is, which state to predict from. Without such a mechanism, no meaningful prediction can be generated after novel tokens.

A state may not have a transition for the next token. In general, this is a single symptom with three possible causes: (1) a novel token has been inserted, (2) a suitable token has been omitted and the next token would be accepted by a subsequent state, or (3) a token has been simply replaced by another in the syntax. For example, in the sequence of tokens {:NULL, \"12288\", \"K\", \"PB\"}, \"12288\" is a novel token, a familiar memory size has been omitted, and \"PowerBook\" has been replaced by \"PB\".

An optimal solution would identify the state requiring a minimum number of insertions, omissions, and replacements necessary to parse the new sequence. An efficient, heuristic approximation does a greedy search using a special marker. Each time the marked state in the FSM has a transition for the next token written by the user, the marker is moved forward, and a prediction is generated from that state.
When there is no transition for the next token, a greedy search is conducted for some state (including the marked one and those reachable from it) that has a transition for some token (including the next one and those following). If such a state is found, the marker is moved forward to that state, tokens for the transitions of skipped states are assumed omitted, and novel tokens are assumed inserted. If no state past the marker has a transition for any of the remaining tokens, the remaining tokens are assumed to be replacements for the same number of the most likely transitions; the marker is not moved. If the user writes a subsequent token for which some state has a transition, the marker is moved as described above, and the syntax of the user's note is realigned with the learned syntax.

Continuing with the simple PowerBook example, the marker is moved to State 1 of the FSM in Figure 7 because the initial state had a transition for the first token :NULL. Because State 1 doesn't have a transition for the next token \"12288\", a greedy search is conducted to find a nearby state that accepts either \"12288\", \"K\", or \"PB\". The state just before State 2 accepts \"K\", so the marker is moved to that state. Another greedy search is started to find a state that accepts \"PB\". Because one cannot be found, the heuristic parsing assumes that it should skip to the next transition, in this case the one labeled \"PowerBook\". Consequently, the system generates a prediction from State 2 to prompt the user." }, { "figure_ref": [], "heading": "Multiple Finite-State Machines", "publication_ref": [ "b8" ], "table_ref": [], "text": "If the user decides to take notes about multiple domains, it may be necessary to learn a separate syntax for each domain. For example, a single syntax generalized over both the PowerBook and fabric pattern notes is likely to yield confusing predictions and an unnatural user interface. Maintenance of multiple finite-state machines is an instance of the clustering problem: deciding which notes should be clustered together to share a FSM. As Fisher (1987) discusses, this involves a trade-off between maximizing similarity within a cluster and minimizing similarity between clusters. Without the first criterion, all notes would be put into a single cluster. Without the second criterion, each note would be put into its own cluster.

One obvious approach would be to require the user to prepend each note with a unique token to identify each note's domain. This simplifies the clustering computation. All notes sharing the first token would share a FSM. However, with this scheme, the user would have to remember the identifying token or name for each domain. An interface could provide a pop-up list of all previously used domain identifiers. This is not satisfactory because it requires overhead not needed when taking notes on paper.

Figure 9: Finite-state machine characterizing fabric pattern notes learned using extended rules in Table 2. Compare to the zero-reversible finite-state machine for the same domain in Figure 8.

An alternative approach doesn't require any extra effort on the part of the user. A new note is grouped with the FSM that skips the fewest of its tokens. This heuristic encourages within-cluster similarity because a FSM will accept new token sequences similar to those it summarizes. To inhibit the formation of single-note FSMs, a new FSM is constructed only if all other FSMs skip more than half of the new note's tokens (a sketch of this selection rule follows).
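A small Prolog sketch of the selection rule, under stated assumptions: skipped_tokens/3, which counts how many of the note's tokens a given FSM must skip, is assumed rather than defined here, and all names are ours.

choose_fsm(Note, FSMs, Choice) :-
    length(Note, Len),
    findall(S-F, ( member(F, FSMs), skipped_tokens(F, Note, S) ), Pairs),
    msort(Pairs, Sorted),                    % ascending by skip count
    (   Sorted = [Fewest-Best|_], Fewest =< Len // 2
    ->  Choice = existing(Best)              % some FSM skips at most half
    ;   Choice = new_fsm                     % every FSM skips more than half
    ).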
This is a parametrized solution to encourage between-cluster dissimilarity." }, { "figure_ref": [ "fig_6", "fig_6", "fig_6", "fig_5", "fig_5" ], "heading": "Learning Embedded Classifiers", "publication_ref": [], "table_ref": [], "text": "Finite-state machines are useful representations for capturing the syntax of a user's notes, and they are easy to learn. When predicting a note's completion, it is essential that a prediction be made from the correct state in the FSM (as discussed above). It is also necessary to decide whether to terminate (indicating acceptance of the note) or continue prediction, and, in the latter case, which transition to predict. To facilitate these decisions, the FSM can maintain a count of how many times parsing terminated and how many times each transition was taken. Prediction can then return the option with the maximum frequency.

Figure 10 depicts a FSM for which this method will prove insufficient. There is only one state, an accepting state, and the transition corresponding to the token \"X\" is optional. (This corresponds to a check box interface item.) There are two problems with a frequency-based prediction. First, the FSM does not indicate that the transition is to be taken at most once, yet this is quite clear from the user interface. Second, simple frequency-based prediction would always recommend termination and never the transition. The FSM accepts whether the box is checked or not, thus the frequency of termination is greater than or equal to the frequency of the transition. This problem arises whenever there is a loop.

Embedding general classifiers in a FSM can alleviate some of the FSM's representational shortcomings. For example, in the FSM depicted in Figure 10, a decision tree embedded in this state easily tests whether the transition has already been taken and can advise against repeating it. Moreover, a classifier can predict based on previous transitions rather than just the frequency of the current state's transitions. Therefore, a decision tree embedded in the state of Figure 10 can predict when the transition should be taken as a function of other, earlier tokens in the sequence. Table 3 lists sample decision trees embedded in states of the FSM depicted in Figure 7. The first tree tests which token was parsed by a distant state, in effect augmenting the FSM representation. It relates memory size to hard disk capacity (small amounts of memory correlate with a small hard disk). The second tree prevents an optional loop from being taken a second time by testing to see if the state has yet been visited during a parse of the note. After processing additional notes, this second decision tree becomes more complex as the system tries to predict which PowerBooks have FAX modems and which do not.

A classifier is trained for each state in the FSM which: (a) has more than one transition, or (b) is marked as a terminal state but also has a transition. The classifiers are updated incrementally after the user finishes each note. The classifier's training data are token sequences parsed at this state. The class value of the data is the transition taken from, or termination at, this state by the token sequences. Only those classifiers whose states are used in a parse are updated. The attributes of the data are the names of states prior to this one, and the values of the attributes are the transitions taken from those states.
A distinct attribute is defined each time a state is visited during a given parse, so when a loop transition is taken a specific attribute reflects this fact. For any of the attributes, if the corresponding state was not visited while parsing the token sequence, the attribute has a special, empty value.

Consider the PowerBook FSM shown in Figure 7. A classifier would be embedded at States 1, 2, 3, 4, 5, 6, 7. A training example corresponding to the note in Example 1 for the classifier at State 6 would be:

Attributes and values: S1 = \"4096\", S2 = \"170\", S3 = NIL, S4 = \"40\", S5 = \"Drives\", S6 = \", 2400/9600\", S7 = \"FAX\", S7-1 = \"Modem\". Class: :TERMINATE.

Note that there is no value for State 3, denoting that it wasn't visited during the parse of Example 1. Also there are two attributes for State 7, denoting that it has been visited twice.

The classifier gives informed advice about which transition to take or whether to terminate. The FSM in turn gives the classifier a specific context for operation. If only a single classifier were used to predict the next token, it would be hard pressed to represent the different predictions required. The domain is naturally narrowed by the FSM and therefore reduces the representational demands on the classifier. Later, we present empirical results comparing a single classifier to a set of classifiers embedded in a FSM. The findings there show that the latter outperforms the former, confirming the intuition that learning is more effective if situated within a narrow context.

Table 3: Sample decision trees embedded in the finite-state machine depicted in Figure 7.
Decision tree embedded in State 3: If State 1 exited with \"2048\" Then predict \"20\"; Else if with \"4096\" Then predict \"40\"; Else if with \"6144\" Then predict \"40\"; Else if with \"8192\" Then predict \"40\".
Decision tree embedded in State 7: If State 7 has not been visited Then predict \"FAX\"; Else if State 7 exited with \"FAX\" Then predict \"Modem\".

From the classifier's point of view, the learning task is non-stationary. The concept to be learned is changing over time because the structure of the FSM is changing. When two states are merged, one of the two classifiers is discarded. The other is now embedded in a different position in the FSM, and it sees different training data. Similarly, when other states are merged, the attributes of the training data also change. To help mitigate this effect, the new state takes the oldest identifier assigned to the two merged states. Empirical results in Table 14 illustrate that the FSM does not have to be fixed before the classifier can learn useful information." }, { "figure_ref": [], "heading": "Contextual Prompting", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In the prompting mode, the software continuously predicts a likely completion as the user writes out a note. It presents this as a default next to the completion button. The button's saturation ranges from white to green in proportion to the confidence of the prediction.
If the user taps the completion button, the prompt text is inserted at the end of the current note.

A completion is generated by parsing the tokens already written by the user, finding the last state visited in the FSM, and predicting the next most likely transition (or termination). This process is repeated until a stopping criterion is satisfied, which is discussed below. If the last token written by the user is incomplete, matching only a prefix of a state's transition, then the remainder of that transition is predicted. If the last token matches more than one transition, a generalized string is predicted using special characters to indicate the type and number of characters expected. If a digit is expected, a \"#\" is included; if a letter, an \"a\" is included; if either are possible, a \"?\" is included; and if some transition's tokens are longer than others, a \"…\" is appended to the end. For example, if the user has written \"4096K PowerBook 1\", the possible values for PowerBook models of \"100\", \"140\", \"160C\", and \"170\" are generalized, and the prompt is \"#0…\".

A simple calculation is used to compute the confidence of the prediction and set the button's color saturation. It is the simple ratio f(prediction) / (f(total) × (1 + skipped)), where f(prediction) is the frequency of the predicted arc (or terminate) [i.e., the number of times this choice was taken while parsing previously observed notes], f(total) is the total frequency of all arcs (and terminate), and skipped is the number of tokens skipped during heuristic parsing (cf. Section 3.3, Parsing). Confidence is directly proportional to the simple likelihood of the prediction and is degraded in proportion to the number of tokens the FSM had to skip to get to this point. This information is used in a simple way, so it is unclear if more sophisticated measures are needed.

The stopping criterion is used to determine how much of a prompt to offer the user. At one extreme, only a single token can be predicted. This gives the user little context and may not provide much assistance. At the other extreme, a sequence of tokens that completes the note can be predicted. This may be too lengthy, and the user would have to edit the prompt if selected. The stopping criterion in Table 4 balances these two extremes and attempts to limit prompts to a consistent set of tokens. In particular, Condition 3 stops expanding the prompt upon reaching a syntactic boundary (leading punctuation) or upon reaching a semantic boundary (falling confidence)." }, { "figure_ref": [ "fig_5", "fig_1" ], "heading": "Constructing a Button-Box Interface", "publication_ref": [], "table_ref": [], "text": "In the button-box mode, the software presents an interactive graphical interface. Instead of writing out the note, the user may select note fragments by tapping buttons. To switch from contextual mode to button-box mode, a green radio button indicator is displayed below the completion button when the software is confident about the user's syntax. If the user taps this indicator, the existing text is removed, and the corresponding buttons in the button-box interface are selected. As the user selects additional buttons, the interface dynamically expands to reveal additional choices. Because the interface reflects an improving syntactic representation, it also improves with successive notes. The button-box interface is a direct presentation of a finite-state machine.
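As an aside to the prompting mode just described, the confidence ratio above reduces to a single arithmetic clause; a toy Prolog rendering (the predicate name and argument order are ours):

confidence(FPred, FTotal, Skipped, C) :-
    FTotal > 0,
    C is FPred / (FTotal * (1 + Skipped)).

% ?- confidence(8, 10, 1, C).   % C = 0.4: a likely arc, one skipped token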
After the user has written out a token or so of the note, the software finds the FSM that best parses these tokens. The mode switch is presented if the syntax is sufficiently mature, that is, if the average number of times each state has been used to parse earlier notes is greater than 2. If the user selects this indicator, the FSM is incrementally rendered as a set of radio buttons and check boxes.

The two user interface item types correspond to optional choices (check boxes) and exclusive choices (radio buttons). Mapping a FSM into these two item types proceeds one state at a time. Given a particular state to be rendered, any transition that starts a path that does not branch and eventually returns back to the state is rendered as a check box (a loop). The loop corresponds to syntactically optional information. The label for the check box consists of each of the transition labels along the looping path. Other non-looping transitions are rendered as buttons in a single radio button panel along with an extra, unlabeled button. They correspond to syntactically exclusive information. The label for each radio button consists of each transition label up to the point of a subsequent branch or termination. For example, compare the FSM depicted in Figure 7 and the corresponding button-box interface in Figure 3.

Because the transitions for different radio buttons lead to different parts of the FSM, it may confuse the user to render the entire FSM at once. So, each branching state is rendered as it is visited. Initially, the first state in the FSM is rendered. Then, when a radio button is selected, the branching state at the end of its transition path is rendered. Note that check boxes do not trigger additional rendering because the branching state at the end of their loop has already been rendered. This interactive process is repeated as long as the user selects radio buttons that lead to branching states.

Table 4: Stop expanding the prompt if any of the following are true: 1. The next prediction is to terminate; or 2. The next prediction is a generalized string; or 3. At least one token has already been predicted and a. the prediction starts with punctuation, or b. the confidence of the prediction is lower; or 4. The next prediction is the same as the last prediction; or 5. More than 10 tokens have already been predicted." }, { "figure_ref": [], "heading": "Empirical Results", "publication_ref": [ "b9" ], "table_ref": [], "text": "We tested the interactive note taking software on notes drawn from a variety of domains. Tables 5 through 11 list sample notes drawn from these domains (e.g., Grove & Miller, 1989), and Table 12 describes the nine domains together with some simple measures to indicate prediction difficulty.

B,81,5,151 (2.5),Cyl. 4,Pontiac
C,82,X,173 (2.8),Cyl. 6,Chevrolet
Table 6: Sample notes from the engine code domain. Listed above are 2 of the 20 notes about the meaning of engine codes stamped on automobile identification plates collected from Chilton's Repair & Tune-Up Guide (1985).

For instance, Column 1 shows the number of notes in the domain. With a larger number of notes, it should be easier to accurately train a predictive method. Column 4 shows the standard deviation (STD) of the length of all notes in each domain. It is more likely that a well-behaved FSM can be discovered when STD is low. In this and successive tables, the domains are ranked by STD. Column 5 presents the percentage of unique tokens in the notes. The fewer novel tokens a note has, the more likely that successive tokens can be predicted.
This measure places an upper bound on predictive accuracy. Column 6 shows the percentage of constant tokens, ones that always appear in a fixed position. It is easier to predict these constant tokens. Finally, Column 7 indicates the percentage of repeated tokens. The fewer tokens are repeated verbatim within a note, the more likely it is that the predictive method will not become confused about its locale within a note during prediction.

The first six domains are natural for the interactive note taking task because they exhibit a regular syntax. The last three domains are included to test the software's ability on less suitable domains. Notes from the Antihistamine, Lens, and Raptor domains contain highly-variable lists of terms or natural language sentences. Learned FSMs for notes in these domains are unlikely to converge, and, in the experiments reported here, only the FSM for the Lens data exceeded the maturity threshold (average state usage greater than 2).

22in. W. 48in. A very large falcon. Three color phases occur: blackish, white, and gray-brown. All are more uniformly colored than the Peregrine Falcon, which has dark mustaches and hood.
16-24in. W. 42in. Long-winged, long-tailed hawk with a white rump, usually seen soaring unsteadily over marshes with its wings held in a shallow 'V'. Male has a pale gray back, head, and breast. Female and young are brown above, streaked below, young birds with a rusty tone.
Table 11: Sample notes from the raptor domain. Listed above are 2 of the 21 notes about North American birds of prey collected from (Bull & Farrand, 1977)." }, { "figure_ref": [], "heading": "Contextual Prediction Accuracy", "publication_ref": [ "b2" ], "table_ref": [], "text": "Column 7 of Table 13 lists the accuracy of next-token predictions made by the software in prompting mode. The first nine rows list predictive accuracy over all tokens as notes from each of the nine domains are independently processed in the order they were collected. The last row lists predictive accuracy over all tokens as notes from all nine domains are collectively processed. This simulates a user taking notes about several domains simultaneously.

To put these results in context, the table also lists predictive accuracies for several other methods. Column 1 lists the accuracy for a lower bound method. It assumes that each note shares a fixed sequence of tokens. Termed common, this method initializes its structure to the first note. It then removes each token in this sequential structure that cannot be found in order in other notes. At best, this method can only predict the constant, delimiter-like tokens that may appear regularly in notes. Its performance is limited by the percentage of constant tokens reported in Column 6 of Table 12. It performs best for the PowerBook notes, where it learns the following note syntax:

* :NULL * \"K\" * \"PowerBook\" * \"MB\" * \"MB\" * \"Int.\" * . (Example 3)

(The asterisks indicate Kleene star notation.) This reads as some sequence of zero or more tokens then the token :NULL, followed by zero or more tokens then \"K\", followed by zero or more tokens then \"PowerBook\", and so on. It is less successful for the minivan notes, where it learns a simpler syntax:

* :NULL * \"K\" * \"MI\" * \"Pass\" * .
}, { "figure_ref": [], "heading": "(Example 4)", "publication_ref": [ "b14", "b11" ], "table_ref": [ "tab_12", "tab_12", "tab_1", "tab_12", "tab_12", "tab_12", "tab_12" ], "text": "Columns 2 and 3 of Table 13 list the accuracy of using a classifier to directly predict the next token without explicitly learning a syntax. In this paradigm, examples are prefixes of token sequences. Attributes are the last token in the sequence, the second to last token, the third to last token, and so on. Class values are the next token in the sequence: the one to be predicted. Column 2 lists the performance of a simple Bayes classifier, and Column 3 lists the performance of an incremental variant of ID3 (Schlimmer & Fisher, 1986). Perhaps surprisingly, these methods perform considerably worse than the simple conjunctive (common) method. Without the benefit of the narrow context provided by the FSM, these methods must implicitly construct representations to detect differences between similar situations that arise within a single note. For example, in the PowerBook notes, a classifier-only approach must learn to discriminate between the first and second occurrence of the \" MB \" token.
Column 4 of Table 13 lists the accuracy of a more viable prediction mechanism. Based on simple ideas of memorization and termed digram, the method maintains a list of tokens that have immediately followed each observed token. For example, in the fabric pattern domain, this method retains the list of tokens {\" 8-10-12 \", \" 10 \", \" 11/12 \", \" 12 \"} as those that follow the token \" Size \". Each list of follow tokens is kept in order from most to least frequent. To predict the next token, the system looks for the last token written and predicts the most frequent follow token; a compact sketch of this scheme appears at the end of this subsection. This method is nearly as effective as any other in Table 13, especially on the combined task when notes from each domain are entered in random order. (Table 12 lists quantitative properties of the nine domains used to test alternative methods.)
Laird (1992) describes an efficient algorithm for maintaining higher-dimensional n-grams, in effect increasing the context of each prediction and effectively memorizing longer sequences of tokens. Laird's algorithm builds a Markov tree and incorporates heuristics that keep the size of the tree from growing excessively large. Regrettably, these methods are unsuitable for the interactive note-taking software because of the difficulty of using them to construct a custom user interface. It is plausible to construct a panel of exclusive choices based directly on the set of follow tokens, but it is unclear how to identify optional choices corresponding to loops in finite-state machines. Moreover, if notes are drawn from different domains, and those domains share even a single token, then some follow set will include tokens from different domains. Using these follow sets to construct a user interface would unnecessarily confuse the user by introducing options from more than one domain at a time.
Column 5 of Table 13 lists the accuracy of prediction based solely on the learned FSMs. Without an embedded classifier, this method must rely on prediction of the most common transition (or termination) from each state. Because the prediction is based on simple counts (as noted in Section 4, Learning Embedded Classifiers), this method never predicts optional transitions.
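For concreteness, the follower-table (digram) predictor admits a very compact rendering; the class below is our own sketch, not the original implementation:

    from collections import defaultdict, Counter

    class Digram:
        """Follower-table ('digram') predictor: for each observed token,
        remember how often each token has immediately followed it."""
        def __init__(self):
            self.followers = defaultdict(Counter)
        def observe(self, tokens):
            for cur, nxt in zip(tokens, tokens[1:]):
                self.followers[cur][nxt] += 1
        def predict(self, last_token):
            c = self.followers.get(last_token)
            # Most frequent follower of the last token written, if any.
            return c.most_common(1)[0][0] if c else None

After observing the fabric pattern notes, predict('Size') would return the most frequent member of the follow set {8-10-12, 10, 11/12, 12}.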
Columns 6 and 7 of Table 13 list the accuracy of predicting using FSMs and embedded classifiers; the classifiers used are simple Bayes and the incremental ID3, respectively. The latter outperforms both the FSM alone and the FSM with embedded Bayes classifiers. If the system only makes predictions when its confidence measure is greater than 0.25, the accuracy is significantly higher for the Engine Code, Minivan, Lens, and Raptor domains, with between 10 and 22 percentage points of improvement.
Column 8 of Table 13 lists an estimate of the upper bound on predictive accuracy. This was calculated by assuming that prediction errors were made only the first time each distinct token was written. " }, { "figure_ref": [], "heading": "Design Decisions", "publication_ref": [], "table_ref": [ "tab_13", "tab_1" ], "text": "The note taking software embodies a number of design decisions. Table 14 lists the effects of these decisions on predictive accuracy by comparing versions of the software with and without each design feature. The first column lists the predictive accuracy for the software's nominal configuration. Column 2 lists the accuracy data for a slightly different generic tokenizer; accuracy is higher for some domains, lower for others. A custom-built tokenizer is one way to incorporate knowledge about the domain. Columns 3 and 4 show the accuracy for the system using only the original two FSM merging rules (cf. Table 1) and all but the last merging rule (cf. Table 2), respectively. The decreased structural generality tends to lower predictive accuracy, but the embedded classifiers help compensate for the reduced accuracy. Column 5 lists the accuracy when the FSM does not heuristically continue parsing upon encountering a token for which there is no immediate transition. As expected, accuracy suffers considerably in some domains, because a novel token in a sequence completely foils any subsequent prediction. Columns 6 and 7 list accuracy for different values of the free parameter controlling the clustering of notes together into a FSM; there is little effect on predictive accuracy in this case. Column 8 shows the accuracy when embedded classifiers do not use information about repeated states in the FSM. Without this information, the classifiers cannot predict that a loop transition should be taken exactly once. Surprisingly, elimination of this feature has little effect on accuracy. Column 9 lists the accuracy when the embedded classifiers associated with a pair of FSM states are discarded when the states are merged. Finally, Column 10 lists the accuracy when a new FSM state is assigned a unique ID rather than the ID of the older of the two merged states." }, { "figure_ref": [ "fig_1", "fig_9" ], "heading": "Sample Button-Box Interfaces", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In addition to Figure 3, Figures 11 through 15 depict button-box interfaces for the five other well-behaved note taking domains listed at the top of Table 12. These interfaces are visual and offer the user an organized view of their notes, presenting options in a natural way. However, whenever unique tokens are involved, the current software makes no attempt to explicitly generalize them. This effect is reflected in the tour dates for the Airwing notes in Figure 11; the radio button panel consists of a long series of dates, none of which is likely to be selected for a new note.
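The interfaces above are produced by the state-rendering rule described earlier: loops become check boxes, other transitions become radio buttons. The following schematic sketch is ours; the fsm interface (transitions, loop_path, is_terminal) is an assumed abstraction, not the paper's API:

    def render_state(fsm, state):
        # Split a state's outgoing transitions into check-box items
        # (loops that return to `state` without branching) and
        # radio-button items (everything else).
        # Assumed interface: fsm.transitions(s) -> list of (label, target);
        # fsm.loop_path(s, t) -> labels along a non-branching path from t
        # back to s, or None; fsm.is_terminal(s) -> bool.
        checkboxes, radios = [], []
        for label, target in fsm.transitions(state):
            path = fsm.loop_path(state, target)
            if path is not None:
                # Optional information: the check-box label is every
                # transition label along the looping path.
                checkboxes.append(' '.join([label] + path))
            else:
                # Exclusive information: accumulate labels up to the
                # next branch or termination.
                labels, cur = [label], target
                while (not fsm.is_terminal(cur)
                       and len(fsm.transitions(cur)) == 1):
                    (nxt, cur), = fsm.transitions(cur)
                    labels.append(nxt)
                radios.append(' '.join(labels))
        radios.append('')  # the extra, unlabeled radio button
        return checkboxes, radios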
" }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b6", "b7", "b10", "b12" ], "table_ref": [], "text": "Self-customizing software agents have several subjective dimensions on which they can be evaluated and compared:\n• Anticipation -Does the system present alternatives without the user having to request them? • User interface -Is the system graphical, or is it command-line oriented?\n• User control -Can the user override or choose to ignore predictive actions? • Modality -If the system has a number of working modes, can the user work in any mode without explicitly selecting one of them? • Learning update -Is learning incremental, continuous and/or real-time? • User adjustable -Can the user tune the system parameters manually? Here we describe related systems that exhibit properties in each of these agent dimensions.\nOur note taking software utilizes the anticipation user interface technique pioneered by Eager (Cypher, 1991). Eager is a non-intrusive system that learns to perform iterative procedures by watching the user. As such, it is a learning apprentice, a software agent, and an example of programming by example or demonstration. Situated within the HyperCard environment, it continuously watches a user's actions. When it detects the second cycle of an iteration, it presents an execute icon for the user's notice. It also visually indicates the anticipated next action by highlighting the appropriate button, menu item, or text selection in green. As the user performs their task, they can verify that Eager has learned the correct procedure by comparing its anticipations to their actions. When the user is confident enough, they can click on the execution icon, and Eager will run the iterative procedure to completion. Eager is highly anticipatory, uses a graphical interface, is non-obtrusive, non-modal, and learns in real-time, but is not user adjustable.\nCAP is an apprenticeship system that learns to predict default values (Dent, et al., 1992). Its domain of operation is calendar management, and it learns preferences as a knowledgable secretary might. For example, a professor may prefer to hold a regular group meeting in a particular room at a particular time of day for a particular duration-information that a secretary would know from experience. CAP collects information as the user manages their calendar, learns from previous meetings, and uses the regularities it learns to offer default values for meeting location, time, and duration. The learning system is re-run each night on the most recent meeting data, and the learned rules are applied for prediction the following day. CAP is also designed to utilize an extensible knowledge base that contains calendar information and a database of personnel information. The system continues to be used to manage individual faculty calendars. Though offering some intelligence, CAP's user interface is line-oriented and is based on the Emacs editor. Questions asked of the user about meetings are presented using a command-line dialog, and the default predictions are displayed one-at-a-time. CAP can be characterized as anticipatory, command-line oriented and modal with user control (but not user adjustable), where learning is done in batch.\nAnother related system addresses the task of learning to fill out a form (Hermens & Schlimmer, 1993). The system recreates a paper form as an on-screen facsimile, allowing the user to view all of the pertinent information at a glance. 
Input typed by the user into the electronic form is processed by a central form-filling module. When the user completes a form copy, it is printed, and each field value on the form is forwarded to a learning module (a decision tree learning method). The learned representations predict default values for each field on the form by referring to values observed on other fields and on the previous form copy. From the user's point of view, it is as if spreadsheet functions have been learned for each field of the form. Empirical studies indicate that this system reduced the number of keystrokes required of the user by 87% on 269 forms processed over the 8 month period in which it was actually used by office personnel. This system is unobtrusive, non-modal and anticipatory, uses a graphical interface, and updates learning in real-time.
Maes and Kozierok (1993) address the problem of self-customizing software at a much more task-independent level. They identify three learning opportunities for a software agent: observing the user's actions and imitating them, receiving user feedback upon error, and incorporating explicit training by the user. To illustrate the generality of their framework, they demonstrate simple learning apprentices that help sort the user's electronic mail and schedule meetings. Their initial systems use an instance-based (case- or memory-based) approach, primarily because it allows efficient update and because it naturally generates a confidence in each of its predictions. Users may set thresholds on these predictions, corresponding to a minimum confidence for when the agent should prompt the user (a \"tell-me\" threshold) and a higher minimum confidence for the agent to act immediately on behalf of the user (a \"do-it\" threshold). The framework for learning in this case is anticipatory, utilizes a graphical user interface, is devoted to user control, is non-modal, learns in real-time, and is user adjustable.
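The two-threshold scheme can be captured in a few lines. This sketch uses our own names and arbitrary default thresholds; it is not Maes and Kozierok's code:

    def agent_step(prediction, confidence, tell_me=0.5, do_it=0.9):
        # Gate the agent's behavior on prediction confidence: suggest
        # above the lower ("tell-me") threshold, act autonomously above
        # the higher ("do-it") one. The 0.5/0.9 defaults are placeholders.
        if confidence >= do_it:
            return ('act', prediction)
        if confidence >= tell_me:
            return ('suggest', prediction)
        return ('silent', None)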
A system developed for Macintosh Common Lisp (MCL) provides a word-completion mechanism for word prefixes typed by the user in any window. J. Salem and A. Ruttenberg (unpublished) have devised MCL methods to display a word completion in the status bar of each window. If the user desires to add the completion to the window, they simply press the CLEAR key. This word completion mechanism is similar to file-name completion in EMACS and the C-shell in UNIX systems, except that the word is displayed for the user before it is added. This system is anticipatory (unlike the UNIX file completion), is command line oriented (but displays the default completion in a graphical window), can be fully controlled by the user, is non-modal, learns in real time, and is not intended to be user adjustable (though knowledgeable MCL programmers could easily make changes to the code).
The interactive note taking software we have devised does not require any user programming. It only receives implicit user feedback when the user chooses to complete a note in a different way than prompted. It does not have any mechanisms for direct user instruction or threshold tuning; in a system designed to be as easy to use as paper, such explicit adjustment may be inappropriate. We characterize our system as anticipatory, graphically-oriented, and modal (due to the switching that takes place when a user wishes to display the button-box interface). It allows the user to override default prompts and predictions, and it learns in real-time. We have not included features that allow the user to configure the performance of the agent." }, { "figure_ref": [], "heading": "Observations/Limitations", "publication_ref": [], "table_ref": [], "text": "The interactive note-taking software is designed to help users capture information digitally, both to speed entry and improve accuracy, and to support the longer term goal of efficient retrieval. The software incorporates two distinctive features. First, it actively predicts what the user is going to write. Second, it automatically constructs a custom radio-button, check-box user interface.
This research explores the extremes of FSM learning and prediction, where the system has no explicit a priori knowledge of the note domains. We have tried to design the system so that it can learn quickly, yet adapt well to semantic and syntactic changes, all without a knowledge store from which to draw. It is clear that knowledge in the form of a domain-specific tokenizer would aid FSM learning by chunking significant phrases and relating similar notations and abbreviations. Some preliminary work has shown that, after a few notes have been written, users may create abbreviations instead of writing out whole words. A domain-specific tokenizer would be able to relate an abbreviation and a whole word as being in the same class, and therefore allow for more flexibility during note taking. For example, a domain-specific tokenizer may recognize that \" Megabytes \", \" Meg \", \" MB \", and \" M \" all represent the same token for memory sizes. One could imagine a framework that would allow domain-specific tokenizers to be simply plugged in.
The prototype built to demonstrate these ideas was implemented on a conventional microcomputer with keyboard input. As a consequence, it was impossible to evaluate user acceptance of the new interface or the adaptive agent. With newly available computing devices incorporating pen input and handwriting recognition, it should be possible to reengineer the user interface and field test these ideas with actual users.
One aspect of note learning, related to tokenization and the button-box user interface display, is the difficulty of generalizing numeric strings or unique tokens. The cardinality of the range of model numbers, telephone numbers, quantities, sizes, other numeric values, and even proper names is very large in some note domains. The finite-state machine learning method presented here is incapable of generalizing over transitions from a particular state, and, as a consequence, the current system has the problem of displaying a very lengthy button-box interface list. (A button is displayed for each value encountered in the syntax of notes, and there may be many choices.) For example, a large variety of pattern numbers may be available in the fabric pattern note domain. An appropriate mechanism is needed to determine when the list of numeric choices is too large to be useful as a button-box interface. The system could then generalize the expected number, indicating the number of digits to prompt the user: ####, for example. This may be helpful to remind the user that a number is expected without presenting an overbearing list of possibilities; one simple version of such a masking heuristic is sketched below.
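The following sketch is our own devising, not a mechanism from the prototype; the cutoff of eight choices is an arbitrary assumption:

    def numeric_prompt(values, max_choices=8):
        # If the observed values for a slot are numeric and too numerous
        # to be useful as buttons, return a digit mask such as '####'.
        if (len(values) > max_choices
                and all(v.replace('.', '', 1).isdigit() for v in values)):
            width = max(len(v) for v in values)
            return '#' * width
        return None  # small or non-numeric sets: render buttons as usual

    # e.g. numeric_prompt({'4198', '3722', '4352', '6171', '3674',
    #                      '3035', '4864', '5057', '5377'}) -> '####'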
Another limitation of the current effort lies in the choice of finite-state machines to represent the syntax of the user's notes. The syntax of notes may not be regular, with the consequence that the FSMs may become too large as the learning method attempts to acquire a syntax. This may place an unreasonable demand on memory and lead to reduced prompting effectiveness.
The choice of finite-state machines also apparently constrains the custom user interface. Because FSMs branch in unpredictable ways, button-box interfaces must be rendered incrementally. After the user indicates a particular transition (by selecting a button), the system can render states reachable from that transition for the user. Ideally, the user should be able to select buttons corresponding to note fragments in any order, allowing them to write down the size before the pattern number, for example. To construct a non-modal user interface, a more flexible syntactic representation is needed.
Several of the low-level design decisions employed in this system are crude responses to technical issues. One example is the decision to render a syntax as a button-box interface only if the average number of times each state has been used to parse notes is greater than 2; this ignores the fact that some parts of the state machine have been used frequently for parsing notes while other parts have rarely been used. Similarly, the particular measure for estimating prompting confidence (and setting the saturation of the completion button) is simplistic and would benefit from a more sound statistical basis." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "Anonymous reviewers suggested an additional example in Section 3, offered some refinements to the user interface, graciously identified some limitations of the work listed in Section 9, and pointed out some additional related work. Mike Kibler, Karl Hakimian, and the EECS staff provided a consistent and reliable computing environment. Apple Cambridge developed and supports the Macintosh Common Lisp programming environment. Allen Cypher provided the tokenizer code. This work was supported in part by the National Science Foundation under grant number 92-1290 and by a grant from Digital Equipment Corporation." } ]
[ { "authors": "D Angluin", "journal": "Journal of the Association for Computing Machinery", "ref_id": "b0", "title": "Inference of reversible languages", "year": "1982" }, { "authors": "R C Berwick; S Pilato", "journal": "Machine Learning", "ref_id": "b1", "title": "Learning syntax by automata induction", "year": "1987" }, { "authors": "J Bull; J Farrand Jr", "journal": "Alfred A. Knopf", "ref_id": "b2", "title": "The Audubon Society Field Guide to North American Birds (Eastern Edition)", "year": "1977" }, { "authors": "", "journal": "Chilton Book", "ref_id": "b3", "title": "Chilton's Repair & Tune-Up Guide: GM X-Body 1980-1985", "year": "1985" }, { "authors": "W W Cohen", "journal": "", "ref_id": "b4", "title": "Generalizing number and learning from multiple examples in explanation based learning", "year": "1988" }, { "authors": "", "journal": "Consumer Reports", "ref_id": "b5", "title": "", "year": "1988" }, { "authors": "A Cypher", "journal": "ACM", "ref_id": "b6", "title": "Eager: Programming repetitive tasks by example", "year": "1991" }, { "authors": "L Dent; J Boticario; J McDermott; T Mitchell; D Zabowski", "journal": "AAAI Press", "ref_id": "b7", "title": "A personal learning apprentice", "year": "1992" }, { "authors": "D H Fisher", "journal": "Machine Learning", "ref_id": "b8", "title": "Knowledge acquisition via incremental conceptual clustering", "year": "1987" }, { "authors": "M Grove; J Miller", "journal": "Aerofax", "ref_id": "b9", "title": "North American Rockwell A3J/A-5 Vigilante", "year": "1989" }, { "authors": "L A Hermens; J C Schlimmer", "journal": "", "ref_id": "b10", "title": "A machine-learning apprentice for the completion of repetitive forms", "year": "1993" }, { "authors": "P Laird", "journal": "AAAI Press", "ref_id": "b11", "title": "Discrete sequence prediction and its applications", "year": "1992" }, { "authors": "P Maes; R Kozierok", "journal": "AAAI Press", "ref_id": "b12", "title": "Learning interface agents", "year": "1993" }, { "authors": "", "journal": "Intermed Communications", "ref_id": "b13", "title": "Nurse's Guide to Drugs", "year": "1979" }, { "authors": "J C Schlimmer; D H Fisher", "journal": "AAAI Press", "ref_id": "b14", "title": "A case study of incremental concept induction", "year": "1986" } ]
[ { "formula_coordinates": [ 3, 460.25, 429.24, 57.42, 9.9 ], "formula_id": "formula_0", "formula_text": "(Example 1)" }, { "formula_coordinates": [ 7, 233.25, 410.07, 55.8, 11.23 ], "formula_id": "formula_1", "formula_text": "n 3 n" }, { "formula_coordinates": [ 14, 85.95, 497.84, 122.99, 74.9 ], "formula_id": "formula_2", "formula_text": "S1 = \" 4096 \" S2 = \" 170 \" S3 = NIL S4 = \" 40 \" S5 = \" Drives \" S6" }, { "formula_coordinates": [ 15, 119.05, 510.86, 340.3, 67.84 ], "formula_id": "formula_3", "formula_text": "f total ( ) 1 skipped + ( ) × f prediction ( ) f total ( ) skipped" } ]
Software Agents: Completing Patterns and Constructing User Interfaces
To support the goal of allowing users to record and retrieve information, this paper describes an interactive note-taking system for pen-based computers with two distinctive features. First, it actively predicts what the user is going to write. Second, it automatically constructs a custom, button-box user interface on request. The system is an example of a learning-apprentice software agent. A machine learning component characterizes the syntax and semantics of the user's information. A performance system uses this learned information to generate completion strings and construct a user interface. (Footnote 1: Of the functionality described here, our prototype implements all but the transition from button-box to contextual prompting. The mechanism for such a transition is machine dependent and is not germane to this research.)
Jeffrey C Schlimmer; Leonard A Hermens
[ { "figure_caption": "Figure 2 :2Figure 2: Screen snapshot of the note-taking software in contextual prompting mode for a PowerBook note. The two triangles in the lower left are scroller buttons.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Screen snapshot of the note-taking software in button-box mode for a PowerBook note.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: (a) Degenerate finite-state machine after processing a single fabric pattern note, and (b) prefix tree finite-state machine after adding a second fabric pattern note (cf. Example 2).", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Sample finite-state machine after processing three fabric pattern notes.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Zero-reversible FSM characterizing PowerBook notes (cf. Example 1).", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Simple finite-state machine with one state.", "figure_data": "", "figure_id": "fig_6", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "list sample notes from seven domains (in addition to the PowerBook and fabric pattern sample notes listed above).CVA-62 8/6/63 to 3/4/64 Mediterranean A-5A AG 60X CVA-61 8/5/64 to 5/6/65 Vietnam RA-5C NG 10X", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "90, Mazda MPV, 40K MI, 7 Pass, V6, Auto ABS, PL/PW, Cruise, Dual Air 87, Grand Caravan, 35K MI, 7 Pass, V6, Auto Cruise, Air, Tilt, Tinting", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Screen snapshot of the note-taking software in button-box mode for an airwing note.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Screen snapshot of the note-taking software in button-box mode for a fabric pattern note.", "figure_data": "", "figure_id": "fig_10", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Screen snapshot of the note-taking software in button-box mode for an engine code note.", "figure_data": "", "figure_id": "fig_11", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Screen snapshot of the note-taking software in button-box mode for a minivan note.", "figure_data": "", "figure_id": "fig_12", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Screen snapshot of the note-taking software in button-box mode for a watch note.", "figure_data": "", "figure_id": "fig_13", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Extended FSM state merging rules.", "figure_data": "start:NULL2048409661448192KPowerBook100140145160170801.4MBMBIntandand20402040801201.4MBMBIntExtDriveDrivesandDrivesterminal14.4v1.42xBattery,Battery,Case, Charger,K32MBFPU,Video Output9.6Ext96002400/48002400/96004800/9600KBaudbisFAXModem", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Zero-reversible finite-state machine characterizing 
fabric pattern notes learned using merging rules listed in Table", "figure_data": "start:NULLButterickMcCall'sSimplicity4198372243526171367430353611486450575377590654245465SizeSizesSizeSizeSizeSizeSuzeSizeSizeSizeSize128-10-1212101011/1210121211/1211/12DressJumperDressDressDressJumperDressTopSkirtJumperSkirtTopterminalFigure 8:", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Stopping criterion for contextual prompting.", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Sample notes from the airwing domain. Listed above are 2 of the 78 notes about airwing assignments aboard aircraft carriers collected from", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Sample notes from the minivan domain. Listed above are 2 of the 22 notes about minivan automobiles collected by the first author.", "figure_data": "Lorus Disney Oversize Mickey Mouse Watch.Genuine leather strap.Seiko Disney Ladies' Minnie Mouse Watch.Leather strap.", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Sample notes from the watch domain. Listed above are 2 of the 89 notes about personal watches collected from the Best catalog (a department store).", "figure_data": "azatadine maleateBlood: thrombocytopenia.CNS: disturbed coordination, dizziness, drowsiness, sedation,vertigo.CV: palpitations, hypotension.GI: anorexia, dry mouth and throat, nausea, vomiting.GU: Urinary retention.Skin: rash, urticaria.Other: chills, thickening of bronchial secretions.brompheniramine maleateBlood: aganulocytosis, thrombocytopenia.CNS: dizziness, insomnia, irritability, tremors.CV: hypotension, palpitations.GI: anorexia, dry mouth and throat, nausea, vomiting.GU: urinary retention.Skin: rash, urticaria.After parenteral administration:local reaction, sweating, syncope may occur.", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Sample notes from the antihistamine domain. Listed above are 2 of the 17 notes on the side effects of antihistamines collected from the Nurses Guide to Drugs (1979).", "figure_data": "Canon FD f/1.8, 6oz., f/22, 13in.,good sharpness, poor freedom from flare,better freedom from distortion,focal length marked on sides as well ason front of lensChinon f/1.7, 6oz., f/22, 9in.,poor sharpness, good freedom from flare,good freedom from distortion,cannot be locked in program mode, whichis only a problem, of course, when lens isused on program-mode cameras", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Sample notes from the lens domain. Listed above are 2 of the 31 notes about 35mm SLR camera normal lenses collected from the Consumer Reports (1988). 
Summary characteristics of the nine domains are listed in Table", "figure_data": "", "figure_id": "tab_9", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Percentage of tokens correctly predicted as a function of the learning method.", "figure_data": "", "figure_id": "tab_12", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Percentage of tokens correctly predicted as a function of design variations.", "figure_data": "12345678910DomainNormDiff TokensRules 2a,bRules 2ab,3aNo RestartAccept = 1/4Accept = 3/4Repeat AttsDrop Class'rNew IDsAirwing62626362626262626163Pattern51515352505151515153Engine Code69717269436969696772Minivan47484847284747524548PowerBook82808383778282818082Watch42424343284242424143Antihistamine2425242492424242424Lens63666463466363636364Raptor12111212111212121212", "figure_id": "tab_13", "figure_label": "14", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b32", "b51", "b38", "b9", "b40", "b52", "b62", "b12", "b0", "b8", "b43", "b10", "b44", "b21", "b58", "b18", "b55", "b57", "b49", "b34", "b39", "b28", "b36", "b48", "b42", "b23", "b60", "b37", "b2", "b3", "b15", "b53", "b45", "b46", "b56", "b58" ], "table_ref": [], "text": "A general characteristic of many proposed terminological knowledge representation systems (TKRSs) such as krypton (Brachman, Pigman Gilbert, & Levesque, 1985), nikl (Kaczmarek, Bates, & Robins, 1986), back (Quantz & Kindermann, 1990), loom (MacGregor & Bates, 1987), classic (Borgida, Brachman, McGuinness, & Alperin Resnick, 1989), kris (Baader & Hollunder, 1991), k-rep (Mays, Dionne, & Weida, 1991), and others (see Rich, editor, 1991; Woods & Schmolze, 1992), is that they are made up of two different components. Informally speaking, the first is a general schema concerning the classes of individuals to be represented, their general properties and mutual relationships, while the second is a (partial) instantiation of this schema, containing assertions relating either individuals to classes, or individuals to each other. This characteristic, which the mentioned proposals inherit from the seminal TKRS kl-one (Brachman & Schmolze, 1985), is shared also by several proposals of database models such as Abrial's (1974), candide (Beck, Gala, & Navathe, 1989), and taxis (Mylopoulos, Bernstein, & Wong, 1980).
Retrieving information in actual knowledge bases (KBs) built up using one of these systems is a deductive process involving both the schema (TBox) and its instantiation (ABox). In fact, the TBox is not just a set of constraints on possible ABoxes, but contains intensional information about classes. This information is taken into account when answering queries to the KB.
During the realization and use of a KB, a TKRS should provide a mechanical solution for at least the following problems (from this point on, we use the word concept to refer to a class):
1. KB-satisfiability: are an ABox and a TBox consistent with each other? That is, does the KB admit a model? A positive answer is useful in the validation phase, while a negative answer can be used to make inferences in refutation style. The latter will be precisely the approach taken in this paper.
2. Concept Satisfiability: given a KB and a concept C, does there exist at least one model of the KB assigning a non-empty extension to C? This is important not only to rule out meaningless concepts in the KB design phase, but also in processing the user's queries, to eliminate parts of a query which cannot contribute to the answer.
3. Subsumption: given a KB and two concepts C and D, is C more general than D in any model of the KB? Subsumption detects implicit dependencies among the concepts in the KB.
4. Instance Checking: given a KB, an individual a and a concept C, is a an instance of C in any model of the KB? Note that retrieving all individuals described by a given concept (a query in the database lexicon) can be formulated as a set of parallel instance checkings.
The above questions can be precisely characterized once the TKRS is given a semantics (see next section), which defines models of the KB and gives a meaning to expressions in the KB. Once the problems are formalized, one can start both a theoretical analysis of them and, maybe independently, a search for reasoning procedures accomplishing the tasks.
Completeness and correctness of procedures can be judged with respect to the formal statements of the problems.
Up to now, all the proposed systems give incomplete procedures for solving the above problems 1-4, except for kris. That is, some inferences are missed, in some cases without a precise semantical characterization of which ones. If the designer or the user needs (more) complete reasoning, she/he must either write programs in a suitable programming language (as in the database proposal of Abrial, and in taxis), or define appropriate inference rules completing the inference capabilities of the system (as in back, loom, and classic). From the theoretical point of view, for several systems (e.g., loom) it is not even known whether complete procedures can ever exist, i.e., the decidability of the corresponding problems is not known.
Recent research on the computational complexity of subsumption had an influence in many TKRSs on the choice for incomplete procedures. Brachman and Levesque (1984) started this research analyzing the complexity of subsumption between pure concept expressions, abstracting from KBs (we refer to this problem later in the paper as pure subsumption). The motivation for focusing on such a small problem was that pure subsumption is a fundamental inference in any TKRS. It turned out that pure subsumption is tractable (i.e., worst-case polynomial-time solvable) for simple languages, and intractable for slight extensions of such languages, as subsequent research definitely confirmed (Nebel, 1988; Donini, Lenzerini, Nardi, & Nutt, 1991a, 1991b; Schmidt-Schauss & Smolka, 1991; Donini, Hollunder, Lenzerini, Marchetti Spaccamela, Nardi, & Nutt, 1992). Also, beyond computational complexity, pure subsumption was proved undecidable in the TKRSs U (Schild, 1988), kl-one (Schmidt-Schauss, 1989) and nikl (Patel-Schneider, 1989).
Note that extending the language results in enhancing its expressiveness; therefore the result of that research could be summarized as: the more expressive a TKRS language is, the higher is the computational complexity of reasoning in that language, as Levesque (1984) first noted. This result has been interpreted in two different ways, leading to two different TKRS design philosophies:
1. 'General-purpose languages for TKRSs are intractable, or even undecidable, and tractable languages are not expressive enough to be of practical interest.' Following this interpretation, in several TKRSs (such as nikl, loom and back) incomplete procedures for pure subsumption are considered satisfactory (e.g., see MacGregor & Brill, 1992 for loom). Once completeness is abandoned for this basic subproblem, completeness of overall reasoning procedures is not an issue anymore; but other issues arise, such as how to compare incomplete procedures (Heinsohn, Kudenko, Nebel, & Profitlich, 1992), and how to judge a procedure \"complete enough\" (MacGregor, 1991). As a practical tool, inference rules can be used in such systems to achieve the expected behavior of the KB w.r.t. the information contained in it.
2. 'A TKRS is (by definition) general-purpose, hence it must provide tractable and complete reasoning to a user.' Following this line, other TKRSs (such as krypton and classic) provide limited tractable languages for expressing concepts, following the \"small-can-be-beautiful\" approach (see Patel-Schneider, 1984).
The gap between what is expressible in the TKRS language and what needs to be expressed for the application is then filled by the user, by (a sort of) programming with inference rules. Of course, the usual problems present in program development and debugging arise (McGuinness, 1992). What is common to both approaches is that a user must cope with incomplete reasoning. The difference is that in the former approach, the burden of regaining useful yet missed inferences is mostly left to the developers of the TKRS (and the user is supposed to specify what is \"complete enough\"), while in the latter this is mainly left to the user. These are perfectly reasonable approaches in a practical context, where incomplete procedures and specialized programs are often used to deal with intractable problems. In our opinion, incomplete procedures are just a provisional answer to the problem, the best possible up to now. In order to improve on such an answer, a theoretical analysis of the general problems 1-4 has to be done.
Previous theoretical results do not deal with the problems 1-4 in their full generality. For example, the problems are studied in (Nebel, 1990, Chapter 4), but only incomplete procedures are given, and cycles are not considered. In (Donini, Lenzerini, Nardi, & Schaerf, 1993; Schaerf, 1993a) the complexity of instance checking has been analyzed, but only KBs without a TBox are treated. Instance checking has also been analyzed in (Vilain, 1991), but addressing only that part of the problem which can be performed as parsing.
In addition, we think that the expressiveness of actual systems should be enhanced by making terminological cycles (see Nebel, 1990, Chapter 5) available in TKRSs. Such a feature is of undoubted practical interest (MacGregor, 1992), yet most present TKRSs can only approximate cycles, by using forward inference rules (as in back, classic, loom). In our opinion, in order to make terminological cycles fully available in complete TKRSs, a theoretical investigation is still needed.
Previous theoretical work on cycles was done in (Baader, 1990a, 1990b; Baader, Buerckert, Hollunder, Nutt, & Siekmann, 1990; Dionne, Mays, & Oles, 1992, 1993; Nebel, 1990, 1991; Schild, 1991), but considering KBs formed by the TBox alone. Moreover, these approaches do not deal with number restrictions (except for Nebel, 1990, Section 5.3.5), a basic feature already provided by TKRSs, and the techniques used do not seem easily extensible to reasoning with ABoxes. We compare several of these works with ours in detail in Section 4.
In this paper, we propose a TKRS equipped with a highly expressive language, including constructors often required in practical applications, and prove decidability of problems 1-4. In particular, our system uses the language ALCNR, which supports general complements of concepts, number restrictions and role conjunction. Moreover, the system allows one to express inclusion statements between general concepts and, as a particular case, terminological cycles. We prove decidability by means of a suitable calculus, which is developed by extending the well-established framework of constraint systems (see Donini et al., 1991a; Schmidt-Schauss & Smolka, 1991), thus exploiting a uniform approach to reasoning in TKRSs. Moreover, our calculus can easily be turned into a decision procedure.
The paper is organized as follows. In Section 2 we introduce the language, and we give it a Tarski-style extensional semantics, which is the most commonly used.
Using this semantics, we establish relationships between problems 1-4 which allow us to concentrate on KB-satisfiability only. In Section 3 we provide a calculus for KB-satisfiability, and show correctness and termination of the calculus. Hence, we conclude that KB-satisfiability is decidable in ALCNR, which is the main result of this paper. In Section 4 we compare our approach with previous results on decidable TKRSs, and we establish the equivalence of general (cyclic) inclusion statements and general concept definitions using the descriptive semantics. Finally, we discuss in detail several practical issues related to our results in Section 5." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "In this section we first present the basic notions regarding concept languages. Then we describe knowledge bases built up using concept languages, and reasoning services that must be provided for extracting information from such knowledge bases." }, { "figure_ref": [], "heading": "Concept Languages", "publication_ref": [], "table_ref": [], "text": "In concept languages, concepts represent the classes of objects in the domain of interest, while roles represent binary relations between objects. Complex concepts and roles can be defined by means of suitable constructors applied to concept names and role names. In particular, concepts and roles in ALCNR can be formed by means of the following syntax (where A denotes a concept name, P_i (for i = 1, ..., k) denotes a role name, C and D denote arbitrary concepts, and R an arbitrary role):
C, D → A | (concept name)
⊤ | (top concept)
⊥ | (bottom concept)
(C ⊓ D) | (conjunction)
(C ⊔ D) | (disjunction)
¬C | (complement)
∀R.C | (universal quantification)
∃R.C | (existential quantification)
(≥ n R) | (≤ n R) (number restrictions)
R → P₁ ⊓ ⋯ ⊓ Pₖ (role conjunction)
When no confusion arises we drop the brackets around conjunctions and disjunctions. We interpret concepts as subsets of a domain and roles as binary relations over a domain. More precisely, an interpretation I = (Δᴵ, ·ᴵ) consists of a nonempty set Δᴵ (the domain of I) and a function ·ᴵ (the extension function of I), which maps every concept to a subset of Δᴵ and every role to a subset of Δᴵ × Δᴵ. The interpretation of concept names and role names is thus restricted by Aᴵ ⊆ Δᴵ and Pᴵ ⊆ Δᴵ × Δᴵ, respectively. Moreover, the interpretation of complex concepts and roles must satisfy the following equations (♯{·} denotes the cardinality of a set):
⊤ᴵ = Δᴵ
⊥ᴵ = ∅
(C ⊓ D)ᴵ = Cᴵ ∩ Dᴵ
(C ⊔ D)ᴵ = Cᴵ ∪ Dᴵ    (1)
(¬C)ᴵ = Δᴵ ∖ Cᴵ
(∀R.C)ᴵ = {d₁ ∈ Δᴵ | ∀d₂: (d₁, d₂) ∈ Rᴵ → d₂ ∈ Cᴵ}
(∃R.C)ᴵ = {d₁ ∈ Δᴵ | ∃d₂: (d₁, d₂) ∈ Rᴵ ∧ d₂ ∈ Cᴵ}
(≥ n R)ᴵ = {d₁ ∈ Δᴵ | ♯{d₂ | (d₁, d₂) ∈ Rᴵ} ≥ n}
(≤ n R)ᴵ = {d₁ ∈ Δᴵ | ♯{d₂ | (d₁, d₂) ∈ Rᴵ} ≤ n}
(P₁ ⊓ ⋯ ⊓ Pₖ)ᴵ = P₁ᴵ ∩ ⋯ ∩ Pₖᴵ
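To make these equations concrete, the following small evaluator computes the extension of a concept over a finite interpretation. The encoding is our own illustration, not part of the paper: concepts are nested tuples such as ('and', C, D), and a role is a list of role names standing for P₁ ⊓ ⋯ ⊓ Pₖ.

    def role_ext(role_conj, I):
        # Extension of a role conjunction: intersect the extensions
        # of the role names occurring in it.
        domain, conc, roles = I
        pairs = {(d, e) for d in domain for e in domain}
        for p in role_conj:
            pairs &= roles.get(p, set())
        return pairs

    def ext(c, I):
        # Extension of concept c in I = (domain, conc, roles), where
        # conc and roles map names to their extensions.
        domain, conc, roles = I
        tag = c[0]
        if tag == 'top':  return set(domain)
        if tag == 'bot':  return set()
        if tag == 'name': return set(conc.get(c[1], set()))
        if tag == 'not':  return set(domain) - ext(c[1], I)
        if tag == 'and':  return ext(c[1], I) & ext(c[2], I)
        if tag == 'or':   return ext(c[1], I) | ext(c[2], I)
        if tag in ('all', 'some'):
            R, D = role_ext(c[1], I), ext(c[2], I)
            if tag == 'all':
                return {d for d in domain
                        if all(e in D for (x, e) in R if x == d)}
            return {d for d in domain
                    if any(e in D for (x, e) in R if x == d)}
        if tag in ('atleast', 'atmost'):
            n, R = c[1], role_ext(c[2], I)
            def count(d): return sum(1 for (x, e) in R if x == d)
            if tag == 'atleast':
                return {d for d in domain if count(d) >= n}
            return {d for d in domain if count(d) <= n}
        raise ValueError('unknown constructor: %r' % (tag,))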
" }, { "figure_ref": [], "heading": "Knowledge Bases", "publication_ref": [ "b45", "b46", "b2", "b46", "b14", "b10", "b45", "b9", "b54", "b29", "b54" ], "table_ref": [], "text": "A knowledge base built by means of concept languages is generally formed by two components: the intensional one, called TBox, and the extensional one, called ABox. We first turn our attention to the TBox. As we said before, the intensional level specifies the properties of the concepts of interest in a particular application. Syntactically, such properties are expressed in terms of what we call inclusion statements. An inclusion statement (or simply inclusion) has the form C ⊑ D, where C and D are two arbitrary ALCNR-concepts. Intuitively, the statement specifies that every instance of C is also an instance of D. More precisely, an interpretation I satisfies the inclusion C ⊑ D if Cᴵ ⊆ Dᴵ.
A TBox is a finite set of inclusions. An interpretation I is a model for a TBox T if I satisfies all inclusions in T.
In general, TKRSs provide the user with mechanisms for stating concept introductions (e.g., Nebel, 1990, Section 3.2) of the form A ≐ D (concept definition, interpreted as set equality) or A ⊑̇ D (concept specification, interpreted as set inclusion), with the restrictions that the left-hand side concept A must be a concept name, that for each concept name at most one introduction is allowed, and that no terminological cycles are allowed, i.e., no concept name may occur, directly or indirectly, within its own introduction. These restrictions make it possible to substitute an occurrence of a defined concept by its definition.
We do not impose any of these restrictions on the form of inclusions, obtaining statements that are syntactically more expressive than concept introductions. In particular, a definition of the form A ≐ D can be expressed in our system using the pair of inclusions A ⊑ D and D ⊑ A, and a specification of the form A ⊑̇ D can be simply expressed by A ⊑ D. Conversely, an inclusion of the form C ⊑ D, where C and D are arbitrary concepts, cannot be expressed with concept introductions. Moreover, cyclic inclusions are allowed in our statements, realizing terminological cycles.
As shown in (Nebel, 1991), there are at least three types of semantics for terminological cycles, namely the least fixpoint, the greatest fixpoint, and the descriptive semantics. Fixpoint semantics choose particular models among the set of interpretations that satisfy a statement of the form A ≐ D. Such models are chosen as the least and the greatest fixpoints of the above equation. The descriptive semantics instead considers all interpretations that satisfy the statement (i.e., all fixpoints) as its models.
However, fixpoint semantics naturally apply only to fixpoint statements like A ≐ D, where D is a \"function\" of A, i.e., A may appear in D, and there is no obvious way to extend them to general inclusions. In addition, since our language includes the constructor for complement of general concepts, the \"function\" D may be non-monotone, and therefore the least and the greatest fixpoints may not be unique. Whether or not there exists a definitional semantics that is suitable for cyclic definitions in expressive languages is still unclear.
Conversely, the descriptive semantics interprets statements as just restricting the set of possible models, with no definitional import. Although it is not completely satisfactory in all practical cases (Baader, 1990b; Nebel, 1991), the descriptive semantics has been considered to be the most appropriate one for general cyclic statements in powerful concept languages. Hence, it seems to be the most suitable to be extended to our case, and it is exactly the one we have adopted above.
Observe that our decision to put general inclusions in the TBox is not a standard one. In fact, in TKRSs like krypton such statements were put in the ABox. However, we conceive inclusions as a generalization of traditional TBox statements: acyclic concept introductions, with their definitional import, can be perfectly expressed with inclusions; and cyclic concept introductions can be expressed as well, if the descriptive semantics is adopted.
Therefore, we believe that inclusions should be part of the TBox.
Notice that role conjunction allows one to express the practical feature of subroles. For example, the role ADOPTEDCHILD can be written as CHILD ⊓ ADOPTEDCHILD', where ADOPTEDCHILD' is a role name, making it a subrole of CHILD. Following this idea, every hierarchy of role names can be rephrased with a set of role conjunctions, and vice versa.
Actual systems usually provide for the construction of hierarchies of roles by means of role introductions (i.e., statements of the form P ≐ R and P ⊑̇ R) in the TBox. However, in our simple language for roles, cyclic definitions of roles can always be reduced to acyclic definitions, as explained in (Nebel, 1990, Sec. 5.3.1). When role definitions are acyclic, one can always substitute in every concept each role name with its definition, obtaining an equivalent concept. Therefore, we do not consider role definitions in this paper, and we conceive the TBox just as a set of concept inclusions.
Even so, it is worth noticing that concept inclusions can express knowledge about roles. In particular, domain and range restrictions of roles can be expressed, in a way similar to the one in (Catarci & Lenzerini, 1993). Restricting the domain of a role R to a concept C and its range to a concept D can be done by the two inclusions
∃R.⊤ ⊑ C,   ⊤ ⊑ ∀R.D.
It is straightforward to show that if an interpretation I satisfies the two inclusions, then Rᴵ ⊆ Cᴵ × Dᴵ.
Combining subroles with domain and range restrictions, it is also possible to partially express the constructor for role restriction, which is present in various proposals (e.g., the language FL in Brachman & Levesque, 1984). Role restriction, written R : C, is defined by (R : C)ᴵ = {(d₁, d₂) ∈ Δᴵ × Δᴵ | (d₁, d₂) ∈ Rᴵ ∧ d₂ ∈ Cᴵ}. For example, the role DAUGHTER, which can be formulated as CHILD : Female, can be partially simulated by CHILD ⊓ DAUGHTER', with the inclusion ⊤ ⊑ ∀DAUGHTER'.Female. However, this simulation is not complete with respect to number restrictions: e.g., if a mother has at least three daughters, then we know she has at least three female children; if instead we know that she has three female children, we cannot infer that she has three daughters.
We can now turn our attention to the extensional level, i.e., the ABox. The ABox essentially allows one to specify instance-of relations between individuals and concepts, and between pairs of individuals and roles.
Let O be an alphabet of symbols, called individuals. Instance-of relationships are expressed in terms of membership assertions of the form C(a), R(a, b), where a and b are individuals, C is an ALCNR-concept, and R is an ALCNR-role. Intuitively, the first form states that a is an instance of C, whereas the second form states that a is related to b by means of the role R.
In order to assign a meaning to membership assertions, the extension function ·ᴵ of an interpretation I is extended to individuals by mapping them to elements of Δᴵ in such a way that aᴵ ≠ bᴵ if a ≠ b. This property is called the Unique Name Assumption; it ensures that different individuals are interpreted as different objects.
An interpretation I satisfies the assertion C(a) if aᴵ ∈ Cᴵ, and satisfies R(a, b) if (aᴵ, bᴵ) ∈ Rᴵ. An ABox is a finite set of membership assertions. I is a model for an ABox A if I satisfies all the assertions in A.
An ALCNR-knowledge base is a pair Σ = ⟨T, A⟩ where T is a TBox and A is an ABox. An interpretation I is a model for Σ if it is both a model for T and a model for A.
We can now formally define the problems 1-4 mentioned in the introduction. Let Σ be an ALCNR-knowledge base.
1. KB-satisfiability: Σ is satisfiable if it has a model;
2. Concept Satisfiability: C is satisfiable w.r.t. Σ if there exists a model I of Σ such that Cᴵ ≠ ∅;
3. Subsumption: C is subsumed by D w.r.t. Σ if Cᴵ ⊆ Dᴵ for every model I of Σ;
4. Instance Checking: a is an instance of C, written Σ ⊨ C(a), if the assertion C(a) is satisfied in every model of Σ.
In (Nebel, 1990, Sec. 3.3.2) it is shown that the ABox plays no active role when checking concept satisfiability and subsumption. In particular, Nebel shows that the ABox (subject to its satisfiability) can be replaced by an empty one without affecting the result of those services. Actually, in (Nebel, 1990) the above property is stated for a language less expressive than ALCNR. However, it is easy to show that it extends to ALCNR. It is important to remark that such a property is not valid for all concept languages. In fact, there are languages that include constructors that refer to the individuals in the concept language, e.g., the constructor one-of (Borgida et al., 1989), which forms a concept from a set of enumerated individuals. If a concept language includes such a constructor, the individuals in the TBox can interact with the individuals in the ABox, as shown in (Schaerf, 1993b). As a consequence, both concept satisfiability and subsumption depend also on the ABox.
Example 2.1 Consider the following knowledge base Σ = ⟨T, A⟩:
T = { ∃TEACHES.Course ⊑ (Student ⊓ ∃DEGREE.BS) ⊔ Prof,
Prof ⊑ ∃DEGREE.MS,
∃DEGREE.MS ⊑ ∃DEGREE.BS,
MS ⊓ BS ⊑ ⊥ }
A = { TEACHES(john, cs156), (≤ 1 DEGREE)(john), Course(cs156) }
Σ is a fragment of a hypothetical knowledge base describing the organization of a university. The first inclusion, for instance, states that the persons teaching a course are either graduate students (students with a BS degree) or professors. It is easy to see that Σ is satisfiable. For example, the following interpretation I satisfies all the inclusions in T and all the assertions in A, and therefore it is a model for Σ:
Δᴵ = {john, cs156, csb}, johnᴵ = john, cs156ᴵ = cs156,
Studentᴵ = {john}, Profᴵ = ∅, Courseᴵ = {cs156}, BSᴵ = {csb}, MSᴵ = ∅,
TEACHESᴵ = {(john, cs156)}, DEGREEᴵ = {(john, csb)}.
We have described the interpretation I by giving only Δᴵ and the values of ·ᴵ on concept names and role names. It is straightforward to see that all values of ·ᴵ on complex concepts and roles are uniquely determined by imposing that I must satisfy the Equations 1 on page 113.
Notice that it is possible to draw several non-trivial conclusions from Σ. For example, we can infer that Σ ⊨ Student(john). Intuitively this can be shown as follows: John teaches a course, thus he is either a student with a BS or a professor. But he cannot be a professor, since professors have at least two degrees (BS and MS) and he has at most one; therefore he is a student.
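Using the toy evaluator sketched after the semantics equations, the claim that I is a model can be checked mechanically. The snippet below is our own encoding of the example, reusing the ext and role_ext functions defined there:

    domain = {'john', 'cs156', 'csb'}
    conc = {'Student': {'john'}, 'Prof': set(), 'Course': {'cs156'},
            'BS': {'csb'}, 'MS': set()}
    roles = {'TEACHES': {('john', 'cs156')}, 'DEGREE': {('john', 'csb')}}
    I = (domain, conc, roles)

    # First inclusion: EXISTS TEACHES.Course is contained in
    # (Student AND EXISTS DEGREE.BS) OR Prof.
    lhs = ('some', ['TEACHES'], ('name', 'Course'))
    rhs = ('or', ('and', ('name', 'Student'),
                  ('some', ['DEGREE'], ('name', 'BS'))),
           ('name', 'Prof'))
    assert ext(lhs, I) <= ext(rhs, I)
    # ABox assertion (<= 1 DEGREE)(john):
    assert 'john' in ext(('atmost', 1, ['DEGREE']), I)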
Given the previous semantics, the problems 1-4 can all be reduced to KB-satisfiability (or to its complement) in linear time. In fact, given a knowledge base Σ = ⟨T, A⟩, two concepts C and D, an individual a, and an individual b not appearing in Σ, the following equivalences hold:
C is satisfiable w.r.t. Σ iff ⟨T, A ∪ {C(b)}⟩ is satisfiable;
C is subsumed by D w.r.t. Σ iff ⟨T, A ∪ {(C ⊓ ¬D)(b)}⟩ is not satisfiable;
Σ ⊨ C(a) iff ⟨T, A ∪ {(¬C)(a)}⟩ is not satisfiable.
A slightly different form of these equivalences has been given in (Hollunder, 1990). The equivalences given here are a straightforward consequence of the ones given by Hollunder. However, the above equivalences are not valid for languages including constructors that refer to the individuals in the concept language. The equivalences between reasoning services in such languages are studied in (Schaerf, 1993b).
Based on the above equivalences, in the next section we concentrate just on KB-satisfiability." }, { "figure_ref": [], "heading": "Decidability Result", "publication_ref": [], "table_ref": [], "text": "In this section we provide a calculus for deciding KB-satisfiability. In particular, in Subsection 3.1 we present the calculus and we state its correctness. Then, in Subsection 3.2, we prove the termination of the calculus. This will be sufficient to assess the decidability of all problems 1-4, thanks to the relationships between the four problems." }, { "figure_ref": [], "heading": "The calculus and its correctness", "publication_ref": [ "b58", "b24" ], "table_ref": [], "text": "Our method makes use of the notion of constraint system (Donini et al., 1991a; Schmidt-Schauss & Smolka, 1991; Donini, Lenzerini, Nardi, & Schaerf, 1991c), and is based on a tableaux-like calculus (Fitting, 1990) that tries to build a model for the logical formula corresponding to a KB.
We introduce an alphabet of variable symbols V together with a well-founded total ordering ≺ on V. The alphabet V is disjoint from the other ones defined so far. The purpose of the ordering will become clear later. The elements of V are denoted by the letters x, y, z, w. From this point on, we use the term object as an abstraction for individual and variable (i.e., an object is an element of O ∪ V). Objects are denoted by the symbols s, t and, as in Section 2, individuals are denoted by a, b.
A constraint is a syntactic entity of one of the forms:
s: C,   sPt,   ∀x.x: C,   s ≠ t,
where C is a concept and P is a role name. Concepts are assumed to be simple, i.e., the only complements they contain are of the form ¬A, where A is a concept name. Arbitrary ALCNR-concepts can be rewritten into equivalent simple concepts in linear time (Donini et al., 1991a). A constraint system is a finite nonempty set of constraints.
Given an interpretation I, we define an I-assignment α as a function that maps every variable of V to an element of Δᴵ, and every individual a to aᴵ (i.e., α(a) = aᴵ for all a ∈ O).
A pair (I, α) satisfies the constraint s: C if α(s) ∈ Cᴵ, the constraint sPt if (α(s), α(t)) ∈ Pᴵ, the constraint s ≠ t if α(s) ≠ α(t), and finally, the constraint ∀x.x: C if Cᴵ = Δᴵ (notice that α plays no role in this case). A constraint system S is satisfiable if there is a pair (I, α) that satisfies every constraint in S.
An ALCNR-knowledge base Σ = ⟨T, A⟩ can be translated into a constraint system S by replacing every inclusion C ⊑ D ∈ T with the constraint ∀x.x: ¬C ⊔ D, every membership assertion C(a) with the constraint a: C, every R(a, b) with the constraints aP₁b, ..., aPₖb if R = P₁ ⊓ ... ⊓ Pₖ, and including the constraint a ≠ b for every pair (a, b) of individuals appearing in A. It is easy to see that Σ is satisfiable if and only if S is satisfiable.
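The translation is purely syntactic. The sketch below is our own rendering of it (for readability it skips the rewriting of concepts into simple, negation-normal form, and uses an ad hoc tuple encoding of constraints):

    def translate(tbox, abox):
        # tbox: list of (C, D) inclusions; abox: list of assertions,
        # either ('concept', C, a) or ('role', [P1, ..., Pk], a, b).
        S = set()
        for (C, D) in tbox:
            # C subsumed-by D becomes the global constraint
            # "for all x, x: not-C or D".
            S.add(('forall', ('or', ('not', C), D)))
        individuals = set()
        for assertion in abox:
            if assertion[0] == 'concept':
                _, C, a = assertion
                S.add(('instance', a, C))
                individuals.add(a)
            else:
                _, role_names, a, b = assertion  # R = P1 AND ... AND Pk
                for P in role_names:
                    S.add(('rel', a, P, b))
                individuals.update({a, b})
        for a in individuals:            # Unique Name Assumption
            for b in individuals:
                if a < b:
                    S.add(('neq', a, b))
        return S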
In order to check a constraint system S for satisfiability, our technique adds constraints to S until either an evident contradiction is generated or an interpretation satisfying it can be obtained from the resulting system. Constraints are added on the basis of a suitable set of so-called propagation rules.

Before providing the rules, we need some additional definitions. Let S be a constraint system and R = P_1 ⊓ … ⊓ P_k (k ≥ 1) be a role. We say that t is an R-successor of s in S if sP_1t, …, sP_kt are in S. We say that t is a direct successor of s in S if, for some role R, t is an R-successor of s. We call direct predecessor the inverse relation of direct successor. If S is clear from the context, we omit it. Moreover, we denote by successor the transitive closure of the relation direct successor, and we denote by predecessor its inverse.

We assume that variables are introduced in a constraint system according to the ordering ≺. This means that if y is introduced in a constraint system S, then x ≺ y for all variables x that are already in S. We denote by S[x/s] the constraint system obtained from S by replacing each occurrence of the variable x by the object s.

We say that s and t are separated in S if the constraint s ≠ t is in S. Given a constraint system S and an object s, we define the function σ(·, ·) as follows: σ(S, s) := {C | s: C ∈ S}. Moreover, we say that two variables x and y are S-equivalent, written x ≈_S y, if σ(S, x) = σ(S, y). Intuitively, two S-equivalent variables can represent the same element in the potential interpretation built by the rules, unless they are separated. We call the rules →⊔ and →≤ nondeterministic rules, since they can be applied in different ways to the same constraint system (intuitively, they correspond to branching rules of tableaux). All the other rules are called deterministic rules. Moreover, we call the rules →∃ and →≥ generating rules, since they introduce new variables in the constraint system. All other rules are called nongenerating ones.

The use of the condition based on the S-equivalence relation in the generating rules (condition 5) is related to the goal of keeping the constraint system finite even in the presence of potentially infinite chains of applications of generating rules. Its role will become clearer later in the paper.

One can verify that rules are always applied to a system S either because of the presence in S of a given constraint s: C (condition 1), or, in the case of the →∀x-rule, because of the presence of an object s in S. When no confusion arises, we will say that a rule is applied to the constraint s: C or the object s (instead of saying that it is applied to the constraint system S).
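The propagation rules themselves are given as a table in the original paper, to which the condition numbers above refer. Purely as an illustration of their shape, the following Python sketch implements three representatives over the encoding introduced earlier: the deterministic →⊓-rule, the nondeterministic →⊔-rule, and the generating →∃-rule with a blocking test in the spirit of condition 5. All names and the encoding are assumptions of this sketch, not the formal rules.

def sigma(S, s):
    """sigma(S, s) = {C | s: C in S}."""
    return frozenset(con[2] for con in S if con[0] == "in" and con[1] == s)

def s_equivalent(S, x, y):
    """x ~S y iff sigma(S, x) = sigma(S, y)."""
    return sigma(S, x) == sigma(S, y)

def apply_and(S):
    """Deterministic rule: from s: C1 and C2, add s: C1 and s: C2."""
    for con in S:
        if con[0] == "in" and isinstance(con[2], tuple) and con[2][0] == "and":
            s, (_, c1, c2) = con[1], con[2]
            new = {("in", s, c1), ("in", s, c2)}
            if not new <= S:
                return [S | new]
    return None  # rule not applicable

def apply_or(S):
    """Nondeterministic rule: from s: C1 or C2, branch on s: C1 / s: C2."""
    for con in S:
        if con[0] == "in" and isinstance(con[2], tuple) and con[2][0] == "or":
            s, (_, c1, c2) = con[1], con[2]
            if ("in", s, c1) not in S and ("in", s, c2) not in S:
                return [S | {("in", s, c1)}, S | {("in", s, c2)}]
    return None

def apply_exists(S, variables, fresh):
    """Generating rule: from s: exists R. C, add a fresh R-successor y with
    y: C, unless already satisfied or s is a variable with an S-equivalent
    predecessor in the ordering (an approximation of condition 5)."""
    for con in S:
        if con[0] == "in" and isinstance(con[2], tuple) and con[2][0] == "some":
            s, (_, role, c) = con[1], con[2]
            succs = {k[3] for k in S if k[0] == "edge" and k[1] == s}
            if any(("in", t, c) in S and
                   all(("edge", s, p, t) in S for p in role) for t in succs):
                continue                 # constraint already satisfied
            if s in variables and any(s_equivalent(S, w, s)
                                      for w in variables[:variables.index(s)]):
                continue                 # s is blocked: generate nothing
            y = fresh()
            return [S | {("edge", s, p, y) for p in role} | {("in", y, c)}]
    return None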
Proposition 3.1 (Invariance) Let S and S′ be constraint systems. Then:

1. If S′ is obtained from S by application of a deterministic rule, then S is satisfiable if and only if S′ is satisfiable.
2. If S′ is obtained from S by application of a nondeterministic rule, then S is satisfiable if S′ is satisfiable. Conversely, if S is satisfiable and a nondeterministic rule is applicable to an object s in S, then it can be applied to s in such a way that it yields a satisfiable constraint system.

Proof. The proof is mainly a rephrasing of typical soundness proofs for tableaux methods (e.g., Fitting, 1990, Lemma 6.3.2). The only non-standard constructors are number restrictions.

1. (⇐) Considering the deterministic rules, one can directly check that S is a subset of S′. So it is obvious that S is satisfiable if S′ is satisfiable.

(⇒) In order to show that S′ is satisfiable if this is the case for S, we consider in turn each possible deterministic rule application leading from S to S′. We assume that (I, α) satisfies S.

If the →⊓-rule is applied to s: C_1 ⊓ C_2 in S, then S′ = S ∪ {s: C_1, s: C_2}. Since (I, α) satisfies s: C_1 ⊓ C_2, (I, α) satisfies s: C_1 and s: C_2, and therefore S′.

If the →∀-rule is applied to s: ∀R.C, there must be an R-successor t of s in S such that S′ = S ∪ {t: C}. Since (I, α) satisfies S, it holds that (α(s), α(t)) ∈ R^I. Since (I, α) satisfies s: ∀R.C, it holds that α(t) ∈ C^I. So (I, α) satisfies t: C, and therefore S′.

If the →∀x-rule is applied to an object s because of the presence of ∀x.x: C in S, then S′ = S ∪ {s: C}. Since (I, α) satisfies S, it holds that C^I = Δ^I. Therefore α(s) ∈ C^I, and so (I, α) satisfies S′.

If the →∃-rule is applied to s: ∃R.C, then S′ = S ∪ {sP_1y, …, sP_ky, y: C}. Since (I, α) satisfies S, there exists a d such that (α(s), d) ∈ R^I and d ∈ C^I. We define the I-assignment α′ as α′(y) := d and α′(t) := α(t) for t ≠ y. It is easy to show that (I, α′) satisfies S′.

If the →≥-rule is applied to s: (≥ n R), then S′ = S ∪ {sP_1y_i, …, sP_ky_i | i ∈ 1..n} ∪ {y_i ≠ y_j | i, j ∈ 1..n, i ≠ j}. Since (I, α) satisfies S, there exist n distinct elements d_1, …, d_n ∈ Δ^I such that (α(s), d_i) ∈ R^I. We define the I-assignment α′ as α′(y_i) := d_i for i ∈ 1..n and α′(t) := α(t) for t ∉ {y_1, …, y_n}. It is easy to show that (I, α′) satisfies S′.

2. (⇐) Assume that S′ is satisfied by (I, α′). We show that S is also satisfiable. If S′ is obtained from S by application of the →⊔-rule, then S is a subset of S′ and therefore satisfied by (I, α′). If S′ is obtained from S by application of the →≤-rule to s: (≤ n R) in S, then there are y, t in S such that S′ = S[y/t]. We define the I-assignment α as α(y) := α′(t) and α(v) := α′(v) for every object v with v ≠ y. Obviously, (I, α) satisfies S.

(⇒) Now suppose that S is satisfied by (I, α) and a nondeterministic rule is applicable to an object s.

If the →⊔-rule is applicable to s: C_1 ⊔ C_2 then, since S is satisfiable, α(s) ∈ (C_1 ⊔ C_2)^I. It follows that either α(s) ∈ C_1^I or α(s) ∈ C_2^I (or both). Hence, the →⊔-rule can obviously be applied in a way such that (I, α) satisfies the resulting constraint system S′.

If the →≤-rule is applicable to s: (≤ n R) then, since (I, α) satisfies S, it holds that α(s) ∈ (≤ n R)^I, and therefore the set {d ∈ Δ^I | (α(s), d) ∈ R^I} has at most n elements. On the other hand, there are more than n R-successors of s in S, and for each R-successor t of s we have (α(s), α(t)) ∈ R^I. Thus, we can conclude by the Pigeonhole Principle (see e.g., Lewis & Papadimitriou, 1981, page 26) that there exist at least two R-successors t, t′ of s such that α(t) = α(t′). Since (I, α) satisfies S, the constraint t ≠ t′ is not in S. Therefore one of the two must be a variable, say t′ = y. Now obviously (I, α) satisfies S[y/t].

Given a constraint system S, more than one rule might be applicable to it. We define the following strategy for the application of rules:

1. apply a rule to a variable only if no rule is applicable to individuals;
2. apply a rule to a variable x only if no rule is applicable to a variable y such that y ≺ x;
3. apply generating rules only if no nongenerating rule is applicable.

The above strategy ensures that the variables are processed one at a time according to the ordering ≺.
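A search procedure built on such rules might be organized as in the next sketch, where point 3 of the strategy is realized explicitly and points 1 and 2 are left to the target-selection order inside the rule functions; all names are assumptions, and this is an illustration rather than the formal calculus.

def objects(S):
    """All objects (individuals and variables) mentioned in S."""
    objs = set()
    for con in S:
        if con[0] == "in":
            objs.add(con[1])
        elif con[0] == "edge":
            objs.update((con[1], con[3]))
        elif con[0] == "neq":
            objs.update((con[1], con[2]))
    return objs

def variables_in(S):
    """Variables in creation order, realizing the ordering (naming
    assumption: variables are called "x<i>")."""
    return sorted((o for o in objects(S) if o[0] == "x"),
                  key=lambda v: int(v[1:]))

def satisfiable(S0, nongenerating, generating, has_clash):
    """Depth-first search for a clash-free completion of S0 (sketch)."""
    counter = [0]
    def fresh():
        counter[0] += 1
        return "x%d" % counter[0]
    stack = [S0]
    while stack:
        S = stack.pop()
        if has_clash(S):
            continue                       # this branch is unsatisfiable
        successors = None
        for rule in nongenerating:         # strategy point 3
            successors = rule(S)
            if successors is not None:
                break
        if successors is None:
            for rule in generating:        # generating rules as a last resort
                successors = rule(S, variables_in(S), fresh)
                if successors is not None:
                    break
        if successors is None:
            return True                    # S is complete and clash-free
        stack.extend(successors)
    return False

Nondeterminism is simulated here by backtracking over the successor systems returned by the nondeterministic rules.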
From this point on, we assume that rules are always applied according to this strategy and that we always start with a constraint system S_Σ coming from an ALCNR-knowledge base Σ. The following lemma is a direct consequence of these assumptions.

Lemma 3.2 (Stability) Let S be a constraint system and x be a variable in S. Let a generating rule be applicable to x according to the strategy. Let S′ be any constraint system derivable from S by any sequence (possibly empty) of applications of rules. Then:

1. No rule is applicable in S′ to a variable y with y ≺ x.
2. σ(S, x) = σ(S′, x).
3. If y is a variable in S with y ≺ x, then y is a variable in S′, i.e., the variable y is not substituted by another variable or by a constant.

Proof. 1. By contradiction: Suppose S = S_0 → S_1 → … → S_n = S′, where each step applies one of the rules →⊔, →⊓, →∃, →∀, →≥, →≤, →∀x, and suppose a rule is applicable to a variable y such that y ≺ x in S′. Then there exists a minimal i, with i ≤ n, such that this is the case in S_i. Note that i ≠ 0; in fact, because of the strategy, if a rule is applicable to x in S, no rule is applicable to y in S. So no rule is applicable to any variable z such that z ≺ x in S_0, …, S_{i-1}. It follows that from S_{i-1} to S_i a rule is applied to x or to a variable w such that x ≺ w. By an exhaustive analysis of all rules we see that, whichever rule is applied from S_{i-1} to S_i, no new constraint of the form y: C or yRz can be added to S_{i-1}, and therefore no rule is applicable to y in S_i, contradicting the assumption.

2. By contradiction: Suppose σ(S, x) ≠ σ(S′, x). Call y the direct predecessor of x; then a rule must have been applied either to y or to x itself. Obviously we have y ≺ x, so the former case is impossible because of point 1. A case analysis shows that the only rules which can have been applied to x are the generating ones and the →∀ and →≤ rules. But these rules add new constraints only to the direct successors of x and not to x itself, and therefore do not change σ(·, x).

3. This follows from point 1 and the strategy.

Lemma 3.2 proves that for a variable x which has a direct successor, σ(·, x) is stable, i.e., it will not change because of subsequent applications of rules. In fact, if a variable has a direct successor, it means that a generating rule has been applied to it; therefore (Lemma 3.2.2), from that point on, σ(·, x) does not change.

A constraint system is complete if no propagation rule applies to it. A complete system derived from a system S is also called a completion of S. A clash is a constraint system having one of the following forms:

- {s: ⊥};
- {s: A, s: ¬A}, where A is a concept name;
- {s: (≤ n R)} ∪ {sP_1t_i, …, sP_kt_i | i ∈ 1..n+1} ∪ {t_i ≠ t_j | i, j ∈ 1..n+1, i ≠ j}, where R = P_1 ⊓ … ⊓ P_k.

A clash is evidently an unsatisfiable constraint system. For example, the last case represents the situation in which an object has an at-most restriction and a set of R-successors that cannot be identified (either because they are individuals or because they have been created by some at-least restrictions).

Any constraint system containing a clash is obviously unsatisfiable. The purpose of the calculus is to generate completions and look for the presence of clashes inside. If S is a completion of S_Σ and S contains no clash, we prove that it is always possible to construct a model for Σ on the basis of S.
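Clash detection is a direct check of the three forms above. A sketch over the same assumed encoding (with ⊥ represented by the string "bot", itself an assumption) could read:

from itertools import combinations

def has_clash(S):
    """Check the three clash forms (sketch)."""
    facts = {(con[1], con[2]) for con in S if con[0] == "in"}
    separated = {frozenset((con[1], con[2])) for con in S if con[0] == "neq"}
    for (s, c) in facts:
        if c == "bot":                                   # s: bottom
            return True
        if isinstance(c, tuple) and c[0] == "not" and (s, c[1]) in facts:
            return True                                  # s: A and s: not A
        if isinstance(c, tuple) and c[0] == "atmost":    # s: (<= n R) plus n+1
            _, n, role = c                               # pairwise separated
            succs = [t for t in {k[3] for k in S if k[0] == "edge" and k[1] == s}
                     if all(("edge", s, p, t) in S for p in role)]
            for group in combinations(succs, n + 1):
                if all(frozenset((t1, t2)) in separated
                       for t1, t2 in combinations(group, 2)):
                    return True
    return False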
Example 3.3 Consider the following knowledge base Σ = ⟨T, A⟩:

T = {Italian ⊑ ∃FRIEND.Italian}
A = {FRIEND(peter, susan), ∀FRIEND.¬Italian(peter), ∃FRIEND.Italian(susan)}

The corresponding constraint system S_Σ is:

S_Σ = {∀x.x: ¬Italian ⊔ ∃FRIEND.Italian, peterFRIENDsusan, peter: ∀FRIEND.¬Italian, susan: ∃FRIEND.Italian, peter ≠ susan}

A sequence of applications of the propagation rules to S_Σ is as follows:

S_1 = S_Σ ∪ {susan: ¬Italian} (→∀-rule)
S_2 = S_1 ∪ {peter: ¬Italian ⊔ ∃FRIEND.Italian} (→∀x-rule)
S_3 = S_2 ∪ {susan: ¬Italian ⊔ ∃FRIEND.Italian} (→∀x-rule)
S_4 = S_3 ∪ {peter: ¬Italian} (→⊔-rule)
S_5 = S_4 ∪ {susanFRIENDx, x: Italian} (→∃-rule)
S_6 = S_5 ∪ {x: ¬Italian ⊔ ∃FRIEND.Italian} (→∀x-rule)
S_7 = S_6 ∪ {x: ∃FRIEND.Italian} (→⊔-rule)
S_8 = S_7 ∪ {xFRIENDy, y: Italian} (→∃-rule)
S_9 = S_8 ∪ {y: ¬Italian ⊔ ∃FRIEND.Italian} (→∀x-rule)
S_10 = S_9 ∪ {y: ∃FRIEND.Italian} (→⊔-rule)

One can verify that S_10 is a complete clash-free constraint system. In particular, the →∃-rule is not applicable to y: since x ≈_{S_10} y, condition 5 is not satisfied. From S_10 one can build an interpretation I as follows (again, we give only the interpretation of concept and role names):

Δ^I = {peter, susan, x, y}; peter^I = peter; susan^I = susan; α(x) = x; α(y) = y;
Italian^I = {x, y}; FRIEND^I = {(peter, susan), (susan, x), (x, y), (y, y)}

It is easy to see that I is indeed a model for Σ.

In order to prove that it is always possible to obtain an interpretation from a complete clash-free constraint system, we need some additional notions. Let S be a constraint system and x, w variables in S. We call w a witness of x in S if the three following conditions hold:

1. x ≈_S w;
2. w ≺ x;
3. there is no variable z such that z ≺ w and z satisfies conditions 1 and 2, i.e., w is the least variable w.r.t. ≺ satisfying conditions 1 and 2.

We say x is blocked (by w) in S if x has a witness (w) in S. The following lemma states a property of witnesses.

Lemma 3.4 Let S be a constraint system, x a variable in S. If x is blocked, then: 1. x has no direct successor, and 2. x has exactly one witness.

Proof. 1. By contradiction: Suppose that x is blocked in S and xPy is in S. During the completion process leading to S, a generating rule must have been applied to x in a system S′. It follows from the definition of the rules that in S′, for every variable w ≺ x, we had x ≉_{S′} w. Now from Lemma 3.2 we know that for the constraint system S derivable from S′, and for every w ≺ x in S, we also have x ≉_S w. Hence there is no witness for x in S, contradicting the hypothesis that x is blocked.

2. This follows directly from condition 3 for a witness.

As a consequence of Lemma 3.4, in a constraint system S, if w_1 is a witness of x then w_1 cannot have a witness itself, since both the relation ≺ and S-equivalence are transitive. The uniqueness of the witness for a blocked variable is important for defining the following particular interpretation out of S.

Let S be a constraint system. We define the canonical interpretation I_S and the canonical I_S-assignment α_S as follows:

1. Δ^{I_S} := {s | s is an object in S};
2. α_S(s) := s for every object s in S;
3. A^{I_S} := {s | s: A ∈ S} for every concept name A;
4. for every role name P, P^{I_S} contains exactly the pairs (s, t) such that either (a) the constraint sPt is in S, or (b) s is a variable blocked by a witness w in S and the constraint wPt is in S.

We call a role-pair (s, t) of I_S explicit if it is obtained by case 4.(a), and implicit if it is obtained by case 4.(b).
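The construction of I_S is effective: given a complete clash-free S, one pass over the constraints suffices. A sketch, under the same encoding and naming assumptions as the earlier sketches, following points 1 to 4 above:

def sigma(S, s):
    """sigma(S, s) = {C | s: C in S}."""
    return frozenset(con[2] for con in S if con[0] == "in" and con[1] == s)

def witness(S, x, variables):
    """The least earlier variable S-equivalent to x, if any; `variables`
    is the list of variables in the creation order."""
    for w in variables:
        if w == x:
            return None
        if sigma(S, w) == sigma(S, x):
            return w
    return None

def canonical_interpretation(S, variables):
    """Domain, concept extensions and role extensions of I_S (sketch)."""
    domain = set()
    for con in S:
        if con[0] == "in":
            domain.add(con[1])
        elif con[0] == "edge":
            domain.update((con[1], con[3]))
        elif con[0] == "neq":
            domain.update((con[1], con[2]))
    concepts = {}                        # concept name -> extension
    for con in S:
        if con[0] == "in" and isinstance(con[2], str):
            concepts.setdefault(con[2], set()).add(con[1])
    roles = {}                           # role name -> set of pairs
    for con in S:
        if con[0] == "edge":             # case 4.(a): explicit role-pairs
            roles.setdefault(con[2], set()).add((con[1], con[3]))
    for x in variables:
        w = witness(S, x, variables)
        if w is not None:                # case 4.(b): x is blocked and
            for con in S:                # inherits the role-pairs of w
                if con[0] == "edge" and con[1] == w:
                    roles.setdefault(con[2], set()).add((x, con[3]))
    return domain, concepts, roles

By Lemma 3.4 a blocked variable has no direct successor, so the explicit and implicit cases never overlap.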
From Lemma 3.4 it is obvious that a role-pair cannot be both explicit and implicit. Moreover, if a variable has an implicit role-pair, then all its role-pairs are implicit and they all come from exactly one witness, as stated by the following lemma.

Lemma 3.5 Let S be a completion and x a variable in S. Let I_S be the canonical interpretation for S. If x has an implicit role-pair (x, y), then:

1. all role-pairs of x in I_S are implicit;
2. there is exactly one witness w of x in S such that, for all roles P in S and all P-role-pairs (x, y) of x, the constraint wPy is in S.

Proof. The first statement follows from Lemma 3.4 (point 1). The second statement follows from Lemma 3.4 (point 2) together with the definition of I_S.

We have now all the machinery needed to prove the main theorem of this subsection.

Theorem 3.6 Let S be a complete constraint system. If S contains no clash, then it is satisfiable.

Proof. Let I_S and α_S be the canonical interpretation and the canonical I_S-assignment for S. We prove that the pair (I_S, α_S) satisfies every constraint c in S. If c has the form sPt or s ≠ t, then (I_S, α_S) satisfies it by definition of I_S and α_S. Considering the →≥-rule and the →≤-rule, we see that a constraint of the form s ≠ s cannot be in S. If c has the form s: C, we show by induction on the structure of C that s ∈ C^{I_S}.

We first consider the base cases. If C is a concept name, then s ∈ C^{I_S} by definition of I_S. If C = ⊤, then obviously s ∈ ⊤^{I_S}. The case C = ⊥ cannot occur, since S is clash-free.

Next we analyze in turn each possible complex concept C.

If C is of the form ¬C_1, then C_1 is a concept name, since all concepts are simple. Then the constraint s: C_1 is not in S, since S is clash-free. Then s ∉ C_1^{I_S}, that is, s ∈ Δ^{I_S} \ C_1^{I_S}. Hence s ∈ (¬C_1)^{I_S}.

If C is of the form C_1 ⊓ C_2 then (since S is complete) s: C_1 is in S and s: C_2 is in S. By induction hypothesis, s ∈ C_1^{I_S} and s ∈ C_2^{I_S}. Hence s ∈ (C_1 ⊓ C_2)^{I_S}.

If C is of the form C_1 ⊔ C_2 then (since S is complete) either s: C_1 is in S or s: C_2 is in S. By induction hypothesis, either s ∈ C_1^{I_S} or s ∈ C_2^{I_S}. Hence s ∈ (C_1 ⊔ C_2)^{I_S}.

If C is of the form ∀R.D, we have to show that for all t with (s, t) ∈ R^{I_S} it holds that t ∈ D^{I_S}. If (s, t) ∈ R^{I_S}, then according to Lemma 3.5 two cases can occur. Either t is an R-successor of s in S, or s is blocked by a witness w in S and t is an R-successor of w in S. In the first case, t: D must also be in S, since S is complete. Then by induction hypothesis we have t ∈ D^{I_S}. In the second case, by definition of witness, w: ∀R.D is in S and then, because of the completeness of S, t: D must be in S. By induction hypothesis we have again t ∈ D^{I_S}.

If C is of the form ∃R.D, we have to show that there exists a t ∈ Δ^{I_S} with (s, t) ∈ R^{I_S} and t ∈ D^{I_S}. Since S is complete, either there is a t that is an R-successor of s in S and t: D is in S, or s is a variable blocked by a witness w in S. In the first case, by induction hypothesis and the definition of I_S, we have t ∈ D^{I_S} and (s, t) ∈ R^{I_S}. In the second case, w: ∃R.D is in S. Since w cannot be blocked and S is complete, there is a t that is an R-successor of w in S and t: D is in S. So by induction hypothesis we have t ∈ D^{I_S}, and by the definition of I_S we have (s, t) ∈ R^{I_S}.

If C is of the form (≤ n R), we show the goal by contradiction. Assume that s ∉ (≤ n R)^{I_S}. Then there exist at least n + 1 distinct objects t_1, …, t_{n+1} with (s, t_i) ∈ R^{I_S}, i ∈ 1..n+1. This means that, since R = P_1 ⊓ … ⊓ P_k, there are pairs (s, t_i) ∈ P_j^{I_S}, where i ∈ 1..n+1 and j ∈ 1..k. Then according to Lemma 3.5 one of the two following cases must occur. Either all sP_jt_i, for j ∈ 1..k and i ∈ 1..n+1, are in S, or there exists a witness w of s in S such that all wP_jt_i, for j ∈ 1..k and i ∈ 1..n+1, are in S. In the first case, the →≤-rule cannot be applicable, because of completeness. This means that all the t_i's are pairwise separated, i.e., that S contains the constraints t_i ≠ t_j, i, j ∈ 1..n+1, i ≠ j. This contradicts the fact that S is clash-free. And the second case leads to an analogous contradiction.

If C is of the form (≥ n R), we show the goal by contradiction. Assume that s ∉ (≥ n R)^{I_S}. Then there exist at most m < n (m possibly 0) distinct objects t_1, …, t_m with (s, t_i) ∈ R^{I_S}, i ∈ 1..m. We have to consider two cases. First case: s is not blocked in S. Since there are only m R-successors of s in S, the →≥-rule is applicable to s. This contradicts the fact that S is complete. Second case: s is blocked by a witness w in S. Since there are m R-successors of w in S, the →≥-rule is applicable to w. But this leads to the same contradiction.

If c has the form ∀x.x: D then, since S is complete, for each object t in S, t: D is in S and, by the previous cases, t ∈ D^{I_S}. Therefore, the pair (I_S, α_S) satisfies ∀x.x: D.

Finally, since (I_S, α_S) satisfies all constraints in S, (I_S, α_S) satisfies S.

Theorem 3.7 (Correctness) A constraint system S is satisfiable if and only if there exists at least one clash-free completion of S.

Proof. (⇐) Follows immediately from Theorem 3.6. (⇒) Clearly, a system containing a clash is unsatisfiable. If every completion of S is unsatisfiable, then, from Proposition 3.1, S is unsatisfiable." }, { "figure_ref": [], "heading": "Termination and complexity of the calculus", "publication_ref": [ "b41", "b47", "b50", "b59", "b31", "b26" ], "table_ref": [], "text": "Given a constraint system S, we call n_S the number of concepts appearing in S, including also all the concepts appearing as a substring of another concept. Notice that n_S is bounded by the length of the string expressing S.

Lemma 3.8 Let S be a constraint system and let S′ be derived from S by means of the propagation rules. In any set of variables in S′ including more than 2^{n_S} variables, there are at least two variables x, y such that x ≈_{S′} y.

Proof. Each constraint x: C ∈ S′ may contain only concepts of the constraint system S. Since there are n_S such concepts, given a variable x there cannot be more than 2^{n_S} different sets of constraints x: C in S′.

Lemma 3.9 Let S be a constraint system and let S′ be any constraint system derived from S by applying the propagation rules with the given strategy. Then in S′ there are at most 2^{n_S} non-blocked variables.

Proof. Suppose there are 2^{n_S} + 1 non-blocked variables. From Lemma 3.8, we know that in S′ there are at least two variables y_1, y_2 such that y_1 ≈_{S′} y_2. Obviously, either y_1 ≺ y_2 or y_2 ≺ y_1 holds; suppose that y_1 ≺ y_2. From the definitions of witness and blocked, either y_1 is a witness of y_2, or there exists a variable y_3 such that y_3 ≺ y_1 and y_3 is a witness of y_2. In both cases y_2 is blocked, contradicting the hypothesis.

Theorem 3.10 (Termination and space complexity) Let Σ be an ALCNR-knowledge base and let n be its size. Every completion of S_Σ is finite and its size is O(2^{4n}).

Proof. Let S be a completion of S_Σ. From Lemma 3.9 it follows that there are at most 2^n non-blocked variables in S. Therefore there are at most m · 2^n total variables in S, where m is the maximum number of direct successors of a variable in S. Observe that m is bounded by the number of ∃R.C concepts (at most n) plus the sum of all numbers appearing in number restrictions. Since these numbers are expressed in binary, their sum is bounded by 2^n. Hence, m ≤ 2^n + n. Since the number of individuals is also bounded by n, the total number of objects in S is at most m · (2^n + n) ≤ (2^n + n) · (2^n + n), that is, O(2^{2n}).

The number of different constraints of the form s: C, ∀x.x: C in which each object s can be involved is bounded by n, and each constraint has size linear in n. Hence, the total size of these constraints is bounded by n · n · 2^{2n}, that is, O(2^{3n}).

The number of constraints of the form sPt, s ≠ t is bounded by (2^{2n})^2 = 2^{4n}, and each constraint has constant size.

In conclusion, we have that the size of S is O(2^{4n}).
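Restated in LaTeX for readability, the chain of bounds just established is:

\begin{align*}
\#\{\text{variables}\} &\le m \cdot 2^{n}, \qquad m \le 2^{n} + n,\\
\#\{\text{objects}\} &\le m\,(2^{n}+n) \le (2^{n}+n)^{2} = O(2^{2n}),\\
\#\{s\colon C,\ \forall x.x\colon C \ \text{constraints}\} &\le n \cdot O(2^{2n}), \ \text{each of size } O(n), \ \text{hence } O(2^{3n}),\\
\#\{sPt,\ s \neq t \ \text{constraints}\} &\le \bigl(O(2^{2n})\bigr)^{2} = O(2^{4n}).
\end{align*}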
Notice that the above one is just a coarse upper bound, obtained for theoretical purposes. In practical cases we expect the actual size to be much smaller than that. For example, if the numbers involved in number restrictions were either expressed in unary notation or limited by a constant (the latter being a reasonable restriction in practical systems), then an argument analogous to the above one would lead to a bound of 2^{3n}.

Theorem 3.11 (Decidability) Given an ALCNR-knowledge base Σ, checking whether Σ is satisfiable is a decidable problem.

Proof. This follows from Theorems 3.7 and 3.10 and the fact that Σ is satisfiable if and only if S_Σ is satisfiable.

We can refine the above theorem by giving tighter bounds on the time required to decide satisfiability.

Theorem 3.12 (Time complexity) Given an ALCNR-knowledge base Σ, checking whether Σ is satisfiable can be done in nondeterministic exponential time.

Proof. In order to prove the claim, it is sufficient to show that each completion is obtained with an exponential number of applications of rules. Since the number of constraints of each completion is exponential (Theorem 3.10), and each rule but the →≤-rule adds new constraints to the constraint system, it follows that all such rules are applied at most an exponential number of times. Regarding the →≤-rule, it is applied for each object at most as many times as the number of its direct successors. Since such a number is at most exponential (if numbers are coded in binary) w.r.t. the size of the knowledge base, the claim follows.

A lower bound on the complexity of KB-satisfiability is obtained by exploiting previous results about the language ALC, which is the sublanguage of ALCNR that includes neither number restrictions nor role conjunction. We know from McAllester (1991), and (independently) from an observation by Nutt (1992), that KB-satisfiability of ALC-knowledge bases is EXPTIME-hard (see Garey & Johnson, 1979, page 183, for a definition), and hence it is hard for ALCNR-knowledge bases, too. Hence, we do not expect to find any algorithm solving the problem in polynomial space, unless PSPACE = EXPTIME. Therefore, we do not expect to substantially improve the space complexity of our calculus, which already works in exponential space. We now discuss possible improvements on time complexity. The proposed calculus works in nondeterministic exponential time, and hence improves on the one we proposed in (Buchheit, Donini, & Schaerf, 1993, Sec. 4), which works in deterministic double exponential time. The key improvement is that we showed that a KB has a model if and only if it has a model of exponential size. However, it may be argued that, as it is, the calculus cannot yet be turned into a practical procedure, since such a procedure would simply simulate nondeterminism by a second level of exponentiality, resulting in a double exponential time procedure. However, the different combinations of concepts are only exponentially many (this is just the cardinality of the powerset of the set of concepts). Hence, a double exponential time procedure wastes most of the time re-analyzing over and over objects with different names yet with the same σ(·, ·), in different constraint systems. This could be avoided if we allowed a variable to be blocked by a witness that is in a previously analyzed constraint system. This technique would be similar to the one used in (Pratt, 1978), and to the tree-automata technique used in (Vardi & Wolper, 1986), improving on simple tableaux methods for variants of propositional dynamic logics. Since our calculus considers only one constraint system at a time, a modification of the calculus would be necessary to accomplish this task in a formal way, which is outside the scope of this paper. The formal development of such a deterministic exponential time procedure will be a subject for future research.

Notice that, since the domain of the canonical interpretation I_S is always finite, we have also implicitly proved that ALCNR-knowledge bases have the finite model property, i.e., any satisfiable knowledge base has a finite model. This property has been extensively studied in modal logics (Hughes & Cresswell, 1984) and dynamic logics (Harel, 1984). In particular, a technique called filtration has been developed both to prove the finite model property and to build a finite model for a satisfiable formula. This technique allows one to build a finite model from an infinite one by grouping the worlds of a structure into equivalence classes, based on the set of formulae that are satisfied in each world. It is interesting to observe that our calculus, based on witnesses, can be considered as a variant of the filtration technique, where the equivalence classes are determined on the basis of our S-equivalence relation. However, because of number restrictions, variables that are S-equivalent cannot be grouped, since they might be separated (e.g., they might have been introduced by the same application of the →≥-rule). Nevertheless, they can have the same direct successors, as stated in point 4.(b) of the definition of canonical interpretation on page 124. This would correspond to grouping the variables of an infinite model in such a way that separations are preserved." }, { "figure_ref": [], "heading": "Relation to previous work", "publication_ref": [], "table_ref": [], "text": "In this section we discuss the relation of our paper to previous work about reasoning with inclusions. In particular, we first consider previously proposed reasoning techniques that deal with inclusions and terminological cycles; then we discuss the relation between inclusions and terminological cycles." }, { "figure_ref": [], "heading": "Reasoning Techniques", "publication_ref": [ "b3", "b2", "b45", "b46", "b56", "b15", "b16", "b2", "b2", "b46", "b3", "b3", "b3", "b56", "b15", "b16", "b29", "b23" ], "table_ref": [], "text": "As mentioned in the introduction, previous results were obtained by Baader et al. (1990), Baader (1990a, 1990b), Nebel (1990, 1991), Schild (1991) and Dionne et al. (1992, 1993). Nebel (1990, Chapter 5) considers the language TF, containing concept conjunction, universal quantification and number restrictions, and TBoxes containing (possibly cyclic) concept definitions, role definitions and disjointness axioms (stating that two concept names are disjoint). Nebel shows that subsumption of TF-concepts w.r.t. a TBox is decidable. However, the argument he uses is non-constructive: he shows that it is sufficient to consider finite interpretations of a size bounded by the size of the TBox in order to decide subsumption.

In (Baader, 1990b) the effect of the three types of semantics (descriptive, greatest fixpoint and least fixpoint semantics) for the language FL_0, containing concept conjunction and universal quantification, is described with the help of finite automata. Baader reduces subsumption of FL_0-concepts w.r.t. a TBox containing (possibly cyclic) definitions of the form A ≐ C (which he calls terminological axioms) to decision problems for finite automata. In particular, he shows that subsumption w.r.t. descriptive semantics can be decided in polynomial space using Büchi automata. Using results from (Baader, 1990b), in (Nebel, 1991) a characterization of the above subsumption problem w.r.t. descriptive semantics is given with the help of deterministic automata (whereas Büchi automata are nondeterministic). This also yields a PSPACE algorithm for deciding subsumption.

In (Baader et al., 1990) the attention is restricted to the language ALC. In particular, that paper considers the problem of checking the satisfiability of a single equation of the form C = ⊤, where C is an ALC-concept. This problem, called the universal satisfiability problem, is shown to be equivalent to checking the satisfiability of an ALC-TBox (see Proposition 4.1).

In (Baader, 1990a), an extension of ALC, called ALC_reg, is introduced, which supports a constructor to express the transitive closure of roles. By means of transitive closure of roles it is possible to replace cyclic inclusions of the form A ⊑ D with equivalent acyclic ones. The problem of checking the satisfiability of an ALC_reg-concept is solved in that paper. It is also shown that, using transitive closure, it is possible to reduce satisfiability of an ALC-concept w.r.t. an ALC-TBox T = {C_1 ⊑ D_1, …, C_n ⊑ D_n} to the concept satisfiability problem in ALC_reg (w.r.t. the empty TBox). Since the problem of concept satisfiability w.r.t. a TBox is trivially harder than checking the satisfiability of a TBox, that paper extends the result given in (Baader et al., 1990). The technique exploited in (Baader et al., 1990) and (Baader, 1990a) is based on the notion of concept tree. A concept tree is generated starting from a concept C in order to check its satisfiability (or universal satisfiability). The way a concept tree is generated from a concept C is similar in flavor to the way a complete constraint system is generated from the constraint system {x: C}. However, the extension of the concept tree method to deal with number restrictions and individuals in the knowledge base is neither obvious nor suggested in the cited papers; on the other hand, the extension of the calculus based on constraint systems is immediate, provided that the additional features have a counterpart in First Order Logic.

In (Schild, 1991) some results more general than those in (Baader, 1990a) are obtained by considering languages more expressive than ALC_reg and dealing with the concept satisfiability problem in such languages. The results are obtained by establishing a correspondence between concept languages and Propositional Dynamic Logics (PDL), and reducing the given problem to a satisfiability problem in PDL. Such an approach allows Schild to find several new results exploiting known results in the PDL framework. However, it cannot be used to deal with every concept language. In fact, the correspondence cannot be established when the language includes some concept constructors having no counterpart in PDL (e.g., number restrictions, or individuals in an ABox).

Recently, an algebraic approach to cycles has been proposed in (Dionne et al., 1992), in which (possibly cyclic) definitions are interpreted as determining an equivalence relation over the terms describing concepts. The existence and uniqueness of such an equivalence relation derives from Aczel's results on non-well-founded sets. In (Dionne et al., 1993) the same researchers prove that subsumption based on this approach is equivalent to subsumption in greatest fixpoint semantics. The language analyzed is a small fragment of the one used in the TKRS k-rep, and contains conjunction and existential-universal quantifications combined into one construct (hence it is similar to FL_0). The difficulty of extending these results lies in the fact that it is not clear how individuals can be interpreted in this algebraic setting. Moreover, we believe that constructive approaches like the algebraic one give counterintuitive results when applied to non-constructive features of concept languages, such as negation and number restrictions.

In conclusion, all these approaches, i.e., reduction to automata problems, concept trees, reduction to PDL, and algebraic semantics, deal only with TBoxes, and they do not seem to be suitable to deal also with ABoxes. On the other hand, the constraint system technique, even though it was conceived for TBox reasoning, can be easily extended to ABox reasoning, as also shown in (Hollunder, 1990; Baader & Hollunder, 1991; Donini et al., 1993)." }, { "figure_ref": [], "heading": "Inclusions versus Concept Definitions", "publication_ref": [ "b56", "b61", "b56", "b41", "b46" ], "table_ref": [], "text": "Now we compare the expressive power of TBoxes defined as a set of inclusions (as done in this paper) and TBoxes defined as a set of (possibly cyclic) concept introductions of the form A ⊑̇ D and A ≐ D.

Unlike (Baader, 1990a) and (Schild, 1991), we consider reasoning problems dealing with TBox and ABox together. Moreover, we use the descriptive semantics for the concept introductions, as we do for inclusions. The result we have obtained is that inclusion statements and concept introductions actually have the same expressive power. In detail, we show that the satisfiability of a knowledge base Σ = ⟨A, T⟩, where T is a set of inclusion statements, can be reduced to the satisfiability of a knowledge base Σ′ = ⟨A′, T′⟩ such that T′ is a set of concept introductions. The other direction, from concept introductions to inclusions, is trivial, since introductions of the form A ≐ D can be expressed by the pair of inclusions A ⊑ D and D ⊑ A, while a concept name specification A ⊑̇ D can be rewritten as the inclusion A ⊑ D (as already mentioned in Section 2). As a notation, given a TBox T = {C_1 ⊑ D_1, …, C_n ⊑ D_n}, we define the concept C_T as C_T = (¬C_1 ⊔ D_1) ⊓ … ⊓ (¬C_n ⊔ D_n).
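The construction of C_T, and of the knowledge base Σ′ used in Theorem 4.2 below, is directly implementable. A sketch under the tuple encoding of the earlier sketches (all names are assumptions; the plain-pair representation of the introduction conflates introductions with inclusions, which is harmless here given the logically equivalent form used in the proof of Theorem 4.2):

def c_t(tbox):
    """C_T = (not C1 or D1) and ... and (not Cn or Dn); assumes n >= 1."""
    disjuncts = [("or", ("not", c), d) for (c, d) in tbox]
    result = disjuncts[0]
    for dj in disjuncts[1:]:
        result = ("and", result, dj)
    return result

def to_single_introduction(tbox, abox, individuals, role_names, a="A_fresh"):
    """Sigma' = <A', T'>: T' is the single introduction of a fresh name A
    with right-hand side C_T and all P_i. A; A' adds A(b) for every
    individual b (sketch)."""
    rhs = c_t(tbox)
    for p in role_names:
        rhs = ("and", rhs, ("all", frozenset([p]), a))
    new_tbox = {(a, rhs)}
    new_abox = set(abox) | {("concept", a, b) for b in individuals}
    return new_abox, new_tbox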
As pointed out in (Baader, 1990a) for ALC, an interpretation satisfies a TBox T if and only if it satisfies the equation C_T = ⊤. This result easily extends to ALCNR, as stated in the following proposition.

Proposition 4.1 Given an ALCNR-TBox T = {C_1 ⊑ D_1, …, C_n ⊑ D_n}, an interpretation I satisfies T if and only if it satisfies the equation C_T = ⊤.

Proof. An interpretation I satisfies an inclusion C ⊑ D if and only if it satisfies the equation ¬C ⊔ D = ⊤; I satisfies the set of equations ¬C_1 ⊔ D_1 = ⊤, …, ¬C_n ⊔ D_n = ⊤ if and only if I satisfies (¬C_1 ⊔ D_1) ⊓ … ⊓ (¬C_n ⊔ D_n) = ⊤. The claim follows.

Given a knowledge base Σ = ⟨A, T⟩ and a concept name A not appearing in Σ, we define the knowledge base Σ′ = ⟨A′, T′⟩ as follows:

A′ = A ∪ {A(b) | b is an individual in Σ}
T′ = {A ⊑̇ C_T ⊓ ∀P_1.A ⊓ … ⊓ ∀P_n.A}

where P_1, P_2, …, P_n are all the role names appearing in Σ. Note that T′ has a single introduction, which could also be thought of as one primitive concept specification.

Theorem 4.2 Σ = ⟨A, T⟩ is satisfiable if and only if Σ′ = ⟨A′, T′⟩ is satisfiable.

Proof. In order to simplify the machinery of the proof, we will use for T′ the following (logically equivalent) form: T′ = {A ⊑ C_T, A ⊑ ∀P_1.A, …, A ⊑ ∀P_n.A}. (Note that we use the symbol '⊑' instead of '⊑̇' because the concept name A now appears as the left-hand side of several statements; we must consider these statements as inclusions.)

(⇒) Suppose Σ = ⟨A, T⟩ satisfiable. From Theorem 3.7, there exists a complete constraint system S without clash, which defines a canonical interpretation I_S that is a model of Σ. Define the constraint system S′ as follows: S′ = S ∪ {w: A | w is an object in S}, and call I_{S′} the canonical interpretation associated to S′. We prove that I_{S′} is a model of Σ′.

First observe that every assertion in A is satisfied by I_{S′}, since I_{S′} is equal to I_S except for the interpretation of A, and A does not appear in A. Therefore, every assertion in A′ is also satisfied by I_{S′}, either because it is an assertion of A, or (if it is an assertion of the form A(b)) by definition of S′.

Regarding T′, note that by definition of S′ we have A^{I_{S′}} = Δ^{I_{S′}} = Δ^{I_S}; therefore both sides of the inclusions of the form A ⊑ ∀P_i.A (i = 1, …, n) are interpreted as Δ^{I_{S′}}, hence they are satisfied by I_{S′}. Since A does not appear in C_T, we have that (C_T)^{I_{S′}} = (C_T)^{I_S}. Moreover, since I_S satisfies T, we also have, by Proposition 4.1, that (C_T)^{I_S} = Δ^{I_S}; therefore (C_T)^{I_{S′}} = (C_T)^{I_S} = Δ^{I_S} = Δ^{I_{S′}}. It follows that both sides of the inclusion A ⊑ C_T are also interpreted as Δ^{I_{S′}}. In conclusion, I_{S′} satisfies T′.

(⇐) Suppose Σ′ = ⟨A′, T′⟩ satisfiable. Again, because of Theorem 3.7, there exists a complete constraint system S′ without clash, which defines a canonical interpretation I_{S′} that is a model of Σ′. We show that I_{S′} is also a model of Σ.

First of all, the assertions in A are satisfied because A ⊆ A′, and I_{S′} satisfies every assertion in A′. To prove that I_{S′} satisfies T, we first prove the following equation:

A^{I_{S′}} = Δ^{I_{S′}}    (2)

Equation 2 is proved by showing that, for every object s ∈ Δ^{I_{S′}}, s is in A^{I_{S′}}. In order to do that, observe a general property of constraint systems: every variable in S′ is a successor of an individual. This comes from the definition of the generating rules, which add variables to the constraint system only as direct successors of existing objects, and at the beginning S_{Σ′} contains only individuals.

Then, Equation 2 is proved by observing the following three facts:

1. for every individual b in Δ^{I_{S′}}, b ∈ A^{I_{S′}};
2. if an object s is in A^{I_{S′}} then, because I_{S′} satisfies the inclusions A^{I_{S′}} ⊆ (∀P_1.A)^{I_{S′}}, …, A^{I_{S′}} ⊆ (∀P_n.A)^{I_{S′}}, every direct successor of s is in A^{I_{S′}};
3. the successor relation is closed under the direct successor relation.

From the Fundamental Theorem on Induction (see e.g., Wand, 1980, page 41) we conclude that every object s of Δ^{I_{S′}} is in A^{I_{S′}}. This proves that Equation 2 holds.

From Equation 2, and the fact that I_{S′} satisfies the inclusion A^{I_{S′}} ⊆ (C_T)^{I_{S′}}, we derive that (C_T)^{I_{S′}} = Δ^{I_{S′}}, that is, I_{S′} satisfies the equation C_T = ⊤. Hence, from Proposition 4.1, I_{S′} satisfies T, and this completes the proof of the theorem.

The machinery present in this proof is not new. In fact, realizing that the inclusions A ⊑ ∀P_1.A, …, A ⊑ ∀P_n.A simulate a transitive closure on the roles P_1, …, P_n, one can recognize similarities with the proofs given by Schild (1991) and Baader (1990a). The difference is that their proofs rely on the notion of connected model (Baader uses the equivalent notion of rooted model). In contrast, the models we obtain are not connected when the individuals in the knowledge base are not. What we exploit is the weaker property that every variable in the model is a successor of an individual.

Note that the above reduction strongly relies on the fact that disjunction '⊔' and complement '¬' are within the language. In fact, disjunction and complement are necessary in order to express all the inclusions of a TBox T inside the concept C_T. Therefore, the proof holds for ALC-knowledge bases, but does not hold for TKRSs not allowing for these constructors of concepts (e.g., back).

Furthermore, for the language FL_0 introduced in Section 4.1, the opposite result holds. In fact, McAllester (1991) proves that computing subsumption w.r.t. a set of inclusions is EXPTIME-hard, even in the small language FL_0. Conversely, Nebel (1991) proves that subsumption w.r.t. a set of cyclic definitions in FL_0 can be done in PSPACE. Combining the two results, we can conclude that for FL_0 subsumption w.r.t. a set of inclusions and subsumption w.r.t. a set of definitions are in different complexity classes; hence (assuming EXPTIME ≠ PSPACE) inclusion statements are strictly more expressive than concept definitions in FL_0.

It is still open whether inclusions and definitions are equivalent in languages whose expressivity lies between FL_0 and ALC." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b7", "b37", "b56", "b4" ], "table_ref": [], "text": "In this paper we have proved the decidability of the main inference services of a TKRS based on the concept language ALCNR. We believe that this result is not only of theoretical importance, but bears some impact on existing TKRSs, because a complete procedure can be easily devised from the calculus provided in Section 3. From this procedure, one can build more efficient (but still complete) ones, as described at the end of Section 3.2, and also by applying standard optimization techniques such as those described in (Baader, Hollunder, Nebel, Profitlich, & Franconi, 1992). An optimized procedure can perform well for small sublanguages where reasoning is tractable, while still being complete when solving more complex tasks. However, such a complete procedure will still take exponential time and space in the worst case, and it may be asked what its practical applicability could be. We comment on this point in the following.

Firstly, a complete procedure (possibly optimized) offers a benchmark for comparing incomplete procedures, not only in terms of performance, but also in terms of missed inferences. Let us illustrate this point in detail by providing a blatant paradox: consider the mostly incomplete constant-time procedure answering always "No" to any check. Obviously this useless procedure outperforms any other one, if missed inferences are not taken into account. This paradox shows that incomplete procedures can be meaningfully compared only if missed inferences are considered. But to recognize missed inferences over large examples, one needs exactly a complete procedure (even if not an efficient one) like ours. We believe that a fair detection of missed inferences would be of great help even when the satisfaction of end users is the primary criterion for judging incomplete procedures.

Secondly, a complete procedure can be used for "anytime classification", as proposed in (MacGregor, 1992). The idea is to use a fast but incomplete algorithm as a first step in analyzing the input knowledge, and then do more reasoning in the background. In the cited paper, resolution-based theorem provers are proposed for performing this background reasoning. We argue that any specialized complete procedure will perform better than a general theorem prover. For instance, theorem provers are usually not specifically designed to deal with filtration techniques.

Moreover, our calculus can be easily adapted to deal with rules. As outlined in the introduction, rules are often used in practical TKRSs. Rules behave like one-way concept inclusions (no contrapositive is allowed) and they are applied only to known individuals. Our result shows that rules in ALCNR can be applied also to unknown individuals (our variables in a constraint system) without endangering decidability. This result is to be compared with the negative result in (Baader & Hollunder, 1992), where it is shown that subsumption becomes undecidable if rules are applied to unknown individuals in classic.

Finally, the calculus provides a new way of building incomplete procedures, by modifying some of the propagation rules. Since the rules build up a model, modifications to them have a semantical counterpart, which gives a precise account of the incomplete procedures obtained. For example, one could limit the size of the canonical model by a polynomial in the size of the KB. Semantically, this would mean considering only "small" models, which is reasonable when the intended models of the KB are not much bigger than the KB itself. We believe that this way of designing incomplete procedures "from above", i.e., starting with the complete set of inferences and weakening it, is dual to the way incomplete procedures have been realized so far "from below", i.e., starting with already incomplete inferences and adding inference power by need.

Further research is still needed to address problems issuing from practical systems. For example, to completely express role restrictions inside number restrictions, qualified number restrictions (Hollunder & Baader, 1991) should be taken into account. Also, the language resulting from the addition of enumerated sets (called one-of in classic) and role fillers to ALCNR is still to be studied, although it does not seem to endanger the filtration method we used. Instead, a different method might be necessary if inverse roles are added to ALCNR, since the finite model property is lost (as shown in Schild, 1991). Finally, the addition of concrete domains (Baader & Hanschke, 1991) remains open."
}, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Maurizio Lenzerini for the inspiration of this work, as well as for several discussions that contributed to the paper. Werner Nutt pointed out to us the observation mentioned at the end of Section 3, and we thank him and Franz Baader for helpful comments on earlier drafts. We thank also the anonymous reviewers, whose stimulating comments helped us in improving on the submitted version.
The research was partly done while the first author was visiting the Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza". The third author also acknowledges Yoav Shoham for his hospitality at the Computer Science Department of Stanford University, while the author was developing part of this research.
This work has been supported by the ESPRIT Basic Research Action N.6810 (COMPULOG 2) and by the Progetto Finalizzato Sistemi Informatici e Calcolo Parallelo of the CNR (Italian Research Council), LdR "Ibridi"." } ]
[ { "authors": "J Abrial", "journal": "North-Holland Publ. Co", "ref_id": "b0", "title": "Data semantics", "year": "1974" },
{ "authors": "F Baader", "journal": "", "ref_id": "b1", "title": "Augmenting concept languages by transitive closure of roles: An alternative to terminological cycles", "year": "1990" },
{ "authors": "F Baader", "journal": "", "ref_id": "b2", "title": "Terminological cycles in KL-ONE-based knowledge representation languages", "year": "1990" },
{ "authors": "F Baader; H.-J Bürckert; B Hollunder; W Nutt; J H Siekmann", "journal": "Springer-Verlag", "ref_id": "b3", "title": "Concept logics", "year": "1990" },
{ "authors": "F Baader; P Hanschke", "journal": "", "ref_id": "b4", "title": "A schema for integrating concrete domains into concept languages", "year": "1991" },
{ "authors": "F Baader; B Hollunder", "journal": "Springer-Verlag", "ref_id": "b5", "title": "A terminological knowledge representation system with complete inference algorithm", "year": "1991" },
{ "authors": "F Baader; B Hollunder", "journal": "Morgan Kaufmann", "ref_id": "b6", "title": "Embedding defaults into terminological knowledge representation formalisms", "year": "1992" },
{ "authors": "F Baader; B Hollunder; B Nebel; H.-J Profitlich; E Franconi", "journal": "Morgan Kaufmann", "ref_id": "b7", "title": "An empirical analysis of optimization techniques for terminological representation systems", "year": "1992" },
{ "authors": "H W Beck; S K Gala; S B Navathe", "journal": "", "ref_id": "b8", "title": "Classification as a query processing technique in the CANDIDE semantic data model", "year": "1989" },
{ "authors": "A Borgida; R J Brachman; D L McGuinness; L Resnick", "journal": "", "ref_id": "b9", "title": "CLASSIC: A structural data model for objects", "year": "1989" },
{ "authors": "R J Brachman; H J Levesque", "journal": "", "ref_id": "b10", "title": "The tractability of subsumption in frame-based description languages", "year": "1984" },
{ "authors": "R J Brachman; V Pigman Gilbert; H J Levesque", "journal": "", "ref_id": "b11", "title": "An essential hybrid reasoning system: Knowledge and symbol level accounts in KRYPTON", "year": "1985" },
{ "authors": "R J Brachman; J G Schmolze", "journal": "Cognitive Science", "ref_id": "b12", "title": "An overview of the KL-ONE knowledge representation system", "year": "1985" },
{ "authors": "M Buchheit; F M Donini; A Schaerf", "journal": "", "ref_id": "b13", "title": "Decidable reasoning in terminological knowledge representation systems", "year": "1993" },
{ "authors": "T Catarci; M Lenzerini", "journal": "Journal of Intelligent and Cooperative Information Systems", "ref_id": "b14", "title": "Representing and using interschema knowledge in cooperative information systems", "year": "1993" },
{ "authors": "R Dionne; E Mays; F J Oles", "journal": "AAAI Press/The MIT Press", "ref_id": "b15", "title": "A non-well-founded approach to terminological cycles", "year": "1992" },
{ "authors": "R Dionne; E Mays; F J Oles", "journal": "Morgan Kaufmann", "ref_id": "b16", "title": "The equivalence of model theoretic and structural subsumption in description logics", "year": "1993" },
{ "authors": "F M Donini; B Hollunder; M Lenzerini; A Marchetti Spaccamela; D Nardi; W Nutt", "journal": "Artificial Intelligence", "ref_id": "b18", "title": "The complexity of existential quantification in concept languages", "year": "1992" },
{ "authors": "F M Donini; M Lenzerini; D Nardi; W Nutt", "journal": "Morgan Kaufmann", "ref_id": "b19", "title": "The complexity of concept languages", "year": "1991" },
{ "authors": "F M Donini; M Lenzerini; D Nardi; W Nutt", "journal": "", "ref_id": "b21", "title": "Tractable concept languages", "year": "1991" },
{ "authors": "F M Donini; M Lenzerini; D Nardi; A Schaerf", "journal": "Springer-Verlag", "ref_id": "b22", "title": "A hybrid system integrating datalog and concept languages", "year": "1991" },
{ "authors": "F M Donini; M Lenzerini; D Nardi; A Schaerf", "journal": "Journal of Logic and Computation", "ref_id": "b23", "title": "Deduction in concept languages: From subsumption to instance checking", "year": "1993" },
{ "authors": "M Fitting", "journal": "Springer-Verlag", "ref_id": "b24", "title": "First-Order Logic and Automated Theorem Proving", "year": "1990" },
{ "authors": "M Garey; D Johnson", "journal": "W.H. Freeman and Company", "ref_id": "b25", "title": "Computers and Intractability: A Guide to NP-Completeness", "year": "1979" },
{ "authors": "D Harel", "journal": "Handbook of Philosophical Logic, D. Reidel", "ref_id": "b26", "title": "Dynamic logic", "year": "1984" },
{ "authors": "J Heinsohn; D Kudenko; B Nebel; H.-J Profitlich", "journal": "AAAI Press/The MIT Press", "ref_id": "b28", "title": "An empirical analysis of terminological representation systems", "year": "1992" },
{ "authors": "B Hollunder", "journal": "Springer-Verlag", "ref_id": "b29", "title": "Hybrid inferences in KL-ONE-based knowledge representation systems", "year": "1990" },
{ "authors": "B Hollunder; F Baader", "journal": "", "ref_id": "b30", "title": "Qualifying number restrictions in concept languages", "year": "1991" },
{ "authors": "G E Hughes; M J Cresswell", "journal": "", "ref_id": "b31", "title": "A Companion to Modal Logic", "year": "1984" },
{ "authors": "T S Kaczmarek; R Bates; G Robins", "journal": "", "ref_id": "b32", "title": "Recent developments in NIKL", "year": "1986" },
{ "authors": "M Lenzerini; A Schaerf", "journal": "", "ref_id": "b33", "title": "Concept languages as query languages", "year": "1991" },
{ "authors": "H J Levesque", "journal": "Artificial Intelligence", "ref_id": "b34", "title": "Foundations of a functional approach to knowledge representation", "year": "1984" },
{ "authors": "H R Lewis; C H Papadimitriou", "journal": "Prentice-Hall", "ref_id": "b35", "title": "Elements of the Theory of Computation", "year": "1981" },
{ "authors": "R MacGregor", "journal": "SIGART Bulletin", "ref_id": "b36", "title": "Inside the LOOM description classifier", "year": "1991" },
{ "authors": "R MacGregor", "journal": "", "ref_id": "b37", "title": "What's needed to make a description logic a good KR citizen", "year": "1992" },
{ "authors": "R MacGregor; R Bates", "journal": "", "ref_id": "b38", "title": "The Loom knowledge representation language", "year": "1987" },
{ "authors": "R MacGregor; D Brill", "journal": "AAAI Press/The MIT Press", "ref_id": "b39", "title": "Recognition algorithms for the LOOM classifier", "year": "1992" },
{ "authors": "E Mays; R Dionne; R Weida", "journal": "SIGART Bulletin", "ref_id": "b40", "title": "K-REP system overview", "year": "1991" },
{ "authors": "D McAllester", "journal": "", "ref_id": "b41", "title": "", "year": "1991" },
{ "authors": "D L McGuinness", "journal": "", "ref_id": "b42", "title": "Making description logic based knowledge representation systems more usable", "year": "1992" },
{ "authors": "J Mylopoulos; P Bernstein; E Wong", "journal": "ACM Trans. on Database Syst", "ref_id": "b43", "title": "A language facility for designing database-intensive applications", "year": "1980" },
{ "authors": "B Nebel", "journal": "Artificial Intelligence", "ref_id": "b44", "title": "Computational complexity of terminological reasoning in BACK", "year": "1988" },
{ "authors": "B Nebel", "journal": "Springer-Verlag", "ref_id": "b45", "title": "Reasoning and Revision in Hybrid Representation Systems", "year": "1990" },
{ "authors": "B Nebel", "journal": "Morgan Kaufmann", "ref_id": "b46", "title": "Terminological cycles: Semantics and computational properties", "year": "1991" },
{ "authors": "W Nutt", "journal": "", "ref_id": "b47", "title": "", "year": "1992" },
{ "authors": "P F Patel-Schneider", "journal": "", "ref_id": "b48", "title": "Small can be beautiful in knowledge representation", "year": "1984" },
{ "authors": "P Patel-Schneider", "journal": "Artificial Intelligence", "ref_id": "b49", "title": "Undecidability of subsumption in NIKL", "year": "1989" },
{ "authors": "V R Pratt", "journal": "", "ref_id": "b50", "title": "A practical decision method for propositional dynamic logic", "year": "1978" },
{ "authors": "J Quantz; C Kindermann", "journal": "", "ref_id": "b51", "title": "Implementation of the BACK system version 4", "year": "1990" },
{ "authors": "C Rich", "journal": "", "ref_id": "b52", "title": "SIGART bulletin. Special issue on implemented knowledge representation and reasoning systems", "year": "1991" },
{ "authors": "A Schaerf", "journal": "", "ref_id": "b53", "title": "On the complexity of the instance checking problem in concept languages with existential quantification", "year": "1993" },
{ "authors": "A Schaerf", "journal": "", "ref_id": "b54", "title": "Reasoning with individuals in concept languages", "year": "1993" },
{ "authors": "K Schild", "journal": "", "ref_id": "b55", "title": "Undecidability of subsumption in U", "year": "1988" },
{ "authors": "K Schild", "journal": "", "ref_id": "b56", "title": "A correspondence theory for terminological logics: Preliminary report", "year": "1991" },
{ "authors": "M Schmidt-Schauß", "journal": "Morgan Kaufmann", "ref_id": "b57", "title": "Subsumption in KL-ONE is undecidable", "year": "1989" },
{ "authors": "M Schmidt-Schauß; G Smolka", "journal": "Artificial Intelligence", "ref_id": "b58", "title": "Attributive concept descriptions with complements", "year": "1991" },
{ "authors": "M Vardi; P Wolper", "journal": "Journal of Computer and System Sciences", "ref_id": "b59", "title": "Automata-theoretic techniques for modal logics of programs", "year": "1986" },
{ "authors": "M Vilain", "journal": "", "ref_id": "b60", "title": "Deduction as parsing: Tractable classification in the KL-ONE framework", "year": "1991" },
{ "authors": "M Wand", "journal": "North-Holland Publ. Co", "ref_id": "b61", "title": "Induction, Recursion, and Programming", "year": "1980" },
{ "authors": "W A Woods; J G Schmolze", "journal": "Pergamon Press", "ref_id": "b62", "title": "The KL-ONE family", "year": "1992" } ]
[ { "formula_coordinates": [ 5, 157.2, 155.76, 297.36, 139.66 ], "formula_id": "formula_0", "formula_text": "C, D → A | (concept name) ⊤ | (top concept) ⊥ | (bottom concept) (C ⊓ D) | (conjunction) (C ⊔ D) | (disjunction) ¬C | (complement) ∀R.C | (universal quantification) ∃R.C | (existential quantification) (≥ n R) | (≤ n R) (number restrictions) R → P_1 ⊓ … ⊓ P_k (role conjunction)" }, { "formula_coordinates": [ 5, 90, 436.92, 432.24, 195.8 ], "formula_id": "formula_1", "formula_text": "⊤^I = Δ^I; ⊥^I = ∅; (C ⊓ D)^I = C^I ∩ D^I; (C ⊔ D)^I = C^I ∪ D^I; (1) (¬C)^I = Δ^I \ C^I; (∀R.C)^I = {d_1 ∈ Δ^I | ∀d_2: (d_1, d_2) ∈ R^I → d_2 ∈ C^I}; (∃R.C)^I = {d_1 ∈ Δ^I | ∃d_2: (d_1, d_2) ∈ R^I ∧ d_2 ∈ C^I}; (≥ n R)^I = {d_1 ∈ Δ^I | #{d_2 | (d_1, d_2) ∈ R^I} ≥ n}; (≤ n R)^I = {d_1 ∈ Δ^I | #{d_2 | (d_1, d_2) ∈ R^I} ≤ n}; (P_1 ⊓ … ⊓ P_k)^I = P_1^I ∩ … ∩ P_k^I" }, { "formula_coordinates": [ 8, 107.04, 568.8, 317.04, 71.52 ], "formula_id": "formula_2", "formula_text": "T = {∃TEACHES.Course ⊑ (Student ⊓ ∃DEGREE.BS) ⊔ Prof, Prof ⊑ ∃DEGREE.MS, ∃DEGREE.MS ⊑ ∃DEGREE.BS, MS ⊓ BS ⊑ ⊥} A = {TEACHES(john, cs156), (≤ 1 DEGREE)(john), Course(cs156)}" }, { "formula_coordinates": [ 23, 276.96, 600.36, 245.28, 18.28 ], "formula_id": "formula_3", "formula_text": "A^{I_{S'}} = Δ^{I_{S'}} (2)" } ]
Decidable Reasoning in Terminological Knowledge Representation Systems
Terminological knowledge representation systems (TKRSs) are tools for designing and using knowledge bases that make use of terminological languages (or concept languages). We analyze, from a theoretical point of view, a TKRS whose capabilities go beyond those of presently available TKRSs. The new features studied, often required in practical applications, can be summarized in three main points. First, we consider a highly expressive terminological language, called ALCNR, including general complements of concepts, number restrictions and role conjunction. Second, we allow the expression of inclusion statements between general concepts, with terminological cycles as a particular case. Third, we prove the decidability of a number of desirable TKRS deduction services (such as satisfiability, subsumption and instance checking) through a sound, complete and terminating calculus for reasoning in ALCNR knowledge bases. Our calculus extends the general technique of constraint systems. As a byproduct of the proof, we also obtain the result that inclusion statements in ALCNR can be simulated by terminological cycles, if descriptive semantics is adopted.
Martin Buchheit; Francesco M Donini; Andrea Schaerf
[ { "figure_caption": "Before looking at the technical details of the proof, let us consider an example of application of the calculus for checking satis ability.", "figure_data": "peter: 8FRIEND.:Italian; susan: 9FRIEND.Italian peter 6 : = susangExample 3.3 Consider the following knowledge base = hT ; Ai:T = fItalian v 9FRIEND.ItaliangA = fFRIEND(peter; susan); 8FRIEND.:Italian(peter); 9FRIEND.Italian(susan)gThe corresponding constraint system S is:S = f8x.x: :Italian t 9FRIEND.Italian;peterFRIENDsusan;", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b20", "b17", "b35", "b25", "b7", "b24" ], "table_ref": [], "text": "Autonomous agents, such as mobile robots, typically operate in dynamic and uncertain environments. Such environments can be sensed only imperfectly, eects on them are not always completely predictable, and they may be subject to changes not under the agent's control. Designing agents to operate in these environments has presented challenges to the standard methods of articial intelligence, which are based on explicit declarative representations and reasoning processes. Prominent among the alternative approaches are the so-called behavior-based, situated, and animat methods (Brooks, 1986;Maes, 1989;Kaelbling & Rosenschein, 1990;Wilson, 1991), which convert sensory inputs into actions in a much more direct fashion than do AI systems based on representation and reasoning. Many of these alternative approaches share with control theory the central notion that continuous feedback from the environment is a necessary component of eective action.\nPerhaps it is relatively easier for control theorists than it is for computer scientists to deal with continuous feedback because control theorists are accustomed to thinking of their controlling mechanisms as composed of analog electrical circuits or other physical systems rather than as automata with discrete read-compute-write cycles. The notions of goal-seeking servo-mechanisms, homeostasis, feedback, ltering, and stability|so essential to control in dynamic environments|were all developed with analog circuitry in mind. Circuits, by their nature, are continously responsive to their inputs.\nIn contrast, some of the central ideas of computer science, namely sequences, events, discrete actions, and subroutines, seem at odds with the notion of continuous feedback. For example, in conventional programming when one program calls another, the calling program is suspended until the called program returns control. This feature is awkward in applications in which the called program might encounter unexpected environmental circumstances with which it was not designed to cope. In such cases, the calling program can regain control only through interrupts explicitly provided by the programmer.\nTo be sure, there have been attempts to blend control theory and computer science. For example, the work of Ramadge and Wonham (Ramadge & Wonham, 1989) on discrete-event systems has used the computer science notions of events, grammars, and discrete states to study the control of processes for which those ideas are appropriate. A book by Dean and Wellman (Dean & Wellman, 1991) focusses on the overlap between control theory and articial intelligence. But there has been little eort to import fundamental control-theory ideas into computer science. That is precisely what I set out to do in this paper.\nI propose a computational system that works dierently than do conventional ones. The formalism has what I call circuit semantics (Nilsson, 1992); program execution produces (at least conceptually) electrical circuits, and it is these circuits that are used for control. While importing the control-theory concept of continuous feedback, I nevertheless want to retain useful ideas of computer science. My control programs will have parameters that can be bound at run time and passed to subordinate routines. They can have a hierarchical organization, and they can be recursive. 
In contrast with some of the behavior-based approaches, I want the programs to be responsive to stored models of the environment as well as to their immediate sensory inputs.
The presentation of these ideas will be somewhat informal, in line with my belief that formalization is best done after a certain amount of experience has been obtained. Although preliminary experiments indicate that the formalism works quite well, more work remains to be done to establish its place in agent control." }, { "figure_ref": [], "heading": "Teleo-Reactive Sequences", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Condition-Action Rules", "publication_ref": [ "b22", "b30" ], "table_ref": [], "text": "A teleo-reactive (T-R) sequence is an agent control program that directs the agent toward a goal (hence teleo) in a manner that takes into account changing environmental circumstances (hence reactive). In its simplest form, it consists of an ordered set of production rules:
K_1 \rightarrow a_1, K_2 \rightarrow a_2, \ldots, K_i \rightarrow a_i, \ldots, K_m \rightarrow a_m
The K_i are conditions (on sensory inputs and on a model of the world), and the a_i are actions (on the world or which change the model). A T-R sequence is interpreted in a manner roughly similar to the way in which some production systems are interpreted. The list of rules is scanned from the top for the first rule whose condition part is satisfied, and the corresponding action is executed. T-R sequences differ substantively from conventional production systems, however. T-R actions can be durative rather than discrete. A durative action is one that continues indefinitely. For example, a mobile robot is capable of executing the durative action move, which propels the robot ahead (say at constant speed) indefinitely. Such an action contrasts with a discrete one, such as move forward one meter. In a T-R sequence, a durative action continues so long as its corresponding condition remains the first true condition. When the first true condition changes, the action changes correspondingly. Thus, unlike production systems in computer science, the conditions must be continuously evaluated; the action associated with the currently first true condition is always the one being executed. An action terminates only when its energizing condition ceases to be the first true condition.
Indeed, rather than thinking of T-R sequences in terms of the computer science idea of discrete events, it is more appropriate to think of them as being implemented by circuitry. For example, the sequence above can be implemented by the circuit shown in figure 1.
Furthermore, we imagine that the conditions, K_i, are also being continuously computed. The actions, a_i, of a T-R sequence can either be primitive actions, or they can be T-R sequences themselves. Thus, programs written in this formalism can be hierarchical (even recursive, as we shall see later). In the case of hierarchical programs, it is important to realize that all conditions at all levels of the hierarchy are continuously being evaluated; a high level sequence can redirect control through a different path of lower level sequences as dictated by the values of the conditions at the various levels.
In writing a T-R sequence, a programmer ordinarily works backward from whatever goal condition the sequence is being designed to achieve. The condition K_1 is taken to be the goal condition, and the corresponding action, a_1, is the null action.
The condition K_2 is the weakest condition such that, when it is satisfied (and K_1 is not), the durative execution of a_2 will (all other things being equal) eventually achieve K_1. And so on. Each non-null action, a_i, is supposed to achieve a condition, K_j, strictly higher in the list (j < i). The conditions are therefore regressions (Nilsson, 1980) of higher conditions through the actions that achieve those higher conditions.
Formally, we say that a T-R sequence satisfies the regression property if each condition, K_i (m \geq i > 1), is the regression of some higher condition in the sequence, K_j (j < i), through the action a_i. We say that a T-R sequence is complete if and only if K_1 \vee \ldots \vee K_i \vee \ldots \vee K_m is a tautology. A T-R sequence is universal if it satisfies the regression property and is complete. It is easy to see that a universal T-R sequence will always achieve its goal condition, K_1, if there are no sensing or execution errors. Sometimes an action does not have the effect that was anticipated by the agent's designer (the normal effect), and sometimes exogenous events (separate from the actions of the agent) change the world in unexpected ways. These phenomena, of course, are the reason continuous feedback is required. Universal T-R sequences, like universal plans (Schoppers, 1987), are robust in the face of occasional deviations from normal execution. They can also exploit serendipitous effects; it may accidentally happen that an action achieves a condition higher in the list of condition/action rules than normally expected. Even if an action sometimes does not achieve its normal effect (due to occasional sensing or execution errors), nevertheless some action will be executed. So long as the environment does not too often frustrate the achievement of the normal effects of actions, the goal condition of a universal T-R sequence will ultimately be achieved." }, { "figure_ref": [], "heading": "An Example", "publication_ref": [], "table_ref": [], "text": "The following rather simple example should make these ideas more concrete. Consider the simulated robots in figure 2. Let's suppose that these robots can move bars around in their two-dimensional world. The robot on the right is holding a bar, and we want the other robot to go to and grab the bar marked A. We presume that this robot can sense its environment and can evaluate conditions which tell it whether or not it is already grabbing bar A (is-grabbing), facing toward bar A (facing-bar), positioned with respect to bar A so that it can reach and grab it (at-bar-center), on the perpendicular bisector of bar A (on-bar-midline), and facing a zone on the perpendicular bisector of bar A from which it would be appropriate to move toward bar A (facing-midline-zone). Let's assume also that these conditions have some appropriate amount of hysteresis so that hunting behavior is dampened. Suppose the robot is capable of executing the primitive actions grab-bar, move, and rotate with the obvious effects. Execution of a T-R sequence built from these conditions and actions will result in the robot grabbing bar A. Notice how each properly executed action in this sequence achieves the condition in the rule above it. In this way, the actions inexorably proceed toward the goal. Occasional setbacks merely cause delays in achieving the goal so long as the actions usually achieve their normal effects." },
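The rule semantics just described can be made concrete with a small interpreter. The following Python sketch is not from the paper; it approximates the continuous circuit semantics by re-scanning the rule list on every tick and executing one short increment of the durative action whose condition is the first true one (the tick-based approximation and all names are our assumptions):

    # Minimal T-R sequence interpreter (illustrative sketch, not the paper's code).
    # A rule is a (condition, action) pair: condition() -> bool reads sensed state;
    # action() performs one short increment of a durative action.
    def tr_step(rules):
        # Scan top-down; run one increment of the first enabled action.
        for condition, action in rules:
            if condition():
                action()
                return
        raise RuntimeError("incomplete T-R sequence: no condition is true")

    def tr_run(rules, goal, max_ticks=100000):
        # Frequent re-evaluation stands in for continuous evaluation.
        for _ in range(max_ticks):
            if goal():
                return True
            tr_step(rules)
        return False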
}, { "figure_ref": [], "heading": "Teleo-Reactive Programs", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Rules with Variables", "publication_ref": [], "table_ref": [], "text": "We can generalize the notion of a T-R sequence by permitting the rules to contain free variables that are bound when the sequence is \\called.\" We will call such a sequence a T-R program. Additional generality is obtained if we assume that the variables are not necessarily bound to constants but to quantities whose values are continuously being computed (as if by circuitry) as the environment changes.\nA simple example involving having a robot go to a designated goal location in two dimensions will serve to illustrate. Suppose the goal location is given by the value of the variable loc. At run time, loc will be bound to a pair of X; Y coordinates, although we allow the binding to change during run time. At any time during the process, the robot's X; Y position is given by the value of the variable position. (We assume that the robot has some kind of navigational aid that reliably and continuously computes the value of position.) From the instantaneous values of loc and position, the robot can compute the direction that it should face to proceed in a straight line toward loc. Let the value of this direction at any time be given by the value of the function course(position, loc). At any time during the process, the robot's angular heading is given by the value of the variable heading. Using these variables, the T-R program to drive the robot to loc is:\ngoto(loc) equal(position, loc) ! nil equal(heading, course(position, loc)) ! move T ! rotate\nImplementing goto(loc) in circuitry is straightforward. The single parameter of the program is loc whose (possibly changing) value is specied at run time by a user, by a higher level program, or by circuitry. The other (global) parameters, position and heading, are provided by circuitry, and we assume that the function course is continuously being computed by circuitry. Given the values of all of these parameters, computing which action to energize is then computed by circuitry in the manner of gure 1." }, { "figure_ref": [], "heading": "Hierarchical Programs", "publication_ref": [ "b24" ], "table_ref": [], "text": "Our formalism allows writing hierarchical and recursive programs in which the actions in the rules are themselves T-R programs. As an example, we can write a recursive navigation program that calls goto. Our new navigation program requires some more complex sensory functions. Imagine a function clear-path(place1, place2) that has value T if and only if the direct path is clear between place1 and place2. (We assume the robot can compute this function, continuously, for place1 = position, and place2 equal to any target location.) Also imagine a function new-point(place1, place2) that computes an intermediate position between place1 and place2 whenever clear-path does not have value T . The value of newpoint lies appropriately to the side of the obstacle determined to be between place1 and place2 (so that if the robot heads toward new-point rst and then toward place2, it can navigate around the obstacle). Both clear-path and new-point are continuously computed by perceptual systems with which we endow the robot. We'll name our new navigation program amble(loc). Here is the code:\namble(loc) equal(position, loc) ! nil clear-path(position, loc) ! goto(loc) T ! 
amble(new-point(position, loc))
We show in figure 3 the path that a robot controlled by this program might take in navigating around the obstacles shown. (The program doesn't necessarily compute shortest paths; we present the program here simply as an illustration of recursion.) Note that if the obstacle positions or goal location change during execution, these changes will be reflected in the values of the parameters used by the program, and program execution will proceed in a manner appropriate to the changes. In particular, if a clear path ever becomes manifest between the robot and the goal location, the robot will abandon moving to any subgoals that it might have considered and begin moving directly to the goal. The continuous computation of parameters involved in T-R programs and the ability of high level programs to redirect control account for the great robustness of this formalism. A formal syntax for T-R programs is given in (Nilsson, 1992)." }, { "figure_ref": [], "heading": "Implementational Issues", "publication_ref": [ "b2" ], "table_ref": [], "text": "The T-R formalism, with its implicit assumption of continuous computation of conditions and parameters, should be thought of as a fully legitimate \"level\" in the hierarchy of program structure controlling the agent, regardless of how this level is implemented by levels below, just as computer scientists think of list processing as a level of actual operation even though it is implemented by more primitive logical operations below. If we assume (as we do) that the pace of events in the agent's environment is slow compared with the amount of time taken to perform the \"continuous\" computations required in a T-R program, then the T-R programmer is justified in assuming \"real\" continuous sensing as s/he writes programs (even though the underlying implementation may involve discrete sampling). We recommend the T-R formalism only for those applications for which this assumption is justified. For those applications, the T-R level shields the programmer from having to worry about how that level is implemented and greatly facilitates program construction.
There are several different ways in which T-R programs can be interpreted into lower level implementations. It is beyond the scope of this paper to do more than point out some obvious methods, and we leave important questions about the properties of these methods to subsequent research. One method of implementation involves the construction of actual or simulated circuits according to the basic scheme of figure 1. First, the top level condition-computing circuits (including circuits for computing parameters used in the conditions) are constructed and allowed to function. A specific action, say a_i, is energized as a result. If a_i is primitive, it is turned on, keeping the circuitry in place and functioning until some other top-level action is energized, and so on. If a_i is a T-R sequence, the circuitry needed to implement it is constructed (just as was done at the top level), an action is selected, and so on, and all the while levels of circuitry above are left functioning. As new lower level circuitry is constructed, any circuitry no longer functioning (that is, circuitry no longer \"called\" by functioning higher level circuitry) can be garbage collected.
There are important questions of parameter passing and of timing in this process which I do not deal with here, relying on the assumption that the times needed to create circuitry and for the circuitry to function are negligible compared to the pace of events in the world.
This assumption is similar to the synchrony hypothesis in the ESTEREL programming language (Berry & Gonthier, 1992), where it is assumed that a program's reaction \"... takes no time with respect to the external environment, which remains invariant during [the reaction].\"
Although there is no reason in principle that circuitry could not be simulated or actually constructed (using some sort of programmable network of logic gates), it is also straightforward to implement a T-R program using more standard computational techniques. T-R programs can be written as LISP cond statements, and durative actions can be simulated by iterating very short action increments. For example, the increment for the move action for a simulated robot might move the robot ahead by a small amount. After each action increment, the top level LISP cond is executed anew, and of course all of the functions and parameters that it contains are evaluated anew. In our simulations of robots moving in two-dimensional worlds (to be discussed below), the computations involved are sufficiently fast to effect a reasonable pace with apparent smooth motion.
This implementation method essentially involves sampling the environment at irregular intervals. Of course, there are questions concerning how the computation times (and thus the sampling rate) affect the real-time aspects of agent behavior which we do not address here, again assuming the sampling intervals to be very short.
Whatever method is used to interpret T-R programs, care must be taken not to conflate the T-R level with the levels below. The programmer ought not to have to think about circuit simulators or sampling intervals but should imagine that sensing is done continuously and immediately." }, { "figure_ref": [], "heading": "Graphical Representations", "publication_ref": [], "table_ref": [], "text": "The goto program can be represented by a graph as well as by the list of rules used earlier.
The graphical representation of this program is shown in figure 4. The nodes are labeled by conditions, and the arcs by actions. To execute the graphical version of the program, we look for the shallowest true node (taking the goal condition as the root) and execute the action labeling the arc leading out from that node.
In the graph of figure 4, each action normally achieves the condition at the head of its arc (when the condition at the tail of the arc is the shallowest true condition). If there is more than one action that can achieve a condition, we would have a tree instead of a single-path graph. A more general graph, then, is a teleo-reactive tree such as that depicted in figure 5. T-R trees are executed by searching for the shallowest true node and executing the action labeling the arc leaving that node. Alternatively, we could search for that true node judged to be on a path of least cost to the goal, where some appropriate heuristic measure of cost is used. [For simplicity, the phrase \"shallowest true node\" will be taken to mean either the shallowest true node (literally) or the true node on a path of least cost to the goal.] Ties among several equally shallow true nodes are broken according to a fixed tie-breaking rule.
In figure 5 we see that, in particular, there are at least two ways to achieve condition K_1.
One way uses action a_2 (when K_2 is the shallowest true node), and one way uses action a_3 (when K_3 is the shallowest true node).
In analogy with the definitions given for T-R sequences, a T-R tree satisfies the regression property if every non-root node is the regression of its parent node through the action linking it with its parent. A T-R tree is complete if the disjunction of all of its conditions is a tautology. A T-R tree is universal if and only if it satisfies the regression property and is also complete. With a fixed tie-breaking rule, a T-R tree becomes a T-R sequence. If a T-R tree is universal, then so will be the corresponding T-R sequence.
One might at first object to this method for executing a T-R tree on the grounds that the sequence of actions that emerges will hop erratically from one path to another. But if the tree satisfies the regression property, and if the heuristic for measuring cost to the goal is reasonable, then (however erratic the actions may appear to be) each successfully executed action brings the agent closer to the goal." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b32", "b33", "b11" ], "table_ref": [], "text": "We have carried out several preliminary experiments with agents programmed in this language (using LISP cond statements and short action increments). One set of experiments uses simulated robots acting in a two-dimensional space, called Botworld. (Subsequently, Patrick Teo implemented a version that runs under X-windows on any of several different workstations (Teo, 1991, 1992); the latter version allows the simulation of several robots simultaneously, each under the control of its own independently running process.) A robot can turn and move, can grab and release a suitably adjacent bar, can turn and move a grabbed bar, and can connect a bar to other bars or structures. The robots continuously sense whether or not they are holding a bar, and they \"see\" in front of them (giving them information about the location of bars and structures). Because of the existence of other robots which may change the world in sometimes unexpected ways, it is important for each robot to sense certain critical aspects of its environment continuously.
A typical Botworld graphical display is shown in figure 6.
We have written various T-R programs that cause the robots to build structures of various kinds (like the triangle being constructed in figure 6). A robot controlled by one of these programs exhibits homeostatic behavior. So long as the main goal (whatever it is) is satisfied, the robot is inactive. Whenever the goal (for whatever reason) is not satisfied, the robot becomes active and persists until it achieves the goal. If another agent achieves part or all of the goal, the robot carries on appropriately from the situation it finds itself in to complete the process.
In our experiments, the conditions used in the T-R rules are conditions on a model of the environment that the robot constructs from its sensory system and maintains separately from the T-R mechanism. The use of a model permits a robot to perform its actions in response to all the sensory stimuli (past and present) that have been used to help construct the model. But, if the T-R actions include direct changes to the model (in addition to those ...). In other experiments, we have used the Nomadic Technologies 100 series mobile robot. The robot is equipped with a ring of 16 infrared sensors and a ring of 16 sonar sensors.
It is controlled via a radio modem by a Macintosh II running Allegro Common Lisp. We have implemented robust T-R programs for some simple office-environment tasks, such as wall-following and corridor-following (Galles, 1993). The programs were initially developed and debugged using the Nomadics simulator of the actual robot; very few changes had to be made in porting the programs from the simulator to the robot. In performing these tasks, the robot is highly reactive and persistent even in the face of occasional extreme sonar or infrared range errors and deliberate attempts to confuse it. The robot quickly adapts to sudden changes in the environment, such as those caused by people sharing the hallways.
In writing T-R programs, one need only be concerned with inventing the appropriate predicates using the available perceptual functions and model database. One does not need to worry about providing interrupts of lower level programs so higher level ones can regain control. We have found that debugging T-R programs presents some challenges, though. Since they are designed to be quite robust in the face of environmental uncertainty, they also sometimes work rather well even though they are not completely debugged. These residual errors might not have undesirable effects until the programs are used in higher level programs, making the higher ones more difficult to debug." }, { "figure_ref": [], "heading": "Other Approaches for Specifying Behavior", "publication_ref": [], "table_ref": [], "text": "There have been several formalisms proposed for prescribing sensory-directed, real-time activity in dynamic environments. Some of these are closely related to the T-R formalism proposed here. In this section I point out the major similarities and differences between T-R programs and a representative, though not complete, sample of their closest relatives. The other reactive formalisms are of two types, namely, those that sample their environments at discrete intervals (perhaps rapidly enough to be sufficiently reactive), and those that create circuitry (like T-R programs). The discrete-sampling systems do not abstract this activity into a higher level in which the environment is monitored continuously, and most of the circuitry-creating systems do so prior to run time (unlike T-R programs, which create circuitry at run time)." }, { "figure_ref": [], "heading": "Discrete-Sampling Systems", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Production Systems", "publication_ref": [ "b34", "b23", "b9" ], "table_ref": [], "text": "As has already been mentioned, T-R programs are similar to production systems (Waterman & Hayes-Roth, 1978). The intermediate-level actions (ILAs) used in the SRI robot Shakey (Nilsson, 1984) were programmed using production rules and were very much like T-R programs. A T-R program also resembles a plan represented in triangle-table form constructed by STRIPS (Fikes, Hart & Nilsson, 1972). Each of the conditions of a T-R sequence corresponds to a triangle table kernel. In the PLANEX execution system for triangle tables, the action corresponding to the highest-numbered satisfied kernel is executed. A major difference between all of these previous production-system style programs and T-R programs is that T-R programs are continuously responsive to the environment while ordinary production systems are not." },
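To make the discrete-sampling style of implementation described earlier concrete, here is a small Python sketch (ours, not the paper's code; the geometry, step sizes, and the stub perceptual functions clear_path and new_point are all assumptions) of the goto and amble programs, re-evaluated from the top on every tick:

    # Illustrative sketch (ours): the goto and amble programs interpreted
    # by discrete sampling with short action increments.
    import math

    def course(position, loc):
        # Heading (radians) of the straight line from position to loc.
        return math.atan2(loc[1] - position[1], loc[0] - position[0])

    class Robot:
        def __init__(self):
            self.position = (0.0, 0.0)
            self.heading = 0.0
        def move(self, step=0.1):   # one short increment of the durative move
            x, y = self.position
            self.position = (x + step * math.cos(self.heading),
                             y + step * math.sin(self.heading))
        def rotate(self, dtheta):
            self.heading = (self.heading + dtheta) % (2 * math.pi)

    def near(a, b, eps=0.15):
        return math.hypot(a[0] - b[0], a[1] - b[1]) < eps

    def goto(r, loc):
        # One tick of the goto program; called repeatedly, top-down each time.
        if near(r.position, loc):
            return                                  # equal(position, loc) -> nil
        err = (course(r.position, loc) - r.heading + math.pi) % (2 * math.pi) - math.pi
        if abs(err) < 0.05:
            r.move()                                # heading matches course -> move
        else:
            r.rotate(0.05 if err > 0 else -0.05)    # T -> rotate

    def amble(r, loc, clear_path, new_point):
        # clear_path and new_point are the perceptual functions assumed in
        # the paper; here they are caller-supplied stubs. A real
        # implementation would bound this per-tick recursion.
        if near(r.position, loc):
            return
        if clear_path(r.position, loc):
            goto(r, loc)
        else:
            amble(r, new_point(r.position, loc), clear_path, new_point)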
}, { "figure_ref": [], "heading": "Reactive Plans", "publication_ref": [ "b30", "b10", "b14", "b31" ], "table_ref": [], "text": "Several researchers have adopted the approach of using the current situation to index into a set of pre-arranged action sequences (George & Lansky, 1987;Schoppers, 1987;Firby, 1987). This set can either be large enough to cover a substantial number of the situations in which an agent is likely to nd itself or it can cover all possible situations. In the latter case, the plan set is said to be universal. Unlike T-R programs, these systems explicitly sample their environments at discrete time steps rather than continuously. As with T-R programs, time-space trade-os must be taken into account when considering how many dierent conditions must be anticipated in providing reactive plans. Ginsberg has noted that in several domains, the number of situations likely to be encountered by the agent is so intractably large that the agent is forced to postpone most of its planning until run time when situations are actually encountered (Ginsberg, 1989). (For further discussion of this point, see (Selman, 1993).) T-R programs have the advantage that at least a rudimentary form of planning, namely parameter binding, is done at run time. The PRS system (George & Lansky, 1987) is capable of more extensive planning at run time as well as reacting appropriately to its current situation." }, { "figure_ref": [], "heading": "Situated Control Rules", "publication_ref": [ "b8", "b26" ], "table_ref": [], "text": "Drummond (Drummond, 1989) introduces the notion of a plan net which is a kind of Petri net (Reisig, 1985) for representing the eects of actions (which can be executed in parallel). Taking into account the possible interactions of actions, he then projects the eects of all possible actions from a present state up to some horizon. These eects are represented in a structure called a plan projection. The plan projection is analyzed to see, for each state in it, which states possibly have a path to the goal state. This analysis is a forward version of the backward analysis used by a programmer in producing a T-R tree. Situated control rules are the result of this analysis; they constrain the actions that might be taken at any state to those which will result in a state that still possibly has a path to the goal. Plan nets and Petri nets are based on discrete events and thus are not continuously responsive to their environments in the way that T-R programs are." }, { "figure_ref": [], "heading": "Circuit-Based Systems", "publication_ref": [ "b16", "b17", "b28", "b5", "b4", "b3", "b12" ], "table_ref": [], "text": "Kaelbling has proposed a formalism called GAPPS (Kaelbling, 1988;Kaelbling & Rosenschein, 1990), involving goal reduction rules, for implicitly describing how to achieve goals. The GAPPS programmer denes the activity of an agent by providing sucient goal reduction rules to connect the agent's goals with the situations in which it might nd itself. These rules are then compiled into circuitry for real-time control of the agent. Rosenschein and Kaelbling (Rosenschein & Kaelbling, 1986) call such circuitry situated automata.\nA collection of GAPPS rules for achieving a goal can be thought of as an implicit specication of a T-R program in which the computations needed to construct the program are performed when the rules are compiled. 
The GAPPS programmer typically exerts less specic control over the agent's activity|leaving some of the work to the search process performed by the GAPPS compiler. For example, a T-R program to achieve a goal, p , can be implicitly specied by the following GAPPS rule:\n(defgoalr (ach ?p) (if ((holds ?p) (do nil)) ((holds (regress ?a ?p)) (do ?a)) (T ach (regress ?a ?p)) ))\nThe recursion dened by this rule bottoms out in rules of the form:\n(defgoalr (ach ) ((holds ) (do )) ) where and are conditions and is a specic action.\nGAPPS compiles its rules into circuitry before run time, whereas the circuit implementation of a T-R program depends on parameters that are bound at run time. Both systems result in control that is continuously responsive to the environment.\nIn implementing a system to play a video game, Chapman (Chapman, 1990) compiles production-like rules into digital circuitry for real-time control using an approach that he calls \\arbitration macrology.\" As in situated automata, the compilation process occurs prior to run time.\nBrooks has developed a behavior language, BL, (Brooks, 1989), for writing reactive robot control programs based on his \\subsumption architecture\" (Brooks, 1986). A similar language, ALFA, has been implemented by Gat (Gat, 1991). Programs written in these languages compile into structures very much like circuits. Again, compilation occurs prior to run time. It has been relatively straightforward to translate examples of subsumptionarchitecture programs into T-R programs.\nIn all of these circuit-based systems, pre-run-time compiling means that more circuitry must be built than might be needed in any given run because all possible contingencies must be anticipated at compile time. 3 But in T-R programs, parameters are bound at run time, and only that circuitry required for these specic bindings is constructed." }, { "figure_ref": [], "heading": "Future Work", "publication_ref": [ "b9", "b21", "b1", "b27", "b19", "b15", "b29" ], "table_ref": [], "text": "The T-R formalism might easily be augmented to embody some features that have not been discussed in this paper. Explicit reference to time in specifying actions might be necessary.\nFor example, we might want to make sure that some action a is not initiated until after some time t 1 and ceases after some time t 2 . Time predicates, whose time terms are evaluated using an internal clock, may suce for this purpose.\nAlso, in some applications we may want to control which conditions in a T-R program are actually tested. It may be, for example, that some conditions won't have to be checked because their truth or falsity can be guessed with compelling accuracy.\nSimultaneous and asynchronous execution of multiple actions can be achieved by allowing the right-hand side of rules to contain sets of actions. Each member of the set is then duratively executed asynchronously and independently (so long as the condition in the rule that sustains this set remains the highest true condition). Of course, the programmer must decide under what conditions it is appropriate to call for parallel actions. Future work on related formalisms might reveal ways in which parallel actions might emerge from the interaction of the program and its environment rather than having to be explicitly programmed.\nAlthough we intend that T-R programs for agent control be written by human programmers, we are also interested in methods for modifying them by automatic planning and machine learning. 
We will briey discuss some of our preliminary ideas on planning and learning here.\nT-R trees resemble the search trees constructed by those planning systems that work backwards from a goal condition. The overall goal is the root of the tree; any non-root node g i is the regression of its parent node, g j through the action, a k , connecting them. This similarity suggests that T-R trees can be constructed (and modied) by an automatic planning system capable of regressing conditions through durative actions. Indeed triangle tables (Fikes, Hart & Nilsson, 1972), a degenerate form of T-R tree consisting of only a single path, were constructed by an automatic planning system and an EBL-style generalizer (Mitchell, Keller & Kedar-Cabelli, 1986).\nThe reader might object that there is no reason to suppose that the search trees produced by an automatic planning process will contain nodes whose conditions are those that the agent is likely to encounter in its behavior. A process of incremental modication, however, should gradually make these constructed trees more and more matched to the agent's environment. If a tree for achieving a desired goal has no true nodes in a certain situation, it is as if the search process employed by an automatic planner had not yet terminated because no subgoal in the search tree was satised in the current state. In this case, the planning system can be called upon to continue to search; that is, the existing T-R tree will be expanded until a true node is produced. Pruning of T-R trees can be accomplished by keeping statistics on how often their nodes are satised. Portions of the trees that are never or seldom used can be erased. Early unpublished work by Scott Benson indicates that T-R programs can be eectively generated by automatic planning methods (Benson, 1993).\nIn considering learning mechanisms, we note rst that T-R sequences are related to a class of Boolean functions that Rivest has termed k-decision lists (Rivest, 1987;Kohavi & Benson, 1993). A k-decision list is an ordered list of condition-value pairs in which each condition is a conjunction of Boolean variables of length at most k, and each value is a truth value (T or F ). The value of the Boolean function represented by a k-decision list is that value associated with the highest true condition. Rivest has shown that such functions are polynomially PAC learnable and has presented a supervised learning procedure for them.\nWe can see that a T-R sequence whose conditions are limited to k-length conjunctions of Boolean features is a slight generalization of k-decision lists. The only dierence is that such a T-R sequence can have more than two dierent \\values\" (that is, actions). We observe that such a T-R sequence (with, say, n dierent actions) is also PAC learnable since its actions can be encoded with log 2 n decision lists. George John (John, 1993) has investigated a supervised learning mechanism for learning T-R sequences.\nTypically, the conditions used in T-R programs are conjunctions of propositional features of the robot's world and/or model. Because a linear threshold function can implement conjunctions, one is led to propose a neural net implementation of a T-R sequence. The neural net implementation, in turn, evokes ideas about possible learning mechanisms. Consider the T-R sequence:\nK 1 ! a 1 K 2 ! a 2 1 1 1 K i ! a i 1 1 1 K m ! a m\nSuppose we stipulate that the K i are linear threshold functions of a set of propositional features. 
The a_i are not all necessarily distinct; in fact we will assume that there are only k \leq m distinct actions. Let these be denoted by b_1, \ldots, b_k. The network structure in figure 7 implements such a T-R sequence.
The propositional features tested by the conditions are grouped into an n-dimensional binary (0,1) vector, X, called the input vector. The m conditions are implemented by m threshold elements having weighted connections to the components of the input vector. The process of finding the first true condition is implemented by a layer containing appropriate inhibitory weights and AND units such that only one AND unit can ever have an output value of 1, and that unit corresponds to the first true condition. A unique action is associated with each condition through a layer of binary-valued weights and OR-unit associators. Each AND unit is connected to one and only one associator by a non-zero weight. Since only one AND unit can have a non-zero output, only that unit's associator can have a non-zero output. (But each associator could be connected to multiple AND units.) For example, if action b_i is to be associated with conditions K_j and K_l, then there will be unit weights from the j-th and l-th AND units to the associator representing action b_i and zero-valued weights from all other AND units to that associator. The action selected for execution is the action corresponding to the single associator having the non-zero output. We are investigating various learning methods suggested by this neural net implementation.
Work must also be done on the question of what constitutes a goal. I have assumed goals of achievement. Can mechanisms be found that continuously avoid making certain conditions true (or false) while attempting to achieve others? Or suppose priorities on a number of possibly mutually contradictory conditions are specified; what are reasonable methods for attending to those achievable goals having the highest priorities? Also, it will be interesting to ask in what sense T-R programs can be proved to be correct. It would seem that verification would have to make assumptions about the dynamics of the environment; some environments might be so malevolent that agents in them could never achieve their goals. Even so, a verifier equipped with a model of the effects of actions could at least check to see that the regression property was satisfied and note any lapses.
More work remains on methods of implementing or interpreting T-R programs and the real-time properties of implementations. These properties will, of course, depend on the depth of the T-R program hierarchy and on the conditions and features that must be evaluated.
Finally, it might be worthwhile to investigate \"fuzzy\" versions of T-R trees. One could imagine fuzzy predicates that would energize actions with a \"strength\" that depends on the degree to which the predicates are true. The SRI robot, Flakey, uses a fuzzy controller (Saffiotti, Ruspini & Konolige, 1993)." },
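The correspondence between T-R sequences and decision lists noted above can be illustrated directly. This small Python sketch (ours; the features and rule ordering are invented for illustration) evaluates a generalized k-decision list whose values are action labels rather than truth values:

    # Illustrative sketch (ours): a T-R sequence restricted to k-length
    # conjunctions of Boolean features is a generalized k-decision list
    # whose "values" are action labels instead of T/F.

    # Each rule: (tuple of required feature names, action label).
    RULES = [
        (("is_grabbing",), "nil"),
        (("at_bar_center",), "grab-bar"),
        (("facing_bar", "on_bar_midline"), "move"),
        ((), "rotate"),            # empty conjunction = the default T rule
    ]

    def decide(features, rules=RULES):
        # Return the action of the highest (first) rule whose conjunction holds.
        for conjuncts, action in rules:
            if all(features.get(f, False) for f in conjuncts):
                return action
        raise RuntimeError("incomplete decision list")

    # Example: a state in which the robot faces the bar from its midline.
    print(decide({"facing_bar": True, "on_bar_midline": True}))  # -> move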
{ "figure_ref": [], "heading": "Conclusions", "publication_ref": [ "b6" ], "table_ref": [], "text": "I have presented a formalism for specifying actions in dynamic and uncertain domains. Since this work rests on ideas somewhat different than those of conventional computer science, I expect that considerably more analysis and experimentation will be required before the T-R formalism can be fully evaluated. The need in robotics for control-theoretic ideas such as homeostasis, continuous feedback, and stability appears to be sufficiently strong, however, that it seems appropriate for candidate formalisms embodying these ideas to be put forward for consideration.
Experiments with the language will produce a stock of advice about how to write T-R programs effectively. Already, for example, it is apparent that a sustaining condition in a T-R sequence must be carefully specified so that it is no more restrictive than it really needs to be; an overly restrictive condition is likely to be rendered false by the very action that it is supposed to sustain before that action succeeds in making a higher condition in the sequence true. But, of course, overly restrictive conditions won't occur in T-R programs that satisfy the regression property.
To be usefully employed, T-R programs (or any programs controlling agent action) need to be embodied in an overall agent architecture that integrates perceptual processing, goal selection, action computation, environmental modeling, and planning and learning mechanisms. Several architectural schemes have been suggested, and we will not summarize them here except to say that three layers of control are often delineated. A typical example is the SSS architecture of Connell (Connell, 1993). His top (Symbolic) layer does overall goal setting and sequencing, the middle (Subsumption) level selects specific actions, and the lower (Servo) level exerts standard feedback control over the effectors. We believe T-R programs would most appropriately be used in the middle level of such architectures.
The major limitation of T-R programs is that they involve much more computation than do programs that check only relevant conditions. Most of the conditions computed by a T-R program in selecting an action are either irrelevant to the situation at hand or have values that might be accurately predicted (if the programmer wanted to take the trouble to do so). We are essentially trading computing time for ease of programming, and our particular trade will only be advantageous in certain applications. Among these, I think, is the mid-level control of robots and (possibly) software agents.
In conclusion, there are three main features embodied in the T-R formalism. One is continuous computation of the parameters and conditions on which action is based. T-R programs allow for continuous feedback while still supporting parameter binding and recursion. The second feature is the regression relationship between conditions in a T-R program. Each condition is the regression of some condition closer to the goal through an action that normally achieves that closer-to-the-goal condition. The regression property assures robust goal-seeking behavior. Third, the conceptual circuitry controlling action is constructed at run time, and this feature permits programs to be universal while still being compact. In addition, T-R programs are intuitive and easy to write and are written in a formalism that is compatible with automatic planning and learning methods." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "I trace my interest in reactive, yet purposive, systems to my early collaborative work on triangle tables and ILAs. Several former Stanford students, including Jonas Karlsson, Eric Ly, Rebecca Moore, and Mark Torrance, helped in the early stages of this work. I also want to thank my sabbatical hosts, Prof. Rodney Brooks at MIT, Prof.
Barbara Grosz at Harvard, and the people at the Santa Fe Institute. More recently, I have benefited from discussions with Scott Benson, George John, and Ron Kohavi. I also thank the anonymous referees for their helpful suggestions. This work was performed under NASA Grant NCC2-494 and NSF Grant IRI-9116399." } ]
[ { "authors": "P Agre", "journal": "", "ref_id": "b0", "title": "The Dynamic Structure of Everyday Life", "year": "1989" }, { "authors": "S Benson", "journal": "", "ref_id": "b1", "title": "", "year": "1993" }, { "authors": "G Berry; G Gonthier", "journal": "Science of Computer Programming", "ref_id": "b2", "title": "The ESTEREL Synchronous Programming Language", "year": "1992-11" }, { "authors": "R Brooks", "journal": "IEEE Journal of Robotics and Automation", "ref_id": "b3", "title": "A Robust Layered Control System for a Mobile Robot", "year": "1986-03" }, { "authors": "R Brooks", "journal": "", "ref_id": "b4", "title": "The Behavior Language User's Guide", "year": "1989" }, { "authors": "D Chapman", "journal": "", "ref_id": "b5", "title": "Vision, Instruction and Action", "year": "1990" }, { "authors": "J Connell", "journal": "", "ref_id": "b6", "title": "SSS: A Hybrid Architecture Applied to Robot Navigation", "year": "1993" }, { "authors": "T Dean; M Wellman", "journal": "Morgan Kaufmann", "ref_id": "b7", "title": "Planning and Control", "year": "1991" }, { "authors": "M Drummond", "journal": "Morgan Kaufmann", "ref_id": "b8", "title": "Situated Control Rules", "year": "1989" }, { "authors": "R Fikes; P Hart; N Nilsson", "journal": "Articial Intelligence", "ref_id": "b9", "title": "Learning and Executing Generalized Robot Plans", "year": "1972" }, { "authors": "R Firby", "journal": "Morgan Kaufmann", "ref_id": "b10", "title": "An Investigation into Reactive Planning in Complex Domains", "year": "1987" }, { "authors": "D Galles", "journal": "IOS Press", "ref_id": "b11", "title": "Map Building and Following Using Teleo-Reactive Trees", "year": "1993" }, { "authors": "E Gat", "journal": "", "ref_id": "b12", "title": "ALFA: A Language for Programming Reactive Robotic Control Systems", "year": "1991" }, { "authors": "M George; A Lansky", "journal": "Morgan Kaufmann", "ref_id": "b13", "title": "Reactive Reasoning and Planning", "year": "1989" }, { "authors": "M L Ginsberg", "journal": "AAAI Magazine", "ref_id": "b14", "title": "Universal Planning: An (Almost) Universally Bad Idea", "year": "1989" }, { "authors": "G John", "journal": "", "ref_id": "b15", "title": "SQUISH: A Preprocessing Method for Supervised Learning of T-R Trees from Solution Paths", "year": "1993" }, { "authors": "L P Kaelbling", "journal": "American Association for Articial Intelligence", "ref_id": "b16", "title": "Goals as Parallel Program Specications", "year": "1988" }, { "authors": "L P Kaelbling; S J Rosenschein", "journal": "Robotics and Autonomous Systems", "ref_id": "b17", "title": "Action and Planning in Embedded Agents", "year": "1990-06" }, { "authors": "J Karlsson", "journal": "", "ref_id": "b18", "title": "Building a Triangle Using Action Nets", "year": "1990-06" }, { "authors": "R Kohavi; S Benson", "journal": "Machine Learning", "ref_id": "b19", "title": "Research Note on Decision Lists", "year": "1993" }, { "authors": "P Maes", "journal": "Connection Science", "ref_id": "b20", "title": "How to Do the Right Thing", "year": "1989" }, { "authors": "T M Mitchell; R M Keller; S T Kedar-Cabelli", "journal": "Machine Learning", "ref_id": "b21", "title": "Explanation-based Generalization: A Unifying View", "year": "1986" }, { "authors": "N J Nilsson", "journal": "Morgan Kaufmann", "ref_id": "b22", "title": "Principles of Articial Intelligence", "year": "1980" }, { "authors": "N Nilsson", "journal": "", "ref_id": "b23", "title": "Shakey the Robot", "year": "1984" }, { "authors": "N Nilsson", "journal": "", 
"ref_id": "b24", "title": "Toward Agent Programs with Circuit Semantics", "year": "1992" }, { "authors": "P J G Ramadge; W M Wonham", "journal": "", "ref_id": "b25", "title": "The Control of Discrete Event Systems", "year": "1989-01" }, { "authors": "W Reisig", "journal": "Springer Verlag", "ref_id": "b26", "title": "Petri Nets: An Introduction", "year": "1985" }, { "authors": "R L Rivest", "journal": "Machine Learning", "ref_id": "b27", "title": "Learning Decision Lists", "year": "1987" }, { "authors": "S J Rosenschein; L P Kaelbling", "journal": "", "ref_id": "b28", "title": "The Synthesis of Machines with Provable Epistemic Properties", "year": "1986" }, { "authors": "A Saotti; E Ruspini; K Konolige", "journal": "", "ref_id": "b29", "title": "Integrating Reactivity and Goaldirectedness in a Fuzzy Controller", "year": "1993" }, { "authors": "M J Schoppers", "journal": "Morgan Kaufmann", "ref_id": "b30", "title": "Universal Plans for Reactive Robots in Unpredictable Domains", "year": "1987" }, { "authors": "B Selman", "journal": "", "ref_id": "b31", "title": "Near-Optimal Plans, Tractability, and Reactivity", "year": "1993" }, { "authors": "P Teo; C-S ", "journal": "", "ref_id": "b32", "title": "\\Botworld", "year": "1991-12" }, { "authors": "P Teo; C-S ", "journal": "", "ref_id": "b33", "title": "Botworld Structures", "year": "1992-06" }, { "authors": "D A Waterman; F Hayes-Roth", "journal": "Academic Press", "ref_id": "b34", "title": "An Overview of Pattern-Directed Inference Systems", "year": "1978" }, { "authors": "S Wilson", "journal": "The MIT Press/Bradford Books", "ref_id": "b35", "title": "The Animat Path to AI", "year": "1991" } ]
[ { "formula_coordinates": [ 2, 274.14, 515.61, 63.18, 96.84 ], "formula_id": "formula_0", "formula_text": "K 1 ! a 1 K 2 ! a 2 1 1 1 K i ! a i 1 1 1 K m ! a m" }, { "formula_coordinates": [ 3, 272.75, 329.26, 174.65, 236.06 ], "formula_id": "formula_1", "formula_text": "a m a 3 a 2 K 2 K 3 K m K 1 ¬ ¬ ¬ a 1" }, { "formula_coordinates": [ 6, 106.92, 162.9, 237.96, 57.06 ], "formula_id": "formula_2", "formula_text": "goto(loc) equal(position, loc) ! nil equal(heading, course(position, loc)) ! move T ! rotate" }, { "formula_coordinates": [ 6, 106.92, 551.34, 298.98, 57.24 ], "formula_id": "formula_3", "formula_text": "amble(loc) equal(position, loc) ! nil clear-path(position, loc) ! goto(loc) T ! amble(new-point(position, loc))" }, { "formula_coordinates": [ 10, 276.87, 42.21, 108.04, 275.22 ], "formula_id": "formula_4", "formula_text": "Nilsson K m K m -1 K 1 K 2 a m a 2 K 3 a 3" }, { "formula_coordinates": [ 15, 274.14, 451.53, 63.18, 96.84 ], "formula_id": "formula_5", "formula_text": "K 1 ! a 1 K 2 ! a 2 1 1 1 K i ! a i 1 1 1 K m ! a m" } ]
Teleo-Reactive Programs for Agent Control
A formalism is presented for computing and organizing actions for autonomous agents in dynamic environments. We introduce the notion of teleo-reactive (T-R) programs, whose execution entails the construction of circuitry for the continuous computation of the parameters and conditions on which agent action is based. In addition to continuous feedback, T-R programs support parameter binding and recursion. A primary difference between T-R programs and many other circuit-based systems is that the circuitry of T-R programs is more compact; it is constructed at run time and thus does not have to anticipate all the contingencies that might arise over all possible runs. In addition, T-R programs are intuitive and easy to write and are written in a form that is compatible with automatic planning and learning methods. We briefly describe some experimental applications of T-R programs in the control of simulated and actual mobile robots.
Nils J Nilsson
[ { "figure_caption": "Figure 1 :1Figure 1: Implementing a T-R Sequence in Circuitry", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2: Robots and Bars", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Navigating using amble The continuous computation of parameters involved in T-R programs and the ability of high level programs to redirect control account for the great robustness of this formalism.A formal syntax for T-R programs is given in(Nilsson, 1992).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": ", course(position, loc))", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: A T-R Tree", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Botworld Display", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: A Neural Net that Implements a T-R Sequence", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b21", "b15", "b8", "b17", "b9", "b16", "b0", "b13", "b1", "b13", "b6", "b14", "b17", "b13", "b18", "b13", "b12", "b12" ], "table_ref": [], "text": "Learning the past tense of English verbs, a seemingly minor aspect of language acquisition, has generated heated debates since the rst connectionist implementation in 1986 (Rumelhart & McClelland, 1986). Based on their results, Rumelhart and McClelland claimed that the use and acquisition of human knowledge of language can best be formulated by ANN (Arti cial Neural Network) models without symbol processing that postulates the existence of explicit symbolic representation and rules. Since then, learning the past tense has become a landmark task for testing the adequacy of cognitive modeling. Over the years a number of criticisms of connectionist modeling appeared (Pinker & Prince, 1988;Lachter & Bever, 1988;Prasada & Pinker, 1993;Ling, Cherwenka, & Marinov, 1993). These criticisms centered mainly upon the issues of high error rates and low reliability of the experimental results, the inappropriateness of the training and testing procedures, \\hidden\" features of the representation and the network architecture that facilitate learning, as well as the opaque knowledge representation of the networks. Several subsequent attempts at improving the original results with new ANN models have been made (Plunkett & Marchman, 1991;Cottrell & Plunkett, 1991;MacWhinney & Leinbach, 1991;Daugherty & Seidenberg, 1993). Most notably, MacWhinney and Leinbach (1991) constructed a multilayer neural network with backpropagation (BP), and attempted to answer early criticisms. On the other hand, supporters of the symbolic approach believe that symbol structures such as parse trees, propositions, etc., and the rules for their manipulations, are critical at the cognitive level, while the connectionist approach may only provide an account of the neural structures in which the traditional symbol-processing cognitive architecture is implemented (Fodor & Pylyshyn, 1988). Pinker (1991) and Prasada and Pinker (1993) argue that a proper c 1994 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.\naccounting for regular verbs should be dependent upon production rules, while irregular past-tense in ections may be generalized by ANN-like associative memory.\nThe proper way of debating the adequacy of symbolic and connectionist modeling is by contrasting competitive implementations. Thus, a symbolic implementation is needed that can be compared with the ANN models. This is, in fact, a challenge posed by MacWhinney and Leinbach (1991), who assert that no symbolic methods would work as well as their own model. In a section titled \\Is there a better symbolic model?\" they claim:\nIf there were some other approach that provided an even more accurate characterization of the learning process, we might still be forced to reject the connectionist approach, despite its successes. The proper way of debating conceptualizations is by contrasting competitive implementations. To do this in the present case, we would need a symbolic implementation that could be contrasted with the current implementation. (MacWhinney & Leinbach, 1991, page 153) In this paper, we present a general-purpose Symbolic Pattern Associator (SPA) based upon the symbolic decision tree learning algorithm ID3 (Quinlan, 1986). 
We have shown (Ling & Marinov, 1993) that the SPA's results are much more psychologically realistic than ANN models when compared with human subjects. On the issue of the predictive accuracy, MacWhinney and Leinbach (1991) did not report important results of their model on unseen regular verbs. To reply to our criticism, MacWhinney (1993) re-implemented the ANN model, and claimed that its raw generalization power is very close to that of our SPA. He believed that this should be the case because both systems learn from the same data set:\nThere is a very good reason for the equivalent performance of these two models. [...] When two computationally powerful systems are given the same set of input data, they both extract every bit of data regularity from that input. Without any further processing, there is only so much blood that can be squeezed out of a turnip, and each of our systems [SPA and ANN] extracted what they could. (MacWhinney, 1993, page 295)\nWe will show that this is not the case; obviously there are reasons why one learning algorithm outperforms another (otherwise why do we study different learning algorithms?). The Occam's Razor Principle (preferring the simplest hypothesis over more complex ones) creates preference biases for learning algorithms. A preference bias is a preference order among competitive hypotheses in the hypothesis space. Different learning algorithms, however, employ different ways of measuring simplicity, and thus the concepts they are biased toward differ. How well a learning program generalizes depends upon the degree to which the regularity of the data fits its bias. We study and compare the raw generalization ability of symbolic and ANN models on the task of learning the past tense of English verbs. We perform extensive head-to-head comparisons between ANN and SPA, and show the effects of different representations and encodings on their generalization abilities. Our experimental results demonstrate clearly that 1. the distributed representation, a feature that connectionists have been advocating, does not lead to better generalization when compared with the symbolic representation, or with arbitrary error-correcting codes of a proper length;\n2. ANNs cannot learn the identity mapping that preserves the verb stem in the past tense as well as the SPA can;\n3. a new representation suggested by MacWhinney (1993) improves the predictive accuracy of both SPA and ANN, but SPA still outperforms ANN models;\n4. in sum, the SPA generalizes the past tense of unseen verbs better than ANN models by a wide margin.\nIn Section 5 we discuss reasons as to why the SPA is a better learning model for the task of English past-tense acquisition. Our results support the view that many such rule-governed cognitive processes should be better modeled by symbolic, rather than connectionist, systems." }, { "figure_ref": [], "heading": "Review of Previous Work", "publication_ref": [ "b21" ], "table_ref": [], "text": "In this section, we review briefly the two main connectionist models of learning the past tenses of English verbs, and the subsequent criticisms." }, { "figure_ref": [], "heading": "Rumelhart and McClelland's Model", "publication_ref": [ "b21" ], "table_ref": [], "text": "Rumelhart and McClelland's model is based on a simple perceptron-based pattern associator interfaced with an input/output encoding/decoding network which allows the model to associate verb stems with their past tenses using a special Wickelphone/Wickelfeature phoneme-representation format. The learning algorithm is the classical perceptron convergence procedure. The training and the testing sets are mutually disjoint in the experiments. The errors made by the model during the training process broadly follow the U-shaped learning curve in the stages of acquisition of the English past tense exhibited by young children. The testing sample consists of 86 \"unseen\" low frequency verbs (14 irregular and 72 regular) that are not randomly chosen. The testing sample results have a 93% error rate for the irregulars. The regulars fare better with a 33.3% error rate. Thus, the overall error rate for the whole testing sample is 43% (37 wrong or ambiguous past tense forms out of 86 tested). 
Rumelhart and McClelland (1986) claim that the outcome of their experiment disconfirms the view that there exist explicit (though inaccessible) rules that underlie human knowledge of language." }, { "figure_ref": [], "heading": "MacWhinney and Leinbach's Model", "publication_ref": [ "b13", "b11" ], "table_ref": [], "text": "MacWhinney and Leinbach (1991) report a new connectionist model on the learning of the past tenses of English verbs. They claim that the results from the new simulation are far superior to Rumelhart and McClelland's results, and that they can answer most of the criticisms aimed at the earlier model. The major departure from Rumelhart and McClelland's model is that the Wickelphone/Wickelfeature representational format is replaced with the UNIBET (MacWhinney, 1990) phoneme representational system, which allows the assignment of a single alphabetic/numerical letter to each of the total 36 phonemes. MacWhinney and Leinbach use special templates with which to code each phoneme and its position in a word. The actual input to the network is created by coding the individual phonemes as sets of phonetic features in a way similar to the coding of Wickelphones as Wickelfeatures (cf. Section 4.3). The network has two layers of 200 \"hidden\" units fully connected to adjacent layers. This number was arrived at through trial and error. In addition, the network has a special-purpose set of connections that copy the input units directly onto the output units. Altogether, 2062 regular and irregular English verbs are selected for the experiment; 1650 of them are used for training (1532 regular and 118 irregular), but only 13 low frequency irregular verbs are used for testing (MacWhinney & Leinbach, 1991, page 144). Training the network takes 24,000 epochs. At the end of training there still are 11 errors on the irregular pasts. MacWhinney and Leinbach believe that if they allow the network to run for several additional days and give it additional hidden unit resources, it probably can reach complete convergence (MacWhinney & Leinbach, 1991, page 151). The only testing error rate reported is based on a very small and biased test sample of 13 unseen irregular verbs; 9 out of 13 are predicted incorrectly. They do not test their model on any of the unseen regular verbs: \"Unfortunately, we did not test a similar set of 13 regulars.\" (MacWhinney & Leinbach, 1991, page 151)." }, { "figure_ref": [], "heading": "Criticism of the Connectionist Models", "publication_ref": [], "table_ref": [], "text": "Previous and current criticisms of the connectionist models of learning the past tenses of English verbs center mainly on several issues. Each of these issues is summarized in the following subsections." }, { "figure_ref": [], "heading": "Error Rates", "publication_ref": [ "b13" ], "table_ref": [], "text": "The error rate in producing the past tenses of the \"unseen\" test verbs is very high in both ANN models, and important tests were not carried out in the MacWhinney and Leinbach (1991) model. The experimental results indicate that neither model reaches the level of adult competence. In addition, relatively large numbers of the errors are not psychologically realistic since humans rarely make them." }, { "figure_ref": [], "heading": "Training and Testing Procedures", "publication_ref": [], "table_ref": [], "text": "In both Rumelhart and McClelland's model and MacWhinney and Leinbach's model, the generalization ability is measured on only one training/testing sample. 
Further, the testing sets are not randomly chosen, and they are very small. The accuracy in testing irregular verbs can vary greatly depending upon the particular set of testing verbs chosen, and thus multiple runs with large testing samples are necessary to assess the true generalization ability of a learning model. Therefore, the results of the previous connectionist models are not reliable. In Section 4, we set up a reliable testing procedure to compare connectionist models with our symbolic approach. Previous connectionist simulations have also been criticized for their crude training processes (for example, the sudden increase of regular verbs in the training set), which create such behavior as the U-shaped learning curves." }, { "figure_ref": [], "heading": "Data Representation and Network Architecture", "publication_ref": [ "b8" ], "table_ref": [], "text": "Most of the past criticisms of the connectionist models have been aimed at the data-representation formats employed in the simulations. Lachter and Bever (1988) pointed out that the results achieved by Rumelhart and McClelland's model would have been impossible without the use of several TRICS (The Representations It Crucially Supposes) introduced with the adoption of the Wickelphone/Wickelfeature representational format. MacWhinney and Leinbach claim that they have improved upon the earlier connectionist model by getting rid of the Wickelphone/Wickelfeature representation format, and thus to have responded to the many criticisms that this format entailed. However, MacWhinney and Leinbach also introduce several TRICS in their data-representation format. For example, instead of coding predecessor and successor phonemes as Wickelphones, they introduce special templates with which to code positional information. This means that the network will learn to associate patterns of phoneme/positions within a predetermined consonant/vowel pattern. Further, the use of restrictive templates gets rid of many English verbs that do not fit the chosen template. This may bias the model in favour of shorter verbs, predominantly of Anglo-Saxon origin, and against longer verbs, predominantly composite or of Latin and French origin. Another TRICS introduced is the phonetic feature encoding (a distributed representation). It is not clear why phonetic features such as front, centre, back, high, etc. are chosen. Do they represent finer-grained \"microfeatures\" that help to capture the regularities in English past tenses? In Section 4.5, we will show that the straightforward symbolic representation leads to better generalization than does the carefully engineered distributed representation. This undermines the claimed advantages of the distributed representation of connectionist models." }, { "figure_ref": [], "heading": "Knowledge Representation and Integration of Acquired Knowledge", "publication_ref": [ "b15", "b8" ], "table_ref": [], "text": "Pinker and Prince (1988), and Lachter and Bever (1988), point out that Rumelhart and McClelland try to model the acquisition of the production of the past tense in isolation from the rest of the English morphological system. Rumelhart and McClelland, as well as MacWhinney and Leinbach, assume that the acquisition process establishes a direct mapping from the phonetic representation of the stem to the phonetic representation of the past tense form. This direct mapping collapses some well-established distinctions such as lexical item vs. phoneme string, and morphological category vs. morpheme. Simply remaining at the level of phonetic patterns, it is impossible to express new categorical information in first-order (predicate/function/variable) format. 
One of the inherent deficits of the connectionist implementations is that there is no such thing as a variable for the verb stem, and hence there is no way for the model to attain the knowledge that one could add a suffix to a stem to get its past tense (Pinker & Prince, 1988, page 124). Since the acquired knowledge in such networks is a large weight matrix, which usually is opaque to the human observer, it is unclear how the phonological-level processing that the connectionist models carry out can be integrated with the morphological, lexical, and syntactical levels of processing. Neither Rumelhart and McClelland nor MacWhinney and Leinbach address this issue. In contrast to ANNs, whose internal representations are entirely opaque, the SPA can represent the acquired knowledge in the form of production rules, and allow for further processing, resulting in higher-level categories such as the verb stem and the voiced consonants, linguistically realistic production rules using these new categories for regular verbs, and associative templates for irregular verbs (Ling & Marinov, 1993)." }, { "figure_ref": [], "heading": "The Symbolic Pattern Associator", "publication_ref": [], "table_ref": [], "text": "In this section we take up MacWhinney and Leinbach's challenge for a better symbolic model for learning the past tense of English verbs, and present a general-purpose Symbolic Pattern Associator (SPA) that can generalize the past tense of unseen verbs much more accurately than connectionist models. Our model is symbolic for several reasons. First, the input/output representation of the learning program is a set of phoneme symbols, which are the basic elements governing the past-tense inflection. Second, the learning program operates on those phoneme symbols directly, and the acquired knowledge can be represented in the form of production rules using those phoneme symbols as well. Third, those production rules at the phonological level can easily be further generalized into first-order rules that use more abstract, high-level symbolic categories such as morphemes and the verb stem (Ling & Marinov, 1993). In contrast, the connectionist models operate on a distributed representation (phonetic feature vectors), and the acquired knowledge is embedded in a large weight matrix; it is therefore hard to see how this knowledge can be further generalized into more abstract representations and categories." }, { "figure_ref": [], "heading": "The Architecture of the Symbolic Pattern Associator", "publication_ref": [ "b19", "b18", "b22", "b5", "b20", "b23", "b3", "b19" ], "table_ref": [], "text": "The SPA is based on C4.5 (Quinlan, 1993), which is an improved implementation of the ID3 learning algorithm (cf. Quinlan, 1986). ID3 is a program for inducing classification rules in the form of decision trees from a set of classified examples. It uses the information gain ratio as a criterion for selecting attributes as roots of the subtrees. The divide-and-conquer strategy is recursively applied in building subtrees until all remaining examples in the training set belong to a single concept (class); then a leaf is labeled as that concept. The information gain guides a greedy heuristic search for the locally most relevant or discriminating attribute that maximally reduces the entropy (randomness) in the divided set of the examples (a minimal sketch of this criterion is given below). 
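The entropy reduction driving this attribute selection is easy to state concretely. The sketch below is not from the original paper (the function names are our own); it computes the information gain of splitting a set of classified examples on one attribute. C4.5 additionally normalizes this gain by the split information to obtain the gain ratio mentioned above.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr):
    """Entropy reduction from splitting `examples` on attribute index `attr`.
    Each example is a (tuple_of_attribute_values, class_label) pair."""
    labels = [cls for _, cls in examples]
    before = entropy(labels)
    # Partition the examples by the value they take on `attr`.
    parts = {}
    for values, cls in examples:
        parts.setdefault(values[attr], []).append(cls)
    after = sum(len(p) / len(examples) * entropy(p) for p in parts.values())
    return before - after

# Toy check: the second attribute separates the two classes perfectly.
data = [(('a', 'x'), 'yes'), (('b', 'x'), 'yes'),
        (('a', 'y'), 'no'), (('b', 'y'), 'no')]
print(information_gain(data, 0))  # 0.0 -- attribute 0 is uninformative
print(information_gain(data, 1))  # 1.0 -- attribute 1 removes all entropy
```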
The use of this heuristic usually results in building small decision trees instead of larger ones that also fit the training data.\nIf the task is to learn to classify a set of different patterns into a single class of several mutually exclusive categories, ID3 has been shown to be comparable with neural networks (i.e., within about a 5% range on the predictive accuracy) on many real-world learning tasks (cf. Shavlik, Mooney, & Towell, 1991; Feng, King, Sutherland, & Henery, 1992; Ripley, 1992; Weiss & Kulikowski, 1991). However, if the task is to classify a set of (input) patterns into (output) patterns of many attributes, ID3 cannot be applied directly. The reason is that if ID3 treats the different output patterns as mutually exclusive classes, the number of classes would be exponentially large and, more importantly, any generalization of individual output attributes within the output patterns would be lost.\nTo turn ID3 or any similar N-to-1 classification system into a general-purpose N-to-M symbolic pattern associator, the SPA applies ID3 on all output attributes and combines the individual decision trees into a \"forest\", or set of trees. A similar approach was proposed for dealing with the distributed (binary) encoding in multiclass learning tasks such as NETtalk (English text-to-speech mapping) (Dietterich, Hild, & Bakiri, 1990). Each tree takes as input the set of all attributes in the input patterns, and is used to determine the value of one attribute in its output pattern. More specifically, if a pair of input attributes (α1 to αn) and output attributes (ω1 to ωm) is represented as:\nα1, ..., αn → ω1, ..., ωm\nthen the SPA will build a total of m decision trees, one for each output attribute ωi (1 ≤ i ≤ m), each taking all input attributes α1, ..., αn. Once all m trees are built, the SPA can use them jointly to determine the output pattern ω1, ..., ωm from any input pattern α1, ..., αn (a small sketch of this forest construction follows below).\nAn important feature of the SPA is explicit knowledge representation. Decision trees for output attributes can easily be transformed into propositional production rules (Quinlan, 1993). Since the entities of these rules are symbols with semantic meanings, the acquired knowledge often is comprehensible to the human observer. In addition, further processing and integration of these rules can yield high-level knowledge (e.g., rules using verb stems) (Ling & Marinov, 1993). Another feature of the SPA is that the trees for different output attributes contain identical components (branches and subtrees) (Ling & Marinov, 1993). These components have roles similar to hidden units in ANNs since they are shared in the decision trees of more than one output attribute. These identical components can also be viewed as high-level concepts or feature combinations created by the learning program." }, 
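A minimal rendition of this forest construction, using scikit-learn's decision trees as a stand-in for ID3 (the class name, the `pad` helper, and the toy verbs below are our own; unlike ID3, these trees split on ordinal integer codes rather than testing categorical equality):

```python
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

class SymbolicPatternAssociator:
    """One decision tree per output attribute: jointly they map an input
    pattern (alpha_1..alpha_n) to an output pattern (omega_1..omega_m)."""

    def fit(self, inputs, outputs):
        # Map phoneme letters to integer codes; unseen letters become -1.
        self.enc = OrdinalEncoder(handle_unknown="use_encoded_value",
                                  unknown_value=-1)
        X = self.enc.fit_transform(inputs)
        # Build m trees, one per output position, each seeing all n inputs.
        self.trees = [DecisionTreeClassifier().fit(X, [o[j] for o in outputs])
                      for j in range(len(outputs[0]))]
        return self

    def predict(self, inputs):
        X = self.enc.transform(inputs)
        cols = [t.predict(X) for t in self.trees]       # one column per tree
        return ["".join(word) for word in zip(*cols)]   # reassemble patterns

# Toy usage with '_'-padded spelling patterns (8 input and 10 output slots).
pad = lambda w, n=8: list(w) + ["_"] * (n - len(w))
stems = [pad("walk"), pad("turn"), pad("lift")]
pasts = [pad("walked", 10), pad("turned", 10), pad("lifted", 10)]
spa = SymbolicPatternAssociator().fit(stems, pasts)
print(spa.predict([pad("walk")]))   # ['walked____']
```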
{ "figure_ref": [], "heading": "Default Strategies", "publication_ref": [], "table_ref": [], "text": "An interesting research issue is how decision-tree learning algorithms handle the default class. A default class is the class to be assigned to leaves into which no training examples are classified. We call these leaves empty leaves. This happens when the attributes have many different values, or when the training set is relatively small. In these cases, during the tree construction, only a few branches are explored for some attributes. When the testing examples fall into the empty leaves, a default strategy is needed to assign classes to those empty leaves.\nFor easier understanding, we use the spelling form of verbs in this subsection to explain how different default strategies work. (In the actual learning experiment the verbs are represented in phonetic form.) If we use the consecutive left-to-right alphabetic representation, the verb stems and their past tenses of a small training set can be represented as follows:\na,f,f,o,r,d,_,_,_,_,_,_,_,_,_ => a,f,f,o,r,d,e,d,_,_,_,_,_,_,_\ne,a,t,_,_,_,_,_,_,_,_,_,_,_,_ => a,t,e,_,_,_,_,_,_,_,_,_,_,_,_\nl,a,u,n,c,h,_,_,_,_,_,_,_,_,_ => l,a,u,n,c,h,e,d,_,_,_,_,_,_,_\nl,e,a,v,e,_,_,_,_,_,_,_,_,_,_ => l,e,f,t,_,_,_,_,_,_,_,_,_,_,_\nwhere _ is used as a filler for empty space. The left-hand 15 columns are the input patterns for the stems of the verbs; the right-hand 15 columns are the output patterns for their corresponding correct past tense forms.\nAs we have discussed, 15 decision trees will be constructed, one for each output attribute. The decision tree for the first output attribute can be constructed (see Figure 1 (a)) from the following 4 examples:\na,f,f,o,r,d,_,_,_,_,_,_,_,_,_ => a\ne,a,t,_,_,_,_,_,_,_,_,_,_,_,_ => a\nl,a,u,n,c,h,_,_,_,_,_,_,_,_,_ => l\nl,e,a,v,e,_,_,_,_,_,_,_,_,_,_ => l\nwhere the last column is the classification of the first output attribute. However, many other branches (such as α1 = c in Figure 1 (a)) are not explored, since no training example has that attribute value. If a testing pattern has its first input attribute equal to c, what class should it be assigned to? ID3 uses the majority default. That is, the most popular class in the whole subtree under α1 is assigned to the empty leaves. In the example above, either class a or l will be chosen since they each have 2 training examples. However, this is clearly not the right strategy for this task, since a verb such as create would be output as l...... or a......, which is incorrect. Because it is unlikely for a small training set to have all variations of attribute values, the majority default strategy of ID3 is not appropriate for this task. For applications such as verb past-tense learning, a new default heuristic, passthrough, may be more suitable. That is, the classification of an empty leaf should be the same as the attribute value of that branch. For example, using the passthrough default strategy, create will be output as c....... The passthrough strategy gives decision trees some first-order flavor, since the production rules for empty leaves can be represented as: If Attribute = X then Class = X, where X can be any unused attribute value. Passthrough is a domain-dependent heuristic strategy because the class labels may have nothing to do with the attribute values in other applications.\nApplying the passthrough strategy alone, however, is not adequate for every output attribute. The endings of the regular past tenses are not identical to any of the input patterns, and the irregular verbs may have vowel and consonant changes in the middle of the verbs. In these cases, the majority default may be more suitable than the passthrough. In order to choose the right default strategy, majority or passthrough, a decision is made based upon the training data in the corresponding subtree. The SPA first determines the majority class, and counts the number of examples from all subtrees that belong to this class. It then counts the number of examples in the subtrees that coincide with the passthrough strategy. These two numbers are compared, and the default strategy employed by more examples is chosen. For instance, in the example above (see Figure 1 (a)), the majority class is l (or a), having 2 instances, whereas there are 3 examples coinciding with the passthrough default: two l and one a. Thus the passthrough strategy takes over, and assigns all empty leaves at this level. The empty attribute branch c would then be assigned the class c. Note that the default strategy for empty leaves of attribute X depends upon the training examples falling into the subtree rooted at X. This localized method ensures that only related objects have an influence on calculating default classes. As a result, the SPA can adapt the default strategy that is best suited at different levels of the decision trees. For example, in Figure 1 (b), two different default strategies are used at different levels in the same tree. We use the SPA with the adaptive default strategy throughout the remainder of this paper (a sketch of the adaptive choice is given below). 
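The adaptive choice just described can be stated in a few lines. In this sketch (our own naming, not the original implementation), `subtree` lists the (branch value, class label) pairs of the training examples that fell under one attribute node; the strategy backed by more of those examples wins and is used to label that node's empty branches.

```python
from collections import Counter

def choose_default(subtree):
    """Return the default rule for the empty branches under one node:
    ('majority', cls) or ('passthrough', None)."""
    counts = Counter(cls for _, cls in subtree)
    majority_class, majority_count = counts.most_common(1)[0]
    # Examples whose branch value equals their class support passthrough.
    passthrough_count = sum(1 for value, cls in subtree if value == cls)
    if passthrough_count > majority_count:
        return ("passthrough", None)       # empty branch X gets class X
    return ("majority", majority_class)

# The 4-verb example above: branches a, e, l, l with classes a, a, l, l.
print(choose_default([("a", "a"), ("e", "a"), ("l", "l"), ("l", "l")]))
# ('passthrough', None): 3 passthrough examples beat a majority of 2,
# so the unexplored branch c is labeled c.
```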
Note that the new default strategy is not a TRICS in the data representation; rather, it represents a bias of the learning program. Any learning algorithm has a default strategy independent of the data representation. The effect of different data representations on generalization is discussed in Sections 4.3, 4.5, and 4.6. The passthrough strategy can be imposed on ANNs as well by adding a set of copy connections between the input units and the twin output units. See Section 4.4 for detail." }, { "figure_ref": [], "heading": "Comparisons of Default Strategies of ID3, SPA, and ANN", "publication_ref": [], "table_ref": [], "text": "Which default strategy do neural networks tend to take in generalizing default classes when compared with ID3 and SPA? We conducted several experiments to determine neural networks' default strategy. We assume that the domain has only one attribute X, which may take values a, b, c, and d. The class also can be one of a, b, c, and d. The training examples have attribute values a, b, and c but not d; it is reserved for testing the default class. The training set contains multiple copies of the same example to form a certain majority class. Table 1 shows the two sets of training/testing examples that we used to test and compare default strategies.\nThe classification of the testing examples by ID3 and SPA is quite easy to decide. Since ID3 takes only the majority default, the output class is a (with 10 training examples) for the first data set, and c (with 17 training examples) for the second data set. For SPA, the number of examples using passthrough is 15 for the first data set, and 13 for the second data set. Therefore, the passthrough strategy wins in the first case with the output class d, and the majority strategy wins in the second case with the output class c.\nFor neural networks, various coding methods were used to represent values of the attribute X. In the dense coding, we used 00 to represent a, 01 for b, 10 for c and 11 for d. We also tried the standard one-per-class encoding, and real number encoding (0.2 for a, 0.4 for b, 0.6 for c and 0.8 for d). The networks were trained using as few hidden units as possible in each case. We found that in most cases the classification of the testing example is not stable; it varies with different random seeds that initialize the networks. Table 2 summarises the experimental results. For ANNs, the various classifications obtained by 20 different random seeds are listed, with the first ones occurring most frequently. It seems that not only do neural networks not have a consistent default strategy, but also that it is neither the majority default as in ID3 nor the passthrough default as in SPA. This may explain why connectionist models cannot generalize unseen regular verbs well even when the training set contains only regular verbs (see Section 4.4). The networks have difficulty (or are underconstrained) in generalizing the identity mapping that copies the attributes of the verb stems into the past tenses." }, { "figure_ref": [], "heading": "Head-to-head Comparisons between Symbolic and ANN Models", "publication_ref": [], "table_ref": [], "text": "In this section, we perform a series of extensive head-to-head comparisons using several different representations and encoding methods, and demonstrate that the SPA generalizes the past tense of unseen verbs better than ANN models do by a wide margin." }, { "figure_ref": [], "heading": "Format of the data", "publication_ref": [ "b11" ], "table_ref": [], "text": "Our verb set came from MacWhinney's original list of verbs. The set contains about 1400 stem/past tense pairs. Learning is based upon the phonological UNIBET representation (MacWhinney, 1990), in which different phonemes are represented by different alphabetic/numerical letters. There is a total of 36 phonemes. The source file is transferred into the standard format of pairs of input and output patterns. For example, the verbs in Table 3 are represented as pairs of input and output patterns (verb stem => past tense):\n6,b,&,n,d,6,n => 6,b,&,n,d,6,n,d\nI,k,s,E,l,6,r,e,t => I,k,s,E,l,6,r,e,t,I,d\n(The original verb set is available in Online Appendix 1.) We keep only one form of the past tense among multiple past tenses (such as hang-hanged and hang-hung) in the data set. In addition, no homophones exist in the original data set. Consequently, there is no noise (contradictory data which have the same input pattern but different output patterns) in the training and testing examples. Note also that information as to whether the verb is regular or irregular is not provided in the training/testing processes." }, { "figure_ref": [], "heading": "Experiment Setup", "publication_ref": [], "table_ref": [], "text": "To guarantee unbiased and reliable comparison results, we use training and testing samples randomly drawn in several independent runs. Both SPA and ANN are provided with the same sets of training/testing examples for each run. This allows us to achieve a reliable estimate of the inductive generalization capabilities of each model on this task.\nThe neural network program we used is a package called Xerion, which was developed at the University of Toronto. It has several more sophisticated search mechanisms than the standard steepest gradient descent method with momentum. 
We found that training with the conjugate-gradient method is much faster than with the standard backpropagation algorithm. Using the conjugate-gradient method also avoids the need to search for proper settings of parameters such as the learning rate. However, we do need to determine the proper number of hidden units. In the experiments with ANNs, we first tried various numbers of hidden units and chose the one that produced the best predictive accuracy in a trial run, and then used the network with that number of hidden units in the actual runs. The SPA, on the other hand, has no parameters to adjust.\nOne major difference in implementation between ANNs and SPA is that SPA can take (symbolic) phoneme letters directly, while ANNs normally encode each phoneme letter as binary bits. (Of course, SPA also can apply to the binary representation.) We studied various binary encoding methods and compared results with SPA using the symbolic letter representation. Since the outputs of neural networks are real numbers, we need to decode the network outputs back to phoneme letters. We used the standard method of decoding: the phoneme letter that has the minimal real-number Hamming distance (smallest angle) with the network outputs was chosen (a sketch of this decoding step is given below). To see how binary encoding affects the generalization, the SPA was also trained with the binary representation. Since the SPA's outputs are binary, the decoding process may tie with several phoneme letters. In this case, one of them is chosen randomly. This reflects the probability of the correct decoding at the level of phoneme letters. When all of the phoneme letters are decoded, if one or more letters are incorrect, the whole pattern is counted as incorrect at the word level." }, 
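Concretely, the decoding step can be sketched as follows (not the paper's code; the 3-bit codebook is a made-up illustration). Each block of real-valued outputs is mapped to the phoneme whose binary code lies nearest, which is what choosing the minimal real-number Hamming distance amounts to:

```python
import numpy as np

def decode(blocks, codebook):
    """Map each block of real-valued network outputs to the phoneme letter
    whose binary code is nearest (smallest squared distance)."""
    letters = list(codebook)
    codes = np.array([codebook[l] for l in letters], dtype=float)
    word = []
    for block in blocks:                      # one block per output slot
        dists = ((codes - np.asarray(block, dtype=float)) ** 2).sum(axis=1)
        word.append(letters[int(dists.argmin())])
    return "".join(word)

# Hypothetical 3-bit codes for a tiny 4-letter alphabet ('_' is blank).
codebook = {"b": [0, 0, 1], "E": [0, 1, 0], "t": [1, 0, 0], "_": [0, 0, 0]}
net_out = [[0.1, 0.2, 0.9], [0.2, 0.8, 0.1], [0.7, 0.1, 0.3]]
print(decode(net_out, codebook))   # 'bEt'
```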
{ "figure_ref": [ "fig_5" ], "heading": "Templated, Distributed Representation", "publication_ref": [ "b13" ], "table_ref": [ "tab_8" ], "text": "This set of experiments was conducted using the distributed representation suggested by MacWhinney and Leinbach (1991). According to MacWhinney and Leinbach, the output is a left-justified template in the format of CCCVVCCCVVCCCVVCCC, where C stands for consonant and V for vowel space holders. The input has two components: a left-justified template in the same format as the output, and a right-justified template in the format of VVCCC. For example, the verb bet, represented in UNIBET coding as bEt, fills the consonant and vowel slots of the left-justified template as b__E_t____________ (_ is the blank phoneme). A specific distributed representation, a set of (binary) phonetic features, is used to encode all phoneme letters for the connectionist networks. Each vowel (V in the above templates) is encoded by 8 phonetic features (front, centre, back, high, low, middle, round, and diphthong) and each consonant (C in the above templates) by 10 phonetic features (voiced, labial, dental, palatal, velar, nasal, liquid, trill, fricative and interdental). Note that because the two feature sets of vowels and consonants are not identical, templates are needed in order to decode the right type of the phoneme letters from the outputs of the network.\nIn our experimental comparison, we decided not to use the right-justified template (VVCCC) since this information is redundant. Therefore, we used only the left-justified template (CCCVVCCCVVCCCVVCCC) in both input and output. (The whole verb set in the templated phoneme representation is available in Online Appendix 1. It contains 1320 pairs of verb stems and past tenses that fit the template.) To ease implementation, we added two extra features that always were assigned to 0 in the vowel phonetic feature set. Therefore, both vowels and consonants were encoded by 10 binary bits. The ANN thus had 18 x 10 = 180 input bits and 180 output bits, and we found that one layer of 200 hidden units (the same as in MacWhinney's (1993) model) reached the highest predictive accuracy in a trial run. See Figure 2 for the network architecture used. The SPA was trained and tested on the same data sets but with phoneme letters directly; that is, 18 decision trees were built, one for each of the phoneme letters in the output templates. To see how the phonetic feature encoding affects the generalization, we also trained the SPA with the same distributed representation (binary bit patterns of 180 input bits and 180 output bits), exactly the same as those in the ANN simulation. In addition, to see how the \"symbolic\" encoding works in ANN, we also trained another neural network (with 120 hidden units) with the \"one-per-class\" encoding. That is, each phoneme letter (a total of 37: 36 phoneme letters plus one for blank) is encoded by 37 bits, one for each phoneme letter. We used 500 verb pairs (including both regular and irregular verbs) in the training and testing sets. Sampling was done randomly without replacement, and training and testing sets were disjoint. Three runs of SPA and ANN were conducted, and both SPA and ANN were trained and tested on the same data set in each run. Training reached 100% accuracy for SPA and around 99% for ANN.\nTesting accuracy on novel verbs produced some interesting results. The ANN model and the SPA using the distributed representation have very similar accuracy, with ANN slightly better. This may well be caused by the binary outputs of SPA that suppress the fine differences in prediction. On the other hand, the SPA using phoneme letters directly produces much higher accuracy on testing. The SPA outperforms neural networks (with either distributed or one-per-class representations) by 20 percentage points! The testing results of ANN and SPA can be found in Table 4. Our findings clearly indicate that the SPA using symbolic representation leads to much better generalization than ANN models. (A sketch of the template-fitting step follows below.)" }, 
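The template-fitting step itself can be sketched as a greedy scan over the consonant/vowel slots (our reconstruction, not the original code; the vowel inventory below is a simplified stand-in for the full UNIBET one). A phoneme is placed in the next unused slot of its own kind, and words that run out of matching slots are dropped, as described above:

```python
TEMPLATE = "CCCVVCCCVVCCCVVCCC"
VOWELS = set("aeiouAEIOU63&@")   # simplified stand-in for UNIBET vowels

def fit_template(phonemes, template=TEMPLATE):
    """Left-justify a phoneme string into the C/V template, '_' elsewhere.
    Returns None when the word does not fit (such verbs were removed)."""
    slots = ["_"] * len(template)
    i = 0                                    # next template position to try
    for p in phonemes:
        kind = "V" if p in VOWELS else "C"
        while i < len(template) and template[i] != kind:
            i += 1                           # skip slots of the wrong kind
        if i == len(template):
            return None                      # no room left for this phoneme
        slots[i] = p
        i += 1
    return "".join(slots)

print(fit_template("bEt"))        # b__E_t____________
print(fit_template("6b&nd6n"))    # ___6_b__&_nd_6_n__
```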
{ "figure_ref": [], "heading": "Learning Regular Verbs", "publication_ref": [ "b17", "b14", "b17", "b17" ], "table_ref": [], "text": "Predicting the past tense of an unseen verb, which can be either regular or irregular, is not an easy task. Irregular verbs are not learned by rote as traditionally thought, since children and adults occasionally extend irregular inflection to irregular-sounding regular verbs or pseudo-verbs (such as cleef -> cleft) (Prasada & Pinker, 1993). The more similar the novel verb is to the cluster of irregular verbs with similar phonological patterns, the more likely the prediction of an irregular past-tense form. Pinker (1991) and Prasada and Pinker (1993) argue that regular past tenses are governed by rules, while irregulars may be generated by the associative memory which has this graded effect of irregular past-tense generalization. It would be interesting, therefore, to compare SPA and ANN on the past-tense generalization of regular verbs only. Because both SPA and ANN use the same, position-specific, representation, learning regular past tenses requires learning different suffixes at different positions, and learning the identity mapping that copies the verb stem to the past tenses for verbs of different lengths. We used the same templated representation as in the previous section, but both training and testing sets contained only regular verbs. Again samples were drawn randomly without replacement. To maximize the size of the testing sets, testing sets simply consisted of all regular verbs that were not sampled in the training sets. The same training and testing sets were used for each of the following methods compared. To see the effect of the adaptive default strategy (as discussed in Section 3.2) on generalization, the SPA with the majority default only and with the adaptive default were both tested. The ANN models were similar to those used in the previous section (except with 160 one-layer hidden units, which turned out to have the best predictive accuracy in a test run). The passthrough default strategy can be imposed on neural networks by adding a set of copy connections that connect directly from the input units to the twin output units. MacWhinney and Leinbach (1991) used such copy connections in their simulation. We therefore tested the networks with the copy connections to see if generalization would be improved as well.\nThe results on the predictive accuracy of the SPA and ANNs on one run with randomly sampled training and testing sets are summarized in Table 5. As we can see, the SPA with the adaptive default strategy, which combines the majority and passthrough defaults, outperforms the SPA with only the majority default strategy used in ID3. The ANNs with copy connections do generalize better than the ones without. However, even ANN models with copy connections have a lower predictive accuracy than the SPA (majority). In addition, the differences in the predictive accuracy are larger with smaller sets of training examples. Smaller training sets make the difference in testing accuracy more evident. When the training set contains 1000 patterns (out of 1184), the testing accuracy becomes very similar, and would approach 100% asymptotically with larger training sets. Upon examination, most of the errors made in ANN models occur in the identity mapping (i.e., strange phoneme changes and drops); the verb stems cannot be preserved in the past tense if the phonemes were not previously seen in the training examples. This contradicts the findings of Prasada and Pinker (1993), which show that native English speakers generate regular suffix-adding past tenses equally well with unfamiliar-sounding verb stems (as long as these verb stems do not sound close to irregular verbs). This also indicates that the bias of the ANN learning algorithms is not suitable to this type of task. See further discussion in Section 5." }, { "figure_ref": [], "heading": "Error-Correcting Codes", "publication_ref": [ "b2", "b2", "b2", "b13", "b12" ], "table_ref": [ "tab_10", "tab_3", "tab_12" ], "text": "Dietterich and Bakiri (1991) reported an increase in the predictive accuracy when error-correcting codes of large Hamming distances are used to encode the values of the attributes. This is because codes with pairwise Hamming distance d allow for correcting fewer than d/2 bit errors. Thus, learning programs are allowed to make some mistakes at the bit level without their outputs being misinterpreted at the word level. We wanted to find out whether the performances of the SPA and ANNs are improved with error-correcting codes encoding all of the 36 phonemes. We chose error-correcting codes ranging from ones with small Hamming distance to ones with very large Hamming distance (using the BCH codes; see Dietterich and Bakiri (1991)). 
Because the number of attributes for each phoneme is too large, the data representation was changed slightly for this experiment. Instead of 18 phoneme holders with templates, 8 consecutive, left-to-right phoneme holders were used. Verbs with stems or past tenses of more than 8 phonemes were removed from the training/testing sets. (The whole verb set in this representation is available in Online Appendix 1. It contains 1225 pairs of verb stems and past tenses whose lengths are shorter than 8.) Both SPA and ANN take exactly the same training/testing sets, each containing 500 pairs of verb stems and past tenses, with the error-correcting codes encoding each phoneme letter. Still, training networks with 92-bit or longer error-correcting codes takes too long to run (there are 8 x 92 = 736 input attributes and 736 output attributes). Therefore, only two runs, with 23- and 46-bit codes, were conducted for the ANNs. Consistent with Dietterich and Bakiri (1991)'s findings, we found that the testing accuracy generally increases when the Hamming distance increases. However, we also observed that the testing accuracy decreases very slightly when the codes become too long. The accuracy using 46-bit codes (with a Hamming distance of 20) reaches the maximum value (77.2%), which is quite close to the accuracy (78.3%) of SPA using the direct phoneme letter representation. It seems there is a trade-off between tolerance of errors with large Hamming distance and difficulty in learning with longer codes. In addition, we found the testing accuracy of ANNs to be lower than that of SPA for both 23-bit and 46-bit error-correcting codes. The results are summarized in Table 6 (a sketch of the encode/decode step follows below).\nOur results in this and the previous two subsections undermine the advantages of the distributed representation of ANNs, a unique feature advocated by connectionists. We have demonstrated that, in this task, the distributed representation actually does not allow for adequate generalization. Both the SPA using direct symbolic phoneme letters and the SPA with error-correcting codes outperform ANNs with the distributed representation by a wide margin. However, neither phoneme symbols nor bits in the error-correcting codes encode, implicitly or explicitly, any micro-features as in the distributed representation. It may be that the distributed representation used was not optimally designed. Nevertheless, the straightforward symbolic format requires little representation engineering compared with the distributed representation in ANNs." }, 
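As a rough illustration of the encode/decode cycle (our own sketch; random codewords stand in for the BCH codes actually used, though long random codes also tend to have large pairwise Hamming distance), decoding picks the nearest codeword, so up to (d-1)/2 flipped bits are corrected:

```python
import numpy as np

def make_codebook(n_classes, n_bits, seed=0):
    """Random binary codewords, one per phoneme class."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=(n_classes, n_bits))

def ecoc_decode(bits, codebook):
    """Return the class whose codeword is nearest in Hamming distance."""
    dists = (codebook != bits).sum(axis=1)
    return int(dists.argmin())

codebook = make_codebook(n_classes=36, n_bits=46)
noisy = codebook[7].copy()
noisy[:5] ^= 1                       # the learner gets 5 bits wrong
print(ecoc_decode(noisy, codebook))  # 7 -- still decoded correctly here
```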
{ "figure_ref": [], "heading": "Right-justified, Isolated Suffix Representation", "publication_ref": [], "table_ref": [], "text": "MacWhinney and Leinbach (1991) did not report important results on the predictive accuracy of their model on unseen regular verbs. In his reply (MacWhinney, 1993) to our paper (Ling & Marinov, 1993), MacWhinney re-implemented the ANN model. In his new implementation, 1,200 verb stem and past-tense pairs were in the training set, among which 1081 were regular and 119 were irregular. Training took 4,200 epochs, and reached 100% correct on regulars and 80% on irregulars. The testing set consisted of 87 regulars and 15 irregulars. The percent correct on testing at epoch 4,200 was 91% for regulars and 27% for irregulars, with a combined 80.0% on the testing set. MacWhinney claimed that the raw generalization power of the ANN model is very close to that of our SPA. He believes that this should be the case simply because both systems were trained on the same data set.\nWe realize (via private communication) that a new representation used in MacWhinney's recent implementation plays a critical role in the improved performance. In MacWhinney's new representation, the input (for verb stems) is coded by the right-justified template CCCVVCCCVVCCCVVCCC. The output contains two parts: a right-justified template that is the same as the one in the input, and a coda in the form of VVCCC. The right-justified template in the output is used to represent the past tense without including the suffix for the regular verbs. The suffix of the regular past tense always stays in the coda, which is isolated from the main, right-justified templates. For the irregular past tense, the coda is left empty. The input and output templated patterns for the past tenses of the verbs in Table 3 are reproduced in the table data accompanying Table 7 below. Such a data representation clearly facilitates learning. For the regular verbs, the output patterns are always identical to the input patterns. In addition, the verb-ending phoneme letters always appear at a few fixed positions (i.e., the rightmost VVCCC section in the input template) due to the right-justified, templated representation. Furthermore, the suffix always occupies the coda, isolated from the right-justified templates.\nWe performed a series of experiments to see how much improvement we could accomplish using the new representation over MacWhinney's recent ANN model and over the left-justified representation discussed in Section 4.3. Our SPA (with an averaged predictive accuracy of 89.0%) outperforms MacWhinney's recent ANN implementation (with a predictive accuracy of 80.0%) by a wide margin. In addition, the predictive accuracy is also improved from an average of 76.3% with the left-justified representation to 82.8% with the right-justified, isolated suffix one. See the results in Table 7. (A sketch of the suffix-isolation step follows below.)" }, 
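The suffix-isolation step is simple to sketch (again our own reconstruction, with the UNIBET suffix spellings assumed to be Id, d, and t): if the past tense is the stem plus a regular suffix, the body of the output is just the stem and the suffix is slotted into the VVCCC coda; otherwise the coda stays empty.

```python
VOWELS = set("aeiouAEIOU63&@")       # simplified UNIBET vowel stand-in
SUFFIXES = ("Id", "d", "t")          # assumed regular past-tense suffixes

def fit_coda(suffix):
    """Place a suffix into the VVCCC coda (vowel slots, then consonants)."""
    coda, template, i = ["_"] * 5, "VVCCC", 0
    for p in suffix:
        kind = "V" if p in VOWELS else "C"
        while template[i] != kind:
            i += 1
        coda[i] = p
        i += 1
    return "".join(coda)

def encode_output(stem, past):
    """Return (output body, coda) for one stem/past pair."""
    for s in SUFFIXES:
        if past == stem + s:              # regular verb: isolate the suffix
            return stem, fit_coda(s)
    return past, "_____"                  # irregular verb: empty coda

print(encode_output("bEn6fIt", "bEn6fItId"))  # ('bEn6fIt', 'I_d__')
print(encode_output("6r3z", "6roz"))          # ('6roz', '_____')
```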
{ "figure_ref": [], "heading": "General Discussion and Conclusions", "publication_ref": [ "b17", "b7" ], "table_ref": [], "text": "Two factors contribute to the generalization ability of a learning program. The first is the data representation, and the other is the bias of the learning program. Arriving at the right, optimal representation is a difficult task. As argued by Prasada and Pinker (1993), regular verbs should be represented at a coarse grain in terms of the verb stem and suffixes, while irregular verbs should be represented at a finer grain in terms of phonological properties. Admittedly, SPA works uniformly at the level of phoneme letters, as ANNs do. However, because SPA produces simple production rules that use these phoneme letters directly, those rules can be further generalized to first-order rules with new representations such as stems and the voiced consonants, which can be used across the board in other such rule-learning modules (Ling & Marinov, 1993). This is one of the major advantages over ANN models. It seems quite conceivable that children acquire these high-level concepts such as stems and voiced consonants through learning noun plurals, verb past tense, verb third-person singular, comparative adjectives, and so on. With a large weight matrix as the result of learning, it is hard to see how this knowledge can be further generalized in ANN models and shared in other modules.\nEven with exactly the same data representation, there exist some learning tasks on which symbolic methods such as the SPA generalize categorically better than ANNs. The converse also is true. This fact reflects the different inductive biases of the different learning algorithms. The Occam's Razor Principle (preferring the simplest hypothesis over more complex ones) creates a preference bias, a preference for choosing certain hypotheses over others in the hypothesis space. However, different learning algorithms choose different hypotheses because they use different measurements for simplicity. For example, among all possible decision trees that fit the training examples, ID3 and SPA induce simple decision trees instead of complicated ones. Simple decision trees can be converted to small sets of production rules. How well a learning algorithm generalizes depends upon the degree to which the underlying regularities of the target concept fit its bias. In other words, if the underlying regularities can be represented compactly in the format of hypotheses produced by the learning algorithm, the data can be generalized well, even with a small set of training examples. Otherwise, if the underlying regularities only have a large hypothesis, but the algorithm is looking for compact ones (as per the Occam's Razor Principle), the hypotheses inferred will not be accurate. A learning algorithm that searches for hypotheses larger than necessary (i.e., one that does not use the Occam's Razor Principle) is normally \"underconstrained\"; it does not know, based on the training examples only, which of the many competitive hypotheses of the large size should be inferred.\nWe also can describe the bias of a learning algorithm by looking at how training examples of different classes are separated in the n-dimensional hyperspace, where n is the number of attributes. A decision node in a decision tree forms a hyperplane as described by a linear function such as X = a. Not only are these hyperplanes perpendicular to the axis, they are also partial-space hyperplanes that extend only within the subregion formed by the hyperplanes of their parents' nodes. Likewise, hidden units with a threshold function in ANNs can be viewed as forming hyperplanes in the hyperspace. However, unlike the ones in the decision trees, they need not be perpendicular to any axis, and they are full-space hyperplanes that extend through the whole space. If ID3 is applied to concepts that fit ANN's bias, especially if their hyperplanes are not perpendicular to any axis, then many zigzag hyperplanes that are perpendicular to axes would be needed to separate the different classes of examples. Hence, a large decision tree would be needed, but this does not fit ID3's bias. Similarly, if ANN learning algorithms are applied to concepts that fit ID3's bias, especially if their hyperplanes form many separated, partial-space regions, then many hidden units may be needed for these regions.\nAnother major difference between ANNs and ID3 is that ANNs have a larger variation and a weaker bias (cf. Geman, Bienenstock, & Doursat, 1992) than ID3. Many more Boolean functions (e.g., linearly separable functions) can fit a small network (e.g., one with no hidden units) than can fit a small decision tree. This is sometimes attributed to the claimed versatility and flexibility of ANNs; they can learn (but not necessarily predict reliably well) many functions, while symbolic methods are brittle. However, it is my belief that we humans are versatile not because we have a learning algorithm with a large variation, but rather because we have a set of strongly biased learning algorithms, and we can somehow search in the bias space and add new members into the set for new learning tasks. Symbolic learning algorithms have clear semantic components and explicit representation, and thus we can more easily construct strongly biased algorithms motivated by various specific learning tasks. The adaptive default strategy in the SPA is such an example. On the other hand, we still largely do not know how to effectively strengthen the bias of ANNs for many specific tasks (such as the identity mapping, k-term DNF, etc.). Some techniques (such as adding copy connections and weight decay) exist, but their exact effects on biasing towards classes of functions are not clear.\nFrom our analyses (Ling & Marinov, 1993), the underlying regularities governing the inflection of the past tense of English verbs do form a small set of production rules with phoneme letters. This is especially so for regular verbs; all the rules are either identity rules or suffix-adding rules. For example, decision trees can be converted into a set of precedence-ordered production rules with more complicated rules (rules with more conditions) listed first. As an example, using the consecutive, left-to-right phonetic representation, a typical suffix-adding rule for verb stems with 4 phoneme letters (such as talk -> talked) is:\nIf α4 = k and α5 = _, then ω5 = t\nThat is, if the fourth input phoneme is k and the fifth is blank (i.e., if we are at a verb ending), then the fifth output phoneme is t. On the other hand, the identity-mapping rules have only one condition. A typical identity rule looks like: If α3 = l, then ω3 = l. In fact, the passthrough default strategy allows all of the identity-mapping rules to be represented in a simple first-order format: If α3 = X, then ω3 = X, where X can be any phoneme. (A sketch of extracting such rules from a learned tree is given below.) 
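To make the tree-to-rules conversion concrete, here is a small sketch (ours, not the paper's): it walks a fitted scikit-learn tree and prints one production rule per leaf. These trees test numeric thresholds over integer phoneme codes rather than ID3's categorical equality, so the printed conditions are thresholds, but the structure is the same.

```python
from sklearn.tree import DecisionTreeClassifier

def tree_to_rules(clf, feature_names):
    """Print one 'If ... then class = ...' rule per leaf of a fitted tree."""
    t = clf.tree_
    def walk(node, conds):
        if t.children_left[node] == -1:                 # -1 marks a leaf
            cls = clf.classes_[t.value[node][0].argmax()]
            print("If " + " and ".join(conds or ["True"]) +
                  f", then class = {cls}")
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], conds + [f"{name} <= {thr:.1f}"])
        walk(t.children_right[node], conds + [f"{name} > {thr:.1f}"])
    walk(0, [])

# Tiny demo of the suffix-adding rule: omega5 is t exactly when the 4th
# input phoneme is k (code 1) and the 5th is blank (code 0).
X = [[1, 0], [1, 2], [2, 0], [2, 2]]        # (alpha4, alpha5) as codes
y = ["t", "x", "x", "x"]                    # omega5
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
tree_to_rules(clf, ["alpha4", "alpha5"])
```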
Clearly, the knowledge of forming regular past tenses can thus be expressed in simple, conjunctive rules which fit the bias of the SPA (ID3), and therefore the SPA has a much better generalization ability than the ANN models.\nTo conclude, we have demonstrated, via extensive head-to-head comparisons, that the SPA has a more realistic and better generalization capacity than ANNs on learning the past tense of English verbs. We have argued that symbolic decision-tree/production-rule learning algorithms should outperform ANNs on this task. This is because, first, the domain seems to be governed by a compact set of rules, and thus fits the bias of our symbolic learning algorithm; second, the SPA directly manipulates a better representation than ANNs do (i.e., the symbolic phoneme letters vs. the distributed representation); and third, the SPA is able to derive high-level concepts used throughout English morphology. Our results support the view that many such high-level, rule-governed cognitive tasks should be better modeled by symbolic, rather than connectionist, systems." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "I gratefully thank Steve Pinker for his constant encouragement, and Marin Marinov, Steve Cherwenka and Huaqing Zeng for discussions and for help in implementing the SPA. I thank Brian MacWhinney for providing the verb data used in his simulation. Discussions with Tom Dietterich, Dave Touretzky and Brian MacWhinney, as well as comments from reviewers, have been very helpful. The research is conducted with support from an NSERC Research Grant and computing facilities from our Department." } ]
[ { "authors": "G Cottrell; K Plunkett", "journal": "", "ref_id": "b0", "title": "Using a recurrent net to learn the past tense", "year": "1991" }, { "authors": "K Daugherty; M Seidenberg", "journal": "John Benjamins", "ref_id": "b1", "title": "Beyond rules and exceptions: A connectionist modeling approach to in ectional morphology", "year": "1993" }, { "authors": "T Dietterich; G Bakiri", "journal": "", "ref_id": "b2", "title": "Error-correcting output codes: A general method for improving multiclass inductive learning programs", "year": "1991" }, { "authors": "T Dietterich; H Hild; G Bakiri", "journal": "", "ref_id": "b3", "title": "A comparative study of ID3 and backpropagation for English text-to-speech mapping", "year": "1990" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b4", "title": "", "year": "" }, { "authors": "C Feng; R King; A Sutherland; R Henery", "journal": "", "ref_id": "b5", "title": "Comparison of symbolic, statistical and neural network classi ers", "year": "1992" }, { "authors": "J Fodor; Z Pylyshyn", "journal": "MIT Press", "ref_id": "b6", "title": "Connectionism and cognitive architecture: A critical analysis", "year": "1988" }, { "authors": "S Geman; E Bienenstock; R Doursat", "journal": "Neural Computation", "ref_id": "b7", "title": "Neural networks and the bias/variance dilemma", "year": "1992" }, { "authors": "J Lachter; T Bever", "journal": "MIT Press", "ref_id": "b8", "title": "The relation between linguistic structure and associative theories of language learning { a constructive critique of some connectionist learning models", "year": "1988" }, { "authors": "X Ling; S Cherwenka; M Marinov", "journal": "Morgan Kaufmann Publishers", "ref_id": "b9", "title": "A symbolic model for learning the past tenses of English verbs", "year": "1993" }, { "authors": "X Ling; M Marinov", "journal": "Cognition", "ref_id": "b10", "title": "Answering the connectionist challenge: a symbolic model of learning the past tense of English verbs", "year": "1993" }, { "authors": "B Macwhinney", "journal": "Erlbaum", "ref_id": "b11", "title": "The CHILDES Project: Tools for Analyzing Talk", "year": "1990" }, { "authors": "B Macwhinney", "journal": "Cognition", "ref_id": "b12", "title": "Connections and symbols: closing the gap", "year": "1993" }, { "authors": "B Macwhinney; J Leinbach", "journal": "Cognition", "ref_id": "b13", "title": "Implementations are not conceptualizations: Revising the verb model", "year": "1991" }, { "authors": "S Pinker", "journal": "Science", "ref_id": "b14", "title": "Rules of language", "year": "1991" }, { "authors": "S Pinker; A Prince", "journal": "MIT Press", "ref_id": "b15", "title": "On language and connectionism: Analysis of a parallel distributed processing model of language acquisition", "year": "1988" }, { "authors": "K Plunkett; V Marchman", "journal": "Cognition", "ref_id": "b16", "title": "U-shaped learning and frequency e ects in a multilayered perceptron: Implications for child language acquisition", "year": "1991" }, { "authors": "S Prasada; S Pinker", "journal": "Language and Cognitive Processes", "ref_id": "b17", "title": "Generalization of regular and irregular morphological patterns", "year": "1993" }, { "authors": "J Quinlan", "journal": "Machine Learning", "ref_id": "b18", "title": "Induction of decision trees", "year": "1986" }, { "authors": "J Quinlan", "journal": "Morgan Kaufmann", "ref_id": "b19", "title": "C4.5 Programs for Machine Learning", "year": "1993" }, { "authors": "B Ripley", "journal": "", "ref_id": "b20", 
"title": "Statistical aspects of neural networks", "year": "1992-04-30" }, { "authors": "D Rumelhart; J Mcclelland", "journal": "MIT Press", "ref_id": "b21", "title": "On learning the past tenses of English verbs", "year": "1986" }, { "authors": "J Shavlik; R Mooney; G Towell", "journal": "Machine Learning", "ref_id": "b22", "title": "Symbolic and neural learning algorithms: An experimental comparison", "year": "1991" }, { "authors": "S Weiss; C Kulikowski", "journal": "", "ref_id": "b23", "title": "Computer Systems that Learn: classi cation and prediction methods from statistics, neural networks, machine learning, and expert systems", "year": "1991" }, { "authors": "Morgan Kaufmann; San Mateo; Ca", "journal": "", "ref_id": "b24", "title": "", "year": "" } ]
[ { "formula_coordinates": [ 7, 107.28, 515.52, 234.89, 9.6 ], "formula_id": "formula_0", "formula_text": "a,f,f,o,r,d,_,_,_,_,_,_,_,_,_ => a,f,f,o," } ]
Learning the Past Tense of English Verbs: The Symbolic Pattern Associator vs. Connectionist Models
Learning the past tense of English verbs, a seemingly minor aspect of language acquisition, has generated heated debates since 1986, and has become a landmark task for testing the adequacy of cognitive modeling. Several artificial neural networks (ANNs) have been implemented, and a challenge for better symbolic models has been posed. In this paper, we present a general-purpose Symbolic Pattern Associator (SPA) based upon the decision-tree learning algorithm ID3. We conduct extensive head-to-head comparisons of the generalization ability of ANN models and the SPA under different representations. We conclude that the SPA generalizes the past tense of unseen verbs better than ANN models by a wide margin, and we offer insights as to why this should be the case. We also discuss a new default strategy for decision-tree learning algorithms.
Charles X. Ling
[ { "figure_caption": "Rumelhart and McClelland's model is based on a simple perceptron-based pattern associator interfaced with an input/output encoding/decoding network which allows the model to associate verb stems with their past tenses using a special Wickelphone/Wickelfeature phoneme-representation format. The learning algorithm is the classical perceptron convergence procedure. The training and the testing sets are mutually disjoint in the experiments. The errors made by the model during the training process broadly follow the U-shaped learning curve in the stages of acquisition of the English past tense exhibited by young children.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(a)) from the following 4 examples: Ling l,e,a,v,e,_,_,_,_,_,_,_,_,_,_ => l", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1: (a) Passthrough default (b) Various default", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Default Strategies of ID3, SPA, and ANN Which default strategy do neural networks tend to take in generalizing default classes when compared with ID3 and SPA? We conducted several experiments to determine neural networks' default strategy. We assume that the domain has only one attribute X which may take values a, b, c, and d. The class also can be one of the a, b, c, and d. The training examples have attribute values a, b, and c but not d | it is reserved for testing the default class. The training set contains multiple copies of the same example to form a certain majority class.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The architecture of the network used in the experiment.", "figure_data": "", "figure_id": "fig_5", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Table 1 shows two sets of training/testing examples that we used to test and compare default strategies of ID3, SPA and neural networks. Two data sets for testing default strategies of various methods.The classi cation of the testing examples by ID3 and SPA is quite easy to decide. Since ID3 takes only the majority default, the output class is a (with 10 training examples) for the rst data set, and c (with 17 training examples) for the second data set. For SPA, the number of examples using passthrough is 15 for the rst data set, and 13 for the second data set. Therefore, the passthrough strategy wins in the rst case with the output class d, and the majority strategy wins in the second case with the output class c.", "figure_data": "Data set 1 Training examples Values of X Class # of copies Values of X Class # of copies Data set 2 Training examples a a 10 a c 10 b b 2 b b 6 c c 3 c c 7 Testing example Testing example d ? 1 d ? 1", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Table2summarises the experimental results. For ANNs, various classi cations obtained by 20 di erent random seeds are listed with the rst ones occurring most frequently. It seems that not only do neural networks not have a consistent default strategy, but also that it is neither the majority default as in ID3 nor the passthrough default as in SPA. This may explain why connectionist models cannot generalize unseen regular verbs well even when the training set contains only regular verbs (see Section 4.4). 
The networks have difficulty (or are underconstrained) in generalizing the identity mapping that copies the attributes of the verb stems into the past tenses.", "figure_data": "The classification for the testing example   Data set 1   Data set 2   ID3", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Default strategies of ID3, SPA and ANN on two data sets.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "are represented as pairs of input and output patterns (verb stem => past tense):
6,b,&,n,d,6,n => 6,b,&,n,d,6,n,d
I,k,s,E,l,6,r,e,t => I,k,s,E,l,6,r,e,t,I,d", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The original verb set is available in Online Appendix 1). We keep only one form of the past tense among multiple past tenses (such as hang-hanged and hang-hung) in the data set. In addition, no homophones exist in the original data set. Consequently, there is no noise (contradictory data which have the same input pattern but different output patterns) in the training and testing examples. Note also that information as to whether the verb is regular or irregular is not provided in training/testing processes.", "figure_data": "
spelling form         phonetic form (UNIBET)   b = base         0 = regular
(base, past tense)                             d = past tense   1 = irregular
abandon   abandoned   6b&nd6n   6b&nd6nd       b  d             0  0
benefit   benefited   bEn6fIt   bEn6fItId      b  d             0  0
arise     arose       6r3z      6roz           b  d             0  1
become    became      bIk6m     bIkem          b  d             0  1
...
", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Source file from MacWhinney and Leinbach.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparisons of testing accuracy of SPA and ANN with distributed and symbolic representations.", "figure_data": "
         Distributed representation           Symbolic representation
         ANN: % Correct   SPA: % Correct      ANN: % Correct   SPA: % Correct
         Reg  Irrg Comb   Reg  Irrg Comb      Reg  Irrg Comb   Reg  Irrg Comb
Run 1    65.3 14.6 60.4   62.2 18.8 58.0      63.3 18.8 59.2   83.0 29.2 77.8
Run 2    59.7  8.6 53.8   57.9  8.2 52.2      58.8 10.3 53.2   83.3 22.4 76.2
Run 3    60.0 16.0 55.6   58.0  8.0 53.0      58.7 16.0 54.4   80.9 20.0 74.8
Average  61.7 13.1 56.6   59.4 11.7 54.4      60.3 15.0 55.6   82.4 23.9 76.3
", "figure_id": "tab_8", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "
Comparisons of the testing accuracy of SPA and ANNs with error-correcting codes.", "figure_data": "
ANN             Hamming Distance   Correct at bit level   Correct at word level
23-bit codes           10                 93.5%                  65.6%
46-bit codes           20                 94.1%                  67.4%

SPA             Hamming Distance   Correct at bit level   Correct at word level
23-bit codes           10                 96.3%                  72.4%
46-bit codes           20                 96.3%                  77.2%
92-bit codes           40                 96.1%                  75.6%
127-bit codes          54                 96.1%                  75.4%
", "figure_id": "tab_10", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "are represented as:", "figure_data": "
INPUT                                       OUTPUT
(right-justified)                           (right-justified)     (suffix only)
CCCVVCCCVVCCCVVCCC    CCCVVCCCVVCCCVVCCC    VVCCC
___6_b__&_nd_6_n__    ___6_b__&_nd_6_n__    __d__   (for abandon-abandoned)
b__E_n__6_f__I_t__    b__E_n__6_f__I_t__    I_d__   (for benefit-benefited)
________6_r__3_z__    ________6_r__o_z__    _____   (for arise-arose)
_____b__I_k__6_m__    _____b__I_k__e_m__    _____   (for become-became)
", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparisons of testing accuracy of SPA and ANN (with right-justified, isolated suffix representation). It seems quite conceivable that children acquire these high-level concepts such as stems and voiced consonants through learning noun plurals, verb past tense, verb third-person singular, comparative adjectives, and so on. With a large weight matrix as the result of learning, it is hard to see how this knowledge can be further generalized in ANN models and shared in other modules.", "figure_data": "
Predictive accuracy with right-justified, isolated suffix representation
         SPA                                   MacWhinney's ANN model
         training/testing   training/testing   training/testing
         500/500            1200/102           1200/102
Run 1    81.3               89.2
Run 2    84.1               90.4
Run 3    83.1               87.4
Average  82.8               89.0               80.0 (one run)
", "figure_id": "tab_12", "figure_label": "7", "figure_type": "table" } ]
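The default-strategy comparison described in the Table 1 caption above comes down to two counts over the training set: the size of the largest class (ID3's majority default) and the number of examples whose class simply echoes the attribute value (SPA's passthrough default). A minimal sketch of that counting, reproducing the Table 1 numbers; the function and variable names are illustrative, not from the paper:

```python
from collections import Counter

def spa_default(examples, unseen_value):
    """Decide the default class as described in the Table 1 caption:
    if more training examples are explained by passthrough (class equals
    the attribute value) than belong to the majority class, echo the
    unseen attribute value; otherwise return the majority class."""
    passthrough = sum(n for (x, c), n in examples.items() if c == x)
    class_counts = Counter()
    for (x, c), n in examples.items():
        class_counts[c] += n
    majority_class, majority = class_counts.most_common(1)[0]
    return unseen_value if passthrough > majority else majority_class

# {(attribute value, class): number of copies}, as in Table 1
data1 = {("a", "a"): 10, ("b", "b"): 2, ("c", "c"): 3}
data2 = {("a", "c"): 10, ("b", "b"): 6, ("c", "c"): 7}
print(spa_default(data1, "d"))  # d: 15 passthrough examples beat a majority of 10
print(spa_default(data2, "d"))  # c: the majority class (17 copies) beats 13 passthrough examples
```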
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b7" ], "table_ref": [], "text": "The large amount of data collected today is quickly overwhelming researchers' abilities to interpret the data and discover interesting patterns within the data. In response to this problem, a number of researchers have developed techniques for discovering concepts in databases. These techniques work well for data expressed in a non-structural, attribute-value representation, and address issues of data relevance, missing data, noise and uncertainty, and utilization of domain knowledge. However, recent data acquisition projects are collecting structural data describing the relationships among the data objects. Correspondingly, there exists a need for techniques to analyze and discover concepts in structural databases.
One method for discovering knowledge in structural data is the identification of common substructures within the data. The motivation for this process is to find substructures capable of compressing the data and to identify conceptually interesting substructures that enhance the interpretation of the data. Substructure discovery is the process of identifying concepts describing interesting and repetitive substructures within structural data. Once discovered, the substructure concept can be used to simplify the data by replacing instances of the substructure with a pointer to the newly discovered concept. The discovered substructure concepts allow abstraction over detailed structure in the original data and provide new, relevant attributes for interpreting the data. Iteration of the substructure discovery and replacement process constructs a hierarchical description of the structural data in terms of the discovered substructures. This hierarchy provides varying levels of interpretation that can be accessed based on the goals of the data analysis.
We describe a system called Subdue (Holder, Cook, & Bunke, 1992; Holder & Cook, 1993) that discovers interesting substructures in structural data based on the minimum description length principle. The Subdue system discovers substructures that compress the original data and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of Subdue produce a hierarchical description of the structural regularities in the data. Subdue uses a computationally-bounded inexact graph match that identifies similar, but not identical, instances of a substructure and finds an approximate measure of closeness of two substructures when under computational constraints. In addition to the minimum description length principle, other background knowledge can be used by Subdue to guide the search towards more appropriate substructures.
The following sections describe the approach in detail. Section 2 describes the process of substructure discovery and introduces needed definitions. Section 3 compares the Subdue discovery system to other work found in the literature. Section 4 introduces the minimum description length encoding used by this approach, and Section 5 presents the inexact graph match algorithm employed by Subdue. Section 6 describes methods of incorporating background knowledge into the substructure discovery process. The experiments detailed in Section 7 demonstrate Subdue's ability to find substructures that compress the data and to re-discover known concepts in a variety of domains.
Section 8 details the hierarchical discovery process. We conclude with observations and directions for future research." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Substructure Discovery", "publication_ref": [], "table_ref": [], "text": "The substructure discovery system represents structured data as a labeled graph. Objects in the data map to vertices or small subgraphs in the graph, and relationships between objects map to directed or undirected edges in the graph. A substructure is a connected subgraph within the graphical representation. This graphical representation serves as input to the substructure discovery system. Figure 1 shows a geometric example of such an input graph. The objects in the figure (e.g., T1, S1, R1) become labeled vertices in the graph, and the relationships (e.g., on(T1,S1), shape(C1,circle)) become labeled edges in the graph. The graphical representation of the substructure discovered by Subdue from this data is also shown in Figure 1.
An instance of a substructure in an input graph is a set of vertices and edges from the input graph that match, graph theoretically, to the graphical representation of the substructure. For example, the instances of the substructure in Figure 1 are shown in Figure 2.
The substructure discovery algorithm used by Subdue is a computationally-constrained beam search. The algorithm begins with the substructure matching a single vertex in the graph. Each iteration through the algorithm selects the best substructure and expands the instances of the substructure by one neighboring edge in all possible ways. The unique new substructures generated in this way become candidates for further expansion. The algorithm searches for the best substructure until all possible substructures have been considered or the total amount of computation exceeds a given limit. The evaluation of each substructure is guided by the MDL principle and other background knowledge provided by the user. Typically, once the description length of an expanding substructure begins to increase, further expansion of the substructure will not yield a smaller description length. As a result, Subdue makes use of an optional pruning mechanism that eliminates substructure expansions from consideration when the description lengths for these expansions increase." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b24", "b10", "b20", "b21", "b4", "b2", "b2", "b25", "b19", "b5", "b12", "b8" ], "table_ref": [], "text": "Several approaches to substructure discovery have been developed. Winston's Arch program (Winston, 1975) discovers substructures in order to deepen the hierarchical description of a scene and to group objects into more general concepts. The Arch program searches for two types of substructure in the blocks-world domain. The first type involves a sequence of objects connected by a chain of similar relations. The second type involves a set of objects each having a similar relationship to some 'grouping' object. The main difference between the substructure discovery procedures used by the Arch program and Subdue is that the Arch program is designed specifically for the blocks-world domain. For instance, the sequence discovery method looks for supported-by and in-front-of relations only.
Subdue's substructure discovery method is domain independent, although the inclusion of domain-specific knowledge would improve Subdue's performance.
Motivated by the need to construct a knowledge base of chemical structures, Levinson (Levinson, 1984) developed a system for storing labeled graphs in which individual graphs are represented by the set of vertices in a universal graph. In addition, the individual graphs are maintained in a partial ordering defined by the subgraph-of relation, which improves the performance of graph comparisons. The universal graph representation provides a method for compressing the set of graphs stored in the knowledge base. Subgraphs of the universal graph used by several individual graphs suggest common substructure in the individual graphs. One difference between the two approaches is that Levinson's system is designed to incrementally process smaller individual graphs; whereas, Subdue processes larger graphs all at once. Also, Levinson's system discovers common substructure only as an indirect result of the universal graph construction; whereas, Subdue's main goal is to discover and output substructure definitions that reduce the minimum description length encoding of the graph. Finally, the subgraph-of partial ordering used by Levinson's system is not included in Subdue, but maintaining this partial ordering would improve the performance of the graph matching procedure by pruning the number of possible matching graphs.
Segen (Segen, 1990) describes a system for storing graphs using a probabilistic graph model to represent subsets of the graph. Alternative models are evaluated based on a minimum description length measure of the information needed to represent the stored graphs using the model. In addition, Segen's system clusters the graphs into classes based on minimizing the description length of the graphs according to the entire clustering. Apart from the probabilistic representation, Segen's approach is similar to Levinson's system in that both methods take advantage of commonalities in the graphs to assist in graph storage and matching. The probabilistic graphs contain information for identifying common substructure in the exact graphs they represent. The portion of the probabilistic graph with high probability defines a substructure that appears frequently in the exact graphs. This notion was not emphasized in Segen's work, but provides an alternative method to substructure discovery by clustering subgraphs of the original input graphs. As with Levinson's approach, graphs are processed incrementally, and substructure is found across several graphs, not within a single graph as in Subdue.
The Labyrinth system (Thompson & Langley, 1991) extends the Cobweb incremental conceptual clustering system (Fisher, 1987) to handle structured objects. Labyrinth uses Cobweb to form hierarchical concepts of the individual objects in the domain based on their primitive attributes. Concepts of structured objects are formed in a similar manner using the individual objects as attributes. The resulting hierarchy represents a componential model of the structured objects. Because Cobweb's concepts are probabilistic, Labyrinth produces probabilistic models of the structured objects, but with an added hierarchical organization. The upper-level components of the structured-object hierarchy produced by Labyrinth represent substructures common to the examples.
Therefore, although not the primary focus, Labyrinth is discovering substructure, but in a more constrained context than the general graph representation used by Subdue. Conklin et al. (Conklin & Glasgow, 1992) have developed the i-mem system for constructing an image hierarchy, similar to that of Labyrinth, used for discovering common substructures in a set of images and for efficient retrieval of images similar to a given image. Images are expressed in terms of a set of relations defined by the user. Specific and general (conceptual) images are stored in the hierarchy based on a subsumption relation similar to Levinson's subgraph-of partial ordering. Image matching utilizes a transformational approach (similar to Subdue's inexact graph match) as a measure of image closeness.
As with the approaches of Segen and Levinson, i-mem is designed to process individual images. Therefore, the general image concepts that appear higher in i-mem's hierarchy will represent common substructures across several images. Subdue is designed to discover common substructures within a single image. Subdue can mimic the individual approach of these systems by processing a set of individual images as one disconnected graph. The substructures found will be common to the individual images. The hierarchy also represents a componential view of the images. This same view can be constructed by Subdue using multiple passes over the graph after replacing portions of the input graph with substructures discovered during previous passes. i-mem has performed well in a simple chess domain and molecular chemistry domains (Conklin & Glasgow, 1992). However, i-mem requires domain-specific relations for expressing images in order for the hierarchy to find relevant substructures and for image matching to be efficient. Again, maintaining the concepts (images, graphs) in a partially-ordered hierarchy improves the efficiency of matching and retrieval, and suggests a possible improvement to Subdue.
The CLiP system (Yoshida, Motoda, & Indurkhya, 1993) for graph-based induction is more similar to Subdue than the previous systems. CLiP iteratively discovers patterns in graphs by expanding and combining patterns discovered in previous iterations. Patterns are grouped into views based on their collective ability to compress the original input graph. During each iteration CLiP uses existing views to contract the input graph and then considers adding to the views new patterns consisting of two vertices and an edge from the contracted graph. The compression of the new proposed views is estimated, and the best views (according to a given beam width) are retained for the next iteration.
CLiP discovers substructures (patterns) differently than Subdue. First, CLiP produces a set of substructures that collectively compress the input graph; whereas, Subdue produces only single substructures evaluated using the more principled minimum description length. CLiP has the ability to grow substructures agglomeratively (i.e., merging two substructures together); whereas, Subdue always produces new substructures using incremental growth along one new edge. CLiP initially estimates the compression value of new views based on the compression value of the parent view; whereas, Subdue performs an expensive exact measurement of compression for each new substructure. Finally, CLiP employs an efficient graph match based on graph identity, not graph isomorphism as in Subdue.
Graph identity assumes an ordering over the incident edges of a vertex and does not consider all possible mappings when looking for occurrences of a pattern in an input graph. These differences in CLiP suggest possible enhancements to Subdue.
Research in pattern recognition has begun to investigate the use of graphs and graph grammars as an underlying representation for structural problems (Schalkoff, 1992). Many results in grammatical inference are applicable to constrained classes of graphs (e.g., trees) (Fu, 1982; Miclet, 1986). The approach begins with a set of sample graphs and produces a generalized graph grammar capable of deriving the original sample graphs and many others. The production rules of this general grammar capture regularities (substructures) in the sample graphs. Jeltsch and Kreowski (Jeltsch & Kreowski, 1991) describe an approach that begins with a maximally-specific grammar and iteratively identifies common subgraphs in the right-hand sides of the production rules. These common subgraphs are used to form new, more general production rules. Although their method does not address the underlying combinatorial nondeterminism, heuristic approaches could provide a feasible method for extracting substructures in the form of graph grammars. Furthermore, the graph grammar production rules may provide a suitable representation for background knowledge during the substructure discovery process." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "Minimum Description Length Encoding of Graphs", "publication_ref": [ "b18", "b16", "b13", "b14", "b9", "b3", "b17", "b16" ], "table_ref": [], "text": "The minimum description length principle (MDLP) introduced by Rissanen (Rissanen, 1989) states that the best theory to describe a set of data is that theory which minimizes the description length of the entire data set. The MDL principle has been used for decision tree induction (Quinlan & Rivest, 1989), image processing (Pednault, 1989; Pentland, 1989; Leclerc, 1989), concept learning from relational data (Derthick, 1991), and learning models of non-homogeneous engineering domains (Rao & Lu, 1992).
We demonstrate how the minimum description length principle can be used to discover substructures in complex data. In particular, a substructure is evaluated based on how well it can compress the entire dataset using the minimum description length. We define the minimum description length of a graph to be the number of bits necessary to completely describe the graph.
According to the minimum description length (MDL) principle, the theory that best accounts for a collection of data is the one that minimizes I(S) + I(G|S), where S is the discovered substructure, G is the input graph, I(S) is the number of bits required to encode the discovered substructure, and I(G|S) is the number of bits required to encode the input graph G with respect to S.
The graph connectivity can be represented by an adjacency matrix. Consider a graph that has n vertices, which are numbered 0, 1, ..., n-1. An n × n adjacency matrix A can be formed with entry A[i, j] set to 0 or 1. If A[i, j] = 0, then there is no connection from vertex i to vertex j. If A[i, j] = 1, then there is at least one connection from vertex i to vertex j. Undirected edges are recorded in only one entry of the matrix.
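This representation is straightforward to state in code. A minimal sketch (a hypothetical helper, not Subdue's implementation); the Figure 3 example that follows uses the vertex order given with its matrix:

```python
def adjacency_matrix(n, edges):
    """0/1 connectivity matrix as described above: A[i][j] = 1 if at least
    one edge runs from vertex i to vertex j (vertices numbered 0..n-1).
    An undirected edge would be recorded in only one entry, as in the text."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
    return A

# The graph of Figure 3, assuming vertex order x, triangle, y, square, r, rectangle:
for row in adjacency_matrix(6, [(0, 1), (0, 2), (2, 3), (2, 4), (4, 5)]):
    print(row)
```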
The adjacency matrix for the graph in Figure 3 is shown below (rows and columns in the vertex order x, triangle, y, square, r, rectangle):

0 1 1 0 0 0
0 0 0 0 0 0
0 0 0 1 1 0
0 0 0 0 0 0
0 0 0 0 0 1
0 0 0 0 0 0

The encoding of the graph consists of the following steps. We assume that the decoder has a table of the l_u unique labels in the original graph G.
1. Determine the number of bits vbits needed to encode the vertex labels of the graph.
First, we need (lg v) bits to encode the number of vertices v in the graph. Then, encoding the labels of all v vertices requires (v lg l_u) bits. We assume the vertices are specified in the same order they appear in the adjacency matrix. The total number of bits to encode the vertex labels is vbits = lg v + v lg l_u. For the example in Figure 3, v = 6, and we assume that there are l_u = 8 unique labels in the original graph. The number of bits needed to encode these vertices is lg 6 + 6 lg 8 = 20.58 bits.
2. Determine the number of bits rbits needed to encode the rows of the adjacency matrix A. Typically, in large graphs, a single vertex has edges to only a small percentage of the vertices in the entire graph. Therefore, a typical row in the adjacency matrix will have much fewer than v 1s, where v is the total number of vertices in the graph. We apply a variant of the coding scheme used by (Quinlan & Rivest, 1989) to encode bit strings with length n consisting of k 1s and (n - k) 0s, where k ≤ (n - k). In our case, row i (1 ≤ i ≤ v) can be represented as a bit string of length v containing k_i 1s. If we let b = max_i k_i, then the i-th row of the adjacency matrix can be encoded as follows.
(a) Encoding the value of k_i requires lg(b + 1) bits. (b) Given that only k_i 1s occur in the row bit string of length v, only C(v, k_i) strings of 0s and 1s are possible, where C(v, k_i) is the binomial coefficient. Since all of these strings have equal probability of occurrence, lg C(v, k_i) bits are needed to encode the positions of 1s in row i. The value of v is known from the vertex encoding.
Finally, we need an additional lg(b + 1) bits to encode the number of bits needed to specify the value of k_i for each row. The total encoding length in bits for the adjacency matrix is

rbits = lg(b + 1) + Σ_{i=1..v} [ lg(b + 1) + lg C(v, k_i) ] = (v + 1) lg(b + 1) + Σ_{i=1..v} lg C(v, k_i)

For the example in Figure 3, b = 2, and the number of bits needed to encode the adjacency matrix is 7 lg 3 + lg C(6,2) + lg C(6,0) + lg C(6,2) + lg C(6,0) + lg C(6,1) + lg C(6,0) = 21.49 bits. 3. Determine the number of bits ebits needed to encode the edges represented by the entries A[i, j] = 1 of the adjacency matrix A. The number of bits needed to encode entry A[i, j] is lg m + e(i, j)[1 + lg l_u], where e(i, j) is the actual number of edges between vertex i and j in the graph and m = max_{i,j} e(i, j). The lg m bits are needed to encode the number of edges between vertex i and j, and [1 + lg l_u] bits are needed per edge to encode the edge label and whether the edge is directed or undirected. In addition to encoding the edges, we need to encode the number of bits (lg m) needed to specify the number of edges per entry. The total encoding of the edges is

ebits = lg m + Σ_{i,j : A[i,j]=1} ( lg m + e(i, j)[1 + lg l_u] )
      = lg m + e(1 + lg l_u) + Σ_{i=1..v} Σ_{j=1..v} A[i, j] lg m
      = e(1 + lg l_u) + (K + 1) lg m

where e is the number of edges in the graph, and K is the number of 1s in the adjacency matrix A.
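Pulling the three counts together, here is a minimal sketch of the encoding (a reading of the published formulas, not Subdue's own code; the edge labels are illustrative, since only their counts matter). The worked numbers for Figure 3 continue below the sketch:

```python
from math import comb, log2

def graph_mdl(num_vertices, A, edge_labels, lu):
    """vbits, rbits, ebits for a labeled graph, following the encoding above.
    A is the 0/1 adjacency matrix; edge_labels maps an entry (i, j) to the
    list of labels on the edges between i and j; lu is the number of unique
    labels in the original graph."""
    v = num_vertices
    vbits = log2(v) + v * log2(lu)

    ks = [sum(row) for row in A]               # k_i = number of 1s in row i
    b = max(ks)
    rbits = (v + 1) * log2(b + 1) + sum(log2(comb(v, k)) for k in ks)

    e = sum(len(ls) for ls in edge_labels.values())   # total number of edges
    K = sum(ks)                                       # number of 1 entries
    m = max(len(ls) for ls in edge_labels.values())
    ebits = e * (1 + log2(lu)) + (K + 1) * log2(m)
    return vbits, rbits, ebits

A = [[0, 1, 1, 0, 0, 0], [0] * 6, [0, 0, 0, 1, 1, 0],
     [0] * 6, [0, 0, 0, 0, 0, 1], [0] * 6]
edges = {(0, 1): ["shape"], (0, 2): ["on"], (2, 3): ["shape"],
         (2, 4): ["on"], (4, 5): ["shape"]}
vbits, rbits, ebits = graph_mdl(6, A, edges, 8)
print(round(vbits, 2), round(rbits, 2), round(ebits, 2))  # 20.58 21.49 20.0
print(round(vbits + rbits + ebits, 2))  # 62.08 (the text's 62.07 sums the rounded parts)
```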
For the example in Figure 3, e = 5, K = 5, m = 1, l_u = 8, and the number of bits needed to encode the edges is 5(1 + lg 8) + 6 lg 1 = 20.
The total encoding of the graph takes (vbits + rbits + ebits) bits. For the example in Figure 3, this value is 62.07 bits.
Both the input graph and discovered substructure can be encoded using the above scheme. After a substructure is discovered, each instance of the substructure in the input graph is replaced by a single vertex representing the entire substructure. The discovered substructure is represented in I(S) bits, and the graph after the substructure replacement is represented in I(G|S) bits. Subdue searches for the substructure S in graph G minimizing I(S) + I(G|S)." }, { "figure_ref": [ "fig_6", "fig_2", "fig_3", "fig_3" ], "heading": "Inexact Graph Match", "publication_ref": [ "b0" ], "table_ref": [], "text": "Although exact structure match can be used to find many interesting substructures, many of the most interesting substructures show up in a slightly different form throughout the data. These differences may be due to noise and distortion, or may just illustrate slight differences between instances of the same general class of structures. Consider the image shown in Figure 9. The pencil and the cube would make ideal substructures in the picture, but an exact match algorithm may not consider these as strong substructures, because they rarely occur in the same form and level of detail throughout the picture.
Given an input graph and a set of defined substructures, we want to find those subgraphs of the input graph that most closely resemble the given substructures. Furthermore, we want to associate a distance measure between a pair of graphs consisting of a given substructure and a subgraph of the input graph. We adopt the approach to inexact graph match given by Bunke and Allermann (Bunke & Allermann, 1983). In this inexact match approach, each distortion of a graph is assigned a cost. A distortion is described in terms of basic transformations such as deletion, insertion, and substitution of vertices and edges. The distortion costs can be determined by the user to bias the match for or against particular types of distortions.
An inexact graph match between two graphs g1 and g2 maps g1 to g2 such that g2 is interpreted as a distorted version of g1. Formally, an inexact graph match is a mapping f : N1 → N2 ∪ {λ}, where N1 and N2 are the sets of vertices of g1 and g2, respectively. A vertex v ∈ N1 that is mapped to λ (i.e., f(v) = λ) is deleted. That is, it has no corresponding vertex in g2. Given a set of particular distortion costs as discussed above, we define the cost of an inexact graph match, cost(f), as the sum of the cost of the individual transformations resulting from f, and we define matchcost(g1, g2) as the value of the least-cost function that maps graph g1 onto graph g2.
Given g1, g2, and a set of distortion costs, the actual computation of matchcost(g1, g2) can be determined using a tree search procedure. A state in the search tree corresponds to a partial match that maps a subset of the vertices of g1 to a subset of the vertices in g2.
Initially, we start with an empty mapping at the root of the search tree. Expanding a state corresponds to adding a pair of vertices, one from g1 and one from g2, to the partial mapping constructed so far. A final state in the search tree is a match that maps all vertices of g1 to g2 or to λ. The complete search tree of the example in Figure 4 is shown in Figure 5.
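To make the mapping space concrete, here is a brute-force sketch that enumerates exactly the kind of mappings this tree represents, with unit distortion costs. The graphs, names, and tiny example are illustrative, not Figure 4's actual graphs, and the branch-and-bound version discussed below prunes this same space. The worked example of Figures 4 and 5 continues after the sketch:

```python
from itertools import permutations

def matchcost(g1, g2):
    """Least-cost mapping of g1 onto g2 with unit distortion costs.
    A graph is (labels, edges): labels maps vertex -> label, edges is a
    set of directed (u, v) pairs.  A g1 vertex may map to None (deletion);
    unmatched g2 vertices and edges count as insertions."""
    labels1, edges1 = g1
    labels2, edges2 = g2
    vs1 = sorted(labels1)
    candidates = list(labels2) + [None] * len(vs1)
    best = float("inf")
    for image in permutations(candidates, len(vs1)):
        f = dict(zip(vs1, image))
        # vertex deletions and label substitutions
        cost = sum(1 for v in vs1
                   if f[v] is None or labels1[v] != labels2[f[v]])
        # vertex insertions: g2 vertices that are nobody's image
        cost += sum(1 for u in labels2 if u not in f.values())
        # edges of g1 without a matching edge in g2, and vice versa
        img = {(f[u], f[v]) for (u, v) in edges1
               if f[u] is not None and f[v] is not None}
        cost += len(edges1) - len(img & edges2)
        cost += len(edges2 - img)
        best = min(best, cost)
    return best

g1 = ({1: "a", 2: "b"}, {(1, 2)})
g2 = ({3: "b", 4: "a", 5: "c"}, {(4, 3), (4, 5)})
print(matchcost(g1, g2))  # 2: map 1->4 and 2->3, then insert vertex 5 and edge (4, 5)
```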
For this example we assign a value of 1 to each distortion cost. The numbers in circles in this figure represent the cost of a state. As we are eventually interested in the mapping with minimum cost, each state in the search tree gets assigned the cost of the partial mapping that it represents. Thus the goal state to be found by our tree search procedure is the final state with minimum cost among all final states. From Figure 5 we conclude that the minimum cost inexact graph match of g1 and g2 is given by the mapping f(1) = 4, f(2) = 3.
The cost of this mapping is 4.
Given graphs g1 with n vertices and g2 with m vertices, m ≥ n, the complexity of the full inexact graph match is O(n^(m+1)). Because this routine is used heavily throughout the discovery and evaluation process, the complexity of the algorithm can significantly degrade the performance of the system.
To improve the performance of the inexact graph match algorithm, we extend Bunke's approach by applying a branch-and-bound search to the tree. The cost from the root of the tree to a given node is computed as described above. Nodes are considered for pairings in order from the most heavily connected vertex to the least connected, as this constrains the remaining match. Because branch-and-bound search guarantees an optimal solution, the search ends as soon as the first complete mapping is found.
In addition, the user can place a limit on the number of search nodes considered by the branch-and-bound procedure (defined as a function of the size of the input graphs). Once the number of nodes expanded in the search tree reaches the defined limit, the search resorts to hill climbing using the cost of the mapping so far as the measure for choosing the best node at a given level. By defining such a limit, significant speedup can be realized at the expense of accuracy for the computed match cost.
Another approach to inexact graph match would be to encode the difference between two graphs using the MDL principle. Smaller encodings would indicate a lower match cost between the two graphs. We leave this as a future research direction." }, { "figure_ref": [], "heading": "Guiding the Discovery Process with Background Knowledge", "publication_ref": [ "b23", "b15" ], "table_ref": [], "text": "Although the principle of minimum description length is useful for discovering substructures that maximize compression of the data, scientists may realize more benefit from the discovery of substructures that exhibit other domain-specific and domain-independent characteristics.
To make Subdue more powerful across a wide variety of domains, we have added the ability to guide the discovery process with background knowledge. Although the minimum description length principle still drives the discovery process, the background knowledge can be used to input a bias toward certain types of substructures. This background knowledge is encoded in the form of rules for evaluating substructures, and can represent domain-independent or domain-dependent rules. Each time a substructure is evaluated, these input rules are used to determine the value of the substructure under consideration.
Because only the most-favored substructures are kept and expanded, these rules bias the discovery process of the system.
Each background rule can be assigned a positive, zero, or negative weight, that biases the procedure toward a type of substructure, eliminates the use of the rule, or biases the procedure away from a type of substructure, respectively. The value of a substructure is defined as the description length (DL) of the input graph using the substructure multiplied by the weighted value of each background rule from a set of rules R applied to the substructure.

value(s) = DL(G, s) · ∏_{r=1..|R|} rule_r(s)^(e_r)    (1)

Three domain-independent heuristics that have been incorporated as rules into the Subdue system are compactness, connectivity, and coverage. For the definitions of these rules, we will let G represent the input graph, s represent a substructure in the graph, and I represent the set of instances of the substructure s in G. The instance weight w of an instance i ∈ I of a substructure s is defined to be

w(i, s) = 1 - matchcost(i, s) / size(i),    (2)

where size(i) = #vertices(i) + #edges(i). If the match cost is greater than the size of the larger graph, then w(i, s) = 0. The instance weights are used in these rules to compute a weighted average over instances of a substructure. A value of 1 is added to each formula so that the exponential weights can be used to control the rule's significance.
The first rule, compactness, is a generalization of Wertheimer's Factor of Closure, which states that human attention is drawn to closed structures (Wertheimer, 1939). A closed substructure has at least as many edges as vertices, whereas a non-closed substructure has fewer edges than vertices (Prather, 1976). Thus, closed substructures have a higher compactness value. Compactness is defined as the weighted average of the ratio of the number of edges in the substructure to the number of vertices in the substructure." }, { "figure_ref": [], "heading": "compactness(s)", "publication_ref": [ "b23", "b26" ], "table_ref": [], "text": "= 1 + (1/|I|) Σ_{i∈I} w(i, s) · (#edges(i) / #vertices(i))    (3)
The second rule, connectivity, measures the amount of external connection in the instances of the substructure. The connectivity rule is a variant of Wertheimer's Factor of Proximity (Wertheimer, 1939), and is related to earlier numerical clustering techniques (Zahn, 1971). These works demonstrate the human preference for 'isolated' substructures, that is, substructures that are minimally related to adjoining structure. Connectivity measures the 'isolation' of a substructure by computing the inverse of the average number of external connections over all the weighted instances of the substructure in the input graph. An external connection is defined here as an edge that connects a vertex in the substructure to a vertex outside the substructure. The formula for determining the connectivity of a substructure s with instances I in the input graph G is given below." }, { "figure_ref": [], "heading": "connectivity(s)", "publication_ref": [ "b11" ], "table_ref": [], "text": "= 1 + [ (1/|I|) Σ_{i∈I} w(i, s) · num_external_conns(i) ]^(-1)    (4)
The third rule, coverage, measures the fraction of structure in the input graph described by the substructure. The coverage rule is motivated from research in inductive learning and provides that concept descriptions describing more input examples are considered better (Michalski & Stepp, 1983).
Although MDL measures the amount of structure, the coverage rule includes the relevance of this savings with respect to the size of the entire input graph. Coverage is defined as the number of unique vertices and edges in the instances of the substructure divided by the total number of vertices and edges in the input graph. In this formula, the unique_structure(i) of an instance i is the number of vertices and edges in i that have not already appeared in previous instances in the summation.

coverage(s) = 1 + ( Σ_{i∈I} w(i, s) · unique_structure(i) ) / size(G)    (5)

Domain-dependent rules can also be used to guide the discovery process in a domain where scientists can contribute their expertise. For example, CAD circuits generally consist of two types of components, active and passive components. The active components are the main driving components. Identifying the active components is the first step in understanding the main function of the circuit. To add this knowledge to Subdue we include a rule that assigns higher values to substructures (circuit components) representing active components and lower values to substructures representing passive components. Since the active components have higher scores, they are expected to be selected. The system can then focus the attention on the active components which will be expanded to the functional substructures.
Another method of biasing the discovery process with background knowledge is to let background rules affect the prior probabilities of possible substructures. However, choosing the appropriate prior probabilities to express desired properties of substructures is difficult, but indicates a future direction for the inclusion of background knowledge into the substructure discovery process." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "The experiments in this section evaluate Subdue's substructure discovery capability in several domains, including chemical compound analysis, scene analysis, CAD circuit design analysis, and analysis of an artificially-generated structural database.
Two goals of our substructure discovery system are to find substructures that can reduce the amount of information needed to describe the data, and to find substructures that are considered interesting for the given database. As a result, we evaluate the Subdue system in this section along these two criteria. First, we measure the amount of compression that Subdue provides across a variety of databases. Second, we use the Subdue system with the additional background knowledge rules to re-discover substructures that have been identified as interesting by experts in each specific domain. Section 7.1 describes the domains used in these experiments, and Section 7.2 presents the experimental results.
Figure 7: Natural rubber (all-cis polyisoprene)." }, { "figure_ref": [], "heading": "Domains", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Chemical Compound Analysis", "publication_ref": [], "table_ref": [], "text": "Chemical compounds are rich in structure. Identification of the common and interesting substructures can benefit scientists by identifying recurring components, simplifying the data description, and focusing on substructures that stand out and merit additional attention.
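Before the domain walk-through, the evaluation machinery of Section 6 can be pulled together in a few lines. A minimal sketch of Equations 1 through 5 (dictionary field names and the toy instances are illustrative; Subdue's actual data structures differ):

```python
def instance_weight(i):                                  # Equation 2
    size = i["vertices"] + i["edges"]
    return max(0.0, 1 - i["matchcost"] / size)

def value(dl, instances, graph_size, exponents):
    """Rule-weighted substructure value of Equation 1, where dl is the
    description length DL(G, s) of the graph using the substructure and
    exponents holds the rule weights e_r.  Assumes each instance has at
    least one external connection (Equation 4 divides by the average)."""
    I = instances
    wavg = lambda f: sum(instance_weight(i) * f(i) for i in I) / len(I)
    rules = {
        "compactness": 1 + wavg(lambda i: i["edges"] / i["vertices"]),   # Eq. 3
        "connectivity": 1 + 1 / wavg(lambda i: i["external"]),           # Eq. 4
        "coverage": 1 + sum(instance_weight(i) * i["unique"]
                            for i in I) / graph_size,                    # Eq. 5
    }
    for name, e in exponents.items():
        dl *= rules[name] ** e                                           # Eq. 1
    return dl

inst = [{"vertices": 5, "edges": 5, "matchcost": 0, "external": 2, "unique": 10}] * 3
print(value(100.0, inst, 60, {"compactness": 1, "connectivity": 1, "coverage": 1}))
# 450.0 = 100 * 2.0 (compact) * 1.5 (isolated) * 1.5 (covers half the graph)
```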
Chemical compounds are represented graphically by mapping individual atoms, such as carbon and oxygen, to labeled vertices in the graph, and by mapping bonds between the atoms onto labeled edges in the graph. Figures 6, 7, and 8 show the graphs representing the chemical compound databases for cortisone, rubber, and a portion of a DNA molecule." }, { "figure_ref": [ "fig_7", "fig_6", "fig_8" ], "heading": "Scene Analysis", "publication_ref": [ "b22" ], "table_ref": [], "text": "Images and scene descriptions provide a rich source of structure. Images that humans encounter, both natural and synthesized, have many structured subcomponents that draw our attention and that help us to interpret the data or the scene.
Discovering common structures in scenes can be useful to a computer vision system. First, automatic substructure discovery can help a system interpret an image. Instead of working from low-level vertices and edges, Subdue can provide more abstract structured components, resulting in a hierarchical view of the image that the machine can analyze at many levels of detail and focus, depending on the goal of the analysis. Second, substructure discovery that makes use of an inexact graph match can help identify objects in a 2D image of a 3D scene where noise and orientation differences are likely to exist. If an object appears often in the scene, the inexact graph match driving the Subdue system may capture slightly different views of the same object. Although an object may be difficult to identify from just one 2D picture, Subdue will match instances of similar objects, and the differences between these instances can provide additional information for identification. Third, substructure discovery can be used to compress the image. Replacing common interesting substructures by a single vertex simplifies the image description and reduces the amount of storage necessary to represent the image.
To apply Subdue to image data, we extract edge information from the image and construct a graph representing the scene. The graph representation consists of eight types of vertices and two types of arcs (edge and space). The vertex labels (f, a, l, t, k, x, p, and m) follow the Waltz labelings (Waltz, 1975) of junctions of edges in the image and represent the types of vertices shown in Figure 10. An edge arc represents the edge of an object in the image, and a space arc links non-connecting objects together. The edge arcs represent an edge in the scene that connects two vertices, and the space arcs connect the closest vertices from two disjoint neighboring objects. Distance, curve, and angle information has not been included in the graph representation, but can be added to give additional information about the scene. Figure 11 shows the graph representation of a portion of the scene depicted in Figure 9. In this figure, the edge arcs are solid and the space arcs are dashed.
In this domain, we employ Subdue to find circuit components in CAD circuit data. Discovery of substructures in circuit data can be a valuable tool to an engineer who is attempting to identify common reusable parts in a circuit layout. Replacing individual components in the circuit description by larger substructure descriptions will also simplify the representation of the circuit.
The data for the circuit domain was obtained from National Semiconductor, and consists of a set of components making up a circuit as output by the Cadence Design System. The particular circuit used for this experiment is a portion of an analog-to-digital converter.
Figure 12 presents a circuit for an amplifier and gives the corresponding graph representation." }, { "figure_ref": [ "fig_10" ], "heading": "Artificial Domain", "publication_ref": [], "table_ref": [], "text": "In the final domain, we artificially generate graphs to evaluate Subdue's ability to discover substructures capable of compressing the graph. Four substructures are created of varying sizes with randomly-selected vertices and edges (see Figure 13). The name of a substructure reflects the number of vertices and edges in its graph representation. Next, these substructures are embedded in larger graphs whose size is 15 times the size of the substructure. The graphs vary across four parameters: number of possible vertex and edge labels (one times and two times the number of labels used in the substructure), connectivity of the substructure (1 or 2 external connections), coverage of the instances (60% and 80%), and the amount of distortion in the instances (0, 1 or 2 distortions). This yields a total of 96 graphs (24 for each different substructure)." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_11" ], "heading": "Experiment 1: Data compression", "publication_ref": [], "table_ref": [ "tab_0", "tab_0" ], "text": "In the first experiment, we test Subdue's ability to compress a structural database. Using a beam width of 4 and Subdue's pruning mechanism, we applied the discovery algorithm to each of the databases mentioned above. We repeat the experiment with match thresholds ranging from 0.0 to 1.0 in increments of 0.1. Table 1 shows the description length (DL) of the original graph, the description length of the best substructure discovered by Subdue, and the value of compression. Compression here is defined as (DL of compressed graph) / (DL of original graph). Figure 14 shows the actual discovered substructures for the first four datasets.
As can be seen from Table 1, Subdue was able to reduce the database to slightly larger than 1/4 of its original size in the best case. The average compression value over all of these domains (treating the artificial graphs as one value) is 0.62. The results of this experiment demonstrate that the substructure discovered by Subdue can significantly reduce the amount of data needed to represent an input graph. We expect that compressing the graph using combinations of substructures and hierarchies of substructures will realize even greater compression in some databases. Another way of evaluating the discovery process is to evaluate the interestingness of the discovered substructures. The determination of this value will change from domain to domain. As a result, in this second set of experiments we test Subdue's ability to discover substructures that have already been labeled as important by experts in the domains under consideration." }, { "figure_ref": [ "fig_11", "fig_12", "fig_6", "fig_13" ], "heading": "Database", "publication_ref": [], "table_ref": [], "text": "In the chemical compound domain, chemists frequently describe compounds in terms of the building-block components that are heavily used. For example, in the rubber compound database shown in Figure 7, the compound is made up of a chain of structures that are labeled by chemists as isoprene units. Subdue's ability to re-discover this structure is exemplified in Figure 14a.
This substructure, which was discovered using the MDL principle with no extra background knowledge, represents an isoprene unit.
Although Subdue was able to re-discover isoprene units without extra background knowledge, the substructure affording the most compression will not always be the most interesting or important substructure in the database. For example, in the cortisone database the benzene ring which consists of a ring of carbons is not discovered using only the MDL principle. However, the additional background rules can be used to increase the chance of finding interesting substructures in these domains. In the case of the cortisone compound, we know that the interesting structures exhibit a characteristic of closure. Therefore, we give a strong weight (8.0) to the compactness background rule and use a match threshold of 0.2 to allow for deviations in the benzene ring instances. In the resulting output, Subdue finds the benzene ring shown in Figure 15.
In the same way, we can use the background rules to find the pencil substructure in the image data. When the image in Figure 9 is viewed, the substructure of interest is the pencil in its various forms. However, the substructure that afforded the most compression does not make up an entire pencil. We know that the pencils have a high degree of closure and of coverage, so the weights for these rules are set to 1.0. With these weights, Subdue is able to find the pencil substructure shown in Figure 16 for all tested match thresholds between 0.0 and 1.0." }, { "figure_ref": [ "fig_11", "fig_5", "fig_5" ], "heading": "Hierarchical Concept Discovery", "publication_ref": [], "table_ref": [], "text": "After a substructure is discovered, each instance of the substructure in the input graph can be replaced by a single vertex representing the entire substructure. The discovery procedure can then be repeated on the compressed data set, resulting in new interesting substructures. If the newly-discovered substructures are defined in terms of existing substructure concepts, the substructure definitions form a hierarchy of substructure concepts. Hierarchical concept discovery also adds the capability to improve Subdue's performance. When Subdue is applied to a large input graph, the complexity of the algorithm prevents consideration of larger substructures. Using hierarchical concept discovery, Subdue can first discover those smaller substructures which best compress the data. Applying the compression reduces the graph to a more manageable size, increasing the chance that Subdue will find the larger substructures on the subsequent passes through the database.
Once Subdue selects a substructure, all vertices that comprise the exact instances of the substructure are replaced in the graph by a single vertex representing the discovered substructure. Edges connecting vertices outside the instance to vertices inside the instance now connect to the new vertex. Edges internal to the instance are removed. The discovery process is then applied to the compressed data. If a hierarchical description of concepts is particularly desired, heavier weight can be given to substructures which utilize previously discovered substructures. The increased weight reflects increased attention to this substructure.
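A minimal sketch of this replacement step for exact instances, using a toy chain standing in for the rubber compound (hypothetical data layout; annotations for inexact instances, discussed below, are omitted):

```python
def compress(graph, instances, sub_name):
    """Replace each exact instance of a discovered substructure by a single
    vertex: edges into an instance are redirected to the new vertex, and
    edges internal to an instance disappear.  graph is (labels, edges);
    each instance is the set of its vertices."""
    labels, edges = dict(graph[0]), set(graph[1])
    for n, inst in enumerate(instances):
        new = f"{sub_name}_{n}"
        labels[new] = sub_name
        for v in inst:
            del labels[v]
        redirected = set()
        for (u, v) in edges:
            u2 = new if u in inst else u
            v2 = new if v in inst else v
            if not (u2 == new and v2 == new):    # drop internal edges
                redirected.add((u2, v2))
        edges = redirected
    return labels, edges

labels = {i: "C" for i in range(4)}              # a 4-vertex chain
edges = {(0, 1), (1, 2), (2, 3)}
print(compress((labels, edges), [{0, 1}, {2, 3}], "S1"))
# -> ({'S1_0': 'S1', 'S1_1': 'S1'}, {('S1_0', 'S1_1')})
```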
Figure 17 illustrates the compressed rubber compound graph using the substructure shown in Figure 14a.
To demonstrate the ability of Subdue to find a hierarchy of substructures, we let the system make multiple passes through a database that represents a portion of a DNA molecule. Figure 8 shows a portion of two chains of a double helix, using three pairs of bases which are held together by hydrogen bonds. Figure 18 shows the substructures found by Subdue after each of three passes through the data. Note that, on the third pass, Subdue linked together the instances of the substructure in the second pass to find the chains of the double helix.
Although replacing portions of the input graph with the discovered substructures compresses the data and provides a basis for discovering hierarchical concepts in the data, the substructure replacement procedure becomes more complicated when concepts with inexact instances are discovered. When inexact instances of a discovered concept are replaced by a single vertex in the data, all distortions of the graph (differences between the instance graph and the substructure definition) must be attached as annotations to the vertex label." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [ "b1" ], "table_ref": [], "text": "Extracting knowledge from structural databases requires the identification of repetitive substructures in the data. Substructure discovery identifies interesting and repetitive structure in structural data. The substructures represent concepts found in the data and a means of reducing the complexity of the representation by abstracting over instances of the substructure. We have shown how the minimum description length (MDL) principle can be used to perform substructure discovery in a variety of domains. The substructure discovery process can also be guided by background knowledge. The use of an inexact graph match allows deviation in the instances of a substructure. Once a substructure is discovered, instances of the substructure can be replaced by the concept definition, affording compression of the data description and providing a basis for discovering hierarchically-defined structures. Future work will combine structural discovery with discovery of concepts using a linear-based representation such as AutoClass (Cheeseman, Kelly, Self, Stutz, Taylor, & Freeman, 1988). In particular, we will use Subdue to compress the data fed to AutoClass, and let Subdue evaluate the interesting structures in the classes generated by AutoClass. In addition, we will be developing a parallel implementation of the AutoClass / Subdue system that will enable application of substructure discovery to larger structural databases." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This project is supported by NASA grant NAS5-32337. The authors would like to thank Mike Shay at National Semiconductor for providing the circuit data. We would also like to thank Surnjani Djoko and Tom Lai for their help with this project. Thanks also to the reviewers for their numerous insightful comments." } ]
[ { "authors": "H Bunke; G Allermann", "journal": "Pattern Recognition Letters", "ref_id": "b0", "title": "Inexact graph matching for structural pattern recognition", "year": "1983" }, { "authors": "P Cheeseman; J Kelly; M Self; J Stutz; W Taylor; D Freeman", "journal": "", "ref_id": "b1", "title": "AutoClass: A Bayesian classification system", "year": "1988" }, { "authors": "D Conklin; J Glasgow", "journal": "", "ref_id": "b2", "title": "Spatial analogy and subsumption", "year": "1992" }, { "authors": "M Derthick", "journal": "", "ref_id": "b3", "title": "A minimal encoding approach to feature discovery", "year": "1991" }, { "authors": "D H Fisher", "journal": "Machine Learning", "ref_id": "b4", "title": "Knowledge acquisition via incremental conceptual clustering", "year": "1987" }, { "authors": "K S Fu", "journal": "Prentice-Hall", "ref_id": "b5", "title": "Syntactic Pattern Recognition and Applications", "year": "1982" }, { "authors": "L B Holder; D J Cook; H Bunke", "journal": "", "ref_id": "b6", "title": "Fuzzy substructure discovery", "year": "1992" }, { "authors": "L B Holder; D J Cook", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b7", "title": "Discovery of inexact concepts from structural data", "year": "1993" }, { "authors": "E Jeltsch; H J Kreowski", "journal": "", "ref_id": "b8", "title": "Grammatical inference based on hyperedge replacement", "year": "1991" }, { "authors": "Y G Leclerc", "journal": "International Journal of Computer Vision", "ref_id": "b9", "title": "Constructing simple stable descriptions for image partitioning", "year": "1989" }, { "authors": "R Levinson", "journal": "", "ref_id": "b10", "title": "A self-organizing retrieval system for graphs", "year": "1984" }, { "authors": "R S Michalski; R E Stepp", "journal": "Tioga Publishing Company", "ref_id": "b11", "title": "Learning from observation: Conceptual clustering", "year": "1983" }, { "authors": "L Miclet", "journal": "Chapman and Hall", "ref_id": "b12", "title": "Structural Methods in Pattern Recognition", "year": "1986" }, { "authors": "E P D Pednault", "journal": "", "ref_id": "b13", "title": "Some experiments in applying inductive inference principles to surface reconstruction", "year": "1989" }, { "authors": "A Pentland", "journal": "Neural Computation", "ref_id": "b14", "title": "Part segmentation for object recognition", "year": "1989" }, { "authors": "R Prather", "journal": "Houghton Mifflin Company", "ref_id": "b15", "title": "Discrete Mathematical Structures for Computer Science", "year": "1976" }, { "authors": "J R Quinlan; R L Rivest", "journal": "Information and Computation", "ref_id": "b16", "title": "Inferring decision trees using the minimum description length principle", "year": "1989" }, { "authors": "R B Rao; S C Lu", "journal": "", "ref_id": "b17", "title": "Learning engineering models with the minimum description length principle", "year": "1992" }, { "authors": "J Rissanen", "journal": "World Scientific Publishing Company", "ref_id": "b18", "title": "Stochastic Complexity in Statistical Inquiry", "year": "1989" }, { "authors": "R J Schalkoff", "journal": "John Wiley & Sons", "ref_id": "b19", "title": "Pattern Recognition: Statistical, Structural and Neural Approaches", "year": "1992" }, { "authors": "J Segen", "journal": "", "ref_id": "b20", "title": "Graph clustering and model learning by data compression", "year": "1990" }, { "authors": "K Thompson; P Langley", "journal": "Morgan Kaufmann Publishers, Inc", "ref_id": "b21", "title": "Concept formation in structured domains", "year": "1991" }, { "authors": "D Waltz", "journal": "McGraw-Hill", "ref_id": "b22", "title": "Understanding line drawings of scenes with shadows", "year": "1975" }, { "authors": "M Wertheimer", "journal": "Harcourt, Brace and Company", "ref_id": "b23", "title": "Laws of organization in perceptual forms", "year": "1939" }, { "authors": "P H Winston", "journal": "McGraw-Hill", "ref_id": "b24", "title": "Learning structural descriptions from examples", "year": "1975" }, { "authors": "K Yoshida; H Motoda; N Indurkhya", "journal": "", "ref_id": "b25", "title": "Unifying learning methods by colored digraphs", "year": "1993" }, { "authors": "C T Zahn", "journal": "IEEE Transactions on Computers", "ref_id": "b26", "title": "Graph-theoretical methods for detecting and describing gestalt clusters", "year": "1971" } ]
[ { "formula_coordinates": [ 7, 214.08, 631.5, 201.6, 71.44 ], "formula_id": "formula_0", "formula_text": "rbits = lg(b+1) + Σ_{i=1..v} [ lg(b+1) + lg C(v, k_i) ] = (v+1) lg(b+1) + Σ_{i=1..v} lg C(v, k_i)" }, { "formula_coordinates": [ 8, 203.76, 246.54, 231.84, 89.44 ], "formula_id": "formula_1", "formula_text": "ebits = lg m + Σ_{i,j : A[i,j]=1} ( lg m + e(i,j)[1 + lg l_u] ) = lg m + e(1 + lg l_u) + Σ_{i=1..v} Σ_{j=1..v} A[i,j] lg m = e(1 + lg l_u) + (K+1) lg m" }, { "formula_coordinates": [ 10, 121.58, 149.9, 369.6, 54.29 ], "formula_id": "formula_2", "formula_text": "(1,3) (1,4) (1,5) (1,λ) | (2,4) (2,5) (2,λ) | (2,3) (2,5) (2,λ) | (2,3) (2,4) (2,λ) | (2,3) (2,4) (2,5) (2,λ) -- node labels of the Figure 5 search tree" }, { "formula_coordinates": [ 11, 260.88, 524.52, 261.36, 34.18 ], "formula_id": "formula_4", "formula_text": "= 1 + (1/|I|) Σ_{i∈I} w(i, s) · (#edges(i) / #vertices(i))    (3)" }, { "formula_coordinates": [ 12, 223.68, 90.84, 298.56, 41.86 ], "formula_id": "formula_5", "formula_text": "= 1 + [ (1/|I|) Σ_{i∈I} w(i, s) · num_external_conns(i) ]^(-1)    (4)" }, { "formula_coordinates": [ 13, 101.86, 227.5, 409.19, 83.79 ], "formula_id": "formula_6", "formula_text": "[chemical structure diagram: repeating isoprene units of natural rubber; see Figure 7]" } ]
[ { "figure_caption": "Figure 2 :2Figure 1: Example substructure in graph form.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: MDL example graph.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Two similar graphs g 1 and g 2 .", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Search tree for computing matchcost(g 1 ,g 2 ) from Figure 4.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6: Cortisone.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Portion of a DNA molecule.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Scene analysis example.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 10: Possible vertices and labels.", "figure_data": "", "figure_id": "fig_7", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Ampli er circuit and graph representation.", "figure_data": "", "figure_id": "fig_8", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Four arti cial substructures used to evaluate Subdue.", "figure_data": "", "figure_id": "fig_10", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Best substructure for (a) rubber database, (b) cortisone database, (c) DNA database, and (d) image database.", "figure_data": "", "figure_id": "fig_11", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Benzene ring discovered by Subdue.", "figure_data": "", "figure_id": "fig_12", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: Pencil substructure discovered by Subdue.", "figure_data": "", "figure_id": "fig_13", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 17 :Figure 18 :1718Figure 17: Compressed graph for rubber compound data.", "figure_data": "", "figure_id": "fig_15", "figure_label": "1718", "figure_type": "figure" }, { "figure_caption": "Graph compression results. ", "figure_data": "DL original Threshold optimal DL compressed Compression 371.78 0.1 95.20 0.26 355.03 0.3 173.25 0.49 2427.93 1.0 2211.87 0.91 1592.33 1.0 769.18 0.48 4095.73 0.7 2148.8 0.52 1860.14 0.7 1149.29 0.62 12715.12 0.7 9070.21 0.71 8606.69 0.7 6204.74 0.72 427.73 0.1 324.52 0.76 Arti cial (avg. over 96 graphs) 1636.25 Rubber Cortisone DNA Pencils CAD { M1 CAD { S1SegDec CAD { S1DrvBlk CAD { BlankSub CAD { And2 0.0: : :1.0 1164.02 0.71", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "One of the main problems in building expert systems is that models elicited from experts tend to be only approximately correct. Although such hand-coded models might makeag ood first approximation to the real world, theytypically contain inaccuracies that are exposed when a fact is asserted that does not agree with empirical observation. The theory revision problem is the problem of howb est to go about revising a knowledge base on the basis of a collection of examples, some of which expose inaccuracies in the original knowledge base. Of course, there may be manypossible revisions that sufficiently account for all of the observed examples; ideally, one would find a revised knowledge base which is both consistent with the examples and as faithful as possible to the original knowledge base. Consider,f or example, the following simple propositional domain theory, Τ.T his theory, although flawed and incomplete, is meant to recognize situations where an investor should buy stock in a soft drink company.\nbuy-stock ← increased-demand ∧ ¬product-liability product-liability ← popular-product ∧ unsafe-packaging increased-demand ← popular-product ∧ established-market increased-demand ← new-market ∧ superior-flavor.\nThe theory Τ essentially states that buying stock in this companyisagood idea if demand for its product is expected to increase and the companyisnot expected to face product liability lawsuits. In this theory,p roduct liability lawsuits may result if the product is popular (and therefore may present an attractive target for sabotage) and if the packaging is not tamper-proof. Increased product demand results if the product is popular and enjoys a large market share, or if there are newm arket opportunities and the product boasts a superior flavor. Using the closed world assumption, buy-stock is derivable givent hat the set of true observable propositions is precisely, say, {popular-product, established-market, celebrity-endorsement},or {popular-product, established-market, colorful-label} butnot if theyare, say, {unsafe-packaging, new-market},or {popular-product, unsafe-packaging, established-market}. Suppose nowt hat we are told for various examples whether buy-stock should be derivable. Forexample, suppose we are told that if the set of true observable propositions is:\n(1) {popular-product, unsafe-packaging, established-market} then buy-stock is false,\n(2) {unsafe-packaging, new-market} then buy-stock is true,\n(3) {popular-product, established-market, celebrity-endorsement} then buy-stock is true, (4) {popular-product, established-market, superior-flavor} then buy-stock is false,\n(5) {popular-product, established-market, ecologically-correct} then buy-stock is false, and (6) {new-market, celebrity-endorsement} then buy-stock is true.\nObservethat examples 2, 4, 5 and 6 are misclassified by the current theory Τ.A ssuming that the explicitly giveni nformation regarding the examples is correct, the question is howt or evise the theory so that all of the examples will be correctly classified." }, { "figure_ref": [], "heading": "Two Paradigms", "publication_ref": [ "b3", "b9", "b13" ], "table_ref": [], "text": "One approach to this problem consists of enumerating partial proofs of the various examples in order to find a minimal set of domain theory elements (i.e., literals or clauses) the repair of which will satisfy all the examples (EITHER, Ourston & Mooney, inp ress). 
Two Paradigms

One approach to this problem consists of enumerating partial proofs of the various examples in order to find a minimal set of domain theory elements (i.e., literals or clauses) whose repair will satisfy all the examples (EITHER, Ourston & Mooney, in press). One problem with this approach is that proof enumeration, even for a single example, is potentially exponential in the size of the theory. Another problem is that it is unable to handle negated internal literals, and it is restricted to situations where each example must belong to one and only one class. These problems suggest that it would be worthwhile to circumvent proof enumeration by employing incremental numerical schemes for focusing blame on specific elements.

A completely different approach to the revision problem is based on the use of neural networks (KBANN, Towell & Shavlik, 1993). The idea is to transform the original domain theory into network form, assigning weights in the graph according to some pre-established scheme. The connection weights are then adjusted in accordance with the observed examples using standard neural-network backpropagation techniques. The resulting network is then translated back into clausal form. The main disadvantage of this method is that it lacks representational transparency; the neural network representation does not preserve the structure of the original knowledge base while revising it. As a result, a great deal of structural information may be lost translating back and forth between representations. Moreover, such translation imposes the limitations of both representations; for example, since neural networks are typically slow to converge, the method is practical only for very shallow domain theories. Finally, revised domain theories obtained via translation from neural networks tend to be significantly larger than their corresponding original domain theories.

Other approaches to theory revision which are much less closely related to the approach we will espouse here are RTLS (Ginsberg, 1990), KR-FOCL (Pazzani & Brunk, 1991), and ODYSSEUS (Wilkins, 1988).
Probabilistic Theory Revision

Probabilistic Theory Revision (PTR) is a new approach to theory revision which combines the best features of the two approaches discussed above. The starting point for PTR is the observation that any method for choosing among several possible revisions is based on some implicit bias, namely the a priori probability that each element (clause or literal) of the domain theory requires revision. In PTR this bias is made explicit right from the start. That is, each element in the theory is assigned some a priori probability that it is not flawed. These probabilities might be assigned by an expert or simply chosen by default.

The mere existence of such probabilities solves two central problems at once. First, these probabilities very naturally define the "best" (i.e., most probable) revision out of a given set of possible revisions. Thus, our objective is well-defined; there is no need to impose artificial syntactic or semantic criteria for identifying the optimal revision. Second, these probabilities can be adjusted in response to newly-obtained information. Thus they provide a framework for incremental revision of the flawed domain theory. Briefly, then, PTR is an algorithm which uses a set of provided examples to incrementally adjust probabilities associated with the elements of a possibly-flawed domain theory in order to find the "most probable" set of revisions to the theory which will bring it into accord with the examples. (Like KBANN, PTR incrementally adjusts weights associated with domain theory elements; like EITHER, all stages of PTR are carried out within the symbolic logic framework, and the obtained theories are not probabilistic.)

As a result, PTR has the following features:

(1) it can handle a broad range of theories, including those with negated internal literals and multiple roots;
(2) it is linear in the size of the theory times the number of given examples;
(3) it produces relatively small, accurate theories that retain much of the structure of the original theory;
(4) it can exploit additional user-provided bias.

In the next section of this paper we formally define the theory revision problem and discuss issues of data representation. We lay the foundations for any future approach to theory revision by introducing very sharply defined terminology and notation. In Section 3 we propose an efficient algorithm for finding flawed elements of a theory, and in Section 4 we show how to revise these elements. Section 5 describes how these two components are combined to form the PTR algorithm. In Section 5, we also discuss the termination and convergence properties of PTR and walk through a simple example of PTR in action. In Section 6 we experimentally evaluate PTR and compare it to other theory revision algorithms. In Section 7, we sum up our results and indicate directions for further research.

The formal presentation of the work described here is, unfortunately, necessarily dense. To aid the more casual reader, we have moved all formal proofs to three separate appendices. In particular, in the third appendix we prove that, under appropriate conditions, PTR converges. Reading of these appendices can safely be postponed until after the rest of the paper has been read. In addition, we provide in Appendix D a "quick reference guide" to the notation used throughout the paper. We would suggest that a more casual reader might prefer to focus on Section 2, followed by a cursory reading of Sections 3 and 4, and a more thorough reading of Section 5.

Representing the Problem

A propositional domain theory, denoted Γ, is a stratified set of clauses of the form C_i: H_i ← B_i, where C_i is a clause label, H_i is a proposition (called the head of C_i) and B_i is a set of positive and negative literals (called the body of C_i). As usual, the clause C_i: H_i ← B_i represents the assertion that the proposition H_i is implied by the conjunction of literals in B_i. The domain theory is simply the conjunction of its clauses. It may be convenient to think of this as a propositional logic program without facts (but with negation allowed).

A proposition which does not appear in the head of any clause is said to be observable. A proposition which appears in the head of some clause but does not appear in the body of any clause is called a root. An example, E, is a truth assignment to all observable propositions. It is convenient to think of E as a set of true observable propositions.

Let Γ be a domain theory with roots r_1, ..., r_n. For an example E, we define the vector Γ(E) = 〈Γ_1(E), ..., Γ_n(E)〉, where Γ_i(E) = 1 if E ⊢_Γ r_i (using resolution) and Γ_i(E) = 0 if E ⊬_Γ r_i. Intuitively, Γ(E) tells us which of the conclusions r_1, ..., r_n can be drawn by the expert system when given the truth assignment E.
Let the target domain theory, Θ, be some domain theory which accurately models the domain of interest; in other words, Θ represents the correct domain theory. An ordered pair, 〈E, Θ(E)〉, is called an exemplar of the domain: if Θ_i(E) = 1 then the exemplar is said to be an IN exemplar of r_i, while if Θ_i(E) = 0 then the exemplar is said to be an OUT exemplar of r_i.

Let Γ be some possibly incorrect theory for a domain which is in turn correctly modeled by the target theory Θ. Any inaccuracies in Γ will be reflected by exemplars for which Γ(E) ≠ Θ(E). Such exemplars are said to be misclassified by Γ. Thus, a misclassified IN exemplar for r_i, or false negative for r_i, will have Θ_i(E) = 1 but Γ_i(E) = 0, while a misclassified OUT exemplar for r_i, or false positive for r_i, will have Θ_i(E) = 0 but Γ_i(E) = 1. Typically, in theory revision, we know Θ(E) without knowing Θ.

Consider, for example, the domain theory Τ and example set introduced in Section 1. The theory Τ has only a single root, buy-stock. The observable propositions mentioned in the examples are popular-product, unsafe-packaging, established-market, new-market, celebrity-endorsement, superior-flavor, and ecologically-correct. For the example E = {unsafe-packaging, new-market} we have Τ(E) = 〈Τ_1(E)〉 = 〈0〉. Nevertheless, we are told that Θ(E) = 〈Θ_1(E)〉 = 〈1〉. Thus, E = 〈{unsafe-packaging, new-market}, 〈1〉〉 is a misclassified IN exemplar of the root buy-stock.

Now, given misclassified exemplars, there are four revision operators available for use with propositional domain theories:

(1) add a literal to an existing clause,
(2) delete an existing clause,
(3) add a new clause, and
(4) delete a literal from an existing clause.

For negation-free domain theories, the first two operations result in specializing Γ, since they may allow some IN exemplars to become OUT exemplars. The latter two operations result in generalizing Γ, since they may allow some OUT exemplars to become IN exemplars.

We say that a set of revisions to Γ is adequate for a set of exemplars if, after the revision operators are applied, all the exemplars are correctly classified by the revised domain theory Γ′. Note that we are not implying that Γ′ is identical to Θ, but rather that for every exemplar 〈E, Θ(E)〉, Γ′(E) = Θ(E). Thus, there may be more than one adequate revision set. The goal of any theory revision system, then, is to find the "best" revision set for Γ which is adequate for a given set of exemplars.

Domain Theories as Graphs

In order to define the problem even more precisely, and to set the stage for its solution, we will show how to represent a domain theory in the form of a weighted digraph. We begin by defining a more general version of the standard AND-OR proof tree, which collapses the distinction between AND nodes and OR nodes.

For any set of propositions {P_1, ..., P_n}, let NAND({P_1, ..., P_n}) be a Boolean formula which is false if and only if P_1, ..., P_n are all true. Any domain theory Γ can be translated into an equivalent domain theory Γ̂ consisting of NAND equations as follows:

(1) For each clause C_i: H_i ← B_i ∈ Γ, the equation Ĉ_i = NAND(B_i) is in Γ̂.
(2) For each non-observable proposition P appearing in Γ, the equation P = NAND(C_P) is in Γ̂, where C_P = {Ĉ_i | H_i = P}, i.e., the set consisting of the label of each clause in Γ whose head is P.
(3) For each negative literal ¬P appearing in Γ, the equation ¬P = NAND({P}) is in Γ̂.
Γ̂ contains no equations other than these. Observe that the literals of Γ̂ are the literals of Γ together with the new literals {Ĉ_i} which correspond to the clauses of Γ. Most important, Γ̂ is equivalent to Γ in the sense that for each literal l in Γ and any assignment E of truth values to the observable propositions of Γ, E ⊢_Γ l if and only if E ⊢_Γ̂ l.

Consider, for example, the domain theory Τ of Section 1. The set of NAND equations Τ̂ is:

buy-stock = NAND({C1})
C1 = NAND({increased-demand, ¬product-liability})
¬product-liability = NAND({product-liability})
product-liability = NAND({C2})
C2 = NAND({popular-product, unsafe-packaging})
increased-demand = NAND({C3, C4})
C3 = NAND({popular-product, established-market})
C4 = NAND({new-market, superior-flavor})

Observe that buy-stock is true in Τ̂ for precisely those truth assignments to the observables for which buy-stock is true in Τ.
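The translation is mechanical; the following is a small sketch (ours; the encoding and helper names are our own) that produces the NAND equations of Τ̂ from the clause set of Τ.

# Build the NAND equations of rules (1)-(3) from a labeled clause set.
CLAUSES = {
    "C1": ("buy-stock", ["increased-demand", "¬product-liability"]),
    "C2": ("product-liability", ["popular-product", "unsafe-packaging"]),
    "C3": ("increased-demand", ["popular-product", "established-market"]),
    "C4": ("increased-demand", ["new-market", "superior-flavor"]),
}

def nand_equations(clauses):
    eqs = {}
    for label, (head, body) in clauses.items():
        eqs[label] = list(body)                  # rule (1): Ci = NAND(Bi)
        eqs.setdefault(head, []).append(label)   # rule (2): P = NAND(C_P)
        for lit in body:
            if lit.startswith("¬"):
                eqs[lit] = [lit[1:]]             # rule (3): ¬P = NAND({P})
    return eqs

for lhs, rhs in nand_equations(CLAUSES).items():
    print(f"{lhs} = NAND({{{', '.join(rhs)}}})")

The dt-graph defined next follows directly from these equations: each left-hand side becomes a node with an edge to each literal on its right-hand side.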
We now use Γ̂ to obtain a useful graph representation of Γ. For an equation Γ̂_i in Γ̂, let h(Γ̂_i) refer to the left side of Γ̂_i and let b(Γ̂_i) refer to the set of literals which appear on the right side of Γ̂_i. In other words, h(Γ̂_i) = NAND(b(Γ̂_i)).

Definition: A dt-graph ∆_Γ for a domain theory Γ consists of a set of nodes which correspond to the literals of Γ̂ and a set of directed edges corresponding to the set of ordered pairs {〈x, y〉 | x = h(Γ̂_i), y ∈ b(Γ̂_i), Γ̂_i ∈ Γ̂}. In addition, for each root r we add an edge, e_r, leading into r (from some artificial node).

In other words, ∆_Γ consists of edges from each literal in Γ̂ to each of its antecedents. The dt-graph representation of Τ is shown in Figure 1.

Let n_e be the node to which the edge e leads and let n^e be the node from which it comes. If n_e is a clause, then we say that e is a clause edge; if n_e is a root, then we say that e is a root edge; if n_e is a literal and n^e is a clause, then we say that e is a literal edge; if n_e is a proposition and n^e is its negation, then we say that e is a negation edge.

The dt-graph ∆_Γ is very much like an AND-OR graph for Γ. It has, however, a very significant advantage over AND-OR graphs because it collapses the distinction between clause edges and literal edges, which is central to the AND-OR graph representation. In fact, even negation edges (which do not appear at all in the AND-OR representation) are not distinguished from literal edges and clause edges in the dt-graph representation.

In terms of the dt-graph ∆_Γ, there are two basic revision operators: deleting edges and adding edges. What are the effects of adding or deleting edges from ∆_Γ? If the length of every path from a root r to a node n is even (odd), then n is said to be an even (odd) node for r. If n_e is even (odd) for r_i, then e is said to be even (odd) for r_i. (Of course, it is possible that the depth of an edge is neither even nor odd.) Deleting an even edge for r_i specializes the definition of r_i in the sense that if ∆_Γ′ is the result of the deletion, then Γ′_i(E) ≤ Γ_i(E) for all exemplars 〈E, Θ(E)〉; likewise, adding an even edge for r_i generalizes the definition of r_i in the sense that if ∆_Γ′ is the result of adding the edge to ∆_Γ, then Γ′_i(E) ≥ Γ_i(E). Analogously, deleting an odd edge for r_i generalizes the definition of r_i, while adding an odd edge for r_i specializes the definition of r_i. (Deleting or adding an edge which is neither odd nor even for r_i might result in a new definition of r_i which is neither strictly more general nor strictly more specific.)

To understand this intuitively, first consider the case in which there are no negation edges in ∆_Γ. Then an even edge in ∆_Γ represents a clause in Γ, so that deleting is specialization and adding is generalization. An odd edge in ∆_Γ represents a literal in the body of a clause in Γ, so that deleting is generalization and adding is specialization. Now, if an odd number of negation edges are present on the path from r_i to an edge, then the role of the edge is reversed.

Weighted Graphs

A weighted dt-graph is an ordered pair 〈∆_Γ, w〉, where ∆_Γ is a dt-graph and w is an assignment of values in (0, 1] to each node and edge in ∆_Γ. For an edge e, w(e) is meant to represent the user's degree of confidence that the edge e need not be deleted to obtain the correct domain theory. For a node n, w(n) is the user's degree of confidence that no edge leading from the node n need be added in order to obtain the correct domain theory. Thus, for example, the assignment w(n) = 1 means that it is certain that no edge need be added to the node n, and the assignment w(e) = 1 means that it is certain that e should not be deleted. Observe that if the node n is labeled by a negative literal or an observable proposition, then w(n) = 1 by definition, since graphs obtained by adding edges to such nodes do not correspond to any domain theory. Likewise, if e is a root edge or a negation edge, then w(e) = 1.

For practical reasons, we conflate the weight w(e) of an edge e and the weight w(n_e) of the node n_e into a single value, p(e) = w(e) × w(n_e), associated with the edge e. The value p(e) is the user's confidence that e need not be repaired, either by deletion or by dilution via addition of child edges.

There are many ways that these values can be assigned. Ideally, they can be provided by the expert such that they actually reflect the expert's degree of confidence in each element of the theory. However, even in the absence of such information, values can be assigned by default; for example, all elements can be assigned equal value. A more sophisticated method of assigning values is to assign higher values to elements which have greater "semantic impact" (e.g., those closer to the roots). The details of one such method are given in Appendix A. It is also, of course, possible for the expert to assign some weights and for the rest to be assigned according to some default scheme. For example, in the weighted dt-graph 〈∆_Τ, p〉 shown in Figure 2, some edges have been assigned weight near 1 and others have been assigned weights according to a simple default scheme.

The semantics of the values associated with the edges can be made clear by considering the case in which it is known that the correct dt-graph is a subgraph of the given dt-graph, ∆. Consider a probability function on the space of all subgraphs of ∆. The weight of an edge is simply the sum of the probabilities of the subgraphs in which the edge appears. Thus the weight of an edge is the probability that the edge does indeed appear in the target dt-graph. We easily extend this to the case where the target dt-graph is not necessarily a subgraph of the given one.
Conversely, given only the probabilities associated with edges, and assuming that the deletions of different edges are independent events, we can compute the probability of a subgraph ∆′. Since p(e) is the probability that e is not deleted and 1 − p(e) is the probability that e is deleted, it follows that

p(∆′) = Π_{e ∈ ∆′} p(e) × Π_{e ∈ ∆−∆′} (1 − p(e)).

Letting S = ∆ − ∆′, we rewrite this as

p(∆′) = Π_{e ∈ ∆−S} p(e) × Π_{e ∈ S} (1 − p(e)).

We use this formula as a basis for assigning a value to each dt-graph ∆′ obtainable from ∆ via revision of the set of edges S, even in the case where edge-independence does not hold and even in the case in which ∆′ is not a subgraph of ∆. We simply define

w(∆′) = Π_{e ∈ ∆−S} p(e) × Π_{e ∈ S} (1 − p(e)).

(In the event that ∆ and ∆′ are such that S is not uniquely defined, choose S such that w(∆′) is maximized.) Note that where independence holds and ∆′ is a subgraph of ∆, we have w(∆′) = p(∆′).

Objectives of Theory Revision

Now we can formally define the proper objective of a theory revision algorithm:

Given a weighted dt-graph 〈∆, p〉 and a set of exemplars Ζ, find a dt-graph ∆′ such that ∆′ correctly classifies every exemplar in Ζ and w(∆′) is maximal over all such dt-graphs.

Restating this in the terminology of information theory, we define the radicality of a dt-graph ∆′ relative to an initial weighted dt-graph Κ = 〈∆, p〉 as

Rad_Κ(∆′) = Σ_{e ∈ ∆−S} −log(p(e)) + Σ_{e ∈ S} −log(1 − p(e)),

where S is the set of edges of ∆ which need to be revised in order to obtain ∆′. Thus, given a weighted dt-graph Κ and a set of exemplars Ζ, we wish to find the least radical dt-graph relative to Κ which correctly classifies the set of exemplars Ζ.

Note that radicality is a straightforward measure of the quality of a revision set which neatly balances syntactic and semantic considerations. It has often been noted that minimizing syntactic change alone can lead to counter-intuitive results by giving preference to changes near the root which radically alter the semantics of the theory. On the other hand, regardless of the distribution of examples, minimizing semantic change alone results in simply appending to the domain theory the correct classifications of the given misclassified examples, without affecting the classification of any other examples.

Minimizing radicality automatically takes into account both these criteria. Thus, for example, by assigning higher initial weights to edges with greater semantic impact (as in our default scheme of Appendix A), the syntactic advantage of revising close to the root is offset by the higher cost of such revisions. For example, suppose we are given the theory Τ of the introduction and the single misclassified exemplar 〈{unsafe-packaging, new-market}, 〈1〉〉. There are several possible revisions which would bring Τ into accord with the exemplar. We could, for example, add a new clause buy-stock ← unsafe-packaging ∧ new-market, delete superior-flavor from clause C4, delete popular-product and established-market from clause C3, or delete increased-demand from clause C1. Given the weights of Figure 2, the deletion of superior-flavor from clause C4 is clearly the least radical revision.
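Radicality is simple to compute for a candidate revision set. The following is a small sketch (ours; the edge names and weights are invented for illustration) of the definition above.

from math import log

def radicality(p, revised):
    """p: edge -> weight in (0, 1]; revised: the revision set S."""
    # Unrevised edges cost -log p(e); revised ones cost -log(1 - p(e)).
    return sum(-log(1.0 - p[e]) if e in revised else -log(p[e])
               for e in p)

# Revising a low-confidence edge is cheap; revising a high-confidence
# edge is expensive (and an edge with p(e) = 1 is effectively immutable,
# since -log(1 - 1) is infinite).
p = {"C4:superior-flavor": 0.55, "C1:increased-demand": 0.99}
print(radicality(p, {"C4:superior-flavor"}))   # modest cost
print(radicality(p, {"C1:increased-demand"}))  # much larger cost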
Observe that in the special case where all edges are assigned identical initial weights, regardless of their semantic strength, minimization of radicality does indeed reduce to a form of minimization of syntactic change. We wish to point out, however, that even in this case our definition of "syntactic change" differs from some previous definitions (Wogulis & Pazzani, 1993). Whereas those definitions count the number of deleted and added edges, we count the number of edges deleted or added to. To understand why this is preferable, consider the case in which some internal literal, which happens to have a large definition, is omitted from one of the clauses in the target theory. Methods which count the number of added edges will be strongly biased against restoring this literal, preferring to make several different repairs which collectively involve fewer edges rather than a single repair involving more edges. Nevertheless, given the assumption that the probabilities of the various edges in the given theory being mistaken are equal, it is far more intuitive to repair at a single edge only, as PTR does. (We agree, though, that once an edge has been chosen for repair, the chosen repair should be minimal over all equally effective repairs.)

Finding Flawed Elements

PTR is an algorithm which finds an adequate set of revisions of approximately minimum radicality. It achieves this by locating flawed edges and then repairing them. In this section we give the algorithm for locating flawed edges; in the next section we show how to repair them.

The underlying principle of locating flawed edges is to process exemplars one at a time, in each case updating the weights associated with edges in accordance with the information contained in the exemplars. We measure the "flow" of a proof (or refutation) through the edges of the graph. The more an edge contributes to the correct classification of an example, the more its weight is raised; the more it contributes to the misclassification of the example, the more its weight is lowered. If the weight of an edge drops below a prespecified revision threshold σ, it is revised.

The core of the algorithm is the method of updating the weights. Recall that the weight represents the probability that an edge appears in the target domain theory. The most natural way to update these weights, then, is to replace the probability that an edge need not be revised with the conditional probability that it need not be revised given the classification of an exemplar. As we shall see later, the computation of conditional probabilities ensures many desirable properties of updating which ad hoc methods are liable to miss.

Processing a Single Exemplar

One of the most important results of this paper is that under certain conditions the conditional probabilities of all the edges in the graph can be computed in a single bottom-up-then-top-down sweep through the dt-graph. We shall employ this method of computation even when those conditions do not hold. In this way, updating is performed in highly efficient fashion while, at the same time, retaining the relevant desirable properties of conditional probabilities.

More precisely, the algorithm proceeds as follows. We think of the nodes of ∆_Γ which represent observable propositions as input nodes, and we think of the values assigned by an example E to each observable proposition as inputs.
Recall that the assignment of weights to the edges is associated with an implicit assignment of probabilities to the various dt-graphs obtainable via revision of ∆_Γ. For some of these dt-graphs, the root r_i is provable from the example E, while for others it is not. We wish to make a bottom-up pass through Κ = 〈∆_Γ, p〉 in order to compute (or at least approximate), for each root r_i, the probability that the target domain theory is such that r_i is true for the example E. The obtained probability can then be compared with the desired result, Θ_i(E), and the resulting difference can be used as a basis for adjusting the weight w(e) of each edge e.

Let E(P) = 1 if P is true in E, and E(P) = 0 if P is false in E.

We say that a node n ∈ ∆_Γ is true if the literal of Γ̂ which labels it is true. Now, a node passes the value "true" up the graph if it is either true or deleted, i.e., if it is not both undeleted and false. Thus, for an edge e such that n_e is the observable proposition P, the value

u_E(e) = 1 − [p(e) × (1 − E(P))]

is the probability of the value "true" being passed up the graph from e. Now, recalling that a node in ∆_Γ represents a NAND operation, if the truth of a node in ∆_Γ is independent of the truth of any of its brothers, then for any edge e, the probability of "true" being passed up the graph is

u_E(e) = 1 − p(e) × Π_{s ∈ children(e)} u_E(s).

We call u_E(e) the flow of E through e.

We have defined the flow u_E(e) such that, under appropriate independence conditions, for any node n_e, u_E(e) is in fact the probability that n_e is true given 〈∆_Γ, w〉 and E. (For a formal proof of this, see Appendix B.) In particular, for a root r_i, the flow u_E(e_{r_i}) is, even in the absence of the independence conditions, a good approximation of the probability that the target theory is such that r_i is true given 〈∆_Γ, w〉 and E.

In the second stage of the updating algorithm, we propagate the difference between each computed value u_E(e_{r_i}) (which lies somewhere between 0 and 1) and its target value Θ_i(E) (which is either 0 or 1) top-down through ∆_Γ, in a process similar to backpropagation in neural networks. As we proceed, we compute a new value v_E(e), as well as an updated value for p(e), for every edge e in ∆_Γ. The new value v_E(e) represents an updating of u_E(e) where the correct classification, Θ(E), of the example E has been taken into account.

Thus, we begin by setting each value v_E(e_{r_i}) to reflect the correct classification of the example. Let ε > 0 be some very small constant and let

v_E(e_{r_i}) = ε if Θ_i(E) = 0, and v_E(e_{r_i}) = 1 − ε if Θ_i(E) = 1.

Now we proceed top-down through ∆_Γ, computing v_E(e) for each edge in ∆_Γ. In each case we compute v_E(e) on the basis of u_E(e), that is, on the basis of how much of the proof (or refutation) of E flows through the edge e. The precise formula is

v_E(e) = 1 − (1 − u_E(e)) × v_E(f(e)) / u_E(f(e)),

where f(e) is that parent of e for which the relative change |1 − max[v_E(f(e)), u_E(f(e))] / min[v_E(f(e)), u_E(f(e))]| is greatest, i.e., the parent whose value changed the most. We show in Appendix B why this formula works.

Finally, we compute p_new(e), the new value of p(e), using the current value of p(e) and the values of v_E(e) and u_E(e) just computed:

p_new(e) = 1 − (1 − p(e)) × v_E(e) / u_E(e).

If the deletions of different edges are independent events and Θ is known to be a subgraph of Γ, then p_new(e) is the conditional probability that the edge e appears in Θ, given the exemplar 〈E, Θ(E)〉 (see proof in Appendix B).
Figure 3 gives the pseudo code for processing a single exemplar:

u ⇐ BottomUp(∆, p, E); S ⇐ ∅; V ⇐ Roots(∆);
for r_i ∈ Roots(∆) do begin
    if Θ_i(E) = 1 then v(e_{r_i}) ⇐ 1 − ε; else v(e_{r_i}) ⇐ ε;
    S ⇐ Merge(S, Children(r_i, ∆));
end
while S ≠ ∅ do begin
    e ⇐ PopSuitableChild(S, V);
    V ⇐ AddElement(e, V);
    f ⇐ MostChangedParent(e, ∆);
    v(e) ⇐ 1 − (1 − u(e)) × v(f) / u(f);
    p(e) ⇐ 1 − (1 − p(e)) × v(e) / u(e);
    S ⇐ Merge(S, Children(e, ∆));
end

Figure 3: Pseudo code for processing a single exemplar.

Consider the application of this updating algorithm to the weighted dt-graph of Figure 2. We are given the exemplar 〈{unsafe-packaging, new-market}, 〈1〉〉, i.e., the example in which unsafe-packaging and new-market are true (and all other observables are false) and which should yield a derivation of the root buy-stock. The weighted dt-graph obtained by applying the algorithm is shown in Figure 4. This example illustrates some important general properties of the method.

(1) Given an IN exemplar, the weight of an odd edge cannot decrease and the weight of an even edge cannot increase. (The analogous property holds for an OUT exemplar.) In the case where no negation edge appears in ∆_Γ, this corresponds to the fact that a clause cannot help prevent a proof, and literals in the body of a clause cannot help complete a proof. Note in particular that the weights of the edges corresponding to the literals popular-product and established-market in clause C3 dropped by the same amount, reflecting the identical roles played by them in this example. However, the weight of the edge corresponding to the literal superior-flavor in clause C4 drops a great deal more than both those edges, reflecting the fact that the deletion of superior-flavor alone would allow a proof of buy-stock, while the deletion of either popular-product alone or established-market alone would not.

(2) An edge with initial weight 1 is immutable; its weight remains 1 forever. Thus, although an edge with weight 1, such as that corresponding to the literal increased-demand in clause C1, may contribute to the prevention of a desired proof, its weight is not diminished, since we are told that there is no possibility of that literal being flawed.

(3) If the processed exemplar can only be correctly classified if a particular edge e is revised, then the updated probability of e will approach 0 and e will be immediately revised. Thus, for example, were the initial weights of the edges corresponding to established-market and popular-product in C3 to approach 1, the weight of the edge corresponding to superior-flavor in C4 would approach 0. Since we use weights only as a temporary device for locating flawed elements, this property renders our updating method more appropriate for our purposes than standard backpropagation techniques, which adjust weights gradually to ensure convergence.

(4) The computational complexity of processing a single exemplar is linear in the size of the theory Γ. Thus, the updating algorithm is quite efficient when compared to revision techniques which rely on enumerating all proofs for a root. Note further that the computation required to update a weight is identical for every edge of ∆_Γ, regardless of edge type. Thus, PTR is well suited for mapping onto fine-grained SIMD machines.
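The sweep is easy to state in runnable form. Below is a self-contained sketch (ours, not the paper's code) on a toy dt-graph with a single root r, one clause c, and observable body {a, b}; since the toy graph is a tree, every edge has a unique parent and the MostChangedParent selection is trivial.

EPS = 0.01   # plays the role of the small constant epsilon

# Edges are (parent, child) pairs mapped to weights p(e);
# ("*", "r") is the artificial root edge e_r, weight 1 by definition.
EDGES = {("*", "r"): 1.0, ("r", "c"): 0.9, ("c", "a"): 0.8, ("c", "b"): 0.7}

def children(e, p):
    return [g for g in p if g[0] == e[1]]

def bottom_up(example, p):
    """Compute the flow u_E(e) for every edge."""
    u = {}
    def flow(e):
        if e not in u:
            kids = children(e, p)
            if kids:                       # internal node: NAND of children
                prod = 1.0
                for s in kids:
                    prod *= flow(s)
                u[e] = 1.0 - p[e] * prod
            else:                          # observable proposition n_e
                u[e] = 1.0 - p[e] * (1.0 - float(e[1] in example))
        return u[e]
    for e in p:
        flow(e)
    return u

def process_exemplar(example, p, theta):
    """One bottom-up/top-down sweep; returns the updated weights."""
    u = bottom_up(example, p)
    root = ("*", "r")
    v = {root: 1.0 - EPS if theta == 1 else EPS}
    p_new = dict(p)
    frontier = children(root, p)
    while frontier:
        e = frontier.pop(0)
        f = next(g for g in p if g[1] == e[0])     # unique parent here
        v[e] = 1.0 - (1.0 - u[e]) * v[f] / u[f]
        p_new[e] = 1.0 - (1.0 - p[e]) * v[e] / u[e]
        frontier += children(e, p)
    return p_new

# IN exemplar {a} of r (b is false): the even edge ("c", "b") collapses
# toward 0 (deleting b from the clause would allow the proof), while
# the odd edge ("r", "c") can only rise, exactly as property (1) states.
print(process_exemplar({"a"}, EDGES, theta=1))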
Processing Multiple Exemplars

As stated above, the updating method is applied iteratively, to one example at a time (in random order), until some edge drops below the revision threshold, σ. If, after a complete cycle, no edge has dropped below the revision threshold, the examples are reordered (randomly) and the updating is continued.

Revising a Flawed Edge

Once an edge has been selected for revision, we must decide how to revise it. Recall that p(e) represents the product of w(e) and w(n_e). Thus, the drop in p(e) indicates either that e needs to be deleted or that, less dramatically, a subtree needs to be appended to the node n_e. Thus, we need to determine whether to delete an edge completely or to simply weaken it by adding children; intuitively, adding edges to a clause node weakens the clause by adding conditions to its body, while adding edges to a proposition node weakens the proposition's refutation power by adding clauses to its definition. Further, if we decide to add children, then we need to determine which children to add.

Finding Relevant Exemplars

The first stage in making such a determination consists of establishing, for each exemplar, the role of the edge in enabling or preventing a derivation of a root. More specifically, for an IN exemplar 〈E, Θ(E)〉 of some root r, an edge e might play a positive role by facilitating a proof of r, play a destructive role by preventing a proof of r, or be simply irrelevant to a proof of r.

Once the sets of exemplars for which e plays a positive role or a destructive role are determined, it is possible to append to e an appropriate subtree which effectively redefines the role of e such that it is used only for those exemplars for which it plays a positive role. How, then, can we measure the role of e in allowing or preventing a proof of r from E?

At first glance, it would appear to be sufficient to compare the graph ∆ with the graph ∆_e which results from deleting e from ∆. If E ⊢_∆ r and E ⊬_{∆_e} r (or vice versa), then it is clear that e is "responsible" for r being provable or not provable given the exemplar 〈E, Θ(E)〉. But this criterion is too rigid. In the case of an OUT exemplar, even if it is the case that E ⊬_{∆_e} r, it is still necessary to modify e in the event that e allowed an additional proof of r from E. And, in the case of an IN exemplar, even if it is the case that E ⊢_∆ r, it is still necessary not to modify e in such a way as to further prevent a proof of r from E, since ultimately some proof is needed.

Fortunately, the weights assigned to the edges allow us the flexibility not merely to determine whether or not there is a proof of r from E given ∆ or ∆_e, but also to measure numerically the flow of E through r both with and without e. This is just what is needed to design a simple heuristic which captures the degree to which e contributes to a proof of r from E.

Let Κ = 〈∆, p〉 be the weighted dt-graph which is being revised. Let Κ^e = 〈∆, p′〉, where p′ is identical with p except that p′(e) = 1.
Let Κ_e = 〈∆, p′〉, where p′ is identical with p except that p′(e) = 0; that is, Κ_e is obtained from Κ by deleting the edge e. Then define, for each root r_i,

R_i(〈E, Θ(E)〉, e, Κ) = |(1 − Θ_i(E)) − u_E^{Κ^e}(e_{r_i})| / |(1 − Θ_i(E)) − u_E^{Κ_e}(e_{r_i})|.

Then if R_i(〈E, Θ(E)〉, e, Κ) > 2, we say that e is needed for E and r_i, and if R_i(〈E, Θ(E)〉, e, Κ) < 1/2, we say that e is destructive for E and r_i.

Intuitively, this means, for example, that the edge e is needed for an IN exemplar E of r_i if most of the derivation of r_i from E passes through the edge e. We have simply given formal definition to the notion that "most" of the derivation passes through e, namely, that the flow u_E^{Κ_e}(e_{r_i}) of E through r_i without e is less than half of the flow u_E^{Κ^e}(e_{r_i}) of E through r_i with e. For negation-free theories, this corresponds to the case where the edge e represents a clause which is critical for the derivation of r_i from E. The intuition for destructive edges and for OUT exemplars is analogous. Figure 6 gives the pseudo code for computing the needed and destructive sets for a given edge e and exemplar set Ζ:

N ⇐ ∅; D ⇐ ∅;
for 〈E, Θ(E)〉 ∈ Ζ do begin
    p_saved ⇐ p;
    p(e) ⇐ 1; u ⇐ BottomUp(∆, p, E); u_E^{Κ^e} ⇐ u(r_i);
    p(e) ⇐ 0; u ⇐ BottomUp(∆, p, E); u_E^{Κ_e} ⇐ u(r_i);
    p ⇐ p_saved;
    if Θ_i(E) = 1 then R_i ⇐ u_E^{Κ^e} / u_E^{Κ_e};
    else R_i ⇐ (1 − u_E^{Κ^e}) / (1 − u_E^{Κ_e});
    if R_i > 2 then N ⇐ N ∪ {E};
    if R_i < 1/2 then D ⇐ D ∪ {E};
end
return 〈N, D〉;

Figure 6: Pseudo code for computing the relevant sets (i.e., the needed and destructive sets) for a given edge e and exemplar set Ζ. The general idea is to compare proof "flow" (computed using the function BottomUp) both with and without the edge in question for each exemplar in the exemplar set. Note that the original weights are saved and later restored at the end of the computation.

In order to understand this better, let us now return to our example dt-graph in the state in which we left it in Figure 5. The edge corresponding to the clause C3 has dropped below the threshold. Now let us check for which exemplars that edge is needed and for which it is destructive. Computing R(〈E, Θ(E)〉, C3, Η) for each example E, we obtain the following:

R(〈{popular-product, unsafe-packaging, established-market}, 〈0〉〉, C3, Η) = 0.8
R(〈{unsafe-packaging, new-market}, 〈1〉〉, C3, Η) = 1.0
R(〈{popular-product, established-market, celebrity-endorsement}, 〈1〉〉, C3, Η) = 136.1
R(〈{popular-product, established-market, superior-flavor}, 〈0〉〉, C3, Η) = 0.1
R(〈{popular-product, established-market, ecologically-correct}, 〈0〉〉, C3, Η) = 0.1
R(〈{new-market, celebrity-endorsement}, 〈1〉〉, C3, Η) = 1.0
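In code, the test reduces to two extra bottom-up passes per exemplar. The sketch below (ours) parameterizes the flow computation: flow_to_root stands in for BottomUp evaluated at the root edge (with the toy sketch above, one could pass lambda ex, p: bottom_up(ex, p)[("*", "r")]); a single root is assumed, and flows are assumed to lie strictly between 0 and 1.

def relevant_sets(flow_to_root, p, e, exemplars):
    """Return the needed and destructive exemplar sets for edge e."""
    needed, destructive = [], []
    for example, theta in exemplars:
        with_e = flow_to_root(example, {**p, e: 1.0})      # K^e
        without_e = flow_to_root(example, {**p, e: 0.0})   # K_e
        if theta == 1:                                     # IN exemplar
            r = with_e / without_e
        else:                                              # OUT exemplar
            r = (1.0 - with_e) / (1.0 - without_e)
        if r > 2:
            needed.append(example)
        elif r < 0.5:
            destructive.append(example)
    return needed, destructive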
The high value of R(〈{popular-product, established-market, celebrity-endorsement}, 〈1〉〉, C3, Η) reflects the fact that without the clause C3 there is scant hope of a derivation of buy-stock for this example. (Of course, in principle, both new-market and superior-flavor might still be deleted from the body of clause C4, thus obviating the need for C3, but the high weight associated with the literal new-market in C4 indicates that this is unlikely.) The low values of R(〈{popular-product, established-market, superior-flavor}, 〈0〉〉, C3, Η) and R(〈{popular-product, established-market, ecologically-correct}, 〈0〉〉, C3, Η) reflect the fact that eliminating the clause C3 would greatly diminish the currently undesirably high flow through buy-stock (i.e., the probability of a derivation of buy-stock) from each of these examples.

An interesting case to examine is that of 〈{popular-product, unsafe-packaging, established-market}, 〈0〉〉. It is true that the elimination of C3 is helpful in preventing an unwanted derivation of buy-stock, because it prevents a derivation of increased-demand, which is necessary for buy-stock in clause C1. Nevertheless, R correctly reflects the fact that the clause C3 is not destructive for this exemplar, since even in the presence of C3, buy-stock is not derivable, due to the failure of the literal ¬product-liability.

Appending a Subtree

Let N be the set of examples for which e is needed for some root, and let D be the set of examples for which e is destructive for some root (and not needed for any other root). Having found the sets N and D, how do we repair e?

At this point, if the set D is non-empty and the set N is empty, we simply delete the edge from ∆_Γ. We justify this deletion by noting that no exemplars require e, so deletion will not compromise the performance of the theory. On the other hand, if N is not empty, we apply some inductive algorithm to produce a disjunctive normal form (DNF) logical expression, constructed from observable propositions, which is true for each exemplar in D but for no exemplar in N. We reformulate this DNF expression as a conjunction of clauses by taking a single new literal l as the head of each clause and using each conjunct in the DNF expression as the body of one of the clauses. This set of clauses is converted into a dt-graph ∆_n with l as its root. We then suture ∆_n to e by adding to ∆_Γ a new node t, an edge from e to t, and another edge from t to the root, l, of ∆_n.

In order to understand why this works, first note the important fact that (like every other subroutine of PTR) this method is essentially identical whether the edge e being repaired is a clause edge, a literal edge or a negation edge. However, when translating back from dt-graph form to domain theory form, the new node t will be interpreted differently depending on whether n_e is a clause or a literal. If n_e is a literal, then t is interpreted as the clause n_e ← l. If n_e is a clause, then t is interpreted as the negative literal ¬l.[11]

Now it is plain that those exemplars for which e is destructive will use the graph rooted at t to overcome the effect of e. If n_e is a literal which undesirably excludes E, then E will get by n_e by satisfying the clause t; if n_e is a clause which undesirably allows E, then E will be stopped by the new literal t = ¬l.
[11] Of course, if we were willing to sacrifice some elegance, we could allow separate sub-routines for the clause case and the literal case. This would allow us to make the dt-graphs to be sutured considerably more compact. In particular, if n_e is a literal, we could suture the children of l in ∆_n directly to n_e. If n_e is a clause, we could use the inductive algorithm to find a DNF expression which excludes examples in D and includes those in N (rather than the other way around, as we now do it). Translating this expression to a dt-graph ∆_n with root l, we could suture ∆_n to ∆_Γ by simply adding an edge from the clause n_e to the root l. Moreover, if ∆_n represents a single clause l ← l_1, ..., l_m, then we can simply suture each of the leaf-nodes l_1, ..., l_m directly to n_e. Note that if n_e is a leaf or a negative literal, it is inappropriate to append child edges to n_e. In such cases, we simply replace n_e with a new literal l′ and append to l′ both ∆_n and the graph of the clause l′ ← n_e.

Whenever a graph ∆_n is sutured into ∆_Γ, we must assign weights to the edges of ∆_n. Unlike the original domain theory, however, the new substructure is really just an artifact of the inductive algorithm used and the current relevant exemplar set. For this reason, it is almost certainly inadvisable to try to revise it as new exemplars are encountered. Instead, we would prefer that this new structure be removed and replaced with a more appropriate new construct should the need arise. To ensure replacement instead of revision, we assign unit certainty factors to all edges of the substructure. Since the internal edges of the new structure have weights equal to 1, they will never be revised. Finally, we assign a default weight λ to the substructure root edge 〈n_e, t〉 that connects the new component to the existing ∆_Γ, and we reset the weight of the revised edge, e, to the same value λ.

Figure 7 gives the pseudo code for performing the revision step just described.

Figure 7: Pseudo code for performing a revision. The function Revise takes a dt-graph, a set of exemplars Ζ, an edge to be revised e, and a parameter λ as inputs, and produces a revised dt-graph as output. The function DNF-ID3 is an inductive learning algorithm that produces a DNF formula that accepts elements of D but not of N, while the function DTGraph produces a dt-graph with the given root from the given DNF expression, as described in the text. For the sake of expository simplicity, we have not shown the special cases in which n_e is a leaf or e is a negation edge, as discussed in Footnote 11.
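Structurally, the repair step looks as follows. This is a loose sketch (ours, not the paper's Revise): the adjacency encoding, the invented node names, and the induce_dnf parameter (standing in for DNF-ID3, which must return a list of conjunctions over observables accepting D and rejecting N) are all our own, and the special cases of Footnote 11 are omitted.

LAMBDA = 0.7

def revise(adj, p, e, needed, destructive, induce_dnf):
    """adj: node -> list of child nodes; p: (parent, child) -> weight."""
    parent, n_e = e
    if not needed:                        # no exemplar needs e: delete it
        if destructive:
            adj[parent].remove(n_e)
            del p[e]
        return
    dnf = induce_dnf(positives=destructive, negatives=needed)
    t = ("t", n_e)                        # new node t sutured under n_e
    adj.setdefault(n_e, []).append(t)
    p[(n_e, t)] = LAMBDA                  # root edge of the new component
    adj[t] = []
    for i, conjunct in enumerate(dnf):    # one clause per DNF conjunct
        c = ("c", n_e, i)
        adj[t].append(c)
        adj[c] = list(conjunct)
        p[(t, c)] = 1.0                   # internal edges get weight 1,
        for lit in conjunct:              # so they are never revised
            p[(c, lit)] = 1.0
    p[e] = LAMBDA                         # reset the revised edge itself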
The PTR Algorithm

In this section we give the details of the control algorithm which puts the pieces of the previous two sections together and determines termination.

Control

The PTR algorithm is shown in Figure 9. We can briefly summarize its operation as follows:

(1) PTR processes exemplars in random order, updating weights and performing revisions when necessary.
(2) Whenever a revision is made, the domain theory which corresponds to the newly revised graph is checked against all exemplars.
(3) PTR terminates if (i) all exemplars are correctly classified, or (ii) every edge in the newly revised graph has weight 1.
(4) If, after a revision is made, PTR does not terminate, then it continues processing exemplars in random order.
(5) If, after a complete cycle of exemplars has been processed, there remain misclassified exemplars, then we (i) increment the revision threshold σ so that σ = min[σ + δ_σ, 1], and (ii) increment the value λ assigned to a revised edge and to the root edge of an added component, so that λ = min[λ + δ_λ, 1].
(6) Now we begin anew, processing the exemplars in (new) random order.

λ ⇐ λ_0; σ ⇐ σ_0;
loop begin
    for 〈E, Θ(E)〉 ∈ Shuffle(Ζ) do begin
        p ⇐ ProcessExemplar(〈∆, p〉, 〈E, Θ(E)〉, ε);
        for e ∈ ∆ such that p(e) < σ do begin
            〈∆, p〉 ⇐ Revise(〈∆, p〉, Ζ, e, λ);
            if ∀e ∈ ∆, p(e) = 1 or ∀E ∈ Ζ, Γ(E) = Θ(E) then return 〈∆, p〉;
        end
    end
    λ ⇐ min[λ + δ_λ, 1]; σ ⇐ min[σ + δ_σ, 1];
end

Figure 9: The PTR control algorithm. Input to the algorithm consists of a weighted dt-graph 〈∆, p〉, a set of exemplars Ζ, and five real-valued parameters λ_0, σ_0, δ_λ, δ_σ, and ε. The algorithm produces a revised weighted dt-graph whose implicit theory correctly classifies all exemplars in Ζ.
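A condensed runnable form of the Figure 9 loop is sketched below (ours, not the paper's code). The three helper arguments stand in for the routines of the previous sections: process_exemplar performs the weight-updating sweep, revise repairs one edge, and classifies_all checks termination criterion (i).

import random

def ptr(graph, p, exemplars, process_exemplar, revise, classifies_all,
        sigma=0.1, lam=0.7, d_sigma=0.03, d_lam=0.03):
    while True:
        order = list(exemplars)
        random.shuffle(order)                            # step (1)
        for ex in order:
            p = process_exemplar(graph, p, ex)
            for e in [e for e in p if p[e] < sigma]:
                graph, p = revise(graph, p, e, exemplars, lam)
                if classifies_all(graph, exemplars) or \
                   all(w == 1.0 for w in p.values()):    # steps (2)-(3)
                    return graph, p
        sigma = min(sigma + d_sigma, 1.0)                # step (5i)
        lam = min(lam + d_lam, 1.0)                      # step (5ii)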
For our example, after processing the additional exemplars 〈 {popular-product, established-market, superior-flavor}, 〈 0 〉〉and 〈 {popular-product, established-market, ecologically-correct}, 〈 0 〉〉 the weight of the edge corresponding to clause C3d rops below σ (see Figure 5), indicating that this edge needs to be revised.\nWe proceed with the revision by using the heuristic in Section 4.2 in order to determine for which set of exemplars the edge in question is needed and for which it is destructive.T he edge corresponding to the clause C3isneeded for { 〈 {popular-product, established-market, celebrity-endorsement}, 〈 1 〉〉} and is destructive for { 〈 {popular-product, established-market, ecologically-correct}, 〈 0 〉〉, 〈 {popular-product, established-market, superior-flavor}, 〈 0 〉〉}.\nSince the set for which the edge is needed is not empty,P TR chooses to append a subtree weakening clause C3rather than simply deleting the clause outright. Using these sets as input to ID3, we determine that the fact celebrity-endorsement suitably discriminates between the needed and destructive sets. Wethen repair the graph to obtain the weighted dt-graph shown in Figure 8. This graph corresponds to the theory in which the literal celebrity-endorsement has been added to the body of C3.\nWe now check the newly-obtained theory embodied in the dt-graph of Figure 8 (i.e., ignoring weights) against all the exemplars and determine that there are still misclassified exemplars, namely 〈 {unsafe-packaging, new-market}, 〈 1 〉〉and 〈 {new-market, celebrity-endorsement}, 〈 1 〉〉.\nThus, we continue processing the remaining exemplars in the original (random) order.\nAfter processing the exemplars 〈 {popular-product, unsafe-packaging, established-market}, 〈 0 〉〉, 〈 {popular-product, established-market, celebrity-endorsement}, 〈 1 〉〉,and 〈 {new-market, celebrity-endorsement}, 〈 1 〉〉, the weight of the edge corresponding to the literal superior-flavor in clause C4d rops belowt he revision threshold σ .W et hen determine that this edge is not needed for anye xemplar and thus the edge is simply deleted.\nAt this point, no misclassified exemplars remain. The final domain theory is:\nC1: buy-stock ← increased-demand ∧ ¬product-liability C2: product-liability ← popular-product ∧ unsafe-packaging C3: increased-demand ← popular-product ∧ established-market ∧ celebrity-endorsement C4: increased-demand ← new-market.\nThis theory correctly classifies all known exemplars and PTR terminates." }, { "figure_ref": [], "heading": "Experimental Evaluation", "publication_ref": [], "table_ref": [], "text": "In this section we will examine experimental evidence that illustrates several fundamental hypotheses concerning PTR. Wewish to showthat:\n(1) theories produced by PTR are of high quality in three respects: theyare of lowradicality, theya re of reasonable size, and theyp rovide accurate information regarding exemplars other than those used in the training.\n(2) PTR converges rapidly -that is, it requires fewc ycles to find an adequate set of revisions.\n(3) well-chosen initial weights provided by a domain expert can significantly improve the performance of PTR.\nMore precisely,giv enatheory Γ′ obtained by using PTR to revise a theory Γ on the basis of a set of training examplars, we will test these hypotheses as follows.\nRadicality.O ur claim is that Rad Κ (Γ′)i st ypically close to minimal overa ll theories which correctly classify all the examples. 
For cases where the target theory Θ is known, we measure the ratio Rad_Κ(Γ′) / Rad_Κ(Θ). If this value is less than 1, then PTR can be said to have done even ''better'' than finding the target theory, in the sense that it was able to correctly classify all training examples using less radical revisions than those required to restore the target theory. If the value is greater than 1, then PTR can be said to have ''over-revised'' the theory.

Cross-validation. We perform one hundred repetitions of cross-validation using nested sets of training examples. It should be noted that our actual objective is to minimize radicality, and that often there are theories that are less radical than the target theory which also satisfy all training examples. Thus, while cross-validation gives some indication that theory revision is being successfully performed, it is not a primary objective of theory revision.

Theory size. We count the number of clauses and literals in the revised theory merely to demonstrate that theories obtained using PTR are comprehensible. Of course, the precise size of the theory obtained by PTR is largely an artifact of the choice of inductive component.

Complexity. Processing a complete cycle of exemplars is O(n × d), where n is the number of edges in the graph and d is the number of exemplars. Likewise, repairing an edge is O(n × d). We will measure the number of cycles and the number of repairs made until convergence. (Recall that the number of cycles until convergence is in any event bounded by max[1/δ_σ, 1/δ_λ].) We will show that, in practice, the number of cycles is small even if δ_σ = δ_λ = 0.

Utility of Bias. We wish to show that user-provided guidance in choosing initial weights leads to faster and more accurate results. For cases in which the target theory Θ is known, let S be the set of edges of ∆_Γ which need to be revised in order to restore the target theory Θ. Define p_β(e) such that for each e ∈ S, 1 − p_β(e) = (1 − p(e))^(1/β), and for each e ∉ S, p_β(e) = (p(e))^(1/β). That is, each edge which needs to be revised to obtain the intended theory has its initial weight diminished, and each edge which need not be revised to obtain the intended theory has its weight increased. Let Κ_β = 〈∆_Γ, p_β〉. Then, for each β,

Rad_{Κ_β}(Θ) = −log( Π_{e ∈ S} (1 − p(e))^(1/β) × Π_{e ∉ S} (p(e))^(1/β) ) = (1/β) Rad_Κ(Θ).

Here, we compare the results of cross-validation and number-of-cycles experiments for β = 2 with their unbiased counterparts (i.e., β = 1).

Comparison with other Methods

In order to put our results in perspective we compare them with results obtained by other methods.

(1) ID3 (Quinlan, 1986) is the inductive component we use in PTR. Thus, using ID3 is equivalent to learning directly from the examples without using the initial flawed domain theory. By comparing results obtained using ID3 with those obtained using PTR we can gauge the usefulness of the given theory.

(2) EITHER (Ourston & Mooney, in press) uses enumeration of partial proofs in order to find a minimal set of literals, the repair of which will satisfy all the exemplars. Repairs are then made using an inductive component. EITHER is exponential in the size of the theory. It cannot handle theories with negated internal literals.
It also cannot handle theories with multiple roots unless those roots are mutually exclusive.

(3) KBANN (Towell & Shavlik, 1993) translates a symbolic domain theory into a neural net, uses backpropagation to adjust the weights of the net's edges, and then translates back from net form to partially symbolic form. Some of the rules in the theory output by KBANN might be numerical, i.e., not strictly symbolic.

(4) RAPTURE (Mahoney & Mooney, 1993) uses a variant of backpropagation to adjust certainty factors in a probabilistic domain theory. If necessary, it can also add a clause to a root. All the rules produced by RAPTURE are numerical. Like EITHER, RAPTURE cannot handle negated internal literals or multiple roots which are not mutually exclusive.

Observe that, relative to the other methods considered here, PTR is liberal in terms of the theories it can handle, in that (like KBANN, but unlike EITHER and RAPTURE) it can handle negated literals and non-mutually-exclusive multiple roots; it is also strict in terms of the theories it yields, in that (like EITHER, but unlike KBANN and RAPTURE) it produces strictly symbolic theories.

We have noted that both KBANN and RAPTURE output ''numerical'' rules. In the case of KBANN, a numerical rule is one which fires if the sum of weights associated with satisfied antecedents exceeds a threshold. In the case of RAPTURE, the rules are probabilistic rules using certainty factors along the lines of MYCIN (Buchanan & Shortliffe, 1984). One might ask, then, to what extent are results obtained by theory revision algorithms which output numerical rules merely artifacts of the use of such numerical rules? In other words, can we separate the effects of using numerical rules from the effects of learning?

To make this more concrete, consider the following simple method for transforming a symbolic domain theory into a probabilistic domain theory and then reclassifying examples using the obtained probabilistic theory. Suppose we are given some possibly-flawed domain theory Γ. Suppose further that we are not given the classification of even a single example. Assign a weight p(e) to each edge of ∆_Γ according to the default scheme of Appendix A. Now, using the bottom-up subroutine of the updating algorithm, compute u_E(e_r) for each test example E. (Recall that u_E(e_r) is a measure of how close to a derivation of r from E there is, given the weighted dt-graph 〈∆_Γ, p〉.) Now, for some chosen ''cutoff'' value 0 ≤ n ≤ 100, if E_0 is such that u_{E_0}(e_r) lies in the upper n% of the set of values {u_E(e_r)}, then conclude that Γ is true for E_0; otherwise conclude that Γ is false for E_0. This method, which for the purpose of discussion we call PTR*, does not use any training examples at all. Thus, if the results of theory revision systems that employ numerical rules can be matched by PTR*, which performs no learning, then it is clear that the results are merely artifacts of the use of numerical rules. (A short sketch of PTR* appears below.)

Results on the PROMOTER Theory

We first consider the PROMOTER theory from molecular biology (Murphy & Aha, 1992), which is of interest solely because it has been extensively studied in the theory revision literature (Towell & Shavlik, 1993), thus enabling explicit performance comparison with other algorithms. The PROMOTER theory is a flawed theory intended to recognize promoters in DNA nucleotides.
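Returning to PTR*: the following Python sketch implements the cutoff scheme just described. It assumes the root flows u_E(e_r) have already been computed by the bottom-up subroutine; the container names are illustrative. For instance, with n = 50 on PROMOTER's 106 examples, the 53 highest-ranking examples would be classified as promoters.

    def ptr_star(u_root, n):
        # PTR* (no learning): classify the top n% of test examples, ranked by
        # their root proof flow u_E(e_r), as IN; all others as OUT.
        # u_root: dict mapping each test example to its score u_E(e_r); 0 <= n <= 100.
        ranked = sorted(u_root, key=u_root.get, reverse=True)
        k = round(len(ranked) * n / 100.0)
        top = set(ranked[:k])
        return {example: example in top for example in u_root}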
The theory recognized none of a set of 106 examples as promoters, despite the fact that precisely half of them are indeed promoters. Unfortunately, the PROMOTER theory (like many others used in the theory revision literature) is trivial in that it is very shallow. Moreover, it is atypical of flawed domains in that it is overly specific but not overly general. Given the shortcomings of the PROMOTER theory, we will also test PTR on a synthetically-generated theory in which errors have been artificially introduced. Such synthetic theories are significantly deeper than those used to test previous methods. Moreover, the fact that the intended theory is known will enable us to perform experiments involving radicality and bias.

Cross-validation

In Figure 10 we compare the results of cross-validation for PROMOTER. We distinguish between methods which are purely symbolic (top plot) and those which use numerical rules (bottom plot).

The lower plot in Figure 10 highlights the fact that, using the value n = 50, PTR* achieves better accuracy, using no training examples, than any of the methods considered here achieve using 90 training examples. In particular, computing u_E(e_r) for each example, we obtain that of the 53 highest-ranking examples 50 are indeed promoters (and, therefore, of the 53 lowest-ranking examples 50 are indeed non-promoters). Thus, PTR* achieves 94.3% accuracy. (In fact, all of the 47 highest-ranking examples are promoters and all of the 47 lowest-ranking are not promoters. Thus, a more conservative version of PTR* which classifies the, say, 40% highest-ranking examples as IN and the 40% lowest-ranking as OUT would indeed achieve 100% accuracy over the examples for which it ventured a prediction.)

This merely shows that the original PROMOTER theory is very accurate provided that it is given a numerical interpretation. Thus we conclude that the success of RAPTURE and KBANN for this domain is not a consequence of learning from examples but rather an artifact of the use of numerical rules.

As for the three methods which yield symbolic rules (EITHER, PTR and ID3), we see in the top plot of Figure 10 that, as reported in (Ourston & Mooney, in press; Towell & Shavlik, 1993), the methods which exploit the given flawed theory do indeed achieve better results on PROMOTER than ID3, which does not exploit the theory. Moreover, as the size of the training set grows, the performance of PTR is increasingly better than that of EITHER.

Finally, we wish to point out an interesting fact about the example set. There is a set of 13 out of the 106 examples which each contain information substantially different from that in the rest of the examples. Experiments show that using ten-fold cross-validation on the 93 ''good'' examples yields 99.2% accuracy, while training on all 93 of these examples and testing on the 13 ''bad'' examples yields below 40% accuracy.

Theory size

The size of the output theory is an important measure of the comprehensibility of the output theory. Ideally, the size of the theory should not grow too rapidly as the number of training examples is increased, as larger theories are necessarily harder to interpret.
This observation holds both for the number of clauses in the theory as well as for the average number of antecedents in each of those clauses.

Theory sizes for the theories produced by PTR are shown in Figure 11. The most striking aspect of these numbers is that all measures of theory size are relatively stable with respect to training set size. Naturally, the exact values are to a large degree an artifact of the inductive learning component used. In contrast, for EITHER, theory size increases with training set size (Ourston, 1991): for 20 training examples the output theory size (clauses plus literals) is 78, while for 80 training examples it is 106.

Figure 10: PROMOTER: Error rates using nested training sets for purely symbolic theories (top plot) and numeric theories (bottom plot). Results for EITHER, RAPTURE, and KBANN are taken from (Mahoney & Mooney, 1993), while results for ID3 and PTR were generated using similar experimental procedures. Recall that PTR* is a non-learning numerical rule system; the PTR* line is extended horizontally for clarity.

Unfortunately, making direct comparisons with KBANN or RAPTURE is difficult. In the case of KBANN and RAPTURE, which allow numerical rules, comparison is impossible given the differences in the underlying representation languages. Nevertheless, it is clear that, as expected, KBANN produces significantly larger theories than PTR. For example, using 90 training examples from the PROMOTER theory, KBANN produces numerical theories with, on average, 10 clauses and 102 literals (Towell & Shavlik, 1993). These numbers would grow substantially if the theory were converted into strictly symbolic terms. RAPTURE, on the other hand, does not change the theory size but, like KBANN, yields numerical rules (Mahoney & Mooney, 1993).

Complexity

EITHER is exponential in the size of the theory and the number of training examples. For KBANN, each cycle of the training-by-backpropagation subroutine is O(d × n) (where d is the size of the network and n is the number of exemplars), and the number of such cycles typically numbers in the hundreds even for shallow nets.

Like backpropagation, the cost of processing an example with PTR is linear in the size of the theory. In contrast, however, PTR typically converges after processing only a tiny fraction of the number of examples required by standard backpropagation techniques. Figure 11 shows the average number of exemplars (not cycles!) processed by PTR until convergence as a function of training set size. The only other cost incurred by PTR is that of revising the theory. Each such revision is O(d × n). The average number of revisions to convergence is also shown in Figure 11.

Results on Synthetic Theories

The character of the PROMOTER theory makes it less than ideal for testing theory revision algorithms. We wish to consider theories which (i) are deeper, which (ii) make substantial use of negated internal literals, and which (iii) are overly general as well as overly specific. As opposed to shallow theories, which can generally be easily repaired at the leaf level, deeper theories often require repairs at internal levels of the theory. Therefore, a theory revision algorithm which may perform well on shallow theories will not necessarily scale up well to larger theories.
Moreover, as theory size increases, the computational complexity of an algorithm might preclude its application altogether. We wish to show that PTR scales well to larger, deeper theories.

Since deeper, propositional, real-world theories are scarce, we have generated them synthetically. As an added bonus, we now know the target theory, so we can perform controlled experiments on bias and radicality. In (Feldman, 1993) the aggregate results of experiments performed on a collection of synthetic theories are reported. In order to avoid the dubious practice of averaging results over different theories, and in order to highlight significant features of a particular application of PTR, we consider here one synthetic theory typical of those studied in (Feldman, 1993). The theory Θ is shown in Figure 12. Observe that Θ includes four levels of clauses and has many negated internal nodes. It is thus substantially deeper than theories considered before in testing theory revision algorithms. We artificially introduce, in succession, 15 errors into the theory Θ. The errors are shown in Figure 13. For each of these theories, we use the default initial weights assigned by the scheme of Appendix A.

Figure 12: The synthetic domain theory Θ used for the experiments of Section 6.

Let Γ_i be the theory obtained after introducing the first i of these errors. In Figure 14 we show the radicality, Rad_{Γ_i}(Θ), of Θ relative to each of the flawed theories Γ_i, for i = 3, 6, 9, 12, 15, as well as the number of examples misclassified by each of those theories. Note that, in general, the number of misclassified examples cannot necessarily be assumed to increase monotonically with the number of errors introduced, since introducing an error may either generalize or specialize the theory. For example, the fourth error introduced is ''undone'' by the fifth error. Nevertheless, it is the case that for this particular set of errors, each successive theory is more radical and misclassifies a larger number of examples with respect to Θ.

To measure radicality and accuracy, we choose 200 exemplars which are classified according to Θ. Now for each Γ_i (i = 3, 6, 9, 12, 15), we withhold 100 test examples and train on nested sets of 20, 40, 60, 80 and 100 training examples. We choose ten such partitions and run ten trials for each partition. In Figure 15, we graph the average value of Rad_{Γ_i}(Γ′)/Rad_{Γ_i}(Θ), where Γ′ is the theory produced by PTR. As can be seen, this value is consistently below 1. This indicates that the revisions found by PTR are less radical than what is needed to restore the original Θ. Thus, by the criterion of success that PTR set for itself, minimizing radicality, PTR does better than restoring Θ. As is to be expected, the larger the training set, the closer this value is to 1. Also note that as the number of errors introduced increases, the saving in radicality achieved by PTR increases as well, since a larger number of opportunities are created for more parsimonious revision. More precisely, the average numbers of revisions made by PTR to Γ_3, Γ_6, Γ_9, Γ_12, and Γ_15 with a 100-element training set are 1.4, 4.1, 7.6, 8.3, and 10.4, respectively.

An example will show how PTR achieves this. Note from Figure 13 that the errors introduced in Γ_3 are the additions of the rules:

A ← ¬p_6
S ← ¬p_5
S ← p_8, ¬p_15.

In most cases, PTR quickly locates the extraneous clause A ← ¬p_6 and discovers that deleting it results in the correct classification of all exemplars in the training set. In fact, this change also results in the correct classification of all test examples as well. The other two added rules do not affect the classification of any training examples, and therefore are not deleted or repaired by PTR. Thus the radicality of the changes made by PTR is lower than that required for restoring the original theory. In a minority of cases, PTR first deletes the clause B ← ¬p_0 and only then deletes the clause A ← ¬p_6. Since the literal B is higher in the tree than the literal S, the radicality of these changes is marginally higher than that required to restore the original theory.

In Figure 16, we graph the accuracy of Γ′ on the test set. As expected, accuracy degenerates somewhat as the number of errors is increased. Nevertheless, even for Γ_15, PTR yields theories which generalize accurately.
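The normalized radicality graphed in Figure 15 can be computed directly from the definition Rad_Κ(∆′) = Σ_{e ∉ S} −log p(e) + Σ_{e ∈ S} −log(1 − p(e)), where S is the set of revised edges. A small Python sketch follows; the container names are illustrative, and the edge weights are assumed to lie strictly between 0 and 1.

    import math

    def radicality(p, revised):
        # Radicality of a revision set S (= `revised`): the -log prior probability
        # that exactly the edges in S are flawed, given initial weights p.
        total = 0.0
        for e, w in p.items():
            total += -math.log(1.0 - w) if e in revised else -math.log(w)
        return total

    def normalized_radicality(p, ptr_revisions, target_revisions):
        # The ratio Rad(Γ')/Rad(Θ) of Figure 15; values below 1 mean PTR's
        # repairs are less radical than those restoring the target theory.
        return radicality(p, ptr_revisions) / radicality(p, target_revisions)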
Next we wish to show the effects of positive bias, i.e., to show that user-provided guidance in the choice of initial weights can improve speed of convergence and accuracy in cross-validation. For each of the flawed theories Γ_3 and Γ_15, we compare the performance of PTR using default initial weights and biased initial weights (β = 2). In Figure 18, we show how cross-validation accuracy increases when bias is introduced. In Figure 19, we show how the number of examples which need to be processed until convergence decreases when bias is introduced.

Returning to the example above, we see that the introduction of bias allows PTR to immediately find the flawed clause A ← ¬p_6 and to delete it straight away. In fact, PTR never requires the processing of more than 8 exemplars to do so. Thus, in this case, the introduction of bias both speeds up the revision process and results in the consistent choice of the optimal revision. Moreover, it has also been shown in (Feldman, 1993) that PTR is robust with respect to random perturbations in the initial weights. In particular, in tests on thirty different synthetically-generated theories, introducing small random perturbations to each edge of a dt-graph before training resulted in less than 2% of test examples being classified differently than when training was performed using the original initial weights.

Summary

Repairing internal literals and clauses is as natural for PTR as repairing leaves. Moreover, PTR converges rapidly. As a result, PTR scales up to deep theories without difficulty. Even for very badly flawed theories, PTR quickly finds repairs which correctly classify all known exemplars. These repairs are typically less radical than restoring the original theory and are close enough to the original theory to generalize accurately to test examples.

Moreover, although PTR is robust with respect to initial weights, user guidance in choosing these weights can significantly improve both speed of convergence and cross-validation accuracy.

Conclusions

In this paper, we have presented our approach, called PTR, to the theory revision problem for propositional theories. Our approach uses probabilities associated with domain theory elements to numerically track the ''flow'' of proof through the theory, allowing us to efficiently locate and repair flawed elements of the theory. We prove that PTR converges to a theory which correctly classifies all examples, and show experimentally that PTR is fast and accurate even for deep theories.

There are several ways in which PTR can be extended.

First-order theories. The updating method at the core of PTR assumes that provided exemplars unambiguously assign truth values to each observable proposition. In first-order theory revision, the truth of an observable predicate typically depends on variable assignments. Thus, in order to apply PTR to first-order theory revision it is necessary to determine ''optimal'' variable assignments on the basis of which probabilities can be updated. One method for doing so is discussed in (Feldman, 1993).

Inductive bias. PTR uses bias to locate flawed elements of a theory. Another type of bias can be used to determine which revision to make.
For example, it might be known that a particular clause might be missing a literal in its body but should under no circumstances be deleted, or that only certain types of literals can be added to the clause but not others. Likewise, it might be known that a particular literal is replaceable but not deletable, etc. It has been shown (Feldman et al., 1993) that by modifying the inductive component of PTR to account for such bias, both convergence speed and cross-validation accuracy are substantially improved.

Noisy exemplars. We have assumed that it is only the domain theory which is in need of revision, and that the exemplars are all correctly classified. Often this is not the case. Thus, it is necessary to modify PTR to take into account the possibility of reclassifying exemplars on the basis of the theory rather than vice-versa. The PTR* algorithm (Section 6) suggests that misclassified exemplars can sometimes be detected before processing. Briefly, the idea is that an example which allows multiple proofs of some root is almost certainly IN for that root, regardless of the classification we have been told. Thus, if u_E(e_r) is high, then E is probably IN regardless of what we are told; analogously if u_E(e_r) is low. A modified version of PTR based on this observation has already been successfully implemented (Koppel et al., 1993).

In conclusion, we believe the PTR system marks an important contribution to the domain theory revision problem. More specifically, the primary innovations reported here are:

(1) By assigning bias in the form of the probability that an element of a domain theory is flawed, we can clearly define the objective of a theory revision algorithm.

(2) By reformulating a domain theory as a weighted dt-graph, we can numerically trace the flow of a proof or refutation through the various elements of a domain theory.

(3) Proof flow can be used to efficiently update the probability that an element is flawed on the basis of an exemplar.

(4) By updating probabilities on the basis of exemplars, we can efficiently locate flawed elements of a theory.

(5) By using proof flow, we can determine precisely on the basis of which exemplars to revise a flawed element of the theory.

Let E be the example such that for each observable proposition P in Γ, E(P) is the a priori probability that P is true in a randomly selected example. In particular, for the typical case in which observable propositions are Boolean and all examples are equiprobable, E(P) = 1/2. E can be thought of as the ''average'' example. Then, if no edge of ∆_Γ has more than one parent-edge, we formally define the semantic significance, Μ(e), of an edge e in ∆_Γ as follows:

Μ(e) = u^{Κ_{I(e)}}_E(e_r) − u^{(Κ_{I(e)})_e}_E(e_r).

That is, Μ(e) is the difference of the flow of E through the root r, with and without the edge e.

Note that Μ(e) can be efficiently computed by first computing u^{Κ_I}_E(e) for every edge e in a single bottom-up traversal of ∆_Γ, and then computing Μ(e) for every edge e in a single top-down traversal of ∆_Γ, as follows:

(1) For a root edge r, Μ(r) = 1 − u^{Κ_I}_E(r).

(2) For all other edges, Μ(e) = Μ(f(e)) × 2(1 − u^{Κ_I}_E(e)) / u^{Κ_I}_E(e), where f(e) is the parent edge of e.

If some edge in ∆_Γ has more than one parent-edge, then we define Μ(e) for an edge by using this method of computation, where in place of Μ(f(e)) we use the maximum of Μ(f(e)) over the parent-edges f(e). Finally, for a set R of edges in G, we define Μ(R) = Σ_{e ∈ R} Μ(e).
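Reading recurrences (1) and (2) literally (the extraction of this formula is imperfect, so the exact form of (2) is an assumption), Μ(e) for every edge follows from one top-down pass once the flows u^{Κ_I}_E(e) of the ''average'' example have been computed bottom-up. The Python sketch below uses illustrative containers and handles multiple parent-edges by taking the maximum, as described above.

    def semantic_impact(edges, parents, u_avg):
        # Top-down computation of Μ(e) (sketch). edges: edge names in root-first
        # topological order (edges[0] is the root edge); parents: edge -> list of
        # parent edges; u_avg: edge -> proof flow of the "average" example under I.
        M = {edges[0]: 1.0 - u_avg[edges[0]]}          # (1) root edge
        for e in edges[1:]:                            # (2) all other edges
            best = max(M[f] for f in parents[e])       # max over parent-edges
            M[e] = best * 2.0 * (1.0 - u_avg[e]) / u_avg[e]
        return M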
Now, having computed Μ(e), we compute the initial weight assignment to e, p(e), in the following way. Choose some large C. For each e in ∆_Γ define:

p(e) = C^{Μ(e)} / (C^{Μ(e)} + 1).

If n_e is not an observable proposition, then Θ ⊢_E ¬n_e precisely if all its children in Θ are true in Θ, that is, if all its children are unused in Θ. But then

p(¬N_E(e)) = p(e) × p(Θ ⊢_E ¬n_e)    (edge independence)
= p(e) × Π_{s ∈ children(e)} p(N_E(s))    (induction hypothesis)
= p(e) × Π_{s ∈ children(e)} u_E(s),

so that u_E(e) = p(N_E(e)) = 1 − p(e) × Π_{s ∈ children(e)} u_E(s), as required.

This justifies the bottom-up part of the algorithm. In order to justify the top-down part we need one more definition.

Let p(e | 〈E, Θ(E)〉) be the probability that e ∈ ∆_Θ given 〈∆_Γ, p〉 and the exemplar 〈E, Θ(E)〉. Then

p(e | 〈E, Θ(E)〉) = Σ_{Γ′ ⊆ Γ} { p(∆_Θ = ∆_Γ′) | e ∈ ∆_Γ′, Θ(E) = Γ′(E) } / Σ_{Γ′ ⊆ Γ} { p(∆_Θ = ∆_Γ′) | Θ(E) = Γ′(E) }.

Now we have:

Theorem B2: If 〈∆_Γ, w〉 is deletion-only, edge-independent and tree-like, then for every edge e in ∆_Γ, p_new(e) = p(e | 〈E, Θ(E)〉).

In order to prove the theorem we need several lemmas.

Lemma B1: For every example E and every edge e in ∆_Γ, p(¬N_E(e)) = p(¬N_E(e), N_E(f(e))) = p(¬N_E(e) | N_E(f(e))) × p(N_E(f(e))). This follows immediately from the fact that if an edge e is used, then its parent-edge f(e) is not used.

Lemma B2: For every example E and every edge e in ∆_Γ, p(N_E(e) | N_E(f(e)), 〈E, Θ(E)〉) = p(N_E(e) | N_E(f(e))). This lemma states that N_E(e) and 〈E, Θ(E)〉 are conditionally independent given N_E(f(e)) (Pearl, 1988). That is, once N_E(f(e)) is known, 〈E, Θ(E)〉 adds no information regarding N_E(e). This is immediate from the fact that p(〈E, Θ(E)〉 | N_E(f(e))) can be expressed in terms of the probabilities associated with non-descendants of f(e), while p(N_E(e)) can be expressed in terms of the probabilities associated with descendants of e.

Lemma B3: For every example E and every edge e in ∆_Γ, v_E(e) = p(N_E(e) | 〈E, Θ(E)〉). The proof is by induction on the depth of the edge e.

Lemma B4: For every example E and every edge e in ∆_Γ, p(¬e) = p(¬e, ¬N_E(e)) = p(¬e | ¬N_E(e)) × p(¬N_E(e)), where ¬e is short for the event e ∉ ∆_Θ. This lemma, which is analogous to Lemma B1, follows from the fact that if e is deleted, then e is unused.

Lemma B5: For every example E and every edge e in ∆_Γ, p(¬e | ¬N_E(e), 〈E, Θ(E)〉) = p(¬e | ¬N_E(e)). This lemma, which is analogous to Lemma B2, states that ¬e and 〈E, Θ(E)〉 are conditionally independent given ¬N_E(e). That is, once ¬N_E(e) is known, 〈E, Θ(E)〉 adds no information regarding the probability of ¬e. This is immediate from the fact that p(〈E, Θ(E)〉 | ¬N_E(e)) can be expressed in terms of the probabilities of edges other than e.

We now have all the pieces to prove Theorem B2:

1 − p_new(e) = (1 − p(e)) × v_E(e) / u_E(e)    (definition of p_new)
= p(¬e) × v_E(e) / p(N_E(e))    (Theorem B1)
= p(N_E(e) | 〈E, Θ(E)〉) × p(¬e) / p(N_E(e))    (Lemma B3)
= p(N_E(e) | 〈E, Θ(E)〉) × p(¬e | N_E(e))    (Lemma B4)
= p(N_E(e) | 〈E, Θ(E)〉) × p(¬e | N_E(e), 〈E, Θ(E)〉)    (Lemma B5)
= p(¬e, N_E(e) | 〈E, Θ(E)〉)    (Bayes rule)
= p(¬e | 〈E, Θ(E)〉)    (Lemma B4)
= 1 − p(e | 〈E, Θ(E)〉).

Let us now turn to the proof of Theorem C1. We will use the following four lemmas, slight variants of which are proved in (Feldman, 1993).

Lemma C1: If Κ′ = 〈∆, p′〉 is obtained from Κ = 〈∆, p〉 via updating of weights, then for every edge e ∈ ∆ such that 0 < p(e) < 1, we have 0 < p′(e) < 1. (Recall that in the updating algorithm we defined v_E(e_ri) = ε if Θ_i(E) = 0 and v_E(e_ri) = 1 − ε if Θ_i(E) = 1; the somewhat annoying presence of ε > 0 is necessary for the proof of Lemma C1.)

Lemma C2: Let Κ = 〈∆, p〉 be a weighted dt-graph such that 0 < u^Κ_E(e_r) < 1 and let Κ′ = 〈∆, p′〉. Then, if for every edge e in ∆ such that 0 < p(e) < 1 we have 0 < p′(e) < 1, it follows that 0 < u^{Κ′}_E(e_r) < 1.

Lemma C3: Let Κ = 〈∆, p〉 be a weighted dt-graph such that u^Κ_E(e_r) > 0 and let Κ′ = 〈∆′, p′〉. Then, if for every edge e in ∆ it holds that either
(i) p′(e) = p(e), or
(ii) depth(e) is odd and u^{Κ′}_E(e) > 0, or
(iii) depth(e) is even and u^{Κ′}_E(e) < 1,
then u^{Κ′}_E(e_r) > 0. An analogous lemma holds where the roles of ''> 0'' and ''< 1'' are reversed.

Lemma C4: If e is an even edge in Κ, then u^{Κ_e}_E(e_r) ≥ u^Κ_E(e_r) ≥ u^{Κ^e}_E(e_r); if e is an odd edge in Κ, then u^{Κ_e}_E(e_r) ≤ u^Κ_E(e_r) ≤ u^{Κ^e}_E(e_r).

We can now prove consistency (Theorem C1).
We assume, without loss of generality, that 〈E, Θ(E)〉 is an IN exemplar of the root r, and prove, for each one of the five operations (updating and four revision operators) of PTR, that if Κ′ is obtained by that operation from Κ and u^Κ_E(e_r) > 0, then u^{Κ′}_E(e_r) > 0.

Proof of Theorem C1: The proof consists of five separate cases, each corresponding to one of the operations performed by PTR.

Case 1: Κ′ is obtained from Κ via updating of weights. By Lemma C1, for every edge e in ∆, if 0 < p(e) < 1 then 0 < p′(e) < 1. But then by Lemma C2, if u^Κ_E(e_r) > 0 then u^{Κ′}_E(e_r) > 0.

Case 2: Κ′ is obtained from Κ via deletion of an even edge e. From Lemma C4(i), we have u^{Κ_e}_E(e_r) ≥ u^Κ_E(e_r) > 0.

Case 3: Κ′ is obtained from Κ via deletion of an odd edge e. The edge e is deleted only if it is not needed for any exemplar. Suppose that, contrary to the theorem, there is an IN exemplar 〈E, Θ(E)〉 such that u^Κ_E(e_r) > 0 but u^{Κ′}_E(e_r) = 0. Then u^{Κ_e}_E(e_r) = u^{Κ′}_E(e_r) = 0, while by Lemma C4, u^{Κ^e}_E(e_r) ≥ u^Κ_E(e_r) > 0, so that R(〈E, Θ(E)〉, e, Κ) = ∞ > 2. But then e is needed for E, contradicting the fact that e is not needed for any exemplar.

Case 4: Κ′ is obtained from Κ via appending a subtree beneath an even edge e. If p′(e) < 1, then the result is immediate from Lemma C2. Otherwise, let f be the root edge of the subtree ∆_a which is appended to ∆ beneath e. Then Κ′|f = Κ^e. Suppose that, contrary to the theorem, there is some IN exemplar 〈E, Θ(E)〉 such that u^Κ_E(e_r) > 0 but u^{Κ′}_E(e_r) = 0. Then by Lemma C4(ii), u^{Κ^e}_E(e_r) = u^{Κ′|f}_E(e_r) ≤ u^{Κ′}_E(e_r) = 0, so that R(〈E, Θ(E)〉, e, Κ) < 1/2 and e is destructive for E in Κ. But then, by the construction of ∆_a, u^{Κ′}_E(f) = 1. Thus, u^{Κ′}_E(e) < 1, and the result follows immediately from Lemma C3.

Case 5: Κ′ is obtained from Κ via appending a subtree to Κ beneath the odd edge e. Suppose that, contrary to the theorem, for some IN exemplar 〈E, Θ(E)〉, u^Κ_E(e_r) > 0 but u^{Κ′}_E(e_r) = 0. Since Κ′_e = Κ_e, it follows that R(〈E, Θ(E)〉, e, Κ) = u^{Κ^e}_E(e_r) / u^{Κ_e}_E(e_r) = ∞ > 2. Thus, e is needed for E in Κ. Now, let f be the root edge of the appended subtree ∆_a. Then, by the construction of ∆_a, it follows that u^{Κ′}_E(f) < 1 and, therefore, u^{Κ′}_E(e) > 0. The result is immediate from Lemma C3.

This completes the proof of the theorem.

It is instructive to note why the proof of Theorem C1 fails if ∆ is not restricted to unambiguous single-rooted dt-graphs. In Case 4 of the proof of Theorem C1, we use the fact that if an edge e is destructive for an exemplar 〈E, Θ(E)〉, then the revision algorithm used to construct the subgraph ∆_a appended to e will be such that u^{Κ′}_E(f) = 1. However, this fact does not hold in the case where e is simultaneously needed and destructive. This can occur if e is a descendant of two roots where E is IN for one root and OUT for another root. It can also occur when one path from e to the root r is of even length and another path is of odd length.

Acknowledgments

The authors wish to thank Hillel Walters of Bar-Ilan University for his significant contributions to the content of this paper. The authors also wish to thank the JAIR reviewers for their exceptionally prompt and helpful remarks. Support for this research was provided in part by the Office of Naval Research through grant N00014-90-J-1542 (AMS, RF) and the Air Force Office of Scientific Research under contract F30602-93-C-0018 (AMS).

Appendix A: Assigning Initial Weights

In this appendix we give one method for assigning initial weights to the elements of a domain theory. The method is based on the topology of the domain theory and assumes that no user-provided information regarding the likelihood of errors is available. If such information is available, then it can be used to override the values determined by this method.

The method works as follows. First, for each edge e in ∆_Γ we define the ''semantic impact'' of e, Μ(e).
Μ(e) is meant to signify the proportion of examples whose classification is directly affected by the presence of e in ∆_Γ.

One straightforward way of formally defining Μ(e) is the following. Let Κ_I be the pair 〈∆_Γ, I〉 such that I assigns all root and negation edges the weight 1 and all other edges the weight 1/2. Let I(e) be identical to I except that e and all its ancestor edges have been assigned the weight 1.

Now, regardless of how Μ(e) is defined, the virtue of computing p(e) = C^{Μ(e)} / (C^{Μ(e)} + 1) from Μ(e) is the following: for such an initial assignment p, if two sets of edges of 〈∆_Γ, p〉 are of equal total strength, then as revision sets they are of equal radicality. This means that all revision sets of equal strength are a priori equally probable.

For a set of edges of ∆_Γ, the above can be formalized as follows:

Theorem A1: If R and S are sets of elements of Γ such that Μ(R) = Μ(S), then it follows that Rad(R) = Rad(S).

Proof of Theorem A1: Let R and S be sets of edges such that Μ(R) = Μ(S). Then e^{Rad(R) − Rad(S)} = C^{Μ(R) − Μ(S)} = 1. It follows immediately that Rad(R) = Rad(S).

A simple consequence which illustrates the intuitiveness of this theorem is the following: suppose we have two possible revisions of ∆, each of which entails deleting a simple literal. Suppose further that one literal, l_1, is deep in the tree and the other, l_2, is higher in the tree, so that Μ(l_2) = 4 × Μ(l_1). Then, using default initial weights as assigned above, the radicality of deleting l_2 is 4 times as great as the radicality of deleting l_1.

Appendix B: Updated Weights as Conditional Probabilities

In this appendix we prove that, under certain limiting conditions, the algorithm computes the conditional probabilities of the edges given the classification of the example.

Our first assumption for the purpose of this appendix is that the correct dt-graph ∆_Θ is known to be a subgraph of the given dt-graph ∆_Γ. This means that for every node n in ∆_Γ, w(n) = 1 (and, consequently, for every edge e in ∆_Γ, p(e) = w(e)). A pair 〈∆_Γ, w〉 with this property is said to be deletion-only.

Although we informally defined probabilities directly on edges, for the purposes of this appendix we formally define our probability function on the space of all subgraphs of ∆_Γ. That is, the elementary events are of the form ∆_Θ = ∆_Γ′ where ∆_Γ′ ⊆ ∆_Γ. Then the probability that e ∈ ∆_Θ is simply

p(e ∈ ∆_Θ) = Σ_{Γ′ ⊆ Γ} { p(∆_Θ = ∆_Γ′) | e ∈ ∆_Γ′ }.

We say that a deletion-only weighted dt-graph 〈∆_Γ, p〉 is edge-independent if for any Γ′ ⊆ Γ,

p(∆_Θ = ∆_Γ′) = Π_{e ∈ ∆_Γ′} p(e) × Π_{e ∈ ∆_Γ − ∆_Γ′} (1 − p(e)).

Finally, we say that ∆_Γ is tree-like if no edge e ∈ ∆_Γ has more than one parent-edge. Observe that any dt-graph which is connected and tree-like has only one root.

We will prove results for deletion-only, edge-independent, tree-like weighted dt-graphs. First we introduce some more terminology. Recall that every node in ∆_Γ is labeled by one of the literals in Γ and that, by definition, this literal is true if not all of its children in ∆_Γ are true. Recall also that the dt-graph ∆_Γ′ ⊆ ∆_Γ represents the set of NAND equations Γ′ ⊆ Γ. A literal l in Γ forces its parent in Γ to be true, given the set of equations Γ′ and the example E, if l appears in Γ′ and is false given Γ′ and E. (This follows from the definition of NAND.) Thus we say that an edge e in ∆_Γ is used by E in ∆_Γ′ if e ∈ ∆_Γ′ and Γ′ ⊢_E ¬n_e. If e is not used by E in ∆_Γ′, we write N^{Γ′}_E(e).
Note that N^{Γ′}_E(e_r) holds if and only if Γ′(E) = 1. Note also that, given the probabilities of the elementary events ∆_Γ′ = ∆_Θ, the probability p(N^Θ_E(e)) that the edge e is not used by E in the target domain theory Θ is simply

p(N^Θ_E(e)) = Σ_{Γ′ ⊆ Γ} { p(∆_Θ = ∆_Γ′) | N^{Γ′}_E(e) }.

Where there is no ambiguity, we will use N_E(e) to refer to N^Θ_E(e).

Theorem B1: If 〈∆_Γ, w〉 is a deletion-only, edge-independent, tree-like weighted dt-graph, then for every edge e in ∆_Γ, u_E(e) = p(N_E(e)).

Proof of Theorem B1: We use induction on the distance of n_e from its deepest descendant. If n_e is an observable proposition P, then e is used by E in Θ precisely if e ∈ Θ and P is false in E. Thus the probability that e is not used by E is 1 − p(e) × (1 − E(P)) = u_E(e).

Appendix C: Proof of Convergence

We have seen in Section 5 that PTR always terminates. We wish to show that when it does, all exemplars are classified correctly. We will prove this for domain theories which satisfy certain conditions which will be made precise below. The general idea of the proof is the following: by definition, the algorithm terminates either when all exemplars are correctly classified or when all edges have weight 1. Thus, it is only necessary to show that it is not possible to reach a state in which all edges have weight 1 and some exemplar is misclassified. We will prove that such a state fails to possess the property of ''consistency'', which is assumed to hold for the initial weighted dt-graph Κ, and which is preserved at all times by the algorithm.

Definition (Consistency): The weighted dt-graph Κ = 〈∆, p〉 is consistent with exemplar 〈E, Θ(E)〉 if, for every root r_i in ∆, either Θ_i(E) = 1 and u^Κ_E(e_ri) > 0, or Θ_i(E) = 0 and u^Κ_E(e_ri) < 1.

Recall that an edge e is defined to be even if it is of even depth along every path from a root, and odd if it is of odd depth along every path from a root. A domain theory is said to be unambiguous if every edge is either odd or even. Note that negation-free domain theories are unambiguous. We will prove our main theorem for unambiguous, single-root domain theories.

Recall that the only operations performed by PTR are:

(1) updating weights,
(2) deleting even edges,
(3) deleting odd edges,
(4) adding a subtree beneath an even edge, and
(5) adding a subtree beneath an odd edge.

We shall show that each of these operations is performed in such a way as to preserve consistency.

Theorem C1 (Consistency): If Κ = 〈∆, p〉 is a single-rooted, unambiguous weighted dt-graph which is consistent with the exemplar 〈E, Θ(E)〉, and Κ′ = 〈∆′, p′〉 is obtained from Κ via a single operation performed by PTR, then Κ′ is also a single-rooted, unambiguous dt-graph which is consistent with E.

Before we prove this theorem, we show that it easily implies convergence of the algorithm.

Theorem C2 (Convergence): Given a single-rooted, unambiguous weighted dt-graph Κ and a set of exemplars Ζ such that Κ is consistent with every exemplar in Ζ, PTR terminates and produces a dt-graph ∆′ which classifies every exemplar in Ζ correctly.

Proof of Theorem C2: If PTR terminates prior to each edge being assigned the weight 1, then by definition all exemplars are correctly classified. Suppose then that PTR produces a weighted dt-graph Κ′ = 〈∆′, p′〉 such that p′(e) = 1 for every e ∈ ∆′. Assume, contrary to the theorem, that some exemplar 〈E, Θ(E)〉 is misclassified by Κ′ for the root r. Without loss of generality, assume that 〈E, Θ(E)〉 is an IN exemplar of r. Since p′(e) = 1 for every edge, this means that u^{Κ′}_E(e_r) = 0.
But this is impossible, since the consistency of Κ implies that u^Κ_E(e_r) > 0, and thus it follows from Theorem C1 that for any Κ′ obtainable from Κ, u^{Κ′}_E(e_r) > 0. This contradicts the assumption that E is misclassified by Κ′.

Notation:

C_i: A clause label.
H_i: A clause head; it consists of a single positive literal.
B_i: A clause body; it consists of a conjunction of positive or negative literals.
E: An example; it is a set of observable propositions.
Γ_i(E): The classification of the example E for the i-th root according to domain theory Γ.
Θ_i(E): The correct classification of the example E for the i-th root.
〈E, Θ(E)〉: An exemplar, a classified example.
Γ̂: The set of NAND clauses equivalent to Γ.
∆_Γ: The dt-graph representation of Γ̂.
n_e: The node to which the edge e leads.
n^e: The node from which the edge e comes.
p(e): The weight of the edge e; it represents the probability that the edge e needs to be deleted or that edges need to be appended to the node n_e.
Κ = 〈∆, p〉: A weighted dt-graph.
Κ^e: Same as Κ but with the weight of the edge e equal to 1.
Κ_e: Same as Κ but with the edge e deleted.
u_E(e): The ''flow'' of proof from the example E through the edge e.
v_E(e): The adjusted flow of proof through e, taking into account the correct classification of the example E.
R_i(〈E, Θ(E)〉, e, Κ): The extent (ranging from 0 to ∞) to which the edge e in the weighted dt-graph Κ contributes to the correct classification of the example E for the i-th root. If R_i is less/more than 1, then e is harmful/helpful; if R_i = 1 then e is irrelevant.
σ: The revision threshold; if p(e) < σ then e is revised.
λ: The weight assigned to a revised edge and to the root of an appended component.
δ_σ: The revision threshold increment.
δ_λ: The revised edge weight increment.
Rad_Κ(Γ′): The radicality of the changes required to Κ in order to obtain a revised theory Γ′.
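As a concrete reading of the R_i entry above, the following Python sketch computes the relevance ratio for one exemplar from the two root flows u^{Κ^e}_E(e_r) (weight of e forced to 1) and u^{Κ_e}_E(e_r) (edge e deleted), and applies the thresholds used when the Revise subroutine splits exemplars into needed and destructive sets (Section 4.2). All names are illustrative assumptions.

    def relevance(u_keep, u_delete, theta_i):
        # Relevance ratio R_i of edge e for one exemplar (sketch).
        # u_keep = u_E(e_ri) with p(e) = 1; u_delete = u_E(e_ri) with e deleted;
        # theta_i = correct classification for root r_i (0 or 1).
        # A zero denominator corresponds to R = infinity in the paper.
        if theta_i == 1:
            return u_keep / u_delete
        return (1.0 - u_keep) / (1.0 - u_delete)

    def split_exemplars(exemplars, ratio_of):
        # Thresholds from Revise: R > 2 -> needed; R < 1/2 -> destructive.
        needed = [E for E in exemplars if ratio_of(E) > 2.0]
        destructive = [E for E in exemplars if ratio_of(E) < 0.5]
        return needed, destructive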
[ { "authors": "B Buchanan; E H Shortliffe", "journal": "Addison Wesley", "ref_id": "b0", "title": "Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project", "year": "1984" }, { "authors": "R Feldman", "journal": "", "ref_id": "b1", "title": "Probabilistic Revision of Logical Domain Theories", "year": "1993" }, { "authors": "R Feldman; M Koppel; A M Segre", "journal": "", "ref_id": "b2", "title": "The Relevance of Bias in the Revision of Approximate Domain Theories", "year": "1993-08" }, { "authors": "A Ginsberg", "journal": "", "ref_id": "b3", "title": "Theory Reduction, Theory Revision, and Retranslation", "year": "1990-07" }, { "authors": "M Koppel; R Feldman; A M Segre", "journal": "", "ref_id": "b4", "title": "Theory Revision Using Noisy Exemplars", "year": "1993-12" }, { "authors": "J Mahoney; R Mooney", "journal": "Connection Science", "ref_id": "b5", "title": "Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule-Bases", "year": "1993" }, { "authors": "P M Murphy; D W Aha", "journal": "", "ref_id": "b6", "title": "UCI Repository of Machine Learning Databases [Machine-readable data repository]", "year": "1992" }, { "authors": "D Ourston", "journal": "", "ref_id": "b7", "title": "Using Explanation-Based and Empirical Methods in Theory Revision", "year": "1991-08" }, { "authors": "D Ourston; R Mooney", "journal": "Artificial Intelligence", "ref_id": "b8", "title": "Theory Refinement Combining Analytical and Empirical Methods", "year": "" }, { "authors": "M Pazzani; C Brunk", "journal": "Knowledge Acquisition", "ref_id": "b9", "title": "Detecting and Correcting Errors in Rule-Based Expert Systems: An Integration of Empirical and Explanation-Based Learning", "year": "1991-06" }, { "authors": "J Pearl", "journal": "Morgan Kaufmann", "ref_id": "b10", "title": "Probabilistic Reasoning in Intelligent Systems", "year": "1988" }, { "authors": "J R Quinlan", "journal": "Machine Learning", "ref_id": "b11", "title": "Induction of Decision Trees", "year": "1986" }, { "authors": "G G Towell; J W Shavlik", "journal": "Machine Learning", "ref_id": "b12", "title": "Extracting Refined Rules From Knowledge-Based Neural Networks", "year": "1993-10" }, { "authors": "D C Wilkins", "journal": "", "ref_id": "b13", "title": "Knowledge Base Refinement Using Apprenticeship Learning Techniques", "year": "1988-07" }, { "authors": "J Wogulis; M J Pazzani", "journal": "", "ref_id": "b14", "title": "A Methodology for Evaluating Theory Revision Systems: Results with Audrey II", "year": "1993-08" } ]
[ { "formula_coordinates": [ 4, 90, 395.49, 432, 28.85 ], "formula_id": "formula_0", "formula_text": "Γ(E) =〈Γ 1 (E), ... , Γ n (E) 〉 where Γ i (E) = 1i f E |-Γ r i (using resolution) and Γ i (E) = 0i f E |-/ Γ r i ." }, { "formula_coordinates": [ 5, 212.1, 503.12, 224.11, 15.05 ], "formula_id": "formula_1", "formula_text": "H i ← B i ∈Γ,the equation Ĉi = NAND(B i )isin Γ." }, { "formula_coordinates": [ 6, 104.82, 107.41, 240.23, 94.25 ], "formula_id": "formula_2", "formula_text": "buy-stock = NAND({C 1 }), C 1 = NAND({increased-demand, ¬product-liability}), ¬product-liability = NAND({product-liability}), increased-demand = NAND({C 3 , C 4 }), product-liability = NAND({C 2 }), C 2 = NAND({popular-product, unsafe-packaging}), C 3 =" }, { "formula_coordinates": [ 8, 106.38, 424.56, 158.93, 26.25 ], "formula_id": "formula_3", "formula_text": "p(∆′) = e ∈∆′ Π p(e) × e ∈∆-∆′ Π 1 -p(e)." }, { "formula_coordinates": [ 8, 106.38, 470.58, 154.64, 25.88 ], "formula_id": "formula_4", "formula_text": "p(∆′) = e ∈∆-S Π p(e) × e ∈ S Π 1 -p(e)." }, { "formula_coordinates": [ 8, 105.37, 546.87, 156.61, 25.88 ], "formula_id": "formula_5", "formula_text": "w(∆′) = e ∈∆-S Π p(e) × e ∈ S Π 1 -p(e)." }, { "formula_coordinates": [ 9, 105.69, 651.18, 226.23, 25.88 ], "formula_id": "formula_6", "formula_text": "Rad Κ (∆′) = e ∈∆-S Σ -log( p(e)) + e ∈ S Σ -log(1 -p(e))" }, { "formula_coordinates": [ 11, 105.56, 455.12, 130.11, 36.33 ], "formula_id": "formula_7", "formula_text": "E(P) =    1 0 if P is true in E if P is false in E." }, { "formula_coordinates": [ 11, 90.09, 533.41, 138.18, 15.05 ], "formula_id": "formula_8", "formula_text": "u E (e) = 1 -[ p(e) × (1 -E(P))]" }, { "formula_coordinates": [ 12, 105.09, 89.49, 124.95, 23.31 ], "formula_id": "formula_9", "formula_text": "u E (e) = 1 -p(e) s ∈ children(e)" }, { "formula_coordinates": [ 12, 105.32, 323.67, 135.53, 36.33 ], "formula_id": "formula_10", "formula_text": "v E (e r i ) =    ε 1 -ε if Θ i (E) = 0 if Θ i (E) = 1." }, { "formula_coordinates": [ 12, 105.32, 409.15, 153.29, 26.85 ], "formula_id": "formula_11", "formula_text": "v E (e) = 1 -(1 -u E (e)) × v E ( f (e)) u E ( f (e))" }, { "formula_coordinates": [ 12, 272.57, 439.41, 132.95, 36.33 ], "formula_id": "formula_12", "formula_text": "   1 - max[v E ( f (e)), u E ( f (e))] min[v E ( f (e)), u E ( f (e))]" }, { "formula_coordinates": [ 12, 106.38, 523.06, 147.33, 26.85 ], "formula_id": "formula_13", "formula_text": "p new (e) = 1 -(1 -p(e)) × v E (e) u E (e) ." }, { "formula_coordinates": [ 13, 127.04, 368.2, 349.92, 167.38 ], "formula_id": "formula_14", "formula_text": "V ⇐ Roots(∆); for r i ∈ Roots(∆) do begin if Γ i (E) = 1 then v(r i ) ⇐ ε ; else v(r i ) ⇐ 1 -ε ; S ⇐ Merge(S, Children(r i , ∆)); end while S ≠∅ do begin e ⇐ PopSuitableChild(S, V ); V ⇐ AddElement(e, V ); f ⇐ MostChangedParent(e, ∆); v(e) ⇐ 1 -(1 -u(e)) × v( f ) u( f ) ; p(e) ⇐ 1 -(1 -p(e)) × v(e) u(" }, { "formula_coordinates": [ 17, 105.69, 488.92, 201.16, 66.84 ], "formula_id": "formula_15", "formula_text": "r i R i ( 〈 E, Θ(E) 〉 , e, Κ) =   1 -Θ i (E)   -u Κ e E (e r i )   1 -Θ i (E)   -u Κ e E (e r i )" }, { "formula_coordinates": [ 18, 105.69, 257.97, 397.56, 80.33 ], "formula_id": "formula_16", "formula_text": "R( 〈 {popular-product, unsafe-packaging, established-market}, 〈 0 〉〉, C3, Η) = 0. 8 R( 〈 {unsafe-packaging, new-market}, 〈 1 〉〉, C3, Η) = 1. 0 R( 〈 {popular-product, established-market, celebrity-endorsement}, 〈 1 〉〉, C3, Η) = 136. 
1 R( 〈 {popular-product, established-market, superior-flavor}, 〈 0 〉〉, C3, Η) = 0. 1 R( 〈 {popular-product, established-market, ecologically-correct}, 〈 0 〉〉, C3, Η) = 0. 1 R( 〈 {new-market, celebrity-endorsement}, 〈 1 〉〉, C3, Η) = 1. 0" }, { "formula_coordinates": [ 18, 138.86, 458.68, 239.72, 106.36 ], "formula_id": "formula_17", "formula_text": "Κ e E ⇐ u(r i ); p ⇐ p saved ; if Γ i (E) = 1 then R i ⇐ u Κ e E u Κ e E ; else R i ⇐ 1 -u Κ e E 1 -u Κ e E ; if R i >2then N ⇐ N ∪ {E}; if R i < 1 2 then D ⇐ D ∪ {E};" }, { "formula_coordinates": [ 22, 90, 677.6, 63.53, 36.33 ], "formula_id": "formula_18", "formula_text": "max    1 δ σ , 1 δ λ   " }, { "formula_coordinates": [ 23, 138.6, 544.5, 317.19, 72.27 ], "formula_id": "formula_19", "formula_text": "if ∀ e ∈∆, p(e) = 1 or ∀ E ∈Ζ, Γ(E) =Θ(E) then return 〈∆, p 〉 ; end λ ⇐ max[λ + δ λ ,1]; σ ⇐ max[σ + δ σ ,1]; end end" }, { "formula_coordinates": [ 25, 104.82, 150.69, 391.03, 53.93 ], "formula_id": "formula_20", "formula_text": "C1: buy-stock ← increased-demand ∧ ¬product-liability C2: product-liability ← popular-product ∧ unsafe-packaging C3: increased-demand ← popular-product ∧ established-market ∧ celebrity-endorsement C4: increased-demand ← new-market." }, { "formula_coordinates": [ 25, 442.83, 666.8, 79.18, 36.33 ], "formula_id": "formula_21", "formula_text": "1 δ σ , 1 δ λ    .W ew ill" }, { "formula_coordinates": [ 26, 105.69, 199.67, 288.42, 31.56 ], "formula_id": "formula_22", "formula_text": "Rad Κ β (Θ) =-log( e ∈ S Π (1 -p(e)) 1 β × e ∈ /S Π ( p(e)) 1 β ) = 1 β Rad Κ (Θ)." }, { "formula_coordinates": [ 31, 140.26, 281.29, 231.42, 120.65 ], "formula_id": "formula_23", "formula_text": "r ← A, BL ← T , p 1 r ← C, ¬D L ← p 2 , p 12 , p 16 A ← E, FM ← Z , ¬p 17 A ← p 0 , ¬G, p 1 , p 2 , p 3 M ← p 18 , ¬p 19 B ← ¬p 0 N ← ¬p 0 , p 1 B ← p 1 , ¬H N ← p 3 , p 4 , p 6 B ← p 4 , ¬p 11 N ← p 10 , ¬p 12 C ← I , JZ ← p 2 , p 3 C ← p 2 , ¬K Z ← ¬p 2 ," }, { "formula_coordinates": [ 34, 105.36, 150.69, 65.22, 41.45 ], "formula_id": "formula_24", "formula_text": "A ← ¬p 6 S ← ¬p 5 S ← p 8 , ¬p 15 ." }, { "formula_coordinates": [ 39, 105, 342.1, 130.16, 18.47 ], "formula_id": "formula_25", "formula_text": "Μ(e) = u Κ I (e) E (e r ) -u Κ e I (e) E (e r )." }, { "formula_coordinates": [ 39, 265.62, 447.35, 91.05, 33.96 ], "formula_id": "formula_26", "formula_text": "f (e)) × 2(1 -u Κ I E (e)) u Κ I E (e)" }, { "formula_coordinates": [ 39, 90, 491.83, 408.98, 52.67 ], "formula_id": "formula_27", "formula_text": "f max   Μ( f (e))   . Finally,for a set, R,ofedges in G,wedefine Μ(R) = e ∈ R Σ Μ(e). 16" }, { "formula_coordinates": [ 42, 128.37, 123.19, 349.63, 107.69 ], "formula_id": "formula_28", "formula_text": "(edge independence) p(N E (e)) = p(e) × p(Θ |-E ¬n e ) (induction hypothesis) = p(e) × s ∈ children(e) Π p(N E (s)) = p(e) × s ∈ children(e) Π u E (s) = u E (e)." }, { "formula_coordinates": [ 42, 106.37, 311.8, 278.07, 50.73 ], "formula_id": "formula_29", "formula_text": "p(e| 〈 E, Θ(E) 〉 ) = Γ′ ⊆ Γ Σ { p(∆ Θ =∆ Γ′ )|e ∈∆ Γ′ , Θ(E) =Γ′(E)} Γ′ ⊆ Γ Σ { p(∆ Θ =∆ Γ′ )|Θ(E) =Γ′(E)} ." }, { "formula_coordinates": [ 46, 105.32, 636.46, 135.53, 36.33 ], "formula_id": "formula_30", "formula_text": "v E (e r i ) =    ε 1 -ε if Θ i (E) = 0 if Θ i (E) = 1 ." } ]
Bias-Driven Revision of Logical Domain Theories
The theory revision problem is the problem of how best to go about revising a deficient domain theory using information contained in examples that expose inaccuracies. In this paper we present our approach to the theory revision problem for propositional domain theories. The approach described here, called PTR, uses probabilities associated with domain theory elements to numerically track the ''flow'' of proof through the theory. This allows us to measure the precise role of a clause or literal in allowing or preventing a (desired or undesired) derivation for a given example. This information is used to efficiently locate and repair flawed elements of the theory. PTR is proved to converge to a theory which correctly classifies all examples, and shown experimentally to be fast and accurate even for deep theories.
Moshe Koppel; Ronen Feldman; Alberto Maria Segre
[ { "figure_caption": "Figure 1 :1Figure 1: The dt-graph, ∆ Τ ,ofthe theory Τ.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The weighted dt-graph, 〈∆ Τ , p 〉 .", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Pseudo code for processing a single exemplar.T he functions BottomUp and TopDown sweep the dt-graph. BottomUp returns an array on edges representing proof flow, while TopDown returns an updated weighted dt-graph. We are assuming the dt-graph datastructure has been defined and initialized appropriately.F unctions Children, Parents, Roots,a nd Leaves return sets of edges corresponding to the corresponding graph relation on the dt-graph. Function Merge and Ad-dElement operate on sets, and functions PopSuitableParent and PopSuitableChild return an element of its first argument whose children or parents, respectively,a re all already elements of its second argument while simultaneously deleting the element from the first set, thus guaranteeing the appropriate graph traversal strategy.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The weighted dt-graph of Figure 2 after processing the exemplar 〈 {unsafe-packaging, new-market}, 〈 1 〉〉.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The weighted dt-graph of Figure 2 after processing exemplars 〈 {unsafe-packaging, new-market}, 〈 1 〉〉, 〈 {popular-product, established-market, superior-flavor}, 〈 0 〉〉,and 〈 {popular-product, established-market, celebrity-endorsement}, 〈 0 〉〉.The clause C3has dropped belowthe threshold.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Consider our example from above.W ea re repairing the clause C3. Weh av e already found that the set D consists of the examples {popular-product, established-market, superior-flavor} and {popular-product, established-market, ecologically-correct} while the set N consists of the single example {popular-product, established-market, celebrity-endorsement}. Using ID3 to find a formula which excludes N and includes D,w eo btain { ¬celebrity-endorsement} which translates into the single clause, {l ← ¬celebrity-endorsement}.T ranslating into dt-graph form and suturing (and simplifying using the technique of Footnote 11), we obtain the dt-graph shown in Figure 8. 
Observe now that the domain theory Τ′ represented by this dt-graph correctly classifies the examples {popular-product, established-market, superior-flavor} and {popular-product, established-market, ecologically-correct}, which were misclassified by the original domain theory Τ.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8: The weighted dt-graph of Figure 2 after revising the clause C3 (the graph has been slightly simplified in accordance with the remark in Footnote 11).", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "[Formula fragment] ... and for each e ∉ S, p_β(e) = (p(e))^{1/β}.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "[Caption fragment] In contrast, for EITHER, theory size increases with training set size.", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 12: The synthetic domain theory Θ used for the experiments of Section 6.", "figure_data": "", "figure_id": "fig_10", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 17 shows the average number of exemplars required for convergence. As expected, the fewer errors in the theory, the fewer exemplars PTR requires for convergence.", "figure_data": "", "figure_id": "fig_11", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18: Error rates for the output theories produced by PTR from Γ_i (i = 3, 6, 9, 12, 15), using favorably-biased initial weights.", "figure_data": "", "figure_id": "fig_12", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19: Number of exemplars processed until convergence using favorably-biased initial weights.", "figure_data": "", "figure_id": "fig_13", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Lemma B1: For every example E and every edge e in ∆_Γ, p(¬N_E(e)) = p(¬N_E(e), N_E(f(e))) = p(¬N_E(e) | N_E(f(e))) × p(N_E(f(e))).", "figure_data": "", "figure_id": "fig_14", "figure_label": "B1", "figure_type": "figure" }, { "figure_caption": "Lemma B2: For every example E and every edge e in ∆_Γ, p(N_E(e) | N_E(f(e)), ⟨E, Θ(E)⟩) = p(N_E(e) | N_E(f(e))).", "figure_data": "", "figure_id": "fig_15", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Lemma B3: For every example E and every edge e in ∆_Γ, v_E(e) = p(N_E(e) | ⟨E, Θ(E)⟩). [The proof is by induction on the depth of the edge e: the base case handles the root edge e_r, and the inductive step chains the definition of v, Theorem B1, the induction hypothesis, Lemmas B1 and B2, and Bayes' rule. With ¬e short for the event e ∉ ∆_Θ, the appendix continues with Lemma B4: for every example E and every edge e in ∆_Γ, p(¬e) = p(¬e, N_E(e)) = p(¬e | N_E(e)) × p(N_E(e)).]", "figure_data": "", "figure_id": "fig_16", "figure_label": "B3", "figure_type": "figure" }, { "figure_caption": "[Derivation fragment: a chain of equalities that rewrites the weight update using Lemmas B3-B5 and Bayes' rule, concluding that the quantity equals 1 - p(e | ⟨E, Θ(E)⟩).]", "figure_data": "", "figure_id": "fig_17", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Lemma C4: If e is an even edge in Κ, then u_E^{Κ_e}(e_r) ≥ u_E^{Κ}(e_r) ≥ u_E^{Κ^e}(e_r); if e is an odd edge in Κ, the inequalities are reversed.", "figure_data": "", "figure_id": "fig_18", "figure_label": "C4", "figure_type": "figure" }, { "figure_caption": "[Proof fragment, Cases 4 and 5: in Case 5, Κ′ is obtained from Κ by appending a subtree beneath the odd edge e. Suppose that, contrary to the theorem, for some IN exemplar ⟨E, Θ(E)⟩, u_E^Κ(e_r) > 0 but u_E^{Κ′}(e_r) = 0. Since Κ′^e = Κ^e, it follows that R(⟨E, Θ(E)⟩, e, Κ) = u_E^Κ(e_r) / u_E^{Κ′}(e_r) = ∞ > 2, a contradiction; the result follows from Lemma C3.]", "figure_data": "", "figure_id": "fig_20", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Revise(⟨∆, p⟩: weighted dt-graph, Ζ: set of exemplars, e: edge, λ: real): weighted dt-graph; begin ⟨N, D⟩ ⇐ Relevance(⟨∆, p⟩, Ζ, e);", "figure_data": "if D ≠ ∅ then begin if N = ∅ then p(e) ⇐ 0; else begin p(e) ⇐ λ; l ⇐ NewLiteral(); ∆ [pseudocode truncated in extraction]", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Figure 11: PROMOTER: Results. Numbers reported for each training set size are average values over one hundred trials (ten trials for each of ten example partitions). In contrast, for EITHER, theory size increases with training set size (Ourston, 1991): for example, for 20 training examples the output theory size (clauses plus literals) is 78, while for 80 training examples the output theory size is 106.", "figure_data": "Training Set Size | Mean Clauses in Output | Mean Literals in Output | Mean Revisions to Convergence | Mean Exemplars to Convergence\nOriginal Theory | 14 | 83 | - | -\n20 | 11 | 39 | 10.7 | 88\n40 | 11 | 36 | 15.2 | 140\n60 | 11 | 35 | 18.2 | 186\n80 | 11 | 32 | 22.1 | 232\n100 | 12 | 36 | 22.0 | 236", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "[Table fragment: the clauses of the synthetic domain theory, flattened by extraction; recoverable entries include C ← ¬p8, ¬p9; O ← ¬p3, p4, p5, p11, ¬p12; D ← p10, ¬p12, L; Y ← p4, p5, p6; E ← N, p5, p6; P ← ¬p6, p7, p8; F ← p4; Q ← p0, p4; G ← S, ¬p3, p8; W ← p10, p11; H ← U, V; I ← W; J ← X, p5; K ← P, ¬p5, p9; and others.]", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "For each theory Γ_i, we withhold 100 test examples and train on nested sets of 20, 40, 60, 80 and 100 training examples. We choose ten such partitions and run ten trials for each partition. In Figure 15, we graph the average value of Rad_{Γ_i}(Γ′) / Rad_{Γ_i}(Θ), where Γ′ is the theory produced by PTR. As can be seen, this value is consistently below 1. This indicates that the revisions found by PTR are less radical than what is needed to restore the original Θ. Thus, by the criterion of success that PTR set for itself, minimizing radicality, PTR does better than restoring Θ. As is to be expected, the larger the training set, the closer this value is to 1. Also note that as the number of errors introduced increases, the saving in radicality achieved by PTR increases as well, since a larger number of opportunities are created for more parsimonious revision. [A sample revision sequence: 1 Added clause A ← ¬p6; 2 Added clause S ← ¬p5; 3 Added clause A ← p8, ¬p15; 4 Added literal ¬p6 to clause B ← p4, ¬p11; 5 Deleted clause B ← p4, ¬p6, ¬p11; 6 Added clause D ← ¬p14; 7 Added clause G ← ¬p12, p8; 8 Added literal p2 to clause A ← E, F; 9 Added clause L ← p16; 10 Added clause M ← ¬p13, ¬p7; 11 Deleted clause Q ← p3, ¬p13, p14, p15; 12 Deleted clause L ← p2, p12, p16; 13 Added clause J ← p11; 14 Deleted literal p4 from clause F ← p4; 15 Deleted literal p1 from clause B ← p1, ¬H.]", "figure_data": "Figure 14: Descriptive statistics for the flawed synthetic theories Γ_i (i = 3, 6, 9, 12, 15).\n | Γ3 | Γ6 | Γ9 | Γ12 | Γ15\nNumber of Errors | 3 | 6 | 9 | 12 | 15\nRad(Θ) | 7.32 | 17.53 | 22.66 | 27.15 | 33.60\nMisclassified IN | 0 | 26 | 34 | 34 | 27\nMisclassified OUT | 50 | 45 | 45 | 46 | 64\nInitial Accuracy | 75% | 64.5% | 60.5% | 60% | 54.5%\nFigure 15: The normalized radicality, Rad_{Γ_i}(Γ′) / Rad_{Γ_i}(Θ), for the output theories Γ′ produced by PTR from Γ_i (i = 3, 6, 9, 12, 15). Error bars reflect 1 standard error.", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "[Proof fragment: Κ′ is obtained from Κ by appending a subtree to ∆ beneath e, so that Κ′|f = Κ_e. Suppose that, contrary to the theorem, there is some IN exemplar ⟨E, Θ(E)⟩ such that u_E^Κ(e_r) > 0 but u_E^{Κ′}(e_r) = 0; the contradiction then follows from Lemma C4(ii).]", "figure_data": "", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b7", "b3", "b5" ], "table_ref": [], "text": "The top-down induction of decision trees is an approach to machine learning that has been used on a variety of real world tasks. Decision trees are well-suited for such tasks since they scale fairly well with the number of training examples and the number of features, and can represent complex concepts in a representation that is fairly easy for people to understand.\nDecision tree induction algorithms (Breiman, Friedman, Olshen, & Stone, 1984;Quinlan, 1986;Fayyad & Irani, 1992) typically operate by choosing a feature that partitions the training data according to some evaluation function (e.g., the purity of the resulting partitions). Partitions are then further partitioned recursively until some stopping criterion is reached (e.g., the partitions contain training examples of a single class). Nearly all decision tree induction algorithms create a single decision tree based upon local information of how well a feature partitions the training data. However, this decision tree is only one of a set of decision trees consistent with the training data. In this paper, we experimentally examine the properties of the set of consistent decision trees. We will call the set of decision trees that are consistent with the training data a decision forest.\nOur experiments were run on several articial concepts for which we know the correct answer and two naturally occurring databases from real world tasks available from the UCI Machine Learning Repository (Murphy & Aha, 1994) in which the correct answer is not known. The goal of the experiments were to gain insight into how factors such as the size of a consistent decision tree are related to the error rate on classifying unseen test instances. Decision tree learners, as well as most other learners, attempt to produce the smallest consistent hypothesis. 1 Occam's razor is often used to justify this bias. Here, we experimentally evaluate this bias towards simplicity by investigating the relationship between the size of a consistent decision tree and its accuracy. If the average error of decision trees with N test nodes is less than the average error of decision trees of size N + i (for i > 0), an appropriate bias for a learner attempting to minimize average error would be to return the smallest decision tree it can nd within its resource constraints.\nIn this paper, we restrict our attention to decision trees that are consistent with the training data and ignore issues such as pruning which trade o consistency with the training data and the simplicity of the hypothesis. For the purposes of this paper, a consistent decision tree is one that correctly classies every training example. 2 We also place two additional constraints on decision trees. First, no discriminator can pass all instances down a single branch. This insures that the test made by the decision tree partitions the training data. Second, if all of the training instances at a node are of the same class, no additional discriminations are made. In this case, a leaf is formed with class label specied by the class of the instances at the leaf. These two constraints are added to insure that the decision trees analyzed in the experiments correspond to those that could be formed by top down decision tree induction algorithms. 
In this paper, we will not investigate problems that have continuous-valued features or missing feature values.\nIn Section 2 (and the appendix), we will report on some initial exploratory experiments in which the smallest consistent decision trees tend to be less accurate, on average, than those slightly larger. Section 3 provides results of additional experiments that address this issue. Section 4 addresses the implication of our findings for the policy a learner should take in deciding which of the many consistent hypotheses it should prefer. Section 5 relates this work to previous empirical and theoretical research." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Initial Experiments", "publication_ref": [ "b4", "b10" ], "table_ref": [ "tab_0" ], "text": "We will investigate the relationship between various tree characteristics and error. In particular, we will look at node cardinality (i.e., the number of internal nodes in a tree) and leaf cardinality (i.e., the total number of leaves in a tree).\nIt should be noted that even when using a powerful massively parallel computer, the choice of problems is severely constrained by the computational complexity of the task. The number of trees of any node cardinality that might be generated is O(d^c), where d is the number of discriminators and c is the node cardinality. Even on a massively parallel computer, this precluded the use of problems with many features or any continuous-valued features.\nThe first experiment considered learning from training data in which there are 5 boolean features. The concept learned was XYZ ∨ AB. This concept was chosen because it was of moderate complexity, requiring a decision tree with at least 8 nodes to represent correctly. With 5 boolean features, the smallest concept (e.g., True) would require 0 test nodes and the largest (e.g., parity) would require 31.\nWe ran 100 trials, creating a training set by randomly choosing without replacement 20 of the 32 possible training examples and using the remaining 12 examples as the test set. For each trial, every consistent decision tree was created, and we computed the average error rate made by trees with the same node cardinality. Figure 1 plots the mean and 95% confidence interval of these average errors as a function of the node cardinality. Figure 1 also plots the number of trials on which at least one decision tree of a given node cardinality is consistent with the training data. From node cardinality 7 to node cardinality 16, there is a monotonic increase in error with increasing node cardinality. For the range from 2 to 3 nodes, the error is varied; however, there is little evidence for these error values because they are based on only 2 and 1 trials, respectively. For the range of node cardinalities between 4 and 7, average error is definitely not a monotonically increasing function of node cardinality. As seen in the curve, 5 node trees are on the average more accurate than 4 node trees, and 7 node trees are on the average more accurate than trees with 6 nodes. This last result is somewhat surprising since one gets the impression from reading the machine learning literature (Muggleton, Srinivasan, & Bain, 1992) that the smaller hypothesis (i.e., the one that provides the most compression of the data (Rissanen, 1978)) is likely to be more accurate. We will explore this issue in further detail in Section 3. Appendix 1 presents data showing that this result is not unique to this particular concept.
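As a concrete illustration of this setup (a hypothetical reconstruction, not the original experiment code), the labelled instance space for XYZ ∨ AB and the 20/12 split of a single trial can be generated as follows:

```python
import itertools
import random

FEATURES = ["X", "Y", "Z", "A", "B"]

def label(ex):
    # Target concept: XYZ or AB.
    return (ex["X"] and ex["Y"] and ex["Z"]) or (ex["A"] and ex["B"])

# All 32 instances over the 5 boolean features, with their class labels.
cases = [(ex, label(ex))
         for bits in itertools.product([False, True], repeat=5)
         for ex in [dict(zip(FEATURES, bits))]]

def one_trial(rng):
    # 20 training examples chosen without replacement; the remaining
    # 12 instances form the test set.
    shuffled = rng.sample(cases, len(cases))
    return shuffled[:20], shuffled[20:]

train, test = one_trial(random.Random(0))
```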
A final, interesting finding that we will not explore further in this paper is that for very large node cardinalities, error begins to decrease as the node cardinality increases.\nTable 1 lists the average number of consistent trees for each node cardinality and the average number of correct trees (i.e., those trees consistent with the training data that make no errors on the unseen test examples). There are no correct trees with fewer than 8 nodes, since at least 8 nodes are required to represent this concept. Clearly, since there are many trees consistent with the training data, a learner needs some policy to decide which tree to return. We will return to this issue in Section 4. " }, { "figure_ref": [ "fig_2", "fig_2", "fig_3" ], "heading": "Further Experimentation", "publication_ref": [], "table_ref": [], "text": "For most of the problems studied, we found that on average, the smallest decision trees consistent with the training data had more error on unseen examples than slightly larger trees. We ran additional experiments to make sure that this result is not an artifact of the experimental methodology that we used, as reported in the next sections. Figure 2 presents the results on the 69 of 100 trials of the XYZ ∨ AB concept that met this representative test. Notice that the two trials that formed the only 2 and 3 node trees were removed. Even when only the more representative training sets are considered, the average error of trees of size 4 is greater than the average error of size 5 trees. By regrouping the results of 100 trials for the XYZ ∨ AB concept so that trials with the same minimum-sized trees are grouped together, a set of five curves, each associated with a subgroup, was formed (Figure 3). The intent of the grouping is to allow us to determine whether the minimum-sized trees for any given trial are on average more accurate than larger trees. Note that in Figure 3, for most minimum tree sizes, error is not a monotonically increasing function of node cardinality. Furthermore, the smallest trees found are on average not the most accurate when the smallest tree has 4 or 6 nodes. In addition, regardless of the size of the smallest tree found, the average accuracy of trees of size 8 (the size of the smallest correct tree) rarely has the minimum average error.\nAnother interesting finding becomes apparent with this way of viewing the data: the average error rates of trees for training sets that allow creation of smaller consistent trees tend to be higher than for those training sets that can only form larger trees. For example, the error rate for those training sets whose minimum-sized trees have 4 nodes is higher than the error rate on trials whose minimum-sized trees have 7 nodes. The definition of representative that we used earlier in this section used global characteristics of the training data to determine representativeness. Here, we consider a more detailed view of representativeness that takes the structure of the correct concept into account. It is unreasonable to expect a decision tree learner to learn an accurate concept if there are no examples that correspond to some of the leaves of some correct decision tree. To generate training data for the next experiment, we first randomly selected one of the 72 trees with 8 nodes that is consistent with all the data. Next, for each leaf of the tree, we randomly selected two examples (if possible) to include in the training set.
If a leaf only had one example, that example was included in the training set. Finally, we randomly selected from the remaining examples so that there were 20 training examples and 12 test examples. We had anticipated that with representative training sets formed in this manner, very small consistent trees would be rare and perhaps the error rate would monotonically increase with node cardinality. However, the results of 100 trials, as displayed in Figure 4, indicate the same general pattern as before. In particular, the average error of trees with 7 nodes is substantially less than the average error of those with 6 nodes. Another experiment with one randomly selected example per leaf had similar results." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Training Set Size and Concept Complexity", "publication_ref": [ "b9", "b15", "b14" ], "table_ref": [], "text": "The minimum-sized decision tree for the concept XYZ ∨ AB has 8 tests and 9 leaves. Since the correct tree does not provide much compression of a set of 20 examples used to induce the tree, 3 one might argue that the sample used was too small for this complex a concept. (3. The exact amount of compression provided depends upon the particular scheme chosen for encoding the training data; see (Quinlan & Rivest, 1989; Wallace & Patrick, 1993) for two such schemes.) Therefore, we increased the number of training examples to the maximum possible. Figure 5 plots the average error of 32 trials in which we formed all decision trees consistent with 31 examples. Each tree was evaluated on the remaining unseen example. Figure 5 shows that the smaller trees formed from samples of size 31 have more error than the slightly larger trees. Since the minimum correct decision tree has 8 nodes and the consistent trees classify all 31 training examples correctly, any decision tree with fewer than 8 nodes classifies the test example incorrectly. To refute further the hypothesis that the results obtained so far were based on using too small a training set for a given concept complexity, we considered two less complex concepts. In particular, we investigated a single attribute discrimination, A, with four irrelevant features (Figure 6), and a simple conjunction, AB, with three irrelevant features (Figure 7). For each concept, 100 trials were run in which 20 examples were used for training and the remaining 12 for testing. For these simpler concepts, though the smallest trees are the most accurate, error again is not a monotonically increasing function of node cardinality.\n3.3 Training and Testing using the Same Probability Distribution\nIn our previous experiments, we used a methodology that is typical in empirical evaluations of machine learning systems: the training data and the test data are disjoint. In contrast, most theoretical work on the PAC model (Valiant, 1984) assumes that the training and test data are generated from the same probability distribution over the examples. For this section, we ran an experiment in which training and test examples were selected with replacement from the same distribution to ensure that our results were not dependent on a particular experimental methodology. This testing methodology produces much smaller values for the proportion of test examples misclassified than the disjoint training and test set methodology, because those test examples which also were training examples are always classified correctly. However, the same basic pattern of results is observed. Error is not at a minimum for the smallest decision trees nor at decision trees with 8 nodes (the minimum-sized correct tree).
Error monotonically increases starting at trees with 7 nodes and then begins to decrease again for very large node cardinalities. Note that on some trials, it is possible to build decision trees with up to 21 nodes since some training sets contained 22 distinct examples." }, { "figure_ref": [ "fig_0", "fig_8" ], "heading": "Average Path Length", "publication_ref": [], "table_ref": [], "text": "The information gain metric of ID3 is intended to minimize the number of tests required to classify an example. Figure 9 reanalyzes the data from Figure 1 by graphing average error as a function of average path length for the XYZ ∨ AB concept. The results are similar to those obtained when relating the number of test nodes to the error rate: error is not a monotonically increasing function of average path length. Similar analyses were performed and similar results have been obtained for other concepts, which are presented in the Appendix." }, { "figure_ref": [], "heading": "The Minimum-Sized Decision Tree Policy", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "A designer of a learning algorithm either explicitly or implicitly must decide which hypothesis to prefer when multiple hypotheses are consistent with the training data. As Table 1 shows, there can be many consistent decision trees. Should the learner always prefer the smallest consistent decision tree? A learner that adopts this strategy can be said to be following the minimum-sized decision tree policy.\nIn this section, we present results from additional experiments to evaluate this policy. In particular, we gather evidence to address two related questions:\nGiven any two consistent decision trees with different node cardinalities, what is the probability that the smaller decision tree is more accurate?\nGiven the minimum-sized decision tree and a larger consistent decision tree, what is the probability that the smallest decision tree is more accurate?\nThe first question is of more interest to the current practice of decision tree induction since, for efficiency reasons, no algorithm attempts to find the smallest consistent decision tree for large data sets. Nonetheless, most algorithms are biased toward favoring trees with fewer nodes. (Figure 10: The probability that the accuracy of a smaller decision tree is greater than, equal to, or less than the accuracy of a larger tree as a function of the difference of node cardinalities for the XYZ ∨ AB concept (upper); the number of trials out of 1000 on which at least 2 trees had a given difference in node cardinality (lower).)\nTo address the question of whether a learner should prefer the smaller of two randomly selected consistent trees, we ran 1000 trials of learning the concept XYZ ∨ AB from 20 training examples. For each trial, we recorded the node cardinality and accuracy (on the 12 test examples) of every consistent tree. For each pair of consistent trees (with different node cardinalities), we computed the difference in node cardinality and indicated whether the accuracy of the smaller tree was greater than, equal to, or less than the accuracy of the larger tree. From this data, we computed the observed probability that one decision tree was more accurate than another as a function of the difference in node cardinalities (see Figure 10 upper). The graph shows that on this concept, the probability that the smaller of two randomly chosen consistent decision trees will be more accurate is greater than the probability that the larger tree will be more accurate.
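The pairwise bookkeeping behind these curves can be sketched as follows (our reconstruction, not the original analysis code; trees is assumed to hold one (node_cardinality, accuracy) pair per consistent tree of a trial):

```python
from collections import defaultdict

def pairwise_outcomes(trees):
    # For every pair of consistent trees with different sizes, record
    # whether the smaller tree was more, equally, or less accurate,
    # keyed by the difference in node cardinality.
    counts = defaultdict(lambda: [0, 0, 0])   # [greater, equal, less]
    for i, (n1, a1) in enumerate(trees):
        for n2, a2 in trees[i + 1:]:
            if n1 == n2:
                continue
            (ns, acc_s), (nl, acc_l) = sorted([(n1, a1), (n2, a2)])
            outcome = 0 if acc_s > acc_l else (1 if acc_s == acc_l else 2)
            counts[nl - ns][outcome] += 1
    return counts

def observed_probabilities(counts):
    # Normalize the tallies into the three curves of Figure 10 (upper).
    return {diff: [c / sum(t) for c in t] for diff, t in counts.items()}
```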
Furthermore, the probability that the smaller tree is more accurate increases as the difference in node cardinality increases. An exception to this trend occurs for very large differences in node cardinality. However, as Figure 10 (lower) shows, these exceptions are quite rare: consistent decision trees whose node cardinalities differed by 16 were found in only 6 of the 1000 trials. 4 The results of this experiment indicate that on average, a learner that prefers the smaller of two randomly selected decision trees has a higher probability of being more accurate on this concept than a learner that selects the larger tree. (Figure 11: The probability that the accuracy of a minimum-sized decision tree is greater than, equal to, or less than the accuracy of a larger tree as a function of the difference of node cardinalities for the XYZ ∨ AB concept.)\nTo address the question of whether a learner should prefer the smallest consistent decision tree over a randomly selected consistent tree with more test nodes, we reanalyzed the data from the previous experiment. Figure 11 graphs the observed probability that a consistent decision tree with the minimum node cardinality is more accurate than a larger tree as a function of the difference in node cardinalities between the two trees. The graph shows that a learner that chooses randomly among the consistent decision trees with minimum node cardinalities is more likely to find a tree that is more accurate than a learner that randomly selects among larger trees. 5 Figure 11 clearly shows that for this particular concept, the minimum-sized decision tree policy is on average a better policy than preferring a decision tree that is any fixed size larger than the smallest decision tree.\n4. Four trials had minimum-sized trees with 2 nodes and maximally sized trees with 18 nodes. Two trials had minimum-sized trees with 3 nodes and maximally sized trees with 19 nodes. 5. Except for the rare case when an extremely small and an extremely large decision tree are found on the same trial.\nHowever, it is not clear that the minimum-sized decision tree is the best possible policy for this concept. Indeed, by looking at the data from Figure 3, it is apparent that a better strategy for this concept would be to find the minimum-sized tree and then decide whether to return the minimum-sized tree or a tree of a different node cardinality as a function of the node cardinality of the minimum-sized consistent tree. Table 2 shows which node cardinality has the highest probability of being most accurate as a function of the minimally sized tree, together with the number of trials (out of 1000) on which the minimum-sized tree had a particular node cardinality. (Table 2: A policy of returning a larger decision tree as a function of the minimum-sized tree for the XYZ ∨ AB concept.)" }, { "figure_ref": [ "fig_0", "fig_1", "fig_1", "fig_3" ], "heading": "Minimum", "publication_ref": [ "b11", "b12", "b0", "b9", "b4", "b2", "b0", "b2" ], "table_ref": [], "text": "Figure 11 provides some of the data that illustrates that the policy in Table 2 will perform better than preferring the minimum-sized decision tree on this concept. Figure 12 graphs the observed probability that a consistent decision tree with a minimum node cardinality of 5 (upper), 6 (middle), or 7 (lower) is more accurate than a larger tree as a function of the difference in node cardinalities between the two trees.
The graph shows that when the minimum-sized decision tree has 5 nodes, the probability that a larger tree is more accurate is less than the probability that the smaller tree is more accurate for all node cardinalities. This is particularly interesting because it shows that giving a decision tree learner the size of the correct tree and having the decision tree learner produce an hypothesis of this size is not the best strategy for this concept. However, when the smallest consistent tree has 6 nodes, there is a 0.560 probability that a randomly chosen tree with 8 nodes will be more accurate and a 0.208 probability that a tree with 8 test nodes will have the same accuracy. In addition, when the minimum-sized tree has 7 test nodes, the probability that a tree with 8 nodes is more accurate is 0.345 while the probability that it is less accurate is 0.312.\nNote that we do not believe that the policy in Table 2 is uniformly superior to preferring the minimum-sized decision tree. Rather, there is probably some interaction between the complexity of the concept to be learned, the number of training examples, and the size of the smallest consistent decision tree. Furthermore, a learner should not be tuned to learn a particular concept, but should perform well on a variety of concepts. Clearly, if extremely simple concepts are to be learned sufficiently frequently, the minimum-sized decision tree policy will be better than the policy in Table 2. Indeed, the minimum-sized decision tree policy would work well on the simple concepts A and AB discussed in Section 3.2. However, if simple concepts are rarely encountered, there may be better policies. The best policy must depend upon the distribution of concepts that are encountered. Clearly, if the only concept to be learned is XYZ ∨ AB, the best policy would be to ignore the training data and return the decision tree representation for XYZ ∨ AB. (Figure 12: The probability that the accuracy of a minimum-sized decision tree is greater than, equal to, or less than the accuracy of a larger tree as a function of the difference of node cardinalities for the XYZ ∨ AB concept when the minimum-sized decision tree has 5 (upper), 6 (middle), or 7 (lower) test nodes.) It may be that Occam's razor should be viewed as a philosophical statement about the distribution of concepts one is likely to encounter. Occam's razor has not been shown to be a guarantee that when learning a complex concept, the simplest hypothesis consistent with the data is likely to be more accurate than a randomly-chosen more complex hypothesis consistent with the training data.\n5. Analysis\nSchaffer (1992, 1993) presents a series of experiments on overfitting avoidance algorithms. Overfitting avoidance algorithms prefer simpler decision trees over more complex ones, even though the simpler decision trees are less accurate on the training data, in hopes that the trees will be more accurate on the test data. Schaffer shows that these overfitting avoidance algorithms are a form of bias. Rather than uniformly improving performance, the overfitting avoidance algorithms improve performance on some distributions of concepts and worsen performance on other distributions of concepts.\nThe results of our experiments go a step further than Schaffer's. We have shown that for some concepts, the preference for simpler decision trees does not result in an increase in predictive accuracy on unseen test data, even when the simple trees are consistent with the training data.
Like Schaffer, we do not dispute the theoretical results on Occam's razor (Blumer, Ehrenfeucht, Haussler, & Warmuth, 1987), minimum description length (Quinlan & Rivest, 1989; Muggleton et al., 1992), or minimizing the number of leaves of a decision tree (Fayyad & Irani, 1990). Rather, we point out that for a variety of reasons, the assumptions behind these theoretical results mean that the results do not apply to the experiments reported here. For example, (Blumer et al., 1987) indicates that if one finds an hypothesis in a sufficiently small hypothesis space (and simpler hypotheses are one example of a small hypothesis space) and this hypothesis is consistent with a sufficiently large sample of training data, one can be fairly confident that it will be fairly accurate on unseen data drawn from the same distribution of examples. However, it does not say that on average this hypothesis will be more accurate than other consistent hypotheses not in this small hypothesis space.\nThe (Fayyad & Irani, 1990) paper explicitly states that the results on minimizing the number of leaves of decision trees are worst case results and should not be used to make absolute statements concerning improvements in performance. Nonetheless, informal arguments in the paper state: \"This may then serve as a basis for provably establishing that one method for inducing decision trees is better than another by proving that one algorithm always produces a tree with a smaller number of leaves, given the same training data.\" Furthermore, other informal arguments imply that this result is probabilistic because of the existence of \"pathological training sets.\" However, as we have shown in Figures 2 and 4 (as well as a reanalysis of the mux6 data in the Appendix), eliminating pathological (i.e., unrepresentative) training sets does not change the qualitative result that on some concepts, the smaller trees are less accurate predictors than slightly larger trees." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b13", "b0", "b2", "b6", "b14" ], "table_ref": [], "text": "We have reported on a series of experiments in which we generated all decision trees on a variety of artificial concepts and two naturally occurring data sets. We found that for many of the concepts, the consistent decision trees that had a smaller number of nodes were less accurate on unseen data than the slightly larger ones. These results do not contradict existing theoretical results. Rather, they serve to remind us to be cautious when informally using the intuitions derived from theoretical results on problems that are not covered by the theorems, or when using intuitions derived from worst-case results to predict average-case performance.\nWe stress that our results are purely experimental. Like the reader, we too would be pleased if there were theoretical results that indicated, for a given sample of training data, which decision tree is likely to be most accurate. However, it is not clear whether this can be done without knowledge of the distribution of concepts one is likely to encounter (Schaffer, 1994).\nWe also note that our results may be due to the small size of the training sets relative to the size of the correct tree. We tried to rule out this possibility by using larger training sets (31 of the 32 possible examples) and by testing simpler concepts. For the simpler concepts, the smallest decision trees were the most accurate, but error did not monotonically increase with node cardinality.
Since most decision tree learners that greedily build decision trees do not return the smallest decision tree, our results may be of practical interest even for simple concepts. In the future, experiments with more features and more examples could help to answer this question, but considerably more complex problems cannot be handled even by future generations of parallel supercomputers. In addition, we note that in our experiments, we did not build decision trees in which a test did not partition the training data. This explains why we found relatively few extremely large decision trees and may explain why very large trees made few errors. To our knowledge, all decision tree algorithms have this constraint. However, the theoretical work on learning does not make use of this information. We could rerun all of our experiments without this constraint, but we would prefer that some future theoretical work take this constraint into account.\nAlthough we have found situations in which the smallest consistent decision tree is not on average the most accurate, and cases in which there is a greater than 0.5 probability that a larger decision tree is more accurate than the smallest, we believe that learning algorithms (and people) with no relevant knowledge of the concept and no information about the distribution of concepts that are likely to be encountered should prefer simpler hypotheses. This bias is appropriate for learning simple concepts. For more complex concepts, the opposite bias, preferring the more complex hypotheses, is unlikely to produce an accurate hypothesis (Blumer et al., 1987; Fayyad & Irani, 1990) due to the large number of consistent complex hypotheses. We believe that the only way to learn complex hypotheses reliably is to have some bias (e.g., prior domain knowledge) which favors particular complex hypotheses, such as combinations of existing hypotheses learned inductively as in OCCAM (Pazzani, 1990). Indeed, (Valiant, 1984) advocates a similar position: \"If the class of learnable concepts is as severely limited as suggested by our results, then it would follow that the only way of teaching more complex concepts is to build them up from simpler ones.\"" }, { "figure_ref": [], "heading": "Lenses", "publication_ref": [], "table_ref": [], "text": "The lenses domain has one 3-valued and three binary features, three classes, and 24 instances. Since the lenses domain has one non-binary feature, trees with a range of leaf cardinalities are possible for a particular node cardinality. The minimum-sized tree has 6 nodes and 9 leaves. Separate analyses for leaf and node cardinalities were performed. We used training set sizes of 8, 12, and 18 for this domain, built all consistent trees, and measured the error rate on all unseen examples. " }, { "figure_ref": [ "fig_4" ], "heading": "Shuttle Landing", "publication_ref": [], "table_ref": [], "text": "The shuttle landing domain has four binary and two 4-valued features, two classes, and 277 instances. The minimum-sized consistent tree has 7 nodes and 14 leaves. We used training sets of size 20, 50, and 100 for the shuttle domain, generating all consistent decision trees with fewer than 8, 10, and 12 nodes, and measured the error of these trees on all unseen examples.
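Generating every consistent tree below such a node budget is the computational core of these appendix experiments. A generator in this spirit is sketched below; it is our simplification, not the authors' program, and it assumes binary features with the tuple encoding from the earlier sketch (the lenses and shuttle domains also have multi-valued features, which would require branching on each feature value):

```python
def consistent_trees(examples, features, budget):
    # Yield (tree, size) for every decision tree with at most `budget`
    # internal nodes that classifies `examples` perfectly.
    labels = {lab for _, lab in examples}
    if len(labels) == 1:
        yield next(iter(labels)), 0          # pure node: forced leaf
        return
    if budget == 0:
        return                               # impure node, no tests left
    for f in features:
        neg = [(ex, lab) for ex, lab in examples if not ex[f]]
        pos = [(ex, lab) for ex, lab in examples if ex[f]]
        if not neg or not pos:
            continue                         # the test must split the data
        rest = [g for g in features if g != f]   # retesting f cannot split
        for left, nl in consistent_trees(neg, rest, budget - 1):
            for right, nr in consistent_trees(pos, rest, budget - 1 - nl):
                yield (f, left, right), 1 + nl + nr
```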
Figure 15 presents the error as a function of leaf cardinality, averaged over 10 trials. For this domain, there is a monotonically increasing relationship between node cardinality and error." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Ross Quinlan, Geoffrey Hinton, Michael Cameron-Jones, Cullen Schaffer, Dennis Kibler, Steve Hampson, Jason Catlett, Haym Hirsh, Anselm Blumer, Steve Minton, Michael Kearns, Tom Dietterich, Pat Langley, and David Schulenburg for commenting on various aspects of this research. The research reported here was supported in part by NSF infrastructure grant number MIP-9205737, NSF Grant INT-9201842, AFOSR grant F49620-92-J-0430, and AFOSR AASERT grant F49620-93-1-0569." }, { "figure_ref": [], "heading": "Appendix A. Experiments on Additional Problems", "publication_ref": [], "table_ref": [], "text": "In this appendix, we provide data on experiments which we ran on additional problems. The experiments show that the basic findings in this paper are not unique to the artificial concept, XYZ ∨ AB." }, { "figure_ref": [], "heading": "Mux6", "publication_ref": [ "b8" ], "table_ref": [], "text": "The multiplexor concept we consider, mux6, has a total of 8 binary features. Six features represent the functionality of a multiplexor and 2 features are irrelevant. The minimum-sized tree has 7 nodes. This particular concept was chosen because it is difficult for a top-down inductive decision tree learner with limited look-ahead to find a small hypothesis (Quinlan, 1993). On each trial, we selected 20 examples randomly and tested on the remaining examples. Since most of the computational cost of building consistent trees is for larger node cardinalities, and we are primarily interested in trees with small node cardinalities, we only computed consistent trees with up to 10 nodes for 10 trials and up to 8 nodes for 340 trials. Figure 13 presents the average error as a function of the node cardinality for these trials. This graph again shows that average error does not monotonically increase with node cardinality. Trees of 4 nodes are on the average 4% less accurate than trees of 5 nodes. " } ]
[ { "authors": "A Blumer; A Ehrenfeucht; D Haussler; M Warmuth", "journal": "Information Processing Letters", "ref_id": "b0", "title": "Occam's razor", "year": "1987" }, { "authors": "L Breiman; J Friedman; R Olshen; C Stone", "journal": "Wadsworth & Brooks", "ref_id": "b1", "title": "Classication and Regression Trees", "year": "1984" }, { "authors": "U Fayyad; K Irani", "journal": "", "ref_id": "b2", "title": "What should be minimized in a decision tree?", "year": "1990" }, { "authors": "U Fayyad; K Irani", "journal": "", "ref_id": "b3", "title": "The attribute selection problem in decision tree generation", "year": "1992" }, { "authors": "S Muggleton; A Srinivasan; M Bain", "journal": "", "ref_id": "b4", "title": "Compression, signicance and accuracy", "year": "1992" }, { "authors": "P Murphy; D Aha", "journal": "", "ref_id": "b5", "title": "UCI Repository of machine learning databases", "year": "1994" }, { "authors": "M Pazzani", "journal": "Lawrence Erlbaum Associates", "ref_id": "b6", "title": "Creating a memory of causal relationships: An integration of empirical and explanation-based learning methods", "year": "1990" }, { "authors": "J Quinlan", "journal": "Machine Learning", "ref_id": "b7", "title": "Induction of decision trees", "year": "1986" }, { "authors": "J Quinlan", "journal": "Morgan Kaufmann", "ref_id": "b8", "title": "C4.5 Programs for Machine Learning", "year": "1993" }, { "authors": "J Quinlan; R Rivest", "journal": "Information and Computation", "ref_id": "b9", "title": "Inferring decision trees using the minimum description length principle", "year": "1989" }, { "authors": "J Rissanen", "journal": "Automatica", "ref_id": "b10", "title": "Modeling by shortest data description", "year": "1978" }, { "authors": "C Schaer", "journal": "", "ref_id": "b11", "title": "Sparse data and the eect of overtting avoidance in decision tree induction", "year": "1992" }, { "authors": "C Schaer", "journal": "Machine Learning", "ref_id": "b12", "title": "Overtting avoidance as bias", "year": "1993" }, { "authors": "C Schaer", "journal": "", "ref_id": "b13", "title": "A conservation law for generalization performance", "year": "1994" }, { "authors": "L Valiant", "journal": "Communications of the ACM", "ref_id": "b14", "title": "A theory of the learnable", "year": "1984" }, { "authors": "C Wallace; J Patrick", "journal": "Machine Learning", "ref_id": "b15", "title": "Coding decision trees", "year": "1993" } ]
[]
Exploring the Decision Forest: An Empirical Investigation of Occam's Razor in Decision Tree Induction
We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. In particular, we investigated the relationship between the size of a decision tree consistent with some training data and the accuracy of the tree on test data. The experiments were performed on a massively parallel Maspar computer. The results of the experiments on several artificial and two real world problems indicate that, for many of the problems investigated, smaller consistent decision trees are on average less accurate than slightly larger trees.
Patrick M Murphy; Michael J Pazzani
[ { "figure_caption": "Figure 1 .1Figure 1. The average error of 100 trials as a function of node cardinality and the number of trials for each node cardinality.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Error rate of consistent trees from representative training sets as a function of node cardinality.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Error as a function of node cardinality for the XY Z _ AB concept when rst grouped by minimum-sized trees built.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Error rate of consistent trees with 2 examples per leaf of some correct 8 node tree as a function of node cardinality.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Error rate of consistent trees with leave-one-out testing as a function of node cardinality.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .Figure 7 .67Figure 6. Error as a function of node cardinality for the single attribute discrimination A concept.", "figure_data": "", "figure_id": "fig_5", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "3. 33Training and Testing using the Same Probability Distribution.", "figure_data": "", "figure_id": "fig_6", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Error as a function of node cardinality when the training and test examples are generated by the same distribution for the XY Z _ AB concept. Once again, the target concept was XY Z _ AB. By randomly choosing 31 training examples with replacement from the set of 32 possible instances, on average approximately 20 distinct training examples are selected. Error is estimated by randomly choosing 1000", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Error as a function of average path length for the XY Z _ AB concept.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure10. The probability that the accuracy of a smaller decision tree is greater than, equal to, or less than the accuracy of a larger tree as a function of the dierence of node", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure11. The probability that the accuracy of a minimum-sized decision is greater than, equal to, or less than the accuracy of a larger tree as a function of the dierence of node", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure12. The probability that the accuracy of a minimum-sized decision tree is greater than, equal to, or less than the accuracy of a larger tree as a function of the dierence of", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. 
Error as a function of node cardinality (left) and error as a function of leaf cardinality (right).", "figure_data": "", "figure_id": "fig_12", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 14 (left) shows the error as a function of the node cardinality for the 3 training set sizes averaged over 50 trials. These curves indicate that the smallest consistent trees are not always the most accurate. When observing the larger node cardinalities for the training set sizes 12 and 18, error monotonically decreases with increasing node cardinality. Similar statements can be made for the curve in Figure 14 (right), which relates average error to leaf cardinality.", "figure_data": "", "figure_id": "fig_13", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Table 1. The average number of trees consistent with 20 training examples of the XYZ ∨ AB concept, together with the average number of correct trees, by node cardinality.", "figure_data": "Nodes | Number of Consistent Trees | Number of Correct Trees\n2 | 2.0 | 0.0\n3 | 4.0 | 0.0\n4 | 3.3 | 0.0\n5 | 12.3 | 0.0\n6 | 27.6 | 0.0\n7 | 117.1 | 0.0\n8 | 377.0 | 17.8\n9 | 879.4 | 37.8\n10 | 1799.9 | 50.2\n11 | 3097.8 | 41.6\n12 | 4383.0 | 95.4\n13 | 5068.9 | 66.6\n14 | 4828.3 | 37.7\n15 | 3631.5 | 31.3\n16 | 1910.6 | 14.8\n17 | 854.4 | 4.0\n18 | 308.6 | 3.6\n19 | 113.8 | 0.0", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction to Description Logics", "publication_ref": [ "b31", "b19", "b30", "b11", "b20", "b24", "b34", "b3" ], "table_ref": [], "text": "Data and knowledge bases are models of some part of the natural world. Such models are often built from individual objects that are inter-related by relationships and grouped into classes that capture commonalities among their instances. Description logics (DLs), also known as terminological logics, form a class of languages used to build and access such models; their distinguishing feature is that classes (usually called concepts) can be de ned intensionally|in terms of descriptions that specify the properties that objects must satisfy to belong to the concept. These descriptions are expressed using some language that allows the construction of composite descriptions, including restrictions on the binary relationships (usually called roles) connecting objects.\nAs an example, consider the description GAME u 4 participants u 8participants:(PERSON u gender : Female): 1\nThis description characterizes objects in the intersection (u) of three sub-descriptions: GAME|objects that belong to the atomic concept; 4 participants|objects with at least four llers for the participants role; and 8participants:(PERSON u gender : Female)|objects all of whose participants llers are restricted to belong to PERSONs, which themselves have gender role lled by the value Female.\nA key di erence between DLs and the standard representation formalisms based on First-Order Logic, e.g., relational and deductive databases, is that DLs provide an arena for exploring new sets of \\logical connectives\"|the constructors used to form composite descriptions|that are di erent from the standard connectives such as conjunction, universal quanti ers, etc.. Therefore, DLs provide a new space in which to search for expressive yet e ectively computable representation languages. Moreover, although it is possible to translate many aspects of DLs currently encountered into First Order Logic, reasoning with such a translation would be a very poor substitute because DL-based systems reason in a way that does not resemble standard theorem proving (e.g., by making use of imperative programming features).\nDescriptions such as the one above can be used in several ways in a knowledge base management system (KBMS) based on a description logic:\n1. To state queries: The KBMS can locate all the objects that satisfy the description's properties. 2. To de ne and classify concepts: Identi ers can be attached to descriptions, in the manner of views in relational DBMSs. The system can in addition automatically determine the \\subclass\" relationship between pairs of such concepts based on their de nitions. For example, a concept de ned by the above description would be subsumed by a concept de ned by \\games with at least two participants\" (GAME u 2 participants).\n3. To provide partial information about objects: It is important to understand that distinct DL descriptions can be ascribed to arbitrary individuals (e.g., \\today's game of cards|individual Bgm#467|will have exactly two participants from the following set of three : : :, all of whom like tea and rum\"). Note that unlike database systems, DL-based KBMSs do not require descriptions to be prede ned. This provides considerable power in recording partial knowledge about objects. 4. 
4. To detect errors: It is possible to determine whether two descriptions are disjoint, whether a description is incoherent or not, and whether ascribing a description to an individual leads to an inconsistency. Quite a number of KBMSs based on description logics have been built, including classic (Resnick et al., 1992), loom (MacGregor & Bates, 1987), and back (Peltason et al., 1987). Such systems have been used in several practical situations, including software information bases (Devanbu et al., 1991), financial management (Mays et al., 1987), configuration management (Owsnicki-Klewe, 1988; Wright et al., 1993), and data exploration. Additional signs that DLs are significant subjects of study are the several recent workshops on DLs (Nebel et al., 1991; Peltason et al., 1991; AAAI, 1992)." }, { "figure_ref": [], "heading": "On the Tractability and Completeness of DL Implementations", "publication_ref": [ "b18", "b28", "b32", "b19", "b13", "b8", "b25", "b9", "b7", "b6", "b21", "b26", "b16", "b12" ], "table_ref": [], "text": "The fundamental operation on descriptions is determining whether one description is more general, or subsumes, another, in the sense that any object satisfying the latter would also satisfy the conditions of the former. In parallel with the surge of work on finding tractable yet expressive subsets of first order logic, the DL research community has been investigating the complexity of reasoning with various constructors. The first result in this area (Levesque & Brachman, 1987) showed that even a seemingly simple addition to a very small language can lead to subsumption determination becoming NP-hard. A more recent, striking pair of results (Patel-Schneider, 1989b; Schmidt-Schauss, 1989) shows that adding the ability to represent equalities of role compositions makes the complexity of the subsumption problem leap from quadratic to undecidable.\nThere are three possible responses to these intractability results: Provide an incomplete implementation of the DL reasoner, in the sense that there are inferences sanctioned by the standard semantics of the constructors that are not performed by the algorithm. This approach, explicitly adopted by the loom system implementers (MacGregor & Bates, 1987), and advocated by some users (Doyle & Patil, 1991), has one major difficulty: how can one describe to users the inferences actually drawn by the implementation so that systems with known properties can be implemented on top of such a KBMS? Two solutions to this problem have been suggested: alternative semantic accounts (based on weaker, 4-valued logics, for example) (Patel-Schneider, 1989a), and proof-theoretic semantics (Borgida, 1992). Provide a complete implementation of a specific DL reasoner, acknowledging that in certain circumstances it may take an inordinate amount of time. This approach, followed in systems such as kris (Baader & Hollunder, 1991), has the problem of unpredictability: when will the system \"go off into the wild blue yonder\"? And of course, in some circumstances this is impossible to even attempt since the reasoning problem is undecidable.\nCarefully devise a language of limited expressive power for which reasoning is tractable, and then provide a complete implementation for it. This was the approach chosen by the designers of such languages as kandor (Patel-Schneider, 1984) and krypton (Brachman et al., 1983), and is close to the approach in classic (Borgida et al., 1989).
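To fix intuitions about what a subsumption test computes, here is a toy structural checker for a deliberately small fragment (conjunctions of atomic concepts plus number restrictions). This is our illustrative sketch, not classic's actual algorithm, and the names (Conj, subsumes) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Conj:
    atoms: frozenset                            # atomic concept names
    mins: dict = field(default_factory=dict)    # role -> n, meaning (>= n role)
    maxs: dict = field(default_factory=dict)    # role -> n, meaning (<= n role)

def subsumes(d: Conj, c: Conj) -> bool:
    # D subsumes C iff every constraint of D is implied by C: every
    # atom of D occurs in C, every lower bound of D is met by a lower
    # bound of C, and every upper bound of D dominates one of C.
    if not d.atoms <= c.atoms:
        return False
    if any(c.mins.get(r, 0) < n for r, n in d.mins.items()):
        return False
    if any(c.maxs.get(r, float("inf")) > n for r, n in d.maxs.items()):
        return False
    return True
```

Even for this tiny fragment there is a completeness subtlety: if C is incoherent (say, it contains both (≥ 2 R) and (≤ 1 R)), it denotes the empty set and is subsumed by everything, which the purely structural test above misses. Subtleties of exactly this kind are what made seemingly correct implementations incomplete.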
A hidden difficulty in the second and third approach is to produce an implementation that is correct (\"complete\") with respect to the semantics. This difficulty is illustrated by the discovery, several years later, that the implementation of kandor, as well as candide (Beck et al., 1989), was in fact incomplete, and its subsumption problem is NP-hard (Nebel, 1988), rather than polynomial, as was claimed; this happened despite the fact that kandor is a very \"small\" language in comparison with other DLs, and its implementation appeared to be evidently correct. To avoid such problems, it is necessary to produce convincing demonstrations that the algorithm is correct; several such proofs have in fact already appeared in the DL literature (e.g., Patel-Schneider, 1987; Hollunder & Nutt, 1990; Donini et al., 1991), albeit only for languages that have not seen use in practical applications." }, { "figure_ref": [], "heading": "Outline", "publication_ref": [ "b34" ], "table_ref": [], "text": "The classic system 1,2 is a reasoner based on a moderately complicated DL. It is being used in commercial (Wright et al., 1993) and prototype applications at AT&T, and is made available to academic researchers by AT&T Bell Laboratories.\nOne purpose of this paper is to provide a rigorous formal analysis of the correctness and efficiency of the classic DL subsumption algorithm. 3 We start by presenting such a result for a subset of the language, which we call Basic classic. The subsumption algorithm relies on the transformation of descriptions into a data structure, which we call description graphs, and which are a generalization of Aït-Kaci's psi-terms (1984). In the process of normalizing such a graph to a canonical form, we remove obvious redundancies and explicate certain implicit facts, encoding in particular the infinite set of inferences that can be drawn from so-called \"coreference constraints\". The correctness of the subsumption algorithm is demonstrated rigorously by showing how to construct (inductively) a countermodel in case the algorithm returns the answer \"no\".\nNext, we explore the effect of adding individuals to descriptions. We show that, using individuals, one can encode disjunctive information, leading to the need to examine combinatorially many possibilities. The classic implementation is in fact incomplete with respect to the standard semantics. The second contribution of this paper is then a well-motivated, understandable, and small change to the standard semantics that alleviates this problem. We extend the subsumption algorithm and its proof of correctness to deal with individuals under the modified semantics, thereby characterizing in some sense the \"incompleteness\" of the reasoner.\nThis paper therefore illustrates all three paradigms described above, albeit in a nonstandard manner for the second paradigm, and does so for the first time on a realistic language with significant practical use." }, { "figure_ref": [], "heading": "Basic CLASSIC", "publication_ref": [], "table_ref": [], "text": "Descriptions in Basic classic are built up from a collection of atomic concept names, role names, and attribute names.
Roles and attributes are always atomic, but descriptions can be built up using operators/constructors such as value restrictions and number restrictions, as we indicate below.\nBasic classic incorporates objects from the host programming language, 4 called host individuals, which form a distinct group from classic individuals; only the latter can have roles or attributes of their own, the former being restricted to be role or attribute fillers.\nThe denotational semantics of classic descriptions starts, as usual, with a domain of values, Δ, subsets of which are extensions for descriptions, while subsets of Δ × Δ are extensions of roles and attributes. This domain is in fact disjointly divided into two realms, the host realm, Δ_H, containing objects corresponding to host language individuals, and the classic realm, Δ_C, containing the other objects. Every description, except for THING, which denotes the entire domain, has as its extension a subset of either the classic realm or the host realm. (NOTHING denotes the empty set, which is therefore both a classic and a host concept.) The extension of a role in a possible world is a relation from the classic realm to the entire domain, while the extension of an attribute is a function from the classic realm into the entire domain.\nHost descriptions are relatively simple: (i) HOST-THING, denoting the entire host realm, Δ_H; (ii) special, pre-defined names corresponding to the types in the host programming language; and (iii) conjunctions of the above descriptions. The descriptions corresponding to the host programming language types have pre-defined extensions and subsumption relationships, mirroring the subtype relationship in the host programming language. This subtype relationship is satisfied in all possible worlds/interpretations. We require (i) that all host concepts have an extension that is either of infinite size or is empty; (ii) that if the extensions of two host concepts overlap, then one must be subsumed by the other, i.e., types are disjoint unless they are subtypes of each other; and (iii) that a host concept has infinitely many more instances than each of its child concepts. (These conditions are needed to avoid being able to infer conclusions from the size of host descriptions.) This allows for host concepts corresponding to the built-in types of the host language.\nDefinition 1: A possible world/interpretation, I, consists of a domain, Δ, and an interpretation function, ·^I. The domain is disjointly divided into a classic realm, Δ_C, and a host realm, Δ_H. The interpretation function assigns extensions to atomic identifiers as follows:\nThe extension of an atomic concept name E is some subset E^I of the classic realm. The extension of an atomic role name R is some subset R^I of Δ_C × Δ. The extension of an atomic attribute name A is some total function A^I from Δ_C to Δ. The extension C^I of a non-atomic classic description is computed as follows:\nCLASSIC-THING^I = Δ_C.\n(C ⊓ D)^I = C^I ∩ D^I.\n(∀p.C)^I = {d ∈ Δ_C | ∀x. (d, x) ∈ p^I ⇒ x ∈ C^I}, i.e., those objects in Δ_C all of whose p-role or p-attribute fillers are in the extension of C;\n(≥ n p)^I (resp. (≤ n p)^I) is the set of objects in Δ_C with at least (resp.
at most) n fillers for role p.\n(A_1 ⋯ A_k = B_1 ⋯ B_h)^I = {d ∈ Δ_C | A_k^I(⋯ A_1^I(d) ⋯) = B_h^I(⋯ B_1^I(d) ⋯)}, i.e., those objects in Δ_C with the property that applying the composition of the extensions of the A_i's and the composition of the extensions of the B_j's to the object both result in the same value. 5\nA description, D_1, is then said to subsume another, D_2, if for all possible worlds I, D_2^I ⊆ D_1^I.\nOf key interest is the computation of the subsumption relationship between descriptions in Basic classic. Subsumption computation is a multi-part process. First, descriptions are turned into description graphs. Next, description graphs are put into canonical form, where certain inferences are explicated and other redundancies are reduced by combining nodes and edges in the graph. Finally, subsumption is determined between a description and a canonical description graph.\nTo describe in detail the above process, we start with a formal definition of the notion of description graph (Definition 2), and then present techniques for translating a description to a description graph (Section 2.2), which requires merging pairs of nodes and pairs of graphs (Definitions 4 and 5); for putting a description graph into canonical form (Section 2.3); and for determining whether a description subsumes a description graph (Algorithm 1).\nTo prove the correctness of this approach, we need to show that the first two steps lead us in the right direction, i.e., that the following three questions are equivalent: \"Does description D subsume description C?\", \"Does description D subsume graph G_C?\", and \"Does description D subsume graph canonical(G_C)?\". To do this, we need to define the formal semantics of both descriptions and graphs (Definitions 1 and 3), and then prove the results (Theorems 1 and 2). To prove the \"completeness\" of the subsumption algorithm, we show that if the algorithm does not indicate that D subsumes canonical(G_C), then we can construct an interpretation (\"graphical world\") in which some object is in the denotation of canonical(G_C) but not that of D." }, { "figure_ref": [ "fig_1" ], "heading": "Description Graphs", "publication_ref": [ "b0" ], "table_ref": [], "text": "One way of developing a subsumption algorithm is to first transform descriptions into a canonical form, and then determine subsumption relationships between them. Canonical descriptions can normally be thought of as trees since descriptions are terms in a first order term language. The presence of equality restrictions in classic significantly changes the handling of subsumption because they introduce relationships between different pieces of the normal form. Most significantly, in the presence of equalities, a small description, such as ∀friend.TALL ⊓ (friend = friend friend), can be subsumed by descriptions of arbitrary size, such as ∀friend.(∀friend.(⋯(∀friend.TALL)⋯)).\nIn order to record such sets of inferences in the canonical form, we will resort to a graph-based representation, suggested by the semantic-network origins of description logics, and the work of Aït-Kaci (1984).\nIntuitively, a description graph is a labelled, directed multigraph, with a distinguished node. Nodes of the graph correspond to descriptions, while edges of the graph correspond to restrictions on roles or attributes. The edges of the graph are labelled with the role name and the minimum and maximum number of fillers associated with the edge, or just with the attribute name.
The nodes of the graph are labelled with the concept names associated with the node. For example, Figure 1 is a description graph which, as we shall see later, corresponds to the description GAME ⊓ ∀participants.PERSON ⊓ coach = captain∘father.

Because equality restrictions (and hence the non-tree portions of the graph) involve only attributes, edges labelled with roles are all cut-edges, i.e., their removal increases by one the number of connected components of the graph. This restriction is important because if the graph is in tree form, there is really no difference between a graphical and a linear notation, and a semantics is simple to develop. If the graph is a general directed acyclic graph, then there is the problem of relating the semantics generated by two different paths in the graph that share the same beginning and ending nodes. If the graph contains cycles, the problem of developing a correct semantics is even more difficult, as a simplistic semantics will be non-well-founded, and some sort of fixed-point or model-preference semantics will be required. Fortunately, any non-tree parts of our graphical notation involve attributes only, and because attributes are functional, our job is much easier.

As a result of the above restrictions, it is possible to view a description graph as having the following recursive structure: (i) there is a distinguished node r, which has an "island" of nodes connected to it by edges labelled with attributes; (ii) nodes in this island may have 0 or more edges labelled with roles leaving them, pointing to the distinguished nodes of other description graphs; (iii) these graphs share no nodes or edges in common with each other, nor with the islands above them.

Because of this recursive structure, it is easier to represent description graphs using a recursive definition, instead of the usual graph definition. This recursive definition is similar to the recursive definition of a tree, which states that a tree consists of some information (the information at the root) plus a set of trees (the children of the root). As description graphs are more complex than simple trees, we have to use a two-part definition.

Definition 2 A description graph is a triple, ⟨N, E, r⟩, consisting of a set N of nodes; a bag E of edges (a-edges) labelled with attribute names; and a distinguished node r in N. Elements of E will be written ⟨n₁, n₂, A⟩, where n₁ and n₂ are nodes and A is an attribute name.

A node in a description graph is a pair, ⟨C, H⟩, consisting of a set C of concept names (the atoms of the node) and a bag H of tuples (the r-edges of the node). An r-edge is a tuple, ⟨R, m, M, G⟩, of a role name, R; a min, m, which is a non-negative integer; a max, M, which is a non-negative integer or ∞; and a (recursively nested) description graph G, representing the restriction on the fillers of the role. (G will often be called the restriction graph of the node.)

Concept names in a description graph are atomic concept names, host concept names, THING, CLASSIC-THING, or HOST-THING.

Description graphs are given extensions starting from the same possible worlds I as used for descriptions. However, we additionally need a way of identifying the individuals to be related by attributes, which will be given by a function γ.
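The two-part recursive structure of Definition 2 maps directly onto nested data types. The sketch below continues the illustrative Python encoding (names again our own); math.inf stands in for the unbounded max ∞.

```python
import math
from dataclasses import dataclass, field
from typing import List, Optional, Set, Tuple

@dataclass
class REdge:                    # r-edge: ⟨R, m, M, G⟩
    role: str
    mn: int                     # min number of fillers
    mx: float                   # max number of fillers; math.inf means ∞
    restriction: 'Graph'        # recursively nested restriction graph

@dataclass
class Node:                     # node: ⟨C, H⟩
    atoms: Set[str] = field(default_factory=set)
    redges: List[REdge] = field(default_factory=list)

@dataclass
class Graph:                    # description graph: ⟨N, E, r⟩
    nodes: List[Node] = field(default_factory=list)
    aedges: List[Tuple[Node, Node, str]] = field(default_factory=list)
    root: Optional[Node] = None # the distinguished node r
```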
Definition 3 Let G = ⟨N, E, r⟩ be a description graph and let I be a possible world. Then the interpretation G^I of G, and the interpretation n^I of each of the nodes in N, are recursively (and mutually) defined as follows:

An element, d, of Δ is in G^I iff there is some function, γ, from N into Δ such that

1. d = γ(r);
2. for all n ∈ N, γ(n) ∈ n^I;
3. for all ⟨n₁, n₂, A⟩ ∈ E we have ⟨γ(n₁), γ(n₂)⟩ ∈ A^I (which is equivalent to γ(n₂) = A^I(γ(n₁)), since A^I is a function).

An element, d, of Δ is in n^I, where n = ⟨C, H⟩, iff

1. for all C ∈ C, we have d ∈ C^I; and
2. for all ⟨R, m, M, G⟩ ∈ H, (a) there are between m and M elements, d′, of the domain such that ⟨d, d′⟩ ∈ R^I, and (b) d′ ∈ G^I for all d′ such that ⟨d, d′⟩ ∈ R^I.

Translating Descriptions to Description Graphs

A Basic CLASSIC description is turned into a description graph by a recursive process, working from the "inside out". In this process, description graphs and nodes are often merged.

Definition 4 The merge of two nodes, n₁ ⊗ n₂, is a new node whose atoms are the union of the atoms of the two nodes and whose r-edges are the (bag) union of the r-edges of the two nodes.

Definition 5 The merge of two description graphs, G₁ ⊗ G₂, is a description graph whose nodes are the disjoint union of the non-distinguished nodes of G₁ and G₂ plus a new distinguished node. The edges of the merged graph are the union of the edges of G₁ and G₂, except that edges touching the distinguished nodes of G₁ or G₂ are modified to touch the new distinguished node. The new distinguished node is the merge of the two distinguished nodes of G₁ and G₂.

The rules for translating a description C in Basic CLASSIC into a description graph G_C are as follows (a sketch of this translation appears after the list):

1. A description that consists of a concept name is turned into a description graph with one node and no a-edges. The atoms of the node contain only the concept name. The node has no r-edges.
2. A description of the form ≥ n R is turned into a description graph with one node and no a-edges. The node has CLASSIC-THING as its atoms and a single r-edge with role R, min n, max ∞, and restriction G_THING.
3. A description of the form ≤ n R is turned into a description graph with one node and no a-edges. The node has CLASSIC-THING as its atoms and a single r-edge with role R, min 0, max n, and restriction G_THING.
4. A description of the form ∀R.C, with R a role, is turned into a description graph with one node and no a-edges. The node has CLASSIC-THING as its atoms and a single r-edge with role R, min 0, max ∞, and restriction G_C.
5. To turn a description of the form C ⊓ D into a description graph, construct G_C and G_D and merge them.
6. To turn a description of the form ∀A.C, with A an attribute, into a description graph, first construct the description graph ⟨N_C, E_C, r_C⟩ for C. The description graph for ∀A.C is ⟨N_C ∪ {t}, E_C ∪ {⟨t, r_C, A⟩}, t⟩, where t is the node ⟨{CLASSIC-THING}, {}⟩.
7. To turn a description of the form A₁ ⋯ Aₙ = B₁ ⋯ Bₘ into a description graph, first create a distinguished node, r, with CLASSIC-THING as its atoms, and a node e, with THING as its atoms. For 1 ≤ i ≤ n−1 create a node aᵢ, with CLASSIC-THING as its atoms; for 1 ≤ j ≤ m−1 create a node bⱼ, with CLASSIC-THING as its atoms. None of the aᵢ or bⱼ have r-edges. If n = 1, create the edge ⟨r, e, A₁⟩; if n > 1 then create edges ⟨r, a₁, A₁⟩, ⟨aₙ₋₁, e, Aₙ⟩, and ⟨aᵢ₋₁, aᵢ, Aᵢ⟩ for 2 ≤ i ≤ n−1. Similarly, if m = 1, create the edge ⟨r, e, B₁⟩; if m > 1 then create edges ⟨r, b₁, B₁⟩, ⟨bₘ₋₁, e, Bₘ⟩, and ⟨bⱼ₋₁, bⱼ, Bⱼ⟩ for 2 ≤ j ≤ m−1. This creates two disjoint paths, one for the Aᵢ and one for the Bⱼ, from the distinguished node to the end node.
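A sketch of the node/graph merges (Definitions 4 and 5) and of the translation, built on the hypothetical AST and Graph/Node types from the previous sketches; rules 6 and 7 (the attribute cases) are omitted for brevity.

```python
def merge_nodes(n1: Node, n2: Node) -> Node:
    # Definition 4: union the atoms, bag-union the r-edges.
    return Node(atoms=n1.atoms | n2.atoms, redges=n1.redges + n2.redges)

def merge_graphs(g1: Graph, g2: Graph) -> Graph:
    # Definition 5: disjoint union of non-root nodes; the roots are merged,
    # and edges touching either old root are redirected to the new root.
    root = merge_nodes(g1.root, g2.root)
    def redirect(n):
        return root if (n is g1.root or n is g2.root) else n
    nodes = [n for n in g1.nodes + g2.nodes
             if n is not g1.root and n is not g2.root]
    aedges = [(redirect(a), redirect(b), A)
              for (a, b, A) in g1.aedges + g2.aedges]
    return Graph(nodes=nodes + [root], aedges=aedges, root=root)

def leaf(atoms, redges=()):
    n = Node(atoms=set(atoms), redges=list(redges))
    return Graph(nodes=[n], root=n)

def to_graph(d) -> Graph:
    if isinstance(d, ConceptName):                      # rule 1
        return leaf({d.name})
    if isinstance(d, AtLeast):                          # rule 2
        return leaf({'CLASSIC-THING'},
                    [REdge(d.role, d.n, math.inf, leaf({'THING'}))])
    if isinstance(d, AtMost):                           # rule 3
        return leaf({'CLASSIC-THING'},
                    [REdge(d.role, 0, d.n, leaf({'THING'}))])
    if isinstance(d, All):                              # rule 4 (role case)
        return leaf({'CLASSIC-THING'},
                    [REdge(d.p, 0, math.inf, to_graph(d.c))])
    if isinstance(d, And):                              # rule 5
        return merge_graphs(to_graph(d.left), to_graph(d.right))
    raise NotImplementedError('rules 6-7 (attribute edges) omitted here')
```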
Lemma 1 For all possible worlds I, (n₁ ⊗ n₂)^I = n₁^I ∩ n₂^I, and (D₁ ⊗ D₂)^I = D₁^I ∩ D₂^I for description graphs D₁ and D₂.

Proof: Since the components (atoms and r-edges) of the merged node are obtained by unioning the components of the respective nodes, and since the interpretation of a node is the intersection of the interpretations of its components, the result is obviously true for nodes.

For merging graphs, the only difference is that the root nodes are replaced by their merger in all edges, as well as at the root. But then an element of (D₁ ⊗ D₂)^I is clearly an element of both D₁^I and D₂^I. Conversely, since we take the disjoint union of the other nodes in the two graphs, the mapping functions γ₁ and γ₂ of Definition 3 can simply be unioned, so that an element of both D₁^I and D₂^I is an element of (D₁ ⊗ D₂)^I.

Canonical Description Graphs

In the following sections we will occasionally refer to "marking a node incoherent"; this consists of replacing it with a special node having no outgoing r-edges and including in its atoms NOTHING, which always has the empty interpretation. Marking a description graph as incoherent consists of replacing it with a description graph consisting only of an incoherent node. (Incoherent graphs are to be thought of as representing concepts with empty extension.) Description graphs are transformed into canonical form by repeating the following normalization steps, whenever applicable, for the description graph and all its descendants (a sketch of the normalization pass follows the list):

1. If some node has a pre-defined host concept in its atoms, add HOST-THING to its atoms. If some node has an atomic concept name in its atoms, add CLASSIC-THING to its atoms. For each pre-defined host concept in the atoms of the node, add all the more general pre-defined host concepts to its atoms.
2. If some node has both HOST-THING and CLASSIC-THING in its atoms, mark the node incoherent. If some node has in its atoms a pair of host concepts that are not related by the pre-defined subsumption relationship, mark the node incoherent, since their intersection will be empty.
3. If any node in a description graph is marked incoherent, mark the description graph as incoherent. (Reason: even if the node is not a root, attributes must always have a value, and this value cannot belong to the empty set.)
4. If some r-edge in a node has its min greater than its max, mark the node incoherent.
5. If some r-edge in a node has its description graph marked incoherent, change its max to 0. (Reason: it cannot have any fillers that belong to the empty set.)
6. If some r-edge in a node has a max of 0, mark its description graph as incoherent. (Reason: this normalization step records the equivalence between ≤ 0 R and ∀R.NOTHING, and is then used to infer that a concept with ∀R.C, for arbitrary C, subsumes ≤ 0 R.)
7. If some node has two r-edges labelled with the same role, merge the two edges, as described below.
8. If some description graph has two a-edges from the same node labelled with the same attribute, merge the two edges.
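The overall shape of a normalization pass over the hypothetical structures might look as follows. This sketch covers steps 3-7 only (the host-concept bookkeeping of steps 1-2 and the a-edge merging of step 8 are omitted), and it marks incoherence by putting NOTHING into a node's atoms; merge_graphs is the helper sketched earlier.

```python
def mark_incoherent(g: Graph) -> None:
    # Replace the whole graph by a single incoherent node.
    bottom = Node(atoms={'NOTHING'})
    g.nodes, g.aedges, g.root = [bottom], [], bottom

def is_incoherent(g: Graph) -> bool:
    return any('NOTHING' in n.atoms for n in g.nodes)

def normalize(g: Graph) -> None:
    changed = True
    while changed:
        changed = False
        for n in g.nodes:
            # Step 7: merge r-edges that share a role.
            by_role = {}
            for e in n.redges:
                if e.role in by_role:
                    old = by_role[e.role]
                    old.mn = max(old.mn, e.mn)
                    old.mx = min(old.mx, e.mx)
                    old.restriction = merge_graphs(old.restriction,
                                                   e.restriction)
                    changed = True
                else:
                    by_role[e.role] = e
            n.redges = list(by_role.values())
            for e in n.redges:
                normalize(e.restriction)                  # recurse
                if is_incoherent(e.restriction) and e.mx != 0:
                    e.mx = 0                              # step 5
                    changed = True
                if e.mx == 0 and not is_incoherent(e.restriction):
                    mark_incoherent(e.restriction)        # step 6
                    changed = True
                if e.mn > e.mx and 'NOTHING' not in n.atoms:
                    n.atoms.add('NOTHING')                # step 4
                    changed = True
        if is_incoherent(g) and g.root.atoms != {'NOTHING'}:
            mark_incoherent(g)                            # step 3
            changed = True
```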
To merge two r-edges of a node that have identical roles, replace them with one r-edge. The new r-edge has the same role, the maximum of the two mins as its min, the minimum of the two maxs as its max, and the merge of the two description graphs as its restriction.

To merge two a-edges ⟨n, n₁, A⟩ and ⟨n, n₂, A⟩, replace them with a single new edge ⟨n, n′, A⟩, where n′ results from merging n₁ and n₂, i.e., n′ = n₁ ⊗ n₂. (If n₁ = n₂ then n′ = n₁.) In addition, replace n₁ and n₂ by n′ in all other a-edges of this description graph.

We need to show that the transformations to canonical form do not change the extension of the graph. The main difficulty is in showing that the two edge-merging processes do not change the extension.

Lemma 2 Let G = ⟨N, E, r⟩ be a description graph with two mergeable a-edges and let G′ = ⟨N′, E′, r′⟩ be the result of merging these two a-edges. Then G^I = G′^I.

Proof: Let the two edges be ⟨n, n₁, A⟩ and ⟨n, n₂, A⟩, and let the new node n′ be n₁ ⊗ n₂. Choose d ∈ G^I, and let γ be a function from N into the domain satisfying the conditions for extensions (Definition 3) such that γ(r) = d. Then γ(n₁) = γ(n₂), because both are equal to A^I(γ(n)). Let γ′ be the same as γ except that γ′(n′) = γ(n₁) = γ(n₂). Then γ′ satisfies part 3 of Definition 3 for G′, because we replace n₁ and n₂ by n′ everywhere. Moreover, γ′(n′) = γ(n₁) ∈ n₁^I ∩ n₂^I, which, by Lemma 1, equals (n₁ ⊗ n₂)^I; so part 2 is satisfied too, since n′ = n₁ ⊗ n₂. Finally, if the root is modified by the merger, i.e., n₁ or n₂ is r, say n₁, then d = γ(n₁) = γ′(n′), so part 1 of the definition is also satisfied.

Conversely, given an arbitrary d ∈ G′^I, let γ′ be the function stipulated by Definition 3 such that γ′(r′) = d. Let γ be the same as γ′ except that γ(n₁) = γ′(n′) and γ(n₂) = γ′(n′). Then the above argument can be traversed in reverse to verify that γ satisfies Definition 3, so that d ∈ G^I.

Lemma 3 Let n be a node with two mergeable r-edges and let n′ be the node with these edges merged. Then n^I = n′^I.

Proof: Let the two r-edges be ⟨R, m₁, M₁, G₁⟩ and ⟨R, m₂, M₂, G₂⟩.

Let d ∈ n^I. Then there are between m₁ (respectively m₂) and M₁ (respectively M₂) elements of the domain, d′, such that ⟨d, d′⟩ ∈ R^I. Therefore there are between the maximum of m₁ and m₂ and the minimum of M₁ and M₂ elements of the domain, d′, such that ⟨d, d′⟩ ∈ R^I. Also, all d′ such that ⟨d, d′⟩ ∈ R^I are in G₁^I (respectively G₂^I). Therefore, all d′ such that ⟨d, d′⟩ ∈ R^I are in G₁^I ∩ G₂^I, which equals (G₁ ⊗ G₂)^I by Lemma 1. Thus d ∈ n′^I.

Let d ∈ n′^I. Then there are between the maximum of m₁ and m₂ and the minimum of M₁ and M₂ elements of the domain, d′, such that ⟨d, d′⟩ ∈ R^I. Therefore there are between m₁ (m₂) and M₁ (M₂) elements of the domain, d′, such that ⟨d, d′⟩ ∈ R^I. Also, all d′ such that ⟨d, d′⟩ ∈ R^I are in (G₁ ⊗ G₂)^I = G₁^I ∩ G₂^I. Therefore, all d′ such that ⟨d, d′⟩ ∈ R^I are in G₁^I (respectively G₂^I). Therefore d ∈ n^I.
Therefore d 2 n I .\nHaving dealt with the issue of merging, we can now return to our desired result: showing that \\normalization\" does not a ect the meaning of description graphs.\nTheorem 2 For all possible worlds I, the extension of the canonical form of a description graph, G, resulting from a Basic classic description is the same as the extension of the description.\nProof: Steps 1 and 2 are justi ed since G I is a subset of either H or C , which are disjoint.\nStep 3 is justi ed by the fact that, by the de nition of description graphs, there must be an element of the domain in the extension of each node in a description graph.\nSteps 4, 5, and 6 are easily derived from De nition 3.\nSteps 7 and 8 are dealt with in the preceding two lemmas." }, { "figure_ref": [], "heading": "Subsumption Algorithm", "publication_ref": [], "table_ref": [], "text": "The nal part of the subsumption process is checking to see if a canonical description graph is subsumed by a description. It turns out that it is possible to carry out the subsumption test without the expense of normalizing the candidate subsumer concept.\nAlgorithm 1 (Subsumption Algorithm) Given a description D and description graph G = hN; E; ri, subsumes?(D; G) is de ned to be true if and only if any of the following conditions hold:\n1. The description graph G is marked incoherent.\n2. D is equivalent to THING. (This is determined by checking rst if D=THING, or by recursively testing whether D subsumes the canonical description graph G THING .)\n3. D is a concept name and is an element of the atoms of r.\n4. D is n R and some r-edge of r has R as its role and min greater than or equal to n.\n5. D is n R and some r-edge of r has R as its role and max less than or equal to n.\n6. D is 8R:C and some r-edge of r has R as its role and G 0 as its restriction graph and subsumes?(C; G 0 ).\n7. D is 8R:C and subsumes?(C; G THING ) and r has CLASSIC-THING in its atoms. (Reason: 8R:THING only requires the possibility that R be applicable to an object, which is absent for host values.)\n8. D is 8A:C and some a-edge of G is of the form hr; r 0 ; Ai, and subsumes?(C; hN; E; r 0 i).\n9. D is 8A:C and subsumes?(C; G THING ) and r has CLASSIC-THING in its atoms. for any attribute F, as long as the attribute is applicable (i.e., the value is not in the host domain).)\n12. D is C u E and both subsumes?(C; G) and subsumes?(E; G) are true." }, { "figure_ref": [], "heading": "Correctness of Subsumption Algorithm", "publication_ref": [], "table_ref": [], "text": "The soundness of this algorithm is fairly obvious, so we shall not dwell on it. The completeness of the algorithm is, as usual, more di cult to establish. First we have to show that for any canonical description graph or node that is not marked as incoherent, a possible world having a non-empty extension for the description graph or node can be constructed. We will do this in a constructive, inductive manner, constructing a collection of such possible worlds, called the graphical worlds of a description graph. A graphical world has a distinguished domain element that is in the extension of the description graph or node.\nA common operation is to merge two possible worlds.\nDe nition 6 Let I 1 and I 2 be two possible worlds. The merge of I 1 and I 2 , I 1 I 2 , is a possible world with classic realm the disjoint union of the classic realm of I 1 and the classic realm of I 2 . 
Correctness of Subsumption Algorithm

The soundness of this algorithm is fairly obvious, so we shall not dwell on it. The completeness of the algorithm is, as usual, more difficult to establish. First we have to show that for any canonical description graph or node that is not marked as incoherent, a possible world having a non-empty extension for the description graph or node can be constructed. We do this in a constructive, inductive manner, building a collection of such possible worlds, called the graphical worlds of a description graph. A graphical world has a distinguished domain element that is in the extension of the description graph or node.

A common operation is to merge two possible worlds.

Definition 6 Let I₁ and I₂ be two possible worlds. The merge of I₁ and I₂, I₁ ⊗ I₂, is a possible world whose CLASSIC realm is the disjoint union of the CLASSIC realms of I₁ and I₂. The extension of atomic names in I₁ ⊗ I₂ is the disjoint union of their extensions in I₁ and I₂.

It is easy to show that the extension of a description, a description graph, or a node in I₁ ⊗ I₂ is the union (disjoint union for the CLASSIC realm, regular union for the host realm) of its extensions in I₁ and I₂.

Another operation is to add new domain elements to a possible world. These new domain elements must be in the CLASSIC realm. The extensions of all atomic identifiers remain the same, except that the new domain elements belong to some arbitrary set of atomic concept names and have some arbitrary set of fillers (respectively, an arbitrary filler) for each role (respectively, attribute). Again, it is easy to show that a domain element of the original world is in an extension in the original world iff it is in the extension in the augmented world.

Given a node, n, that is not marked as incoherent, we construct the graphical worlds for n as follows:

1. If the atoms of n are precisely THING, then n can have no r-edges, because the only constructs that cause r-edges to be created also add CLASSIC-THING to the atoms. Any possible world, with any domain element as the distinguished domain element, is a graphical world for n.
2. If the atoms of n include HOST-THING, then n can have no r-edges. Any possible world, with as distinguished element any domain element in the extension of all the atoms of n and in no other host concepts, is a graphical world for n. (Because of the requirements on the host domain, there are an infinite number of such domain elements.)
3. If the atoms of n include CLASSIC-THING, then for each r-edge, ⟨R, m, M, G⟩, in n, construct between m and M graphical worlds for G. This can be done for any number between m and M because if m > 0 then G is not marked incoherent, and if G is marked incoherent then M = 0. No two of these graphical worlds should have the same host domain element as their distinguished element. (Again, this is possible because the extension of a host concept is either empty or infinite.) Now merge all the graphical worlds for each r-edge into one possible world. Add some new domain elements such that one of them is in exactly the extensions of the atoms of n and has as fillers for each R exactly the distinguished elements of the appropriate graphical worlds. This domain element has the correct number of fillers for each r-edge, because of the disjoint union of the CLASSIC realms in the merge process and because of the different host domain elements picked above; therefore it is in the extension of n. Thus the resulting world is a graphical world for n.

Given a description graph, G = ⟨N, E, r⟩, that is not marked incoherent, we construct the graphical worlds for G as follows. For each node n ∈ N construct a graphical world for n; this can be done because none of them is marked incoherent. Merge these graphical worlds. Modify the resulting world so that for each ⟨n₁, n₂, A⟩ ∈ E the A-filler of the distinguished element of the graphical world for n₁ is the distinguished element of the graphical world for n₂. It is easy to show that the distinguished element of the graphical world of r is in the extension of G, making this a graphical world for G.
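The constructive flavour of graphical worlds can be conveyed by a toy model builder: one fresh domain element per node, and m fillers per r-edge (m fillers suffice in a coherent canonical graph, where m ≤ M). This is a deliberate simplification of the construction in the text, which must also pick host elements and vary filler counts.

```python
import itertools

_fresh = itertools.count()

def build_witness(g: Graph, model=None):
    # model: {'elements': set of ints, 'fillers': {(elem, name): [elems]}}
    if model is None:
        model = {'elements': set(), 'fillers': {}}
    elem_of = {id(n): next(_fresh) for n in g.nodes}
    model['elements'].update(elem_of.values())
    for (n1, n2, A) in g.aedges:        # attributes: exactly one filler
        model['fillers'][(elem_of[id(n1)], A)] = [elem_of[id(n2)]]
    for n in g.nodes:
        for e in n.redges:              # roles: m fillers, each a sub-witness
            fillers = [build_witness(e.restriction, model)
                       for _ in range(e.mn)]
            model['fillers'][(elem_of[id(n)], e.role)] = fillers
    return elem_of[id(g.root)]          # the distinguished domain element
```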
Now we can show the final part of the result.

Theorem 3 If the subsumption algorithm indicates that the canonical description graph of some graph G is not subsumed by the Basic CLASSIC description D, then for some possible world there is a domain element in the extension of the graph but not in the extension of D. Therefore G is not subsumed by D.

Proof: The proof actually shows that if the subsumption algorithm indicates that some canonical description graph, G, is not subsumed by some description, D, then there are some graphical worlds for G such that their distinguished domain elements are not in the extension of D. Remember that the subsumption algorithm indicates that G is not subsumed by D, so G must not be marked as incoherent and thus there are graphical worlds for G.

The proof proceeds by structural induction on D. Let G = ⟨N, E, r⟩.

If D is an atomic concept name or a pre-defined host concept, then D does not occur in the atoms of r. By construction, in any graphical world for G the distinguished domain element will not be in the extension of D. Similarly, if D is CLASSIC-THING or HOST-THING, then the distinguished domain elements will be in the wrong realm. If D is THING, then it is not possible for the subsumption algorithm to indicate a non-subsumption. In each case any graphical world for G has the property that its distinguished domain element is not in the extension of D.

If D is of the form D₁ ⊓ D₂, then the subsumption algorithm must indicate that G is not subsumed by at least one of D₁ or D₂. By the inductive hypothesis, we get some graphical worlds for G whose distinguished domain elements are not in the extension of D₁ or not in the extension of D₂, and thus are not in the extension of D.

If D is of the form ≥ n R, then either the r-edge from r labelled with R has min less than n, or there is no such r-edge. In the former case there are graphical worlds for G in which the distinguished element has n−1 fillers for R, because n is greater than the min on the r-edge for R, and thus the distinguished element is not in the extension of D. In the latter case, there are graphical worlds for G in which the distinguished element has any number of fillers for R; those with n−1 fillers have the property that their distinguished element is not in the extension of D.

If D is of the form ≤ n R, then either the r-edge from r labelled with R has max greater than n (including ∞), or there is no such r-edge. In the former case there are graphical worlds for G in which the distinguished element has n+1 fillers for R, because n is less than the max on the r-edge for R, and thus the distinguished element is not in the extension of D. In the latter case, there are graphical worlds for G in which the distinguished element has any number of fillers for R; those with n+1 fillers have the property that their distinguished element is not in the extension of D.

If D is of the form ∀R.C, where R is a role, then two cases arise.

1. If subsumes?(C, G_THING) then CLASSIC-THING is not in the atoms of r. Then there are some graphical worlds for G whose distinguished element is in the host realm, and thus not in the extension of D.

2. Otherwise, either there is an r-edge from r with role R and description graph H such that subsumes?(C, H) is false, or there is no r-edge from r with role R. Note that the extension of C is not the entire domain, and thus must be a subset of either the host realm or the CLASSIC realm. In the former case H is not marked incoherent (or else the subsumption could not be false) and the max on the r-edge cannot be 0. Thus there are graphical worlds for H whose distinguished element is not in the extension of C, and there are graphical worlds for G that use the distinguished elements of these graphical worlds for H as R-fillers of the distinguished domain element.
In these graphical worlds for G the distinguished element is not in the extension of D. In the latter case, pick graphical worlds for G that have some R-filler of the distinguished element in the wrong realm. In these graphical worlds for G the distinguished element is not in the extension of D.

If D is of the form ∀A.C, where A is an attribute, then two cases arise.

1. If subsumes?(C, G_THING) then CLASSIC-THING is not in the atoms of r. Then there are some graphical worlds for G whose distinguished element is in the host realm, and thus not in the extension of D.

2. Otherwise, either there is an a-edge from r with attribute A to some other node r′ such that subsumes?(C, H) is false, where H = ⟨N, E, r′⟩, or there is no a-edge from r with attribute A. Note that the extension of C is not the entire domain, and thus must be a subset of either the host realm or the CLASSIC realm. In the former case H is not marked incoherent, because G is not marked incoherent. Thus there are graphical worlds for H whose distinguished element is not in the extension of C. Given any graphical world for H, a graphical world for G can be formed simply by changing the distinguished domain element. If the original graphical world's distinguished element is not in the extension of C, then the new graphical world's distinguished element will not be in the extension of D, as required. In the latter case, pick graphical worlds for G that have the A-filler of their distinguished element in the wrong realm. In these graphical worlds for G the distinguished element is not in the extension of D.

If D is of the form A₁ ⋯ Aₙ = B₁ ⋯ Bₘ, several cases again arise.

1. If one of the paths A₁, …, Aₙ₋₁ or B₁, …, Bₘ₋₁ does not exist in G starting from r, then find the end of the partial path and use graphical worlds in which the domain element for this node has an element of the host domain as its filler for the next attribute in the path. Then one of the full paths will have no filler.

2. If the paths A₁, …, Aₙ and B₁, …, Bₘ exist in G starting from r but end at different nodes, then use graphical worlds in which the domain elements for these two nodes are different.

3. If one of the paths A₁, …, Aₙ and B₁, …, Bₘ does not exist in G starting from r, but the paths A₁, …, Aₙ₋₁ and B₁, …, Bₘ₋₁ both exist in G starting from r and end at the same node, then either CLASSIC-THING is not in the atoms of this node or Aₙ ≠ Bₘ. In the former case use graphical worlds in which the domain element for this node is in the host realm. In the latter case use graphical worlds that have different fillers for Aₙ and Bₘ for the domain element of this node.

4. If one of the paths A₁, …, Aₙ and B₁, …, Bₘ does not exist in G starting from r, but the paths A₁, …, Aₙ₋₁ and B₁, …, Bₘ₋₁ both exist in G starting from r and end at different nodes, then use graphical worlds that have different fillers for the domain elements of these nodes, or that have the domain elements in the host realm.

In all cases we have that either one of Aₙ^I(⋯A₁^I(d)⋯) or Bₘ^I(⋯B₁^I(d)⋯) does not exist, or Aₙ^I(⋯A₁^I(d)⋯) ≠ Bₘ^I(⋯B₁^I(d)⋯), so the distinguished domain element is not in the extension of D.
Implementing the subsumption algorithm

In this section we provide some further comments about the actual subsumption algorithm used by the CLASSIC system, including a rough analysis of its complexity.

As we have described it, deciding whether description C subsumes D is accomplished in three phases:

1. Convert D into a description graph G_D.
2. Normalize G_D.
3. Verify whether C subsumes G_D.

Step 1: Conversion is accomplished by a simple recursive descent parser, which takes advantage of the fact that the syntax of description logics (i.e., the leading term constructor) makes them amenable to predictive parsing. Clearly, constructing graphs for fixed-size terms (like at-least) takes constant time (if we measure size so that an integer has size 1, no matter how large), while the time for non-recursive terms (like same-as) is proportional to their length. Finally, recursive terms (like all and and) only require a fixed amount of additional work on top of the recursive processing. Therefore, the first stage can be accomplished in time proportional to the size of the input description. In order to speed up later processing, it is useful to maintain various lists, such as the lists of atomic concept identifiers or of roles/attributes, in sorted order. This sorting needs to be done initially (later, ordering is maintained by performing list merges), and this incurs, in the worst case, a quadratic overhead in processing. In any case, the total size of the graph constructed (including the sizes of the nodes, etc.) is proportional to the size of the original concept description.

Step 3: Checking whether a description C subsumes a description graph G_D can be seen to run in time proportional to the size of the subsuming concept, modulo the cost of lookups in various lists. Since these are sorted, the lookup costs are bounded by the logarithm of the size of the candidate subsumee graph, so the total cost is bounded by O(|C| log |G_D|).

Step 2: Normalization is accomplished by a post-order traversal of the description graph: in processing a description graph ⟨N, E, r⟩, each node in N is first normalized independently (see details below), and afterwards the attribute edges E are normalized. This latter task involves identifying multiple identically-labelled attribute edges leaving a node (this is done in one pass, since the attribute edges are grouped by source node and sorted by attribute name) and "merging" them. Merging two edges is quite easy in and of itself, but when merging the nodes at their tips we must be careful, because node mergers may cascade; for example, if a concept has the form

a₁ = b₁ ⊓ a₂ = b₂ ⊓ ⋯ ⊓ aₙ = bₙ ⊓ a₁ = a₂ ⊓ a₂ = a₃ ⊓ ⋯ ⊓ aₙ₋₁ = aₙ

then the original graph will have 2n + 1 nodes, but 2n of these are collapsed by normalization step 8. To discover this efficiently, we use a version of Aït-Kaci's algorithm for unifying ψ-terms (Aït-Kaci, 1984; Aït-Kaci & Nasr, 1986); the algorithm relies on the UNION-FIND technique to identify nodes to be merged, and runs in time just slightly more than linear in the number of nodes in N. Therefore the cost of the non-recursive portion of graph normalization is roughly linear in the number of nodes in it.
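A minimal, self-contained illustration of the UNION-FIND idea as applied here, with nodes as integer indices: identically-labelled attribute edges leaving the same (representative) node force their targets into one equivalence class, and mergers may cascade. The data layout is our own, not CLASSIC's.

```python
def make_uf(n):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    return find, union

def collapse_aedges(num_nodes, aedges):
    # aedges: list of (src, dst, attr) with nodes as integer indices.
    find, union = make_uf(num_nodes)
    changed = True
    while changed:                          # merging may expose new pairs
        changed = False
        seen = {}
        for (src, dst, attr) in aedges:
            key = (find(src), attr)
            if key in seen and find(seen[key]) != find(dst):
                union(dst, seen[key])       # same source+attr: equate targets
                changed = True
            else:
                seen[key] = dst
        aedges = [(find(s), find(d), a) for (s, d, a) in aedges]
    return sorted(set(aedges))

# Two parallel attribute paths collapse into one:
print(collapse_aedges(4, [(0, 1, 'A'), (0, 2, 'A'),
                          (1, 3, 'B'), (2, 3, 'B')]))
# -> [(0, 1, 'A'), (1, 3, 'B')]
```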
The merging of two description graph nodes is quite similar to the normalization of a single node: the atomic concept identifier lists need to be sorted/merged, with duplicates eliminated on the fly. This can be done in time proportional to the size of the nodes themselves, if we make the size of a node include the size of the various lists in it, such as its atoms. The processing of role edges leaving a node is, again, a matter of identifying and merging identically-labelled edges. (But in this case the mergers of labelled edges do not interact, so a single pass over the role-edge list is sufficient.) The cost of the non-recursive aspects of any such merger is once again proportional to the size of the local information.

We are therefore left with the problem of bounding the total number of procedure calls to NormalizeGraph, NormalizeNode, MergeEdge, and MergeNode, and then bounding the sizes of the nodes being merged.

NormalizeGraph and NormalizeNode are called exactly once on every (sub)graph and node in the original graph, as part of the depth-first traversal, and, as argued above, on their own they contribute at most time proportional to the total size of the original graph, which was proportional to the size of the original description.

The number of calls to MergeEdge and MergeNode is not so simply bounded, however: the same node may be merged several times with others. But these calls are paired, and each invocation of MergeNode reduces the number of nodes in the graph by one. Therefore, since the number of nodes is not incremented elsewhere, the total number of calls to MergeEdge and MergeNode is bounded by the number of nodes in the original graph. The only problem is that the non-recursive cost of a call to MergeNode depends on the size of the argument nodes, and each call may increase the size of the remaining node to be the sum of the sizes of the two original nodes.

Therefore, if the original concept had size S, with the graph having n nodes, each of size vᵢ, then the worst-case cost would result from the iterative summation of sizes:

(v_{i₁} + v_{i₂}) + (v_{i₁} + v_{i₂} + v_{i₃}) + (v_{i₁} + v_{i₂} + v_{i₃} + v_{i₄}) + ⋯ = n·v_{i₁} + (n−1)·v_{i₂} + ⋯ + 1·v_{iₙ}

Given that n and all vⱼ are bounded by S, the above is clearly O(S³) in the worst case. In fact, given the constraint that ∑ⱼ₌₁ⁿ vⱼ = S, it is possible to argue that the worst case occurs when vⱼ = 1 for every j (i.e., when n = S), in which case the cost is really just O(S²).

There are other theoretical improvements that could be attempted for the algorithm (e.g., merging nodes in increasing order of size) as well as for its analysis (e.g., only nodes in graphs at the same depth in the tree can be merged).

We remark that, like all other description logics, CLASSIC permits identifiers to be associated with complex descriptions, and these identifiers can then be used in other descriptions (though no recursion is allowed). The expansion of identifiers is a standard operation which can lead to exponential growth in size in certain pathological cases (Nebel, 1990), making the subsumption problem inherently intractable. As with the type system of the programming language Standard ML, such pathological cases are not encountered in practice, and the correct algorithm is simple, straightforward, and efficient in normal cases (unlike the correct algorithm for reasoning with the set constructor, say).

Because users rarely ask only whether some concept subsumes another, but rather are interested in the relationships between pairs of concepts, CLASSIC in fact constructs the normalized description graph of any description given to it.
This suggests that it might be better to check whether one description graph subsumes another, rather than checking whether a description subsumes a graph. In general, this works quite well, except that we would have to verify that the attribute edges in the subsumer graph form a subgraph of the subsumee's attribute edges. Since edges are uniquely labelled after normalization, this is not inherently hard, but it still requires a complete traversal (and hence marking/unmarking) of the upper graph. We have therefore found it useful to encode, as part of the description graph's root, the same-as restrictions that led to the construction of the corresponding a-edges; then, during subsumption testing, the only aspect of the subsumer related to same-as that is checked is this list of same-as pairs. Also, the above description of the algorithm has tried to optimize the cost of normalization, which dominates when checking a single subsumption. If, in the overall use of a system (e.g., when processing individuals), inquiries about the restrictions on roles/attributes are frequent, and space usage is not a problem, then it may be practically advantageous to maintain the r-edges and a-edges of a node in a hash table, rather than a sorted list, in order to speed up access. (Note that for merging r-edges, one must still have some way of iterating through all the values stored in the hash table.)

Individuals in Descriptions

In practical applications where DLs have been used, such as integrity constraint checking, it is often very useful to be able to specify ranges of atomic values for roles. The most common examples of this involve integers, e.g., "the year of a student can be 1, 2, 3 or 4", or what are called enumerated types in Pascal, e.g., "the gender of a person is either M or F". One way to allow such constraints is to introduce a new description constructor, a set description, which creates a description from a list of individual names, and whose obvious extension is the set consisting of the extensions of the individuals that appear in the list. This construct can be used in terms like ∀year.{1 2 3 4}. Another useful constructor involving individuals is a fills restriction, p : I, which denotes objects that have the extension of the individual I as one of the fillers of the relationship denoted by role or attribute p. (Note that for an attribute, q, ∀q.{I} is the same as q : I.)

Within the paradigm of DLs, these constructors are quite useful and can in fact be used to express new forms of incomplete information. For example, if we only know that Ringo is in his early fifties, we can simply assert that Ringo is described by ∀age.{50 51 52 53 54}. The constructors can also be used to ask very useful queries. For example, to find all the male persons it suffices to determine the instances of gender : M.

The new constructors do interact with previous ones, such as cardinality constraints: clearly the size of a set is an upper cardinality bound for any role it restricts. This interaction is not problematic as long as the individuals in the set are host values, since such individuals have properties that are fixed and known ahead of time. However, once we allow CLASSIC individuals as members of sets, the properties of these individuals might themselves affect subsumption.
As a simple example, if we know that Ringo is an instance of the concept ROCK-SINGER (which we write Ringo ∈ ROCK-SINGER), then the extension of ∀friends.ROCK-SINGER is always a superset of the extension of ∀friends.{Ringo}. This is disturbing, because the classification hierarchy of definitions would then change as new facts about individuals are added to the knowledge base. Definitions are not meant to be contingent on facts about the current world. Therefore, subsumption is usually defined to be independent of these "contingent" assertions. As we shall see below, the use of individual properties in description subsumption also leads to intractability.

Complex Subsumption Reasoning: An Example

Traditional proofs of intractability (e.g., Levesque & Brachman, 1987) have occasionally left users of DLs puzzled over which intuitive aspects of a language make reasoning difficult. For this reason we present an example that illustrates the complexity of reasoning with the set description.

Suppose that we have the concept of JADED-PERSON as one who wants only to visit the Arctic and/or the Antarctic, wherever there are penguins:

JADED-PERSON := ∀wantsToVisit.({Arctic Antarctic} ⊓ ∀hasPenguins!.{Yes})

Suppose we do not remember which is the Arctic and which the Antarctic; but we do know that the South Pole is located in one of these two places, and that there are penguins there, while the North Pole is located in one of these two places, and there are no penguins there. Assuming that isLocatedIn! and hasPenguins! are attributes (roles with exactly one filler), we can record

Southpole ∈ ∀isLocatedIn!.({Arctic Antarctic} ⊓ ∀hasPenguins!.{Yes})
Northpole ∈ ∀isLocatedIn!.({Arctic Antarctic} ⊓ ∀hasPenguins!.{No})

We are thus unable to distinguish the exact locations of Southpole and Northpole; however, since hasPenguins! has a single filler, exactly one of Arctic and Antarctic can (and in fact must) have Yes as its filler for hasPenguins!, and therefore exactly one of them is the location of Southpole.

As a result of these facts, we know that the extension of JADED-PERSON must be a subset of the extension of ≤ 1 wantsToVisit in any database containing the above facts about Southpole and Northpole.

Observe that we have here not just an occasional worst-case behavior, but a generalized difficulty in reasoning with set descriptions. Because subsumption ignores assertions about individuals, this does not (yet) show that subsumption per se must perform these inferences. A simple transformation, given in the appendix, establishes this fact by converting the recognition of individuals into a question about the subsumption of two descriptions: all the individuals involved are made attribute-fillers for new dummy attributes, with their descriptions as restrictions on these attributes. As a result, if the description is non-empty then these attribute values must satisfy the corresponding restrictions.
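Connecting this to the earlier sketches, the set and fills constructors could be added to the hypothetical AST as follows; the JADED-PERSON definition then becomes an ordinary term built with the All/And constructors assumed earlier. The encoding is illustrative only.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class OneOf:                   # set description {I1 ... In}
    individuals: Tuple[str, ...]

@dataclass(frozen=True)
class Fills:                   # fills restriction p : I
    p: str
    individual: str

# JADED-PERSON := ∀wantsToVisit.({Arctic Antarctic} ⊓ ∀hasPenguins!.{Yes})
jaded_person = All('wantsToVisit',
                   And(OneOf(('Arctic', 'Antarctic')),
                       All('hasPenguins!', OneOf(('Yes',)))))
```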
A Modified Semantics for Individuals

We have seen two problems with individuals appearing in descriptions: (1) the effect of "mutable facts" on extensional relationships between "immutable" descriptions, and (2) the computational intractability of subsumption caused by the appearance of individuals in descriptions.

To deal with the first problem, it is reasonable to restrict the computation of subsumption so that it cannot access "database facts" about individuals, such as their role fillers, so that all individuals are treated like host identifiers. This is a procedural description of some aspect of reasoning, in the same sense as negation-by-failure is in Prolog. As with Prolog, it would be desirable to find a semantic account of this phenomenon.

A semantics that ignores mutable facts when determining subsumption is not hard to devise: all that is required is to have two different sets of possible worlds corresponding to a KB containing both concepts and individuals. One set consists of all possible worlds that model all the information in the KB; the second consists of all possible worlds that model only the information about concepts (and roles and attributes). When asking questions about individuals, the first set of possible worlds must be considered; when asking subsumption questions, the second, larger, set must be considered, thus ignoring any effects of the mutable facts.

However, this semantics does not solve the computational problem with individuals in descriptions. To deal with that problem, the semantics of individuals is modified as follows: instead of mapping individuals into separate elements of the domain, as is done in a standard semantics, individuals are mapped into disjoint subsets of the domain, intuitively representing different possible realizations of that (Platonic) individual.

Therefore, the semantics of the set constructor is now stated as follows: domain value d belongs to the extension of {B₁ ⋯ Bₙ} iff d belongs to the extension of one of the Bᵢ.

An associated change in the notion of cardinality is required: two elements of the domain are considered congruent if they belong to the extension of the same individual or if they are identical. The cardinality of a set of elements of the domain is then the size of the set modulo this congruence relationship. This means that occurrences of different identifiers in description(s) are guaranteed to be unequal, but distinct occurrences of the same individual identifier are not guaranteed to denote the same individual.

This stance has direct consequences for the canonical form and for subsumption. Description graphs are enriched with a dom field on nodes and a fillers field on edges (the formal development appears with Definition 7 at the end of the paper), and the canonical-form steps of Section 2.3 are extended accordingly; the last of these extended steps is:

19. If the max on an edge is equal to the cardinality of the fillers on the edge, let the dom of the distinguished node of the description graph of the r-edge be the intersection of the dom and the fillers. (If the max is less than the cardinality, steps 18 and 4 detect the inconsistency.)

Note that in the new canonical form all a-edges pointing to a single node have the same value for their fillers, and that if this is not the empty set, then the node has this set as the value of its dom.

The proofs of Lemmas 2 and 3 also work for this extension of description graphs. The proof of Theorem 2 can then be extended to these graphs.

The subsumption algorithm (Algorithm 1) is extended as follows (a sketch of these checks follows the list):

13. D is R : I and some r-edge of r has role R and fillers including I.
14. D is A : I and some a-edge from r has attribute A and fillers including I.
15. D is {I₁ ⋯ Iₙ} and the dom of r is a subset of {I₁, …, Iₙ}.
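A sketch of checks 13-15 and of filler counting modulo congruence. It assumes, as in the formal extension at the end of the paper, that r-edges carry a fillers set and nodes a dom field; since the earlier sketched types lack these, they are accessed defensively here.

```python
def count_noncongruent(fillers, individual_of):
    # Cardinality modulo congruence: fillers realizing the same (Platonic)
    # individual count once; individual_of maps a domain element to its
    # individual name, if any.
    seen = set()
    for f in fillers:
        seen.add(individual_of.get(f, f))
    return len(seen)

def check_fills(node, role, ind) -> bool:
    # Case 13: some r-edge of r has role R with I among its fillers.
    e = find_redge(node, role)
    return e is not None and ind in getattr(e, 'fillers', set())

def check_one_of(node, inds) -> bool:
    # Case 15: dom(r) is a finite subset of {I1 ... In}.
    dom = getattr(node, 'dom', None)      # None stands for the universal set
    return dom is not None and set(dom) <= set(inds)
```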
Again, the soundness of the extended algorithm is fairly obvious. The completeness proof has the following additions to the construction of graphical worlds.

The extensions of CLASSIC individual names start out empty.

When constructing graphical worlds for a node that includes HOST-THING in its atoms and has a non-universal dom, pick only those domain elements corresponding to the elements of its dom.

When constructing graphical worlds for a node that includes CLASSIC-THING in its atoms and has a non-universal dom, add the distinguished domain element to the extension of one of its dom elements.

When constructing graphical worlds for the r-edges of a node, ensure that each element of the fillers of the r-edge has the distinguished element of at least one of the graphical worlds in its extension, by either adding them to the extension or using appropriate host domain elements. (This can be done because the fillers must be a subset of the dom of the distinguished node of the graphical world, and any host values must belong to its atoms.)

The fillers of a-edges need not be considered here because they are "pushed" onto the nodes in the canonicalization process.

The proof of Theorem 3 is then extended with the following cases.

If D is of the form {I₁ ⋯ Iₙ}, then the dom of r is not a subset of {I₁, …, Iₙ}. Thus there are graphical worlds for G in which the distinguished domain element is not in the extension of any of the Iⱼ.

If D is of the form A : I, then either the a-edge from r labelled with A does not have filler I, or there is no such a-edge. In the former case the node pointed to by the a-edge cannot have as its dom the singleton consisting of I. Therefore there are graphical worlds for G that have the A-filler of their distinguished element not in the extension of I, as required. In the latter case, pick graphical worlds for G that have the A-filler of their distinguished element in the wrong realm. In these graphical worlds for G the distinguished element is not in the extension of D.

If D is of the form R : I, then either the r-edge from r labelled with R does not have filler I, or there is no such r-edge. In the former case either the cardinality of the dom of the distinguished node of the description graph of this r-edge is greater than the min, m, of the r-edge, or the dom does not include I. If the dom does not include I, then all graphical worlds for the node have their distinguished element not in the extension of I, as required. If the dom does include I, then there are at least m elements of the dom besides I, and the fillers of the r-edge are a subset of the set of these elements. There are thus graphical worlds for G that use only these elements, as required. In the latter case, pick graphical worlds for G that have some R-filler of the distinguished element in the wrong realm. In these graphical worlds for G the distinguished element is not in the extension of D.

This shows that the subsumption algorithm given here is sound and complete for the modified semantics presented here.

Complete CLASSIC

We now make a final pass to deal with some less problematic aspects of CLASSIC descriptions that have not been appropriately covered so far.

CLASSIC allows primitive descriptions of the form (PRIMITIVE D T), where D is a description and T is a symbol.
The extension of this is some arbitrary subset of the extension of D, but it is the same as the extension of (PRIMITIVE E T) provided that D and E subsume each other. In this way one can express EMPLOYEE, a kind of person who must have an employee number, as

(PRIMITIVE (PERSON ⊓ ≥ 1 employeeNr) employee)

This construct can be removed by creating, for every such primitive, an atomic concept (e.g., EMPLOYEEHOOD) and then replacing the definition of the concept by the conjunction of the necessary conditions and this atom, in this case EMPLOYEEHOOD ⊓ (PERSON ⊓ ≥ 1 employeeNr). Care has to be taken to use the same atomic concept for equivalent primitives; a sketch of this encoding appears at the end of this section.

CLASSIC also permits the declaration of disjoint primitives, essentially allowing one to state that the extensions of various atomic concepts must be disjoint in all possible worlds. To deal with such declarations, we need only modify the algorithm for creating canonical graphs by adding a step that marks a node as incoherent whenever its atoms contain two identifiers that have been declared to be disjoint.

To allow an approximate representation for ideas that cannot be encoded using the constructors expressly provided, CLASSIC allows the use of test-defined concepts, using the following syntax: (TEST [host-language Boolean function]), e.g., (TEST Prime-Number-Testing-Function). For the purposes of subsumption, these are treated as "black boxes", with semantics assigned as for atomic concepts. (In order to deal with the two realms, CLASSIC in fact provides two constructors, H-TEST and C-TEST, for host and CLASSIC descriptions, but this does not cause any added complications besides keeping track of the correct realm. Test concepts have a real effect on reasoning at the level of individuals, where they can perform constraint checking.)

With these simple additions, the above algorithm is a sound and complete subsumption algorithm for descriptions in CLASSIC 1, under the modified semantics introduced in this paper.
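A sketch of the primitive-elimination idea: each (PRIMITIVE D T) is replaced by atom(T) ⊓ D, where equivalent primitives map to the same generated atom. Keying the cache on the printed form of D below is our simplification; the real test for "same primitive" is mutual subsumption of the necessary conditions.

```python
_prim_atoms = {}

def eliminate_primitive(d, tag: str):
    # Reuse one generated atom per equivalence class of primitives.
    key = (tag, repr(d))       # simplification; the real test is semantic
    atom = _prim_atoms.setdefault(key, ConceptName(f'{tag.upper()}-PRIM'))
    return And(atom, d)

# EMPLOYEE = (PRIMITIVE (PERSON ⊓ ≥1 employeeNr) employee)
employee = eliminate_primitive(
    And(ConceptName('PERSON'), AtLeast(1, 'employeeNr')), 'employee')
```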
Summary, Related Work, and Conclusions

We believe this paper makes two kinds of contributions. First, the paper presents an abstracted form of the subsumption algorithm for the CLASSIC description logic, and shows that it is efficient and correct under the modified semantics. This is significant because previous claims of correct and efficient subsumption algorithms in implemented DLs such as KANDOR (Patel-Schneider, 1984) and CANDIDE (Beck et al., 1989) have turned out to be unfounded (Nebel, 1988). A tractability proof for a language like Basic CLASSIC is claimed to exist (but is not proven) in (Donini et al., 1991), and an alternate proof technique may be found by considering a restriction of the (corrected) subsumption algorithm in (Hollunder & Nutt, 1990). Description graphs have also turned out to be of interest because they support further theoretical results about DLs, concerning their learnability (Cohen & Hirsh, 1994; Pitt & Frazier, 1994), results which would seem harder to obtain using the standard notation for DLs.

Second, this paper investigates the effect of allowing individuals to appear in the descriptions of DLs. As independently demonstrated in (Lenzerini & Schaerf, 1991), adding a set description introduces yet another source of intractability, and we have provided an intuitive example illustrating the source of the difficulties.

The implementers of the CLASSIC system, like others who do not use refutation/tableaux theorem-proving techniques, chose not to perform all inferences validated by a standard semantics, not just because of the formal intractability result but because no obvious algorithm was apparent, short of enumerating all possible ways of filling roles. The subset of inferences actually performed was initially described procedurally: "facts" about individuals were not taken into account in the subsumption algorithm. This paper provides a denotational semantic account of this incomplete set of inferences. The formal proof that this account is correct is a corollary of the completeness proof for the subsumption algorithm in Section 4, together with the observation that the graph construction and subsumption algorithms in that section do indeed ignore the properties of the individuals involved. The one difference between the original implementation of CLASSIC and the current semantics is that attribute paths ending with the same filler were used to imply an equality condition. As noted in Section 3.2, the modified semantics does not support this inference, and it was taken out of the implementation of CLASSIC. It is significant that the change to the standard semantics is small, easy to explain to users (either procedurally or semantically), and only affects the desired aspects of the language (i.e., all reasoning with Basic CLASSIC remains exactly as before).

Appendix: Intractability of Reasoning with Set Descriptions

The difficulty arises when the members of sets can be extensions of individuals whose membership in all terms is not known a priori, i.e., non-host individuals. In particular, we will show how to encode the testing of the unsatisfiability of a formula in 3CNF as the question of recognizing an individual as an instance of a description. Since this problem is known to be NP-hard, we have a strong indication of intractability.

Start with a formula F in 3CNF. Using De Morgan's laws, construct formula G, which is the negation of F, and which is in 3DNF. Testing the validity of G is equivalent to checking the unsatisfiability of F.

Construct, for every propositional symbol p used in F, two individual names P and P̄. (Here P̄ will represent the negation of p.) Each individual will have an attribute truthValue, with possible fillers True and False:

P, P̄ ∈ ∀truthValue.{True False}

To make sure that P and P̄ have exactly one, and opposite, truth values, we create two more individual names, Yesp and Nop, with additional attributes approve and deny respectively, whose fillers need to have truth value True and False respectively:

Yesp ∈ ∀approve.({P P̄} ⊓ ∀truthValue.{True})
Nop ∈ ∀deny.({P P̄} ⊓ ∀truthValue.{False})

Now, given the formula G = C1 ∨ C2 ∨ ⋯ ∨ Cn, create individual names C1, C2, …, Cn, each with role conjuncts containing the propositions that are its conjuncts. For example, if C1 = p ∧ ¬q ∧ ¬r then

C1 ∈ ∀conjuncts.{P Q̄ R̄} ⊓ ≥ 3 conjuncts

Finally, construct individual G to have C1, C2, …, Cn as possible fillers for a new role disjunctsHolding:

G ∈ ∀disjunctsHolding.{C1 C2 ⋯ Cn}

The formula G will then be valid iff there is always at least one disjunct that holds.
This is equivalent to membership in the concept VALID-FORMULAE, defined as

≥ 1 disjunctsHolding ⊓ ∀disjunctsHolding.(∀conjuncts.(∀truthValue.{True}))

The above shows that recognizing whether individuals are instances of descriptions is intractable in the presence of set descriptions, minimum number restrictions, and value restrictions.

We can convert this into a question concerning the subsumption of two descriptions by essentially making all the individuals involved attribute-fillers for new dummy attributes, with their descriptions as restrictions on these attributes. Then, if the description is non-empty, these attribute values must satisfy the corresponding restrictions.

So, define concept UPPER to be ∀formula.VALID-FORMULAE and define concept LOWER to be

∀dummy1-p.({P} ⊓ [P's concept descriptor]) ⊓ ∀dummy2-p.({P̄} ⊓ [P̄'s concept descriptor]) ⊓ ∀dummy3-p.({Yesp} ⊓ ⋯) ⊓ ∀dummy4-p.({Nop} ⊓ ⋯) ⊓ ⋯ ⊓ ∀dummy5-cᵢ.({Cᵢ} ⊓ ⋯) ⊓ ⋯ ⊓ ∀formula.({G} ⊓ ⋯)

Then in any database state either concept LOWER has no instances, in which case it is a subset of the extension of UPPER, or it has at least one instance, in which case the individual names filling the various dummy attributes must have the properties ascribed to them, whence G will be in VALID-FORMULAE (and hence UPPER will subsume LOWER) iff G is valid, which completes the proof.

Acknowledgments

We wish to thank Ronald Brachman and our other colleagues in the CLASSIC project for their collaboration, and the JAIR referees for their excellent suggestions for improving the paper. In particular, one of the referees deserves a medal for the thoroughness and care taken in locating weaknesses in our arguments, and we are most thankful. Any remaining errors are of course our own responsibility.

Extended Semantics and Description Graphs for Individuals

The modified semantics assigns the new constructors the following extensions:

(p : I)^I = {d ∈ Δ_C | ∃x . ⟨d, x⟩ ∈ p^I ∧ x ∈ I^I}

{I₁ ⋯ Iₙ}^I = ⋃_k Iₖ^I if the Iₖ are all CLASSIC individuals; {I₁ ⋯ Iₙ}^I = {I₁, …, Iₙ} if the Iₖ are all host individuals; and empty otherwise.

(≥ n p)^I (resp. (≤ n p)^I) is the set of those objects in Δ_C with at least (resp. at most) n non-congruent fillers for role p.

The development of the subsumption algorithm in Section 2 is then modified to take the added constructs, with the modified semantics introduced earlier, into account.

First, description graphs are extended. A node of a description graph is given a third field, which is either a finite set of individuals or a special marker denoting the "universal" set; this field is called the dom of the node. Both a-edges and r-edges are given an extra field, called the fillers of the edge; this field is a finite set of individuals. Where unspecified, as in the constructions of previous sections, the dom of a node is the universal set and the fillers of an a-edge or an r-edge is the empty set.

The semantics of description graphs in Definition 3 is extended as follows:

Definition 7 Let G = ⟨N, E, r⟩ be a description graph and let I be a possible world.

An element, d, of Δ is in G^I iff there is some function, γ, from N into Δ such that

1. d = γ(r);
2. for all n ∈ N, γ(n) ∈ n^I;
3. for all ⟨n₁, n₂, A, F⟩ ∈ E we have ⟨γ(n₁), γ(n₂)⟩ ∈ A^I, and for all f ∈ F, γ(n₂) ∈ f^I.

An element, d, of Δ is in n^I, where n = ⟨C, H, S⟩, iff

1. for all C ∈ C, we have d ∈ C^I;
for all ⟨R, m, M, G, F⟩ ∈ H,
(a) there are between m and M elements d′ of the domain such that ⟨d, d′⟩ ∈ R^I;
(b) d′ ∈ G^I for all d′ such that ⟨d, d′⟩ ∈ R^I; and
(c) for all f ∈ F there is a domain element d′ such that ⟨d, d′⟩ ∈ R^I and d′ ∈ f^I.
3. If S is not the universal set, then there is some f ∈ S such that d ∈ f^I.
When merging nodes, the dom sets are intersected. Merging description graphs is unchanged. When merging a-edges and r-edges, the sets of fillers are unioned.
The translation of descriptions into description graphs is extended by the following rules:
8. A description of the form R : I is turned into a description graph with one node and no a-edges. The node has CLASSIC-THING as its atoms, and a single r-edge with role R whose fillers set is {I}.
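As a small illustration of the merge operations just described, here is a sketch (ours, with a hypothetical tuple encoding of nodes and edges); it shows only the new dom/fillers bookkeeping, since the merging of atoms and restrictions is unchanged from earlier sections.

```python
# Sketch of node/edge merging with the new "dom" and "fillers" fields.
UNIVERSAL = None  # hypothetical encoding: None stands for the universal-set marker

def merge_doms(dom1, dom2):
    # dom sets are intersected; the universal marker is the identity element.
    if dom1 is UNIVERSAL:
        return dom2
    if dom2 is UNIVERSAL:
        return dom1
    return dom1 & dom2

def merge_nodes(n1, n2):
    # A node is (atoms, restrictions, dom); atoms and restrictions merge as before.
    atoms1, restr1, dom1 = n1
    atoms2, restr2, dom2 = n2
    return (atoms1 | atoms2, restr1 + restr2, merge_doms(dom1, dom2))

def merge_edges(e1, e2):
    # An a-edge or r-edge is (role, target, fillers); filler sets are unioned.
    role, target, fillers1 = e1
    _, _, fillers2 = e2
    return (role, target, fillers1 | fillers2)

# dom {"I1"} intersected with the universal set stays {"I1"}.
print(merge_nodes(({"PERSON"}, [], {"I1"}), ({"THING"}, [], UNIVERSAL)))
```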
[ { "authors": "H A T-Kaci", "journal": "", "ref_id": "b0", "title": "A Lattice Theoretic Approach to Computation Based on a Calculus of Partially-Ordered Type Structures", "year": "1984" }, { "authors": "H A T-Kaci; R Nasr", "journal": "Journal of Logic Programming", "ref_id": "b1", "title": "LOGIN: A logic programming language with built-in inheritance", "year": "1986" }, { "authors": "", "journal": "American Association for Arti cial Intelligence", "ref_id": "b2", "title": "Issues in Description Logics: Users Meet Developers", "year": "1992" }, { "authors": "F Baader; H.-J Heinsohn; J Hollunder; B ; J Nebel; B Nutt; W Pro Tlich; H.-J ", "journal": "", "ref_id": "b3", "title": "Terminological knowledge representation: A proposal for a terminological logic", "year": "1991" }, { "authors": "F Baader; P Hanschke", "journal": "", "ref_id": "b4", "title": "A scheme for integrating concrete domains into concept languages", "year": "1991-04" }, { "authors": "F Baader; B Hollunder", "journal": "SIGART Bulletin", "ref_id": "b5", "title": "KRIS: Knowledge Representation and Inference System", "year": "1991" }, { "authors": "H W Beck; S K Gala; S B Navathe", "journal": "Institute of Electric and Electronic Engineers", "ref_id": "b6", "title": "Classi cation as a query processing technique in the CANDIDE semantic data model", "year": "1989" }, { "authors": "A Borgida; R J Brachman; D L Mcguinness; L A Resnick", "journal": "Association for Computing Machinery", "ref_id": "b7", "title": "CLASSIC: A structural data model for objects", "year": "1989" }, { "authors": "A Borgida", "journal": "International Journal of Intelligent and Cooperative Information Systems", "ref_id": "b8", "title": "From type systems to knowledge representation: Natural semantics speci cations for description logics", "year": "1992" }, { "authors": "R J Brachman; R E Fikes; H J Levesque", "journal": "IEEE Computer", "ref_id": "b9", "title": "KRYPTON: A functional approach to knowledge representation", "year": "1983" }, { "authors": "W W Cohen; H Hirsh", "journal": "", "ref_id": "b10", "title": "Learnability of description logics with equality constraints", "year": "" }, { "authors": "P Devanbu; R J Brachman; B Ballard; P G Selfridge", "journal": "Communications of the ACM", "ref_id": "b11", "title": "LaSSIE: A knowledgebased software information system", "year": "1991" }, { "authors": "F M Donini; M Lenzerini; D Nardi; W Nutt", "journal": "International Joint Committee on Arti cial Intelligence", "ref_id": "b12", "title": "Tractable concept languages", "year": "1991" }, { "authors": "J Doyle; R Patil", "journal": "Arti cial Intelligence", "ref_id": "b13", "title": "Two theses of knowledge representation: Language restrictions, taxonomic classi cation, and the utility of representation services", "year": "1991" }, { "authors": "L Pitt; M Frazier", "journal": "ACM Press", "ref_id": "b14", "title": "Classic learning", "year": "1994" }, { "authors": "J Heinsohn; D Kudenko; B Nebel; H.-J Pro Tlich", "journal": "American Association for Arti cial Intelligence", "ref_id": "b15", "title": "An empirical analysis of terminological representation systems", "year": "1992" }, { "authors": "B Hollunder; W Nutt", "journal": "", "ref_id": "b16", "title": "Subsumption algorithms for concept languages", "year": "1990" }, { "authors": "M Lenzerini; A Schaerf", "journal": "American Association for Arti cial Intelligence", "ref_id": "b17", "title": "Concept languages and query languages", "year": "1991" }, { "authors": "H J Levesque; R J Brachman", 
"journal": "Computational Intelligence", "ref_id": "b18", "title": "Expressiveness and tractability in knowledge representation and reasoning", "year": "1987" }, { "authors": "R M Macgregor; R Bates", "journal": "", "ref_id": "b19", "title": "The Loom knowledge representation language", "year": "1987" }, { "authors": "E Mays; C Apt E; J Griesmer; J Kastner", "journal": "IEEE Expert", "ref_id": "b20", "title": "Organizing knowledge in a complex nancial domain", "year": "1987" }, { "authors": "B Nebel", "journal": "Arti cial Intelligence", "ref_id": "b21", "title": "Computational complexity of terminological reasoning in BACK", "year": "1988" }, { "authors": "B Nebel", "journal": "Arti cial Intelligence", "ref_id": "b22", "title": "Terminological reasoning is inherently intractable", "year": "1990" }, { "authors": "", "journal": "", "ref_id": "b23", "title": "International Workshop on Terminological Logics", "year": "1991" }, { "authors": "B Owsnicki-Klewe", "journal": "Springer Verlag", "ref_id": "b24", "title": "Con guration as a consistency maintenance task", "year": "1988" }, { "authors": "P F Patel-Schneider", "journal": "IEEE Computer Society", "ref_id": "b25", "title": "Small can be beautiful in knowledge representation", "year": "1984" }, { "authors": "P F Patel-Schneider", "journal": "", "ref_id": "b26", "title": "Decidable, Logic-Based Knowledge Representation", "year": "1987" }, { "authors": "P F Patel-Schneider", "journal": "Arti cial Intelligence", "ref_id": "b27", "title": "A four-valued semantics for terminological logics", "year": "1989" }, { "authors": "P F Patel-Schneider", "journal": "Arti cial Intelligence", "ref_id": "b28", "title": "Undecidability of subsumption in NIKL", "year": "1989" }, { "authors": "", "journal": "", "ref_id": "b29", "title": "Terminological Logic Users Workshop", "year": "1991" }, { "authors": "C Peltason; K Von Luck; B Nebel; A Schmiedel", "journal": "", "ref_id": "b30", "title": "The user's guide to the BACK system", "year": "1987" }, { "authors": "L A Resnick; A Borgida; R J Brachman; D L Mcguinness; P F Patel-Schneider", "journal": "", "ref_id": "b31", "title": "CLASSIC description and reference manual for the COMMON LISP implementation", "year": "1992" }, { "authors": "M Schmidt-Schauss", "journal": "", "ref_id": "b32", "title": "Subsumption in KL-ONE is undecidable", "year": "1989" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b33", "title": "", "year": "" }, { "authors": "J R Wright; E S Weixelbaum; K Brown; G T Vesonder; S R Palmer; J I Berman; H H Moore", "journal": "American Association for Arti cial Intelligence", "ref_id": "b34", "title": "A knowledge-based con gurator that supports sales, engineering, and manufacturing at AT&T network systems", "year": "1993" }, { "authors": "A ", "journal": "", "ref_id": "b35", "title": "Intractability of Reasoning with ONE-OF We present here a formal proof that subsumption with set descriptions is in fact NP-hard. 10", "year": "1991" } ]
[ { "formula_coordinates": [ 17, 103.44, 599.52, 204.72, 60.22 ], "formula_id": "formula_0", "formula_text": "1. Convert D into a description graph G D . 2. Normalize G D . 3. Verify whether C subsumes G D ." }, { "formula_coordinates": [ 19, 169.2, 261.12, 274.08, 30.46 ], "formula_id": "formula_1", "formula_text": "(v i 1 + v i 2 ) + (v i 1 + v i 2 + v i 3 ) + (v i 1 + v i 2 + v i 3 + v i 4 ) + : : : = n v i 1 + (n 1) v i 2 + : : : + 1 v in" } ]
A Semantics and Complete Algorithm for Subsumption in the CLASSIC Description Logic
This paper analyzes the correctness of the subsumption algorithm used in classic, a description logic-based knowledge representation system that is being used in practical applications. In order to deal e ciently with individuals in classic descriptions, the developers have had to use an algorithm that is incomplete with respect to the standard, model-theoretic semantics for description logics. We provide a variant semantics for descriptions with respect to which the current implementation is complete, and which can be independently motivated. The soundness and completeness of the polynomial-time subsumption algorithm is established using description graphs, which are an abstracted version of the implementation structures used in classic, and are of independent interest. 1. The notation used for descriptions here is the standard notation in the description logic community (Baader et al., 1991). The classic notation is not used because it is more verbose.
Alex Borgida; Peter F Patel-Schneider
[ { "figure_caption": "Figure 1 :1Figure 1: A description graph.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 11Figure 1 presents a view of a description graph constructed in this fashion from the description GAME u 8participants:PERSON u coach = captain father:Now we want to show that this process preserves extensions. As we use the merge operations we rst show that they work correctly.Lemma 1 If n 1 and n 2 are nodes then (n 1 n 2 ) I = n I 1 \\n I 2 . If D 1 and D 2 are description graphs then (D 1 D 2 ) I = D I 1 \\ D I 2 .", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "F10. D is A 1 : : : A n = B 1 : : : B m and the paths A 1 ; : : :; A n and B 1 ; : : :; B m exist in G starting from r and end at the same node. 11. D is A 1 : : : A n = B 1 : : : B m with A n the same as B m and the paths A 1 ; : : :; A n 1 and B 1 ; : : :; B m 1 exist in G starting from r and end at the same node, which has CLASSIC-THING in its atoms. (Reason: If A i I (: : : A 1 I (d)) = B j I (: : : B 1 I (d)) then", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1. Looking at the descriptions of Southpole and Northpole in Section 3.1, the distinct occurrences of Arctic might be satis ed by distinct domain elements, with di erent role llers. (In greater detail: the extension of Arctic might include domain elements d 1 and d 2 , with d 1 satisfying condition hasPenguins! : Yes, while d 2 satis es hasPenguins! : No. If Southpole is then located in d 1 , while Northpole is located in d 2 , then we still have both satisfying isLocatedIn! : Arctic. Similarly for domain elements d 3 and d 4 in the extension of Antarctic. Therefore one could have two places to visit where there are penguins, d 1 and d 3 .) 2. Even though an individual may have a description that includes isLocatedIn! : Arctic u originatesIn! : Arctic; it need not satisfy the condition isLocatedIn! = originatesIn!, since the equality restriction requires identity of domain values.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "4. Adding Individuals to CLASSICIndividuals can occur in both classic and host descriptions. The following constructs create classic descriptionsan attribute, R is a role, I is the name of a classic individual or a host value, collectively called individuals, and I j are names of classic individuals. New host descriptions can be constructed using fI 1 : : : I n g, where the I j are host values.The interpretation function : I is extended to individual identi ers, by requiring that I I be a non-empty subset of C , if I is syntactically not recognized to be a host individual, and making I I = fIg for host values I. 
As stated earlier, the interpretations of distinct identi ers must be non-overlapping.The interpretation C I of non-atomic descriptions is modi ed as follows:", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "INTEGER, REAL, COMPLEX, and STRING, but not BOOLEAN or NON-ZERO-INTEGER.", "figure_data": "Non-host (classic) descriptions in Basic classic are formed according to the following syntax:SyntaxConstructor NameCLASSIC-THING E C u D 8R:C 8A:C n R m R A 1 : : : A k = B 1 : : : B h Equality Restriction Atomic Concept Name Intersection Role Value Restriction Attribute Value Restriction Minimum Number Restriction Maximum Number Restrictionwhere E is an atomic concept name; C and D are classic descriptions; R is a role; A, A i , and B j are attributes; n,k,h are positive integers; and m is a non-negative integer. The set of constructors in Basic classic was judiciously chosen to result in a language in which subsumption is easy to compute. The denotational semantics for descriptions in Basic classic is recursively built on the extensions assigned to atomic names by a possible world:", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b14", "b13", "b2", "b12", "b14", "b11", "b7", "b9", "b0", "b8", "b6", "b1", "b15" ], "table_ref": [], "text": "GSAT (Selman, Levesque, & Mitchell, 1992;Selman & Kautz, 1993) is an incomplete model-nding algorithm for clausal propositional formulas which performs a randomized local search. GSAT has been shown to solve many \\hard\" problems much more e ciently than other traditional algorithms like, e.g., DP (Davis & Putnam, 1960). Since GSAT applies only to clausal formulas, using it to nd models for ordinary propositional formulas requires some previous clausal-form conversion. This requires extra computation (which can be extremely heavy if the \\standard\" clausal conversion is used). Much worse, clausal-form conversion causes either a large increase in the size of the input formula or an enlargement of the search space.\nIn this paper we describe how to modify GSAT so that it can be applied to non-clausal formulas directly, i.e., with no previous clausal form conversion. An extended version of the paper (Sebastiani, 1994) provides the proofs of the theorems and a detailed description of the algorithm introduced.\nThis achievement could enlarge GSAT's application domain. Selman et al. (1992) suggest that some traditional AI problems can be formulated as model-nding tasks; e.g., visual interpretation (Reiter & Mackworth, 1989), planning (Kautz & Selman, 1992), generation of \\vivid\" knowledge representation (Levesque, 1986). It is often the case that non-clausal representations are more compact for such problems. For instance, each rule in the form \\ V i i \" gives rise to several distinct clauses if some i are disjuncts or is a conjunct. In automated theorem proving (a.t.p.) some applications of model-nding have been proposed (see, e.g., (Artosi & Governatori, 1994;Klingerbeck, 1994)). For instance, some decision procedures for decidable subclasses of rst-order logic iteratively perform nonclausal model-nding for propositional instances of the input formulas (Jeroslow, 1988). More generally, some model-guided techniques for proof search, like goal deletion (Ballantyne & Bledsoe, 1982), false preference, or semantic resolution (Slaney, 1993), seem to be applicable to non-clausal a.t.p. as well.\nc 1994 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.\nprocedure GSAT( ) for j := 1 to Max-tries do T := initial( ) for k := 1 to Max-ips do if T j = then return T else Poss-ips := hill-climb( ; T)\nV := pick(Poss-ips) T := ip(V,T) UpdateScores( ; V ) end end return \\no satisfying assignment found\".\nFigure 1: A general schema for GSAT." }, { "figure_ref": [], "heading": "GSAT", "publication_ref": [ "b5" ], "table_ref": [], "text": "If is a clausal propositional formula and T is a truth assignment for the variables of , then the number of clauses of which are falsi ed by T is called the score of T for (score(T; )). T satis es i score(T; ) = 0. The notion of score plays a key role in GSAT, as it is considered as the \\distance\" from a truth assignment to a satisfying one.\nThe schema of Figure 2 describes GSAT as well as many of its possible variants. We use the notation from (Gent & Walsh, 1993). GSAT performs an iterative search for a satisfying truth assignment for , starting from a random assignment provided by initial(). At each step, the successive assignment is obtained by ipping (inverting) the truth value of one single variable V in T. V is chosen to minimize the score. 
Let T_i be the assignment obtained from T by flipping its i-th variable V_i. hill-climb() returns the set Poss-flips of the variables V_r which minimize score(T_r, Σ). If the current values of s_i = score(T_i, Σ) − score(T, Σ) are stored for every variable V_i, then hill-climb() simply returns the set of the variables V_r with the best s_r. pick() chooses randomly one of such variables. flip() returns T with V's value flipped. After each flipping, UpdateScores() updates the values of s_i, for all i.
This paper exploits the observation that the functions initial(), hill-climb(), pick() and flip() do not depend on the structure of the input formula, and that the computation of the scores is the only step where the input formula is required to be in clausal form. The idea is thus to find a suitable notion of score for non-clausal formulas, and an efficient algorithm computing it." }, { "figure_ref": [], "heading": "An extended notion of score", "publication_ref": [], "table_ref": [], "text": "Let cnf(φ) be the result of converting a propositional formula φ into clausal form by the standard method (i.e., by applying the rules of De Morgan). Then the following definition extends the notion of score to all propositional formulas.
Definition 3.1: The score of a truth assignment T for a propositional formula φ is the number of the clauses of cnf(φ) which are falsified by T.
[Figure 2: The computation tree of s(T, φ); each node is annotated with the pair [s(T, φ_j), s̄(T, φ_j)] for its subformula φ_j.]
cnf() represents the "natural" clausal form conversion. cnf(φ) has the same number of propositional variables as φ and it is logically equivalent to φ. The problem with cnf() is the exponential size growth of cnf(φ), that is, |cnf(φ)| = O(2^|φ|). Definition 3.1 overcomes such a problem, for it is possible to introduce a linear-time computable function s(T, φ) which gives the score of T for a formula φ. This is done directly, i.e., without converting φ into clausal form. We define s(T, φ) recursively, together with a dual function s̄(T, φ), as follows:
- φ a literal: s(T, φ) = 0 if T ⊨ φ and 1 otherwise; s̄(T, φ) = 1 if T ⊨ φ and 0 otherwise.
- φ = ¬φ1: s(T, φ) = s̄(T, φ1); s̄(T, φ) = s(T, φ1).
- φ = ⋀_k φ_k: s(T, φ) = Σ_k s(T, φ_k); s̄(T, φ) = Π_k s̄(T, φ_k).
- φ = ⋁_k φ_k: s(T, φ) = Π_k s(T, φ_k); s̄(T, φ) = Σ_k s̄(T, φ_k).
- φ = φ1 ⇒ φ2: s(T, φ) = s̄(T, φ1) · s(T, φ2); s̄(T, φ) = s(T, φ1) + s̄(T, φ2).
- φ = φ1 ⇔ φ2: s(T, φ) = s̄(T, φ1) · s(T, φ2) + s(T, φ1) · s̄(T, φ2); s̄(T, φ) = (s(T, φ1) + s̄(T, φ2)) · (s̄(T, φ1) + s(T, φ2)).
s̄(T, φ_k) is s(T, ¬φ_k). The distinction between s(T, φ_k) and s̄(T, φ_k) is due to the polarity of the current subformula φ_k. During the computation of s(T, φ), a call to the function s(T, φ_j) [s̄(T, φ_j)] is invoked iff φ_j is a positive [negative] subformula of φ. In Figure 2, T assigns "true" to all the variables of φ, and the information in square brackets associated with any subformula φ_j represents [s(T, φ_j), s̄(T, φ_j)]. For instance, if we consider the small subtree in the left of Figure 2, namely (¬A ∧ ¬B ∧ C) ∨ ¬D ∨ (¬E ∧ ¬F), then the score is computed in the following way: (s(T, ¬A) + s(T, ¬B) + s(T, C)) · s(T, ¬D) · (s(T, ¬E) + s(T, ¬F)) = (1 + 1 + 0) · 1 · (1 + 1) = 4.
Theorem 3.1: Let φ be a propositional formula and T a truth assignment for the variables of φ.
Then the function s(T, φ) gives the score of T for φ.
The proof follows from the consideration that, for any truth assignment T, the set of the false clauses of cnf(φ1 ∨ φ2) is the cross product of the two sets of the false clauses of cnf(φ1) and cnf(φ2).
Theorem 3.2: Let φ be a propositional formula and T a truth assignment for the variables of φ. Then the number of operations required for calculating s(T, φ) grows linearly with the size of φ.
The proof follows from the fact that, if Time(s*(φ_i, T)) is the number of operations required for computing both s(T, φ_i) and s̄(T, φ_i), and if Time(s*(φ_i, T)) ≤ a_i·|φ_i| + b_i, then Time(s*(φ1 ⊗ φ2, T)) ≤ max_i(a_i)·|φ1 ⊗ φ2| + 2·max_i(b_i) + 6, for any ⊗ ∈ {∧, ∨, ⇒, ⇔}. The number of operations required for computing the score of an assignment T for a clausal formula Σ is O(|Σ|). If Σ = cnf(φ), then |Σ| = O(2^|φ|). Thus the standard computation of the score of T for Σ requires O(2^|φ|) operations, while s(T, φ) produces the same result directly in linear time." }, { "figure_ref": [], "heading": "GSAT for non-clausal formulas", "publication_ref": [ "b13", "b12", "b4", "b15", "b13", "b10", "b3", "b10" ], "table_ref": [], "text": "It follows from Sections 2 and 3 that we can extend GSAT to non-clausal formulas φ by simply using the extended notion of score of Definition 3.1. Let NC-GSAT (non-clausal GSAT) be a new version of GSAT in which the scores are computed by some implementation of the function s(). Then it follows from Theorem 3.1 that in NC-GSAT(φ) the function hill-climb() always returns the same sets of variables as in GSAT(cnf(φ)), so that NC-GSAT(φ) performs the same flips and returns the same result as GSAT(cnf(φ)). Theorem 3.2 ensures that every score computation is performed in linear time.
The current implementation of GSAT (Selman & Kautz, 1993) provides a highly optimized implementation of UpdateScores(Σ, V), which analyzes only the clauses in which the last-flipped variable V occurs. This allows a strong reduction in computational cost. In (Sebastiani, 1994) we describe in detail an analogous optimized version of the updating procedure for NC-GSAT, called NC-UpdateScores(φ, V), and prove the following properties: (i) if φ is in clausal form, i.e., φ = cnf(φ), then NC-UpdateScores(φ, V) has the same complexity as UpdateScores(φ, V); (ii) if Σ = cnf(φ), then NC-UpdateScores(φ, V) is O(|φ|), while UpdateScores(Σ, V) is O(2^|φ|). The latter mirrors the complexity issues presented in Section 3.
The idea introduced in this paper can be applied to most variants of GSAT. In "CSAT" (Cautious SAT) hill-climb() returns all the variables which cause a decrease of the score; in "DSAT" (Deterministic SAT) the function pick() performs a deterministic choice; in "RSAT" (Random walk SAT) the variable is picked randomly among all the variables; in "MSAT" (Memory SAT) pick() remembers the last flipped variable and avoids picking it. All these variants, proposed in (Gent & Walsh, 1992, 1993), can be transposed into NC-GSAT as well, as they are independent of the structure of the input formula. Selman and Kautz (1993) suggest some variants which improve the performance and overcome some problems, such as that of escaping local minima. The strategy "averaging in" suggests a different implementation of the function initial(): instead of a random assignment, initial() returns a bitwise average of the best assignments of the two latest cycles. This is independent of the form of the input formula.
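Before turning to the remaining variants, here is a sketch (ours) of one possible implementation of the function s() used by NC-GSAT; it transcribes the recursive table of Section 3, returning the pair (s, s̄) for each subformula in a single linear-time pass. The tuple-based formula representation is a hypothetical choice.

```python
from math import prod

# Formulas: ("lit", name, polarity) | ("not", f) | ("and", [fs]) | ("or", [fs])
#           | ("imp", f1, f2) | ("iff", f1, f2)
def s_pair(T, phi):
    """Return (s(T, phi), s_bar(T, phi)) following the table in Section 3."""
    tag = phi[0]
    if tag == "lit":
        _, name, pos = phi
        return (0, 1) if T[name] == pos else (1, 0)
    if tag == "not":
        s, sb = s_pair(T, phi[1])
        return sb, s
    if tag == "and":
        pairs = [s_pair(T, f) for f in phi[1]]
        return sum(s for s, _ in pairs), prod(sb for _, sb in pairs)
    if tag == "or":
        pairs = [s_pair(T, f) for f in phi[1]]
        return prod(s for s, _ in pairs), sum(sb for _, sb in pairs)
    if tag == "imp":
        (s1, sb1), (s2, sb2) = s_pair(T, phi[1]), s_pair(T, phi[2])
        return sb1 * s2, s1 + sb2
    if tag == "iff":
        (s1, sb1), (s2, sb2) = s_pair(T, phi[1]), s_pair(T, phi[2])
        return sb1 * s2 + s1 * sb2, (s1 + sb2) * (sb1 + s2)
    raise ValueError(tag)

# s = 0 iff T satisfies phi, e.g. phi = (A and not-B) or C under T = all-true:
phi = ("or", [("and", [("lit", "A", True), ("lit", "B", False)]),
              ("lit", "C", True)])
print(s_pair({"A": True, "B": True, "C": True}, phi))  # (0, 1): satisfied
```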
In the strategy "random walk" the sequence hill-climb() - pick() is substituted with probability p by a simpler choice function: "choose randomly a variable occurring in some unsatisfied clause". This idea can be transposed into NC-GSAT as well: "choose randomly a branch passing only through nodes whose score is different from zero, and pick the variable at the leaf".
One final observation is worth making. In order to overcome the exponential growth of CNF formulas, some algorithms have been proposed (Plaisted & Greenbaum, 1986; de la Tour, 1990) which convert propositional formulas φ into polynomial-size clausal formulas Σ. Such methods are based on the introduction of new variables, each representing a subformula of the original input φ. Unfortunately, the issue of size-polynomiality is valid only if no "⇔" occurs in φ, as the number of clauses of Σ grows exponentially with the number of "⇔" in φ. Even worse, the introduction of k new variables enlarges the search space by a 2^k factor and strongly reduces the solution ratio. In fact, any model for Σ is also a model for φ, but for any model of φ we only know that one of its 2^k extensions is a model of Σ (Plaisted & Greenbaum, 1986)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "Fausto Giunchiglia and Enrico Giunchiglia have given substantial and continuous feedback during the whole development of this paper. Toby Walsh provided important feedback about a previous version of this paper. Aaron Noble, Paolo Pecchiari, and Luciano Serafini helped with the final revision. Bart Selman and Henry Kautz are thanked for assistance with the GSAT code.
[ { "authors": "A Artosi; G Governatori", "journal": "", "ref_id": "b0", "title": "Labelled Model Modal Logic", "year": "1994" }, { "authors": "M Ballantyne; W Bledsoe", "journal": "Machines intelligence", "ref_id": "b1", "title": "On Generating and Using Examples in Proof Discovery", "year": "1982" }, { "authors": "M Davis; H Putnam", "journal": "Journal of the ACM", "ref_id": "b2", "title": "A computing procedure for quanti cation theory", "year": "1960" }, { "authors": "T B De La Tour", "journal": "Springer-Verlag", "ref_id": "b3", "title": "Minimizing the Number of Clauses by Renaming", "year": "1990" }, { "authors": "I P Gent; T Walsh", "journal": "", "ref_id": "b4", "title": "The Enigma of SAT Hill-climbing Procedures", "year": "1992" }, { "authors": "I P Gent; T Walsh", "journal": "", "ref_id": "b5", "title": "Towards an Understanding of Hill-climbing Procedures for SAT", "year": "1993" }, { "authors": "R Jeroslow", "journal": "Decision Support System", "ref_id": "b6", "title": "Computation-Oriented Reduction of Predicate to Propositional Logic", "year": "1988" }, { "authors": "H Kautz; B Selman", "journal": "", "ref_id": "b7", "title": "Planning as Satis ability", "year": "1992" }, { "authors": "S Klingerbeck", "journal": "", "ref_id": "b8", "title": "Generating Finite Counter Examples with Semantic Tableaux and Interpretation Revision", "year": "1994" }, { "authors": "H Levesque", "journal": "Arti cial Intelligence", "ref_id": "b9", "title": "Making believers out of computers", "year": "1986" }, { "authors": "D Plaisted; S Greenbaum", "journal": "Journal of Symbolic Computation", "ref_id": "b10", "title": "A Structure-preserving Clause Form Translation", "year": "1986" }, { "authors": "R Reiter; A Mackworth", "journal": "Arti cial Intelligence", "ref_id": "b11", "title": "A logical framework for depiction and image interpretation", "year": "1989" }, { "authors": "R Sebastiani", "journal": "", "ref_id": "b12", "title": "Applying GSAT to Non-Clausal Formulas", "year": "1994" }, { "authors": "B Selman; H Kautz", "journal": "", "ref_id": "b13", "title": "Domain-Independent Extension to GSAT: Solving Large Structured Satis ability Problems", "year": "1993" }, { "authors": "B Selman; H Levesque; D Mitchell", "journal": "", "ref_id": "b14", "title": "A New Method for Solving Hard Satisability Problems", "year": "1992" }, { "authors": "J Slaney", "journal": "Morgan Kaufmann", "ref_id": "b15", "title": "SCOTT: A Model-Guided Theorem Prover", "year": "1993" } ]
[ { "formula_coordinates": [ 3, 239.52, 142.44, 192.72, 104.4 ], "formula_id": "formula_0", "formula_text": "B B B J J J A A A B B B b b b b , , , , E E E A A A # # # # S S" }, { "formula_coordinates": [ 3, 127.44, 86.76, 366.24, 211.38 ], "formula_id": "formula_1", "formula_text": ". -D -B 1,-] 1,-] 1,-] 0,-] 1,-] 1,-] 1 , 0] 0,1] 1 , 0] 0,1] 0,1] 0,-] 1,-] 2,-] 4,-] 7,-] 14,-] 2,-] 2,0] 2,-] 2,-] 2,-] 1,-] 2,-] -C 1,-] 1,-] A 1,-] 1,-] -A -B C -E -F -D A -E C F D -A -F D -B 1,-] 0,-] 1,-] 0,1] 1,-] 1,-] 0,-] 0,-] 1,-] -E B -F D Figure 2:" }, { "formula_coordinates": [ 3, 160.56, 419.1, 256.56, 72.06 ], "formula_id": "formula_2", "formula_text": "0 otherwise :' 1 s (T; ' 1 ) s(T; ' 1 ) V k ' k P k s(T; ' k ) Q k s (T; ' k ) W k ' k Q k s(T; ' k ) P k s (T; ' k ) ' 1 ' 2 s (T; ' 1 ) s(T; ' 2 )" } ]
Applying GSAT to Non-Clausal Formulas
In this paper we describe how to modify GSAT so that it can be applied to non-clausal formulas. The idea is to use a particular \score" function which gives the number of clauses of the CNF conversion of a formula which are false under a given truth assignment. Its value is computed in linear time, without constructing the CNF conversion itself. The proposed methodology applies to most of the variants of GSAT proposed so far.
Roberto Sebastiani
[ { "figure_caption": "Example 3.1 Figure 2 represents the computation tree of the score of a truth assignment T for the formula ' : (((:A ^:B ^C ) _ :D _ (:E ^:F )) ^:C ^((:D ^A ^:E) (C ^F )))_ (D ^:E ^B) _ (((D ^:A) _ (:F ^D ^:B) _ :F ) ^A ^((E ^:C ^F ) _ :B)):", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "T; :A) + s(T; :B) + s(T; C)) s(T; :D) (s(T; :E) + s(T; :F )) = ; literals (1 + 1 + 0) 1 (1 + 1) = 4: Notice that cnf(') is 360 clauses long.2", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ............. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "P P P P P P P P P P. . . . .. . . . . . . ....................A A AS S@@ @A A A", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b30", "b25", "b20", "b34", "b35", "b39", "b1", "b28", "b1", "b1", "b3", "b4", "b8", "b12", "b16", "b8", "b18", "b19", "b8", "b18", "b19", "b21", "b37", "b23", "b24", "b32", "b38", "b32", "b38", "b0", "b20" ], "table_ref": [], "text": "Consider an agent (or expert system) with some information about a particular subject, such as internal medicine. Some facts, such as \\all patients with hepatitis exhibit jaundice\", can be naturally expressed in a standard rst-order logic, while others, such as \\80% of patients that exhibit jaundice have hepatitis\", are statistical. Suppose the agent wants to use this information to make decisions. For example, a doctor might need to decide whether to administer antibiotics to a particular patient Eric. To apply standard tools of decision theory (see (Luce & Rai a, 1957) for an introduction), the agent must assign probabilities, or degrees of belief, to various events. For example, the doctor may need to assign a degree of belief to an event such as \\Eric has hepatitis\". We would therefore like techniques for computing degrees of belief in a principled manner, using all the data at hand. In this paper we investigate the properties of one particular formalism for doing this.\nThe method we consider, which we call the random-worlds method, has origins that go back to Bernoulli and Laplace (1820). It is essentially an application of what has been called the principle of indi erence (Keynes, 1921). The basic idea is quite straightforward.\nSuppose we are interested in attaching a degree of belief to a formula ' given a knowledge base KB. One useful way of assigning semantics to degrees of belief formulas is to use a probability distribution over a set of possible worlds (Halpern, 1990). More concretely, suppose for now that we are reasoning about N individuals, 1; : : :; N. A world is a complete description of which individuals have each of the properties of interest. Formally, a world is just a model, or interpretation, over our rst-order language. For example, if our language consists of the unary predicates Hepatitis, Jaundice, Child, and BlueEyed, the binary predicate Infected-By, and the constant Eric, then a world describes which subset of the N individuals satis es each of the unary predicates, which set of pairs is in the Infected-By relation, and which of the N individuals is Eric. Given a prior probability distribution over the set of possible worlds, the agent can obtain a degree of belief in ' given KB by conditioning on KB to obtain a posterior distribution, and then computing the probability of ' according to this new distribution. The random-worlds method uses the principle of indi erence to choose a particular prior distribution over the set of worlds: all the worlds are taken to be equally likely. It is easy to see that the degree of belief in ' given KB is then precisely the fraction of worlds satisfying KB that also satisfy '.\nThe approach so far described applies whenever we actually know the precise domain size N; unfortunately this is fairly uncommon. In many cases, however, it is reasonable to believe that N is \\large\". 
We are thus particularly interested in the asymptotic behavior of this fraction; that is, we take our degree of belief to be the asymptotic value of this fraction as N grows large.
For example, suppose we want to reason about a domain of hospital patients, and KB is the conjunction of the following four formulas:
∀x (Hepatitis(x) ⇒ Jaundice(x)) ("all patients with hepatitis exhibit jaundice"),
‖Hepatitis(x) | Jaundice(x)‖_x ≈ 0.8 ("approximately 80% of patients that exhibit jaundice have hepatitis"; we explain this formalism and the reason we say "approximately 80%" rather than "exactly 80%" in Section 2),
‖BlueEyed(x)‖_x ≈ 0.25 ("approximately 25% of patients have blue eyes"),
Jaundice(Eric) ∧ Child(Eric) ("Eric is a child who exhibits jaundice").
Let φ be Hepatitis(Eric); that is, we want to ascribe a degree of belief to the statement "Eric has hepatitis". Suppose the domain has size N. Then we want to consider all worlds with domain {1, ..., N} such that the set of individuals satisfying Hepatitis is a subset of those satisfying Jaundice, approximately 80% of the individuals satisfying Jaundice also satisfy Hepatitis, approximately 25% of the individuals satisfy BlueEyed, and (the interpretation of) Eric is an individual satisfying Jaundice and Child. It is straightforward to show that, as expected, Hepatitis(Eric) holds in approximately 80% of these structures. Moreover, as N gets large, the fraction of structures in which Hepatitis(Eric) holds converges to exactly 0.8. Since 80% of the patients that exhibit jaundice have hepatitis and Eric exhibits jaundice, a degree of belief of 0.8 that Eric has hepatitis seems justifiable. Note that, in this example, the information that Eric is a child is essentially treated as irrelevant. We would get the same answer if we did not have the information Child(Eric). It can also be shown that the degree of belief in BlueEyed(Eric) converges to 0.25 as N gets large. Furthermore, the degree of belief of BlueEyed(Eric) ∧ Hepatitis(Eric) converges to 0.2, the product of 0.8 and 0.25. As we shall see, this is because the random-worlds method treats BlueEyed and Hepatitis as being independent, which is reasonable because there is no evidence to the contrary. (It would surely be strange to postulate that two properties were correlated unless there were reason to believe they were connected in some way.) Thus, at least in this example, the random-worlds method gives answers that follow from the heuristic assumptions made in many standard AI systems (Pearl, 1989; Pollock, 1984; Spiegelhalter, 1986). Are such intuitive results typical? When do we get convergence? And when we do, is there a practical way to compute degrees of belief?
The answer to the first question is yes, as we discuss in detail in (Bacchus, Grove, Halpern, & Koller, 1994). In that paper, we show that the random-worlds method is remarkably successful at satisfying the desiderata of both nonmonotonic (default) reasoning (Ginsberg, 1987) and reference class reasoning (Kyburg, 1983). The results of (Bacchus et al., 1994) show that the behavior we saw in the example above holds quite generally, as do many other properties we would hope to have satisfied. Thus, in this paper we do not spend time justifying the random-worlds approach, nor do we discuss its strengths and weaknesses; the reader is referred to (Bacchus et al., 1994) for such discussion and for an examination of previous work in the spirit of random worlds (most notably (Carnap, 1950, 1952) and subsequent work).
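As a sanity check on this example, the following brute-force sketch (ours, not from the paper) enumerates all worlds over a tiny domain and conditions on a simplified KB: BlueEyed and Child are dropped (unconstrained predicates multiply every stratum of worlds by the same factor, leaving the fraction unchanged), and the 80% proportion is taken to be exact rather than approximate, since tolerances only matter in the limit analysis.

```python
from itertools import combinations

N = 10
domain = range(N)

def worlds(exact=0.8):
    # A world is (J, H, eric): J = jaundiced patients, H = hepatitis patients,
    # eric = the domain element that the constant Eric denotes.
    for jsize in range(N + 1):
        hsize = exact * jsize
        if hsize != int(hsize):          # keep only exact-proportion worlds
            continue
        for J in combinations(domain, jsize):
            for H in combinations(J, int(hsize)):   # Hepatitis subset of Jaundice
                for eric in J:                       # Jaundice(Eric) holds
                    yield set(J), set(H), eric

total = hep = 0
for J, H, eric in worlds():
    total += 1
    hep += eric in H
print(hep / total)   # 0.8 exactly here; with approximate proportions the
                     # fraction converges to 0.8 as N grows
```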
Rather, we focus on the latter two questions asked above. These questions may seem quite familiar to readers aware of the work on asymptotic probabilities for various logics. For example, in the context of first-order formulas, it is well known that a formula with no constant or function symbols has an asymptotic probability of either 0 or 1 (Fagin, 1976; Glebskiĭ, Kogan, Liogon'kiĭ, & Talanov, 1969). Furthermore, we can decide which (Grandjean, 1983). However, the 0-1 law fails if the language includes constants or if we look at conditional probabilities (Fagin, 1976), and we need both these features in order to reason about degrees of belief for formulas involving particular individuals, conditioned on what is known.
In two companion papers (Grove, Halpern, & Koller, 1993a, 1993b), we consider the question of what happens in the pure first-order case (where there is no statistical information) in greater detail. We show that as long as there is at least one binary predicate symbol in the language, then not only do we not get asymptotic conditional probabilities in general (as was already shown by Fagin (1976)), but almost all the questions one might want to ask (such as deciding whether the limiting probability exists) are highly undecidable. However, if we restrict to a vocabulary with only unary predicate symbols and constants, then as long as the formula on which we are conditioning is satisfiable in arbitrarily large models (a question which is decidable in the unary case), the asymptotic conditional probability exists and can be computed effectively.
In this paper, we consider the much more useful case where the knowledge base has statistical as well as first-order information. In light of the results of (Grove et al., 1993a, 1993b), for most of the paper we restrict attention to the case when the knowledge base is expressed in a unary language. Our major result involves showing that asymptotic conditional probabilities can often be computed using the principle of maximum entropy (Jaynes, 1957; Shannon & Weaver, 1949).
To understand the use of maximum entropy, suppose the vocabulary consists of the unary predicate symbols P1, ..., Pk. We can consider the 2^k atoms that can be formed from these predicate symbols, namely, the formulas of the form P′1 ∧ ... ∧ P′k, where each P′i is either Pi or ¬Pi. We can view the knowledge base as placing constraints on the proportion of domain elements satisfying each atom. For example, the constraint ‖P1(x) | P2(x)‖_x = 1/2 says that the proportion of the domain satisfying some atom that contains P2 as a conjunct is twice the proportion satisfying atoms that contain both P1 and P2 as conjuncts. Given a model of KB, we can define the entropy of this model as the entropy of the vector denoting the proportions of the different atoms. We show that, as N grows large, there are many more models with high entropy than with lower entropy. Therefore, models with high entropy dominate. We use this concentration phenomenon to show that our degree of belief in φ given KB according to the random-worlds method is closely related to the assignment of proportions to atoms that has maximum entropy among all assignments consistent with the constraints imposed by KB.
The concentration phenomenon relating entropy to the random-worlds method is well known (Jaynes, 1982, 1983).
In physics, the "worlds" are the possible configurations of a system typically consisting of many particles or molecules, and the mutually exclusive properties (our atoms) can be, for example, quantum states. The corresponding entropy measure is at the heart of statistical mechanics and thermodynamics. There are subtle but important differences between our viewpoint and that of the physicists. The main one lies in our choice of language. We want to express some intelligent agent's knowledge, which is why we take first-order logic as our starting point. The most specific difference concerns constant symbols. We need these because the most interesting questions for us arise when we have some knowledge about, and wish to assign degrees of belief to statements concerning, a particular individual. The parallel in physics would address properties of a single particle, which is generally considered to be well outside the scope of statistical mechanics.
Another work that examines the connection between random worlds and entropy from our point of view (computing degrees of belief for formulas in a particular logic) is that of Paris and Vencovska (1989). They restrict the knowledge base to consist of a conjunction of constraints that (in our notation) have the form ‖ψ(x) | θ(x)‖_x ≈ r and ‖ψ(x)‖_x ≈ r, where ψ and θ are quantifier-free formulas involving unary predicates only, with no constant symbols. Not only is most of the expressive power of first-order logic not available in their approach, but the statistical information that can be expressed is quite limited. For example, it is not possible to make general assertions about statistical independence. Paris and Vencovska show that the degree of belief can be computed using maximum entropy for their language. Shastri (1989) has also shown such a result, of nearly equivalent scope. But, as we have already suggested, we believe that it is important to look at a far richer language. Our language allows arbitrary first-order assertions, full Boolean logic, arbitrary polynomial combinations of statistical expressions, and more; these are all features that are actually useful to knowledge-representation practitioners. Furthermore, the random-worlds method makes perfect sense in this rich setting. The goal of this paper is to discover whether the connection to maximum entropy also holds. We show that maximum entropy continues to be widely useful, covering many problems that are far outside the scope of (Paris & Vencovska, 1989; Shastri, 1989).
On the other hand, it turns out that we cannot make this connection for our entire language. For one thing, as we hinted earlier, there are problems if we try to condition on a knowledge base that includes non-unary predicates; we suspect that maximum entropy has no role whatsoever in this case. In addition, we show that there are subtleties that arise involving the interaction between statistical information and first-order quantification. We feel that an important contribution of this paper lies in pointing out some limitations of maximum-entropy methods.
The rest of this paper is organized as follows. In the next section, we discuss our formal framework (essentially, that of (Bacchus, 1990; Halpern, 1990)). We discuss the syntax and semantics of statistical assertions and issues involving "approximately equals", and define the random-worlds method formally. In Section 3 we state the basic results that connect maximum entropy to random worlds, and in Section 4 we discuss how to use these results as effective computational procedures.
In Section 5 we return to the issue of unary versus non-unary predicates, and the question of how widely applicable the principle of maximum entropy is. We conclude in Section 6 with some discussion." }, { "figure_ref": [], "heading": "Technical preliminaries", "publication_ref": [ "b1" ], "table_ref": [], "text": "In this section, we give the formal definition of our language and the random-worlds method. The material is largely taken from (Bacchus et al., 1994)." }, { "figure_ref": [], "heading": "The language", "publication_ref": [ "b0", "b0", "b1", "b0", "b20", "b17", "b20", "b20" ], "table_ref": [], "text": "We are interested in a formal logical language that allows us to express both statistical information and first-order information. We therefore define a statistical language L≈, which is a variant of a language designed by Bacchus (1990). For the remainder of the paper, let Φ be a finite first-order vocabulary, consisting of predicate and constant symbols, and let 𝒳 be a set of variables.
Our statistical language augments standard first-order logic with a form of statistical quantifier. For a formula ψ(x), the term ‖ψ(x)‖_x is a proportion expression. It will be interpreted as a rational number between 0 and 1 that represents the proportion of domain elements satisfying ψ(x). We actually allow an arbitrary set of variables in the subscript and in the formula ψ. Thus, for example, ‖Child(x, y)‖_x describes, for a fixed y, the proportion of domain elements that are children of y; ‖Child(x, y)‖_y describes, for a fixed x, the proportion of domain elements whose child is x; and ‖Child(x, y)‖_{x,y} describes the proportion of pairs of domain elements that are in the child relation.
We also allow proportion expressions of the form ‖ψ(x) | θ(x)‖_x, which we call conditional proportion expressions. Such an expression is intended to denote the proportion of domain elements satisfying ψ from among those elements satisfying θ. Finally, any rational number is also considered to be a proportion expression, and the set of proportion expressions is closed under addition and multiplication.
One important difference between our syntax and that of (Bacchus, 1990) is the use of approximate equality to compare proportion expressions. There are both philosophical and practical reasons why exact comparisons can be inappropriate. Consider a statement such as "80% of patients with jaundice have hepatitis". If this statement appears in a knowledge base, it is almost certainly there as a summary of a large pool of data. So it would be wrong to interpret the value too literally, to mean that exactly 80% of all patients with jaundice have hepatitis. Furthermore, this interpretation would imply (among other things) that the number of jaundiced patients is a multiple of five! This is unlikely to be something we intend. We therefore use the approach described in (Bacchus et al., 1994; Koller & Halpern, 1992), and compare proportion expressions using (instead of = and ≤) one of an infinite family of connectives ≈_i and ⪅_i, for i = 1, 2, 3, ... ("i-approximately equal" or "i-approximately less than or equal"). For example, we can express the statement "80% of jaundiced patients have hepatitis" by the proportion formula ‖Hep(x) | Jaun(x)‖_x ≈_1 0.8. The intuition behind the semantics of approximate equality is that each comparison should be interpreted using some small tolerance factor to account for measurement error, sample variations, and so on.
The appropriate tolerance will differ for various pieces of information, so our logic allows different subscripts on the "approximately equals" connectives. A formula such as ‖Fly(x) | Bird(x)‖_x ≈_1 1 ∧ ‖Fly(x) | Bat(x)‖_x ≈_2 1 says that both ‖Fly(x) | Bird(x)‖_x and ‖Fly(x) | Bat(x)‖_x are approximately 1, but the notion of "approximately" may be different in each case. The actual choice of subscript for ≈ is unimportant. However, it is important to use different subscripts for different approximate comparisons unless the tolerances for the different measurements are known to be the same.
We can now give a recursive definition of the language L≈.
Definition 2.1: The set of terms in L≈ is 𝒳 ∪ C, where C is the set of constant symbols in Φ. The set of proportion expressions is the least set that (a) contains the rational numbers, (b) contains proportion terms of the form ‖ψ‖_X and ‖ψ | θ‖_X for formulas ψ, θ ∈ L≈ and a finite set of variables X ⊆ 𝒳, and (c) is closed under addition and multiplication.
The set of formulas in L≈ is the least set that (a) contains atomic formulas of the form R(t1, ..., tr), where R is a predicate symbol in Φ ∪ {=} of arity r and t1, ..., tr are terms, (b) contains proportion formulas of the form ζ ⪅_i ζ′ and ζ ≈_i ζ′, where ζ and ζ′ are proportion expressions and i is a natural number, and (c) is closed under conjunction, negation, and first-order quantification.
Note that L≈ allows the use of equality when comparing terms, but not when comparing proportion expressions. This definition allows arbitrary nesting of quantifiers and proportion expressions. As observed in (Bacchus, 1990), the subscript x in a proportion expression binds the variable x in the expression; indeed, we can view ‖ψ‖_x as a new type of quantification.
We now need to define the semantics of the logic. As we shall see below, most of the definitions are fairly straightforward. The two features that cause problems are approximate comparisons and conditional proportion expressions. We interpret the approximate connective ζ ≈_i ζ′ to mean that ζ is very close to ζ′. More precisely, it is within some very small tolerance factor. We formalize this using a tolerance vector ~τ = ⟨τ1, τ2, ...⟩, τi > 0. Intuitively, ζ ≈_i ζ′ if the values of ζ and ζ′ are within τi of each other. Of course, one problem with this is that we generally will not know the value of τi. We postpone discussion of this issue until the next section.
Another difficulty arises when interpreting conditional proportion expressions. The problem is that ‖ψ | θ‖_X cannot be defined as a conditional probability when there are no assignments to the variables in X that would satisfy θ, because we cannot divide by zero. When standard equality is used rather than approximate equality this problem is easily overcome, simply by avoiding conditional probabilities in the semantics altogether. Following (Halpern, 1990), we can eliminate conditional proportion expressions altogether by viewing a statement such as ‖ψ | θ‖_X = α as an abbreviation for ‖ψ ∧ θ‖_X = α · ‖θ‖_X.
Thus, we never actually form quotients of probabilities. This approach agrees completely with the standard interpretation of conditionals so long as ‖θ‖_X ≠ 0. If ‖θ‖_X = 0, it enforces the convention that formulas such as ‖ψ | θ‖_X = α or ‖ψ | θ‖_X ≤ α are true for any α. (Note that we do not really care much what happens in such cases, so long as it is consistent and well-defined.
This convention represents one reasonable choice.)
We used the same approach in an earlier version of this paper (Grove, Halpern, & Koller, 1992) in the context of a language that uses approximate equality. Unfortunately, as the following example shows, this has problems. Unlike the case for true equality, if we multiply by ‖θ‖_X to clear all quotients, we do not obtain an equivalent formula even if ‖θ‖_X is nonzero. Example 2.2: First consider the knowledge base KB = (‖Fly(x) | Penguin(x)‖_x ≈_1 0). This says that the number of flying penguins forms a tiny proportion of all penguins. However, if we interpret conditional proportions as above and multiply out, we obtain the knowledge base KB′ = (‖Fly(x) ∧ Penguin(x)‖_x ≈_1 0 · ‖Penguin(x)‖_x), which is equivalent to ‖Fly(x) ∧ Penguin(x)‖_x ≈_1 0. KB′ just says that the number of flying penguins is small, and has lost the (possibly important) information that the number of flying penguins is small relative to the number of penguins. It is quite consistent with KB′ that all penguins fly (provided the total number of penguins is small); this is not consistent with KB. Clearly, the process of multiplying out across an approximate connective does not preserve the intended interpretation of the formulas. This example demonstrates an undesirable interaction between the semantics we have chosen for approximate equality and the process of multiplying out to eliminate conditional proportions. We expect ‖ψ | θ‖_X ≈_1 α to mean that ‖ψ | θ‖_X is within some tolerance τ1 of α. Assuming ‖θ‖_X > 0, this is the same as saying that ‖ψ ∧ θ‖_X is within τ1 · ‖θ‖_X of α · ‖θ‖_X. On the other hand, the expression that results by multiplying out is ‖ψ ∧ θ‖_X ≈_1 α · ‖θ‖_X. This says that ‖ψ ∧ θ‖_X is within τ1 (not τ1 · ‖θ‖_X!) of α · ‖θ‖_X. As we saw above, the difference between the two interpretations can be significant.
Because of this problem, we cannot treat conditional proportions as abbreviations and instead have added them as primitive expressions in the language. Of course, we now have to give them a semantics that avoids the problem illustrated by Example 2.2. We would like to maintain the conventions used when we had equality in the language. Namely, in worlds where ‖θ(x)‖_x ≠ 0, we want ‖ψ(x) | θ(x)‖_x to denote the fraction of elements satisfying θ(x) that also satisfy ψ(x). In worlds where ‖θ(x)‖_x = 0, we want formulas of the form ‖ψ(x) | θ(x)‖_x ≈_i α or ‖ψ(x) | θ(x)‖_x ⪅_i α to be true. There are a number of ways of accomplishing this. The way we take is perhaps not the simplest, but it introduces machinery that will be helpful later. The basic idea is to make the interpretation of ≈ more explicit, so that we can eliminate conditional proportions by multiplication and keep track of all the consequences of doing so.
We give semantics to the language L≈ by providing a translation from formulas in L≈ to formulas in a language L= whose semantics is more easily described. The language L= is essentially the language of (Halpern, 1990), which uses true equality rather than approximate equality when comparing proportion expressions.
More precisely, the definition of L= is identical to the definition of L≈ given in Definition 2.1, except that: we use = and ≤ instead of ≈_i and ⪅_i; we allow the set of proportion expressions to include arbitrary real numbers (not just rational numbers); we do not allow conditional proportion expressions; and we assume that L= has a special family of variables ε_i, for i = 1, 2, ..., interpreted over the reals.
The variable ε_i is used to explicitly interpret the approximate connectives ≈_i and ⪅_i. Once this is done, we can safely multiply out the conditionals, as described above. More precisely, every formula ξ ∈ L≈ can be associated with a formula ξ̂ ∈ L= as follows: every proportion formula ζ ⪅_i ζ′ in ξ is (recursively) replaced by ζ ≤ ζ′ + ε_i; every proportion formula ζ ≈_i ζ′ in ξ is (recursively) replaced by the conjunction (ζ ≤ ζ′ + ε_i) ∧ (ζ′ ≤ ζ + ε_i); finally, conditional proportion expressions are eliminated by multiplying out.
This translation allows us to embed L≈ into L=. Thus, for the remainder of the paper, we regard L≈ as a sublanguage of L=. This embedding avoids the problem encountered in Example 2.2, because when we multiply to clear conditional proportions the tolerances are explicit, and so are also multiplied as appropriate.
The semantics for L= is quite straightforward, and is similar to that in (Halpern, 1990). We give semantics to L= in terms of worlds, or finite first-order models. For any natural number N, let W_N consist of all worlds with domain {1, ..., N}. Thus, in W_N, we have one world for each possible interpretation of the symbols in Φ over the domain {1, ..., N}. Let W denote ⋃_N W_N. Now, consider some world W ∈ W over the domain D = {1, ..., N}, some valuation V : 𝒳 → D for the variables in 𝒳, and some tolerance vector ~τ. We simultaneously assign to each proportion expression ζ a real number [ζ]^(W,V,~τ) and to each formula a truth value with respect to (W, V, ~τ). Most of the clauses of the definition are completely standard, so we omit them here. In particular, variables are interpreted using V, the tolerance variables ε_i are interpreted using the tolerances τ_i, the predicates and constants are interpreted using W, the Boolean connectives and the first-order quantifiers are defined in the standard fashion, and when interpreting proportion expressions, the real numbers, addition, multiplication, and ≤ are given their standard meaning. It remains to interpret proportion terms. Recall that we eliminate conditional proportion terms by multiplying out, so that we need to deal only with unconditional proportion terms. If ζ is the proportion expression ‖ψ‖_{x_{i1},...,x_{ik}} (for i1 < i2 < ... < ik), then
[ζ]^(W,V,~τ) = (1 / |D^k|) · |{(d1, ..., dk) ∈ D^k : (W, V[x_{i1}/d1, ..., x_{ik}/dk], ~τ) ⊨ ψ}|.
Thus, if |D| = N, the proportion expression ‖ψ‖_{x_{i1},...,x_{ik}} denotes the fraction of the N^k k-tuples in D^k that satisfy ψ. For example, [‖Child(x, y)‖_x]^(W,V,~τ) is the fraction of domain elements d that are children of V(y).
Using our embedding of L≈ into L=, we now have semantics for L≈. For ξ ∈ L≈, we say that (W, V, ~τ) ⊨ ξ iff (W, V, ~τ) ⊨ ξ̂. It is sometimes useful in our later results to incorporate particular values for the tolerances into the formula ξ. Thus, let ξ[~τ] represent the formula that results from ξ̂ if each variable ε_i is replaced with its value according to ~τ, that is, τ_i. Typically we are interested in closed sentences, that is, formulas with no free variables. In that case, it is not hard to show that the valuation plays no role.
Thus, if $\theta$ is closed, we write $(W, \vec{\tau}) \models \theta$ rather than $(W, V, \vec{\tau}) \models \theta$. Finally, if KB and $\theta$ are closed formulas, we write $\mathrm{KB} \models \theta$ if, for all $W$ and $\vec{\tau}$, $(W, \vec{\tau}) \models \mathrm{KB}$ implies $(W, \vec{\tau}) \models \theta$.

Degrees of belief

As we explained in the introduction, we give semantics to degrees of belief by considering all worlds of size $N$ to be equally likely, conditioning on KB, and then checking the probability of $\varphi$ over the resulting probability distribution. In the previous section, we defined what it means for a sentence to be satisfied in a world of size $N$ using a tolerance vector $\vec{\tau}$. Given $N$ and $\vec{\tau}$, we define $\#\mathrm{worlds}^{\vec{\tau}}_N(\theta)$ to be the number of worlds in $\mathcal{W}_N$ such that $(W, \vec{\tau}) \models \theta$.

Since we are taking all worlds to be equally likely, the degree of belief in $\varphi$ given KB with respect to $\mathcal{W}_N$ and $\vec{\tau}$ is

$$\Pr{}^{\vec{\tau}}_N(\varphi|\mathrm{KB}) = \frac{\#\mathrm{worlds}^{\vec{\tau}}_N(\varphi \wedge \mathrm{KB})}{\#\mathrm{worlds}^{\vec{\tau}}_N(\mathrm{KB})}.$$

If $\#\mathrm{worlds}^{\vec{\tau}}_N(\mathrm{KB}) = 0$, this degree of belief is not well-defined.

The careful reader may have noticed a potential problem with this definition. Strictly speaking, we should write $\mathcal{W}^{\Phi}_N$ rather than $\mathcal{W}_N$, since the set of worlds under consideration clearly depends on the vocabulary $\Phi$. Hence, the number of worlds in $\mathcal{W}_N$ also depends on the vocabulary. Thus, both $\#\mathrm{worlds}^{\vec{\tau}}_N(\varphi)$ and $\#\mathrm{worlds}^{\vec{\tau}}_N(\varphi \wedge \mathrm{KB})$ depend on the choice of $\Phi$. Fortunately, this dependence "cancels out": if $\Phi \subseteq \Phi'$, then there is a constant $c$ such that for all formulas $\theta$ over the vocabulary $\Phi$, $\#^{\Phi'}\mathrm{worlds}^{\vec{\tau}}_N(\theta) = c \cdot \#^{\Phi}\mathrm{worlds}^{\vec{\tau}}_N(\theta)$. This result, from which it follows that the degree of belief $\Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$ is independent of our choice of vocabulary, is proved in (Grove et al., 1993b).

Typically, we know neither $N$ nor $\vec{\tau}$ exactly. All we know is that $N$ is "large" and that $\vec{\tau}$ is "small". Thus, we would like to take our degree of belief in $\varphi$ given KB to be $\lim_{\vec{\tau} \to 0} \lim_{N \to \infty} \Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$. Notice that the order of the two limits over $\vec{\tau}$ and $N$ is important. If the order were reversed, so that the limit $\lim_{\vec{\tau} \to 0}$ were taken first at each fixed $N$, then we would gain nothing by using approximate equality, since the result would be equivalent to treating approximate equality as exact equality.

This definition, however, is not sufficient; the limit may not exist. We observed above that $\Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$ is not always well-defined. In particular, it may be the case that for certain values of $\vec{\tau}$, $\Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$ is not well-defined for arbitrarily large $N$. In order to deal with this problem of well-definedness, we define KB to be eventually consistent if for all sufficiently small $\vec{\tau}$ and sufficiently large $N$, $\#\mathrm{worlds}^{\vec{\tau}}_N(\mathrm{KB}) > 0$. Among other things, eventual consistency implies that the KB is satisfiable in finite domains of arbitrarily large size. For example, a KB stating that "there are exactly 7 domain elements" is not eventually consistent. For the remainder of the paper, we assume that all knowledge bases are eventually consistent. In practice, we expect eventual consistency to be no harder to check than consistency. We do not expect a knowledge base to place bounds on the domain size, except when the bound is readily apparent. For those unsatisfied with this intuition, it is also possible to find formal conditions ensuring eventual consistency. For instance, it is possible to show that the following conditions are sufficient to guarantee that KB is eventually consistent: (a) KB does not use any non-unary predicates, including equality between terms, and (b) KB is consistent for some domain size when all approximate comparisons are replaced by exact comparisons. Since we concentrate on unary languages in this paper, this result covers most cases of interest.
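To make the counting definition concrete, here is a small brute-force sketch (ours, not from the paper); the knowledge base $\|P(x)\|_x \lesssim_1 0.3$, the vocabulary $\{P, c\}$, and the tolerance value are illustrative choices. Worlds of size $N$ are grouped by the size $r$ of $P$'s extension: there are $\binom{N}{r}$ such extensions and $N$ denotations for $c$, of which $r$ make $P(c)$ true.

    from math import comb

    def degree_of_belief(N, tau1=0.05):
        # Pr^tau_N(P(c) | KB) for KB = ||P(x)||_x <~_1 0.3, by counting worlds
        n_kb = n_phi_and_kb = 0
        for r in range(N + 1):
            if r / N <= 0.3 + tau1:             # (W, tau) |= KB
                n_kb += comb(N, r) * N          # worlds satisfying KB
                n_phi_and_kb += comb(N, r) * r  # ... that also satisfy P(c)
        return n_phi_and_kb / n_kb

    for N in (20, 200, 2000):
        print(N, degree_of_belief(N))           # 0.317..., 0.339..., 0.349...

The values drift toward $0.3 + \tau_1 = 0.35$ as $N$ grows; Section 4 explains why this is the limiting degree of belief.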
Even if KB is eventually consistent, the limit may not exist. For example, it may be the case that $\Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$ oscillates between $\alpha + \tau_i$ and $\alpha - \tau_i$ for some $\tau_i$ as $N$ gets large. In this case, for any particular $\vec{\tau}$, the limit as $N$ grows will not exist. However, it seems as if the limit as $\vec{\tau}$ grows small should, in this case, be $\alpha$, since the oscillations about $\alpha$ go to 0.

We avoid such problems by considering the lim sup and lim inf, rather than the limit. For any set $S \subseteq \mathbb{R}$, the infimum of $S$, $\inf S$, is the greatest lower bound of $S$. The lim inf of a sequence is the limit of the infimums; that is,

$$\liminf_{N \to \infty} a_N = \lim_{N \to \infty} \inf\{a_i : i > N\}.$$

The lim inf exists for any sequence bounded from below, even if the limit does not. The lim sup is defined analogously, where $\sup S$ denotes the least upper bound of $S$. If $\lim_{N \to \infty} a_N$ does exist, then $\lim_{N \to \infty} a_N = \liminf_{N \to \infty} a_N = \limsup_{N \to \infty} a_N$. Since, for any $\vec{\tau}$, the sequence $\Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$ is always bounded from above and below, the lim sup and lim inf always exist. Thus, we do not have to worry about the problem of nonexistence for particular values of $\vec{\tau}$. We can now present the final form of our definition.

Definition 2.3: If

$$\lim_{\vec{\tau} \to 0}\, \liminf_{N \to \infty} \Pr{}^{\vec{\tau}}_N(\varphi|\mathrm{KB}) \quad \text{and} \quad \lim_{\vec{\tau} \to 0}\, \limsup_{N \to \infty} \Pr{}^{\vec{\tau}}_N(\varphi|\mathrm{KB})$$

both exist and are equal, then the degree of belief in $\varphi$ given KB, written $\Pr_\infty(\varphi|\mathrm{KB})$, is defined as the common limit; otherwise $\Pr_\infty(\varphi|\mathrm{KB})$ does not exist.

We close this section with a few remarks on our definition. First note that, even using this definition, there are many cases where the degree of belief does not exist. However, as some of our later examples show, in many situations the nonexistence of a degree of belief can be understood intuitively (for instance, see Example 4.3 and the subsequent discussion). We could, alternatively, have taken the degree of belief to be the interval defined by $\lim_{\vec{\tau} \to 0} \liminf_{N \to \infty} \Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$ and $\lim_{\vec{\tau} \to 0} \limsup_{N \to \infty} \Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$, provided each of them exists. This would have been a perfectly reasonable choice; most of the results we state would go through with very little change if we had taken this definition. Our definition simplifies the exposition slightly.

Finally, we remark that it may seem unreasonable to take limits if we know the domain size or have a bound on the domain size. Clearly, if we know $N$ and $\vec{\tau}$, then it seems more reasonable to use $\Pr^{\vec{\tau}}_N$ rather than $\Pr_\infty$ as our degree of belief. Indeed, as shown in (Bacchus et al., 1994), many of the important properties that hold for the degree of belief defined by $\Pr_\infty$ hold for $\Pr^{\vec{\tau}}_N$, for all choices of $N$ and $\vec{\tau}$. The connection to maximum entropy that we make in this paper holds only at the limit, but because (as our proofs show) the convergence is rapid, the degree of belief $\Pr_\infty(\varphi|\mathrm{KB})$ is typically a very good approximation to $\Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$, even for moderately large $N$ and moderately small $\vec{\tau}$.

Degrees of belief and entropy

Introduction to maximum entropy

The idea of maximizing entropy has played an important role in many fields, including the study of probabilistic models for inferring degrees of belief (Jaynes, 1957; Shannon & Weaver, 1949). In the simplest setting, we can view entropy as a real-valued function on finite probability spaces.
If $\Omega$ is a finite set and $\mu$ is a probability measure on $\Omega$, the entropy $H(\mu)$ is defined to be $-\sum_{\omega \in \Omega} \mu(\omega) \ln \mu(\omega)$ (we take $0 \ln 0 = 0$). One standard application of entropy is the following. Suppose we know the space $\Omega$, but have only partial information about $\mu$, expressed in the form of constraints. For example, we might have a constraint such as $\mu(\omega_1) + \mu(\omega_2) \le 1/3$. Although there may be many measures $\mu$ that are consistent with what we know, the principle of maximum entropy suggests that we adopt that measure $\mu^*$ which has the largest entropy among all the consistent possibilities. Using the appropriate definitions, it can be shown that there is a sense in which $\mu^*$ incorporates the "least" additional information (Shannon & Weaver, 1949). For example, if we have no constraints on $\mu$, then $\mu^*$ will be the measure that assigns equal probability to all elements of $\Omega$. Roughly speaking, $\mu^*$ assigns probabilities as equally as possible given the constraints.
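As a small numerical illustration (ours, not the paper's), the following sketch maximizes entropy over a three-element space under the sample constraint $\mu(\omega_1) + \mu(\omega_2) \le 1/3$ mentioned above; the use of scipy and the starting point are incidental choices.

    import numpy as np
    from scipy.optimize import minimize

    def neg_entropy(mu):
        mu = np.clip(mu, 1e-12, 1.0)           # we take 0 ln 0 = 0
        return float(np.sum(mu * np.log(mu)))  # minimizing this maximizes H

    res = minimize(
        neg_entropy,
        x0=np.full(3, 1 / 3),
        bounds=[(0.0, 1.0)] * 3,
        constraints=[
            {"type": "eq", "fun": lambda mu: np.sum(mu) - 1.0},         # probability
            {"type": "ineq", "fun": lambda mu: 1 / 3 - mu[0] - mu[1]},  # constraint
        ],
    )
    print(res.x)  # roughly (1/6, 1/6, 2/3): as equal as the constraint allows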
From formulas to constraints

Like maximum entropy, the random-worlds method is also used to determine degrees of belief (i.e., probabilities) relative to a knowledge base. Aside from this, is there any connection between the two ideas? Of course, there is the rather trivial observation that random-worlds considers a uniform probability distribution (over the set of worlds satisfying KB), and it is well-known that the uniform distribution over any set has the highest possible entropy. But in this section we show another, entirely different and much deeper, connection between random-worlds and the principle of maximum entropy. This connection holds provided that we restrict the knowledge base so that it uses only unary predicates and constants. In this case we can consider probability distributions, and in particular the maximum-entropy distribution, over the set of atoms. Atoms are of course very different from possible worlds; for instance, there are only finitely many of them (independent of the domain size $N$). Furthermore, the maximum-entropy distributions we consider will typically not be uniform. Nevertheless, maximum entropy in this new space can tell us a lot about the degrees of belief defined by random worlds. In particular, this connection will allow us to use maximum entropy as a tool for computing degrees of belief. We believe that the restriction to unary predicates is necessary for the connection we are about to make. Indeed, as long as the knowledge base makes use of a binary predicate symbol (or unary function symbol), we suspect that there is no useful connection between the two approaches at all; see Section 5 for some discussion.

Let $\mathcal{L}^{\approx}_1$ be the sublanguage of $\mathcal{L}^{\approx}$ where only unary predicate symbols and constant symbols appear in formulas; in particular, we assume that equality between terms does not occur in formulas in $\mathcal{L}^{\approx}_1$. (Recall that in $\mathcal{L}^{\approx}$, we allow equality between terms, but disallow equality between proportion expressions.) Let $\mathcal{L}^{=}_1$ be the corresponding sublanguage of $\mathcal{L}^{=}$. In this subsection, we show that the expressive power of a knowledge base KB in the language $\mathcal{L}^{\approx}_1$ is quite limited. In fact, such a KB can essentially only place constraints on the proportions of the atoms. If we then think of these as constraints on the "probabilities of the atoms", then we have the ingredients necessary to apply maximum entropy. In Section 3.3 we show that there is a strong connection between the maximum-entropy distribution found this way and the degree of belief generated by the random-worlds method.

To see what constraints a formula places on the probabilities of atoms, it is useful to convert the formula to a certain canonical form. As a first step to doing this, we formalize the definition of atom given in the introduction. Let $\mathcal{P} = \{P_1, \ldots, P_k\}$ consist of the unary predicate symbols in the vocabulary $\Phi$.

Definition 3.1: An atom (over $\mathcal{P}$) is a conjunction of the form $P'_1(x) \wedge \cdots \wedge P'_k(x)$, where each $P'_i$ is either $P_i$ or $\neg P_i$. Since the variable $x$ is irrelevant to our concerns, we typically suppress it and describe an atom as a conjunction of the form $P'_1 \wedge \cdots \wedge P'_k$. Note that there are $2^{|\mathcal{P}|} = 2^k$ atoms over $\mathcal{P}$ and that they are mutually exclusive and exhaustive. Throughout this paper, we use $K$ to denote $2^k$ and $A_1, \ldots, A_K$ to denote the atoms over $\mathcal{P}$, listed in some fixed order.

Example 3.2: There are $K = 4$ atoms over $\mathcal{P} = \{P_1, P_2\}$: $A_1 = P_1 \wedge P_2$, $A_2 = P_1 \wedge \neg P_2$, $A_3 = \neg P_1 \wedge P_2$, $A_4 = \neg P_1 \wedge \neg P_2$.

The atomic proportion terms $\|A_1(x)\|_x, \ldots, \|A_K(x)\|_x$ will play a significant role in our technical development. It turns out that $\mathcal{L}^{\approx}_1$ is a rather weak language: a formula $\mathrm{KB} \in \mathcal{L}^{\approx}_1$ does little more than constrain the proportions of the atoms. In other words, for any such KB we can find an equivalent formula in which the only proportion expressions are these unconditional proportions of atoms. The more complex syntactic machinery in $\mathcal{L}^{\approx}_1$ (proportions over tuples, first-order quantification, nested proportions, and conditional proportions) does not add expressive power. (It does add convenience, however; knowledge can often be expressed far more succinctly if the full power of the language is used.)
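Definition 3.1 translates directly into a few lines of code (our sketch; the string encoding of literals is an arbitrary choice), enumerating the atoms of Example 3.2 as sign patterns:

    from itertools import product

    def atoms(predicates):
        # one atom per sign pattern; True means P_i, False means its negation
        for signs in product([True, False], repeat=len(predicates)):
            yield " & ".join(p if s else "~" + p
                             for p, s in zip(predicates, signs))

    print(list(atoms(["P1", "P2"])))
    # ['P1 & P2', 'P1 & ~P2', '~P1 & P2', '~P1 & ~P2'], i.e. A_1, ..., A_4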
Given any KB, the first step towards applying maximum entropy is to use $\mathcal{L}^{\approx}_1$'s lack of expressivity and replace all proportion terms by atomic proportion terms. It is also useful to make various other simplifications to KB that will help us in Section 4. We combine these steps and require that KB be transformed into a special canonical form which we now describe.

Definition 3.3: An atomic term $t$ over $\mathcal{P}$ is a polynomial over terms of the form $\|A(x)\|_x$, where $A$ is an atom over $\mathcal{P}$. Such an atomic term $t$ is positive if every coefficient of the polynomial $t$ is positive.

Definition 3.4: A (closed) sentence $\theta \in \mathcal{L}^{=}_1$ is in canonical form if it is a disjunction of conjunctions, where each conjunct is one of the following:

- $t' = 0$, $(t' > 0 \wedge t \le t' \varepsilon_i)$, or $(t' > 0 \wedge \neg(t \le t' \varepsilon_i))$, where $t$ and $t'$ are atomic terms and $t'$ is positive,
- $\exists x\, A_i(x)$ or $\neg \exists x\, A_i(x)$ for some atom $A_i$, or
- $A_i(c)$ for some atom $A_i$ and some constant $c$.

Furthermore, a disjunct cannot contain both $A_i(c)$ and $A_j(c)$ for $i \neq j$ as conjuncts, nor can it contain both $A_i(c)$ and $\neg \exists x\, A_i(x)$. (Note that these last conditions are simply minimal consistency requirements.)

Theorem 3.5: Every formula in $\mathcal{L}^{=}_1$ is equivalent to a formula in canonical form. Moreover, there is an effective procedure that, given a formula $\theta \in \mathcal{L}^{=}_1$, constructs an equivalent formula $\hat{\theta}$ in canonical form.

The proof of this theorem, and of all theorems in this paper, can be found in the appendix. We remark that the length of the formula $\hat{\theta}$ is typically exponential in the length of $\theta$. Such a blowup seems inherent in any scheme defined in terms of atoms.

Theorem 3.5 is a generalization of Claim 5.7.1 in (Halpern, 1990). It, in turn, is a generalization of a well-known result which says that any first-order formula with only unary predicates is equivalent to one with only depth-one quantifier nesting. Roughly speaking, this is because for a quantified formula such as $\exists x\, \theta'$, subformulas talking about a variable $y$ other than $x$ can be moved outside the scope of the quantifier. This is possible because no literal subformula can talk about $x$ and $y$ together. Our proof uses the same idea and extends it to proportion statements. In particular, it shows that for any $\theta \in \mathcal{L}^{\approx}_1$ there is an equivalent $\hat{\theta}$ which has no nested quantifiers or nested proportions.

Notice, however, that such a result does not hold once we allow even a single binary predicate in the language. For example, the formula $\forall y\, \exists x\, R(x, y)$ clearly needs nested quantification because $R(x, y)$ talks about both $x$ and $y$ and so must remain within the scope of both quantifiers. With binary predicates, each additional depth of nesting really does add expressive power. This shows that there can be no "canonical form" theorem quite like Theorem 3.5 for richer languages. This issue is one of the main reasons why we restrict the KB to a unary language in this paper. (See Section 5 for further discussion.)

Given any formula in canonical form we can immediately derive from it, in a syntactic manner, a set of constraints on the possible proportions of atoms.

Definition 3.6: Let KB be in canonical form. We construct a formula $\gamma(\mathrm{KB})$ in the language of real closed fields (i.e., over the vocabulary $\{0, 1, +, \cdot\}$) as follows, where $u_1, \ldots, u_K$ are fresh variables (distinct from the tolerance variables $\varepsilon_j$):

- we replace each occurrence of the formula $A_i(c)$ by $u_i > 0$,
- we replace each occurrence of $\exists x\, A_i(x)$ by $u_i > 0$ and replace each occurrence of $\neg \exists x\, A_i(x)$ by $u_i = 0$,
- we replace each occurrence of $\|A_i(x)\|_x$ by $u_i$.

Notice that $\gamma(\mathrm{KB})$ has two types of variables: the new variables $u_i$ that we just introduced, and the tolerance variables $\varepsilon_i$. In order to eliminate the dependence on the latter, we often consider the formula $\gamma(\mathrm{KB}[\vec{\tau}])$ for some tolerance vector $\vec{\tau}$.

Definition 3.7: Given a formula $\xi$ over the variables $u_1, \ldots, u_K$, let $\mathrm{Sol}[\xi]$ be the set of vectors in $\Delta_K = \{\vec{u} \in [0,1]^K : \sum_{i=1}^K u_i = 1\}$ satisfying $\xi$. Formally, if $(a_1, \ldots, a_K) \in \Delta_K$, then $(a_1, \ldots, a_K) \in \mathrm{Sol}[\xi]$ iff $(\mathbb{R}, V) \models \xi$, where $V$ is a valuation such that $V(u_i) = a_i$.

Definition 3.8: The solution space of KB given $\vec{\tau}$, denoted $S^{\vec{\tau}}[\mathrm{KB}]$, is defined to be the closure of $\mathrm{Sol}[\gamma(\mathrm{KB}[\vec{\tau}])]$. (Recall that the closure of a set $X \subseteq \mathbb{R}^K$ consists of all $K$-tuples that are the limit of a sequence of $K$-tuples in $X$.)

If KB is not in canonical form, we define $\gamma(\mathrm{KB})$ and $S^{\vec{\tau}}[\mathrm{KB}]$ to be $\gamma(\widehat{\mathrm{KB}})$ and $S^{\vec{\tau}}[\widehat{\mathrm{KB}}]$, respectively, where $\widehat{\mathrm{KB}}$ is the formula in canonical form equivalent to KB obtained by the procedure appearing in the proof of Theorem 3.5.

Example 3.9: Let $\mathcal{P}$ be $\{P_1, P_2\}$, with the atoms ordered as in Example 3.2. Consider

$$\mathrm{KB} = \forall x\, P_1(x) \wedge 3\|P_1(x) \wedge P_2(x)\|_x \lesssim_i 1.$$

The canonical formula $\widehat{\mathrm{KB}}$ equivalent to KB is (note that here we are viewing KB as a formula in $\mathcal{L}^{=}$, under the translation defined earlier; we do this throughout the paper without further comment):

$$\neg \exists x\, A_3(x) \wedge \neg \exists x\, A_4(x) \wedge 3\|A_1(x)\|_x \le 1 + \varepsilon_i.$$

As expected, $\widehat{\mathrm{KB}}$ constrains both $\|A_3(x)\|_x$ and $\|A_4(x)\|_x$ (i.e., $u_3$ and $u_4$) to be 0. We also see that $\|A_1(x)\|_x$ (i.e., $u_1$) is (approximately) at most $1/3$. Therefore:

$$S^{\vec{\tau}}[\mathrm{KB}] = \left\{ (u_1, \ldots, u_4) \in \Delta_4 : u_1 \le 1/3 + \tau_i/3,\ u_3 = u_4 = 0 \right\}.$$
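Continuing Example 3.9 numerically (our sketch, with scipy as an incidental choice): in the limit $\tau_i \to 0$, the maximum-entropy point of this solution space can be found by constrained optimization.

    import numpy as np
    from scipy.optimize import minimize

    def neg_entropy(u):
        u = np.clip(u, 1e-12, 1.0)
        return float(np.sum(u * np.log(u)))

    res = minimize(
        neg_entropy,
        x0=np.array([0.25, 0.75, 0.0, 0.0]),
        bounds=[(0, 1 / 3), (0, 1), (0, 0), (0, 0)],  # u1 <= 1/3, u3 = u4 = 0
        constraints=[{"type": "eq", "fun": lambda u: np.sum(u) - 1.0}],
    )
    print(res.x)  # close to (1/3, 2/3, 0, 0): u1 is pushed to its upper bound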
The concentration phenomenon

With every world $W \in \mathcal{W}$, we can associate a particular tuple $(u_1, \ldots, u_K)$, where $u_i$ is the fraction of the domain satisfying atom $A_i$ in $W$:

Definition 3.10: Given a world $W \in \mathcal{W}$, we define $\pi(W) \in \Delta_K$ to be $(\|A_1(x)\|_x, \|A_2(x)\|_x, \ldots, \|A_K(x)\|_x)$, where the values of the proportions are interpreted over $W$. We say that the vector $\pi(W)$ is the point associated with $W$.

We define the entropy of any model $W$ to be the entropy of $\pi(W)$; that is, if $\pi(W) = (u_1, \ldots, u_K)$, then the entropy of $W$ is $H(u_1, \ldots, u_K)$. As we are about to show, the entropy of $\vec{u}$ turns out to be a very good asymptotic indicator of how many worlds $W$ there are such that $\pi(W) = \vec{u}$. In fact, there are so many more worlds near points of high entropy that we can ignore all the other points when computing degrees of belief. This concentration phenomenon, as Jaynes (1982) has called it, is essentially the content of the next lemma and justifies our interest in the maximum-entropy point(s) of $S^{\vec{\tau}}[\mathrm{KB}]$.

For any $S \subseteq \Delta_K$ let $\#\mathrm{worlds}^{\vec{\tau}}_N[S](\mathrm{KB})$ denote the number of worlds $W$ of size $N$ such that $(W, \vec{\tau}) \models \mathrm{KB}$ and such that $\pi(W) \in S$; for any $\vec{u} \in \Delta_K$ let $\#\mathrm{worlds}^{\vec{\tau}}_N[\vec{u}](\mathrm{KB})$ abbreviate $\#\mathrm{worlds}^{\vec{\tau}}_N[\{\vec{u}\}](\mathrm{KB})$. Of course $\#\mathrm{worlds}^{\vec{\tau}}_N[\vec{u}](\mathrm{KB})$ is necessarily zero unless all components of $\vec{u}$ are multiples of $1/N$. However, if there are any models associated with $\vec{u}$ at all, we can estimate their number quite accurately using the entropy function:

Lemma 3.11: There is a polynomial $p$ (depending only on the vocabulary) such that, for all $\vec{\tau}$, $N$, and $\vec{u} \in \Delta_K$, if $\#\mathrm{worlds}^{\vec{\tau}}_N[\vec{u}](\mathrm{KB}) > 0$ then

$$\frac{e^{N H(\vec{u})}}{p(N)} \;\le\; \#\mathrm{worlds}^{\vec{\tau}}_N[\vec{u}](\mathrm{KB}) \;\le\; p(N)\, e^{N H(\vec{u})}.$$

Of course, it follows from the lemma that tuples whose entropy is near maximum have overwhelmingly more worlds associated with them than tuples whose entropy is further from maximum. This is essentially the concentration phenomenon.

Lemma 3.11 is actually fairly easy to prove. The following simple example illustrates the main idea.

Example 3.12: Suppose $\Phi = \{P\}$ and $\mathrm{KB} = \mathit{true}$. We have $K = 2$ and $\Delta_2 = \{(u_1, 1 - u_1) : 0 \le u_1 \le 1\}$, where the atoms are $A_1 = P$ and $A_2 = \neg P$. For any $N$, partition the worlds in $\mathcal{W}_N$ according to the point to which they correspond. For example, the graph in Figure 1 shows us the partition of $\mathcal{W}_4$. In general, consider some point $\vec{u} = (r/N, (N-r)/N)$. The number of worlds corresponding to $\vec{u}$ is simply the number of ways of choosing the denotation of $P$. We need to choose which $r$ elements satisfy $P$; hence, the number of such worlds is $\binom{N}{r} = \frac{N!}{r!(N-r)!}$. Figure 2 shows the qualitative behavior of this function for large values of $N$. It is easy to see the asymptotic concentration around $\vec{u} = (0.5, 0.5)$. We can estimate the factorials appearing in this expression using Stirling's approximation, which allows us to treat the factorial $m!$ as $m^m = e^{m \ln m}$ (the factors omitted here cancel in the ratio). So, after substituting for the three factorials, we can estimate $\binom{N}{r}$ as $e^{N \ln N - (r \ln r + (N-r)\ln(N-r))}$, which reduces to $e^{N H(\vec{u})}$. The entropy term in the general case arises from the use of Stirling's approximation in an analogous way. (A more careful estimate is done in the proof of Lemma 3.11 in the appendix.)

Because of the exponential dependence on $N$ times the entropy, the number of worlds associated with points of high entropy swamps all other worlds as $N$ grows large. This concentration phenomenon, well-known in the field of statistical physics, forms the basis for our main result in this section. It asserts that it is possible to compute degrees of belief according to random worlds while ignoring all but those worlds whose entropy is near maximum.
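A quick numerical check of Example 3.12 (ours, not from the paper): the worlds within a small neighborhood of $(0.5, 0.5)$ come to dominate as $N$ grows, and $\binom{N}{r}$ matches $e^{N H(r/N)}$ up to a polynomial factor, as Lemma 3.11 asserts.

    from math import comb, exp, log

    def H(u1):
        return -sum(p * log(p) for p in (u1, 1 - u1) if p > 0)

    # fraction of the 2^N worlds whose point lies within 0.05 of (0.5, 0.5)
    for N in (20, 100, 500):
        near = sum(comb(N, r) for r in range(N + 1)
                   if abs(r / N - 0.5) <= 0.05)
        print(N, near / 2 ** N)              # about 0.50, 0.73, 0.97: tends to 1

    N, r = 100, 30
    print(comb(N, r), exp(N * H(r / N)))     # same exponential order of growth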
The next theorem essentially formalizes this phenomenon.

Theorem 3.13: For all sufficiently small $\vec{\tau}$, the following is true. Let $Q$ be the points with greatest entropy in $S^{\vec{\tau}}[\mathrm{KB}]$ and let $O \subseteq \mathbb{R}^K$ be any open set containing $Q$. Then for all $\varphi \in \mathcal{L}^{\approx}$ and for $\lim^* \in \{\limsup, \liminf\}$ we have

$$\lim^*_{N \to \infty} \Pr{}^{\vec{\tau}}_N(\varphi|\mathrm{KB}) = \lim^*_{N \to \infty} \frac{\#\mathrm{worlds}^{\vec{\tau}}_N[O](\varphi \wedge \mathrm{KB})}{\#\mathrm{worlds}^{\vec{\tau}}_N[O](\mathrm{KB})}.$$

We remark that this is quite a difficult theorem. We have discussed why Lemma 3.11 lets us look at models of KB whose entropy is (near) maximum. But the theorem tells us to look at the maximum-entropy points of $S^{\vec{\tau}}[\mathrm{KB}]$, which we defined using a (so far unmotivated) syntactic procedure applied to KB. It seems reasonable to expect that $S^{\vec{\tau}}[\mathrm{KB}]$ should tell us something about models of KB. But making this connection precise, and in particular showing how the maximum-entropy points of $S^{\vec{\tau}}[\mathrm{KB}]$ relate to models of KB with near-maximum entropy, is difficult. However, we defer all details of the proof of that result to the appendix.

In general, Theorem 3.13 may seem to be of limited usefulness: knowing that we only have to look at worlds near the maximum-entropy point does not substantially reduce the number of worlds we need to consider. (Indeed, the whole point of the concentration phenomenon is that almost all worlds have high entropy.) Nevertheless, as the rest of this paper shows, this result can be quite useful when combined with the following two results. The first of these says that if all the worlds near the maximum-entropy points have a certain property, then we should have degree of belief 1 that this property is true.

Corollary 3.14: For all sufficiently small $\vec{\tau}$, the following is true. Let $Q$ be the points with greatest entropy in $S^{\vec{\tau}}[\mathrm{KB}]$, let $O \subseteq \mathbb{R}^K$ be an open set containing $Q$, and let $\psi[O] \in \mathcal{L}^{=}$ be an assertion that holds for every world $W$ such that $\pi(W) \in O$. Then $\Pr^{\vec{\tau}}_\infty(\psi[O] \mid \mathrm{KB}) = 1$.

Example 3.15: For the knowledge base true in Example 3.12, it is easy to see that the maximum-entropy point is $(0.5, 0.5)$. Fix some arbitrary $\delta > 0$. Clearly, there is some open set $O$ around this point such that the assertion $\psi = \|P(x)\|_x \in [0.5 - \delta, 0.5 + \delta]$ holds for every world in $O$. Therefore, we can conclude that

$$\Pr{}^{\vec{\tau}}_\infty(\|P(x)\|_x \in [0.5 - \delta, 0.5 + \delta] \mid \mathit{true}) = 1.$$

As we show in (Bacchus et al., 1994), formulas with degree of belief 1 can essentially be treated just like other knowledge in KB. That is, the degrees of belief relative to KB and $\mathrm{KB} \wedge \psi$ will be identical (even if KB and $\mathrm{KB} \wedge \psi$ are not logically equivalent). More formally:

Theorem 3.16: (Bacchus et al., 1994) If $\Pr^{\vec{\tau}}_\infty(\psi|\mathrm{KB}) = 1$ and $\lim^* \in \{\limsup, \liminf\}$, then for any formula $\varphi$:

$$\lim^*_{N \to \infty} \Pr{}^{\vec{\tau}}_N(\varphi|\mathrm{KB}) = \lim^*_{N \to \infty} \Pr{}^{\vec{\tau}}_N(\varphi|\mathrm{KB} \wedge \psi).$$

Proof: For completeness, we repeat the proof from (Bacchus et al., 1994) here. Basic probabilistic reasoning shows that, for any $N$ and $\vec{\tau}$:

$$\Pr{}^{\vec{\tau}}_N(\varphi|\mathrm{KB}) = \Pr{}^{\vec{\tau}}_N(\varphi|\mathrm{KB} \wedge \psi) \Pr{}^{\vec{\tau}}_N(\psi|\mathrm{KB}) + \Pr{}^{\vec{\tau}}_N(\varphi|\mathrm{KB} \wedge \neg\psi) \Pr{}^{\vec{\tau}}_N(\neg\psi|\mathrm{KB}).$$

By assumption, $\Pr^{\vec{\tau}}_N(\psi|\mathrm{KB})$ tends to 1 when we take limits, so the first term tends to $\Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB} \wedge \psi)$. On the other hand, $\Pr^{\vec{\tau}}_N(\neg\psi|\mathrm{KB})$ has limit 0. Because $\Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB} \wedge \neg\psi)$ is bounded, we conclude that the second product also tends to 0. The result follows.

As we shall see in the next section, the combination of Corollary 3.14 and Theorem 3.16 is quite powerful.

Computing degrees of belief

Although the concentration phenomenon is interesting, its application to actually computing degrees of belief may not be obvious. Since we know that almost all worlds will have high entropy, a direct application of Theorem 3.13 does not substantially reduce the number of worlds we must consider. Yet, as we show in this section, the concentration theorem can form the basis of a practical technique for computing degrees of belief in many cases. We begin in Section 4.1 by presenting the intuitions underlying this technique. In Section 4.2 we build on these intuitions by presenting results for a restricted class of formulas: those queries which are quantifier-free formulas over a unary language with a single constant symbol. In spite of this restriction, many of the issues arising in the general case can be seen here. Moreover, as we show in Section 4.3, this restricted sublanguage is rich enough to allow us to embed two well-known propositional approaches that make use of maximum entropy: Nilsson's probabilistic logic (Nilsson, 1986) and the maximum-entropy extension of $\epsilon$-semantics (Geffner & Pearl, 1990) due to Goldszmidt, Morris, and Pearl (1990) (see also (Goldszmidt, Morris, & Pearl, 1993)). In Section 4.4, we consider whether the results for the restricted language can be extended. We show that they can, but several difficult and subtle issues arise.

The general strategy

Although the random-worlds method is defined by counting worlds, we can sometimes find more direct ways to calculate the degrees of belief it yields. In (Bacchus et al., 1994) we present a number of such techniques, most of which apply only in very special cases. One of the simplest and most intuitive is the following version of what philosophers have termed direct inference (Reichenbach, 1949). Suppose that all we know about an individual $c$ is some assertion $\psi(c)$; in other words, KB has the form $\psi(c) \wedge \mathrm{KB}'$, and the constant $c$ does not appear in $\mathrm{KB}'$. Also suppose that KB, together with a particular tolerance $\vec{\tau}$, implies that $\|\varphi(x)|\psi(x)\|_x$ is in some interval $[\alpha, \beta]$. It seems reasonable to argue that $c$ should be treated as a "typical" element satisfying $\psi(x)$, because by assumption KB contains no information suggesting otherwise. Therefore, we might hope to use the statistics directly, and conclude that $\Pr^{\vec{\tau}}_\infty(\varphi(c)|\mathrm{KB}) \in [\alpha, \beta]$. This is indeed the case, as the following theorem shows.

Theorem 4.1: (Bacchus et al., 1994) Let KB be a knowledge base of the form $\psi(\vec{c}\,) \wedge \mathrm{KB}'$, and assume that for all sufficiently small tolerance vectors $\vec{\tau}$, $\mathrm{KB}[\vec{\tau}] \models \|\varphi(\vec{x})|\psi(\vec{x})\|_{\vec{x}} \in [\alpha, \beta]$. If no constant in $\vec{c}$ appears in $\mathrm{KB}'$, in $\varphi(\vec{x})$, or in $\psi(\vec{x})$, then $\Pr_\infty(\varphi(\vec{c}\,)|\mathrm{KB}) \in [\alpha, \beta]$ (if the degree of belief exists at all).

This result, in combination with the results of the previous section, provides us with a very powerful tool. Roughly speaking, we propose to use the following strategy: The basic concentration phenomenon says that most worlds are very similar in a certain sense. As shown in Corollary 3.14, we can use this to find some assertions that are "almost certainly" true (i.e., with degree of belief 1) even if they are not logically implied by KB. Theorem 3.16 then tells us that we can treat these new assertions as if they are in fact known with certainty.
When these new assertions state statistical "knowledge", they can vastly increase our opportunities to apply direct inference. The following example illustrates this idea.

Example 4.2: Consider a very simple knowledge base over a vocabulary containing the single unary predicate $P$: $\mathrm{KB} = (\|P(x)\|_x \lesssim_1 0.3)$. There are two atoms $A_1$ and $A_2$ over $\{P\}$, with $A_1 = P$ and $A_2 = \neg P$. The solution space of this KB given $\vec{\tau}$ is clearly $S^{\vec{\tau}}[\mathrm{KB}] = \{(u_1, u_2) \in \Delta_2 : u_1 \le 0.3 + \tau_1\}$. A straightforward computation shows that, for $\tau_1 < 0.2$, this has a unique maximum-entropy point $\vec{v} = (0.3 + \tau_1, 0.7 - \tau_1)$. Now, consider the query $P(c)$. For all $\delta > 0$, let $\psi[\delta]$ be the formula $\|P(x)\|_x \in [(0.3 + \tau_1) - \delta, (0.3 + \tau_1) + \delta]$. This satisfies the condition of Corollary 3.14, so it follows that $\Pr^{\vec{\tau}}_\infty(\psi[\delta]|\mathrm{KB}) = 1$. Using Theorem 3.16, we know that for $\lim^* \in \{\liminf, \limsup\}$,

$$\lim^*_{N \to \infty} \Pr{}^{\vec{\tau}}_N(P(c)|\mathrm{KB}) = \lim^*_{N \to \infty} \Pr{}^{\vec{\tau}}_N(P(c)|\mathrm{KB} \wedge \psi[\delta]).$$

But now we can use direct inference. (Note that here, our "knowledge" about $c$ is vacuous, i.e., "$\mathit{true}(c)$".) We conclude that, if there is any limit at all, then necessarily $\Pr^{\vec{\tau}}_\infty(P(c)|\mathrm{KB} \wedge \psi[\delta]) \in [(0.3 + \tau_1) - \delta, (0.3 + \tau_1) + \delta]$. So, for all $\delta > 0$, $\Pr^{\vec{\tau}}_\infty(P(c)|\mathrm{KB}) \in [(0.3 + \tau_1) - \delta, (0.3 + \tau_1) + \delta]$. Since this is true for all $\delta$, the only possible value for $\Pr^{\vec{\tau}}_\infty(P(c)|\mathrm{KB})$ is $0.3 + \tau_1$, which is the value of $u_1$ (i.e., $\|P(x)\|_x$) at the maximum-entropy point. Note that it is also clear what happens as $\vec{\tau}$ tends to 0: $\Pr_\infty(P(c)|\mathrm{KB})$ is 0.3.

This example demonstrates the main steps of one possible strategy for computing degrees of belief. First the maximum-entropy points of the space $S^{\vec{\tau}}[\mathrm{KB}]$ are computed as a function of $\vec{\tau}$. Then, these are used to compute $\Pr^{\vec{\tau}}_\infty(\varphi|\mathrm{KB})$, assuming the limit exists (if not, the lim sup and lim inf of $\Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$ are computed instead). Finally, we compute the limit of this probability as $\vec{\tau}$ goes to zero.

Unfortunately, this strategy has a serious potential problem. We clearly cannot compute $\Pr^{\vec{\tau}}_\infty(\varphi|\mathrm{KB})$ separately for each of the infinitely many tolerance vectors $\vec{\tau}$ and then take the limit as $\vec{\tau}$ goes to 0. We might hope to compute this probability as an explicit function of $\vec{\tau}$, and then compute the limit. For instance, in Example 4.2 $\Pr^{\vec{\tau}}_\infty(P(c)|\mathrm{KB})$ was found to be $0.3 + \tau_1$, and so it is easy to see what happens as $\tau_1 \to 0$. But there is no reason to believe that $\Pr^{\vec{\tau}}_\infty(\varphi|\mathrm{KB})$ is, in general, an easily characterizable function of $\vec{\tau}$. If it is not, then computing the limit as $\vec{\tau}$ goes to 0 can be difficult or impossible. We would like to find a way to avoid this explicit limiting process altogether. It turns out that this is indeed possible in some circumstances. The main requirement is that the maximum-entropy points of $S^{\vec{\tau}}[\mathrm{KB}]$ converge to the maximum-entropy points of $S^{0}[\mathrm{KB}]$. (For future reference, notice that $S^{0}[\mathrm{KB}]$ is the closure of the solution space of the constraints obtained from KB by replacing all occurrences of $\approx_i$ by $=$ and all occurrences of $\lesssim_i$ by $\le$.) In many such cases, we can compute $\Pr_\infty(\varphi|\mathrm{KB})$ directly in terms of the maximum-entropy points of $S^{0}[\mathrm{KB}]$, without taking limits at all.

As the following example shows, this type of continuity does not hold in general: the maximum-entropy points of $S^{\vec{\tau}}[\mathrm{KB}]$ do not necessarily converge to those of $S^{0}[\mathrm{KB}]$.

Example 4.3: Consider the knowledge base

$$\mathrm{KB} = (\|P(x)\|_x \approx_1 0.3 \vee \|P(x)\|_x \approx_2 0.4) \wedge \|P(x)\|_x \not\approx_3 0.4.$$

It is easy to see that $S^{0}[\mathrm{KB}]$ is just $\{(0.3, 0.7)\}$: the point $(0.4, 0.6)$ is disallowed by the second conjunct. Now, consider $S^{\vec{\tau}}[\mathrm{KB}]$ for $\vec{\tau} > 0$.
If $\tau_2 \le \tau_3$, then $S^{\vec{\tau}}[\mathrm{KB}]$ indeed does not contain points where $u_1$ is near 0.4; the maximum-entropy point of this space is easily seen to be the one with $u_1 = 0.3 + \tau_1$. However, if $\tau_2 > \tau_3$ then there will be points in $S^{\vec{\tau}}[\mathrm{KB}]$ where $u_1$ is around 0.4; for instance, those where $0.4 + \tau_3 < u_1 \le 0.4 + \tau_2$. Since these points have a higher entropy than the points in the vicinity of 0.3, the former will dominate. Thus, the set of maximum-entropy points of $S^{\vec{\tau}}[\mathrm{KB}]$ does not converge to a single well-defined set. What it converges to (if anything) depends on how $\vec{\tau}$ goes to 0. This nonconvergence has consequences for degrees of belief. It is not hard to show that $\Pr^{\vec{\tau}}_\infty(P(c)|\mathrm{KB})$ can be either $0.3 + \tau_1$ or $0.4 + \tau_2$, depending on the precise relationship between $\tau_1$, $\tau_2$, and $\tau_3$. It follows that $\Pr_\infty(P(c)|\mathrm{KB})$ does not exist.

We say that a degree of belief $\Pr_\infty(\varphi|\mathrm{KB})$ is not robust if the behavior of $\Pr^{\vec{\tau}}_\infty(\varphi|\mathrm{KB})$ (or of $\liminf_{N \to \infty} \Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$ and $\limsup_{N \to \infty} \Pr^{\vec{\tau}}_N(\varphi|\mathrm{KB})$) as $\vec{\tau}$ goes to 0 depends on how $\vec{\tau}$ goes to 0. In other words, nonrobustness describes situations in which $\Pr_\infty(\varphi|\mathrm{KB})$ does not exist because of sensitivity to the exact choice of tolerances. We shall see a number of other examples of nonrobustness in later sections.

It might seem that the notion of robustness is an artifact of our approach. In particular, it seems to depend on the fact that our language has the expressive power to say that two tolerances represent a different degree of approximation, simply by using different subscripts ($\approx_2$ vs. $\approx_3$ in the example). In an approach to representing approximate equality that does not make these distinctions, we are bound to get the answer 0.3 in the example above, since then $\|P(x)\|_x \not\approx_3 0.4$ really would be the negation of $\|P(x)\|_x \approx_2 0.4$. We would argue that the answer 0.3 is not as reasonable as it might at first seem. Suppose one of the two different instances of 0.4 in the previous example had been slightly different; for example, suppose we had used 0.399 rather than 0.4 in the first of them. In this case, the second conjunct is essentially vacuous, and can be ignored. The maximum-entropy point in $S^{0}[\mathrm{KB}]$ now has $u_1 = 0.399$, and we indeed derive a degree of belief of 0.399 in $P(c)$. Thus, arbitrarily small changes to the numbers in the original knowledge base can cause large changes in our degrees of belief. But these numbers are almost always the result of approximate observations; this is reflected by our decision to use approximate equality rather than equality when referring to them. It does not seem reasonable to base actions on a degree of belief that can change so drastically in the face of small changes in the measurement of data. Note that, if we know that the two instances of 0.4 do, in fact, denote exactly the same number, we can represent this by using the same approximate equality connective in both occurrences. In this case, it is easy to see that we do get the answer 0.3.

A close look at the example shows that the nonrobustness arises because of the negated proportion expression $\|P(x)\|_x \not\approx_3 0.4$. Indeed, we can show that if we start with a KB in canonical form that does not contain negated proportion expressions then, in a precise sense, the set of maximum-entropy points of $S^{\vec{\tau}}[\mathrm{KB}]$ necessarily converges to the set of maximum-entropy points of $S^{0}[\mathrm{KB}]$. An argument can be made that we should eliminate negated proportion expressions from the language altogether.
It is one thing to argue that sometimes we have statistical values whose accuracy we are unsure about, so that we want to make logical assertions less stringent than exact numerical equality. It is harder to think of cases in which the opposite is true, and all we know is that some statistic is "not even approximately" equal to some value. However, we do not eliminate negated proportion expressions from the language, since without them we would not be able to prove an analogue to Theorem 3.5. (They arise when we try to flatten nested proportion expressions, for example.) Instead, we have identified a weaker condition that is sufficient to prevent problems such as that seen in Example 4.3. Essential positivity simply tests that negations are not interacting with the maximum-entropy computation in a harmful way.

Definition 4.4: Let $\tilde{\gamma}(\mathrm{KB}[0])$ be the result of replacing each strict inequality in $\gamma(\mathrm{KB}[0])$ with its weakened version. More formally, we replace each subformula of the form $t < 0$ with $t \le 0$, and each subformula of the form $t > 0$ with $t \ge 0$. (Recall that these are the only strict constraints possible in $\gamma(\mathrm{KB}[0])$, since all tolerance variables $\varepsilon_i$ are assigned 0.) Let $\tilde{S}^{0}[\mathrm{KB}]$ be $\overline{\mathrm{Sol}[\tilde{\gamma}(\mathrm{KB}[0])]}$, where we use $\overline{X}$ to denote the closure of $X$. We say that KB is essentially positive if the sets $S^{0}[\mathrm{KB}]$ and $\tilde{S}^{0}[\mathrm{KB}]$ have the same maximum-entropy points.

Example 4.5: Consider again the knowledge base KB from Example 4.3. The constraint formula $\gamma(\mathrm{KB}[0])$ is (after simplification):

$$(u_1 = 0.3 \vee u_1 = 0.4) \wedge (u_1 < 0.4 \vee u_1 > 0.4).$$

Its "weakened" version $\tilde{\gamma}(\mathrm{KB}[0])$ is:

$$(u_1 = 0.3 \vee u_1 = 0.4) \wedge (u_1 \le 0.4 \vee u_1 \ge 0.4),$$

which is clearly equivalent to $u_1 = 0.3 \vee u_1 = 0.4$. Thus, $S^{0}[\mathrm{KB}] = \{(u_1, u_2) \in \Delta_2 : u_1 = 0.3\}$ whereas $\tilde{S}^{0}[\mathrm{KB}] = S^{0}[\mathrm{KB}] \cup \{(0.4, 0.6)\}$. Since the two spaces have different maximum-entropy points, the knowledge base KB is not essentially positive.

As the following result shows, essential positivity suffices to guarantee that the maximum-entropy points of $S^{\vec{\tau}}[\mathrm{KB}]$ converge to those of $S^{0}[\mathrm{KB}]$.

Proposition 4.6: Assume that KB is essentially positive and let $Q$ be the set of maximum-entropy points of $S^{0}[\mathrm{KB}]$ (and thus also of $\tilde{S}^{0}[\mathrm{KB}]$). Then for all $\delta > 0$ and all sufficiently small tolerance vectors $\vec{\tau}$ (where "sufficiently small" may depend on $\delta$), every maximum-entropy point of $S^{\vec{\tau}}[\mathrm{KB}]$ is within $\delta$ of some maximum-entropy point in $Q$.

Queries for a single individual

We now show how to compute $\Pr_\infty(\varphi|\mathrm{KB})$ for a certain restricted class of first-order formulas $\varphi$ and knowledge bases KB. The most significant restriction is that the query $\varphi$ should be a quantifier-free (first-order) sentence over the vocabulary $\mathcal{P} \cup \{c\}$; thus, it is a query about a single individual, $c$. While this class is rather restrictive, it suffices to express many real-life examples. Moreover, it is significantly richer than the language considered by Paris and Vencovska (1989).

The following definition helps define the class of interest.

Definition 4.7: A formula is essentially propositional if it is a quantifier-free and proportion-free formula in the language $\mathcal{L}^{\approx}(\{P_1, \ldots, P_k\})$ (so that, in particular, it has no constant symbols) and has only one free variable $x$.

We say that $\varphi(c)$ is a simple query for KB if: $\varphi(x)$ is essentially propositional, and KB is of the form $\psi(c) \wedge \mathrm{KB}'$, where $\psi(x)$ is essentially propositional and $\mathrm{KB}'$ does not mention $c$.

Thus, just as in Theorem 4.1, we suppose that $\psi(c)$ summarizes all that is known about $c$.
In this section, we focus on computing the degree of belief $\Pr_\infty(\varphi(c)|\mathrm{KB})$ for a formula $\varphi(c)$ which is a simple query for KB.

Note that an essentially propositional formula is equivalent to a disjunction of atoms. For example, over the vocabulary $\{P_1, P_2\}$, the formula $P_1(x) \vee P_2(x)$ is equivalent to $A_1(x) \vee A_2(x) \vee A_3(x)$ (where the atoms are ordered as in Example 3.2). For an essentially propositional formula $\psi$, we take $\mathcal{A}(\psi)$ to be the (unique) set of atoms such that $\psi$ is equivalent to $\bigvee_{A_j \in \mathcal{A}(\psi)} A_j(x)$.

If we view a tuple $\vec{u} \in \Delta_K$ as a probability assignment to the atoms, we can extend $\vec{u}$ to a probability assignment on all essentially propositional formulas using this identification of an essentially propositional formula with a set of atoms:

Definition 4.8: Let $\psi$ be an essentially propositional formula. We define a function $F[\psi] : \Delta_K \to \mathbb{R}$ as follows:

$$F[\psi](\vec{u}) = \sum_{A_j \in \mathcal{A}(\psi)} u_j.$$

For essentially propositional formulas $\varphi(x)$ and $\psi(x)$ we define the (partial) function $F[\varphi|\psi] : \Delta_K \to \mathbb{R}$ to be:

$$F[\varphi|\psi](\vec{u}) = \frac{F[\varphi \wedge \psi](\vec{u})}{F[\psi](\vec{u})}.$$

Note that this function is undefined when $F[\psi](\vec{u}) = 0$.

As the following result shows, if $\varphi$ is a simple query for KB (of the form $\psi(c) \wedge \mathrm{KB}'$), then all that matters in computing $\Pr_\infty(\varphi|\mathrm{KB})$ is $F[\varphi|\psi](\vec{u})$ for tuples $\vec{u}$ of maximum entropy. Thus, in a sense, we are only using $\mathrm{KB}'$ to determine the space over which we maximize entropy. Having defined this space, we can focus on $\psi$ and $\varphi$ in determining the degree of belief.

Theorem 4.9: Suppose $\varphi(c)$ is a simple query for KB. For all $\vec{\tau}$ sufficiently small, if $Q$ is the set of maximum-entropy points in $S^{\vec{\tau}}[\mathrm{KB}]$ and $F[\psi](\vec{v}) > 0$ for all $\vec{v} \in Q$, then for $\lim^* \in \{\limsup, \liminf\}$ we have

$$\lim^*_{N \to \infty} \Pr{}^{\vec{\tau}}_N(\varphi(c)|\mathrm{KB}) \in \left[ \inf_{\vec{v} \in Q} F[\varphi|\psi](\vec{v}),\ \sup_{\vec{v} \in Q} F[\varphi|\psi](\vec{v}) \right].$$

The following is an immediate but important corollary of this theorem. It asserts that, if the space $S^{\vec{\tau}}[\mathrm{KB}]$ has a unique maximum-entropy point, then its value uniquely determines the probability $\Pr^{\vec{\tau}}_\infty(\varphi(c)|\mathrm{KB})$.

Corollary 4.10: Suppose $\varphi(c)$ is a simple query for KB. For all $\vec{\tau}$ sufficiently small, if $\vec{v}$ is the unique maximum-entropy point in $S^{\vec{\tau}}[\mathrm{KB}]$ and $F[\psi](\vec{v}) > 0$, then $\Pr^{\vec{\tau}}_\infty(\varphi(c)|\mathrm{KB}) = F[\varphi|\psi](\vec{v})$.

We are interested in $\Pr_\infty(\varphi(c)|\mathrm{KB})$, which means that we are interested in the limit of $\Pr^{\vec{\tau}}_\infty(\varphi(c)|\mathrm{KB})$ as $\vec{\tau} \to 0$. Suppose KB is essentially positive. Then, by the results of the previous section and the continuity of $F[\varphi|\psi]$, it is enough to look directly at the maximum-entropy points of $S^{0}[\mathrm{KB}]$. More formally, by combining Theorem 4.9 with Proposition 4.6, we can show:

Theorem 4.11: Suppose $\varphi(c)$ is a simple query for KB. If the space $S^{0}[\mathrm{KB}]$ has a unique maximum-entropy point $\vec{v}$, KB is essentially positive, and $F[\psi](\vec{v}) > 0$, then $\Pr_\infty(\varphi(c)|\mathrm{KB}) = F[\varphi|\psi](\vec{v})$.

We believe that this theorem will turn out to cover a lot of cases that occur in practice. As our examples and the discussion in the next section show, we often do get simple queries and knowledge bases that are essentially positive. Concerning the assumption of a unique maximum-entropy point, note that the entropy function is strictly concave and so this assumption is automatically satisfied if $S^{0}[\mathrm{KB}]$ is a convex space. Recall that a space $S$ is convex if for all $\vec{u}, \vec{u}' \in S$ and all $\alpha \in [0, 1]$, it is also the case that $\alpha\vec{u} + (1-\alpha)\vec{u}' \in S$. The space $S^{0}[\mathrm{KB}]$ is surely convex if it is defined using a conjunction of linear constraints. While it is clearly possible to create knowledge bases where $S^{0}[\mathrm{KB}]$ has multiple maximum-entropy points (for example, using disjunctions), we expect that such knowledge bases arise rarely in practical applications.
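Definition 4.8 translates directly into code (our sketch; the atom indexing follows Example 3.2, and the example formulas are illustrative choices):

    # atoms over {P1, P2}: A1 = P1 & P2, A2 = P1 & ~P2, A3 = ~P1 & P2,
    # A4 = ~P1 & ~P2; a formula is represented by its atom set A(psi),
    # given here as 0-based indices into the tuple u.

    def F(atom_set, u):
        return sum(u[j] for j in atom_set)

    def F_cond(phi_atoms, psi_atoms, u):
        denom = F(psi_atoms, u)
        if denom == 0:
            raise ValueError("F[phi|psi] is undefined when F[psi](u) = 0")
        return F(phi_atoms & psi_atoms, u) / denom   # A(phi & psi) = A(phi) n A(psi)

    P1 = {0, 1}            # A(P1)       = {A1, A2}
    P1_or_P2 = {0, 1, 2}   # A(P1 v P2)  = {A1, A2, A3}
    v = (0.25, 0.25, 0.25, 0.25)   # maximum-entropy point when KB' = true
    print(F_cond(P1, P1_or_P2, v)) # 2/3, the degree of belief in P1(c)
                                   # given psi(c) = P1(c) v P2(c)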
Perhaps the most restrictive assumption made by this theorem is the seemingly innocuous requirement that $F[\psi](\vec{v}) > 0$. This assumption is obviously necessary for the theorem to hold; without it, the function $F[\varphi|\psi]$ is simply not defined. Unfortunately, we show in Section 4.4 that this requirement is, in fact, a severe one; in particular, it prevents the theorem from being applied to most examples derived from default reasoning, using our statistical interpretation of defaults (Bacchus et al., 1994).

We close this subsection with an example of the theorem in action.

Example 4.12: Let the language consist of $\mathcal{P} = \{\mathit{Hepatitis}, \mathit{Jaundice}, \mathit{BlueEyed}\}$ and the constant Eric. There are eight atoms in this language. We use $A_{P'_1 P'_2 P'_3}$ to denote the atom $P'_1(x) \wedge P'_2(x) \wedge P'_3(x)$, where $P'_1$ is either $H$ (denoting Hepatitis) or $\bar{H}$ (denoting $\neg$Hepatitis), $P'_2$ is $J$ or $\bar{J}$ (for Jaundice and $\neg$Jaundice, respectively), and $P'_3$ is $B$ or $\bar{B}$ (for BlueEyed and $\neg$BlueEyed, respectively).

Consider the knowledge base $\mathrm{KB}_{hep}$:

$$\forall x\,(\mathit{Hepatitis}(x) \Rightarrow \mathit{Jaundice}(x)) \;\wedge\; \|\mathit{Hepatitis}(x)|\mathit{Jaundice}(x)\|_x \approx_1 0.8 \;\wedge\; \|\mathit{BlueEyed}(x)\|_x \approx_2 0.25 \;\wedge\; \mathit{Jaundice}(\mathit{Eric}).$$

If we order the atoms as $A_{HJB}, A_{HJ\bar{B}}, A_{H\bar{J}B}, A_{H\bar{J}\bar{B}}, A_{\bar{H}JB}, A_{\bar{H}J\bar{B}}, A_{\bar{H}\bar{J}B}, A_{\bar{H}\bar{J}\bar{B}}$, then it is not hard to show that $\gamma(\mathrm{KB}_{hep})$ is:

$$u_3 = 0 \;\wedge\; u_4 = 0 \;\wedge\; (u_1 + u_2) \le (0.8 + \varepsilon_1)(u_1 + u_2 + u_5 + u_6) \;\wedge\; (u_1 + u_2) \ge (0.8 - \varepsilon_1)(u_1 + u_2 + u_5 + u_6)$$
$$\wedge\; (u_1 + u_3 + u_5 + u_7) \le 0.25 + \varepsilon_2 \;\wedge\; (u_1 + u_3 + u_5 + u_7) \ge 0.25 - \varepsilon_2 \;\wedge\; (u_1 + u_2 + u_5 + u_6) > 0.$$

To find the space $S^{0}[\mathrm{KB}_{hep}]$ we simply set $\varepsilon_1 = \varepsilon_2 = 0$. Then it is quite straightforward to find the maximum-entropy point in this space, which, taking $\alpha = 2^{1.6}$, is:

$$(v_1, v_2, v_3, v_4, v_5, v_6, v_7, v_8) = \left( \frac{1}{5+\alpha},\ \frac{3}{5+\alpha},\ 0,\ 0,\ \frac{1}{4(5+\alpha)},\ \frac{3}{4(5+\alpha)},\ \frac{\alpha}{4(5+\alpha)},\ \frac{3\alpha}{4(5+\alpha)} \right).$$

Using $\vec{v}$, we can compute various asymptotic probabilities very easily. For example,

$$\Pr_\infty(\mathit{Hepatitis}(\mathit{Eric})|\mathrm{KB}_{hep}) = F[\mathit{Hepatitis}|\mathit{Jaundice}](\vec{v}) = \frac{v_1 + v_2}{v_1 + v_2 + v_5 + v_6} = \frac{\frac{4}{5+\alpha}}{\frac{4}{5+\alpha} + \frac{1}{5+\alpha}} = 0.8,$$

as expected. Similarly, we can show that $\Pr_\infty(\mathit{BlueEyed}(\mathit{Eric})|\mathrm{KB}_{hep}) = 0.25$ and that $\Pr_\infty(\mathit{BlueEyed}(\mathit{Eric}) \wedge \mathit{Hepatitis}(\mathit{Eric})|\mathrm{KB}_{hep}) = 0.2$. Note that the first two answers also follow from the direct inference principle (Theorem 4.1), which happens to be applicable in this case. The third answer shows that BlueEyed and Hepatitis are being treated as independent. It is a special case of a more general independence phenomenon that applies to random worlds; see (Bacchus et al., 1994, Theorem 5.27).

Probabilistic propositional logic

In this section we consider two variants of probabilistic propositional logic. As the following discussion shows, both can easily be captured by our framework. The embedding we discuss uses simple queries throughout, allowing us to appeal to the results of the previous section. Nilsson (1986) considered the problem of what could be inferred about the probability of certain propositions given some constraints. For example, we might know that $\Pr(\mathit{fly}|\mathit{bird}) \ge 0.7$ and that $\Pr(\mathit{yellow}) \le 0.2$, and be interested in $\Pr(\mathit{fly}|\mathit{bird} \wedge \mathit{yellow})$. Roughly speaking, Nilsson suggests computing this by considering all probability distributions consistent with the constraints, and then computing the range of values given to $\Pr(\mathit{fly}|\mathit{bird} \wedge \mathit{yellow})$ by these distributions.
Formally, suppose our language consists of $k$ primitive propositions, $p_1, \ldots, p_k$. Consider the set $\Omega$ of $K = 2^k$ truth assignments to these propositions. We give semantics to probabilistic statements over this language in terms of a probability distribution $\mu$ over the set $\Omega$ (see (Fagin, Halpern, & Megiddo, 1990) for details). Since each truth assignment $\omega \in \Omega$ determines the truth value of every propositional formula $\psi$, we can determine the probability of every such formula:

$$\Pr{}_\mu(\psi) = \sum_{\omega \models \psi} \mu(\omega).$$

Clearly, we can determine whether a probability distribution $\mu$ satisfies a set $\Sigma$ of probabilistic constraints. The standard notion of probabilistic propositional inference would say that $\Sigma \models \Pr(\psi) \in [\alpha_1, \alpha_2]$ if $\Pr_\mu(\psi)$ is within the range $[\alpha_1, \alpha_2]$ for every distribution $\mu$ that satisfies the constraints in $\Sigma$.

Unfortunately, while this is a very natural definition, the constraints that one can derive from it are typically quite weak. For that reason, Nilsson suggested strengthening this notion of inference by applying the principle of maximum entropy: rather than considering all distributions satisfying $\Sigma$, we consider only the distribution(s) that have the greatest entropy among those satisfying the constraints. As we now show, one implication of our results is that the random-worlds method provides a principled motivation for this introduction of maximum entropy to probabilistic propositional reasoning. In fact, the connection between probabilistic propositional reasoning and random worlds should now be fairly clear:

- The primitive propositions $p_1, \ldots, p_k$ correspond to the unary predicates $P_1, \ldots, P_k$. A propositional formula $\psi$ over $p_1, \ldots, p_k$ corresponds uniquely to an essentially propositional formula $\hat{\psi}$ as follows: we replace each occurrence of the propositional symbol $p_i$ with $P_i(x)$.
- The set $\Sigma$ of probabilistic constraints corresponds to a knowledge base $\mathrm{KB}[\Sigma]$, a constant-free knowledge base containing only proportion expressions. The correspondence is as follows: each constraint of the form $\Pr(\psi) \le \beta$ (respectively $\Pr(\psi) \ge \beta$, $\Pr(\psi) = \beta$) in $\Sigma$ corresponds to the proportion formula $\|\hat{\psi}(x)\|_x \lesssim_1 \beta$ (respectively $\|\hat{\psi}(x)\|_x \gtrsim_1 \beta$, $\|\hat{\psi}(x)\|_x \approx_1 \beta$).

Given this translation, it follows from Corollary 4.10 and Theorem 4.11 that the random-worlds degree of belief $\Pr_\infty(\hat{\varphi}(c)|\hat{\psi}(c) \wedge \mathrm{KB}[\Sigma])$ is precisely the conditional probability that the maximum-entropy distribution satisfying $\Sigma$ assigns to $\varphi$ given $\psi$; that is, random worlds provides a principled justification for Nilsson's use of maximum entropy. Connections of this general flavor between maximum entropy and counting have also been observed by Paris and Vencovska (1989) and Shastri (1989). The work of Paris and Vencovska is particularly relevant because they also realize the necessity of adopting a formal notion of "approximation", although the precise details of their approach differ from ours.

To the best of our knowledge, most of the work on probabilistic propositional reasoning and all formal presentations of the entropy/worlds connection (in particular, those of (Paris & Vencovska, 1989; Shastri, 1989)) have limited themselves to conjunctions of linear constraints. Our more general language gives us a great deal of additional expressive power. For example, it is quite reasonable to want the ability to express that properties are (approximately) statistically independent. For example, we may wish to assert that $\mathit{Bird}(x)$ and $\mathit{Yellow}(x)$ are independent properties by saying $\|\mathit{Bird}(x) \wedge \mathit{Yellow}(x)\|_x \approx_1 \|\mathit{Bird}(x)\|_x \cdot \|\mathit{Yellow}(x)\|_x$. Clearly, such constraints are not linear. Nevertheless, our Theorem 4.11 covers such cases and much more.
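The following numerical sketch (ours, not from the paper) illustrates the maximum-entropy version of the bird/yellow example above; the readings $\Pr(\mathit{fly}|\mathit{bird}) \ge 0.7$ and $\Pr(\mathit{yellow}) \le 0.2$ of the constraints, and the use of scipy, are our assumptions.

    import numpy as np
    from scipy.optimize import minimize

    # the 8 truth assignments to (fly, bird, yellow)
    worlds = [(f, b, y) for f in (0, 1) for b in (0, 1) for y in (0, 1)]
    FB  = [i for i, (f, b, y) in enumerate(worlds) if f and b]
    B   = [i for i, (f, b, y) in enumerate(worlds) if b]
    Y   = [i for i, (f, b, y) in enumerate(worlds) if y]
    FBY = [i for i, (f, b, y) in enumerate(worlds) if f and b and y]
    BY  = [i for i, (f, b, y) in enumerate(worlds) if b and y]

    def neg_entropy(m):
        m = np.clip(m, 1e-12, 1.0)
        return float(np.sum(m * np.log(m)))

    res = minimize(
        neg_entropy, x0=np.full(8, 1 / 8), bounds=[(0, 1)] * 8,
        constraints=[
            {"type": "eq",   "fun": lambda m: np.sum(m) - 1.0},
            {"type": "ineq", "fun": lambda m: np.sum(m[FB]) - 0.7 * np.sum(m[B])},
            {"type": "ineq", "fun": lambda m: 0.2 - np.sum(m[Y])},
        ],
    )
    m = res.x
    print(np.sum(m[FBY]) / np.sum(m[BY]))   # about 0.7

The answer matches $\Pr(\mathit{fly}|\mathit{bird})$: the maximum-entropy distribution treats the (unconstrained) property yellow as irrelevant, the behavior discussed next in the context of default reasoning.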
A version of probabilistic propositional reasoning has also been used to provide probabilistic semantics for default reasoning (Pearl, 1989). Here also, the connection to random worlds is of interest. In particular, it follows from Corollary 4.10 that the recent work of Goldszmidt, Morris, and Pearl (1990) can be embedded in the random-worlds framework. In the rest of this subsection, we explain their approach and the embedding.

Consider a language consisting of propositional formulas over the propositional variables $p_1, \ldots, p_k$, and default rules of the form $B \to C$ (read "$B$'s are typically $C$'s"), where $B$ and $C$ are propositional formulas. A distribution $\mu$ is said to $\epsilon$-satisfy a default rule $B \to C$ if $\mu(C|B) \ge 1 - \epsilon$. In addition to default rules, the framework also permits the use of material implication in a rule, as in $B \Rightarrow C$. A distribution $\mu$ is said to satisfy such a rule if $\mu(C|B) = 1$. A parameterized probability distribution (PPD) is a collection $\{\mu_\epsilon\}_{\epsilon > 0}$ of probability distributions over $\Omega$, parameterized by $\epsilon$. A PPD $\{\mu_\epsilon\}_{\epsilon > 0}$ $\epsilon$-satisfies a set $R$ of rules if, for every $\epsilon$, $\mu_\epsilon$ $\epsilon$-satisfies every default rule $r \in R$ and satisfies every non-default rule $r \in R$. A set $R$ of default rules $\epsilon$-entails $B \to C$ if for every PPD that $\epsilon$-satisfies $R$, $\lim_{\epsilon \to 0} \mu_\epsilon(C|B) = 1$.

As shown in (Geffner & Pearl, 1990), $\epsilon$-entailment possesses a number of reasonable properties typically associated with default reasoning, including a preference for more specific information. However, there are a number of desirable properties that it does not have. Among other things, irrelevant information is not ignored. (See (Bacchus et al., 1994) for an extensive discussion of this issue.)

To obtain additional desirable properties, $\epsilon$-semantics is extended in (Goldszmidt et al., 1990) by an application of the principle of maximum entropy. Instead of considering all possible PPD's, as above, we consider only the PPD $\{\mu^{R}_\epsilon\}_{\epsilon > 0}$ such that, for each $\epsilon$, $\mu^{R}_\epsilon$ has the maximum entropy among distributions that $\epsilon$-satisfy all the rules in $R$. (See (Goldszmidt et al., 1990) for precise definitions and technical details.) Note that, since the constraints used to define $\mu^{R}_\epsilon$ are all linear, there is indeed a unique such point of maximum entropy. A rule $B \to C$ is an ME-plausible consequence of $R$ if $\lim_{\epsilon \to 0} \mu^{R}_\epsilon(C|B) = 1$.

The notion of ME-plausible consequence is analyzed in detail in (Goldszmidt et al., 1990), where it is shown to inherit all the nice properties of $\epsilon$-entailment (such as the preference for more specific information), while successfully ignoring irrelevant information. Equally importantly, algorithms are provided for computing the ME-plausible consequences of a set of rules in certain cases.

Our maximum-entropy results can be used to show that the approach of (Goldszmidt et al., 1990) can be embedded in our framework in a straightforward manner. We simply translate a default rule $r$ of the form $B \to C$ into a first-order default rule

$$r^* =_{\mathrm{def}} \|\psi_C(x)|\psi_B(x)\|_x \approx_1 1,$$

as in our earlier translation of Nilsson's approach. Note that the formulas that arise under this translation all use the same approximate equality connective $\approx_1$. The reason is that the approach of (Goldszmidt et al., 1990) uses the same $\epsilon$ for all default rules. We can similarly translate a (non-default) rule $r$ of the form $B \Rightarrow C$ into a first-order constraint using universal quantification:

$$r^* =_{\mathrm{def}} \forall x\,(\psi_B(x) \Rightarrow \psi_C(x)).$$

Under this translation, we can prove the following theorem: a rule $B \to C$ is an ME-plausible consequence of $R$ if and only if $\Pr_\infty(\psi_C(c) \mid \psi_B(c) \wedge \bigwedge_{r \in R} r^*) = 1$. In particular, this theorem implies that all the computational techniques and results described in (Goldszmidt et al., 1990) carry over to this special case of the random-worlds method. It also shows that random worlds provides a principled justification for the approach that (Goldszmidt et al., 1990) present (one which is quite different from the justification given in (Goldszmidt et al., 1990) itself).

Beyond simple queries

In Section 4.2 we restricted attention to simple queries. Our main result, Theorem 4.11, needed other assumptions as well: essential positivity, the existence of a unique maximum-entropy point $\vec{v}$, and the requirement that $F[\psi](\vec{v}) > 0$. We believe that this theorem is useful in spite of its limitations, as demonstrated by the discussion in Section 4.3. Nevertheless, this result allows us to take advantage of only a small fragment of our rich language. Can we find a more general theorem? After all, the basic concentration result (Theorem 3.13) holds with essentially no restrictions. In this section we show that it is indeed possible to extend Theorem 4.11 significantly. However, there are serious limitations and subtleties. We illustrate these problems by means of examples, and then state an extended result.

Our attempt to address these problems (so far as is possible) leads to a rather complicated final result. In fact, the problems we discuss are as interesting and important as the theorem we actually give: they help us understand more of the limits of maximum entropy. Of course, every issue we discuss in this subsection is relatively minor compared to maximum entropy's main (apparent) restriction, which concerns the use of non-unary predicates. For the reader who is less concerned about these other, lesser, issues we remark that it is possible to skip directly to Section 5.

We first consider the restrictions we placed on the KB, and show the difficulties that arise if we drop them. We start with the restriction to a single maximum-entropy point. As the concentration theorem (Theorem 3.13) shows, the entropy of almost every world is near maximum. But it does not follow that all the maximum-entropy points are surrounded by similar numbers of worlds. Thus, in the presence of more than one maximum-entropy point, we face the problem of finding the relative importance, or weighting, of each maximum-entropy point. As the following example illustrates, this weighting is often sensitive to the tolerance values. For this reason, non-unique entropy maxima often lead to nonrobustness.

Example 4.16: Suppose $\Phi = \{P, c\}$, and consider the knowledge base

$$\mathrm{KB} = (\|P(x)\|_x \lesssim_1 0.3) \vee (\|P(x)\|_x \gtrsim_2 0.7).$$

Assume we want to compute $\Pr_\infty(P(c)|\mathrm{KB})$. In this case, $S^{\vec{\tau}}[\mathrm{KB}]$ is $\{(u_1, u_2) \in \Delta_2 : u_1 \le 0.3 + \tau_1 \text{ or } u_1 \ge 0.7 - \tau_2\}$, and $S^{0}[\mathrm{KB}]$ is $\{(u_1, u_2) \in \Delta_2 : u_1 \le 0.3 \text{ or } u_1 \ge 0.7\}$. Note that $S^{0}[\mathrm{KB}]$ has two maximum-entropy points: $(0.3, 0.7)$ and $(0.7, 0.3)$. Now consider the maximum-entropy points of $S^{\vec{\tau}}[\mathrm{KB}]$ for $\vec{\tau} > 0$. It is not hard to show that if $\tau_1 > \tau_2$, then this space has a unique maximum-entropy point, $(0.3 + \tau_1, 0.7 - \tau_1)$. In this case, $\Pr^{\vec{\tau}}_\infty(P(c)|\mathrm{KB}) = 0.3 + \tau_1$. On the other hand, if $\tau_1 < \tau_2$, then the unique maximum-entropy point of this space is $(0.7 - \tau_2, 0.3 + \tau_2)$, in which case $\Pr^{\vec{\tau}}_\infty(P(c)|\mathrm{KB}) = 0.7 - \tau_2$. If $\tau_1 = \tau_2$, then the space $S^{\vec{\tau}}[\mathrm{KB}]$ has two maximum-entropy points, and by symmetry we obtain that $\Pr^{\vec{\tau}}_\infty(P(c)|\mathrm{KB}) = 0.5$. So, by appropriately choosing a sequence of tolerance vectors converging to 0, we can make the asymptotic value of this fraction either 0.3, 0.5, or 0.7. Thus $\Pr_\infty(P(c)|\mathrm{KB})$ does not exist.

It is not disjunctions per se that cause the problem here: if we consider instead the database $\mathrm{KB}' = (\|P(x)\|_x \lesssim_1 0.3) \vee (\|P(x)\|_x \gtrsim_2 0.6)$, then there is no difficulty. There is a unique maximum-entropy point of $S^{0}[\mathrm{KB}']$, namely $(0.6, 0.4)$, and the asymptotic probability $\Pr_\infty(P(c)|\mathrm{KB}') = 0.6$, as we would want. (We remark that it is also possible to construct examples of multiple maximum-entropy points by using quadratic constraints rather than disjunction.)
In light of this example (and many similar ones we can construct), we continue to assume that there is a single maximum-entropy point. As we argued earlier, we expect this to be true in typical practical applications, so the restriction does not seem very serious.

We now turn our attention to the requirement that $F[\psi](\vec{v}) > 0$. As we have already observed, this seems to be an obvious restriction to make, considering that the function $F[\varphi|\psi](\vec{v})$ is not defined otherwise. However, this difficulty is actually a manifestation of a much deeper problem. As the following example shows, any approach that just uses the maximum-entropy point of $S^{0}[\mathrm{KB}]$ will necessarily fail in some cases where $F[\psi](\vec{v}) = 0$.

Example 4.17: Consider the knowledge base

$$\mathrm{KB} = (\|\mathit{Penguin}(x)\|_x \approx_1 0) \wedge (\|\mathit{Fly}(x)|\mathit{Penguin}(x)\|_x \approx_2 0) \wedge \mathit{Penguin}(\mathit{Tweety}).$$

Suppose we want to compute $\Pr_\infty(\mathit{Fly}(\mathit{Tweety})|\mathrm{KB})$. We can easily conclude from Theorem 4.1 that this degree of belief is 0, as we would expect. However, we cannot reach this conclusion using Theorem 4.11 or anything like it. For consider the maximum-entropy point $\vec{v}$ of $S^{0}[\mathrm{KB}]$. The coordinates $v_1$, corresponding to $\mathit{Fly} \wedge \mathit{Penguin}$, and $v_2$, corresponding to $\neg\mathit{Fly} \wedge \mathit{Penguin}$, are both 0. Hence, $F[\mathit{Penguin}](\vec{v}) = 0$, so that Theorem 4.11 does not apply.

But, as we said, the problem is more fundamental. The information we need (that the proportion of flying penguins is zero) is simply not present if all we know is the maximum-entropy point $\vec{v}$. We can obtain the same space $S^{0}[\mathrm{KB}]$ (and thus the same maximum-entropy point) from quite different knowledge bases. In particular, consider $\mathrm{KB}'$ which simply asserts that $(\|\mathit{Penguin}(x)\|_x \approx_1 0) \wedge \mathit{Penguin}(\mathit{Tweety})$. This new knowledge base tells us nothing whatsoever about the fraction of flying penguins, and in fact it is easy to show that $\Pr_\infty(\mathit{Fly}(\mathit{Tweety})|\mathrm{KB}') = 0.5$. But of course it is impossible to distinguish this case from the previous one just by looking at $\vec{v}$. It follows that no result in the spirit of Theorem 4.11 (which just uses the value of $\vec{v}$) can be comprehensive.

The example shows that the philosophy behind Theorem 4.11 cannot be extended very far, if at all: it is inevitable that there will be problems when $F[\psi](\vec{v}) = 0$. But it is natural to ask whether there is a different approach altogether in which this restriction can be relaxed. That is, is it possible to construct a technique for computing degrees of belief in those cases where $F[\psi] = 0$? As we mentioned in Section 4.1, we might hope to do this by computing $\Pr^{\vec{\tau}}_\infty(\varphi|\mathrm{KB})$ as a function of $\vec{\tau}$ and then taking the limit as $\vec{\tau}$ goes to 0. In general, this seems very hard. But, interestingly, the computational technique of (Goldszmidt et al., 1990) does use this type of parametric analysis, demonstrating that things might not be so bad for various restricted cases. Another source of hope is to remember that maximum entropy is, for us, merely one tool for computing random-worlds degrees of belief. There may be other approaches that bypass entropy entirely. In particular, some of the theorems we give in (Bacchus et al., 1994) can be seen as doing this; these theorems will often apply even if $F[\psi] = 0$.

Another assumption made throughout Section 4.2 is that the knowledge base has a special form, namely $\psi(c) \wedge \mathrm{KB}'$, where $\psi$ is essentially propositional and $\mathrm{KB}'$ does not contain any occurrences of $c$.
The more general theorem we state later relaxes this somewhat, as follows.
Definition 4.18: A knowledge base KB is said to be separable with respect to a query $\varphi$ if it has the form $\psi \wedge KB'$, where $\psi$ contains neither quantifiers nor proportions, and $KB'$ contains none of the constant symbols appearing in $\varphi$ or in $\psi$. It should be clear that if a query $\varphi(c)$ is simple for KB (as assumed in the previous subsection), then the separability condition is satisfied.
As the following example shows, if we do not assume separability, we can easily run into nonrobust behavior:
Example 4.19: Consider the following knowledge base KB over the vocabulary $\Phi = \{P, c\}$:

$$(\|P(x)\|_x \approx_1 0.3 \wedge P(c)) \vee (\|P(x)\|_x \approx_2 0.3 \wedge \neg P(c)).$$

KB is not separable with respect to the query $P(c)$. The space $S^0[KB]$ consists of a unique point $(0.3, 0.7)$, which is also the maximum-entropy point. Both disjuncts of KB are consistent with the maximum-entropy point, so we might expect that the presence of the conjuncts $P(c)$ and $\neg P(c)$ in the disjuncts would not affect the degree of belief.
That is, if it were possible to ignore or discount the role of the tolerances, we would expect $\Pr_\infty(P(c)|KB) = 0.3$. However, this is not the case. Consider the behavior of $\Pr^{\vec{\varepsilon}}_\infty(P(c)|KB)$ for $\vec{\varepsilon} > 0$. If $\varepsilon_1 > \varepsilon_2$, then the maximum-entropy point of $S^{\vec{\varepsilon}}[KB]$ is $(0.3 + \varepsilon_1, 0.7 - \varepsilon_1)$. Now, consider some $\delta > 0$ sufficiently small so that $\varepsilon_2 + \delta < \varepsilon_1$. By Corollary 3.14, we deduce that $\Pr^{\vec{\varepsilon}}_\infty((\|P(x)\|_x > 0.3 + \varepsilon_2) \mid KB) = 1$. Therefore, by Theorem 3.16, $\Pr^{\vec{\varepsilon}}_\infty(P(c)|KB) = \Pr^{\vec{\varepsilon}}_\infty(P(c) \mid KB \wedge (\|P(x)\|_x > 0.3 + \varepsilon_2))$ (assuming the limit exists). But since the newly added expression is inconsistent with the second disjunct, we obtain that $\Pr^{\vec{\varepsilon}}_\infty(P(c)|KB) = \Pr^{\vec{\varepsilon}}_\infty(P(c) \mid P(c) \wedge (\|P(x)\|_x \approx_1 0.3)) = 1$, and not 0.3. On the other hand, if $\varepsilon_1 < \varepsilon_2$, we get the symmetric behavior, where $\Pr^{\vec{\varepsilon}}_\infty(P(c)|KB) = 0$. Only if $\varepsilon_1 = \varepsilon_2$ do we get the expected value of 0.3 for $\Pr^{\vec{\varepsilon}}_\infty(P(c)|KB)$. Clearly, by appropriately choosing a sequence of tolerance vectors converging to $\vec{0}$, we can make the asymptotic value of this fraction any of 0, 0.3, or 1, or not exist at all. Again, $\Pr_\infty(P(c)|KB)$ is not robust. We now turn our attention to restrictions on the query. In Section 4.2, we restricted to queries of the form $\varphi(c)$, where $\varphi(x)$ is essentially propositional. Although we intend to ease this restriction, we do not intend to allow queries that involve statistical information.
The following example illustrates the difficulties.
Example 4.20: Consider the knowledge base $KB = \|P(x)\|_x \approx_1 0.3$ and the query $\varphi = \|P(x)\|_x \approx_2 0.3$. It is easy to see that the unique maximum-entropy point of $S^{\vec{\varepsilon}}[KB]$ is $(0.3 + \varepsilon_1, 0.7 - \varepsilon_1)$. First suppose $\varepsilon_2 < \varepsilon_1$. From Corollary 3.14, it follows that $\Pr^{\vec{\varepsilon}}_\infty((\|P(x)\|_x > 0.3 + \varepsilon_2) \mid KB) = 1$. Therefore, by Theorem 3.16, $\Pr^{\vec{\varepsilon}}_\infty(\varphi|KB) = \Pr^{\vec{\varepsilon}}_\infty(\varphi \mid KB \wedge (\|P(x)\|_x > 0.3 + \varepsilon_2))$ (assuming the limit exists). The latter expression is clearly 0. On the other hand, if $\varepsilon_1 < \varepsilon_2$, then $KB[\vec{\varepsilon}] \models \varphi[\vec{\varepsilon}]$, so that $\Pr^{\vec{\varepsilon}}_\infty(\varphi|KB) = 1$. Thus, the limiting behavior of $\Pr^{\vec{\varepsilon}}_\infty(\varphi|KB)$ depends on how $\vec{\varepsilon}$ goes to $\vec{0}$, so that $\Pr_\infty(\varphi|KB)$ is nonrobust.
The real problem here is the semantics of proportion expressions in queries. While the utility of the approximate-comparison connectives in expressing statistical information in the knowledge base should be fairly uncontroversial, their role in conclusions we might draw, such as $\varphi$ in Example 4.20, is much less clear. The formal semantics we have defined requires that we consider all possible tolerances for a proportion expression in $\varphi$, so it is not surprising that nonrobustness is the usual result. 
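Returning to Example 4.19, the nonrobustness there can be verified by brute-force counting over finite worlds. The following is a small sketch (our own, not from the original text): a world over $\{P, c\}$ of size N is a P-extension of size k together with a denotation for c, and we count exactly which worlds satisfy $KB[\vec{\varepsilon}]$.

```python
from math import comb

def pr_P_of_c_given_KB(N, eps1, eps2):
    """Exact Pr_N^eps(P(c) | KB) for Example 4.19 by counting worlds.
    KB[eps] holds iff (|k/N - 0.3| <= eps1 and P(c))
                   or (|k/N - 0.3| <= eps2 and not P(c))."""
    yes = sum(comb(N, k) * k for k in range(N + 1)
              if abs(k / N - 0.3) <= eps1)          # worlds satisfying P(c)
    no = sum(comb(N, k) * (N - k) for k in range(N + 1)
             if abs(k / N - 0.3) <= eps2)           # worlds satisfying ~P(c)
    return yes / (yes + no)

for N in (50, 200, 1000):
    print(N, pr_P_of_c_given_KB(N, 0.05, 0.02))  # tends to 1 since eps1 > eps2
```

With $\varepsilon_1 > \varepsilon_2$ the first disjunct admits proportions closer to 0.5, so its worlds exponentially outnumber the others, and the computed probability approaches 1 rather than 0.3, exactly as the example predicts.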
One might argue that the tolerances in queries should be allowed to depend more closely on tolerances of expressions in the knowledge base. It is possible to formalize this intuition, as is done in (Koller & Halpern, 1992), to give an alternative semantics for dealing with proportion expressions in queries that often gives more reasonable behavior. Considerations of this alternative semantics would lead us too far afield here; rather, we focus for the rest of the section on first-order queries.
In fact, our goal is to allow arbitrary first-order queries, even those that involve predicates of arbitrary arity and equality (although we still need to restrict the knowledge base to the unary language $\mathcal{L}_1$). However, as the following example shows, quantifiers too can cause problems.
Example 4.21: Let $\Phi = \{P, c\}$ and consider $KB_1 = \forall x\, \neg P(x)$, $KB_2 = \|P(x)\|_x \approx_1 0$, and $\varphi = \exists x\, P(x)$. It is easy to see that $S^0[KB_1] = S^0[KB_2] = \{(0, 1)\}$, and therefore the unique maximum-entropy point in both is $\vec{v} = (0, 1)$. However, $\Pr_\infty(\varphi|KB_1)$ is clearly 0, whereas $\Pr_\infty(\varphi|KB_2)$ is actually 1. To see the latter fact, observe that the vast majority of models of $KB_2$ around $\vec{v}$ actually satisfy $\exists x\, P(x)$. There is actually only a single world associated with $(0, 1)$ at which $\exists x\, P(x)$ is false. This example is related to Example 4.17, because it illustrates another case in which $S^0[KB]$ cannot suffice to determine degrees of belief.
In the case of the knowledge base $KB_1$, the maximum-entropy point $(0, 1)$ is quite misleading about the nature of nearby worlds. We must avoid this sort of \"discontinuity\" when finding the degree of belief of a formula that involves first-order quantifiers. The notion of stability defined below is intended to deal with this problem. To define it, we first need the following notion of a size description.
Definition 4.22: A size description (over P) is a conjunction of K formulas: for each atom $A_j$ over P, it includes exactly one of $\exists x\, A_j(x)$ and $\neg\exists x\, A_j(x)$. For $\vec{u} \in \Delta_K$, the size description associated with $\vec{u}$, written $\sigma(\vec{u})$, is that size description which includes $\neg\exists x\, A_i(x)$ if $u_i = 0$ and $\exists x\, A_i(x)$ if $u_i > 0$.
The problems that we want to avoid occur when there is a maximum-entropy point $\vec{v}$ with size description $\sigma(\vec{v})$ such that in a neighborhood of $\vec{v}$, most of the worlds satisfying KB are associated with other size descriptions. Intuitively, the problem with this is that the coordinates of $\vec{v}$ alone give us misleading information about the nature of worlds near $\vec{v}$, and so about degrees of belief. 9 We give a sufficient condition which can be used to avoid this problem in the context of our theorems. This condition is effective and uses machinery (in particular, the ability to find solution spaces) that is needed to use the maximum-entropy approach in any case.
Definition 4.23: Let $\vec{v}$ be a maximum-entropy point of $S^{\vec{\varepsilon}}[KB]$. We say that $\vec{v}$ is safe (with respect to KB and $\vec{\varepsilon}$) if $\vec{v}$ is not contained in $S^{\vec{\varepsilon}}[KB \wedge \neg\sigma(\vec{v})]$. We say that KB and $\vec{\varepsilon}$ are stable for $\sigma$ if for every maximum-entropy point $\vec{v} \in S^{\vec{\varepsilon}}[KB]$ we have that $\sigma(\vec{v}) = \sigma$ and that $\vec{v}$ is safe with respect to KB and $\vec{\varepsilon}$.
The next result is the key property of stability that we need. Theorem 4.24: If KB and $\vec{\varepsilon} > 0$ are stable for $\sigma$, then $\Pr^{\vec{\varepsilon}}_\infty(\sigma|KB) = 1$. Our theorems will use the assumption that there exists some $\sigma$ such that, for all sufficiently small $\vec{\varepsilon}$, KB and $\vec{\varepsilon}$ are stable for $\sigma$ (the map $\sigma(\cdot)$ is illustrated in the sketch below). 
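The following small sketch (ours, and only illustrative) computes $\sigma(\vec{u})$ for the maximum-entropy point of $KB_2$ from Example 4.21, at tolerance 0 and at a positive tolerance. The atoms are $A_1 = P(x)$ and $A_2 = \neg P(x)$, so the feasible set at tolerance $\varepsilon_1$ is $\{(u_1, 1-u_1) : 0 \le u_1 \le \varepsilon_1\}$.

```python
import numpy as np

def sigma(u):
    """Size description of a point u in the simplex (Definition 4.22):
    'exists x A_i(x)' iff u_i > 0."""
    return tuple('exists' if ui > 0 else 'empty' for ui in u)

def maxent_point_KB2(eps1, grid=100001):
    """Max-entropy point of S^eps[KB2] for KB2 = ||P(x)||_x ~1 0."""
    u1 = np.linspace(0.0, eps1, grid) if eps1 > 0 else np.array([0.0])
    u1c = np.clip(u1, 1e-300, 1.0)
    h = -(u1 * np.log(u1c) + (1 - u1) * np.log(np.clip(1 - u1, 1e-300, 1.0)))
    best = u1[np.argmax(h)]
    return (best, 1.0 - best)

print(sigma(maxent_point_KB2(0.0)))    # ('empty', 'exists'): sigma at tolerance 0
print(sigma(maxent_point_KB2(0.01)))   # ('exists', 'exists'): sigma at tolerance > 0
```

Note how the size description of the maximum-entropy point changes as soon as the tolerance is positive; this is precisely the phenomenon recorded in Example 4.25 below.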
We note that this does not imply that $\sigma$ is necessarily the size description associated with the maximum-entropy point(s) of $S^0[KB]$.
Example 4.25: Consider the knowledge base $KB_2$ in Example 4.21, and recall that $\vec{v} = (0, 1)$ is the maximum-entropy point of $S^0[KB_2]$. The size description $\sigma(\vec{v})$ is $\neg\exists x\, A_1(x) \wedge \exists x\, A_2(x)$. However, the maximum-entropy point of $S^{\vec{\varepsilon}}[KB_2]$ for $\vec{\varepsilon} > 0$ is actually $(\varepsilon_1, 1 - \varepsilon_1)$, so that the appropriate $\sigma$ for such an $\vec{\varepsilon}$ is $\exists x\, A_1(x) \wedge \exists x\, A_2(x)$.
9. We actually conjecture that problems of this sort cannot arise in the context of a maximum-entropy point of $S^{\vec{\varepsilon}}[KB]$ for $\vec{\varepsilon} > 0$. More precisely, for sufficiently small $\vec{\varepsilon}$ and a maximum-entropy point $\vec{v}$ of $S^{\vec{\varepsilon}}[KB]$ with $KB \in \mathcal{L}_1$, we conjecture that $\Pr^{\vec{\varepsilon}}_\infty[O](\sigma(\vec{v})|KB) = 1$, where O is an open set that contains $\vec{v}$ but no other maximum-entropy point of $S^{\vec{\varepsilon}}[KB]$. If this is indeed the case, then the machinery of stability that we are about to introduce is unnecessary, since it holds in all cases that we need it. However, we have been unable to prove this.
As we now show, the restrictions outlined above and in Section 4.1 suffice for our next result on computing degrees of belief. In order to state this result, we need one additional concept. Recall that in Section 4.2 we expressed an essentially propositional formula $\varphi(x)$ as a disjunction of atoms. Since we wish to also consider formulas $\varphi$ using more than one constant and non-unary predicates, we need a richer concept than atoms. This is the motivation behind the definition of complete descriptions.
Definition 4.26: Let Z be some set of variables and constants. A complete description D over $\Phi$ and Z is an unquantified conjunction of formulas such that:
For every predicate $R \in \Phi \cup \{=\}$ of arity r and for every $z_{i_1}, \dots, z_{i_r} \in Z$, D contains exactly one of $R(z_{i_1}, \dots, z_{i_r})$ or $\neg R(z_{i_1}, \dots, z_{i_r})$ as a conjunct." }, { "figure_ref": [], "heading": "D is consistent. 10", "publication_ref": [ "b19", "b19", "b19", "b19" ], "table_ref": [], "text": "Complete descriptions simply extend the role of atoms in the context of essentially propositional formulas to the more general setting. As in the case of atoms, if we fix some arbitrary ordering of the conjuncts in a complete description, then complete descriptions are mutually exclusive and exhaustive. Clearly, a formula whose free variables and constants are contained in Z, and which is quantifier- and proportion-free, is equivalent to some disjunction of complete descriptions over Z. For such a formula $\xi$, let $A(\xi)$ be a set of complete descriptions over Z such that $\xi$ is equivalent to the disjunction $\bigvee_{D \in A(\xi)} D$, where Z is the set of constants and free variables in $\xi$.
For the purposes of the remaining discussion (except within proofs), we are interested only in complete descriptions over an empty set of variables. For a set of constants Z, we can view a description D over Z as describing the different properties of the constants in Z.
In our construction, when considering a KB of the form $\psi \wedge KB'$ which is separable with respect to a query $\varphi$, we define the set Z to contain precisely those constants in $\varphi$ and in $\psi$ (complete descriptions over such a Z are enumerated in the sketch below). 
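As a concrete illustration of Definition 4.26, here is a small enumeration sketch (ours, with an assumed toy vocabulary of one unary predicate P and one binary predicate R over two constants). It generates every assignment of truth values to the atomic formulas over Z and keeps the consistent ones, where consistency amounts to the equality axioms.

```python
from itertools import product

Z = ["c1", "c2"]
UNARY, BINARY = ["P"], ["R"]

# All atomic formulas over Z for Phi together with equality.
atoms = ([(p, z) for p in UNARY for z in Z]
         + [(r, z, w) for r in BINARY for z in Z for w in Z]
         + [("=", z, w) for z in Z for w in Z])

def consistent(d):
    """Equality must be reflexive and symmetric, and equal constants must
    satisfy exactly the same conjuncts (congruence); cf. footnote 10."""
    eq = lambda z, w: d[("=", z, w)]
    if not all(eq(z, z) for z in Z):
        return False
    if any(eq(z, w) != eq(w, z) for z in Z for w in Z):
        return False
    for z in Z:
        for w in Z:
            if eq(z, w):
                if any(d[(p, z)] != d[(p, w)] for p in UNARY):
                    return False
                if any(d[(r, z, v)] != d[(r, w, v)] or d[(r, v, z)] != d[(r, v, w)]
                       for r in BINARY for v in Z):
                    return False
    return True

descriptions = [dict(zip(atoms, bits))
                for bits in product([True, False], repeat=len(atoms))
                if consistent(dict(zip(atoms, bits)))]
print(len(descriptions))  # number of consistent complete descriptions over Z
```

Each surviving dictionary is one complete description D; the mutually-exclusive-and-exhaustive property is immediate from the construction.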
In particular, this means that $KB'$ will mention no constant in Z.
A complete description D over a set of constants Z can be decomposed into three parts: the unary part $D_1$, which consists of those conjuncts of D that involve unary predicates (and thus determines an atom for each of the constant symbols); the equality part $D_=$, which consists of those conjuncts of D involving equality (and thus determines which of the constants are equal to each other); and the non-unary part $D_{>1}$, which consists of those conjuncts of D involving non-unary predicates (and thus determines the non-unary properties, other than equality, of the constants). As we suggested, the unary part of such a complete description D extends the notion of \"atom\" to the case of multiple constants.
For this purpose, we also extend $F[A]$ (for an atom A) and define $F[D]$ for a description D.
Intuitively, we are treating each of the individuals as independent, so that the probability that constant $c_1$ satisfies atom $A_{j_1}$ and that constant $c_2$ satisfies $A_{j_2}$ is just the product of the probability that $c_1$ satisfies $A_{j_1}$ and the probability that $c_2$ satisfies $A_{j_2}$.
Definition 4.27: For a complete description D without variables whose unary part is equivalent to $A_{j_1}(c_1) \wedge \dots \wedge A_{j_m}(c_m)$ (for distinct constants $c_1, \dots, c_m$) and for a point $\vec{u} \in \Delta_K$, we define

$$F[D](\vec{u}) = \prod_{\ell=1}^{m} u_{j_\ell}.$$

Note that $F[D]$ depends only on $D_1$, the unary part of D.
10. Inconsistency is possible because of the use of equality. For example, if D includes $z_1 = z_2$ as well as both $R(z_1, z_3)$ and $\neg R(z_2, z_3)$, it is inconsistent.
As we mentioned, we can extend our approach to deal with formulas $\varphi$ that also use non-unary predicate symbols. Our computational procedure for such formulas uses the maximum-entropy approach described above combined with the techniques of (Grove et al., 1993b). These latter were used in (Grove et al., 1993b) to compute asymptotic conditional probabilities when conditioning on a first-order knowledge base $KB_{fo}$. The basic idea in that case is as follows: To compute $\Pr_\infty(\varphi|KB_{fo})$, we examine the behavior of $\varphi$ in finite models of $KB_{fo}$. We partition the models of $KB_{fo}$ into a finite collection of classes such that $\varphi$ behaves uniformly in each individual class. By this we mean that almost all worlds in the class satisfy $\varphi$ or almost none do; i.e., there is a 0-1 law for the asymptotic probability of $\varphi$ when we restrict attention to models in a single class. In order to compute $\Pr_\infty(\varphi|KB_{fo})$ we therefore identify the classes, compute the relative weight of each class (which is required because the classes are not necessarily of equal relative size), and then decide for each class whether the asymptotic probability of $\varphi$ is zero or one.
It turns out that much the same ideas continue to work in this framework. In this case, the classes are defined using complete descriptions and the appropriate size description $\sigma$. The main difference is that, rather than examining all worlds consistent with the knowledge base, we now concentrate on those worlds in the vicinity of the maximum-entropy points, as outlined in the previous section. It turns out that the restriction to these worlds affects very few aspects of this computational procedure. In fact, the only difference is in computing the relative weight of the different classes, which is obtained from $F[D]$ as in the sketch below. 
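A minimal sketch of $F[D]$ and of the class weighting it induces (ours; the dictionaries mapping constants to atom indices stand in for the unary parts $D_1$):

```python
def F_D(u, unary_part):
    """F[D](u) from Definition 4.27: the product, over the constants in Z,
    of the coordinate u[j] for the atom A_j that D_1 assigns to each constant."""
    prod = 1.0
    for atom_index in unary_part.values():
        prod *= u[atom_index]
    return prod

def class_weights(u, unary_parts):
    """Relative weight of each description's class, i.e. F[D](u) normalized
    over all descriptions considered (anticipating Theorem 4.28 below)."""
    ws = [F_D(u, d) for d in unary_parts]
    total = sum(ws)
    return [w / total for w in ws]

# Toy usage: two atoms, max-entropy point u = (0.3, 0.7), two constants.
u = (0.3, 0.7)
D1 = {"c1": 0, "c2": 0}   # both constants in atom A_1
D2 = {"c1": 0, "c2": 1}   # c1 in A_1, c2 in A_2
print(class_weights(u, [D1, D2]))  # [0.09, 0.21] normalized -> [0.3, 0.7]
```

The weights are exactly the coefficients that appear in the numerator and denominator of the theorem stated next.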
This last step can be done using maximum entropy, using the tools described in Section 4.2.
Theorem 4.28: Let $\varphi$ be a formula in $\mathcal{L}$ and let $KB = \psi \wedge KB'$ be an essentially positive knowledge base in $\mathcal{L}_1$ which is separable with respect to $\varphi$. Let Z be the set of constants appearing in $\varphi$ or in $\psi$ (so that $KB'$ contains none of the constants in Z) and let $\theta_{\neq}$ be the formula $\bigwedge_{c, c' \in Z} c \neq c'$ (over distinct constant symbols c and c'). Assume that there exists a size description $\sigma$ such that, for all $\vec{\varepsilon} > 0$, KB and $\vec{\varepsilon}$ are stable for $\sigma$, and that the space $S^0[KB]$ has a unique maximum-entropy point $\vec{v}$. Then

$$\Pr_\infty(\varphi|KB) = \frac{\sum_{D \in A(\psi \wedge \theta_{\neq})} \Pr_\infty(\varphi \mid \sigma \wedge D)\, F[D](\vec{v})}{\sum_{D \in A(\psi \wedge \theta_{\neq})} F[D](\vec{v})}$$

if the denominator is positive.
Since both $\varphi$ and $\sigma \wedge D$ are first-order formulas and $\sigma \wedge D$ is precisely of the required form in (Grove et al., 1993b), $\Pr_\infty(\varphi \mid \sigma \wedge D)$ is either 0 or 1, and we can use the algorithm of (Grove et al., 1993b) to compute this limit, in the time bounds outlined there.
One corollary of the above is that the formula $\theta_{\neq}$ holds with probability 1 given any knowledge base KB of the form we are interested in. This corresponds to a default assumption of unique names, a property often considered to be desirable in inductive reasoning systems.
While this theorem does represent a significant generalization of Theorem 4.11, it still has numerous restrictions. There is no question that some of these can be loosened to some extent, although we have not been able to find a clean set of conditions significantly more general than the ones that we have stated. We leave it as an open problem whether such a set of conditions exists. Of course, the most significant restriction we have made is that of allowing only unary predicates in the KB. This issue is the subject of the next section." }, { "figure_ref": [], "heading": "Beyond unary predicates", "publication_ref": [ "b18", "b18" ], "table_ref": [], "text": "The random-worlds method makes complete sense for the full language $\mathcal{L}$ (and, indeed, for even richer languages). On the other hand, our application of maximum entropy is limited to unary knowledge bases. Is this restriction essential? While we do not have a theorem to this effect (indeed, it is not even clear what the wording of such a theorem would be), we conjecture that it is.
Certainly none of the techniques we have used in this paper can be generalized significantly. One difficulty is that, once we have a binary or higher-arity predicate, we see no analogue to the notion of atoms and no canonical form theorem. In Section 3.2 and in the proof of Theorem 3.5, we discuss why it becomes impossible to get rid of nested quantifiers and proportions when we have non-unary predicates. Even considering matters on a more intuitive level, the problems seem formidable. In a unary language, atoms are useful because they are simple descriptions that summarize everything that might be known about a domain element in a model. But consider a language with a single binary predicate $R(x, y)$. Worlds over this language include all finite graphs (where we think of $R(x, y)$ as holding if there is an edge from x to y). In this language, there are infinitely many properties that may be true or false about a domain element. For example, the assertions \"the node x has m neighbors\" are expressible in the language for each m. Thus, in order to partition the domain elements according to the properties they satisfy, we would need to define infinitely many partitions. 
Furthermore, it can be shown that \"typically\" (i.e., in almost all graphs of sufficiently great size) each node satisfies a different set of first-order properties. Thus, in most graphs, all the nodes are \"different\", so a partition of domain elements into a finite number of \"atoms\" makes little sense. It is very hard to see how the basic proof strategy we have used, of summarizing a model by listing the number of elements with various properties, can possibly be useful here.
The difficulty of finding an analogue to entropy in the presence of higher-arity predicates is supported by results from (Grove et al., 1993a). In this paper we have shown that maximum entropy can be a useful tool for computing degrees of belief in certain cases, if the KB involves only unary predicates. In (Grove et al., 1993a) we show that there can be no general computational technique to compute degrees of belief once we have non-unary predicate symbols in the KB. The problem of finding degrees of belief in this case is highly undecidable. This result was proven without statistical assertions in the language, and in fact holds for quite weak sublanguages of first-order logic. (For instance, in a language without equality and with only depth-two quantifier nesting.) So even if there is some generalized version of maximum entropy, it will either be extremely restricted in application or will be useless as a computational tool." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b32", "b38", "b13", "b22", "b1", "b33", "b14", "b6", "b5", "b2", "b2", "b2", "b40", "b32", "b19", "b19", "b19", "b8", "b19" ], "table_ref": [], "text": "This paper has had two major thrusts. The first is to establish a connection between maximum entropy and the random-worlds approach for a significant fragment of our language, one far richer than that considered by Paris and Vencovska (1989) or Shastri (1989). The second is to suggest that such a result is unlikely to obtain for the full language.
The fact that we have a connection between maximum entropy and random worlds is significant. For one thing, it allows us to utilize all the tools that have been developed for computing maximum entropy efficiently (see (Goldman, 1987) and the further references therein), and may thus lead to efficient algorithms for computing degrees of belief for a large class of knowledge bases. In addition, maximum entropy is known to have many attractive properties (Jaynes, 1978). Our result shows these properties are shared by the random-worlds approach in the domain where these two approaches agree. Indeed, as shown in (Bacchus et al., 1994), the random-worlds approach has many of these properties for the full (non-unary) language.
On the other hand, a number of properties of maximum entropy, such as its dependence on the choice of language and its inability to handle causal reasoning appropriately, have been severely criticized (Pearl, 1988; Goldszmidt et al., 1990). Not surprisingly, these criticisms apply to random worlds as well. A discussion of these criticisms, and whether they really should be viewed as shortcomings of the random-worlds method, is beyond the scope of this paper; the interested reader should consult (Bacchus et al., 1994, Section 7) for a more thorough discussion of these issues and additional references.
We believe that our observations regarding the limits of the connection between the random-worlds method and maximum entropy are also significant. The question of how widely maximum entropy applies is quite important. 
Maximum entropy has been gaining prominence as a means of dealing with uncertainty both in AI and other areas. However, the difficulties of using the method once we move to non-unary predicates seem not to have been fully appreciated. In retrospect, this is not that hard to explain; in almost all applications where maximum entropy has been used (and where its application can be best justified in terms of the random-worlds method) the knowledge base is described in terms of unary predicates (or, equivalently, unary functions with a finite range). For example, in physics applications we are interested in such predicates as quantum state (see (Denbigh & Denbigh, 1985)). Similarly, AI applications and expert systems typically use only unary predicates such as symptoms and diseases (Cheeseman, 1983). We suspect that this is not an accident, and that deep problems will arise in more general cases. This poses a challenge to proponents of maximum entropy since, even if one accepts the maximum-entropy principle, the discussion above suggests that it may simply be inapplicable in a large class of interesting examples.
Proof: We show how to effectively transform $\psi \in \mathcal{L}_1^{=}$ to an equivalent formula in canonical form. We first rename variables if necessary, so that all variables used in $\psi$ are distinct (i.e., no two quantifiers, including proportion expressions, ever bind the same variable symbol).
We next transform $\psi$ into an equivalent flat formula $\psi_f \in \mathcal{L}_1$, where a flat formula is one where no quantifiers (including proportion quantifiers) have within their scope a constant or variable other than the variable(s) the quantifier itself binds. (Note that in this transformation we do not require that $\psi$ be closed. Also, observe that flatness implies that there are no nested quantifiers.)
We define the transformation by induction on the structure of $\psi$. There are three easy steps:
- If $\psi$ is an unquantified formula, then $\psi_f = \psi$.
- $(\psi' \vee \psi'')_f = \psi'_f \vee \psi''_f$.
- $(\neg\psi')_f = \neg(\psi'_f)$.
All that remains is to consider quantified formulas of the form $\exists x\, \psi'$, $\|\psi'\|_{\vec{x}}$, or $\|\psi'|\psi''\|_{\vec{x}}$. It turns out that the same transformation works in all three cases. We illustrate the transformation by looking at the case where $\psi$ is of the form $\|\psi'\|_{\vec{x}}$. By the inductive hypothesis, we can assume that $\psi'$ is flat. For the purposes of this proof, we define a basic formula to be an atomic formula (i.e., one of the form $P(z)$), a proportion formula, or a quantified formula (i.e., one of the form $\exists x\, \xi$). Let $\xi_1, \dots, \xi_\ell$ be all basic subformulas of $\psi'$ that do not mention any variable in $\vec{x}$. Let z be a variable or constant symbol not in $\vec{x}$ that is mentioned in $\psi'$. Clearly z must occur in some basic subformula of $\psi'$, say $\xi'$. By the inductive hypothesis, it is easy to see that $\xi'$ cannot mention any variable in $\vec{x}$ and so, by construction, it is in $\{\xi_1, \dots, \xi_\ell\}$. In other words, not only do $\xi_1, \dots, \xi_\ell$ not mention any variable in $\vec{x}$, but they also contain all occurrences of the other variables and constants. (Notice that this argument fails if the language contains any higher-arity predicates, including equality. For then $\psi'$ might include subformulas of the form $R(x, y)$ or $x = y$, which can mix variables outside $\vec{x}$ with those in $\vec{x}$.)
Now, let $B_1, \dots, B_{2^\ell}$ be all the \"atoms\" over $\xi_1, \dots, \xi_\ell$. That is, we consider all formulas $\xi'_1 \wedge \dots \wedge \xi'_\ell$ where $\xi'_i$ is either $\xi_i$ or $\neg\xi_i$. Now consider the disjunction:

$$\bigvee_{i=1}^{2^\ell} (B_i \wedge \|\psi'\|_{\vec{x}}).$$

This is surely equivalent to $\|\psi'\|_{\vec{x}}$, because some $B_i$ must be true. 
However, if we assume that a particular $B_i$ is true, we can simplify $\|\psi'\|_{\vec{x}}$ by replacing all the $\xi_i$ subformulas by true or false, according to $B_i$. (Note that this is allowed only because the $\xi_i$ do not mention any variable in $\vec{x}$.) The result is that we can simplify each disjunct $(B_i \wedge \|\psi'\|_{\vec{x}})$ considerably.
In fact, because of our previous observation about $\xi_1, \dots, \xi_\ell$, there will be no constants or variables outside $\vec{x}$ left within the proportion quantifier. This completes this step of the induction. Since the other quantifiers can be treated similarly, this proves the flatness result.
Even though KB mentions only unary predicates, if there are any non-unary predicates in the vocabulary we must choose a denotation for them.
Suppose $\vec{u} = (u_1, \dots, u_K)$, and let $N_i = u_i N$ for $i = 1, \dots, K$. The number of partitions of the domain into atoms is $\binom{N}{N_1, \dots, N_K}$; each such partition completely determines the denotation of the unary predicates. We must also specify the denotations of the constant symbols. There are at most $N^{|C|}$ ways of choosing these. On the other hand, we know there is at least one model $(W, \vec{\varepsilon})$ of KB whose vector of atom proportions is $\vec{u}$, so there is at least one choice. In fact, there is at least one world $W' \in \mathcal{W}_N$ such that $(W', \vec{\varepsilon}) \models KB$ for each of the $\binom{N}{N_1, \dots, N_K}$ ways of partitioning the elements of the domain (and each such world $W'$ is isomorphic to W). Finally, we must choose the denotation of the non-unary predicates. However, $\vec{u}$ does not constrain this choice and, by assumption, neither does KB. Therefore the number of such choices is some function $h(N)$ which is independent of $\vec{u}$. 11 We conclude that the count is governed by the multinomial coefficient, which we now bound. Recall that there are constants L and U such that

$$L\, m^m e^{-m} \le m! \le U\, m\, m^m e^{-m}$$

for all m. Using these bounds, as well as the fact that $N_i \le N$, we get:

$$\frac{L}{U^K} \cdot \frac{1}{N^K} \cdot \frac{N^N \prod_{i=1}^K e^{N_i}}{e^N \prod_{i=1}^K N_i^{N_i}} \;\le\; \frac{N!}{N_1!\, N_2! \cdots N_K!} \;\le\; \frac{U N}{L^K} \cdot \frac{N^N \prod_{i=1}^K e^{N_i}}{e^N \prod_{i=1}^K N_i^{N_i}}.$$

Now, consider the expression common to both bounds:

$$\frac{N^N \prod_{i=1}^K e^{N_i}}{e^N \prod_{i=1}^K N_i^{N_i}} = \frac{N^N}{\prod_{i=1}^K N_i^{N_i}} = \prod_{i=1}^K \left(\frac{N}{N_i}\right)^{N_i} = \prod_{i=1}^K e^{N_i \ln(N/N_i)} = e^{-N \sum_{i=1}^K u_i \ln(u_i)} = e^{N H(\vec{u})}.$$

11. It is easy to verify that in fact $h(N) = \prod_{R \in \Phi - \Phi_1} 2^{N^{arity(R)}}$, where $\Phi_1$ is the unary fragment of $\Phi$ and $arity(R)$ denotes the arity of the predicate symbol R.
$A_i$. Note that, by Definition 3.6, if $\xi_j$ has such a conjunct then $u_i > 0$. If $\xi_j$ contains no atomic conjunct mentioning the constant c, then we make c satisfy $A_i$ for some arbitrary atom with $u_i > 0$. It should now be clear that $(W, \vec{\varepsilon})$ satisfies $\xi_j$, and so satisfies KB. Note that in this construction it is important that we started with $\bar{w}$ in $Sol[\sigma(KB[\vec{\varepsilon}])]$, rather than just in the closure space $S^{\vec{\varepsilon}}[KB]$; otherwise, the point would not necessarily satisfy $\sigma(KB[\vec{\varepsilon}])$.
We now consider condition (i). This is surprisingly difficult to prove; the proof involves techniques from algebraic geometry. Our job would be relatively easy if $Sol[\sigma(KB[\vec{\varepsilon}])]$ were an open set. Unfortunately, it is not. On the other hand, it would behave essentially like an open set if we could replace the occurrences of $\le$ in $\sigma(KB[\vec{\varepsilon}])$ by $<$. It turns out that, for our purposes here, this replacement is possible.
Let $\sigma_<(KB[\vec{\varepsilon}])$ be the same as $\sigma(KB[\vec{\varepsilon}])$ except that every (unnegated) conjunct of the form $t \le \varepsilon_i t'$ is replaced by $t < \varepsilon_i t'$. (Notice that this is essentially the opposite transformation to the one used when defining essential positivity in Definition 4.4.) Finally, let $S^{<\vec{\varepsilon}}[KB]$ be the closure of $Sol[\sigma_<(KB[\vec{\varepsilon}])]$. It turns out that, for all sufficiently small $\vec{\varepsilon}$, $S^{<\vec{\varepsilon}}[KB] = S^{\vec{\varepsilon}}[KB]$. This result, which we label as Lemma B.5, will be stated and proved later. 
For now we use the lemma to continue the proof of the main result.
Consider some $\vec{u} \in S^{\vec{\varepsilon}}[KB]$. It suffices to show that for all $\delta > 0$ there exists $N_0$ such that for all $N > N_0$, there exists a point $\vec{u}_N \in Sol[\sigma_<(KB[\vec{\varepsilon}])]$ such that all the coordinates of $\vec{u}_N$ are integer multiples of $1/N$ and such that $|\vec{u} - \vec{u}_N| < \delta$. (For then we can take smaller and smaller $\delta$'s to create a sequence $\vec{u}_N$ converging to $\vec{u}$.) Hence, let $\delta > 0$. By Lemma B.5, we can find some $\vec{u}' \in Sol[\sigma_<(KB[\vec{\varepsilon}])]$ such that $|\vec{u} - \vec{u}'| < \delta/2$. By definition, every conjunct in $\sigma_<(KB[\vec{\varepsilon}])$ is of the form $q'(\bar{w}) = 0$, $q'(\bar{w}) > 0$, $q(\bar{w}) < \varepsilon_i q'(\bar{w})$, or $q(\bar{w}) > \varepsilon_i q'(\bar{w})$, where $q'$ is a positive polynomial. Ignore for the moment the constraints of the form $q'(\bar{w}) = 0$, and consider the remaining constraints that $\vec{u}'$ satisfies. These constraints all involve strict inequalities, and the functions involved (q and $q'$) are continuous. Thus, there exists some $\delta' > 0$ such that for all $\bar{w}$ for which $|\vec{u}' - \bar{w}| < \delta'$, these constraints are also satisfied by $\bar{w}$. Now consider a conjunct of the form $q'(\bar{w}) = 0$ that is satisfied by $\vec{u}'$. Since $q'$ is positive, this happens if and only if the following condition holds: for every coordinate $w_i$ that actually appears in $q'$, we have $u'_i = 0$. In particular, if $\bar{w}$ and $\vec{u}'$ have the same coordinates with value 0, then $q'(\bar{w}) = 0$. It follows that for all $\bar{w}$, if $|\vec{u}' - \bar{w}| < \delta'$ and $\vec{u}'$ and $\bar{w}$ have the same coordinates with value 0, then $\bar{w}$ also satisfies $\sigma_<(KB[\vec{\varepsilon}])$.
We now construct $\vec{u}_N$ so that it satisfies the requirements. Let $i^*$ be the index of that component of $\vec{u}'$ with the largest value. We define $\vec{u}_N$ by considering each of its components $u^N_i$, for $1 \le i \le K$:

$$u^N_i = \begin{cases} 0 & \text{if } u'_i = 0 \\ \lceil N u'_i \rceil / N & \text{if } i \neq i^* \text{ and } u'_i > 0 \\ u'_{i^*} - \sum_{j \neq i^*} (u^N_j - u'_j) & \text{if } i = i^*. \end{cases}$$

It is easy to verify that the components of $\vec{u}_N$ sum to 1. All the components in $\vec{u}'$, other than the $i^*$'th, are increased by at most $1/N$. The component $u^N_{i^*}$ is decreased by at most $K/N$. We will show that $\vec{u}_N$ has the right properties for all $N > N_0$, where $N_0$ is such that $1/N_0 < \min(u'_{i^*}, \delta/2, \delta')/2K$. The fact that $K/N_0 < u'_{i^*}$ guarantees that $\vec{u}_N$ is in $\Delta_K$ for all $N > N_0$. The fact that $2K/N_0 < \delta/2$ guarantees that $\vec{u}_N$ is within $\delta/2$ of $\vec{u}'$, and hence within $\delta$ of $\vec{u}$. Since $2K/N_0 < \delta'$, it follows that $|\vec{u}' - \vec{u}_N| < \delta'$. Since $\vec{u}_N$ is constructed to have exactly the same 0 coordinates as $\vec{u}'$, we conclude that $\vec{u}_N \in Sol[\sigma_<(KB[\vec{\varepsilon}])]$, as required. Condition (i), and hence the entire theorem, now follows.
It now remains to prove Lemma B.5, which was used in the proof just given. As we hinted earlier, this requires tools from algebraic geometry. We base our definitions on the presentation in (Bochnak, Coste, & Roy, 1987). A subset A of $\mathbb{R}^\ell$ is said to be semi-algebraic if it is definable in the language of real-closed fields. That is, A is semi-algebraic if there is a first-order formula $\varphi(x_1, \dots, x_\ell)$ whose free variables are $x_1, \dots, x_\ell$ and whose only nonlogical symbols are $0, 1, +, \times, <$ and $=$, such that $\mathbb{R} \models \varphi(u_1, \dots, u_\ell)$ iff $(u_1, \dots, u_\ell) \in A$. 12 A function $f : X \to Y$, where $X \subseteq \mathbb{R}^h$ and $Y \subseteq \mathbb{R}^\ell$, is said to be semi-algebraic if its graph (i.e., $\{(\vec{u}, \bar{w}) : f(\vec{u}) = \bar{w}\}$) is semi-algebraic. The main tool we use is the following Curve Selection Lemma (see (Bochnak et al., 1987, p. 34)):
Lemma B.3: Suppose that A is a semi-algebraic set in $\mathbb{R}^\ell$ and $\vec{u} \in \bar{A}$ (the closure of A). Then there exists a continuous, semi-algebraic function $f : [0, 1] \to$ 
$\mathbb{R}^\ell$ such that $f(0) = \vec{u}$ and $f(t) \in A$ for all $t \in (0, 1]$.
Our first use of the Curve Selection Lemma is in the following, which says that, in a certain sense, semi-algebraic functions behave \"nicely\" near limits. The type of phenomenon we wish to avoid is illustrated by $x \sin\frac{1}{x}$, which is continuous at 0 but has infinitely many local maxima and minima near 0.
Proposition B.4: Suppose that $g : [0, 1] \to \mathbb{R}$ is a continuous, semi-algebraic function such that $g(u) > 0$ if $u > 0$ and $g(0) = 0$. Then there exists some $\alpha > 0$ such that g is strictly increasing in the interval $[0, \alpha]$.
Proof: Suppose, by way of contradiction, that g satisfies the hypotheses of the proposition but there is no $\alpha$ such that g is increasing in the interval $[0, \alpha]$. We define a point u in $[0, 1]$ to be bad if for some $u' \in [0, u)$ we have $g(u') \ge g(u)$. Let A be the set of all the bad points. Since g is semi-algebraic, so is A, since $u \in A$ iff

$$\exists u' \,((0 \le u' < u) \wedge (g(u) \le g(u'))).$$

Since, by assumption, g is not increasing in any interval $[0, \alpha]$, we can find bad points arbitrarily close to 0, and so $0 \in \bar{A}$. By the Curve Selection Lemma, there is a continuous semi-algebraic curve $f : [0, 1] \to \mathbb{R}$ such that $f(0) = 0$ and $f(t) \in A$ for all $t \in (0, 1]$. Because of the continuity of f, the range of f, i.e., $f([0, 1])$, is $[0, r]$ for some $r \in [0, 1]$. By the definition of f, $(0, r] \subseteq A$. Since $0 \notin A$, it follows that $f(1) \neq 0$; therefore $r > 0$ and so, by assumption, $g(r) > 0$. Since g is a continuous function, it achieves a maximum $v > 0$ over the range $[0, r]$. Consider the minimum point in the interval where this maximum is achieved. More precisely, let u be the infimum of the set $\{u' \in [0, r] : g(u') = v\}$; clearly, $g(u) = v$. Since $v > 0$ we obtain that $u > 0$ and therefore $u \in A$. Thus, u is bad. But that means that there is a point $u' < u$ for which $g(u') \ge g(u)$, which contradicts the choice of v and u.
We can now prove Lemma B.5. Recall, the result we need is as follows.
12. In (Bochnak et al., 1987), a set is taken to be semi-algebraic if it is definable by a quantifier-free formula in the language of real closed fields. However, as observed in (Bochnak et al., 1987), since the theory of real closed fields admits elimination of quantifiers (Tarski, 1951), the two definitions are equivalent.
Lemma B.5: For all sufficiently small $\vec{\varepsilon}$, $S^{<\vec{\varepsilon}}[KB] = S^{\vec{\varepsilon}}[KB]$.
Proof: Clearly $S^{<\vec{\varepsilon}}[KB] \subseteq S^{\vec{\varepsilon}}[KB]$. To prove the reverse inclusion we consider $\widehat{KB}$, a canonical-form equivalent of KB. We consider each disjunct of $\widehat{KB}$ separately. Let $\xi$ be a conjunction that is one of the disjuncts in $\widehat{KB}$. It clearly suffices to show that $Sol[\sigma(\xi[\vec{\varepsilon}])] \subseteq S^{<\vec{\varepsilon}}[\xi]$, the closure of $Sol[\sigma_<(\xi[\vec{\varepsilon}])]$. Assume, by way of contradiction, that for arbitrarily small $\vec{\varepsilon}$, there exists some $\vec{u} \in Sol[\sigma(\xi[\vec{\varepsilon}])]$ which is \"separated\" from the set $Sol[\sigma_<(\xi[\vec{\varepsilon}])]$, i.e., is not in its closure. More formally, we say that $\vec{u}$ is $\gamma$-separated from $Sol[\sigma_<(\xi[\vec{\varepsilon}])]$ if there is no $\vec{u}' \in Sol[\sigma_<(\xi[\vec{\varepsilon}])]$ such that $|\vec{u} - \vec{u}'| < \gamma$.
We now consider those $\vec{\varepsilon}$ and those points in $Sol[\sigma(\xi[\vec{\varepsilon}])]$ that are separated from $Sol[\sigma_<(\xi[\vec{\varepsilon}])]$:

$$A = \{(\vec{\varepsilon}, \vec{u}, \gamma) : \vec{\varepsilon} > 0,\ \gamma > 0,\ \vec{u} \in Sol[\sigma(\xi[\vec{\varepsilon}])] \text{ is } \gamma\text{-separated from } Sol[\sigma_<(\xi[\vec{\varepsilon}])]\}.$$

Clearly A is semi-algebraic. By assumption, there are points in A for arbitrarily small tolerance vectors $\vec{\varepsilon}$. Since A is a bounded subset of $\mathbb{R}^{m+K+1}$ (where m is the number of tolerance values in $\vec{\varepsilon}$), we can use the Bolzano{Weierstrass Theorem to conclude that this set of points has an accumulation point whose first component is $\vec{0}$. Thus, there is a point $(\vec{0}, \bar{w}, \gamma_0)$ in $\bar{A}$. By the Curve Selection Lemma, there is a continuous semi-algebraic function $f : [0, 1] \to$ 
$\mathbb{R}^{m+K+1}$ such that $f(0) = (\vec{0}, \bar{w}, \gamma_0)$ and $f(t) \in A$ for $t \in (0, 1]$. Since f is semi-algebraic, it is semi-algebraic in each of its coordinates. By Proposition B.4, there is some $v > 0$ such that f is strictly increasing in each of its first m coordinates over the domain $[0, v]$. Suppose that $f(v) = (\vec{\varepsilon}, \vec{u}, \gamma)$. Now, consider the constraints in $\sigma(\xi[\vec{\varepsilon}])$ that have the form $q(\bar{w}) > \varepsilon_j q'(\bar{w})$. These constraints are all satisfied by $\vec{u}$ and they all involve strong inequalities. By the continuity of the polynomials q and $q'$, there exists some $\beta > 0$ such that, for all $\vec{u}'$ such that $|\vec{u} - \vec{u}'| < \beta$, $\vec{u}'$ also satisfies these constraints. Now, by the continuity of f, there exists a point $v' \in (0, v)$ sufficiently close to v such that if $f(v') = (\vec{\varepsilon}', \vec{u}', \gamma')$, then $|\vec{u} - \vec{u}'| < \min(\beta, \gamma)$. Since $f(v) = (\vec{\varepsilon}, \vec{u}, \gamma) \in A$ and $|\vec{u} - \vec{u}'| < \gamma$, it follows that $\vec{u}' \notin Sol[\sigma_<(\xi[\vec{\varepsilon}])]$. We conclude the proof by showing that this is impossible. That is, we show that $\vec{u}' \in Sol[\sigma_<(\xi[\vec{\varepsilon}])]$. The constraints appearing in $\sigma_<(\xi[\vec{\varepsilon}])$ can be of the following forms: $q'(\bar{w}) = 0$, $q'(\bar{w}) > 0$, $q(\bar{w}) < \varepsilon_j q'(\bar{w})$, or $q(\bar{w}) > \varepsilon_j q'(\bar{w})$, where $q'$ is a positive polynomial. Since $f(v') \in A$, we know that $\vec{u}' \in Sol[\sigma(\xi[\vec{\varepsilon}'])]$. The constraints of the form $q'(\bar{w}) = 0$ and $q'(\bar{w}) > 0$ are identical in $\sigma(\xi[\vec{\varepsilon}'])$ and in $\sigma_<(\xi[\vec{\varepsilon}])$, and are therefore satisfied by $\vec{u}'$. Since $|\vec{u}' - \vec{u}| < \beta$, our discussion in the previous paragraph implies that the constraints of the form $q(\bar{w}) > \varepsilon_j q'(\bar{w})$ are also satisfied by $\vec{u}'$. Finally, consider a constraint of the form $q(\bar{w}) < \varepsilon_j q'(\bar{w})$. The corresponding constraint in $\sigma(\xi[\vec{\varepsilon}'])$ is $q(\bar{w}) \le \varepsilon'_j q'(\bar{w})$. Since $\vec{u}'$ satisfies this latter constraint, we know that $q(\vec{u}') \le \varepsilon'_j q'(\vec{u}')$. But now, recall that we proved that f is increasing over $[0, v]$ in the first m coordinates.
In particular, $\varepsilon'_j < \varepsilon_j$. By the definition of canonical form, $q'(\vec{u}') > 0$, so that we conclude $q(\vec{u}') \le \varepsilon'_j q'(\vec{u}') < \varepsilon_j q'(\vec{u}')$. Hence the constraints of this type are also satisfied by $\vec{u}'$. This concludes the proof that $\vec{u}' \in Sol[\sigma_<(KB[\vec{\varepsilon}])]$, thus deriving a contradiction and proving the result. We are finally ready to prove Theorem 3.13.
Theorem 3.13: For all sufficiently small $\vec{\varepsilon}$, the following is true. Let Q be the set of points with greatest entropy in $S^{\vec{\varepsilon}}[KB]$ and let $O \subseteq \mathbb{R}^K$ be any open set containing Q. Then for all $\theta \in \mathcal{L}$ and for $\lim \in \{\limsup, \liminf\}$ we have

$$\lim_{N \to \infty} \Pr{}^{\vec{\varepsilon}}_N(\theta \mid KB) = \lim_{N \to \infty} \Pr{}^{\vec{\varepsilon}}_N[O](\theta \mid KB).$$

Proof: Let $\vec{\varepsilon}$ be small enough so that Theorem B.2 applies and let Q and O be as in the statement of the theorem. It clearly suffices to show that the set O contains almost all of the worlds that satisfy KB. More precisely, the fraction of such worlds that are in O tends to 1 as $N \to \infty$.
Let $\beta$ be the entropy of the points in Q. We begin the proof by showing the existence of $\beta_L < \beta_U$ ($< \beta$) such that (for sufficiently large N) (a) every point $\vec{u} \in \Delta^{\vec{\varepsilon}}_N[KB]$ where $\vec{u} \notin O$ has entropy at most $\beta_L$, and (b) there is at least one point $\vec{u} \in \Delta^{\vec{\varepsilon}}_N[KB]$ with $\vec{u} \in O$ and entropy at least $\beta_U$.
For part (a), consider the space $S^{\vec{\varepsilon}}[KB] - O$. Since this space is closed, the entropy function takes on a maximum value in this space; let this be $\beta_L$. Since this space does not include any point with entropy $\beta$ (these are all in $Q \subseteq O$), we must have $\beta_L < \beta$. By Theorem B.2, $\Delta^{\vec{\varepsilon}}_N[KB] \subseteq S^{\vec{\varepsilon}}[KB]$. Therefore, for any N, the entropy of any point in $\Delta^{\vec{\varepsilon}}_N[KB] - O$ is at most $\beta_L$.
For part (b), let $\beta_U$ be some value in the interval $(\beta_L, \beta)$ (for example $(\beta_L + \beta)/2$) and let $\vec{v}$ be any point in Q. By the continuity of the entropy function, there exists some $\delta > 0$ such that for all $\vec{u}$ with $|\vec{u} - \vec{v}| < \delta$, we have $H(\vec{u}) \ge \beta_U$. Because O is open we can, by considering a smaller $\delta$ if necessary, assume that $|\vec{u} - \vec{v}| < \delta$ implies $\vec{u} \in O$. 
By the second part of Theorem B.2, there is a sequence of points $\vec{u}_N \in \Delta^{\vec{\varepsilon}}_N[KB]$ such that $\lim_{N \to \infty} \vec{u}_N = \vec{v}$. In particular, for N large enough we have $|\vec{u}_N - \vec{v}| < \delta$, so that $H(\vec{u}_N) \ge \beta_U$, proving part (b).
To complete the proof, we use Lemma 3.11 to conclude that for all N,

$$\#worlds^{\vec{\varepsilon}}_N(KB) \ge \#worlds^{\vec{\varepsilon}}_N[\vec{u}_N](KB) \ge (h(N)/f(N))\, e^{N H(\vec{u}_N)} \ge (h(N)/f(N))\, e^{N \beta_U}.$$

On the other hand,

$$\#worlds^{\vec{\varepsilon}}_N[\Delta_K - O](KB) \le \sum_{\vec{u} \in \Delta^{\vec{\varepsilon}}_N[KB] - O} \#worlds^{\vec{\varepsilon}}_N[\vec{u}](KB) \le |\{\bar{w} \in \Delta^{\vec{\varepsilon}}_N[KB] : \bar{w} \notin O\}| \cdot h(N) g(N) e^{N \beta_L} \le (N+1)^K h(N) g(N) e^{N \beta_L}.$$

Therefore the fraction of models of KB which are outside O is at most

$$\frac{(N+1)^K h(N) f(N) g(N) e^{N \beta_L}}{h(N) e^{N \beta_U}} = (N+1)^K f(N) g(N)\, e^{-N(\beta_U - \beta_L)}.$$

Since $(N+1)^K f(N) g(N)$ is a polynomial in N, this fraction tends to 0 as N grows large.
The result follows.
This contradiction proves that our assumption was false, so that the conclusion of the proposition necessarily holds.
Theorem 4.9: Suppose $\varphi(c)$ is a simple query for KB. For all $\vec{\varepsilon}$ sufficiently small, if Q is the set of maximum-entropy points in $S^{\vec{\varepsilon}}[KB]$ and $F[\psi](\vec{v}) > 0$ for all $\vec{v} \in Q$, then for $\lim \in \{\limsup, \liminf\}$ we have

$$\lim_{N \to \infty} \Pr{}^{\vec{\varepsilon}}_N(\varphi(c) \mid KB) \in \left[\inf_{\vec{v} \in Q} F[\varphi|\psi](\vec{v}),\ \sup_{\vec{v} \in Q} F[\varphi|\psi](\vec{v})\right].$$

If $F[\psi](\vec{u}) > 0$, then by the same reasoning we conclude that the value of $\|\varphi(x)|\psi(x)\|_x$ at W is equal to $F[\varphi|\psi](\vec{u})$. Now, let $\alpha_L$ and $\alpha_R$ be $\inf_{\vec{v} \in Q} F[\varphi|\psi](\vec{v})$ and $\sup_{\vec{v} \in Q} F[\varphi|\psi](\vec{v})$, respectively; by our assumption, $F[\varphi|\psi](\vec{v})$ is well-defined for all $\vec{v} \in Q$. Since the denominator is not 0, $F[\varphi|\psi]$ is a continuous function at each maximum-entropy point. Thus, since $F[\varphi|\psi](\vec{v}) \in [\alpha_L, \alpha_R]$ for all maximum-entropy points, the value of $F[\varphi|\psi](\vec{u})$ for $\vec{u}$ \"close\" to some $\vec{v} \in Q$ will either be in the range $[\alpha_L, \alpha_R]$ or very close to it. More precisely, choose any $\delta > 0$, and define $\theta[\delta]$ to be the formula $\|\varphi(x)|\psi(x)\|_x \in [\alpha_L - \delta, \alpha_R + \delta]$. Since $\delta > 0$, it is clear that there is some sufficiently small open set O around Q such that this proportion expression is well-defined and within these bounds at all worlds in O.
Theorem 4.11: Suppose $\varphi(c)$ is a simple query for KB. If the space $S^0[KB]$ has a unique maximum-entropy point $\vec{v}$, KB is essentially positive, and $F[\psi](\vec{v}) > 0$, then $\Pr_\infty(\varphi(c)|KB) = F[\varphi|\psi](\vec{v})$, where by equality we also mean that one side is defined iff the other is also defined. It is easy to verify that a point $\vec{u}$ in $\Delta_K$ satisfies $\sigma(KB'[\vec{\varepsilon}])$ iff the corresponding distribution $\mu_{\vec{u}}$ satisfies $\mathcal{R}$. Therefore, the maximum-entropy point $\vec{v}$ of $S^{\vec{\varepsilon}}[KB']$ (which is unique, by linearity) corresponds precisely to $\mu^*$. Now, there are two cases: either $\mu^*(B) > 0$ or $\mu^*(B) = 0$. In the first case, by Remark 4.13, $\Pr_{\mu^*}(B) = F[\psi_B(c)](\vec{v})$, so the latter is also positive. This also implies that $\vec{v}$ is consistent with the constraints $\sigma(\psi(c))$ entailed by $\psi(c) = \psi_B(c)$, so that $\vec{v}$ is also the unique maximum-entropy point of $S^{\vec{\varepsilon}}[KB]$ (where $KB = \psi_B(c) \wedge KB'$). We can therefore use Corollary 4.10 and Remark 4.13 to conclude that $\Pr^{\vec{\varepsilon}}_\infty(\psi_C(c)|KB) = F[\psi_C(c)|\psi_B(c)](\vec{v}) = \Pr_{\mu^*}(C|B)$ and that all three terms are well-defined.
Assume, on the other hand, that $\mu^*(B) = 0$, so that $\Pr_{\mu^*}(C|B)$ is not well-defined. In this case, we can use a known result (see (Paris & Vencovska, 1989)) for the maximum-entropy point over a space defined by linear constraints, and conclude that for all $\mu$ satisfying $\mathcal{R}$, necessarily $\mu(B) = 0$. Using the connection between distributions satisfying $\mathcal{R}$ and points $\vec{u}$ in $S^{\vec{\varepsilon}}[KB']$, we conclude that this is also the case for all $\vec{u} \in S^{\vec{\varepsilon}}[KB']$. By part (a) of Theorem B.2, this means that in any world satisfying $KB'$, the proportion $\|\psi_B(x)\|_x$ is necessarily 0. 
Thus, $KB'$ is inconsistent with $\psi_B(c)$, and $\Pr^{\vec{\varepsilon}}_\infty(\psi_C(c) \mid \psi_B(c) \wedge KB')$ is also not well-defined.
Let $Z = \{c_1, \dots, c_m\}$ be the set of constant symbols appearing in $\psi$ and in $\varphi$. Due to the separability assumption, $KB'$ contains none of the constant symbols in Z. Let $\theta_{\neq}$ be the formula $\bigwedge_{i \neq j} c_i \neq c_j$. We first prove that $\theta_{\neq}$ has probability 1 given $KB'$.
Lemma D.1: For $\theta_{\neq}$ and $KB'$ as above, $\Pr_\infty(\theta_{\neq}|KB') = 1$. Proof: We actually show that $\Pr_\infty(\neg\theta_{\neq}|KB') = 0$. Let c and $c'$ be two constant symbols in $\{c_1, \dots, c_m\}$ and consider $\Pr_\infty(c = c'|KB')$. We again use the direct inference technique. Note that for any world of size N, the proportion expression $\|x = x'\|_{x, x'}$ denotes exactly $1/N$. It is thus easy to see that $\Pr_\infty(\|x = x'\|_{x, x'} \approx_i 0 \mid KB') = 1$ (for any choice of i). Thus, by Theorem 3.16, $\Pr_\infty(c = c'|KB') = \Pr_\infty(c = c' \mid KB' \wedge \|x = x'\|_{x, x'} \approx_i 0)$. But since c and $c'$ appear nowhere in $KB'$, we can use Theorem 4.1 to conclude that $\Pr_\infty(c = c'|KB') = 0$. It is straightforward to verify that, since $\neg\theta_{\neq}$ is equivalent to a finite disjunction, each disjunct of which implies $c = c'$ for at least one pair of constants c and $c'$, we must have $\Pr_\infty(\neg\theta_{\neq}|KB') = 0$.
As we stated in Section 4.4, our general technique for computing the probability of an arbitrary formula $\varphi$ is to partition the worlds into a finite collection of classes such that $\varphi$ behaves uniformly over each class and then to compute the relative weights of the classes. As we show later, the classes are essentially defined using complete descriptions. Their relative weight corresponds to the probabilities of the different complete descriptions given KB.
Proposition D.2: Let $KB = KB' \wedge \psi$ and $\vec{v}$ be as above. Assume that $\Pr_\infty(\psi|KB') > 0$, and let D be a complete description over Z that is consistent with $\psi$. Then (a) $\Pr_\infty(D|KB) = \Pr_\infty(D|KB \wedge \theta_{\neq})$, and (b) $\Pr_\infty(D|KB) = \Pr_\infty(D|KB') / \sum_{E \in A(\psi \wedge \theta_{\neq})} \Pr_\infty(E|KB')$.
Proof: First, observe that if all limits exist and the denominator is nonzero, then

$$\Pr_\infty(\neg\theta_{\neq} \mid \psi \wedge KB') = \frac{\Pr_\infty(\neg\theta_{\neq} \wedge \psi \mid KB')}{\Pr_\infty(\psi \mid KB')}.$$

By hypothesis, the denominator is indeed nonzero. Furthermore, by Lemma D.1, $\Pr_\infty(\neg\theta_{\neq} \wedge \psi|KB') \le \Pr_\infty(\neg\theta_{\neq}|KB') = 0$. Hence $\Pr_\infty(\theta_{\neq}|KB) = \Pr_\infty(\theta_{\neq}|KB' \wedge \psi) = 1$. We can therefore use Theorem 3.16 to conclude that $\Pr_\infty(D|KB) = \Pr_\infty(D|KB \wedge \theta_{\neq})$.
Part (a) of the proposition follows immediately.
To prove part (b), recall that $\psi$ is equivalent to the disjunction $\bigvee_{E \in A(\psi)} E$. By simple probabilistic reasoning, the assumption that $\Pr_\infty(\psi|KB') > 0$, and part (a), we conclude that

$$\Pr_\infty(D \mid \psi \wedge KB') = \frac{\Pr_\infty(D \wedge \psi \mid KB')}{\Pr_\infty(\psi \mid KB')} = \frac{\Pr_\infty(D \wedge \psi \mid KB')}{\sum_{E \in A(\psi \wedge \theta_{\neq})} \Pr_\infty(E \mid KB')}.$$

By assumption, D is consistent with $\theta_{\neq}$ and is in $A(\psi)$. Since D is a complete description, we must have that $D \Rightarrow \psi$ is valid. Thus, the numerator on the right-hand side of this equation is simply $\Pr_\infty(D|KB')$. Hence, the problem of computing $\Pr_\infty(D|KB)$ reduces to a series of computations of the form $\Pr_\infty(E|KB')$ for various complete descriptions E. Fix any such description E. Recall that E can be decomposed into three parts: the unary part $E_1$, the non-unary part $E_{>1}$, and the equality part $E_=$. Since E is in $A(\psi \wedge \theta_{\neq})$, we conclude that $\theta_{\neq}$ is equivalent to $E_=$. Using Theorem 3.16 twice and some probabilistic reasoning, we get:

$$\Pr_\infty(E_{>1} \wedge E_1 \wedge E_= \mid KB') = \Pr_\infty(E_{>1} \wedge E_1 \wedge E_= \mid KB' \wedge \theta_{\neq}) = \Pr_\infty(E_{>1} \wedge E_1 \mid KB' \wedge \theta_{\neq}) = \Pr_\infty(E_{>1} \mid KB' \wedge \theta_{\neq} \wedge E_1) \cdot \Pr_\infty(E_1 \mid KB' \wedge \theta_{\neq}) = \Pr_\infty(E_{>1} \mid KB' \wedge \theta_{\neq} \wedge E_1) \cdot \Pr_\infty(E_1 \mid KB').$$

In order to simplify the first expression, recall that none of the predicate symbols in $E_{>1}$ occur anywhere in $KB' \wedge \theta_{\neq} \wedge E_1$. 
Therefore, the probability of $E_{>1}$ given $KB' \wedge \theta_{\neq} \wedge E_1$ is equal to the probability that the elements denoting the $|Z|$ (different) constants satisfy some particular configuration of non-unary properties. It should be clear that, by symmetry, all such configurations are equally likely. Therefore, the probability of any one of them is a constant, equal to 1 over the total number of configurations. 14 Let $\gamma$ denote the constant which is equal to $\Pr_\infty(E_{>1} \mid KB' \wedge \theta_{\neq} \wedge E_1)$ for all E. The last step is to show that, if $E_1$ is equivalent to $\bigwedge_{j=1}^m A_{i_j}(c_j)$, then $\Pr_\infty(E_1|KB') = F[D](\vec{v})$:

$$\Pr_\infty\Big(\bigwedge_{j=1}^m A_{i_j}(c_j) \mid KB'\Big) = \Pr_\infty\Big(A_{i_1}(c_1) \mid \bigwedge_{j=2}^m A_{i_j}(c_j) \wedge KB'\Big) \cdot \Pr_\infty\Big(A_{i_2}(c_2) \mid \bigwedge_{j=3}^m A_{i_j}(c_j) \wedge KB'\Big) \cdots \Pr_\infty(A_{i_{m-1}}(c_{m-1}) \mid A_{i_m}(c_m) \wedge KB') \cdot \Pr_\infty(A_{i_m}(c_m) \mid KB') = v_{i_1} \cdots v_{i_m} \text{ (using Theorem 4.11; see below)} = F[D](\vec{v}).$$

The first step is simply probabilistic reasoning. The second step uses m applications of Theorem 4.11. It is easy to see that $A_{i_j}(c_j)$ is a simple query for $A_{i_{j+1}}(c_{j+1}) \wedge \dots \wedge A_{i_m}(c_m) \wedge KB'$. We would like to show that

$$\Pr_\infty\Big(A_{i_j}(c_j) \mid \bigwedge_{\ell=j+1}^m A_{i_\ell}(c_\ell) \wedge KB'\Big) = \Pr_\infty(A_{i_j}(c_j) \mid KB') = v_{i_j},$$

where Theorem 4.11 justifies the last equality. To prove the first equality, we show that for all j, the spaces $S^0[KB']$ and $S^0[\bigwedge_{\ell=j+1}^m A_{i_\ell}(c_\ell) \wedge KB']$ have the same maximum-entropy point, namely $\vec{v}$. This is proved by backwards induction; the $j = m$ case is trivially true. The difference between the $(j-1)$st and jth case is the added conjunct $A_{i_j}(c_j)$, which amounts to adding the new constraint $w_{i_j} > 0$. There are two possibilities. First, if $v_{i_j} > 0$,
14. Although we do not need the value of this constant in our calculations below, it is in fact easy to verify that its value is $\prod_{R \in (\Phi - \Phi_1)} 2^{-m^{arity(R)}}$, where $m = |Z|$.
Proposition D.4: For $\varphi$, KB, $\sigma$, D, and $\sigma[\delta]$ as above, if $\Pr^{\vec{\varepsilon}}_\infty(D|KB) > 0$, then

$$\Pr{}^{\vec{\varepsilon}}_\infty(\varphi \mid KB \wedge \sigma[\delta] \wedge \sigma \wedge D) = \Pr_\infty(\varphi \mid \sigma \wedge D),$$

and its value is either 0 or 1. Note that since the latter probability only refers to first-order formulas, it is independent of the tolerance values.
Proof: That the right-hand side is either 0 or 1 is proved in (Grove et al., 1993b), where it is shown that the asymptotic probability of any pure first-order sentence when conditioned on knowledge of the form $\sigma \wedge D$ (which is, essentially, what was called a model description in (Grove et al., 1993b)) is either 0 or 1. Very similar techniques can be used to show that the left-hand side is also either 0 or 1, and that the conjuncts $KB \wedge \sigma[\delta]$ do not affect this limit (so that the left-hand side and the right-hand side are in fact equal). We briefly sketch the relevant details here, referring the reader to (Grove et al., 1993b) for full details.
The idea (which actually goes back to Fagin (1976)) is to associate with a model description such as $\sigma \wedge D$ a theory T which essentially consists of extension axioms. Intuitively, an extension axiom says that any finite substructure of the model defined by a complete description $D'$ can be extended in all possible ways definable by another description $D''$. We say that a description $D''$ extends a description $D'$ if all conjuncts of $D'$ are also conjuncts in $D''$. An extension axiom has the form $\forall x_1, \dots, x_j\, (D' \Rightarrow \exists x_{j+1}\, D'')$, where $D'$ is a complete description over $X = \{x_1, \dots, x_j\}$ and $D''$ is a complete description over $X \cup \{x_{j+1}\}$, such that $D''$ extends $D'$, both $D'$ and $D''$ extend D, and both are consistent with $\sigma$. 
It is then shown that (a) T is complete (so that for each formula $\xi$, either $T \models \xi$ or $T \models \neg\xi$) and (b) if $\xi \in T$ then $\Pr_\infty(\xi \mid \sigma \wedge D) = 1$. From (b) it easily follows that if $T \models \xi$, then $\Pr_\infty(\xi \mid \sigma \wedge D)$ is also 1. Using (a), the desired 0-1 law follows. The only difference from the proof in (Grove et al., 1993b) is that we need to show that (b) holds even when we condition on $KB \wedge \sigma[\delta] \wedge \sigma \wedge D$, instead of just on $\sigma \wedge D$. So suppose $\xi$ is the extension axiom $\forall x_1, \dots, x_j\, (D' \Rightarrow \exists x_{j+1}\, D'')$. We must show that $\Pr_\infty(\xi \mid KB \wedge \sigma[\delta] \wedge \sigma \wedge D) = 1$. We first want to show that the right-hand side of the conditional is consistent. As observed in the previous proof, it follows from Theorem 3.16 that $\Pr_\infty(D|KB) = \Pr_\infty(D \mid KB \wedge \sigma[\delta] \wedge \sigma)$. Since we are assuming that $\Pr_\infty(D|KB) > 0$, it follows that $\Pr_\infty(KB \wedge \sigma[\delta] \wedge \sigma \wedge D) > 0$, and hence $KB \wedge \sigma[\delta] \wedge \sigma \wedge D$ must be consistent.
Fix a domain size N and consider the set of worlds satisfying $KB \wedge \sigma[\delta] \wedge \sigma \wedge D$. Now consider some particular j domain elements, say $d_1, \dots, d_j$, that satisfy $D'$. Observe that, since $D'$ extends D, the denotations of the constants are all among $d_1, \dots, d_j$. For a given $d \notin \{d_1, \dots, d_j\}$, let $B(d)$ denote the event that $d_1, \dots, d_j, d$ satisfy $D''$, given that $d_1, \dots, d_j$ satisfy $D'$. What is the probability of $B(d)$ given $KB \wedge \sigma[\delta] \wedge \sigma \wedge D$? First, note that since d does not denote any constant, it cannot be mentioned in any way in the knowledge base. Thus, this probability is the same for all d. The description $D''$ determines two types of properties for $x_{j+1}$: the unary properties of $x_{j+1}$ itself, i.e., the atom $A_i$ to which $x_{j+1}$ must belong, and the relations between $x_{j+1}$ and the remaining variables $x_1, \dots, x_j$ using the non-unary predicate symbols. Since $D''$ is consistent with $\sigma$, the description $\sigma$ must contain a conjunct $\exists x\, A_i(x)$ if $D''$ implies $A_i(x_{j+1})$. By definition, $\sigma[\delta]$ must therefore contain the conjunct $\|A_i(x)\|_x > \delta$. Hence, the probability of picking d in $A_i$ is at least $\delta$. For any sufficiently large N, the probability of picking d in $A_i$ which is different from $d_1, \dots, d_j$ (as required by the definition of the extension axiom) is at least $\delta/2 > 0$. The" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We are very grateful to Professor Gregory Brumfiel, of the Department of Mathematics at Stanford University, for his invaluable help with the proof of Proposition B.5. We would like to thank Fahiem Bacchus, with whom we started working on this general area of research, and Moshe Vardi for useful comments on a previous draft of this paper. A preliminary version of this paper appeared in Proc. 7th IEEE Symposium on Logic in Computer Science. Some of this research was performed while Adam Grove and Daphne Koller were at Stanford University and at the IBM Almaden Research Center. This research was sponsored in part by the Air Force Office of Scientific Research (AFSC), under Contract F49620-91-C-0080, by an IBM Graduate Fellowship, and by a University of California President's Postdoctoral Fellowship." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "- A probability expression of the form $\Pr(\theta)$ appearing in $\mathcal{R}$ is replaced by the proportion expression $\|\psi_\theta(x)\|_x$. Similarly, a conditional probability expression $\Pr(\theta|\theta')$ is replaced by $\|\psi_\theta(x)|\psi_{\theta'}(x)\|_x$.
- Each comparison connective $=$ is replaced by $\approx_i$ for some i, and each $\le$ with $\preceq_i$. (The particular choices for the approximate equality connectives do not matter in this context.) 
The other elements that can appear in a proportion formula (such as rational numbers and arithmetical connectives) remain unchanged. For example, the formula $\Pr(fly|bird) \ge 0.7$ would correspond to the proportion formula $\|Fly(x)|Bird(x)\|_x \succeq_i 0.7$. There is a one-to-one correspondence between truth assignments and atoms: the truth assignment $\omega$ corresponds to the atom $A = P'_1 \wedge \dots \wedge P'_k$ where $P'_i$ is $P_i$ if $\omega(p_i) = \text{true}$ and $\neg P_i$ otherwise. Let $\omega_1, \dots, \omega_K$ be the truth assignments corresponding to the atoms $A_1, \dots, A_K$, respectively.
There is a one-to-one correspondence between probability distributions over the set $\Omega$ of truth assignments and points in $\Delta_K$. For each point $\vec{u} \in \Delta_K$, let $\mu_{\vec{u}}$ denote the corresponding probability distribution over $\Omega$, where $\mu_{\vec{u}}(\omega_i) = u_i$.
Remark 4.13: Clearly, $\omega_j \models \theta$ iff $A_j \in A(\psi_\theta)$. Therefore, for all $\vec{u}$, we have $F[\psi_\theta](\vec{u}) = \Pr_{\mu_{\vec{u}}}(\theta)$.
The following result demonstrates the tight connection between probabilistic propositional reasoning using maximum entropy and random worlds.
Theorem 4.14: Let $\mathcal{R}$ be a conjunction of constraints of the form $\Pr(\theta|\theta') = \alpha$ or $\Pr(\theta|\theta') \in [\alpha_1, \alpha_2]$. There is a unique probability distribution $\mu^*$ of maximum entropy satisfying $\mathcal{R}$. Moreover, for all $\theta$ and $\theta'$, if $\Pr_{\mu^*}(\theta') > 0$, then $\Pr_\infty(\psi_\theta(c) \mid \psi_{\theta'}(c) \wedge KB[\mathcal{R}]) = \Pr_{\mu^*}(\theta|\theta')$. Theorem 4.14 is an easy corollary of Theorem 4.11. To check that the preconditions of the latter theorem apply, note that the constraints in $\mathcal{R}$ are linear, and so the space $S^0[KB[\mathcal{R}]]$ has a unique maximum-entropy point $\vec{v}$. In fact, it is easy to show that $\mu_{\vec{v}}$ is the (unique) maximum-entropy probability distribution over $\Omega$ satisfying the constraints $\mathcal{R}$. In addition, because there are no negated proportion expressions in $KB[\mathcal{R}]$, the formula $KB = \psi_{\theta'}(c) \wedge KB[\mathcal{R}]$ is certainly essentially positive.
Most applications of probabilistic propositional reasoning consider simple constraints of the form used in the theorem, and so such applications can be viewed as very special cases of the random-worlds approach. In fact, this theorem is essentially a very old one. The connection between counting \"worlds\" and the entropy maximum in a space defined as a conjunction of linear constraints is very well-known. It has been extensively studied in the field of thermodynamics, starting with the 19th-century work of Maxwell and Gibbs. Recently, this type of reasoning has been applied to problems in an AI context by Paris and Vencovska (1989).
Appendix A. Proofs for Section 3.2
Theorem 3.5: Every formula in $\mathcal{L}_1^{=}$ is equivalent to a formula in canonical form. Moreover, there is an effective procedure that, given a formula $\psi \in \mathcal{L}_1^{=}$, constructs an equivalent formula $\widehat{\psi}$ in canonical form.
It now remains to show how a flat formula can be transformed to canonical form. Suppose $\psi \in \mathcal{L}_1$ is flat. Let $\psi^= \in \mathcal{L}_1^{=}$ be the formula equivalent to $\psi$ obtained by using the translation of Section 2.1. Every proportion comparison in $\psi^=$ is of the form $t \preceq_i t'$, where t and $t'$ are polynomials over flat unconditional proportions. In fact, $t'$ is simply a product of flat unconditional proportions (where the empty product is taken to be 1). Note also that since we cleared away conditional proportions by multiplying by $t'$, if $t' = 0$ then so is t, and so the formula $t \preceq_i t'$ is automatically true. We can therefore replace the comparison by $(t' = 0) \vee (t \preceq_i t' \wedge t' > 0)$. Similarly, we can replace a negated comparison by an expression of the form $\neg(t \preceq_i t') \wedge t' > 0$.
The next step is to rewrite all the flat unconditional proportions in terms of atomic proportions. 
In any such proportion $\|\psi'\|_{\vec{x}}$, the formula $\psi'$ is a Boolean combination of $P(x_i)$ for predicates $P \in \mathcal{P}$ and $x_i \in \vec{x}$. Thus, the formula $\psi'$ is equivalent to a disjunction $\bigvee_j (A_{j_1}(x_{i_1}) \wedge \dots \wedge A_{j_m}(x_{i_m}))$, where each $A_{j_i}$ is an atom over $\mathcal{P}$ and $\vec{x} = \{x_{i_1}, \dots, x_{i_m}\}$. These disjuncts are mutually exclusive and the semantics treats distinct variables as being independent, so

$$\|\psi'\|_{\vec{x}} = \sum_j \prod_{i=1}^m \|A_{j_i}(x)\|_x.$$

We perform this replacement for each proportion expression. Furthermore, any term $t'$ in an expression of the form $t \preceq_i t'$ will be a product of such expressions, and so will be positive.
Next, we must put all pure first-order formulas in the right form. We first rewrite to push all negations inwards as far as possible, so that only atomic subformulas and existential formulas are negated. Next, note that since $\psi$ is flat, each existential subformula must have the form $\exists x\, \psi'$, where $\psi'$ is a quantifier-free formula which mentions no constants and only the variable x. Hence, $\psi'$ is a Boolean combination of $P(x)$ for predicates $P \in \mathcal{P}$. Again, the formula $\psi'$ is equivalent to a disjunction of atoms of the form $\bigvee_{A \in A(\psi')} A(x)$, so $\exists x\, \psi'$ is equivalent to $\bigvee_{A \in A(\psi')} \exists x\, A(x)$. We replace $\exists x\, \psi'$ by this expression. Finally, we must deal with formulas of the form $P(c)$ or $\neg P(c)$ for $P \in \mathcal{P}$. This is easy: we can again replace a formula of the form $P(c)$ or $\neg P(c)$ by the corresponding disjunction of formulas $A(c)$ over the appropriate set of atoms.
The penultimate step is to convert into disjunctive normal form. This essentially brings things into canonical form. Note that since we dealt with formulas of the form $\neg P(c)$ in the previous step, we do not have to deal with conjuncts of the form $\neg A_i(c)$.
The final step is to check that we do not have $A_i(c)$ and either $\neg\exists x\, A_i(x)$ or $A_j(c)$ for some $j \neq i$ as conjuncts of some disjunct. If we do, we simply remove that disjunct.
Proof: To choose a world $W \in \mathcal{W}_N$ satisfying KB whose vector of atom proportions is $\vec{u}$, we must partition the domain among the atoms according to the proportions in $\vec{u}$, and then choose an assignment for the constants in the language subject to the constraints imposed by KB. Finally, ..., which is the desired result.
We next want to prove Theorem 3.13. To do this, it is useful to have an alternative representation of the solution space $S^{\vec{\varepsilon}}[KB]$. Towards this end, we have the following definition." }, { "figure_ref": [], "heading": "Definition B.1", "publication_ref": [], "table_ref": [], "text": "Let $\Delta^{\vec{\varepsilon}}_\infty[KB]$ be the limit of these spaces. Formally, ...
The following theorem establishes a tight connection between $S^{\vec{\varepsilon}}[KB]$ and $\Delta^{\vec{\varepsilon}}_\infty[KB]$.
Theorem B.2: For all sufficiently small $\vec{\varepsilon}$: (a) for all N, $\Delta^{\vec{\varepsilon}}_N[KB] \subseteq S^{\vec{\varepsilon}}[KB]$, and (b) every point of $S^{\vec{\varepsilon}}[KB]$ is in $\Delta^{\vec{\varepsilon}}_\infty[KB]$. For the opposite inclusion, the general strategy of the proof is to show the following:
(i) If $\vec{\varepsilon}$ is sufficiently small, then for all $\vec{u} \in S^{\vec{\varepsilon}}[KB]$, there is some sequence of points $\{\vec{u}_{N_0}, \vec{u}_{N_0+1}, \vec{u}_{N_0+2}, \vec{u}_{N_0+3}, \dots\} \subseteq Sol[\sigma(KB[\vec{\varepsilon}])]$ such that, for all $N \ge N_0$, the coordinates of $\vec{u}_N$ are all integer multiples of $1/N$ and $\lim_{N \to \infty} \vec{u}_N = \vec{u}$.
(ii) If $\bar{w} \in Sol[\sigma(KB[\vec{\varepsilon}])]$ and all its coordinates are integer multiples of $1/N$, then $\bar{w} \in \Delta^{\vec{\varepsilon}}_N[KB]$.
This clearly suffices to prove that $\vec{u} \in \Delta^{\vec{\varepsilon}}_\infty[KB]$.
We begin with the proof of (ii), which is straightforward. Suppose the point $\bar{w} = (r_1/N, r_2/N, \dots, r_K/N)$ is in $Sol[\sigma(KB[\vec{\varepsilon}])]$. We construct a world $W \in \mathcal{W}_N$ whose atom proportions are $\bar{w}$ as follows. The denotation of atom $A_1$ is the set of elements $\{1, \dots, r_1\}$, the denotation of atom $A_2$ is the set $\{r_1 + 1, \dots, r_1 + r_2\}$, and so on (a tiny sketch of this block construction appears below). It remains to choose the denotations of the constants (since the denotation of the predicates of arity greater than 1 is irrelevant). 
Without loss of generality we can assume KB is in canonical form. (If not, we consider $\widehat{KB}$.) Thus, KB is a disjunction of conjunctions, say $\bigvee_j \theta_j$. Since $\vec{w} \in Sol[\Gamma(KB[\vec{\tau}\,])]$, we must have $\vec{w} \in Sol[\Gamma(\theta_j[\vec{\tau}\,])]$ for some $j$. We use $\theta_j$ to define the properties of the constants. If $\theta_j$ contains $A_i(c)$ for some atom $A_i$, then we make $c$ satisfy $A_i$.

Appendix C. Proofs for Section 4

Proposition 4.6: Assume that KB is essentially positive and let Q be the set of maximum-entropy points of $S^{\vec{0}}[KB]$ (and thus also of $S^{\vec{0}}[\widehat{KB}]$). Then for all $\epsilon > 0$ and all sufficiently small tolerance vectors $\vec{\tau}$ (where "sufficiently small" may depend on $\epsilon$), every maximum-entropy point of $S^{\vec{\tau}}[KB]$ is within $\epsilon$ of some maximum-entropy point in Q.

Proof: Fix $\epsilon > 0$. By way of contradiction, assume that there is some sequence of tolerance vectors $\vec{\tau}_m$, $m = 1, 2, \ldots$, that converges to $\vec{0}$, and for each $m$ a maximum-entropy point $\vec{u}_m$ of $S^{\vec{\tau}_m}[KB]$ such that for all $m$, $\vec{u}_m$ is at least $\epsilon$ away from Q. Since the space $\Delta_K$ is compact, we can assume without loss of generality that this sequence converges to some point $\vec{u}$. Recall that $\Gamma(KB[\vec{\tau}\,])$ is a finite combination (using "and" and "or") of constraints, where every such constraint is of the form $q'(\vec{w}) = 0$, $q'(\vec{w}) > 0$, $q(\vec{w}) \le \varepsilon_j q'(\vec{w})$, or $q(\vec{w}) > \varepsilon_j q'(\vec{w})$, such that $q'$ is a positive polynomial. Since the overall number of constraints is finite we can assume, again without loss of generality, that all the $\vec{u}_m$'s satisfy precisely the same constraints. We claim that the corresponding conjuncts in $\Gamma(KB[\vec{0}\,])$ are satisfied by $\vec{u}$. For a conjunct of the form $q'(\vec{w}) = 0$ note that, if $q'(\vec{u}_m) = 0$ for all $m$, then this also holds at the limit, so that $q'(\vec{u}) = 0$. A conjunct of the form $q'(\vec{w}) > 0$ translates into $q'(\vec{w}) \ge 0$ in $\Gamma(KB[\vec{0}\,])$; such conjuncts are trivially satisfied by any point in $\Delta_K$. If a conjunct of the form $q(\vec{w}) \le \varepsilon_j q'(\vec{w})$ is satisfied for all $\vec{u}_m$ and $\vec{\tau}_m$, then at the limit we have $q(\vec{u}) \le 0$, which is precisely the corresponding conjunct in $\Gamma(KB[\vec{0}\,])$. Finally, for a conjunct of the form $q(\vec{w}) > \varepsilon_j q'(\vec{w})$, if $q(\vec{u}_m) > \tau_{m,j}\, q'(\vec{u}_m)$ for all $m$, then at the limit we have $q(\vec{u}) \ge 0$, which again is the corresponding conjunct in $\Gamma(KB[\vec{0}\,])$. It follows that $\vec{u}$ is in $S^{\vec{0}}[KB]$.

By assumption, all points $\vec{u}_m$ are at least $\epsilon$ away from Q. Hence, $\vec{u}$ cannot be in Q. If we let $H^*$ represent the entropy of the points in Q, then since Q is the set of all maximum-entropy points in $S^{\vec{0}}[KB]$, it follows that $H(\vec{u}) < H^*$. Choose $H_L$ and $H_U$ such that $H(\vec{u}) < H_L < H_U < H^*$. Since the entropy function is continuous, we know that for sufficiently large $m$, $H(\vec{u}_m) \le H_L$. Since $\vec{u}_m$ is a maximum-entropy point of $S^{\vec{\tau}_m}[KB]$, it follows that the entropy achieved in this space for sufficiently large $m$ is at most $H_L$. We derive a contradiction by showing that for sufficiently large $m$, there is some point in $Sol[\Gamma(KB[\vec{\tau}_m])]$ with entropy at least $H_U$. The argument is as follows. Let $\vec{v}$ be some point in Q. Since $\vec{v}$ is a maximum-entropy point of $S^{\vec{0}}[KB]$, there are points in $Sol[\Gamma(KB[\vec{0}\,])]$ arbitrarily close to $\vec{v}$. In particular, there is some point $\vec{u}_0 \in Sol[\Gamma(KB[\vec{0}\,])]$ whose entropy is at least $H_U$.

As we now show, this point is also in $Sol[\Gamma(KB[\vec{\tau}\,])]$ for all sufficiently small $\vec{\tau}$. Again, consider all the conjuncts in $\Gamma(KB[\vec{0}\,])$ satisfied by $\vec{u}_0$ and the corresponding conjuncts in $\Gamma(KB[\vec{\tau}\,])$. Conjuncts of the form $q'(\vec{w}) = 0$ and $q'(\vec{w}) > 0$ in $\Gamma(KB[\vec{0}\,])$ remain unchanged in $\Gamma(KB[\vec{\tau}\,])$. Conjuncts of the form $q(\vec{w}) \le \tau_j q'(\vec{w})$ in $\Gamma(KB[\vec{\tau}\,])$ are certainly satisfied by $\vec{u}_0$, since the corresponding conjunct in $\Gamma(KB[\vec{0}\,])$, namely $q(\vec{w}) \le 0$, is satisfied by $\vec{u}_0$, so that $q(\vec{u}_0) \le 0 \le \tau_j q'(\vec{u}_0)$ (recall that $q'$ is a positive polynomial).
Finally, consider a conjunct in $\Gamma(KB[\vec{\tau}\,])$ of the form $q(\vec{w}) > \tau_j q'(\vec{w})$. The corresponding conjunct in $\Gamma(KB[\vec{0}\,])$ is $q(\vec{w}) > 0$. Suppose $q(\vec{u}_0) = \delta > 0$. Since the value of $q'$ is bounded over the compact space $\Delta_K$, it follows that for all sufficiently small $\tau_j$, $\tau_j q'(\vec{u}_0) < \delta$. Thus, $q(\vec{u}_0) > \tau_j q'(\vec{u}_0)$ for all sufficiently small $\tau_j$, as required. It follows that $\vec{u}_0$ is in $Sol[\Gamma(KB[\vec{\tau}\,])]$ for all sufficiently small $\vec{\tau}$ and, in particular, in $Sol[\Gamma(KB[\vec{\tau}_m])]$ for all sufficiently large $m$. But $H(\vec{u}_0) \ge H_U$, whereas we showed that the maximum entropy achieved in $S^{\vec{\tau}_m}[KB]$ is at most $H_L < H_U$, a contradiction.

Proof of Theorem 4.11: Note that the fact that $S^{\vec{0}}[KB]$ has a unique maximum-entropy point does not guarantee that this is also the case for $S^{\vec{\tau}}[KB]$. However, Proposition 4.6 implies that the maximum-entropy points of the latter space are necessarily close to $\vec{v}$. More precisely, if we choose some $\epsilon > 0$, we conclude that for all sufficiently small $\vec{\tau}$, all the maximum-entropy points of $S^{\vec{\tau}}[KB]$ will be within $\epsilon$ of $\vec{v}$. Now, pick some arbitrary $\delta > 0$. Since $F^{[\psi]}(\vec{v}) > 0$, it follows that $F^{[\varphi|\psi]}$ is continuous at $\vec{v}$. Therefore, there exists some $\epsilon > 0$ such that if $\vec{u}$ is within $\epsilon$ of $\vec{v}$, then $F^{[\varphi|\psi]}(\vec{u})$ is within $\delta$ of $F^{[\varphi|\psi]}(\vec{v})$. In particular, this is the case for all maximum-entropy points of $S^{\vec{\tau}}[KB]$ for all sufficiently small $\vec{\tau}$. This allows us to apply Theorem 4.9 and conclude that for all sufficiently small $\vec{\tau}$ and for $\lim \in \{\limsup, \liminf\}$, $\lim_{N\to\infty} \Pr^{\vec{\tau}}_N(\varphi(c) \mid KB)$ is within $\delta$ of $F^{[\varphi|\psi]}(\vec{v})$. Hence, this is also the case for $\lim_{\vec{\tau}\to\vec{0}} \lim_{N\to\infty} \Pr^{\vec{\tau}}_N(\varphi(c) \mid KB)$. Since this holds for all $\delta > 0$, it follows that

$$\lim_{\vec{\tau}\to\vec{0}} \lim_{N\to\infty} \Pr^{\vec{\tau}}_N(\varphi(c) \mid KB) = F^{[\varphi|\psi]}(\vec{v}).$$

Thus, by definition, $\Pr_\infty(\varphi(c) \mid KB) = F^{[\varphi|\psi]}(\vec{v})$.

Theorem 4.14: Let $\Sigma$ be a conjunction of constraints of the form $\Pr(\xi \mid \xi') = \alpha$ or $\Pr(\xi \mid \xi') \in [\alpha_1, \alpha_2]$. There is a unique probability distribution $\mu^*$ of maximum entropy satisfying $\Sigma$. Moreover, for all $\xi$ and $\xi'$, if $\Pr_{\mu^*}(\xi') > 0$, then $\Pr_\infty(\xi(c) \mid \xi'(c) \wedge KB_\Sigma) = \Pr_{\mu^*}(\xi \mid \xi')$.

Proof: Clearly, the formulas $\varphi(x) = \xi(x)$ and $\psi(x) = \xi'(x)$ are essentially propositional. The knowledge base $KB_\Sigma$ is in the form of a conjunction of simple proportion formulas, none of which are negated. As a result, the set of constraints associated with $KB = \xi'(c) \wedge KB_\Sigma$ also has a simple form. $KB_\Sigma$ generates a conjunction of constraints which can be taken as having the form $q(\vec{w}) \le \varepsilon_j q'(\vec{w})$. On the other hand, $\xi'(c)$ generates some Boolean combination of constraints all of which have the form $w_j > 0$. We begin by considering the set $S^{\vec{0}}[KB_\Sigma]$ (rather than $S^{\vec{0}}[KB]$), so we can ignore the latter constraints for now. $S^{\vec{0}}[KB_\Sigma]$ is defined by a conjunction of linear constraints, which (as discussed earlier) implies that it is convex, and thus has a unique maximum-entropy point, say $\vec{v}$. Let $\mu^* = \mu_{\vec{v}}$ be the distribution over $\Omega$ corresponding to $\vec{v}$. It is clear that the constraints of $\Gamma(KB_\Sigma[\vec{0}\,])$ on the points of $\Delta_K$ are precisely the same ones as those of $\Sigma$. Therefore, $\mu^*$ is the unique maximum-entropy distribution satisfying the constraints of $\Sigma$. By Remark 4.13, it follows that $F^{[\xi']}(\vec{v}) = \mu^*(\xi')$. Since we have assumed that $\mu^*(\xi') > 0$, we are almost in a position to use Theorem 4.11. It remains to prove essential positivity.

Recall that the difference between $\Gamma(KB[\vec{0}\,])$ and $\Gamma(KB_\Sigma[\vec{0}\,])$ is that the former may have some conjuncts of the form $w_j > 0$. Checking Definitions 3.4 and 3.6 we see that such terms can appear only due to $\xi'(c)$ and, in fact, together they assert that $F^{[\xi']}(\vec{w}) > 0$.
But we have assumed that $F^{[\xi']}(\vec{v}) > 0$, and so $\vec{v}$ is a maximum-entropy point of $S^{\vec{0}}[KB]$ as well. Thus, essential positivity holds and so, by Theorem 4.11,

$$\Pr_\infty(\xi(c) \mid \xi'(c) \wedge KB_\Sigma) = F^{[\xi|\xi']}(\vec{v}) = \Pr_{\mu^*}(\xi \mid \xi'),$$

as required.

Appendix D. Proofs for Section 4.4

Theorem 4.24: If KB and $\vec{\tau} > 0$ are stable for $\sigma$, then $\Pr^{\vec{\tau}}_\infty(\sigma \mid KB) = 1$.

Proof: By Theorem 3.14, it suffices to show that there is some open neighborhood O containing Q, the set of maximum-entropy points of $S^{\vec{\tau}}[KB]$, such that every world W of KB in this neighborhood has size description $\sigma$. So suppose this is not the case. Then there is some sequence of worlds $W_1, W_2, \ldots$ such that $(W_i, \vec{\tau}) \models KB \wedge \neg\sigma$ and $\lim_{i\to\infty} \min_{\vec{v} \in Q} |\pi(W_i) - \vec{v}| = 0$. Since $\Delta_K$ is compact, the sequence $\pi(W_1), \pi(W_2), \ldots$ must have at least one accumulation point, say $\vec{u}$. This point must be in the closure of the set Q. But, in fact, Q is a closed set (because entropy is a continuous function) and so $\vec{u} \in Q$. By part (a) of Theorem B.2, $\pi(W_i) \in S^{\vec{\tau}}[KB \wedge \neg\sigma]$ for every $i$ and so, since this space is closed, $\vec{u} \in S^{\vec{\tau}}[KB \wedge \neg\sigma]$ as well. But this means that $\vec{u}$ is an unsafe maximum-entropy point, contrary to the definition and assumption of stability.

In the remainder of this section we prove Theorem 4.28. For this purpose, fix $KB = KB' \wedge \psi$, $\varphi$, and $\sigma$ to be as in the statement of this theorem, and let $\vec{v}$ be the unique maximum-entropy point of $S^{\vec{0}}[KB]$.

Proposition D.2: (a) If a complete description D is inconsistent with $\psi_{\ne}$, then $\Pr_\infty(D \mid KB) = 0$. (b) If D is consistent with $\psi_{\ne}$, then

$$\Pr_\infty(D \mid KB) = \frac{F^{[D]}(\vec{v})}{\sum_{D' \in \mathcal{A}(\psi \wedge \psi_{\ne})} F^{[D']}(\vec{v})}.$$

If $v_{i_j} > 0$, then $\vec{v}$ satisfies this new constraint anyway and so remains the maximum-entropy point, completing this step of the induction. If $v_{i_j} = 0$ this is not the case, and indeed, the property we are trying to prove can be false (for $j < m$). But this does not matter, because we then know that $\Pr_\infty(A_{i_j}(c_j) \mid \bigwedge_{\ell=j+1}^{m} A_{i_\ell}(c_\ell) \wedge KB') = \Pr_\infty(A_{i_j}(c_j) \mid KB') = v_{i_j} = 0$. Since both of the products in question include a 0 factor, it is irrelevant as to whether the other terms agree.

We can now put everything together to conclude that

$$\Pr_\infty(D \mid KB) = \frac{\Pr_\infty(D \mid KB')}{\sum_{E \in \mathcal{A}(\psi \wedge \psi_{\ne})} \Pr_\infty(E \mid KB')} = \frac{F^{[D]}(\vec{v})}{\sum_{E \in \mathcal{A}(\psi \wedge \psi_{\ne})} F^{[E]}(\vec{v})},$$

proving part (b).

We now address the issue of computing $\Pr_\infty(\varphi \mid KB)$ for an arbitrary formula $\varphi$. To do that, we must first investigate the behavior of $\Pr^{\vec{\tau}}_\infty(\varphi \mid KB)$ for small $\vec{\tau}$. Fix some sufficiently small $\vec{\tau} > 0$, and let Q be the set of maximum-entropy points of $S^{\vec{\tau}}[KB]$. Assume KB and $\vec{\tau}$ are stable for $\sigma$. By definition, this means that every $\vec{v}' \in Q$ has size description $\sigma$. Let I be the set of $i$'s for which $\sigma$ contains the conjunct $\exists x\, A_i(x)$. Since every point in Q has size description $\sigma$, we must have $v'_i > 0$ for all $i \in I$ and all $\vec{v}' \in Q$. Since Q is a closed set, this implies that there exists some $\beta > 0$ such that for all $\vec{v}' \in Q$ and for all $i \in I$, we have $v'_i > \beta$. Let $\sigma[\beta]$ be the formula

$$\bigwedge_{i \in I} \|A_i(x)\|_x > \beta.$$

The following proposition is now easy to prove:

Proposition D.3: Suppose that KB and $\vec{\tau}$ are stable for $\sigma$ and that Q, $\beta$, $\sigma[\beta]$, and $\psi_{\ne}$ are as above. Then ...

We now simplify the expression $\Pr^{\vec{\tau}}_\infty(\varphi \mid KB \wedge \sigma[\beta] \wedge \psi_{\ne} \wedge D)$.

The probability that $d_1, \ldots, d_j, d$ also satisfy the remaining conjuncts of $D''$, given that $d$ is in atom $A_i$ and $d_1, \ldots, d_j$ satisfy $D'$, is very small but bounded away from 0. (For this to hold, we need the assumption that the non-unary predicates are not mentioned in the KB.) This is the case because the total number of possible ways to choose the properties of $d$ (as they relate to $d_1, \ldots, d_j$) is independent of N. We can therefore conclude that the probability of $B(d)$ (for sufficiently large N), given that $d_1, \ldots, d_j$ satisfy D, is bounded away from 0 by some $\gamma$ independent of N.
Since the properties of an element $d$ and its relation to $d_1, \ldots, d_j$ can be chosen independently of the properties of a different element $d'$, the different events $B(d), B(d'), \ldots$ are all independent. Therefore, the probability that there is no domain element at all that, together with $d_1, \ldots, d_j$, satisfies $D''$ is at most $(1 - \gamma)^{N-j}$. This bounds the probability of the extension axiom being false, relative to fixed $d_1, \ldots, d_j$. There are at most $N^j$ ways of choosing these $j$ elements, so the probability of the axiom being false anywhere in a model is at most $N^j (1 - \gamma)^{N-j}$. This tends to 0 as N goes to infinity. Therefore, the extension axiom $\forall x_1, \ldots, x_j\, (D' \Rightarrow \exists x_{j+1}\, D'')$ has asymptotic probability 1 given $KB \wedge \sigma[\beta] \wedge \psi_{\ne} \wedge D$, as desired.

Finally, we are in a position to prove Theorem 4.28.

Theorem 4.28: Let $\varphi$ be a formula in $\mathcal{L}$ and let $KB = KB' \wedge \psi$ be an essentially positive knowledge base in $\mathcal{L}_1$ which is separable with respect to $\varphi$. Let Z be the set of constants appearing in $\varphi$ or in $\psi$ (so that $KB'$ contains none of the constants in Z) and let $\psi_{\ne}$ be the formula $\bigwedge_{c, c' \in Z} c \ne c'$. Assume that there exists a size description $\sigma$ such that, for all $\vec{\tau} > 0$, KB and $\vec{\tau}$ are stable for $\sigma$, and that the space $S^{\vec{0}}[KB]$ has a unique maximum-entropy point $\vec{v}$. Then

$$\Pr_\infty(\varphi \mid KB) = \frac{\sum_{D \in \mathcal{A}(\psi \wedge \psi_{\ne})} \Pr_\infty(\varphi \mid \psi \wedge D)\, F^{[D]}(\vec{v})}{\sum_{D \in \mathcal{A}(\psi \wedge \psi_{\ne})} F^{[D]}(\vec{v})}$$

if the denominator is positive.

Proof: Assume without loss of generality that $\psi$ mentions all the constant symbols in $\varphi$, so that $\mathcal{A}(\psi \wedge \psi_{\ne}) \subseteq \mathcal{A}(\psi)$. By Proposition D.3, we obtain the corresponding expression for $\Pr^{\vec{\tau}}_\infty(\varphi \mid KB)$ for each sufficiently small $\vec{\tau}$. We can now take the limit as $\vec{\tau}$ goes to $\vec{0}$. To do this, we use Proposition D.2. The hypotheses of the theorem imply that $\Pr_\infty(\psi \mid KB') > 0$ (for otherwise, the denominator $\sum_{D \in \mathcal{A}(\psi \wedge \psi_{\ne})} F^{[D]}(\vec{v})$ would be zero). Part (a) of the proposition tells us we can ignore those complete descriptions that are inconsistent with $\psi_{\ne}$. We can now apply part (b) to get the desired result.
References

Bacchus, F. (1990). Representing and Reasoning with Probabilistic Knowledge. MIT Press.

Bacchus, F., Grove, A. J., Halpern, J. Y., & Koller, D. (1993). From statistical knowledge bases to degrees of belief. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI '93).

Bochnak, J., Coste, M., & Roy, M. (1987). Géométrie Algébrique Réelle. Springer-Verlag.

Carnap, R. (1950). Logical Foundations of Probability. University of Chicago Press.

Carnap, R. (1952). The Continuum of Inductive Methods. University of Chicago Press.

Cheeseman, P. C. (1983). A method of computing generalized Bayesian probability values for expert systems. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI '83).

Denbigh, K. G., & Denbigh, J. S. (1985). Entropy in Relation to Incomplete Knowledge. Cambridge University Press.

Fagin, R. (1976). Probabilities on finite models. Journal of Symbolic Logic.

Fagin, R., Halpern, J. Y., & Megiddo, N. (1990). A logic for reasoning about probabilities. Information and Computation.

Geffner, H., & Pearl, J. (1990). A framework for reasoning with defaults. Kluwer Academic Press.

Ginsberg, M. L. (Ed.) (1987). Readings in Nonmonotonic Reasoning. Morgan Kaufmann.

Glebskiĭ, Y. V., Kogan, D. I., Liogon'kiĭ, M. I., & Talanov, V. A. (1969). Range and degree of realizability of formulas in the restricted predicate calculus. Kibernetika.

Goldman, S. A. (1987). Efficient methods for calculating maximum entropy distributions. Master's thesis, MIT.

Goldszmidt, M., Morris, P., & Pearl, J. (1990). A maximum entropy approach to nonmonotonic reasoning. In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI '90).

Goldszmidt, M., Morris, P., & Pearl, J. (1993). A maximum entropy approach to nonmonotonic reasoning. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Grandjean, E. (1983). Complexity of the first-order theory of almost all structures. Information and Control.

Grove, A. J., Halpern, J. Y., & Koller, D. (1992). Random worlds and maximum entropy. In Proceedings of the Seventh IEEE Symposium on Logic in Computer Science (LICS '92).

Grove, A. J., Halpern, J. Y., & Koller, D. (1993a). Asymptotic conditional probabilities: the non-unary case.

Grove, A. J., Halpern, J. Y., & Koller, D. (1993b). Asymptotic conditional probabilities: the unary case. Research report RJ, IBM.

Halpern, J. Y. (1990). An analysis of first-order logics of probability. Artificial Intelligence.

Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical Review.

Jaynes, E. T. (1978). Where do we stand on maximum entropy? MIT Press.

Jaynes, E. T. (1982). On the rationale of maximum-entropy methods. Proceedings of the IEEE.

Jaynes, E. T. (1983). Concentration of distributions at entropy maxima. Kluwer.

Keynes, J. M. (1921). A Treatise on Probability. Macmillan.

Koller, D., & Halpern, J. Y. (1992). A logic for approximate reasoning. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning (KR '92). Morgan Kaufmann.

Kyburg, H. E. (1983). The reference class. Philosophy of Science.

Laplace, P. S. (1820). Essai Philosophique sur les Probabilités. English translation: Philosophical Essay on Probabilities, Dover Publications.

Luce, R. D., & Raiffa, H. (1957). Games and Decisions. Wiley.

Nilsson, N. (1986). Probabilistic logic. Artificial Intelligence.

Paris, J. B., & Vencovská, A. (1989). On the applicability of maximum entropy to inexact reasoning. International Journal of Approximate Reasoning.

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann.

Pearl, J. (1989). Probabilistic semantics for nonmonotonic reasoning: a survey. In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning (KR '89). Morgan Kaufmann.

Pollock, J. L. (1984). Foundations for direct inference. Theory and Decision.

Reichenbach, H. (1949). Theory of Probability. University of California Press.

Shannon, C., & Weaver, W. (1949). The Mathematical Theory of Communication. University of Illinois Press.

Shastri, L. (1989). Default reasoning in semantic networks: a formalization of recognition and inheritance. Artificial Intelligence.

Spiegelhalter, D. J. (1986). Probabilistic reasoning in predictive expert systems.

Tarski, A. (1951). A Decision Method for Elementary Algebra and Geometry. University of California Press.
1. Introduction

In recent information extraction systems, most individual pieces of information to be extracted directly from a text are usually identified by key word search or simple pattern search in the preprocessing stage (Lehnert et al., 1993; Weischedel et al., 1993; Cowie et al., 1993; Jacobs et al., 1993). Among the systems presented at the Fifth Message Understanding Conference (MUC-5), however, the main architectures ranged from pattern matching to full or fragment parsing (Onyshkevych, 1993). Full or fragment parsing systems, in which several knowledge sources such as syntax, semantics, and domain knowledge are combined at run-time, are generally so complicated that changing a part of the system tends to affect the other components. In past information extraction research, this interference has slowed development (Jacobs, 1993; Hobbs et al., 1992). A pattern matcher, which identifies only patterns of interest, is more appropriate for information extraction from texts in narrow domains, since this task does not require full understanding of the text. TEXTRACT, the information extraction system described here, uses a pattern matcher similar to SRI's FASTUS pattern matcher (Hobbs et al., 1992). The matcher is implemented as a finite-state automaton. Unlike other pattern matchers, TEXTRACT's matcher deals with word matching problems caused by the word segmentation ambiguities often found in Japanese compound words.

The goal of the pattern matcher is to identify the concepts represented by words and phrases in the text. The pattern matcher first performs a simple key-word-based concept search, locating individual words associated with concepts. The second step is a template pattern search, which locates phrasal patterns involving critical pieces of information identified by the preprocessor. The template pattern search identifies relationships between matched objects in the defined pattern as well as recognizing the concept behind the relationship. One typical concept is the relationship of "economic activity" in which companies can participate with each other.

It is usually difficult to determine the relationships among pieces of information which have been identified in separate sentences. These relationships are often stated implicitly, and even if the text explicitly mentions them, the descriptions are often located far enough apart to make detection difficult. Although the importance of discourse processing for information extraction has been emphasized in the Message Understanding Conferences (Lehnert & Sundheim, 1991; Hirschman, 1992), no system presented has satisfactorily addressed the issue.

The discourse processor in TEXTRACT is able to correlate individual pieces of information throughout the text. TEXTRACT merges concepts which the pattern matcher has identified separately (and usually in different sentences) when the concepts involve the same companies. TEXTRACT can unify multiple references to the same company even when the company name is missing, abbreviated, or pronominalized. Furthermore, the processor segments the discourse to isolate portions of text relevant to a particular conceptual relationship.
The discourse segmentation lessens the chance of merging unrelated information (Kitani, 1994).

This paper analyzes some evaluation results for TEXTRACT's discourse module and describes the TIPSTER/MUC-5 evaluation results in order to assess overall system performance.

2. TIPSTER information extraction task

The goal of the TIPSTER project sponsored by ARPA is to capture information of interest from English and Japanese newspaper articles about corporate joint ventures and microelectronics. A system must fill a generic template with information extracted from the text by a fully automated process. The template is composed of several objects, each containing several slots. Slots may contain pointers to related objects (Tipster, 1992). Extracted information is to be stored in an object-oriented database.

In the joint ventures domain, the task is to extract information concerning joint venture relationships which organizations form and dissolve. The template structure represents these relationships with tie-up-relationship objects, which contain pointers to organization entity objects representing the organizations involved. Entity objects contain pointers to other objects such as person and facility objects, as shown in Figure 1.

In the microelectronics domain, extraction focuses on layering, lithography, etching, and packaging processes in semiconductor manufacturing for microchip fabrication. The entities extracted include manufacturer, distributor, and user, in addition to detailed manufacturing information such as materials used and microchip specifications such as wafer size and device speed. The microelectronics template structure is similar to that of the joint ventures but has fewer objects and slots.

Both of these extraction tasks must identify not only individual entities but also certain relationships among them. Often, however, a particular piece of extracted information describes only part of a relationship. This partial information must be merged with other pieces of information referring to the same entities. For merging to produce correct results, therefore, correct identification of entity references is crucial.

3. Problem definition

This section first describes word matching problems caused by the word segmentation ambiguities. Difficulties of reference resolution of company names are then explained. Issues of discourse segmentation and concept merging are also discussed using an example text.

Word segmentation

Japanese word segmentation in the preprocessor gives rise to a subsequent under-matching problem. When a key word in the text is not found in the word segmentor's lexicon, the segmentor tends to divide it into separate words. With our current lexicon, for example, the compound noun 提携解消 (teikei-kaisyo), consisting of two words, 提携 (teikei: joint venture) and 解消 (kaisyo: dissolve), is segmented into the two individual nouns. Thus a key word search for 提携解消 (teikei-kaisyo) does not succeed in the segmented sentence.

On the other hand, the pattern matching process allows, by default, partial matching between a key word and a word in the text. 提携 (teikei) and 業務提携 (gyoumu-teikei), both meaning "a joint venture", can both be matched against the single key word 提携 (teikei).
This flexibility creates an over-matching problem. For example, the key word シリコン (silicon) matches 二酸化シリコン (nisanka-silicon: silicon dioxide), although they are different materials to be reported in the microelectronics domain. These segmentation difficulties for compound nouns also cause major problems in word-based Japanese information retrieval systems (Fujii & Croft, 1993).

Company name references

In the corporate joint ventures domain, output templates mostly describe relationships among companies (as described in Section 2). Information of interest is therefore found in sentences which mention companies or their activities. It is essential for the extractor to identify topic companies (the main concern of the sentences in which they appear) in order to correlate other information identified in the sentence. There are three problems which make it difficult to identify topic companies.

Missing subject

Topic companies are usually the subject of a sentence. Japanese sentences frequently omit subjects, however, even in formal newspaper articles. The VENIEX system which NEC presented at MUC-5 can identify the company implied by a missing subject if there is an explicit reference to it in the immediately preceding sentence (Doi et al., 1993; Muraki et al., 1993). It is not clear whether VENIEX can resolve the missing reference when the explicit reference appears in a sentence further separated from the subjectless sentence.

Company name abbreviations

As is also seen in English, company names are often abbreviated in a Japanese text after their first appearance. A variety of ways to abbreviate company names in Japanese is given in (Karasawa, 1993).

Locating company name abbreviations is difficult, since many are not identified as companies by either a morphological analyzer or the name recognizer in the preprocessor. Another problem is that the variety of ways of abbreviating names makes it difficult to unify multiple references to one company. Almost all MUC-5 systems include a string matching mechanism to identify company name abbreviations. These abbreviations are specified in an aliases slot in the company entity object. To the authors' knowledge, none of the systems other than TEXTRACT can detect company name abbreviations of type (d) or (e) above without using a pre-defined abbreviation table.

Company name pronouns

Company name pronouns are often used in formal texts. Frequently used expressions include 両社 (ryosya: both companies), 同社 (dosya: the company), and 自社 (jisya: the company itself).

Reference resolution for 同社 (dosya: the company) is implemented in VENIEX (Doi et al., 1993). VENIEX resolves the pronominal reference in the same way as it identifies missing company references. The CRL/Brandeis DIDEROT system presented at MUC-5 simply chooses the nearest company name as the referent of "dosya". This algorithm was later improved by Wakao using corpus-based heuristic knowledge (Wakao, 1994). These systems do not handle pronominalized company names other than "dosya".
The three problems described in this section often cause individual information to be correlated with the wrong company or tie-up-relationship object. To avoid this error, the topic companies must be tracked from the context, since they can be used to determine which company objects an information fragment should be assigned to. Abbreviated and pronominalized company names must be unified as references to the same company.

Discourse segmentation and concept merging

In the joint ventures domain, a tie-up-relationship object contains pointers to other objects such as economic activities (as shown in Figure 1). When a company is involved in multiple tie-ups, merging information into a tie-up relationship according to topic companies sometimes yields incorrect results. Consider the following example: "X Corp. has tied up with Y Corp. X will start selling products in Japan next month. Last year X started a similar joint venture with Z Inc."

Obviously, the sale in the second sentence is related to the tie-up relationship of X and Y. However, since the topic company, which is the subject of a sentence, is X in all three sentences, the sale could also be related to the X and Z tie-up relationship. This incorrect merging can be avoided by separating the text into two blocks: the first two sentences describe the X and Y tie-up, and the last sentence describes the X and Z tie-up. Thus, discourse segmentation is necessary to identify portions of text containing related pieces of information. The CRL/Brandeis DIDEROT system segments the joint ventures text into two types of text structures (Cowie et al., 1993). It is not known how well their discourse segmentation performed, however.

Once the text is segmented, concepts or identified pieces of information can be merged within the same discourse segment. For example, the expected income from a joint venture is often stated in a sentence which does not explicitly mention the participating companies; they appear in the previous sentence. In this case, the joint venture concept identifying the companies and the income concept identifying the expected income must be merged so that the latter will be linked to the correct entity objects.

4. The solution

This section describes details of TEXTRACT's pattern matcher and discourse processor as well as the system architecture.

TEXTRACT architecture

TEXTRACT is an information extraction system developed for the TIPSTER Japanese domains of corporate joint ventures and microelectronics (Jacobs, 1993; Jacobs et al., 1993). As shown in Figure 2, the TEXTRACT joint ventures system comprises four major components: preprocessor, pattern matcher, discourse processor, and template generator. Because of its shorter development time, the TEXTRACT microelectronics system has a simpler configuration than the joint ventures system. It does not include the template pattern search in the pattern matcher, or the discourse segmentation and concept merging in the discourse processor, as also shown in Figure 2.

In the preprocessor, a Japanese segmentor called MAJESTY segments Japanese text into primitive words tagged with their parts of speech (Kitani, 1991).
Next, the name recognizer identifies proper names and monetary, numeric, and temporal expressions. MAJESTY tags proper names which appear in its lexicon; the name recognizer identifies additional proper names by locating name designators such as 社 (sya, corresponding to "Inc." or "Corp.") for company names. The recognizer extends the name string forward and backward from the designator until it meets search stop conditions (Kitani & Mitamura, 1993). The name segments are grouped into units which are meaningful to the pattern matching process (Kitani & Mitamura, 1994). Most strings to be extracted directly from the text are identified by MAJESTY and the name recognizer. Details of the pattern matcher and discourse processor are given in the following sections. The template generator assembles the extracted information and creates the output described in Section 2.

Pattern matcher

The following subsections describe the concept search and the template pattern search in the pattern matcher, which identify concepts in the sentence. Whereas the former simply searches for key words, the latter searches for phrasal patterns within a sentence. The template pattern search also identifies relationships between matched objects in the defined pattern. In the course of TEXTRACT development, key words and template patterns were obtained manually by a system developer using a KWIC (Key Word In Context) tool and referring to a word frequency list obtained from the corpus.

Concept search

Key words representing the same concept are grouped into a list and used to recognize the concept in a sentence. The list is written in a simple format: (concept-name word1 word2 ...). For example, key words for recognizing a dissolved joint venture concept, such as 提携解消 (teikei-kaisyo) and 解消 (kaisyo), can be grouped under the concept name "dissolved" in this way.

The concept search module recognizes a concept when it locates one of the associated words in a sentence. This simple procedure sometimes yields incorrect concepts. For example, the concept "dissolved" is erroneously identified from an expression such as "cancel a hotel reservation". Key-word-based concept search is most successful when processing text in a narrow domain in which words are used with restricted meanings.

The under-matching problem occurs when a compound noun in the key word list of a concept fails to match the text because the instance of the compound in the text has been segmented into separate primitive words. To avoid the problem, adjacent nouns in the text are automatically concatenated during the concept search process, generating compound nouns at run-time. The over-matching problem, on the other hand, arises when a key word successfully matches part of a compound noun which as a whole is not associated with the concept. Over-matching can be prevented by anchoring the beginning and/or end of a key word pattern to word boundaries (with the symbol ">" at the beginning and "<" at the end). For example, ">シリコン<" (silicon) must be matched against a single complete word in the text. Since this problem is rare, its solution is not automatic: system developers attach anchors to key words which are likely to over-match.
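To make the two failure modes and their fixes concrete, here is a small illustrative sketch of a key-word-based concept search over a pre-segmented sentence. The keyword tables, function names, and the three-word compound limit are our own illustrative choices, not TEXTRACT's actual data; the sketch only mirrors the mechanisms described above (run-time concatenation of adjacent nouns against under-matching, and boundary anchors ">" and "<" against over-matching).

    # Key words per concept; anchors ">"/"<" demand a whole-word match.
    CONCEPTS = {
        "dissolved": ["提携解消", "解消"],
        "material":  [">シリコン<"],      # must be a complete word
    }

    def candidates(words, max_compound=3):
        """Single words plus run-time compounds of adjacent words,
        which counters under-matching caused by segmentation."""
        for i in range(len(words)):
            surface = ""
            for j in range(i, min(i + max_compound, len(words))):
                surface += words[j]
                yield surface

    def matches(keyword, surface):
        whole = keyword.startswith(">") and keyword.endswith("<")
        core = keyword.strip("><")
        # Partial match is the default; anchors force a full match.
        return surface == core if whole else core in surface

    def concept_search(words):
        return {concept
                for concept, kws in CONCEPTS.items()
                if any(matches(kw, s) for kw in kws for s in candidates(words))}

    # 提携解消 was split by the segmentor, but concatenation recovers it:
    print(concept_search(["提携", "解消", "を", "発表"]))   # {'dissolved'}
    # The anchored keyword does not fire inside a larger compound:
    print(concept_search(["二酸化シリコン"]))               # set()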
Template pattern search

TEXTRACT's pattern matcher is implemented as a finite-state recognizer. This choice of implementation is based on the assumption that a finite-state grammar can efficiently handle many of the inputs that a context-free grammar covers (Pereira, 1990). The pattern matcher is similar to the pattern recognizer used in the MUC-4 FASTUS system developed at SRI (Hobbs et al., 1992).

Patterns for the TEXTRACT template pattern matcher are defined with rules similar to regular expressions. Each pattern definition specifies the concept associated with the pattern. (For the joint ventures domain, TEXTRACT uses eighteen concepts.)

In the matcher, state transitions are driven by segmented words or grouped units from the preprocessor. The matcher identifies all possible patterns of interest in the text that match defined patterns, recognizing the concepts associated with the patterns. For some inputs, the matcher must skip words that are not explicitly defined in the pattern.

Figure 3 shows definitions of equivalent Japanese and English patterns for recognizing the concept *joint-venture*. This English pattern is used to capture expressions such as "XYZ Corp. created a joint venture with PQR Inc." The notation "@string" represents a variable matching an arbitrary string. Variables whose names begin with "@cname" are called company-name variables and are used where a company name is expected to appear. In the definitions shown, a string matched by "@cname partner subj" is likely to contain at least one company name referring to a joint venture partner and functioning as the subject in a sentence.

The pattern "は/が:strict:P" matches the grammatical particles は (wa) and が (ga), which serve as subject case markers. The symbol "strict" specifies a full string match (the default in the case of template pattern search), whereas "loose" allows a partial string match. The verbal nominal pattern "提携:loose:VN" matches compound words such as 企業提携 (kigyo-teikei: a corporate joint venture) as well as 提携 (teikei: a joint venture).

The first field in a pattern is the pattern name, which refers to the concept associated with the pattern. The second field is a number indexing a field in the pattern. This field's contents are used to decide quickly whether or not to search within a given string. The matcher only applies the entire pattern to a string when the string contains the text in the indexed field. For efficiency, therefore, this field should contain the least frequent word in the entire pattern (in this case, 提携 (teikei) for Japanese and "a joint venture" for English).

The order of noun phrases is relatively unconstrained in a Japanese sentence. Case markers, usually attached to the ends of noun phrases, provide a strong clue for identifying the case role of each phrase (subject, object, etc.). Thus pattern matching driven mainly by case markers recognizes the case roles well without parsing the sentence.

Approximately 150 patterns are used to extract various concepts in the Japanese joint ventures domain. Several patterns usually match a single sentence. Moreover, since patterns are often defined with case markers such as は (wa), が (ga), and と (to), a single pattern can match a sentence in more than one way when several of the same case markers appear in the sentence. The template generator accepts only the best matched pattern, which is chosen by applying the following three heuristic rules in the order shown:

1. select patterns that include the largest number of matched company-name variables containing at least one company name;
2. select patterns that consume the fewest input segments (the shortest string match); and

3. select patterns that include the largest number of variables and defined words.

These heuristic rules were obtained from an examination of matched patterns reported by the system. To obtain more reliable heuristics, a large-scale statistical evaluation must be performed. Heuristics for a similar problem of pattern selection in English are discussed in (Rau & Jacobs, 1991). Their system chooses the pattern which consumes the most input segments (the longest string match), as opposed to TEXTRACT's choice of the shortest string match in its second heuristic rule.

Another important feature of the pattern matcher is that rules can be grouped according to their concept. The rule name "JointVenture1" in Figure 3, for example, represents the concept *joint-venture*. Using this grouping, the best matched pattern can be selected from matched patterns of a particular concept group instead of from all the matched patterns. This feature enables the discourse and template generation processes to narrow their search for the best information to fill a particular slot.
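As a rough illustration of case-marker-driven matching and of heuristic rule 2, the sketch below enumerates the possible bindings of a single, drastically simplified *joint-venture* pattern (a subject phrase marked by は or が, a partner phrase marked by と, followed somewhere by the key word 提携) and picks the binding that consumes the least material. This is a hand-rolled toy, not TEXTRACT's finite-state rule engine, and the pattern is a stand-in for the Figure 3 definitions.

    def candidate_bindings(sentence):
        """All ways to bind: SUBJECT (は|が) PARTNER と ... 提携"""
        bindings = []
        for i, ch in enumerate(sentence):
            if ch in "はが":                        # candidate subject marker
                subject, rest = sentence[:i], sentence[i + 1:]
                for j, ch2 in enumerate(rest):
                    if ch2 == "と" and "提携" in rest[j + 1:]:
                        bindings.append((subject, rest[:j]))
        return bindings

    def best_binding(sentence):
        """Heuristic 2: among all matches, prefer the one consuming the
        fewest input segments (approximated here by bound string length)."""
        bindings = candidate_bindings(sentence)
        return min(bindings, key=lambda b: len(b[0]) + len(b[1])) if bindings else None

    print(best_binding("A社は米B社と提携すると発表した"))   # ('A社', '米B社')

When the same case marker appears more than once, candidate_bindings returns several bindings and the min() call plays the role of the shortest-match rule.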
Discourse processor

The following subsections describe the algorithm of company name reference resolution throughout the discourse. Discourse segmentation and concept merging processes are also discussed.

Identifying topic companies

Since no syntactic analysis is performed in TEXTRACT, topic companies are simply identified wherever a subject case marker such as が (ga), は (wa), or も (mo) follows company names. If no topic companies are found in a sentence, the previous sentence's topic companies are inherited (even if the current sentence contains a non-company subject). This is based on the supposition that a sentence which introduces new companies usually mentions them explicitly in its subject.

Abbreviation detection and unification

Company name abbreviations have the following observed characteristics: MAJESTY tags most abbreviations as "unknown", "company", "person", or "place"; a company name precedes its abbreviations; an abbreviation is composed of two or more characters from the company name, in their original order; the characters need not be consecutive within the company name; and English word abbreviations must be identical with an English word appearing in the company name.

Thus the following are regarded as abbreviations: "unknown", "company", "person", and "place" segments composed of two or more characters which also appear in company names previously identified in the text. When comparing possible abbreviations against known company names, the length of the longest common subsequence or LCS (Wagner & Fischer, 1974) is computed to determine the maximum number of characters appearing in the same order in both strings.

To unify multiple references to the same company, a unique number is assigned to the source and abbreviated companies by the algorithm described in Figure 4. Repeated company names which contain strings appearing earlier in the text are treated as abbreviations (and thus given unique numbers).

    Step 1: Initialization to assign each entity in C a unique number.
        for i in C do (1 <= i <= cmax)
            C[i, "id"] <- i
        done

    Step 2: Search abbreviations and give unique numbers.
        for i in C do (1 <= i <= cmax)
            if C[i, "id"] != i then        # already recognized as an abbreviation
                continue i loop
            LENSRC <- length of C[i, "string"]
            for j in C do (i + 1 <= j <= cmax)
                if C[j, "id"] != j then    # already recognized as an abbreviation
                    continue j loop
                LEN <- length of C[j, "string"]
                LCS <- length of the LCS of C[i, "string"] and C[j, "string"]
                if LCS >= 2 then do
                    if C[i, "eg"] = "YES" and LENSRC = LCS = LEN then
                        C[j, "id"] <- C[i, "id"]   # an English word abbreviation
                    else if C[i, "eg"] = "NO" and LCS = LEN then
                        C[j, "id"] <- C[i, "id"]   # an abbreviation
                done
            done
        done

    Figure 4: Algorithm to unify multiple references to the same company

In the pseudocode shown, all identified company names are stored in an associative array named C. "Unknown", "company", "person", and "place" segments are also stored in the array as possible abbreviations. Company names are sorted in ascending order of their starting position in the text and numbered from 1 to cmax (Step 1). A company name string which is indexed i can be addressed by C[i, "string"]. A flag C[i, "eg"] records whether the company name is an English word abbreviation or not.

Step 2 compares each company name in the array C with all names higher in the array (and thus later in the text). When the LCS of a pair of earlier and later company names is equal to the length of the later company name, the later company name is recognized as an abbreviation of the earlier company name. Then, the "id" of the later company name is replaced with that of the earlier company name. The LCS must be two or more characters, and if the abbreviation is an English word, the LCS must be equal to the length of the earlier company name.

At the end of execution, a number is given in C[i, "id"]. If C[i, "id"] was changed during execution, C[i, "string"] was recognized as a company name abbreviation.
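A compact executable rendering of Figure 4 is given below; the dynamic-programming LCS routine and the dictionary-based record layout are our own choices, while the matching conditions follow the pseudocode above.

    def lcs_len(a, b):
        """Length of the longest common subsequence: the maximum number of
        characters appearing in the same order in both strings."""
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, ca in enumerate(a, 1):
            for j, cb in enumerate(b, 1):
                dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
        return dp[len(a)][len(b)]

    def unify(names):
        """names: records with 'string' and 'eg' (English word flag), sorted
        by first appearance. Returns ids; ids[j] != j marks an abbreviation."""
        ids = list(range(len(names)))                    # Step 1
        for i, src in enumerate(names):                  # Step 2
            if ids[i] != i:
                continue                                 # already an abbreviation
            for j in range(i + 1, len(names)):
                if ids[j] != j:
                    continue
                later = names[j]
                common = lcs_len(src["string"], later["string"])
                if common < 2:
                    continue
                if src["eg"] and common == len(src["string"]) == len(later["string"]):
                    ids[j] = ids[i]                      # English word abbreviation
                elif not src["eg"] and common == len(later["string"]):
                    ids[j] = ids[i]                      # subsequence abbreviation
        return ids

    names = [{"string": "田辺製薬", "eg": False},
             {"string": "田辺", "eg": False}]            # characters in order
    print(unify(names))                                  # [0, 0]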
Anaphora resolution of company name pronouns

The approach for reference resolution described in this section is based on heuristics obtained by corpus analysis rather than linguistic theories. Three company name pronouns are the target of reference resolution: 両社 (ryosya), 同社 (dosya), and 自社 (jisya), meaning "both companies", "the company", and "the company itself". They are three of the most frequent company name pronouns appearing in our corpus provided by ARPA for the TIPSTER information extraction project. "Ryosya", "dosya", and "jisya" appeared 456, 277, and 129 times, respectively, in 1100 newspaper articles containing an average of 481 characters per article.

The following heuristics, derived from analysis of pronoun reference in the corpus, were used for reference resolution: "ryosya" almost always referred to the "current" tie-up companies, with one exception in a hundred occurrences; about ninety percent of "dosya" occurrences referred to the topic company when there was only one possible referent in the same sentence, but when more than two companies, including the topic company, preceded "dosya" in the same sentence, about seventy-five percent of the pronoun occurrences referred to the nearest company, not necessarily the topic company; and about eighty percent of "jisya" occurrences referred to the topic company.

Two additional heuristic rules were discovered but not implemented in TEXTRACT: about four percent of "jisya" occurrences referred to more than one company; and about eight percent of "jisya" occurrences referred to entities which are general expressions about a company, such as 会社 (kaisya: a company). As a result of the discourse processing described above, every company name, including abbreviations and pronominal references, is given a unique number.

Discourse segmentation and concept merging

In the 150 articles of the TIPSTER/MUC-5 joint ventures test set, multiple tie-up relationships appeared in thirty-one articles which included ninety individual tie-up relationships. The two typical discourse models representing the discourse structures of tie-up relationships are shown in Figure 5.

Type-I: tie-ups are described sequentially. Descriptions of tie-ups appear sequentially in this model. One tie-up is not mentioned again after a new tie-up has been described.

Type-II: a main tie-up reappears after other tie-ups are mentioned. A major difference from the Type-I model is that a description of a main tie-up reappears in the text after other tie-up relationships have been introduced. Non-main tie-ups are usually mentioned briefly.

The two types of text structure described above are similar to the ones implemented in the CRL/Brandeis DIDEROT joint ventures system. The difference is only in the Type-II structure: DIDEROT processes all tie-up relationships which reappear in the text, not just the reappearing main tie-up focused on by TEXTRACT.

TEXTRACT's discourse processor divides the text when a different tie-up relationship is identified by the template pattern search. A different tie-up relationship is recognized when the numbers assigned to the joint venture companies are not identical to those appearing in the previous tie-up relationships. DIDEROT segments the discourse if any other related pieces of information, such as date and entity location, are different between the tie-up relationships. Such strict merging is preferable when the pieces of information in comparison are correctly identified. The merging conditions of discourse segments should be chosen according to the accuracy of identification of the information to be compared.
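The following minimal sketch captures the segmentation rule just described, under a strong simplification: each sentence is assumed to arrive with the set of unified company ids of any tie-up it asserts, and a new segment is opened whenever that set differs from the current one. The data layout is hypothetical; TEXTRACT works from the template pattern search output rather than from pre-labeled pairs.

    def segment(sentences):
        """sentences: (text, tieup) pairs; tieup is a frozenset of unified
        company ids, or None if the sentence asserts no tie-up relationship."""
        segments, current, current_tieup = [], [], None
        for text, tieup in sentences:
            if tieup is not None and tieup != current_tieup:
                if current:
                    segments.append(current)     # a different tie-up: split here
                current, current_tieup = [], tieup
            current.append(text)
        if current:
            segments.append(current)
        return segments

    article = [
        ("X Corp. has tied up with Y Corp.",                frozenset({1, 2})),
        ("X will start selling products in Japan.",         None),
        ("Last year X started a joint venture with Z Inc.", frozenset({1, 3})),
    ]
    print(segment(article))   # two segments: the X-Y block and the X-Z block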
"On the eighth (of this month), Tanabe Pharmaceuticals made a joint venture contract with a German pharmaceutical maker, Merck and Co. Inc., to develop and sell its new medicine in Japan. They also agreed that both companies would invest equally to establish a joint venture company in five or six years, when they start selling the new medicine."

The two company names in the first sentence, "tanabe seiyaku" (Tanabe Pharmaceuticals) and "ei meruku sya" (Merck and Co. Inc.), are identified by either majesty or the name recognizer during preprocessing. Next, the template pattern search locates in the first sentence the "economic activity" pattern shown in Figure 7 (a). The *economic-activity* concept relating the two companies has now been recognized. The template pattern search also recognizes the *establish* concept in the second sentence by the template pattern shown in Figure 7 (b).

After sentence-level processing, discourse processing recognizes that "両社" (ryosya: both companies) in the second sentence refers to Tanabe Pharmaceuticals and Merck in the first sentence, because they are the current tie-up companies. Since the second sentence does not introduce a new tie-up relationship, both sentences are in the same discourse segment. Concepts separately identified in the two sentences can now be merged, because the subjects of the two sentences are the same. The *establish* concept is therefore joined to the *economic-activity* concept. " }, { "figure_ref": [], "heading": "Performance evaluation", "publication_ref": [], "table_ref": [], "text": "This section shows some evaluation results for textract's discourse module. muc-5 evaluation metrics and overall textract performance are also discussed." }, { "figure_ref": [], "heading": "Unique identification of company name abbreviations", "publication_ref": [], "table_ref": [], "text": "A hundred joint ventures newspaper articles used for the tipster 18-month evaluation were chosen as a blind test set for this evaluation. The evaluation measures were recall, the percentage of correct answers extracted compared to possible answers, and precision, the percentage of correct answers extracted compared to actual answers. majesty and the name recognizer identified company names in the evaluation set with recall of seventy-five percent and precision of ninety-five percent when partial matches between expected and recognized strings were allowed, and with recall of sixty-nine percent and precision of eighty-seven percent in an exact matching condition.

Company names that appeared in a form different from their first appearance in an article were considered to be company name abbreviations. Among 318 abbreviations, the recall and precision of abbreviation detection were sixty-seven and eighty-nine percent, respectively. Most importantly, detected abbreviations were unified correctly with the source companies as long as the source companies were identified correctly by majesty and the name recognizer.

The evaluation results clearly show that company name abbreviations were accurately detected and unified with the source companies as long as company names were correctly identified by the preceding processes. It is possible, however, that the simple string matching algorithm currently used could erroneously unify similar company names, which are often seen among family companies." },
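The recall and precision figures above follow the standard muc-style definitions; a small helper makes the arithmetic explicit. The counts in the usage comment are hypothetical values back-calculated to be consistent with the reported sixty-seven / eighty-nine percent.

```python
def recall_precision(correct, possible, actual):
    """correct: answers the system got right; possible: answers in the
    answer key; actual: answers the system actually produced."""
    recall = correct / possible if possible else 0.0
    precision = correct / actual if actual else 0.0
    return recall, precision

# e.g., abbreviation detection over 318 possible abbreviations:
# recall_precision(correct=213, possible=318, actual=239)  ->  (~0.67, ~0.89)
```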
{ "figure_ref": [], "heading": "Anaphora resolution of company name pronouns", "publication_ref": [], "table_ref": [], "text": "The accuracy of reference resolution for "ryosya", "dosya", and "jisya" is shown in Table 1. The numbers in parentheses were obtained by restricting attention to pronouns which referred to companies identified correctly by the preceding processes. Since companies referred to by "ryosya" (both companies) were usually "current" tie-up companies in the joint ventures domain, reference resolution accuracy depended on the accuracy with which tie-up relationships were identified.

  company name pronoun                     number of references   resolution accuracy
  "両社" (ryosya: both companies)           101 (93)               64% (70%)
  "同社" (dosya: the company)               100 (90)               78% (87%)
  "自社" (jisya: the company itself)         60 (53)               78% (89%)

  Table 1: Accuracy of reference resolutions

A major cause of incorrect references of "dosya" was the failure to locate topic companies. The simple mechanism of searching for topic companies using case markers did not work well. A typical problem can be seen in an example glossed as "A joint venture partner is X Corp." Here X Corp. is the topic company, but the subject "X Corp." is not followed by a subject case marker. Other errors can be attributed to the fact that "dosya" did not always refer to a topic company, as discussed in the heuristic rules for "dosya" reference resolution.

Regarding "jisya" resolutions, five instances which should have referred to multiple companies were bound to a single company. Since multiple companies are usually listed using conjunctions such as "と" (to: and) and the comma "、", they can be identified easily if a simple phrase analysis is performed.

It became clear from this evaluation that resolving "dosya" references to a non-topic company required intensive text understanding. Forty-seven percent of the occurrences of "dosya" and "jisya" were bound to topic companies inherited from a previous sentence. This result strongly supported the importance of keeping track of topic companies throughout the discourse." }, { "figure_ref": [], "heading": "Discourse segmentation", "publication_ref": [ "b1" ], "table_ref": [ "tab_2", "tab_2" ], "text": "Thirty-one of the 150 tipster/muc-5 evaluation test articles included ninety multiple tie-up relationships. textract's discourse processor segmented the thirty-one articles into seventy-one individual tie-up relationship blocks. Only thirty-eight of the blocks were correctly segmented. Main tie-up relationships which reappeared in Type-II discourse structures were not detected well, which caused the structures to be incorrectly recognized as Type-I. This error was caused by the fact that the joint venture relationships were usually mentioned implicitly when they reappeared in the text. For example, a noun phrase glossed as "the joint venture this time", which was not detected by the template patterns used, brought the focus back to the main tie-up. As a result, textract identified eight percent fewer tie-up relationships than the possible number expected in the tipster/muc-5 evaluation. This merging error must have affected system performance, since the information in the reappearing main tie-up segment would not have been correctly linked to the earlier main tie-up segment.

This preliminary study suggested that recognizing segmentation points in the text should be regarded as crucial for performance.
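The merge test textract actually applied keys only on the unified company numbers of the tie-up participants, as described in the previous section. A minimal sketch, with the record layout assumed:

```python
def find_matching_tie_up(new_companies, previous_tie_ups):
    """new_companies: set of unified company ids in the newly found tie-up.
    previous_tie_ups: list of sets of company ids seen so far in the article.
    Returns the index of a matching previous tie-up to merge into, or None
    to start a new discourse segment."""
    for idx, companies in enumerate(previous_tie_ups):
        if companies == new_companies:   # identical participants: merge
            return idx
    return None                          # different tie-up: segment here
```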
The template pattern matching alone was not good enough to recognize the segmentation points. The discourse processor simply segmented the text when it found a new tie-up relationship. The discourse models, currently unused at run-time in textract, could be used to help infer the discourse structure when the system is not sure whether to merge or separate discourse segments. Reference resolution of definite and indefinite noun phrases must also be solved for accurate discourse segmentation in future research.

The accuracy of discourse segmentation might be improved by checking the difference or identity of date and entity location, as well as entity name, when deciding whether or not to merge a tie-up relationship. textract did not take date and location objects into account in making segmentation decisions, because textract's identification of these objects was not considered reliable enough. For example, the date object was identified with recall of only twenty-seven percent and precision of fifty-nine percent. On the other hand, entities were identified with over eighty percent accuracy in both recall and precision. To avoid incorrect discourse segmentation, therefore, textract's merging conditions included only entity names as reliable information.

5.4 Overall textract performance

250 newspaper articles, 150 about Japanese corporate joint ventures and 100 about Japanese microelectronics, were provided by arpa for use in the tipster/muc-5 system evaluation. Six joint ventures systems and five microelectronics systems, including textract, developed at cmu as an optional system of ge-cmu shogun, were presented in the Japanese system evaluation at muc-5. A scoring program automatically compared the system output with answer templates created by human analysts. When a human decision was necessary, analysts instructed the scoring program whether the two strings in comparison were completely matched, partially matched, or unmatched. Finally, the scoring program calculated an overall score combined from all the newspaper article scores. Although various evaluation metrics were measured in the evaluation (Chinchor & Sundheim, 1993), only the following error-based and recall-precision-based metrics are discussed in this paper. The basic scoring categories used are: correct (COR), partially correct (PAR), incorrect (INC), missing (MIS), and spurious (SPU), counted as the number of pieces of information in the system output compared to the possible information.

(1) Error-based metrics:

  Error per response fill:  ERR = wrong / total = (INC + PAR/2 + MIS + SPU) / (COR + PAR + INC + MIS + SPU)
  Undergeneration:          UND = MIS / possible = MIS / (COR + PAR + INC + MIS)
  Overgeneration:           OVG = SPU / actual = SPU / (COR + PAR + INC + SPU)
  Substitution:             SUB = (INC + PAR/2) / (COR + PAR + INC)

The error per response fill (ERR) was the official measure of muc-5 system performance. Secondary evaluation metrics were undergeneration (UND), overgeneration (OVG), and substitution (SUB). The recall, precision, and F-measure metrics were used as unofficial metrics for muc-5.

Table 2 shows scores of textract and two other top-ranking official systems taken from the tipster/muc-5 system evaluation results.⁴ textract processed only the Japanese domains of corporate joint ventures (JJV) and microelectronics (JME), whereas the two other systems processed both English and Japanese text. textract performed as well as the top-ranking systems in the two Japanese domains.

The human performance of four well-trained analysts was reported to be about eighty percent in both recall and precision in the English microelectronics domain (Will, 1993). This is about thirty percent better than the best tipster/muc-5 systems' performance in P&R F-measure in the same language domain.
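The error-based metrics defined above reduce to simple arithmetic over the five scoring counts; an illustrative helper:

```python
def muc5_metrics(cor, par, inc, mis, spu):
    """Compute the muc-5 error-based metrics from the basic scoring counts:
    correct, partially correct, incorrect, missing, and spurious."""
    possible = cor + par + inc + mis
    actual = cor + par + inc + spu
    total = cor + par + inc + mis + spu
    return {
        "ERR": (inc + par / 2 + mis + spu) / total,
        "UND": mis / possible,
        "OVG": spu / actual,
        "SUB": (inc + par / 2) / (cor + par + inc),
    }
```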
In the Japanese joint ventures domain, textract scored recall of seventy-five percent and precision of eighty-one percent with a core template comprising only essential objects. This result suggests that the current technology could be used to support human extraction work if the task is well-constrained.

4. The textract scores submitted to muc-5 were unofficial. It was scored officially after the conference. Table 2 shows textract's official scores.

Running on a SUN SPARCstation IPX, textract processed a joint ventures article in about sixty seconds and a microelectronics article in about twenty-four seconds on average. The human analysts took about fifteen minutes to complete an English microelectronics template and about sixty minutes for a Japanese joint ventures template (Will, 1993). Thus a human-machine integrated system would be the best solution for fast, high-quality information extraction. Some tipster/muc-5 systems processed both Japanese and English domains. These systems generally performed better in the Japanese domains than in the corresponding English domains. One likely reason is that the structure of Japanese articles is fairly standard, particularly in the Japanese joint ventures domain, and can be readily analyzed into the two discourse structure types described in this paper. Another possible reason is a characteristic of writing style: expressions which need to be identified tend to appear in the first few sentences in a form suitable for pattern matching.

The textract Japanese microelectronics system copied the preprocessor, the concept search of the pattern matcher, and the company name unification of the discourse processor used in the textract Japanese joint ventures system. The microelectronics system was developed in only three weeks by one person, who replaced joint ventures concepts and key words with representative microelectronics concepts and key words. The lower performance of the textract microelectronics system compared to the joint ventures system is largely due to the short development time. It is also probably due to the less homogeneous discourse structure and writing style of the microelectronics articles." }, { "figure_ref": [], "heading": "Conclusions and future research", "publication_ref": [ "b0", "b4" ], "table_ref": [], "text": "This paper has described the importance of discourse processing in three aspects of information extraction: identifying key information throughout the text, i.e. topic companies and company name references in the tipster/muc-5 domains; segmenting the text to select relevant portions of interest; and merging concepts identified by the sentence-level processing. The basic performance of the system depends on the preprocessor, however, since many pieces of identified information are put directly into slots or are otherwise used to fill slots during later processing. textract's pattern matcher solves the matching problem caused by the segmentation ambiguities often found in Japanese compound words. The pattern matching system based on a finite-state automaton is simple and runs fast. These factors are essential for rapid system development and performance improvement.

To improve system performance with the pattern matching architecture, an increase in the number of patterns is unavoidable. Since matching a large number of patterns is a lengthy process, an efficient pattern matcher is required to shorten the running time.
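One standard finite-state approach to matching a large keyword set in a single left-to-right pass is an Aho-Corasick-style automaton. The sketch below illustrates that general idea only; it is not textract's actual matcher, whose template patterns are richer than plain keywords.

```python
from collections import deque

def build_automaton(patterns):
    """Build goto/fail/output tables for multi-pattern matching."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    q = deque(goto[0].values())          # depth-1 states fail to the root
    while q:
        r = q.popleft()
        for ch, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f][ch] if ch in goto[f] and goto[f][ch] != s else 0
            out[s] |= out[fail[s]]
    return goto, fail, out

def scan(text, automaton):
    """Return (start_index, pattern) for every match, in one pass."""
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```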
Tomita's new generalized LR parser, known to be one of the fastest parsers for practical purposes, skips unnecessary words during parsing (Bates & Lavie, 1991). The parser is under evaluation to investigate if it is appropriate for information extraction from Japanese text (Eriguchi & Kitani, 1993). Pattern matching alone, however, will not be able to improve the system performance to human levels in a complicated information extraction task such as tipster/muc-5, even if the task is well-defined and suitable for pattern matching. More effort should be made in discourse processing, such as discourse segmentation and reference resolution for definite and indefinite noun phrases.

The research discussed in this paper is based on an application-oriented, domain-specific, and language-specific approach relying on patterns and heuristic rules collected from a particular corpus. It is obvious that the patterns and heuristic rules described in this paper do not cover a wide range of applications, domains, or languages. The empirical approach described here is worth investigating even for an entirely new task, however, since it can achieve a high level of system performance in a relatively short development time. While linguistic theory-based systems tend to become complex and difficult to maintain, especially if they incorporate full text parsing, the simplicity of an empirically-based, pattern-oriented system such as textract keeps the development time short and the evaluation cycle quick.

Corpus analysis is a key element in this corpus-based paradigm. It is estimated that corpus analysis took about half of the development time for textract. Statistically-based corpus analysis tools are necessary to obtain better performance in a shorter development time. Such tools could help developers not only extract important patterns and heuristic rules from the corpus, but also monitor the system performance during the evaluation-improvement cycle." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors wish to express their appreciation to Jaime Carbonell, who provided the opportunity to pursue this research at the Center for Machine Translation, Carnegie Mellon University. Thanks are also due to Teruko Mitamura and Michael Mauldin for their many helpful suggestions." } ]
[ { "authors": "J Bates; A Lavie", "journal": "", "ref_id": "b0", "title": "Recognizing Substrings of LR(k) Languages in Linear Time", "year": "1991" }, { "authors": "N Chinchor; B Sundheim", "journal": "", "ref_id": "b1", "title": "MUC-5 Evaluation Metrics", "year": "1993" }, { "authors": "J Cowie; L Guthrie", "journal": "", "ref_id": "b2", "title": "CRL/BRANDEIS: Description of the Diderot System as Used for MUC-5", "year": "1993" }, { "authors": "S Doi; S Ando; K Muraki", "journal": "", "ref_id": "b3", "title": "Context Analysis in Information Extraction System Based on Keywords and Text Structure", "year": "1993" }, { "authors": "Y Eriguchi; T Kitani", "journal": "", "ref_id": "b4", "title": "A Preliminary Study of Using Tomita's Generalized LR Parser for Information Extraction", "year": "1993" }, { "authors": "H Fujii; B Croft", "journal": "", "ref_id": "b5", "title": "A Comparison of Indexing Techniques for Japanese Text Retrieval", "year": "1993" }, { "authors": "L Hirschman", "journal": "", "ref_id": "b6", "title": "An Adjunct Test for Discourse Processing in MUC-4", "year": "1992" }, { "authors": "J Hobbs; D Appelt", "journal": "SRI International", "ref_id": "b7", "title": "FASTUS: A System for Extracting Information from Natural-Language Text", "year": "1992" }, { "authors": "P Jacobs", "journal": "", "ref_id": "b8", "title": "TIPSTER/SHOGUN 18-Month Progress Report", "year": "1993" }, { "authors": "P Jacobs; G Krupka", "journal": "", "ref_id": "b9", "title": "GE-CMU: Description of the Shogun System Used for MUC-5", "year": "1993" }, { "authors": "I Karasawa", "journal": "", "ref_id": "b10", "title": "Detection of Company Name Abbreviations in Japanese Texts", "year": "1993" }, { "authors": "T Kitani", "journal": "", "ref_id": "b11", "title": "An OCR Post-processing Method for Handwritten Japanese Documents", "year": "1991" }, { "authors": "T Kitani", "journal": "", "ref_id": "b12", "title": "Merging Information by Discourse Processing for Information Extraction", "year": "1994" }, { "authors": "T Kitani; T Mitamura", "journal": "", "ref_id": "b13", "title": "A Japanese Preprocessor for Syntactic and Semantic Parsing", "year": "1993" }, { "authors": "T Kitani; T Mitamura", "journal": "Journal of Information Processing Society of Japan", "ref_id": "b14", "title": "An Accurate Morphological Analysis and Proper Name Identification for Japanese Text Processing", "year": "1994" }, { "authors": "W Lehnert; J McCarthy", "journal": "", "ref_id": "b15", "title": "UMASS/HUGHES: Description of the CIRCUS System Used for MUC-5", "year": "1993" }, { "authors": "W Lehnert; B Sundheim", "journal": "AI Magazine, Fall", "ref_id": "b16", "title": "A Performance Evaluation of Text-Analysis Technologies", "year": "1991" }, { "authors": "K Muraki; S Doi; S Ando", "journal": "", "ref_id": "b17", "title": "NEC: Description of the VENIEX System as Used for MUC-5", "year": "1993" }, { "authors": "B Onyshkevych", "journal": "", "ref_id": "b18", "title": "Technology Perspective", "year": "1993" }, { "authors": "F Pereira", "journal": "", "ref_id": "b19", "title": "Finite-State Approximations of Grammars", "year": "1990" }, { "authors": "L Rau; P Jacobs", "journal": "", "ref_id": "b20", "title": "Creating Segmented Databases from Free Text for Text Retrieval", "year": "1991" }, { "authors": "Tipster", "journal": "", "ref_id": "b21", "title": "Joint Venture Template Fill Rules", "year": "1992" }, { "authors": "R Wagner; M Fischer", "journal": "Journal of ACM", "ref_id": "b22", "title": "The String-to-String Correction Problem", "year": "1974" },
{ "authors": "T Wakao", "journal": "", "ref_id": "b23", "title": "Reference Resolution Using Semantic Patterns in Japanese Newspaper Articles", "year": "1994" }, { "authors": "R Weischedel; D Ayuso", "journal": "", "ref_id": "b24", "title": "BBN: Description of the PLUM System as Used for MUC-5", "year": "1993" }, { "authors": "C Will", "journal": "", "ref_id": "b25", "title": "Comparing Human and Machine Performance for Natural Language Information Extraction: Results for English Microelectronics from the MUC-5 Evaluation", "year": "1993" } ]
[]
Pattern Matching and Discourse Processing in Information Extraction from Japanese Text
Information extraction is the task of automatically picking up information of interest from an unconstrained text. Information of interest is usually extracted in two steps. First, sentence level processing locates relevant pieces of information scattered throughout the text; second, discourse processing merges coreferential information to generate the output. In the first step, pieces of information are locally identified without recognizing any relationships among them. A key word search or simple pattern search can achieve this purpose. The second step requires deeper knowledge in order to understand relationships among separately identified pieces of information. Previous information extraction systems focused on the first step, partly because they were not required to link up each piece of information with other pieces. To link the extracted pieces of information and map them onto a structured output format, complex discourse processing is essential. This paper reports on a Japanese information extraction system that merges information using a pattern matcher and discourse processor. Evaluation results show a high level of system performance which approaches human performance.
Tsuyoshi Kitani; Masami Hara
[ { "figure_caption": "Figure 1: Object-oriented template structure of the joint ventures domain", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2: textract system architecture", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3: A matching pattern for (a) Japanese and (b) English", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5: Discourse structure of tie-up relationships", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6: Example of concept merging", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7: Economic activity pattern (a) and establish pattern (b)", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b41", "b57", "b50", "b51", "b8", "b51", "b48", "b49", "b58", "b11", "b8", "b22", "b67", "b70", "b67", "b28", "b43", "b11" ], "table_ref": [], "text": "Current data collection technology provides a unique challenge and opportunity for automated machine learning techniques. Major scientific projects such as the Human Genome Project, the Hubble Space Telescope, and the human brain mapping initiative are generating enormous amounts of data on a daily basis. These streams of data require automated methods to analyze, filter, and classify them before presenting them in digested form to a domain scientist. Decision trees are a particularly useful tool in this context because they perform classification by a sequence of simple, easy-to-understand tests whose semantics is intuitively clear to domain experts. Decision trees have been used for classification and other tasks since the 1960s (Moret, 1982; Safavin & Landgrebe, 1991). In the 1980s, Breiman et al.'s book on classification and regression trees (CART) and Quinlan's work on ID3 (Quinlan, 1983, 1986) provided the foundations for what has become a large body of research on one of the central techniques of experimental machine learning.

Many variants of decision tree (DT) algorithms have been introduced in the last decade. Much of this work has concentrated on decision trees in which each node checks the value of a single attribute (Breiman, Friedman, Olshen, & Stone, 1984; Quinlan, 1986, 1993a). Quinlan initially proposed decision trees for classification in domains with symbolic-valued attributes (1986), and later extended them to numeric domains (1987). When the attributes are numeric, the tests have the form x_i > k, where x_i is one of the attributes of an example and k is a constant. This class of decision trees may be called axis-parallel, because the tests at each node are equivalent to axis-parallel hyperplanes in the attribute space. An example of such a decision tree is given in Figure 1, which shows both a tree and the partitioning it creates in a 2-D attribute space.

Figure 1: The left side of the figure shows a simple axis-parallel tree that uses two attributes. The right side shows the partitioning that this tree creates in the attribute space.

Researchers have also studied decision trees in which the test at a node uses boolean combinations of attributes (Pagallo, 1990; Pagallo & Haussler, 1990; Sahami, 1993) and linear combinations of attributes (see Section 2). Different methods for measuring the goodness of decision tree nodes, as well as techniques for pruning a tree to reduce overfitting and increase accuracy, have also been explored, and will be discussed in later sections.

In this paper, we examine decision trees that test a linear combination of the attributes at each internal node. More precisely, let an example take the form X = (x_1, x_2, ..., x_d, C_j), where C_j is a class label and the x_i's are real-valued attributes.¹ The test at each node will then have the form:

  Σ_{i=1}^{d} a_i x_i + a_{d+1} > 0        (1)

where a_1, ..., a_{d+1} are real-valued coefficients. Because these tests are equivalent to hyperplanes at an oblique orientation to the axes, we call this class of decision trees oblique decision trees. (Trees of this form have also been called "multivariate" (Brodley & Utgoff, 1994). We prefer the term "oblique" because "multivariate" includes non-linear combinations of the variables, i.e., curved surfaces. Our trees contain only linear tests.)
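The node test in Eq. 1 is just a signed linear form; a minimal sketch of evaluating it in Python, with the coefficient layout assumed (constant term stored last):

```python
def oblique_test(a, x):
    """a: coefficients a_1, ..., a_{d+1}, with a[-1] the constant term;
    x: attribute values x_1, ..., x_d. Returns True if the example falls
    on the 'greater' side of the hyperplane of Eq. 1."""
    d = len(x)
    return sum(a[i] * x[i] for i in range(d)) + a[d] > 0
```

Note that the familiar axis-parallel test x_i > k is the special case where a_i = 1, a_{d+1} = -k, and every other coefficient is 0.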
It is clear that these are simply a more general form of axis-parallel trees, since by setting a_i = 0 for all coefficients but one, the test in Eq. 1 becomes the familiar univariate test. Note that oblique decision trees produce polygonal (polyhedral) partitionings of the attribute space, while axis-parallel trees produce partitionings in the form of hyper-rectangles that are parallel to the feature axes.

It should be intuitively clear that when the underlying concept is defined by a polygonal space partitioning, it is preferable to use oblique decision trees for classification. For example, there exist many domains in which one or two oblique hyperplanes will be the best model to use for classification. In such domains, axis-parallel methods will have to approximate the correct model with a staircase-like structure, while an oblique tree-building method could capture it with a tree that is both smaller and more accurate.² Figure 2 gives an illustration.

1. The constraint that x_1, ..., x_d are real-valued does not necessarily restrict oblique decision trees to numeric domains. Several researchers have studied the problem of converting symbolic (unordered) domains to numeric (ordered) domains and vice versa; e.g., (Breiman et al., 1984; Hampson & Volper, 1986; Utgoff & Brodley, 1990; Van de Merckt, 1992, 1993). To keep the discussion simple, however, we will assume that all attributes have numeric values.

Figure 2: The left side shows a simple 2-D domain in which two oblique hyperplanes define the classes. The right side shows an approximation of the sort that an axis-parallel decision tree would have to create to model this domain.

Breiman et al. first suggested a method for inducing oblique decision trees in 1984. However, there has been very little further research on such trees until relatively recently (Utgoff & Brodley, 1990; Heath, Kasif, & Salzberg, 1993b; Murthy, Kasif, Salzberg, & Beigel, 1993; Brodley & Utgoff, 1994). A comparison of existing approaches is given in more detail in Section 2. The purpose of this study is to review the strengths and weaknesses of existing methods, to design a system that combines some of the strengths and overcomes the weaknesses, and to evaluate that system empirically and analytically. The main contributions and conclusions of our study are as follows:

We have developed a new, randomized algorithm for inducing oblique decision trees from examples. This algorithm extends the original 1984 work of Breiman et al. Randomization helps significantly in learning many concepts.

Our algorithm is fully implemented as an oblique decision tree induction system and is available over the Internet. The code can be retrieved from Online Appendix 1 of this paper (or by anonymous ftp from ftp://ftp.cs.jhu.edu/pub/oc1/oc1.tar.Z).

The randomized hill-climbing algorithm used in OC1 is more efficient than other existing randomized oblique decision tree methods (described below). In fact, the current implementation of OC1 guarantees a worst-case running time that is only O(log n) times greater than the worst-case time for inducing axis-parallel trees (i.e., O(dn^2 log n) vs. O(dn^2)).

The ability to generate oblique trees often produces very small trees compared to axis-parallel methods. When the underlying problem requires an oblique split, oblique trees are also more accurate than axis-parallel trees.
Allowing a tree-building system to use both oblique and axis-parallel splits broadens the range of domains for which the system should be useful.

The remaining sections of the paper follow this outline: the remainder of this section briefly outlines the general paradigm of decision tree induction, and discusses the complexity issues involved in inducing oblique decision trees. Section 2 briefly reviews some existing techniques for oblique DT induction, outlines some limitations of each approach, and introduces the OC1 system. Section 3 describes the OC1 system in detail. Section 4 describes experiments that (1) compare the performance of OC1 to that of several other axis-parallel and oblique decision tree induction methods on a range of real-world datasets and (2) demonstrate empirically that OC1 significantly benefits from its randomization methods. In Section 5, we conclude with some discussion of open problems and directions for further research." }, { "figure_ref": [], "heading": "Top-Down Induction of Decision Trees", "publication_ref": [], "table_ref": [], "text": "Algorithms for inducing decision trees follow an approach described by Quinlan as top-down induction of decision trees (1986). This can also be called a greedy divide-and-conquer method. The basic outline is as follows: choose the test that best divides the training examples at the current node, partition the examples according to the outcomes of that test, and recurse on each partition until the examples in every partition are (sufficiently) homogeneous. Quinlan's original model only considered attributes with symbolic values; in that model, a test at a node splits an attribute into all of its values. Thus a test on an attribute with three values will have at most three child nodes, one corresponding to each value. The algorithm considers all possible tests and chooses the one that optimizes a pre-defined goodness measure. (One could also split symbolic values into two or more subsets of values, which gives many more choices for how to split the examples.) As we explain next, oblique decision tree methods cannot consider all tests due to complexity considerations." }, { "figure_ref": [ "fig_1" ], "heading": "Complexity of Induction of Oblique Decision Trees", "publication_ref": [ "b8", "b26", "b41", "b6" ], "table_ref": [], "text": "One reason for the relatively few papers on the problem of inducing oblique decision trees is the increased computational complexity of the problem when compared to the axis-parallel case. There are two important issues that must be addressed. In the context of top-down decision tree algorithms, we must address the complexity of finding optimal separating hyperplanes (decision surfaces) for a given node of a decision tree. An optimal hyperplane will minimize the impurity measure used; e.g., impurity might be measured by the total number of examples mis-classified. The second issue is the lower bound on the complexity of finding optimal (e.g., smallest size) trees. Let us first consider the issue of the complexity of selecting an optimal oblique hyperplane for a single node of a tree. In a domain with n training instances, each described using d real-valued attributes, there are at most 2^d · (n choose d) distinct d-dimensional oblique splits; i.e., hyperplanes³ that divide the training instances uniquely into two nonoverlapping subsets.

This upper bound derives from the observation that every subset of size d from the n points can define a d-dimensional hyperplane, and each such hyperplane can be rotated slightly in 2^d directions to divide the set of d points in all possible ways. Figure 3 illustrates these upper limits for two points in two dimensions.
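To see how quickly the oblique search space outgrows the axis-parallel one (whose count is derived in the next paragraph), the two bounds can be compared directly; an illustrative computation:

```python
from math import comb

def oblique_split_bound(n, d):
    """Upper bound on distinct oblique splits of n points in d dimensions."""
    return 2**d * comb(n, d)

def axis_parallel_split_bound(n, d):
    """At most n distinct thresholds along each of d attributes."""
    return n * d

# e.g., n = 100 points, d = 5 attributes:
# oblique_split_bound(100, 5)        ->  2_409_200_640
# axis_parallel_split_bound(100, 5)  ->  500
```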
For axis-parallel splits, there are only nd distinct possibilities, and axis-parallel methods such as C4.5 (Quinlan, 1993a) and CART (Breiman et al., 1984) can exhaustively search for the best split at each node. The problem of searching for the best oblique split is therefore much more difficult than that of searching for the best axis-parallel split. In fact, the problem is NP-hard. More precisely, Heath (1992) proved that the following problem is NP-hard: given a set of labelled examples, find the hyperplane that minimizes the number of misclassified examples both above and below the hyperplane. This result implies that any method for finding the optimal oblique split is likely to have exponential cost (assuming P ≠ NP).

Intuitively, the problem is that it is impractical to enumerate all 2^d · (n choose d) distinct hyperplanes and choose the best, as is done in axis-parallel decision trees. However, any non-exhaustive deterministic algorithm for searching through all these hyperplanes is prone to getting stuck in local minima.

On the other hand, it is possible to define impurity measures for which the problem of finding optimal hyperplanes can be solved in polynomial time. For example, if one minimizes the sum of distances of mis-classified examples, then the optimal solution can be found using linear programming methods (if distance is measured along one dimension only). However, classifiers are usually judged by how many points they classify correctly, regardless of how close to the decision boundary a point may lie. Thus most of the standard measures for computing impurity base their calculation on the discrete number of examples of each category on either side of the hyperplane. Section 3.3 discusses several commonly used impurity measures.

Now let us address the second issue, that of the complexity of building a small tree. It is easy to show that the problem of inducing the smallest axis-parallel decision tree is NP-hard. This observation follows directly from the work of Hyafil and Rivest (1976). Note that one can generate the smallest axis-parallel tree that is consistent with the training set in polynomial time if the number of attributes is a constant. This can be done by using dynamic programming or branch and bound techniques (see Moret (1982) for several pointers). But when the tree uses oblique splits, it is not clear, even for a fixed number of attributes, how to generate an optimal (e.g., smallest) decision tree in polynomial time. This suggests that the complexity of constructing good oblique trees is greater than that for axis-parallel trees.

It is also easy to see that the problem of constructing an optimal (e.g., smallest) oblique decision tree is NP-hard. This conclusion follows from the work of Blum and Rivest (1988). Their result implies that in d dimensions (i.e., with d attributes) the problem of producing a 3-node oblique decision tree that is consistent with the training set is NP-complete. More specifically, they show that the following decision problem is NP-complete: given a training set T with n examples and d Boolean attributes, does there exist a 3-node neural network consistent with T? From this it is easy to show that the following question is NP-complete: given a training set T, does there exist a 3-leaf-node oblique decision tree consistent with T?

As a result of these complexity considerations, we took the pragmatic approach of trying to generate small trees, but not looking for the smallest tree.
The greedy approach used by OC1 and virtually all other decision tree algorithms implicitly tries to generate small trees. In addition, it is easy to construct example problems for which the optimal split at a node will not lead to the best tree; thus our philosophy as embodied in OC1 is to find locally good splits, but not to spend excessive computational effort on improving the quality of these splits." }, { "figure_ref": [], "heading": "Previous Work on Oblique Decision Tree Induction", "publication_ref": [ "b38", "b3", "b5", "b9", "b16", "b30", "b69", "b10", "b66", "b67", "b46", "b20", "b28", "b35" ], "table_ref": [], "text": "Before describing the OC1 algorithm, we will briefly discuss some existing oblique DT induction methods, including CART with linear combinations, Linear Machine Decision Trees, and Simulated Annealing of Decision Trees. There are also methods that induce tree-like classifiers with linear discriminants at each node, most notably methods using linear programming (Mangasarian, Setiono, & Wolberg, 1990; Bennett & Mangasarian, 1992, 1994a, 1994b). Though these methods can find the optimal linear discriminants for specific goodness measures, the size of the linear program grows very fast with the number of instances and the number of attributes. There is also some less closely related work on algorithms to train artificial neural networks to build decision tree-like classifiers (Brent, 1991; Cios & Liu, 1992; Herman & Yeung, 1992).

The first oblique decision tree algorithm to be proposed was CART with linear combinations (Breiman et al., 1984, chapter 5). This algorithm, referred to henceforth as CART-LC, is an important basis for OC1. Figure 4 summarizes (using Breiman et al.'s notation) what the CART-LC algorithm does at each node in the decision tree:

  To induce a split at node T of the decision tree:
    Normalize the values of all d attributes.
    L = 0
    While (TRUE)
      L = L + 1
      Let the current split s_L be v <= c, where v = Σ_{i=1}^{d} a_i x_i.
      For i = 1, ..., d
        For γ = -0.25, 0, 0.25
          Search for the δ that maximizes the goodness of the split
          v - δ(x_i + γ) <= c.
        Let δ*, γ* be the settings that result in the highest goodness
        over these 3 searches.
        a_i = a_i - δ*;  c = c + δ*γ*
      Perturb c to maximize the goodness of s_L, keeping a_1, ..., a_d constant.
      If |goodness(s_L) - goodness(s_{L-1})| <= ε, exit the while loop.
    Eliminate irrelevant attributes in {a_1, ..., a_d} using backward elimination.
    Convert s_L to a split on the un-normalized attributes.
    Return the better of s_L and the best axis-parallel split as the split for T.

  Figure 4: The procedure used by CART with linear combinations (CART-LC) at each node of a decision tree.

The core idea of the CART-LC algorithm is how it finds the value of δ that maximizes the goodness of a split. This idea is also used in OC1, and is explained in detail in Section 3.1.

After describing CART-LC, Breiman et al. point out that there is still much room for further development of the algorithm. OC1 represents an extension of CART-LC that includes some significant additions. It addresses the following limitations of CART-LC:

CART-LC is fully deterministic. There is no built-in mechanism for escaping local minima, although such minima may be very common for some domains. Figure 5 shows a simple example for which CART-LC gets stuck.

CART-LC produces only a single tree for a given data set.

CART-LC sometimes makes adjustments that increase the impurity of a split.
This feature was probably included to allow it to escape some local minima.

There is no upper bound on the time spent at any node in the decision tree. CART-LC halts when no perturbation changes the impurity by more than ε, but because the impurity may both increase and decrease during the search, the algorithm can spend an arbitrarily long time at a node.

Figure 5: The deterministic perturbation algorithm of CART-LC fails to find the correct split for this data, even when it starts from the location of the best axis-parallel split. OC1 finds the correct split using one random jump.

Another oblique decision tree algorithm, one that uses a very different approach from CART-LC, is the Linear Machine Decision Trees (LMDT) system (Utgoff & Brodley, 1991; Brodley & Utgoff, 1992), which is a successor to the Perceptron Tree method (Utgoff, 1989; Utgoff & Brodley, 1990). Each internal node in an LMDT tree is a Linear Machine (Nilsson, 1990). The training algorithm presents examples repeatedly at each node until the linear machine converges. Because convergence cannot be guaranteed, LMDT uses heuristics to determine when the node has stabilized. To make the training stable even when the set of training instances is not linearly separable, a "thermal training" method (Frean, 1990) is used, similar to simulated annealing.

A third system that creates oblique trees is Simulated Annealing of Decision Trees (SADT) (Heath et al., 1993b) which, like OC1, uses randomization. SADT uses simulated annealing (Kirkpatrick, Gelatt, & Vecchi, 1983) to find good values for the coefficients of the hyperplane at each node of a tree. SADT first places a hyperplane in a canonical location, and then iteratively perturbs all the coefficients by small random amounts. Initially, when the temperature parameter is high, SADT accepts almost any perturbation of the hyperplane, regardless of how it changes the goodness score. However, as the system "cools down," only changes that improve the goodness of the split are likely to be accepted. Though SADT's use of randomization allows it to effectively avoid some local minima, it compromises on efficiency. It runs much slower than either CART-LC, LMDT or OC1, sometimes considering tens of thousands of hyperplanes at a single node before it finishes annealing.

Our experiments in Section 4.3 include some results showing how all of these methods perform on three artificial domains.

We next describe a way to combine some of the strengths of the methods just mentioned, while avoiding some of the problems. Our algorithm, OC1, uses deterministic hill climbing most of the time, ensuring computational efficiency. In addition, it uses two kinds of randomization to avoid local minima. By limiting the number of random choices, the algorithm is guaranteed to spend only polynomial time at each node in the tree. The procedure used at each node is as follows:

  To find a split of a set of examples T:
    Find the best axis-parallel split of T. Let I be the impurity of this split.
    Repeat R times:
      Choose a random hyperplane H.
      (For the first iteration, initialize H to be the best axis-parallel split.)
      Step 1: Until the impurity measure does not improve, do:
        Perturb each of the coefficients of H in sequence.
      Step 2: Repeat at most J times:
        Choose a random direction and attempt to perturb H in that direction.
        If this reduces the impurity of H, go to Step 1.
      Let I_1 = the impurity of H. If I_1 < I, then set I = I_1.
    Output the split corresponding to I.

In addition, randomization itself has produced several benefits: for example, it means that the algorithm
can produce many different trees for the same data set. This offers the possibility of a new family of classifiers: k-decision-tree algorithms, in which an example is classified by the majority vote of k trees. Heath et al. (1993a) have shown that k-decision-tree methods (which they call k-DT) will consistently outperform single-tree methods if classification accuracy is the main criterion. Finally, our experiments indicate that OC1 efficiently finds small, accurate decision trees for many different types of classification problems." }, { "figure_ref": [], "heading": "Oblique Classifier 1 (OC1)", "publication_ref": [], "table_ref": [], "text": "In this section we discuss details of the oblique decision tree induction system OC1. As part of this description, we include: the method for finding coefficients of a hyperplane at each tree node, methods for computing the impurity or goodness of a hyperplane, a tree pruning strategy, and methods for coping with missing and irrelevant attributes.

Section 3.1 focuses on the most complicated of these algorithmic details, i.e., the question of how to find a hyperplane that splits a given set of instances into two reasonably "pure" nonoverlapping subsets. This randomized perturbation algorithm is the main novel contribution of OC1. Figure 6 summarizes the basic OC1 algorithm, used at each node of a decision tree. This figure will be explained further in the following sections." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Perturbation algorithm", "publication_ref": [ "b28", "b8", "b43" ], "table_ref": [], "text": "OC1 imposes no restrictions on the orientation of the hyperplanes. However, in order to be at least as powerful as standard DT methods, it first finds the best axis-parallel (univariate) split at a node before looking for an oblique split. OC1 uses an oblique split only when it improves over the best axis-parallel split.⁴

4. When a node contains only a few examples relative to the number of attributes, the data underfits the concept. By default, OC1 uses only axis-parallel splits at tree nodes at which n < 2d. The user can vary this threshold.

The search strategy for the space of possible hyperplanes is defined by the procedure that perturbs the current hyperplane H to a new location. Because there are an exponential number of distinct ways to partition the examples with a hyperplane, any procedure that simply enumerates all of them will be unreasonably costly. The two main alternatives considered in the past have been simulated annealing, used in the SADT system (Heath et al., 1993b), and deterministic heuristic search, as in CART-LC (Breiman et al., 1984). OC1 combines these two ideas, using heuristic search until it finds a local minimum, and then using a non-deterministic search step to get out of the local minimum. (The non-deterministic step in OC1 is not simulated annealing, however.)

We will start by explaining how we perturb a hyperplane to split the training set T at a node of the decision tree. Let n be the number of examples in T, d be the number of attributes (or dimensions) of each example, and k be the number of categories. Then we can write T_j = (x_j1, x_j2, ..., x_jd, C_j) for the jth example from the training set T, where x_ji is the value of attribute i and C_j is the category label. As defined in Eq. 1, the equation of the current hyperplane H at a node of the decision tree is written as Σ_{i=1}^{d} (a_i x_i) + a_{d+1} = 0. If we substitute a point (an example) T_j into the equation for H, we get Σ_{i=1}^{d} (a_i x_ji) + a_{d+1} = V_j, where the sign of V_j tells us whether the point T_j is above or below the hyperplane H; i.e., if V_j > 0, then T_j is above H.
If H splits the training set T perfectly, then all points belonging to the same category will have the same sign for V_j; i.e., sign(V_i) = sign(V_j) if and only if category(T_i) = category(T_j).

OC1 adjusts the coefficients of H individually, finding a locally optimal value for one coefficient at a time. This key idea was introduced by Breiman et al. It works as follows. Treat the coefficient a_m as a variable, and treat all other coefficients as constants. Then V_j can be viewed as a function of a_m. In particular, the condition that T_j is above H is equivalent to

  V_j > 0  ⟺  a_m > (a_m x_jm - V_j) / x_jm  =def  U_j        (2)

assuming that x_jm > 0, which we ensure by normalization. Using this definition of U_j, the point T_j is above H if a_m > U_j, and below otherwise. By plugging all the points from T into this equation, we obtain n constraints on the value of a_m. The problem then is to find a value for a_m that satisfies as many of these constraints as possible. (If all the constraints are satisfied, then we have a perfect split.) This problem is easy to solve optimally: simply sort all the values U_j, and consider setting a_m to the midpoint between each pair of different values. This is illustrated in Figure 7. In the figure, the categories are indicated by font size; the larger U_i's belong to one category, and the smaller to another. For each distinct placement of the coefficient a_m, OC1 computes the impurity of the resulting split; e.g., for the location between U_6 and U_7 illustrated here, two examples on the left and one example on the right would be misclassified (see Section 3.3.1 for different ways of computing impurity). As the figure illustrates, the problem is simply to find the best one-dimensional split of the U's, which requires considering just n - 1 values for a_m.

The value a'_m obtained by solving this one-dimensional problem is then considered as a replacement for a_m. Let H_1 be the hyperplane obtained by "perturbing" a_m to a'_m. If H has better (lower) impurity than H_1, then H_1 is discarded. If H_1 has lower impurity, H_1 becomes the new location of the hyperplane. If H and H_1 have identical impurities, then H_1 replaces H with probability P_stag.⁵ Figure 8 contains pseudocode for our perturbation procedure:

  Perturb(H, m)
    For j = 1, ..., n
      Compute U_j (Eq. 2)
    Sort U_1, ..., U_n in non-decreasing order.
    a'_m = best univariate split of the sorted U_j's.
    H_1 = result of substituting a'_m for a_m in H.
    If (impurity(H_1) < impurity(H))
      { a_m = a'_m;  P_move = P_stag }
    Else if (impurity(H_1) = impurity(H))
      { a_m = a'_m with probability P_move;  P_move = P_move - 0.1 P_stag }

  Figure 8: Perturbation algorithm for a single coefficient a_m.

Now that we have a method for locally improving a coefficient of a hyperplane, we need to decide which of the d + 1 coefficients to pick for perturbation. We experimented with three different methods for choosing which coefficient to adjust, namely sequential, best-first, and random.

Seq: Repeat until none of the coefficient values is modified in the For loop: For i = 1, ..., d + 1, Perturb(H, i).

5. The parameter P_stag, denoting "stagnation probability", is the probability that a hyperplane is perturbed to a location that does not change the impurity measure. To prevent the impurity from remaining stagnant for a long time, P_stag decreases by a constant amount each time OC1 makes a "stagnant" perturbation; thus only a constant number of such perturbations will occur at each node. This constant can be set by the user. P_stag is reset to 1 every time the global impurity measure is improved.
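A compact sketch of Perturb in Python follows. The impurity function is left abstract, the ties are handled via P_move as in Figure 8, and the list-based hyperplane layout is an assumption of this sketch.

```python
import random

def perturb_coefficient(H, m, examples, impurity, state):
    """One OC1 coefficient perturbation (after Figure 8). H: list of d+1
    coefficients, H[d] being the constant term; examples: list of
    (x, label) with x a list of d attribute values; impurity: function of
    (H, examples); state: dict holding "P_move" and "P_stag"."""
    d = len(H) - 1
    # U_j from Eq. 2; x[m] > 0 is ensured by normalization.
    us = sorted((H[m] * x[m] - (sum(H[i] * x[i] for i in range(d)) + H[d])) / x[m]
                for x, _ in examples)
    # a'_m = best one-dimensional split: try midpoints of consecutive U's.
    candidates = [(lo + hi) / 2 for lo, hi in zip(us, us[1:]) if lo != hi]
    if not candidates:
        return H
    def with_am(val):
        H1 = H[:]; H1[m] = val
        return H1
    a_m_new = min(candidates, key=lambda v: impurity(with_am(v), examples))
    old_imp = impurity(H, examples)
    new_imp = impurity(with_am(a_m_new), examples)
    if new_imp < old_imp:
        H[m] = a_m_new
        state["P_move"] = state["P_stag"]
    elif new_imp == old_imp and random.random() < state["P_move"]:
        H[m] = a_m_new
        state["P_move"] -= 0.1 * state["P_stag"]
    return H
```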
Our previous experiments (Murthy et al., 1993) indicated that the order of perturbation of the coefficients does not affect the classification accuracy as much as other parameters, especially the randomization parameters (see below). Since none of these orders was uniformly better than any other, we used sequential (Seq) perturbation for all the experiments reported in Section 4." }, { "figure_ref": [], "heading": "Randomization", "publication_ref": [ "b56" ], "table_ref": [], "text": "The perturbation algorithm halts when the split reaches a local minimum of the impurity measure. For OC1's search space, a local minimum occurs when no perturbation of any single coefficient of the current hyperplane will decrease the impurity measure. (Of course, a local minimum may also be a global minimum.) We have implemented two ways of attempting to escape local minima: perturbing the hyperplane with a random vector, and re-starting the perturbation algorithm with a different random initial hyperplane.

The technique of perturbing the hyperplane with a random vector works as follows. When the system reaches a local minimum, it chooses a random vector to add to the coefficients of the current hyperplane. It then computes the optimal amount by which the hyperplane should be perturbed along this random direction. To be more precise, when a hyperplane H = Σ_{i=1}^{d} a_i x_i + a_{d+1} cannot be improved by deterministic perturbation, OC1 repeats the following loop J times (where J is a user-specified parameter, set to 5 by default):

- Choose a random vector R = (r_1, r_2, ..., r_{d+1}).
- Let α be the amount by which we want to perturb H in the direction R; that is, let H_1 = Σ_{i=1}^{d} (a_i + α r_i) x_i + (a_{d+1} + α r_{d+1}). Find the optimal value for α.
- If the hyperplane H_1 thus obtained decreases the overall impurity, replace H with H_1, exit this loop, and begin the deterministic perturbation algorithm for the individual coefficients.

Note that we can treat α as the only variable in the equation for H_1. Therefore each of the n examples in T, if plugged into the equation for H_1, imposes a constraint on the value of α. OC1 therefore can use its coefficient perturbation method (see Section 3.1) to compute the best value of α. If J random jumps fail to improve the impurity, OC1 halts and uses H as the split for the current tree node.

An intuitive way of understanding this random jump is to look at the dual space in which the algorithm is actually searching. Note that the equation H = Σ_{i=1}^{d} a_i x_i + a_{d+1} defines a space in which the axes are the coefficients a_i rather than the attributes x_i. Every point in this space defines a distinct hyperplane in the original formulation. The deterministic algorithm used in OC1 picks a hyperplane and then adjusts coefficients one at a time. Thus in the dual space, OC1 chooses a point and perturbs it by moving it parallel to the axes. The random vector R represents a random direction in this space. By finding the best value for α, OC1 finds the best distance to adjust the hyperplane in the direction of R.

Note that this additional perturbation in a random direction does not significantly increase the time complexity of the algorithm (see Appendix A).
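The optimal step α along a random direction can be found with the same sorted-threshold trick used for single coefficients, since each example's side of H_1 flips at exactly one value of α. A sketch, again with an abstract impurity function:

```python
import random

def random_jump(H, examples, impurity):
    """Try one random-direction perturbation of hyperplane H (d+1 coefficients).
    H_1(x) = H(x) + alpha * R(x) is linear in alpha, so the critical values
    alpha_j = -H(x_j) / R(x_j) bracket all distinct splits."""
    d = len(H) - 1
    R = [random.uniform(-1, 1) for _ in range(d + 1)]
    crit = []
    for x, _ in examples:
        hx = sum(H[i] * x[i] for i in range(d)) + H[d]
        rx = sum(R[i] * x[i] for i in range(d)) + R[d]
        if rx != 0:
            crit.append(-hx / rx)
    crit.sort()
    best = None
    for alpha in ((a + b) / 2 for a, b in zip(crit, crit[1:]) if a != b):
        H1 = [H[i] + alpha * R[i] for i in range(d + 1)]
        imp = impurity(H1, examples)
        if best is None or imp < best[0]:
            best = (imp, H1)
    if best and best[0] < impurity(H, examples):
        return best[1]       # improved: caller resumes deterministic search
    return None              # no improvement along this direction
```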
We found in our experiments that even a single random jump, when used at a local minimum, proves to be very helpful. Classification accuracy improved for every one of our data sets when such perturbations were made. See Section 4.3 for some examples.

The second technique for avoiding local minima is a variation on the idea of performing multiple local searches. The technique of multiple local searches is a natural extension to local search, and has been widely mentioned in the optimization literature (see Roth (1970) for an early example). Because most of the steps of our perturbation algorithm are deterministic, the initial hyperplane largely determines which local minimum will be encountered first. Perturbing a single initial hyperplane is thus unlikely to lead to the best split of a given data set. In cases where the random perturbation method fails to escape from local minima, it may be helpful to simply start afresh with a new initial hyperplane. We use the word restart to denote one run of the perturbation algorithms, at one node of the decision tree, using one random initial hyperplane.⁶ That is, a restart cycles through and perturbs the coefficients one at a time and then tries to perturb the hyperplane in a random direction when the algorithm reaches a local minimum. If this last perturbation reduces the impurity, the algorithm goes back to perturbing the coefficients one at a time. The restart ends when neither the deterministic local search nor the random jump can find a better split. One of the optional parameters to OC1 specifies how many restarts to use. If more than one restart is used, then the best hyperplane found thus far is always saved. In all our experiments, the classification accuracies increased with more than one restart. Accuracy tended to increase up to a point and then level off (after about 20-50 restarts, depending on the domain). Overall, the use of multiple initial hyperplanes substantially improved the quality of the decision trees found (see Section 4.3 for some examples).

By carefully combining hill-climbing and randomization, OC1 ensures a worst-case time of O(dn^2 log n) for inducing a decision tree. See Appendix A for a derivation of this upper bound.

Best Axis-Parallel Split. It is clear that axis-parallel splits are more suitable for some data distributions than oblique splits. To take into account such distributions, OC1 computes the best axis-parallel split and an oblique split at each node, and then picks the better of the two.⁷ Calculating the best axis-parallel split takes an additional O(dn log n) time, and so does not increase the asymptotic time complexity of OC1. As a simple variant of the OC1 system, the user can opt to "switch off" the oblique perturbations, thus building an axis-parallel tree on the training data. Section 4.2 empirically demonstrates that this axis-parallel variant of OC1 compares favorably with existing axis-parallel algorithms." }, { "figure_ref": [], "heading": "Other Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Impurity Measures", "publication_ref": [ "b8", "b51", "b40", "b13", "b19", "b28", "b44", "b8" ], "table_ref": [], "text": "OC1 attempts to divide the d-dimensional attribute space into homogeneous regions; i.e., regions that contain examples from just one category. The goal of adding new nodes to a tree is to split up the sample space so as to minimize the "impurity" of the training set.
, { "figure_ref": [], "heading": "Other Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Impurity Measures", "publication_ref": [ "b8", "b51", "b40", "b13", "b19", "b28", "b44", "b8" ], "table_ref": [], "text": "OC1 attempts to divide the d-dimensional attribute space into homogeneous regions; i.e., regions that contain examples from just one category. The goal of adding new nodes to a tree is to split up the sample space so as to minimize the "impurity" of the training set. Some algorithms measure "goodness" instead of impurity, the difference being that goodness values should be maximized while impurity should be minimized. Many different measures of impurity have been studied (Breiman et al., 1984; Quinlan, 1986; Mingers, 1989b; Buntine & Niblett, 1992; Fayyad & Irani, 1992; Heath et al., 1993b).
The OC1 system is designed to work with a large class of impurity measures. Stated simply, if the impurity measure uses only the counts of examples belonging to every category on both sides of a split, then OC1 can use it. (See Murthy and Salzberg (1994) for ways of mapping other kinds of impurity measures to this class of impurity measures.) The user can plug in any impurity measure that fits this description. The OC1 implementation includes six impurity measures, namely:
1. Information Gain
2. The Gini Index
3. The Twoing Rule
4. Max Minority
5. Sum Minority
6. Sum of Variances
Though all six of the measures have been defined elsewhere in the literature, in some cases we have made slight modifications that are defined precisely in Appendix B. Our experiments indicated that, on average, Information Gain, the Gini Index and the Twoing Rule perform better than the other three measures for both axis-parallel and oblique trees. The Twoing Rule is the current default impurity measure for OC1, and it was used in all of the experiments reported in Section 4. There are, however, artificial data sets for which Sum Minority and/or Max Minority perform much better than the rest of the measures. For instance, Sum Minority easily induces the exact tree for the POL data set described in Section 4.3.1, while all other methods have difficulty finding the best tree.
Twoing Rule. The Twoing Rule was first proposed by Breiman et al. (1984). The value to be computed is defined as:
$$\text{TwoingValue} = \frac{|T_L|}{n} \cdot \frac{|T_R|}{n} \cdot \left( \sum_{i=1}^{k} \left| \frac{L_i}{|T_L|} - \frac{R_i}{|T_R|} \right| \right)^2$$
where $|T_L|$ ($|T_R|$) is the number of examples on the left (right) of a split at node T, n is the number of examples at node T, and $L_i$ ($R_i$) is the number of examples in category i on the left (right) of the split. The TwoingValue is actually a goodness measure rather than an impurity measure. Therefore OC1 attempts to minimize the reciprocal of this value.
The remaining five impurity measures implemented in OC1 are defined in Appendix B." }
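The Twoing Rule translates directly into code. The following Python sketch (our own; the helper name and label handling are assumptions) computes the reciprocal of the TwoingValue so that, as in OC1, smaller values are better:

```python
import numpy as np

def twoing_impurity(y_left, y_right):
    """Reciprocal of the TwoingValue defined above; since the TwoingValue
    is a goodness measure, its reciprocal is minimized."""
    n_l, n_r = len(y_left), len(y_right)
    n = n_l + n_r
    if n_l == 0 or n_r == 0:
        return np.inf                      # degenerate split
    cats = np.union1d(y_left, y_right)
    L = np.array([(y_left == c).sum() for c in cats], dtype=float)
    R = np.array([(y_right == c).sum() for c in cats], dtype=float)
    twoing = (n_l / n) * (n_r / n) * np.abs(L / n_l - R / n_r).sum() ** 2
    return np.inf if twoing == 0.0 else 1.0 / twoing
```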
, { "figure_ref": [], "heading": "Pruning", "publication_ref": [ "b52", "b45", "b15", "b36", "b17", "b24", "b74", "b62", "b8" ], "table_ref": [], "text": "Virtually all decision tree induction systems prune the trees they create in order to avoid overfitting the data. Many studies have found that judicious pruning results in both smaller and more accurate classifiers, for decision trees as well as other types of machine learning systems (Quinlan, 1987; Niblett, 1986; Cestnik, Kononenko, & Bratko, 1987; Kodratoff & Manago, 1987; Cohen, 1993; Hassibi & Stork, 1993; Wolpert, 1992; Schaffer, 1993). For the OC1 system we implemented an existing pruning method, but note that any tree pruning method will work fine within OC1. Based on the experimental evaluations of Mingers (1989a) and other work cited above, we chose Breiman et al.'s Cost Complexity (CC) pruning (1984) as the default pruning method for OC1. This method, which is also called Error Complexity or Weakest Link pruning, requires a separate pruning set. The pruning set can be a randomly chosen subset of the training set, or it can be approximated using cross validation. OC1 randomly chooses 10% (the default value) of the training data to use for pruning. In the experiments reported below, we only used this default value.
Briefly, the idea behind CC pruning is to create a set of trees of decreasing size from the original, complete tree. All these trees are used to classify the pruning set, and accuracy is estimated from that. CC pruning then chooses the smallest tree whose accuracy is within k standard errors squared of the best accuracy obtained. When the 0-SE rule (k = 0) is used, the tree with the highest accuracy on the pruning set is selected. When k > 0, smaller tree size is preferred over higher accuracy. For details of Cost Complexity pruning, see Breiman et al. (1984) or Mingers (1989a)." }, { "figure_ref": [], "heading": "Irrelevant attributes", "publication_ref": [ "b8", "b0", "b1", "b33", "b60", "b14", "b63", "b37", "b11", "b8", "b11", "b61" ], "table_ref": [], "text": "Irrelevant attributes pose a significant problem for most machine learning methods (Breiman et al., 1984; Aha, 1990; Almuallim & Dietterich, 1991; Kira & Rendell, 1992; Salzberg, 1992; Cardie, 1993; Schlimmer, 1993; Langley & Sage, 1993; Brodley & Utgoff, 1994). Decision tree algorithms, even axis-parallel ones, can be confused by too many irrelevant attributes. Because oblique decision trees learn the coefficients of each attribute at a DT node, one might hope that the values chosen for each coefficient would reflect the relative importance of the corresponding attributes. Clearly, though, the process of searching for good coefficient values will be much more efficient when there are fewer attributes; the search space is much smaller. For this reason, oblique DT induction methods can benefit substantially by using a feature selection method (an algorithm that selects a subset of the original attribute set) in conjunction with the coefficient learning algorithm (Breiman et al., 1984; Brodley & Utgoff, 1994).
Currently, OC1 does not have a built-in mechanism to select relevant attributes. However, it is easy to include any of several standard methods (e.g., stepwise forward selection or stepwise backward selection) or even an ad hoc method to select features before running the tree-building process. For example, in separate experiments on data from the Hubble Space Telescope (Salzberg, Chandar, Ford, Murthy, & White, 1994), we used feature selection methods as a preprocessing step to OC1, and reduced the number of attributes from 20 to 2. The resulting decision trees were both simpler and more accurate. Work is currently underway to incorporate an efficient feature selection technique into the OC1 system.
Regarding missing values, if an example is missing a value for any attribute, OC1 uses the mean value for that attribute. One can of course use other techniques for handling missing values, but those were not considered in this study." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we present two sets of experiments to support the following two claims.
1. OC1 compares favorably over a variety of real-world domains with several existing axis-parallel and oblique decision tree induction methods.
2. Randomization, both in the form of multiple local searches and random jumps, improves the quality of decision trees produced by OC1.
The experimental method used for all the experiments is described in Section 4.1. Sections 4.2 and 4.3 describe experiments corresponding to the above two claims. Each experimental section begins with a description of the data sets, and then presents the experimental results and discussion." }, { "figure_ref": [], "heading": "Experimental Method", "publication_ref": [ "b12" ], "table_ref": [ "tab_1" ], "text": "We used five-fold cross validation (CV) in all our experiments to estimate classification accuracy. A k-fold CV experiment consists of the following steps; a short sketch of the protocol follows the list.
1. Randomly divide the data into k equal-sized disjoint partitions.
2. For each partition, build a decision tree using all data outside the partition, and test the tree on the data in the partition.
3. Sum the number of correct classifications of the k trees and divide by the total number of instances to compute the classification accuracy. Report this accuracy and the average size of the k trees.
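As a sketch of the protocol (ours, not part of OC1; build_tree and classify are hypothetical stand-ins for any tree inducer and its prediction routine, and the n_leaves attribute on the returned tree is an assumption):

```python
import numpy as np

def cross_validate(X, y, build_tree, classify, k=5, seed=0):
    """k-fold CV as described above: partition, train on k-1 folds,
    test on the held-out fold, and pool the correct classifications."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)                 # k equal-sized partitions
    correct, sizes = 0, []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        tree = build_tree(X[train], y[train])
        correct += int((classify(tree, X[test]) == y[test]).sum())
        sizes.append(tree.n_leaves)                # assumed attribute
    return correct / len(X), float(np.mean(sizes))
```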
Each entry in Tables 1 and 2 is a result of ten 5-fold CV experiments; i.e., the result of tests that used 50 decision trees. Each of the ten 5-fold cross validations used a different random partitioning of the data. Each entry in the tables reports the mean and standard deviation of the classification accuracy, followed by the mean and standard deviation of the decision tree size (measured as the number of leaf nodes). Good results should have high values for accuracy, low values for tree size, and small standard deviations.
In addition to OC1, we also included in the experiments an axis-parallel version of OC1, which only considers axis-parallel hyperplanes. We call this version, described in Section 3.2, OC1-AP. In all our experiments, both OC1 and OC1-AP used the Twoing Rule (Section 3.3.1) to measure impurity. Other parameters to OC1 took their default values unless stated otherwise. (Defaults include the following. Number of restarts at each node: 20. Number of random jumps attempted at each local minimum: 5. Order of coefficient perturbation: Sequential. Pruning method: Cost Complexity with the 0-SE rule, using 10% of the training set exclusively for pruning.)
In our comparison, we used the oblique version of the CART algorithm, CART-LC. We implemented our own version of CART-LC, following the description in Breiman et al. (1984, Chapter 5); however, there may be differences between our version and other versions of this system (note that CART-LC is not freely available). Our implementation of CART-LC measured impurity with the Twoing Rule and used 0-SE Cost Complexity pruning with a separate test set, just as OC1 does. We did not include any feature selection methods in CART-LC or in OC1, and we did not implement normalization. Because the CART coefficient perturbation algorithm may alternate indefinitely between two locations of a hyperplane (see Section 2), we imposed an arbitrary limit of 100 such perturbations before forcing the perturbation algorithm to halt.
We also included axis-parallel CART and C4.5 in our comparisons. We used the implementations of these algorithms from the IND 2.1 package (Buntine, 1992). The default cart0 and c4.5 "styles" defined in the package were used, without altering any parameter settings. The cart0 style uses the Twoing Rule and 0-SE cost complexity pruning with 10-fold cross validation. The pruning method, impurity measure and other defaults of the c4.5 style are the same as those described in Quinlan (1993a)." }, { "figure_ref": [], "heading": "OC1 vs. Other Decision Tree Induction Methods", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Table 1 compares the performance of OC1 to three well-known decision tree induction methods plus OC1-AP on six different real-world data sets. In the next section we will consider artificial data, for which the concept definition can be precisely characterized." }, { "figure_ref": [], "heading": "Description of Data Sets", "publication_ref": [ "b47", "b47", "b26", "b60", "b38", "b3", "b42", "b28", "b59", "b30", "b72", "b23", "b2", "b54" ], "table_ref": [ "tab_1" ], "text": "Star/Galaxy Discrimination. Two of our data sets came from a large set of astronomical images collected by Odewahn et al. (Odewahn, Stockwell, Pennington, Humphreys, & Zumach, 1992).
In their study, they used these images to train artificial neural networks running the perceptron and back propagation algorithms. The goal was to classify each example as either "star" or "galaxy." Each image is characterized by 14 real-valued attributes, where the attributes were measurements defined by astronomers as likely to be relevant for this task. The objects in the image were divided by Odewahn et al. into "bright" and "dim" data sets based on the image intensity values, where the dim images are inherently more difficult to classify. (Note that the "bright" objects are only bright in relation to others in this data set. In actuality they are extremely faint, visible only to the most powerful telescopes.) The bright set contains 2462 objects and the dim set contains 4192 objects.
In addition to the results reported in Table 1, the following results have appeared on the Star/Galaxy data. Odewahn et al. (1992) reported an accuracy of 99.8% on the bright objects, and 92.0% on the dim ones, although it should be noted that this study used a single training and test set partition. Heath (1992) reported 99.0% accuracy on the bright objects using SADT, with an average tree size of 7.03 leaves. This study also used a single training and test set. Salzberg (1992) reported accuracies of 98.8% on the bright objects, and 95.1% on the dim objects, using 1-Nearest Neighbor (1-NN) coupled with a feature selection method that reduces the number of features.
Breast Cancer Diagnosis. Mangasarian and Bennett have compiled data on the problem of diagnosing breast cancer to test several new classification methods (Mangasarian et al., 1990; Bennett & Mangasarian, 1992, 1994a). This data represents a set of patients with breast cancer, where each patient was characterized by nine numeric attributes plus the diagnosis of the tumor as benign or malignant. The data set currently has 683 entries and is available from the UC Irvine machine learning repository (Murphy & Aha, 1994).

Table 1: Comparison of OC1 and other decision tree induction methods on six different data sets. The first line for each method gives accuracies (mean and standard deviation); the second line gives average tree sizes (number of leaf nodes).

Algorithm   Bright S/G    Dim S/G      Cancer       Iris         Housing      Diabetes
OC1         98.9 ± 0.2    95.0 ± 0.3   96.2 ± 0.3   94.7 ± 3.1   82.4 ± 0.8   74.4 ± 1.0
            4.3 ± 1.0     13.0 ± 8.7   2.8 ± 0.9    3.1 ± 0.2    6.9 ± 3.2    5.4 ± 3.8
CART-LC     98.8 ± 0.2    92.8 ± 0.5   95.3 ± 0.6   93.5 ± 2.9   81.4 ± 1.2   73.7 ± 1.2
            3.9 ± 1.3     24.2 ± 8.7   3.5 ± 0.9    3.2 ± 0.3    5.8 ± 3.2    8.0 ± 5.2
OC1-AP      98.1 ± 0.2    94.0 ± 0.2   94.5 ± 0.5   92.7 ± 2.4   81.8 ± 1.0   73.8 ± 1.0
            6.9 ± 2.4     29.3 ± 8.8   6.4 ± 1.7    3.2 ± 0.3    8.6 ± 4.5    11.4 ± 7.5
CART-AP     98.5 ± 0.5    94.2 ± 0.7   95.0 ± 1.6   93.8 ± 3.7   82.1 ± 3.5   73.9 ± 3.4
            13.9 ± 5.7    30.

Heath et al. (1993b) reported 94.9% accuracy on a subset of this data set (it then had only 470 instances), with an average decision tree size of 4.6 nodes, using SADT. Salzberg (1991) reported 96.0% accuracy using 1-NN on the same (smaller) data set. Herman and Yeung (1992) reported 99.0% accuracy using piece-wise linear classification, again using a somewhat smaller data set.
Classifying Irises. This is Fisher's famous iris data, which has been extensively studied in the statistics and machine learning literature. The data consists of 150 examples, where each example is described by four numeric attributes. There are 50 examples of each of three different types of iris flower. Weiss and Kapouleas (1989) obtained accuracies of 96.7% and 96.0% on this data with back propagation and 1-NN, respectively.
Housing Costs in Boston. This data set, also available as a part of the UCI ML repository, describes housing values in the suburbs of Boston as a function of 12 continuous attributes and 1 binary attribute (Harrison & Rubinfeld, 1978).
The category variable (median value of owner-occupied homes) is actually continuous, but we discretized it so that category = 1 if value < $21,000, and 2 otherwise. For other uses of this data, see (Belsley, 1980; Quinlan, 1993b)." }, { "figure_ref": [], "heading": "Diabetes diagnosis", "publication_ref": [ "b65" ], "table_ref": [], "text": "This data catalogs the presence or absence of diabetes among Pima Indian females, 21 years or older, as a function of eight numeric-valued attributes. The original source of the data is the National Institute of Diabetes and Digestive and Kidney Diseases, and it is now available in the UCI repository. Smith et al. (1988) reported 76% accuracy on this data using their ADAP learning algorithm, using a different experimental method from that used here." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "The table shows that, for the six data sets considered here, OC1 consistently finds better trees than the original oblique CART method. Its accuracy was greater in all six domains, although the difference was significant (more than 2 standard deviations) only for the dim star/galaxy problem. The average tree sizes were roughly equal for five of the six domains, and for the dim stars and galaxies, OC1 found considerably smaller trees. These differences will be analyzed and quantified further by using artificial data, in the following section.
Out of the five decision tree induction methods, OC1 has the highest accuracy on four of the six domains: bright stars, dim stars, cancer diagnosis, and diabetes diagnosis. On the remaining two domains, OC1 has the second highest accuracy in each case. Not surprisingly, the oblique methods (OC1 and CART-LC) generally find much smaller trees than the axis-parallel methods. This difference can be quite striking for some domains: note, for example, that OC1 produced a tree with just 13 nodes on average for the dim star/galaxy problem, while C4.5 produced a tree with 78 nodes, six times larger. Of course, in domains for which an axis-parallel tree is the appropriate representation, axis-parallel methods should compare well with oblique methods in terms of tree size. In fact, for the Iris data, all the methods found similar-sized trees." }, { "figure_ref": [], "heading": "Randomization Helps OC1", "publication_ref": [ "b31", "b10", "b28" ], "table_ref": [], "text": "In our second set of experiments, we examine more closely the effect of introducing randomized steps into the algorithm for finding oblique splits. Our experiments demonstrate that OC1's ability to produce an accurate tree from a set of training data is clearly enhanced by the two kinds of randomization it uses. More precisely, we use three artificial data sets (for which the underlying concept is known to the experimenters) to show that OC1's performance improves substantially when the deterministic hill climbing is augmented in any of three ways: with multiple restarts from random initial locations, with perturbations in random directions at local minima, or with both of the above randomization steps.
In order to find clear differences between algorithms, one needs to know that the concept underlying the data is indeed difficult to learn. For simple concepts (say, two linearly separable classes in 2-D), many different learning algorithms will produce very accurate classifiers, and therefore the advantages of randomization may not be detectable.
It is known that many of the commonly-used data sets from the UCI repository are easy to learn with very simple representations (Holte, 1993); therefore those data sets may not be ideal for our purposes. Thus we created a number of artificial data sets that present different problems for learning, and for which we know the "correct" concept definition. This allows us to quantify more precisely how the parameters of our algorithm affect its performance.
A second purpose of this experiment is to compare OC1's search strategy with that of two existing oblique decision tree induction systems, LMDT (Brodley & Utgoff, 1992) and SADT (Heath et al., 1993b). We show that the quality of trees induced by OC1 is as good as, if not better than, that of the trees induced by these existing systems on three artificial domains. We also show that OC1 achieves a good balance between the amount of effort expended in search and the quality of the tree induced.
Both LMDT and SADT used information gain for this experiment. However, we did not change OC1's default measure (the Twoing Rule) because we observed, in experiments not reported here, that OC1 with information gain does not produce significantly different results. The maximum number of successive, unproductive perturbations allowed at any node was set at 10000 for SADT. For all other parameters, we used the default settings provided with the systems." }, { "figure_ref": [], "heading": "Description of Artificial Data", "publication_ref": [], "table_ref": [], "text": "LS10. The LS10 data set has 2000 instances divided into two categories. Each instance is described by ten attributes $x_1, \ldots, x_{10}$, whose values are uniformly distributed in the range [0,1]. The data is linearly separable with a 10-D hyperplane (thus the name LS10) defined by the equation $x_1 + x_2 + x_3 + x_4 + x_5 < x_6 + x_7 + x_8 + x_9 + x_{10}$. The instances were all generated randomly and labelled according to which side of this hyperplane they fell on. Because oblique DT induction methods intuitively should prefer a linear separator if one exists, it is interesting to compare the various search techniques on this data set, where we know a separator exists. The task is relatively simple in lower dimensions, so we chose 10-dimensional data to make it more difficult.
POL. This data set is shown in Figure 9. It has 2000 instances in two dimensions, again divided into two categories. The underlying concept is a set of four parallel oblique lines (thus the name POL), dividing the instances into five homogeneous regions. This concept is more difficult to learn than a single linear separator, but the minimal-size tree is still quite small.
RCB. RCB stands for "rotated checker board"; this data set has been the subject of other experiments on hard classification problems for decision trees (Murthy & Salzberg, 1994). The data set, shown in Figure 9, has 2000 instances in 2-D, each belonging to one of eight categories. This concept is difficult to learn for any axis-parallel method, for obvious reasons. It is also quite difficult for oblique methods, for several reasons. The biggest problem is that the "correct" root node, as shown in the figure, does not separate out any class by itself. Some impurity measures (such as Sum Minority) will fail miserably on this problem, although others (e.g., the Twoing Rule) work much better. Another problem is that a deterministic coefficient perturbation algorithm can get stuck in local minima in many places on this data set.
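As one concrete example, the LS10 data can be generated directly from its definition. The following sketch is ours (the function name and seed are choices made for illustration):

```python
import numpy as np

def make_ls10(n=2000, seed=0):
    """Generate the LS10 data: n uniform points in [0,1]^10, labelled by
    the hyperplane x1 + ... + x5 < x6 + ... + x10."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, 10))
    y = (X[:, :5].sum(axis=1) < X[:, 5:].sum(axis=1)).astype(int) + 1
    return X, y   # categories 1 and 2, as in the paper
```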
Table 2 summarizes the results of this experiment in three smaller tables, one for each data set. In each smaller table, we compare four variants of OC1 with LMDT and SADT. The different results for OC1 were obtained by varying both the number of restarts and the number of random jumps. When random jumps were used, up to twenty random jumps were tried at each local minimum. As soon as one was found that improved the impurity of the current hyperplane, the algorithm moved the hyperplane and started running the deterministic perturbation procedure again. If none of the 20 random jumps improved the impurity, the search halted and further restarts (if any) were tried. The same training and test partitions were used for all methods for each cross-validation run (recall that the results are an average of ten 5-fold CVs).

Table 2: The effect of randomization in OC1. The first column, labelled R:J, shows the number of restarts (R) followed by the maximum number of random jumps (J) attempted by OC1 at each local minimum. Results with LMDT and SADT are included for comparison after the four variants of OC1. Size is average tree size measured by the number of leaf nodes. The third column shows the average number of hyperplanes each algorithm considered while building one tree.

Linearly Separable 10-D (LS10) data
R:J     Accuracy       Size          Hyperplanes
0:0     89.8 ± 1.2     67.0 ± 5.8    2756
0:20    91.5 ± 1.5     55.2 ± 7.0    3824
20:0    95.0 ± 0.6     25.6 ± 2.4    24913
20:20   97.2 ± 0.7     13.9 ± 3.2    30366
LMDT    99.7 ± 0.2     2.2 ± 0.5     9089
SADT    95.2 ± 1.8     15.5 ± 5.7    349067

Parallel Oblique Lines (POL) data
R:J     Accuracy       Size          Hyperplanes
0:0     98.3 ± 0.3     21.6 ± 1.9    164
0:20    99.3 ± 0.2     9.0 ± 1.0     360
20:0    99.1 ± 0.2     14.2 ± 1.1    3230
20:20   99.6 ± 0.1     5.5 ± 0.3     4852
LMDT    89.6 ± 10.2    41.9 ± 19.2   1732
SADT    99.3 ± 0.4     8.4 ± 2.1     85594

Rotated Checker Board (RCB) data
R:J     Accuracy       Size          Hyperplanes
0:0     98.4 ± 0.2     35.5 ± 1.4    573
0:20    99.3 ± 0.3     19.7 ± 0.8    1778
20:0    99.6 ± 0.2     12.0 ± 1.4    6436
20:20   99.8 ± 0.1     8.7 ± 0.4     11634
LMDT    95.7 ± 2.3     70.1 ± 9.6    2451
SADT    97.9 ± 1.1     32.5 ± 4.9    359112
The trees were not pruned for any of the algorithms, because the data were noise-free and furthermore the emphasis was on search. Table 2 also includes the number of hyperplanes considered by each algorithm while building a complete tree. Note that for OC1 and SADT, the number of hyperplanes considered is generally much larger than the number of perturbations actually made, because both these algorithms compare newly generated hyperplanes to existing hyperplanes before adjusting an existing one. Nevertheless, this number is a good estimate of how much effort each algorithm expends, because every new hyperplane must be evaluated according to the impurity measure. For LMDT, the number of hyperplanes considered is identical to the actual number of perturbations." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b21" ], "table_ref": [], "text": "The OC1 results here are quite clear. The first line of each table, labelled 0:0, gives the accuracies and tree sizes when no randomization is used; this variant is very similar to the CART-LC algorithm. As we increase the use of randomization, accuracy increases while tree size decreases, which is exactly the result we had hoped for when we decided to introduce randomization into the method.
Looking more closely at the tables, we can ask about the effect of random jumps alone. This is illustrated in the second line (0:20) of each table, which attempted up to 20 random jumps at each local minimum and no restarts. Accuracy increased by 1-2% on each domain, and tree size decreased dramatically, roughly by a factor of two, in the POL and RCB domains. Note that because there is no noise in these domains, very high accuracies should be expected. Thus increases of more than a few percent in accuracy are not possible.
Looking at the third line of each sub-table in Table 2, we see the effect of multiple restarts on OC1. With 20 restarts but no random jumps to escape local minima, the improvement is even more noticeable for the LS10 data than when random jumps alone were used. For this data set, accuracy jumped significantly, from 89.8 to 95.0%, while tree size dropped from 67 to 26 nodes. For the POL and RCB data, the improvements were comparable to those obtained with random jumps. For the RCB data, tree size dropped by a factor of 3 (from 36 leaf nodes to 12 leaf nodes) while accuracy increased from 98.4 to 99.6%.
The fourth line of each table shows the effect of both of the randomized steps. Among the OC1 entries, this line has both the highest accuracies and the smallest trees for all three data sets, so it is clear that randomization is a big win for these kinds of problems. In addition, note that the smallest tree for the RCB data should have eight leaf nodes, and OC1's average trees, without pruning, had just 8.7 leaf nodes. It is clear that for this data set, which we thought was the most difficult one, OC1 came very close to finding the optimal tree on nearly every run. (Recall that the numbers in the table are the average of ten 5-fold CV experiments; i.e., an average of 50 decision trees.) The LS10 data show how difficult it can be to find a very simple concept in higher dimensions: the optimal tree there is just a single hyperplane (two nodes), but OC1 was unable to find it with the current parameter settings. The POL data required a minimum of 5 leaf nodes, and OC1 found this minimal-size tree most of the time, as can be seen from the table.
Although not shown in the table, OC1 using Sum Minority performed better for the POL data than the Twoing Rule or any other impurity measure; i.e., it found the correct tree using less time.
The results of LMDT and SADT on this data lead to some interesting insights. Not surprisingly, LMDT does very well on the linearly separable (LS10) data, and does not require an inordinate amount of search. Clearly, if the data is linearly separable, one should use a method such as LMDT or linear programming. OC1 and SADT have difficulty finding the linear separator, although in our experiments OC1 did eventually find it, given sufficient time.
On the other hand, for both of the non-linearly separable data sets, LMDT produces much larger trees that are significantly less accurate than those produced by OC1 and SADT. Even the deterministic variant of OC1 (using zero restarts and zero random jumps) outperforms LMDT on these problems, with much less search.
Although SADT sometimes produces very accurate trees, its main weakness was the enormous amount of search time it required, roughly 10-20 times greater than OC1 even using the 20:20 setting. One explanation of OC1's advantage is its use of directed search, as opposed to the strictly random search used by simulated annealing. Overall, Table 2 shows that OC1's use of randomization was quite effective for the non-linearly separable data.
It is natural to ask why randomization helps OC1 in the task of inducing decision trees. Researchers in combinatorial optimization have observed that randomized search usually succeeds when the search space holds an abundance of good solutions (Gupta, Smolka, & Bhaskar, 1994). Furthermore, randomization can improve upon deterministic search when many of the local maxima in a search space lead to poor solutions. In OC1's search space, a local maximum is a hyperplane that cannot be improved by the deterministic search procedure, and a "solution" is a complete decision tree. If a significant fraction of local maxima lead to bad trees, then algorithms that stop at the first local maximum they encounter will perform poorly. Because randomization allows OC1 to consider many different local maxima, if a modest percentage of these maxima lead to good trees, then it has a good chance of finding one of those trees. Our experiments with OC1 thus far indicate that the space of oblique hyperplanes usually contains numerous local maxima, and that a substantial percentage of these locally good hyperplanes lead to good decision trees." }, { "figure_ref": [], "heading": "Conclusions and Future Work", "publication_ref": [ "b8", "b32" ], "table_ref": [], "text": "This paper has described OC1, a new system for constructing oblique decision trees. We have shown experimentally that OC1 can produce good classifiers for a range of real-world and artificial domains. We have also shown how the use of randomization improves upon the original algorithm proposed by Breiman et al. (1984), without significantly increasing the computational cost of the algorithm.
The use of randomization might also be beneficial for axis-parallel tree methods. Note that although they do find the optimal test (with respect to an impurity measure) for each node of a tree, the complete tree may not be optimal: as is well known, the problem of finding the smallest tree is NP-complete (Hyafil & Rivest, 1976). Thus even axis-parallel decision tree methods do not produce "ideal" decision trees.
Quinlan has suggested that his windowing algorithm might be used as a way of introducing randomization into C4.5, even though the algorithm was designed for another purpose (Quinlan, 1993a). (The windowing algorithm selects a random subset of the training data and builds a tree using that.) We believe that randomization is a powerful tool in the context of decision trees, and our experiments are just one example of how it might be exploited. We are in the process of conducting further experiments to quantify more accurately the effects of different forms of randomization.
It should be clear that the ability to produce oblique splits at a node broadens the capabilities of decision tree algorithms, especially as regards domains with numeric attributes. Of course, axis-parallel splits are simpler, in the sense that the description of the split only uses one attribute at each node. OC1 uses oblique splits only when their impurity is less than the impurity of the best axis-parallel split; however, one could easily penalize the additional complexity of an oblique split further. This remains an open area for further research. A more general point is that if the domain is best captured by a tree that uses oblique hyperplanes, it is desirable to have a system that can generate that tree. We have shown that for some problems, including those used in our experiments, OC1 builds small decision trees that capture the domain well." }, { "figure_ref": [], "heading": "Appendix A. Complexity Analysis of OC1", "publication_ref": [], "table_ref": [], "text": "In the following, we show that OC1 runs efficiently even in the worst case. For a data set with n examples (points) and d attributes per example, OC1 uses at most $O(dn^2 \log n)$ time. We assume n > d for our analysis.
For the analysis here, we assume the coefficients of a hyperplane are adjusted in sequential order (the Seq method described in the paper). The number of restarts at a node will be r, and the number of random jumps tried will be j. Both r and j are constants, fixed in advance of running the algorithm.
Initializing the hyperplane to a random position takes just $O(d)$ time. We need to consider first the maximum amount of work OC1 can do before it finds a new location for the hyperplane. Then we need to consider how many times it can move the hyperplane.
1. Attempting to perturb the first coefficient ($a_1$) takes $O(dn + n \log n)$ time. Computing the $U_i$'s for all the points (equation 2) requires $O(dn)$ time, and sorting the $U_i$'s takes $O(n \log n)$. This gives us $O(dn + n \log n)$ work.
2. If perturbing $a_1$ does not improve things, we try to perturb $a_2$. Computing all the new $U_i$'s will take just $O(n)$ time because only one term is different for each $U_i$. Re-sorting will take $O(n \log n)$, so this step takes $O(n) + O(n \log n) = O(n \log n)$ time.
3. Likewise $a_3, \ldots, a_d$ will each take $O(n \log n)$ additional time, assuming we still have not found a better hyperplane after checking each coefficient. Thus the total time to cycle through and attempt to perturb all these additional coefficients is $(d-1) \cdot O(n \log n) = O(dn \log n)$.
4. Summing up, the time to cycle through all coefficients is $O(dn \log n) + O(dn + n \log n) = O(dn \log n)$.
5. If none of the coefficients improved the split, then we attempt to make up to j random jumps. Since j is a constant, we will just consider j = 1 for our analysis. This step involves choosing a random vector and running the perturbation algorithm to solve for $\alpha$, as explained in Section 3.2.
As before, we need to compute a set of $U_i$'s and sort them, which takes $O(dn + n \log n)$ time. Because this amount of time is dominated by the time to adjust all the coefficients, the total time so far is still $O(dn \log n)$. This is the most time OC1 can spend at a node before either halting or finding an improved hyperplane.
6. Assuming OC1 is using the Sum Minority or Max Minority error measure, it can only reduce the impurity of the hyperplane n times. This is clear because each improvement means one more example will be correctly classified by the new hyperplane. Thus the total amount of work at a node is limited to $n \cdot O(dn \log n) = O(dn^2 \log n)$. (This analysis extends, with at most linear cost factors, to Information Gain, the Gini Index and the Twoing Rule when there are two categories. It will not apply to a measure that, for example, uses the distances of misclassified objects to the hyperplane.) In practice, we have found that the number of improvements per node is much smaller than n.
Assuming that OC1 only adjusts a hyperplane when it improves the impurity measure, it will do $O(dn^2 \log n)$ work in the worst case.
However, OC1 allows a certain number of adjustments to the hyperplane that do not improve the impurity, although it will never accept a change that worsens the impurity. The number allowed is determined by a constant known as "stagnant-perturbations". Let this value be s. This works as follows.
Each time OC1 finds a new hyperplane that improves on the old one, it resets a counter to zero. It will move the new hyperplane to a different location that has equal impurity at most s times. After each of these moves it repeats the perturbation algorithm. Whenever impurity is reduced, it re-starts the counter and again allows s moves to equally good locations. Thus it is clear that this feature just increases the worst-case complexity of OC1 by a constant factor, s.
Finally, note that the overall cost of OC1 is also $O(dn^2 \log n)$; i.e., this is an upper bound on the total running time of OC1 independent of the size of the tree it ends up creating. (This upper bound applies to Sum Minority and Max Minority; an open question is whether a similar upper bound can be proven for Information Gain or the Gini Index.) Thus the worst-case asymptotic complexity of our system is comparable to that of systems that construct axis-parallel decision trees, which have $O(dn^2)$ worst-case complexity. To sketch the intuition that leads to this bound, let G be the total impurity summed over all leaves in a partially constructed tree (i.e., the sum of currently misclassified points in the tree). Now observe that each time we run the perturbation algorithm on any node in the tree, we either halt or improve G by at least one unit. The worst-case analysis for one node is realized when the perturbation algorithm is run once for every one of the n examples, but when this happens, there would no longer be any misclassified examples and the tree would be complete." }, { "figure_ref": [], "heading": "Appendix B. Definitions of impurity measures available in OC1", "publication_ref": [ "b51", "b8", "b28" ], "table_ref": [], "text": "In addition to the Twoing Rule defined in the text, OC1 contains built-in definitions of five additional impurity measures, defined as follows. In each of the following definitions, the set of examples T at the node about to be split contains n (> 0) instances that belong to one of k categories. (Initially this set is the entire training set.)
A hyperplane H divides T into two non-overlapping subsets $T_L$ and $T_R$ (i.e., left and right). $L_j$ and $R_j$ are the number of instances of category j in $T_L$ and $T_R$ respectively. All the impurity measures initially check to see if $T_L$ and $T_R$ are homogeneous (i.e., all examples belong to the same category), and if so return minimum (zero) impurity.
Information Gain. This measure of the information gained from a particular split was popularized in the context of decision trees by Quinlan (1986). Quinlan's definition makes information gain a goodness measure; i.e., something to maximize. Because OC1 attempts to minimize whatever impurity measure it uses, we use the reciprocal of the standard value of information gain in the OC1 implementation.
Gini Index. The Gini Criterion (or Index) was proposed for decision trees by Breiman et al. (1984). The Gini Index as originally defined measures the probability of misclassification of a set of instances, rather than the impurity of a split. We implement the following variation:
$$\text{Gini}_L = 1.0 - \sum_{i=1}^{k} (L_i/|T_L|)^2 \qquad \text{Gini}_R = 1.0 - \sum_{i=1}^{k} (R_i/|T_R|)^2$$
$$\text{Impurity} = (|T_L| \cdot \text{Gini}_L + |T_R| \cdot \text{Gini}_R)/n$$
where $\text{Gini}_L$ is the Gini Index on the "left" side of the hyperplane and $\text{Gini}_R$ is that on the right.
Max Minority. The measures Max Minority, Sum Minority and Sum Of Variances were defined in the context of decision trees by Heath, Kasif, and Salzberg (1993b). 9 Max Minority has the theoretical advantage that a tree built minimizing this measure will have depth at most log n. Our experiments indicated that this is not a great advantage in practice: seldom do other impurity measures produce trees substantially deeper than those produced with Max Minority. The definition is:
$$\text{Minority}_L = \sum_{\substack{i=1 \\ i \neq \arg\max_j L_j}}^{k} L_i \qquad \text{Minority}_R = \sum_{\substack{i=1 \\ i \neq \arg\max_j R_j}}^{k} R_i$$
$$\text{Max Minority} = \max(\text{Minority}_L, \text{Minority}_R)$$
9. Sum Of Variances was called Sum of Impurities by Heath et al." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors thank Richard Beigel of Yale University for suggesting the idea of jumping in a random direction. Thanks to Wray Buntine of NASA Ames Research Center for providing the IND 2.1 package, to Carla Brodley for providing the LMDT code, and to David Heath for providing the SADT code and for assisting us in using it. Thanks also to three anonymous reviewers for many helpful suggestions. This material is based upon work supported by the National Science Foundation under Grant Nos. IRI-9116843, IRI-9223591, and IRI-9220960." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Sum Minority. This measure is very similar to Max Minority. If $\text{Minority}_L$ and $\text{Minority}_R$ are defined as for the Max Minority measure, then Sum Minority is just the sum of these two values. This measure is the simplest way of quantifying impurity, as it simply counts the number of misclassified instances.
Though Sum Minority performs well on some domains, it has some obvious flaws. As one example, consider a domain in which n = 100, d = 1, and k = 2 (i.e., 100 examples, 1 numeric attribute, 2 classes). Suppose that when the examples are sorted according to the single attribute, the first 50 instances belong to category 1, followed by 24 instances of category 2, followed by 26 instances of category 1. Then all possible splits for this distribution have a sum minority of 24.
Therefore it is impossible when using Sum Minority to distinguish which split is preferable, although splitting at the alternations between categories is clearly better.
Sum Of Variances. The definition of this measure is:
$$\text{Sum of Variances} = \text{Variance}_L + \text{Variance}_R$$
where $\text{Variance}_L$ and $\text{Variance}_R$ are the variances of the category labels $\text{Cat}(T_i)$ of the instances on the left and right of the split, and $\text{Cat}(T_i)$ is the category of instance $T_i$. As this measure is computed using the actual class labels, it is easy to see that the impurity computed varies depending on how numbers are assigned to the classes. For instance, if $T_1$ consists of 10 points of category 1 and 3 points of category 2, and if $T_2$ consists of 10 points of category 1 and 3 points of category 5, then the Sum Of Variances values are different for $T_1$ and $T_2$. To avoid this problem, OC1 uniformly reassigns category numbers according to the frequency of occurrence of each category at a node before computing the Sum Of Variances." } ]
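For concreteness, the Appendix B measures admit a direct implementation from the category counts. The following Python sketch is ours (the helper names are invented, and it assumes non-negative integer labels and a non-empty node):

```python
import numpy as np

def _category_counts(y_left, y_right):
    """Per-category counts L_i and R_i over the union of categories."""
    cats = np.union1d(y_left, y_right)
    L = np.array([(y_left == c).sum() for c in cats], dtype=float)
    R = np.array([(y_right == c).sum() for c in cats], dtype=float)
    return L, R

def gini_impurity(y_left, y_right):
    L, R = _category_counts(y_left, y_right)
    n_l, n_r = L.sum(), R.sum()
    gini_l = 1.0 - ((L / n_l) ** 2).sum() if n_l else 0.0
    gini_r = 1.0 - ((R / n_r) ** 2).sum() if n_r else 0.0
    return (n_l * gini_l + n_r * gini_r) / (n_l + n_r)

def _minority(counts):
    """Number of examples outside the majority category on one side."""
    return counts.sum() - (counts.max() if counts.size else 0.0)

def max_minority(y_left, y_right):
    L, R = _category_counts(y_left, y_right)
    return max(_minority(L), _minority(R))

def sum_minority(y_left, y_right):
    L, R = _category_counts(y_left, y_right)
    return _minority(L) + _minority(R)

def sum_of_variances(y_left, y_right):
    # Category numbers are reassigned by frequency at the node before
    # the variances are computed, as described in the paragraph above.
    y = np.concatenate([y_left, y_right])
    cats, counts = np.unique(y, return_counts=True)
    rank = {c: i + 1 for i, c in enumerate(cats[np.argsort(-counts)])}
    relabel = lambda ys: np.array([rank[c] for c in ys], dtype=float)
    var = lambda ys: float(np.var(relabel(ys))) if ys.size else 0.0
    return var(y_left) + var(y_right)
```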
[ { "authors": "D Aha", "journal": "", "ref_id": "b0", "title": "A Study of Instance-Based Algorithms for Supervised Learning: Mathematical, empirical and psychological evaluations", "year": "1990" }, { "authors": "H Almuallin; T Dietterich", "journal": "", "ref_id": "b1", "title": "Learning with many irrelevant features", "year": "1991" }, { "authors": "D Belsley", "journal": "Wiley & Sons", "ref_id": "b2", "title": "Regression Diagnostics: Identifying In uential Data and Sources of Collinearity", "year": "1980" }, { "authors": "K Bennett; O Mangasarian", "journal": "Optimization Methods and Software", "ref_id": "b3", "title": "Robust linear programming discrimination of two linearly inseparable sets", "year": "1992" }, { "authors": "K Bennett; O Mangasarian", "journal": "Optimization Methods and Software", "ref_id": "b4", "title": "Multicategory discrimination via linear programming", "year": "1994" }, { "authors": "K Bennett; O Mangasarian", "journal": "SIAM Journal on Optimization", "ref_id": "b5", "title": "Serial and parallel multicategory discrimination", "year": "1994" }, { "authors": "A Blum; R Rivest", "journal": "", "ref_id": "b6", "title": "Training a 3-node neural network is NP-complete", "year": "1988" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b7", "title": "", "year": "" }, { "authors": "L Breiman; J Friedman; R Olshen; C Stone", "journal": "Wadsworth International Group", "ref_id": "b8", "title": "Classi cation and Regression Trees", "year": "1984" }, { "authors": "R P Brent", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b9", "title": "Fast training algorithms for multilayer neural nets", "year": "1991" }, { "authors": "C E Brodley; P E Utgo", "journal": "", "ref_id": "b10", "title": "Multivariate versus univariate decision trees", "year": "1992" }, { "authors": "C E Brodley; P E Utgo", "journal": "", "ref_id": "b11", "title": "Multivariate decision trees", "year": "1994" }, { "authors": "W Buntine", "journal": "Technology", "ref_id": "b12", "title": "Tree classi cation software", "year": "1992" }, { "authors": "W Buntine; T Niblett", "journal": "Machine Learning", "ref_id": "b13", "title": "A further comparison of splitting rules for decision-tree induction", "year": "1992" }, { "authors": "C Cardie", "journal": "", "ref_id": "b14", "title": "Using decision trees to improve case-based learning", "year": "1993" }, { "authors": "G Cestnik; I Kononenko; I Bratko", "journal": "Sigma Press", "ref_id": "b15", "title": "Assistant 86: A knowledge acquisition tool for sophisticated users", "year": "1987" }, { "authors": "K J Cios; N Liu", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b16", "title": "A machine learning method for generation of a neural network architecture: A continuous ID3 algorithm", "year": "1992" }, { "authors": "W Cohen", "journal": "", "ref_id": "b17", "title": "E cient pruning methods for separate-and-conquer rule learning systems", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b18", "title": "", "year": "" }, { "authors": "U M Fayyad; K B Irani", "journal": "AAAI Press", "ref_id": "b19", "title": "The attribute speci cation problem in decision tree generation", "year": "1992" }, { "authors": "M Frean", "journal": "", "ref_id": "b20", "title": "Small Nets and Short Paths: Optimising neural computation", "year": "1990" }, { "authors": "R Gupta; S Smolka; S Bhaskar", "journal": "ACM Computing Surveys", "ref_id": "b21", "title": "On randomization in sequential and distributed 
algorithms", "year": "1994" }, { "authors": "S Hampson; D Volper", "journal": "Biological Cybernetics", "ref_id": "b22", "title": "Linear function neurons: Structure and training", "year": "1986" }, { "authors": "D Harrison; D Rubinfeld", "journal": "Journal of Environmental Economics and Management", "ref_id": "b23", "title": "Hedonic prices and the demand for clean air", "year": "1978" }, { "authors": "B Hassibi; D Stork", "journal": "", "ref_id": "b24", "title": "Second order derivatives for network pruning: optimal brain surgeon", "year": "1993" }, { "authors": "Morgan Kaufmann; San Mateo; Ca", "journal": "", "ref_id": "b25", "title": "", "year": "" }, { "authors": "D Heath", "journal": "", "ref_id": "b26", "title": "A Geometric Framework for Machine Learning", "year": "1992" }, { "authors": "D Heath; S Kasif; S Salzberg", "journal": "", "ref_id": "b27", "title": "k-DT: A multi-tree learning method", "year": "1993" }, { "authors": "D Heath; S Kasif; S Salzberg", "journal": "", "ref_id": "b28", "title": "Learning oblique decision trees", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b29", "title": "", "year": "" }, { "authors": "G T Herman; K D Yeung", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b30", "title": "On piecewise-linear classi cation", "year": "1992" }, { "authors": "R Holte", "journal": "Machine Learning", "ref_id": "b31", "title": "Very simple classi cation rules perform well on most commonly used datasets", "year": "1993" }, { "authors": "L Hya L; R L Rivest", "journal": "Information Processing Letters", "ref_id": "b32", "title": "Constructing optimal binary decision trees is NPcomplete", "year": "1976" }, { "authors": "K Kira; L Rendell", "journal": "", "ref_id": "b33", "title": "A practical approach to feature selection", "year": "1992" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b34", "title": "", "year": "" }, { "authors": "S Kirkpatrick; C Gelatt; M Vecci", "journal": "Science", "ref_id": "b35", "title": "Optimization by simulated annealing", "year": "1983" }, { "authors": "Y Kodrato; M Manago", "journal": "International Journal of Man-Machine Studies", "ref_id": "b36", "title": "Generalization and noise", "year": "1987" }, { "authors": "P Langley; S Sage", "journal": "", "ref_id": "b37", "title": "Scaling to domains with many irrelevant features", "year": "1993" }, { "authors": "O Mangasarian; R Setiono; W Wolberg", "journal": "SIAM Workshop on Optimization", "ref_id": "b38", "title": "Pattern recognition via linear programming: Theory and application to medical diagnosis", "year": "1990" }, { "authors": "J Mingers", "journal": "Machine Learning", "ref_id": "b39", "title": "An empirical comparison of pruning methods for decision tree induction", "year": "1989" }, { "authors": "J Mingers", "journal": "Machine Learning", "ref_id": "b40", "title": "An empirical comparison of selection measures for decision tree induction", "year": "1989" }, { "authors": "B M Moret", "journal": "Computing Surveys", "ref_id": "b41", "title": "Decision trees and diagrams", "year": "1982" }, { "authors": "P Murphy; D Aha", "journal": "", "ref_id": "b42", "title": "UCI repository of machine learning databases { a machinereadable data repository", "year": "1994" }, { "authors": "S K Murthy; S Kasif; S Salzberg; R Beigel", "journal": "MIT Press", "ref_id": "b43", "title": "OC1: Randomized induction of oblique decision trees", "year": "1993" }, { "authors": "S K Murthy; S Salzberg", "journal": "", 
"ref_id": "b44", "title": "Using structure to improve decision trees", "year": "1994" }, { "authors": "T Niblett", "journal": "Sigma Press", "ref_id": "b45", "title": "Constructing decision trees in noisy domains", "year": "1986" }, { "authors": "N Nilsson", "journal": "Morgan Kaufmann", "ref_id": "b46", "title": "Learning Machines", "year": "1990" }, { "authors": "S Odewahn; E Stockwell; R Pennington; R Humphreys; W Zumach", "journal": "Astronomical Journal", "ref_id": "b47", "title": "Automated star-galaxy descrimination with neural networks", "year": "1992" }, { "authors": "G Pagallo", "journal": "", "ref_id": "b48", "title": "Adaptive Decision Tree Algorithms for Learning From Examples", "year": "1990" }, { "authors": "G Pagallo; D Haussler", "journal": "Machine Learning", "ref_id": "b49", "title": "Boolean feature discovery in empirical learning", "year": "1990" }, { "authors": "J R Quinlan", "journal": "Morgan Kaufmann", "ref_id": "b50", "title": "Learning e cient classi cation procedures and their application to chess end games", "year": "1983" }, { "authors": "J R Quinlan", "journal": "Machine Learning", "ref_id": "b51", "title": "Induction of decision trees", "year": "1986" }, { "authors": "J R Quinlan", "journal": "International Journal of Man-Machine Studies", "ref_id": "b52", "title": "Simplifying decision trees", "year": "1987" }, { "authors": "J R Quinlan", "journal": "Morgan Kaufmann Publishers", "ref_id": "b53", "title": "C4.5: Programs for Machine Learning", "year": "1993" }, { "authors": "J R Quinlan", "journal": "", "ref_id": "b54", "title": "Combining instance-based and model-based learning", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b55", "title": "", "year": "" }, { "authors": "R H Roth", "journal": "Journal of the ACM", "ref_id": "b56", "title": "An approach to solving linear discrete optimization problems", "year": "1970" }, { "authors": "S R Safavin; D Landgrebe", "journal": "IEEE Transactions on Systems, Man and Cybernetics", "ref_id": "b57", "title": "A survey of decision tree classi er methodology", "year": "1991" }, { "authors": "M Sahami", "journal": "AAAI Press", "ref_id": "b58", "title": "Learning non-linearly separable boolean functions with linear threshold unit trees and madaline-style networks", "year": "1993" }, { "authors": "S Salzberg", "journal": "Machine Learning", "ref_id": "b59", "title": "A nearest hyperrectangle learning method", "year": "1991" }, { "authors": "S Salzberg", "journal": "", "ref_id": "b60", "title": "Combining learning and search to create good classi ers", "year": "1992" }, { "authors": "S Salzberg; R Chandar; H Ford; S K Murthy; R White", "journal": "Publications of the Astronomical Society of the Paci c", "ref_id": "b61", "title": "Decision trees for automated identi cation of cosmic rays in Hubble Space Telescope images", "year": "1994" }, { "authors": "C Scha Er", "journal": "Machine Learning", "ref_id": "b62", "title": "Over tting avoidance as bias", "year": "1993" }, { "authors": "J Schlimmer", "journal": "", "ref_id": "b63", "title": "E ciently inducing determinations: A complete and systematic search algorithm that uses optimal pruning", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b64", "title": "", "year": "" }, { "authors": "J Smith; J Everhart; W Dickson; W Knowler; R Johannes", "journal": "IEEE Computer Society Press", "ref_id": "b65", "title": "Using the ADAP learning algorithm to forecast the onset of diabetes mellitus", "year": "1988" }, { "authors": 
"P E Utgo", "journal": "Connection Science", "ref_id": "b66", "title": "Perceptron trees: A case study in hybrid concept representations", "year": "1989" }, { "authors": "P E Utgo; C E Brodley", "journal": "", "ref_id": "b67", "title": "An incremental method for nding multivariate splits for decision trees", "year": "1990" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b68", "title": "", "year": "" }, { "authors": "P E Utgo; C E Brodley", "journal": "", "ref_id": "b69", "title": "Linear machine decision trees", "year": "1991" }, { "authors": "T Van De Merckt", "journal": "", "ref_id": "b70", "title": "NFDT: A system that learns exible concepts based on decision trees for numerical attributes", "year": "1992" }, { "authors": "T Van De Merckt", "journal": "", "ref_id": "b71", "title": "Decision trees in numerical attribute spaces", "year": "1993" }, { "authors": "S Weiss; I Kapouleas", "journal": "", "ref_id": "b72", "title": "An empirical comparison of pattern recognition, neural nets, and machine learning classi cation methods", "year": "1989" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b73", "title": "", "year": "" }, { "authors": "D Wolpert", "journal": "The Santa Fe Institute", "ref_id": "b74", "title": "On over tting avoidance as bias", "year": "1992" } ]
[ { "formula_coordinates": [ 2, 262.08, 407.82, 14.4, 37.12 ], "formula_id": "formula_0", "formula_text": "d X i=1" }, { "formula_coordinates": [ 7, 140.16, 148.32, 44.16, 14.4 ], "formula_id": "formula_1", "formula_text": "L = L + 1" }, { "formula_coordinates": [ 7, 160.8, 220.32, 80.88, 16.34 ], "formula_id": "formula_2", "formula_text": "a i = a i , c = c" }, { "formula_coordinates": [ 8, 121.27, 86.16, 365.73, 140.18 ], "formula_id": "formula_3", "formula_text": "2 2 1 1 1 1 2 2 Ini t i al Loc. C A R T -L C O C 1" }, { "formula_coordinates": [ 10, 90, 333.48, 425.52, 29.86 ], "formula_id": "formula_4", "formula_text": "(V i ) = sign(V j ) i category(T i ) = category(T j )." }, { "formula_coordinates": [ 10, 247.92, 428.64, 274.32, 51.34 ], "formula_id": "formula_5", "formula_text": "V j > 0 a m > a m x jm V j x jm def = U j (2)" }, { "formula_coordinates": [ 14, 157.68, 588.54, 296.16, 37.12 ], "formula_id": "formula_6", "formula_text": "TwoingValue = (jT L j=n) (jT R j=n) ( k X i=1 jL i =jT L j R i =jT R jj) 2" }, { "formula_coordinates": [ 21, 179.15, 88.03, 116.37, 141.99 ], "formula_id": "formula_7", "formula_text": "1 2 2 2 1 2 1 2 1 1 2 2 2 1 1 1 2 2 2 2 1 2 2 2 2 2 2 1 1 1 2 2 1 2 2 2 2 1 2 2 1 2 1 1 1 2 2 2 2 2 1 1 2 1 1 1 2 1 1 2 1 2 1 2 2 1 1 2 2 2 2 2 2 2 2 2 2 1 1 2 1 1 1 2 2 1 2 2 1 2 1 1 1 2 2 2 1 2 2 1 1 2 1 2 1 2 2 2 2 1 2 1 2 1 1 2 1 1 2 2 1 2 2 2 2 2 2 2 1 2 2 2 2 1 1 1 1 2 1 2 2 1 1 1 2 1 2 2 2 2 2 1 2 2 2 2 2 1 2 1 2 2 1 2 2 1 2 2 2 1 2 1 2 1 2 2 2 2 1 2 1 1 1 1 2 1 1 1 1 2 1 2 1 1 1 2 1 2 1 2 1 1 1 2 1 2 1 1 1 2 1 2 2 2 1 2 1 2 2 2 1 1 2 1 1 1 1 2 2 2 1 1 1 1 2 2 1 2 2 1 2 2 1 2 2 2 1 2 2 2 2 1 1 2 1 2 1 2 2 2 1 1 1 1 1 2 1 1 1 1 1 1 2 1 1 2 2 1 1 2 2 2 2 2 1 2 1 2 1 2 2 2 1 1 2 2 2 2 1 2 2 2 2 1 2 1 1 1 1 2 2 2 2 2 1 1 1 2 1 1 1 1 2 1 1 2 2 2 2 1 1 2 1 2 1 2 1 2 1 2 2 2 2 1 1 2 1 2 2 2 1 1 2 2 1 2 2 1 1 2 1 2 1 1 2 2 2 2 1 1 1 2 2 1 1 2 1 1 2 1 1 1 1 2 2 1 2 2 2 1 1 2 1 2 1 1 1 2 1 2 2 2 1 1 2 2 1 2 1 2 1 1 1 2 1 1 2 1 2 2 1 1 2 1 1 2 2 2 2 1 2 2 1 1 2 1 2 2 1 1 1 1 2 1 2 2 1 2 2 2 1 2 1 2 2 1 1 2 2 1 1 2 2 1 1 2 1 2 1 1 2 1 1 1 1 1 2 2 2 2 1 2 1 2 1 2 2 1 1 2 1 2 1 1 2 2 2 1 1 1 1 2 1 2 1 2 1 2 1 1 2 1 1 1 1 1 1 2 2 2 1 2 1 1 1 2 2 1 2 1 2 1 1 1 1 2 1 2 2 1 2 1 1 1 2 1 2 1 1 1 2 2 1 2 2 2 2 2 1 1 1 1 1 1 2 1 2 1 1 2 1 2 1 1 1 2 1 1 1 1 1 1 1 1 2 1 2 2 2 2 1 2 1 2 2 1 2 2 2 2 1 2 1 1 2 2 1 1 1 2 2 2 1 1 2 2 1 1 1 2 1 1 2 1 1 1 2 2 1 2 2 2 2 1 1 1 1 1 1 1 2 1 1 1 1 2 1 1 1 2 2 1 1 2 2 2 2 1 2 2 2 1 2 2 2 2 2 1 1 1 2 1 2 1 2 2 2 1 1 1 1 1 2 2 1 1 1 2 1 2 1 1 1 1 2 1 2 1 1 1 1 1 2 2 1 1 2 2 2 2 1 2 2 2 2 2 2 1 1 2 2 2 2 1 1 2 1 2 2 2 1 2 2 1 1 1 2 2 2 1 2 2 1 1 2 1 2 2 2 1 2 1 1 1 2 2 1 1 2 1 2 1 2 2 1 2 2 1 2 2 2 1 2 1 2 1 2 2 2 1 2 2 1 1 1 1 2 2 2 2 2 2 1 1 2 2 2 2 1 1 1 2 1 1 2 1 2 2 1 2 1 2 2 2 2 2 1 1 2 1 1 2 2 2 1 1 2 2 2 2 2 2 1 2 1 2 2 2 2 2 2 2 2 2 1 1 2 1 2 2 1 2 2 1 1 2 2 1 1 2 2 2 1 1 1 2 1 1 1 2 2 1 1 2 2 2 2 2 2 2 2 2 1 1 2 1 2 2 2 1 1 2 1 1 2 1 2 1 1 1 2 1 1 2 2 2 2 1 1 1 1 1 2 2 2 1 2 1 1 2 2 2 1 1 2 1 2 1 1 1 2 2 2 2 1 2 1 2 2 1 1 1 1 1 2 2 1 2 2 1 1 1 1 1 2 2 2 1 2 1 2 2 1 2 2 2 1 2 2 2 1 1 1 2 2 1 2 2 2 2 2 2 1 2 1 2 2 2 2 2 1 1 2 2 1 1 2 1 2 R o o t- 1 r-1 rr -1 rr r" }, { "formula_coordinates": [ 26, 237.84, 314.22, 134.88, 84.64 ], "formula_id": "formula_8", "formula_text": "GiniL = 1:0 k X i=1 (L i =jT L j) 2 GiniR = 1:0 k X i=1" }, { "formula_coordinates": [ 26, 240.96, 564.54, 129.6, 91.26 ], "formula_id": "formula_9", "formula_text": "MinorityL = k X i=1;i6 =max L i L i MinorityR = k X i=1;i6 =max R i R i" } ]
A System for Induction of Oblique Decision Trees
This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees.
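Concretely, the oblique split the abstract refers to tests the sign of a linear combination of the numeric attributes, with an axis-parallel split as the special case in which only one coefficient is nonzero. A minimal sketch of evaluating such a split (the helper names are mine, not OC1's):

```python
# Evaluate an oblique split: an example x goes left iff
# sum_{i=1..d} a_i * x_i + a_{d+1} > 0. With a single nonzero a_i this
# degenerates to the usual axis-parallel test on one attribute.

def side(a, x):
    d = len(x)
    v = sum(a[i] * x[i] for i in range(d)) + a[d]  # a holds d+1 coefficients
    return "left" if v > 0 else "right"

a = [1.0, -1.0, 0.5]          # hyperplane x1 - x2 + 0.5 > 0 in 2-D
print(side(a, [0.2, 0.9]))    # right
print(side(a, [0.9, 0.2]))    # left
```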
Sreerama K Murthy; Simon Kasif; Steven Salzberg
[ { "figure_caption": "1.Begin with a set of examples called the training set, T. If all examples in T belong to one class, then halt. 2. Consider all tests that divide T into two or more subsets. Score each test according to how well it splits up the examples. 3. Choose (\\greedily\") the test that scores the highest. 4. Divide the examples into subsets and run this procedure recursively on each subset.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: For n points in d dimensions (n d), there are n d distinct axis-parallel splits, while there are 2 d n d distinct d-dimensional oblique splits. This shows all distinct oblique and axis-parallel splits for two speci c points in 2-D.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Overview of the OC1 algorithm for a single node of a decision tree.", "figure_data": "", "figure_id": "fig_2", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Finding the optimal value for a single coe cient a m . Large U's correspond to examples in one category and small u's to another.", "figure_data": "", "figure_id": "fig_3", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "For i = 1 to d, Perturb(H; i) Best: Repeat until coe cient m remains unmodi ed: m = coe cient which when perturbed, results in the maximum improvement of the impurity measure. Perturb(H; m) R-50: Repeat a xed number of times (50 in our experiments): m = random integer between 1 and d + 1 Perturb(H; m)", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9: The POL and RCB data sets Linearly Separable 10-D (LS10) data R:J Accuracy Size Hyperplanes 0:0 89.8 1.2 67.0 5.8 2756 0:20 91.5 1.5 55.2 7.0 3824 20:0 95.0 0.6 25.6 2.4 24913 20:20 97.2 0.7 13.9 3.2 30366 LMDT 99.7 0.2 2.2 0.5 9089 SADT 95.2 1.8 15.5 5.7 349067 Parallel Oblique Lines (POL) data R:J Accuracy Size Hyperplanes 0:0 98.3 0.3 21.6 1.9 164 0:20 99.3 0.2 9.0 1.0 360 20:0 99.1 0.2 14.2 1.1 3230 20:20 99.6 0.1 5.5 0.3 4852 LMDT 89.6 10.2 41.9 19.2 1732 SADT 99.3 0.4 8.4 2.1 85594 Rotated Checker Board (RCB) data R:J Accuracy Size Hyperplanes 0:0 98.4 0.2 35.5 1.4 573 0:20 99.3 0.3 19.7 0.8 1778 20:0 99.6 0.2 12.0 1.4 6436 20:20 99.8 0.1 8.7 0.4 11634 LMDT 95.7 2.3 70.1 9.6 2451 SADT 97.9 1.1 32.5 4.9 359112", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "and other work cited above, we choseBreiman et al.'s Cost Complexity (CC) pruning (1984) as the default pruning method for OC1. This method, which is also called Error Complexity or Weakest Link pruning, requires a separate pruning set. The pruning set can be a randomly chosen subset of the training set, or it can be approximated using cross validation. OC1 randomly chooses 10% (the default value) of the training data to use for pruning. In the experiments reported below, we only used this default value.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of OC1 and other decision tree induction methods on six di erent data sets. The rst line for each method gives accuracies, and the second line gives average tree sizes. 
The highest accuracy for each domain appears in boldface.", "figure_data": "C4.5 4 10 11.5 7.2 4.3 1.6 15.1 10 11.5 9.1 98.5 0.5 93.3 0.8 95.3 2.0 95.1 3.2 83.2 3.1 71.4 3.3 14.3 2.2 77.9 7.4 9.8 2.2 4.6 0.8 28.2 3.3 56.3 7.9", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
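Figure 7 and equation (2) above describe the core of OC1's hill-climbing: with the other coefficients held fixed, each example x_j yields a critical value U_j = (a_m·x_{jm} − V_j)/x_{jm}, and a good new value for the single coefficient a_m can be found by sorting the U_j values and scoring candidates between consecutive ones. The following Python sketch is a simplified reading of that step, not OC1's actual source: the impurity function is passed in, examples with x_{jm} = 0 are skipped, and the sign-dependent direction of the inequality in (2) is sidestepped by scoring midpoint candidates directly.

```python
# Simplified sketch of OC1's single-coefficient perturbation step.
# a is the hyperplane [a_1, ..., a_d, a_{d+1}]; examples are (x, label) pairs.

def perturb(a, m, examples, impurity):
    d = len(a) - 1

    def v(x):  # V_j = sum_i a_i * x_i + a_{d+1}
        return sum(a[i] * x[i] for i in range(d)) + a[d]

    # Critical values U_j of equation (2); x_{jm} is taken as 1 for the
    # constant coefficient, and examples with x_{jm} == 0 are skipped.
    us = sorted(
        (a[m] * xm - v(x)) / xm
        for x, _ in examples
        for xm in [x[m] if m < d else 1.0]
        if xm != 0
    )

    best, best_score = a[:], impurity(a, examples)
    for lo, hi in zip(us, us[1:]):      # candidate a_m between consecutive U_j
        cand = a[:]
        cand[m] = (lo + hi) / 2.0
        score = impurity(cand, examples)
        if score < best_score:          # impurity is minimized
            best, best_score = cand, score
    return best
```

Repeating this step over coefficients in one of the orders listed in the figure captions above (Seq, Best, or R-50) gives the deterministic part of the search; the two forms of randomization mentioned in the abstract kick in when this hill-climbing stagnates.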
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Suppose you nd yourself in a complex labyrinth, with no recollection as to what brought you there or how to get out. You do have some knowledge as to the possible outcomes of your actions (e.g., gravitation works as usual). However, several basic characteristics of your surrounding are unknown (e.g., the map of the labyrinth, or where you are in it). Your goal is to plan your way out of there while learning enough facts about your surroundings to enable that goal.\nThe above example is a special case of the following general setting: An agent P operating in an environment, is trying to achieve a given goal. At each point in time, the agent is in a speci c state. The agent can be fully described by a decision procedure, which determines the next action to be taken, as a function of its history of states. An environment is taken to have some behavior, determining, for every state and action taken by the agent, the next state that will be reached. P is given a set of possible behaviors of the environment, only one of which is the actual behavior of the speci c environment. We say that P has partial information on the behavior of the environment. P's goal is given as a subset of the states. Reaching any of these states is considered a success. P's goal is, of course, not necessarily achievable; it may be the case that for one of the possible behaviors of the environment, there does not exist a sequence of actions that would lead P to a success. Moreover, even if for every possible speci c behavior of the environment there exists a sequence of actions that leads P to its goal, it may still be the case that P cannot achieve its goal. For example, consider an environment with two possible behaviors E 1 and E 2 . It may be the case that the only action a that leads P to its goal when the environment follows E 1 , leads to a state from which the goal is not achievable if the environment behaves according to E 2 .\nNevertheless, even in the case in which P's knowledge of the environment is not complete, it may sometimes be possible for P to achieve its goal. Suppose that we change the above example so that there exists an action c that, if taken by P, leads in the case that the environment behaves according to E 1 , to a state which is observably di erent from the state that results from the same action (c) taken when the environment behaves according to E 2 . In addition, suppose that there exists an action d that, in both cases, reverses c's e ects. In this case, P can always achieve its goal, by following the following plan: rst take c, and, according to the resulting state decide whether the environment behaves according to E 1 or according to E 2 . Then, take action d to get back to the initial state. Finally, apply the applicable sequence of actions for either the E 1 or the E 2 case.\nIn general, P may perform some actions that reduce the number of possible behaviors of the environment (i.e., increase the knowledge that P has on the environment), while avoiding actions that may lead to failure in any of the still possible behaviors of the environment, according to P's knowledge. P may eventually learn enough about the behavior of the environment to choose the applicable action that leads to success. 
This process is referred to as Planning while Learning.
This paper discusses the framework of Planning while Learning, while concentrating on the tractability of finding a satisfactory plan (i.e., a way to achieve the goal regardless of which possible behavior of the environment is the actual one), or of checking that a given plan is satisfactory. The next section defines a basic framework where Planning while Learning can be studied. In Section 3 we discuss the computational aspects we study in this paper. In particular, we distinguish between three main types of representation, and between three main computational categories. In Sections 4-5 we classify Planning while Learning based on these computational categories and representation types. In Section 6 we discuss several extensions to our basic framework. In Section 7 we put our framework and results in the perspective of related work." }, { "figure_ref": [], "heading": "The Basic Framework", "publication_ref": [ "b25", "b10", "b25", "b26", "b10", "b11", "b25", "b10", "b22", "b17" ], "table_ref": [], "text": "Consider the following examples, which have motivated our study. The first example is taken from a medical domain. Consider a trauma-care system, where there are many observations that can be made on a patient's state. Actions taken by the doctor may change these observations. For example, the doctor may be able to observe whether the patient's blood pressure is high or low, and whether the patient has high or low temperature. Based on the observations made, the doctor may need to take an action, which may in turn lead to new observations. Based on these new observations, the doctor may need to choose a subsequent action, and so on. There is a list of possible injuries that the patient might suffer from, but the exact nature of the actual injury is not known a priori. Naturally, the effects of the action taken by the doctor may depend on the actual injury of the patient. The doctor needs to devise a plan that will take the patient from his initial observable state to a goal state (i.e., a \"physically stable state\"). The doctor can observe the patient at each point in time, and learn facts about the actual injury (i.e., the actual environment behavior) during the execution of the plan. Hence, we get a natural situation where Planning while Learning is necessary.
Our second example is taken from a transportation domain. Let G = (V, E) be a directed graph, where the vertices denote locations in a hostile environment. The edges denote safe routes from one location to another, which are taken to be one-way (two-way routes are described as a pair of one-way routes). One-way routes in this particular domain occur as a result of the structure of the environment and of the vehicle used by the agent. Uncertainty in this transportation domain arises from the fact that there is incomplete information about the end-point of some routes originating from some particular locations (i.e., the map is partially unknown). In some cases this incomplete information concerns only a small number of locations and routes, where the number of possible end-points of a route is also small. An agent moving along these routes knows the possible alternatives for the structure of the environment and can identify the locations it arrives at. The objective of the agent is to reach a given target location starting from a given initial location.
The above examples are taken from real-life situations. They are typical situations of bounded uncertainty.
Similar situations occur whenever we have to operate a machine that works in one of several modes. In many of those cases, the possible observations can be stated, and the set of possible environment behaviors can be listed; the actual behavior, however, may be unknown a priori. Illuminating results regarding these examples are implied by our study. Nevertheless, we first have to define our basic framework.
Definition 2.1: An agent-environment system M = (Q, A, q_0, τ_M) consists of a set of observable states Q, a set of possible actions A, an initial state q_0 ∈ Q, and an actual transition function τ_M : Q × A → Q that determines for each state q ∈ Q and action a ∈ A the next state q' = τ_M(q, a). Based on the above definition we can define what a Planning while Learning system is. Notice that we associate the informal term \"behavior\" with the term \"transition function\", where the actual behavior is the actual transition function.
Definition 2.2: A Planning while Learning system S = (M, Σ) consists of an agent-environment system M = (Q, A, q_0, τ_M), and a set of possible transition functions Σ = {E_1, ..., E_n}, all sharing the same set of observable states Q, where the actual transition function is one of these possible transition functions.
Notice that we used the term observable states rather than just states. An observable state of an agent is what the agent perceives at a given point (e.g., its physical location) rather than its complete state of knowledge. We assume that an agent can always distinguish between different observable states. The complete state of knowledge can be defined based on the history of actions and observable states of the agent. This history is an ordered sequence of observable states the agent visited and actions it performed. For example, if an agent performed an action a that led from an observable state s_1 to an observable state s_2, and a leads to s_2 if and only if the environment behavior is not b, then we can say that the agent learned that the environment behavior is not b. The agent will know that the environment does not behave according to b in the state it reaches following this action, but this knowledge is not implied by the observable state s_2! This enables us to obtain a succinct and natural representation of agents. This type of representation has already been used in Rosenschein's situated automata (Rosenschein, 1985) and in work on reasoning about knowledge (Halpern & Moses, 1984).
The observable states in the transportation domain example are the locations, and the actual transition function corresponds to the actual routes in the environment. In the trauma-care domain, the observable states are the possible observations of the doctor, while the actual transition function corresponds to the effects of the doctor's actions given the actual injury of the patient. In both examples the agent may reach a state where it knows complex facts about the environment, based on the facts it learned by acting and observing. However, these complex states need not be represented explicitly. Further discussion of this topic can be found in the situated automata (Rosenschein, 1985; Rosenschein & Kaelbling, 1986) and knowledge in distributed systems (Halpern & Moses, 1984; Halpern, 1988) literature.
The reader should not confuse our use of automata-like structures with other common uses of such structures.
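Definitions 2.1 and 2.2 translate directly into data structures: an agent-environment system is a tuple of states, actions, an initial state, and a transition function, and a Planning while Learning system pairs it with the finite set Σ of candidate transition functions. A minimal Python sketch of that reading (all names and the toy behaviors are illustrative, not from the paper):

```python
# Minimal sketch of Definitions 2.1-2.2. A "behavior" (transition function)
# is modeled as a dict mapping (state, action) -> state.

from dataclasses import dataclass

@dataclass
class AgentEnvSystem:
    states: set            # Q
    actions: set           # A
    initial: str           # q_0
    tau: dict              # actual transition function, (q, a) -> q'

@dataclass
class PWLSystem:
    system: AgentEnvSystem
    candidates: list       # Sigma = [E_1, ..., E_n], each a (q, a) -> q' dict

# The labyrinth-style example from the introduction: action 'c' leads to
# observably different states under the two candidate behaviors.
E1 = {("q0", "c"): "saw1", ("saw1", "d"): "q0", ("q0", "a"): "goal"}
E2 = {("q0", "c"): "saw2", ("saw2", "d"): "q0", ("q0", "b"): "goal"}
pwl = PWLSystem(
    AgentEnvSystem({"q0", "saw1", "saw2", "goal"}, {"a", "b", "c", "d"}, "q0", E1),
    [E1, E2],
)
```

Here the probing action c leads to observably different states under the two candidate behaviors, which is exactly what lets a plan learn which behavior is the actual one.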
We don't assume that an agent acts as if it were a finite-state machine, but only that the number of possible observations and possible environment behaviors it considers is finite. The agent's decisions will be based on its history of observations and actions, which determines its local state (Rosenschein, 1985; Halpern & Moses, 1984) and is much more complex than its observable state. The agent's local state is not necessarily represented explicitly. This gives succinct and useful representations, such as the ones discussed in Discrete Event Systems (DES) (Ramadge & Wonham, 1989) and in work in AI that incorporates uncertainty into control-theoretic models (Moses & Tennenholtz, 1991).
The above model is fundamental, and some extensions of it will be discussed in Section 6. Given this model, we are now able to define the basic problem in Planning while Learning. The problem is to find a satisfactory plan that achieves a goal given any possible behavior of the environment. This problem is further discussed in the following section and investigated in subsequent sections. Similar definitions hold, and similar results can be obtained, if we require the agent to achieve its goal only in a fraction (e.g., 90%) of the possible behaviors.
Definition 2.3: Let S = (M, Σ) be a Planning while Learning system, where M = (Q, A, q_0, τ_M) and Σ = {E_1, ..., E_n}. A goal g for the agent P is a subset of the states Q. A plan for an agent is a function from its history of states (in Q) and actions (in A) to an action (in A). Given a goal g, a satisfactory plan is a plan that guarantees that the agent will reach a state in g starting from q_0, under any possible transition function in Σ. A plan is called efficient if the number of actions that are executed in the course of it is polynomially bounded (in the representation of the Planning while Learning system).
A satisfactory plan is therefore a plan in which the agent learns enough about the environment behavior in order to guarantee the achievement of the agent's goal. Notice that, in general, a plan might be very complex. An agent might arrive, in the course of its learning process, at situations where its goal is no longer achievable. Hence, the agent has to find a proper combination of learning and acting phases. A satisfactory plan can be viewed as a decision tree where each edge is associated with a pair of an observable state and an action to be performed in that state. This is a general representation of conditional plans. An efficient plan will therefore correspond to a decision tree of polynomial depth. Notice, however, that the size of an efficient plan may still be exponential." }, { "figure_ref": [], "heading": "A Computational Study", "publication_ref": [], "table_ref": [], "text": "In the previous section we defined a basic framework of Planning while Learning, and what a satisfactory plan for an agent P is. In this section and in the following ones we would like to consider the complexity of finding such a plan or of checking that a given plan is satisfactory. In order to discuss the issue of complexity we need to discuss our measures of complexity, and the type of representations of Planning while Learning systems we would like to look at." }, { "figure_ref": [], "heading": "Basic Representations", "publication_ref": [ "b12" ], "table_ref": [], "text": "We will distinguish between three basic Planning while Learning system representations:
1.
General Representations: Both the number of the agent's observable states (i.e., |Q|) and the number of possible transition functions may be exponential in the size of the actual representation.
2. Quasi-Moderate Representations: The number of the agent's observable states might be exponential in the size of the system representation, but the number of possible transition functions is at most polynomial in that size. This is a most appealing type of representation for systems with bounded uncertainty (Halpern & Vardi, 1991). The trauma-care system mentioned in Section 2 is an example of a system with a quasi-moderate representation. In such a system we usually have a set of atomic observations (e.g., whether the blood pressure is high or low). The number of atomic observations is linear in the problem's input, but the number of possible observations (i.e., observable states, which are tuples of atomic observations) is exponential. The list of possible injuries that the patient might have is usually polynomial in the problem's input. Hence, we get a quasi-moderate representation of a Planning while Learning system.
3. Moderate Representations: Both the number of the agent's observable states and the number of possible transition functions are polynomial in the representation size. This type of representation is less general than a quasi-moderate representation, but it is still expressive and completely non-trivial, as we will later discuss. The transportation domain example of the previous section is moderately represented by a graph-like structure, in cases where there are at most polynomially many alternatives for the actual structure of that graph (e.g., in a particular application, there are a constant number of possibilities for a logarithmic number of routes).
In the remainder of this paper we use the term moderate system (resp. quasi-moderate system) to refer to a moderate representation (resp. quasi-moderate representation) of a Planning while Learning system." }, { "figure_ref": [], "heading": "Basic Computational Categories", "publication_ref": [], "table_ref": [], "text": "Given a Planning while Learning system, there are three main computational categories that we consider.
1. Intractable: Checking whether a plan for the agent is satisfactory is computationally hard, and may take exponential time. In Planning while Learning systems that fall into this category, even the problem of representing the plan and verifying that this supplied plan is satisfactory is computationally intractable (i.e., either the space needed for representation is exponential or the verification process takes exponential time)." }, { "figure_ref": [], "heading": "Off-Line Tractable:", "publication_ref": [ "b13", "b18", "b29" ], "table_ref": [], "text": "A satisfactory plan for the agent P has a short representation (i.e., polynomial in the representation of the problem), and checking whether a plan is satisfactory can be carried out efficiently, in polynomial time. Systems that fall into this category lend themselves to a trial-and-error process, in which an intelligent designer (i.e., a human) suggests some plan for solving a problem. The suggested plan is represented and verified. If it fails the verification process, then the exact failure is reported, and the designer may try to generate a new plan. This trial-and-error process is a typical solution for design problems. A designer is given a specific problem, and may use her experience in suggesting a plan. The plan should be represented and verified efficiently.
If the plan is not satisfactory, the efficient verification process locates the failures and informs the designer, who may choose to generate a new plan, and so on. This approach was first made explicit in the AI literature by the seminal paper of McCarthy and Hayes (McCarthy & Hayes, 1969), where it is referred to as the Missouri program. This approach is indeed the one used in many practical situations, such as the ones mentioned in the previous section. A more detailed demonstration of that idea and further discussion can be found in (Moses & Tennenholtz, 1993; Shoham & Tennenholtz, 1994). Hence, in systems that fall into this category, various plans can be tried in an off-line design process, supported by a computerized, efficient verification procedure, which hopefully results in a satisfactory plan.
3. On-Line Tractable: A satisfactory plan for the agent P has a polynomial representation that is not only efficiently verifiable, but can actually be computed (algorithmically) in polynomial time." }, { "figure_ref": [], "heading": "Basic Results", "publication_ref": [ "b32", "b17", "b31" ], "table_ref": [], "text": "We would like to classify Planning while Learning systems based on the above categories. The following results are simple corollaries of results proved by Moses and Tennenholtz in another context (Tennenholtz & Moses, 1989; Moses & Tennenholtz, 1991; Tennenholtz, 1991), and their proof is omitted from the body of this paper.
1. Given a general representation of a Planning while Learning system, finding a satisfactory plan is PSPACE-hard. The problem remains PSPACE-hard even if we consider only efficient plans. The size of the related plan may be exponential. 2. If we do not restrict ourselves to efficient plans, then finding a satisfactory plan in quasi-moderate systems is PSPACE-hard. The size of the related plan may be exponential. 3. Finding an efficient satisfactory plan in quasi-moderate Planning while Learning systems with only one possible transition function (i.e., planning with complete information) is NP-hard. In this case, it is enough to consider plans of polynomial size.
The above results give several restrictions as to what we will be able to obtain in our study: we cannot hope that finding a satisfactory plan, either efficient or inefficient, will be (even off-line) tractable given arbitrary representations of Planning while Learning systems. In addition, we cannot hope that Planning while Learning in quasi-moderate representations will be on-line tractable. We are left, however, with several basic questions:
1. Given a quasi-moderate representation of a Planning while Learning system, is the problem of finding an efficient satisfactory plan off-line tractable? 2. Given a moderate representation of a Planning while Learning system, is the problem of finding an either efficient or arbitrary satisfactory plan tractable (either off-line or on-line)?
We will treat moderate representations first. Our results regarding quasi-moderate representations will be a simple modification of a result regarding moderate representations. We would now like to show why the problem of Planning while Learning, even in moderate representations, is non-trivial.
Consider an agent P who does not have complete information on the environment behavior, i.e., there may be more than one possible behavior of the environment. P's plan, instead of being a sequence of actions, becomes a decision tree. P's action, in this case, is a function not only of the observable state, but also of the past history of P (a minimal sketch of such a history-dependent plan, and of checking it against every candidate behavior, is given below).
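Because a plan is a function from the agent's history to an action, checking that a given conditional plan is satisfactory reduces to simulating it once per candidate behavior and requiring every run to reach the goal; this is the kind of efficient verification the off-line tractability results below rely on. A minimal sketch (the plan is a callable on the history, max_steps plays the role of the path-length bound of Section 4, and all names are illustrative):

```python
# Sketch: verify a conditional plan by simulating it against each candidate
# behavior (a dict (state, action) -> state) and checking the goal is reached.

def satisfactory(plan, candidates, q0, goal, max_steps):
    for tau in candidates:            # one run per possible behavior
        q, history = q0, [q0]
        for _ in range(max_steps):
            if q in goal:
                break
            a = plan(history)         # the action may depend on the history
            q = tau[(q, a)]
            history += [a, q]
        if q not in goal:
            return False
    return True

# The introduction's probe-and-undo plan: take c, observe, undo with d, act.
E1 = {("q0", "c"): "saw1", ("saw1", "d"): "q0", ("q0", "a"): "goal"}
E2 = {("q0", "c"): "saw2", ("saw2", "d"): "q0", ("q0", "b"): "goal"}

def probe_plan(history):
    if len(history) == 1:
        return "c"                    # probe
    if len(history) == 3:
        return "d"                    # undo
    return "a" if "saw1" in history else "b"

print(satisfactory(probe_plan, [E1, E2], "q0", {"goal"}, 3))  # True
```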
In the example mentioned in the introduction, where P has to first take the action c to distinguish behavior E_1 from behavior E_2, P's plan has a different branch for the case in which the environment behaves according to E_1 and for the case in which it behaves according to E_2.
Note that introducing P's memory as a parameter in its plan (this is essentially the difference between a sequence and a decision tree as P's plan) may cause an exponential blow-up in the size of that plan, and may make intractable the task of devising or verifying a plan, even in moderate representations. This holds even when we consider efficient plans! Hence, Planning while Learning even in moderate systems is completely non-trivial." }, { "figure_ref": [], "heading": "Off-Line Tractability", "publication_ref": [ "b4", "b12" ], "table_ref": [], "text": "In this section we show that given a moderate (resp. quasi-moderate) representation of a Planning while Learning system, whenever there is a satisfactory plan (resp. an efficient satisfactory plan) for an agent, there is a satisfactory plan (resp. an efficient satisfactory plan) that can be represented in polynomial space, and can be checked in polynomial time. As we mentioned, this is a non-trivial fact even for moderate systems. We prove the result for moderate systems, and then show why it is applicable to the richer context of quasi-moderate systems.
The proof of our off-line tractability result will follow from the following lemmas.
Lemma 4.1: Let P be a satisfactory plan for achieving a goal g in a moderate Planning while Learning system S with s possible behaviors (i.e., transition functions) and t observable states. Then, there exists a satisfactory plan P' for achieving g in that system, where the longest path of P' is bounded by s · t.
Proof: If the agent P performs, along any path of P, more than t actions without learning anything (i.e., along t actions the agent does not get any new information about the actual behavior), then it must visit a particular observable state twice without getting any new information about the actual behavior, and therefore we can shrink P by dropping the actions which took place between these visits. We can perform this process until there is no sequence of t actions in which no learning occurs. The learning of the agent is monotonic: whenever it has learned something about the environment behavior, future information can only make this knowledge more concrete. Since the number of possible behaviors is s, we get that a knowledge increase can occur at most s times.
Combining the above observations leads to the desired result.
Lemma 4.2: Let P be a satisfactory plan for a moderate Planning while Learning system S with s possible behaviors, where the longest path in P is of length t. Then, there exists a representation of P (of size polynomial in s and t) such that verifying that P is satisfactory can be carried out in time polynomial in s and t.
Proof: The concise representation P' of P consists of a table, where each entry of the table corresponds to a distinct observable history of an interaction of the agent P with the environment, and contains an action to be taken by P for that specific (partial) scenario.
The number of distinct entries in the table (P') can be limited to include only the plausible distinct histories (i.e., the histories which can be generated) for the system S and the plan P.
The number of such distinct histories is bounded by s (the number of possible behaviors of the environment) times t (the number of different stages in a specific interaction).
In order to verify that a plan P' which is represented in that manner is satisfactory, one needs to go over all possible behaviors and, for each one of them, check that P' leads to P's goal.
As an immediate corollary we obtain the following:
Theorem 4.3: Finding a satisfactory plan for any moderate Planning while Learning system is off-line tractable.
Consider an agent who wishes to reach its destination in the hostile environment of Section 2. In principle, there might be exponentially many histories of observations the agent may encounter. Nevertheless, our result says that it is enough to consider only polynomially many of them in order to specify the appropriate plan. This is most helpful for the designer; she will be able to represent her suggested solution in a relatively concise way. If the suggested solution is not satisfactory, this fact will be efficiently detected, and perhaps can be repaired. The problem of navigation in a hostile environment we mentioned above is actually solved this way.
Notice that Lemma 4.1 is quite satisfactory for moderate systems. However, in quasi-moderate systems this lemma is not useful, since in that case t might be exponential in the actual representation size. However, the properties obtained by Lemma 4.1 can be regained by considering efficient plans. For most practical purposes, we do not lose generality by restricting our attention to efficient plans, since a planner will not be able to execute exponentially many actions in the course of a plan. Given that Lemma 4.2 does hold for quasi-moderate representations, we get: Theorem 4.4: Given a quasi-moderate Planning while Learning system, finding an efficient satisfactory plan is off-line tractable.
This result is quite satisfactory, since quasi-moderate systems are a rich context. For example, some architectures such as the ones discussed by Brooks and his colleagues (Brooks, 1986) can be treated as quasi-moderate systems. They include a polynomial number of sensors, which correspond to an exponential number of possible observations, and are tested against a list of possible environment behaviors (i.e., the appropriate sensor-effector mechanism is checked for a list of environment behaviors). As we mentioned before, quasi-moderate systems correspond to complex systems where the number of possible worlds describing the environment is efficiently enumerable. These constitute a rich and appealing family of systems (Halpern & Vardi, 1991). Our results show, for example, that the trauma-care system discussed in the previous section can be built as an expert system that devises the next action to be performed based on the history of observations by the doctor. The problem of coming up with the plan may not be trivial, but our results show that a concise representation of a plan which is efficiently verifiable does exist whenever an efficient satisfactory plan exists. Therefore, the effort of generating the appropriate plan off-line is worthwhile." }, { "figure_ref": [], "heading": "On-Line Intractability", "publication_ref": [], "table_ref": [], "text": "In this section we show that it is not likely that there is a general algorithm to come up with a satisfactory plan for any moderate Planning while Learning system, since just deciding whether such a plan exists is NP-hard. We prove the result for the basic framework of Section 2.
A similar result holds regarding efficient satisfactory plans. This will imply similar results for the case of efficient satisfactory plans in quasi-moderate Planning while Learning systems, and for the extended frameworks discussed in the following section.
This result, together with the results obtained in the previous section, completes the classification of Planning while Learning discussed in Section 3.
Theorem 5.1: Given a moderate Planning while Learning system, deciding whether there exists an (arbitrary or efficient) satisfactory plan for the agent P is an NP-hard problem.
Proof: Given any 3-SAT formula φ over variables v_1, ..., v_n and consisting of clauses c_1, ..., c_t, we construct, in polynomial time, a moderate Planning while Learning system S_φ, such that there exists a satisfactory plan for P in S_φ if and only if there exists an assignment to v_1, ..., v_n that satisfies φ. Since satisfiability of a 3-SAT formula is NP-hard, this implies that deciding whether there exists a satisfactory plan, even for moderate systems, is an NP-hard problem. Our reduction will hold for the case of efficient satisfactory plans as well.
The set of observable states Q in the system S_φ is {b, q_1, ..., q_{n+t+1}}. The possible behaviors of the environment are {E_{1,0}, ..., E_{n,0}, E_{1,1}, ..., E_{n,1}} (there are 2n possible behaviors). The initial state is q_1. The set of possible actions for P is {0, 1, a_1, ..., a_7}. P's goal is to reach the state q_{n+t+1}. The state b is a black hole: any action that P takes from b results back at b (which is an unsuccessful state).
From any state q_i, i ∈ {1, ..., n}, in both of the following cases, where P takes the action 1 and the environment behaves according to E_{i,0}, and where P takes the action 0 and the environment follows the behavior E_{i,1}, the resulting state is q_{n+t+1} (P's goal). For all other behaviors, if P takes the action 0 or 1 from state q_i, the resulting state is q_{i+1}. (Taking any of the actions a_1, ..., a_7 leads to the state b.)
For any clause c_j, with each assignment (to the variables mentioned in c_j) that satisfies c_j, we associate one of the actions a_1, ..., a_7 (a clause with 3 variables has 7 satisfying assignments to its variables). If the observable state is q_{n+j}, and P takes the action a_k which is associated with an assignment that assigns 0 (1) to variable v_l, and the environment behaves according to E_{l,1} (E_{l,0}), the resulting state is b (hence P's goal is not achievable anymore); taking the action a_k from the state q_{n+j}, under other possible behaviors, leads to state q_{n+j+1}. We now show that if φ is satisfiable then there exists a satisfactory plan for P in S_φ. Let S: {1, ..., n} → {0, 1} be an assignment to the variables v_1, ..., v_n that satisfies φ. We construct a plan P_S for P as follows: in the i-th step, the agent takes the action 0 or 1 depending on the value of S(i). Then, in step n + j, the agent takes the action a_k that corresponds to the restriction of S to the variables that appear in c_j. It is easy to see that P_S leads to success regardless of the actual environment behavior.
On the other hand, given a satisfactory plan P for P, we show that there exists an assignment S_P that satisfies φ. S_P is constructed according to the first n steps of P (for the behaviors that did not reach success yet), which must be either 0 or 1.
S_P satisfies φ; otherwise there would be a clause c_j such that any assignment that satisfies c_j contradicts the value S_P assigns to one of the variables v_l, which would cause failure, on the (n + j)-th step, for either behavior E_{l,0} or E_{l,1}." }, { "figure_ref": [], "heading": "Extending the Framework", "publication_ref": [], "table_ref": [], "text": "The previous sections introduced and investigated a general framework of Planning while Learning. A major feature of the model discussed in the previous sections is that the agent does not affect the environment behavior. This is quite natural in many applications. In many cases we may wish to consider a particular set of possible worlds (i.e., behaviors, transition functions), and there is no reason to assume they may change, given that a possible world specifies a full transition function. An interesting extension results from relaxing this feature. For example, in the transportation domain described in Section 2, one may wish to consider a case where moving along a particular route prevents future movements along other routes. This is due to the fact that movements along some routes may reveal the agent's existence to an enemy and will prevent the agent's movement along some routes that are under the enemy's control. Another interesting extension we would like to consider is the case of a multi-agent system instead of a single-agent one.
Both of the above extensions are strict generalizations of our basic framework. Therefore, our on-line intractability results hold in the extended frameworks as well. However, questions regarding off-line tractability should be carefully considered. We will define these extended frameworks and investigate the off-line tractability of the related problems.
6.1 Dynamic Behaviors
Definition 6.1: An extended Planning while Learning system S_e = (Q, A, q_0, B, b_0, τ_e) consists of a set of observable states Q, a set of possible actions A, an initial agent's state q_0 ∈ Q, a set of environment behaviors B, an initial environment behavior b_0 ∈ B, and a global transition function τ_e : Q × B × A → Q × B that determines, for each state q ∈ Q, behavior b ∈ B, and action a ∈ A, the next state and behavior (q', b') = τ_e(q, b, a).
Notice that in extended Planning while Learning systems, the global transition function may change the behavior (i.e., the actual transition function) of the environment.
The definition of a goal and of a satisfactory plan will remain as in the basic framework. More specifically, we assume that the agent does not initially know the identity of b_0, but wishes to devise a plan that will succeed regardless of the identity of b_0. The agent, however, knows τ_e. These assumptions capture Planning while Learning in the extended framework. A moderate (resp. quasi-moderate) extended Planning while Learning system is a Planning while Learning system in which the number of elements in B is polynomial, and the number of elements in Q is polynomial (resp. exponential), in the size of the actual representation. The meaning of these definitions is as in the basic Planning while Learning framework.
Unfortunately, Lemma 4.1 does not hold even for moderate extended Planning while Learning systems. However, as we mentioned in Section 4, the properties obtained by Lemma 4.1 can be regained by considering efficient plans.
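Definition 6.1 above lets the global transition function change the environment's behavior itself. A toy sketch in the spirit of the transportation example, where taking an exposed route reveals the agent and thereby blocks another route (the states, routes, and behavior names are invented for illustration):

```python
# Sketch of Definition 6.1: tau_e maps (state, behavior, action) to a new
# state AND a possibly changed behavior. Taking route r1 reveals the agent,
# after which route r2 from v1 is enemy-controlled and leads nowhere useful.

def tau_e(q, b, a):
    if b == "hidden" and q == "v0" and a == "r1":
        return ("v1", "revealed")       # the move reveals the agent
    if b == "hidden" and q == "v0" and a == "r2":
        return ("v2", "hidden")
    if b == "revealed" and q == "v1" and a == "r2":
        return ("blocked", "revealed")  # this route is now enemy-controlled
    return (q, b)                       # any other move changes nothing here

q, b = tau_e("v0", "hidden", "r1")
print(q, b)                  # v1 revealed
print(tau_e(q, b, "r2"))     # ('blocked', 'revealed')
```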
For most practical purposes, we do not lose generality by restricting our attention to efficient plans, since a planner will not be able to execute exponentially many actions in the course of a plan. As we mentioned before, a blow-up in the size of satisfactory plans may still be possible, even if we restrict ourselves to efficient plans only. We make no assumptions about the size of the related decision tree.
Fortunately, Lemma 4.2 does hold for extended Planning while Learning systems. The proof of this lemma for the extended framework is similar to its proof in the basic framework. Combining the above we get: Theorem 6.1: Given a quasi-moderate extended Planning while Learning system, finding an efficient satisfactory plan is off-line tractable." }, { "figure_ref": [], "heading": "Multi-Agent Systems", "publication_ref": [ "b28", "b20", "b14", "b19", "b7", "b5", "b18", "b29", "b23", "b3", "b32", "b22", "b6", "b32", "b30", "b25" ], "table_ref": [], "text": "Another interesting extension is concerned with the case where there is more than one agent in the system. For ease of exposition, we will assume that there are two agents that generate actions. An interesting feature of the multi-agent case is that an agent might not be familiar with the goal and the initial state of the other agent. Hence, Planning while Learning now refers to the case in which an agent tries to achieve its goal while learning about the behavior of the environment, and about the goals and initial states of other agents.
Definition 6.2: A multi-agent Planning while Learning system is a tuple S_m = (Q_1, Q_2, A, q_0^1, q_0^2, B, b_0, τ_m), where Q_i is a set of observable states for agent i, A is a set of possible actions, q_0^i ∈ Q_i is the initial state of agent i, B is a set of environment behaviors, b_0 ∈ B is an initial environment behavior, and τ_m : Q_1 × Q_2 × B × A^2 → Q_1 × Q_2 × B is a global transition function that determines, for each pair of states q_1 ∈ Q_1 and q_2 ∈ Q_2, behavior b ∈ B, and joint action of the agents (a_1, a_2) ∈ A^2, the next observable states of the agents and the next environment behavior: (q_1', q_2', b') = τ_m(q_1, q_2, b, a_1, a_2).
Each agent has its own goal, and its plan is a decision tree that refers only to that agent's observable states. The definitions of moderate and quasi-moderate representations are straightforward generalizations of their definitions for extended Planning while Learning systems. In addition, we assume that each agent can start in one of polynomially many initial observable states, and may have one of polynomially many goals it might be required to achieve. Nevertheless, each agent may not know what the exact initial state of the other agent is, or what the exact goal of the other agent is. We are interested in satisfactory multi-agent plans. Formally, we have: Definition 6.3: Given a multi-agent Planning while Learning system, a multi-agent plan is a pair of sets of plans, one set for each agent. Let Goal_i denote the set of plans for agent i. A multi-agent plan is satisfactory if for each agent i and for each possible goal g of agent i, there is a plan in Goal_i that achieves g starting from any possible initial state, regardless of the plan (in the corresponding Goal_j) and initial state of the other agent, and regardless of the initial behavior of the environment.
An efficient satisfactory multi-agent plan is a satisfactory multi-agent plan that consists of plans which are decision trees of polynomial depth.
The above definition captures intuitive situations of Planning while Learning in multi-agent domains. Assume, for example, that there are two forces that have to move in the hostile environment of Section 2. They start moving at 5AM, and need to reach their destinations by 9PM. Nevertheless, they cannot be sure about the exact initial location of each other or about each other's destination. What the commander attempts to do in that case is to devise a master-plan that should be good for all goals, initial locations, and environment behaviors. This master-plan is the satisfactory multi-agent plan we look for. Notice that movements of one agent may affect the behavior of the system and the results of other agents' movements. It is easy to see that similar scenarios occur in the trauma-care example and in many other natural systems.
We now show that our off-line tractability result can be extended to the multi-agent case as well. We will use the following two lemmas. Lemma 6.2: Given a quasi-moderate multi-agent Planning while Learning system where each agent has only one goal, if an efficient satisfactory multi-agent plan for achieving these goals exists, then there exists such an efficient satisfactory multi-agent plan that can be encoded in polynomial space and verified in polynomial time.
Proof: In this case each agent knows the goal of the other agent, and hence it is clear that it might learn only facts about the possible initial states and behaviors.
Given that there is only a polynomial number of possible initial states and environment behaviors, and given the polynomial bound on the depth of the plans, there are only polynomially many sequences of observations (each of polynomial length) of each agent that are of interest (as in Lemma 4.2). Hence, we can encode, in polynomial space, a decision table for each agent mentioning only these sequences, and check, in polynomial time, whether it determines a satisfactory multi-agent plan. Lemma 6.3: Given a quasi-moderate multi-agent Planning while Learning system S, where each agent has n possible goals (where n is polynomially bounded in the actual representation size), there exists a quasi-moderate multi-agent Planning while Learning system S' (where quasi-moderate refers to the actual representation size of the original system S), with a unique goal for each agent, such that there exists an (efficient) satisfactory multi-agent plan in S' if and only if there exists an (efficient) satisfactory multi-agent plan in S. Proof: S' is built as follows. The observable states of agent i in S' will be the Cartesian product of the observable states of agent i in S with the set of states {start_i, observe_{i1}, ..., observe_{in}, goal_i}. The initial state of agent i in S' will be taken to be the pair consisting of its initial state in S and start_i, and its goal is taken to be the set of states in which goal_i is a component. The environment in S' will be a Cartesian product of the behaviors in B with two sets G_1 and G_2, where G_i has n distinct elements: {g'_{i1}, ..., g'_{in}}. Agent i will have a distinguished action, called observe_goal_i, which it must execute in its initial state.
The state transition function will be as in S, but when agent i performs observe_goal_i, its \"new component\" in the Cartesian product (and only it) will change; the change will be to observe_{ij} if and only if the projection of the initial behavior on G_i is g'_{ij}. In addition, assuming that {g_{i1}, ..., g_{in}} are the possible goals for agent i in S, the transition function in S' will change the new component of the observable state to goal_i if and only if the new component of the environment is in state g'_{ij} and a state satisfying g_{ij} has been reached.
The above transformation from S to S' makes the identity of an agent's goal a component of the initially unknown behavior. However, agent i, and no other agent, will observe its goal after its first action. It is easy to see that the above transformation keeps the system quasi-moderate, and that there exists a satisfactory multi-agent plan in S if and only if there exists such a plan in S', where in S' each agent has only one possible goal.
Our approach crystallizes the notion of an agent interacting with an environment and having to come up with a conditional plan, which would lead to the agent's goal in every possible behavior of the environment (possible world). We are mainly concerned with general computational aspects of Planning while Learning, and classify Planning while Learning based on several computational categories and representation types.
Another suggested approach for planning in uncertain environments, whose applicability aroused quite a heated discussion recently, is referred to as universal plans (Schoppers, 1987). A universal plan is one in which the reaction of the agent to every possible event of the environment is specified explicitly. Our results isolate general classes of systems in which the agent's actions can be specified explicitly in an efficient manner, in order to enable automatic verification. Furthermore, systems that do not fall into the above classes may be intractable even if the agent has complete information on the environment behavior.
Other somewhat related work is concerned with planning routes where the geography is unknown (Papadimitriou & Yannakakis, 1989; McDermott & Davis, 1984). For example, one may be interested in finding a route leading from one city to another without access to an appropriate map. This work may be viewed as a special case of the general framework of Planning while Learning. Work on the design of physical part orienters (belts, panhandlers) that accept an object in one of several possible orientations and output it in a predetermined orientation (Natarajan, 1986) may also be viewed as a special case of our framework.
Our work is concerned with the off-line and on-line tractability of Planning while Learning. This relates it to work concerned with the tractability of different types of planning (Erol, Nau, & Subrahmanian, 1992; Bylander, 1992). That work mainly concentrated on the on-line tractability of single-agent planning with complete information. Our work concentrates on general computational aspects of planning with incomplete information, also considers multi-agent situations, and discusses both on-line and off-line tractability.
Recall that off-line design and tractability, although considered an attractive option (McCarthy & Hayes, 1969), have been almost neglected in recent years (but see (Moses & Tennenholtz, 1993; Shoham & Tennenholtz, 1994)).
Research on inference of finite automata (Rivest & Schapire, 1987, 1989) assumes an agent that tries to infer the structure of an automaton. The agent is given limited access to the automaton, and is expected to gain enough information to deduce the complete structure of the automaton. By contrast, in the framework discussed in this paper, the agent needs only gain information that would help in reaching the given goal. Therefore, in what is probably a most natural case, the automaton is fairly complicated, and thus learning its complete structure is computationally infeasible. However, being only interested in a specific goal, one may be able to obtain the necessary information and succeed in that goal. In addition, work on computational learning assumes that the given automaton is fully connected, to enable reaching any state of the automaton and to eliminate the need to avoid states from which other states are not reachable. This assumption (that the automaton is fully connected) may very well be false in many real-life applications.
The part of our work which discusses multi-agent plans is related to issues in distributed AI (Bond & Gasser, 1988) and to the complexity of multi-agent planning (Tennenholtz & Moses, 1989); we investigate the computational difficulty that arises due to uncertainty concerning the activities of additional agent(s).
As far as related representations are concerned, the model we present is different from classical representations in the spirit of STRIPS. It is a classical Discrete Event Systems model (Ramadge & Wonham, 1989). The general connection between planning and control theory has been discussed in previous work (Dean & Wellman, 1991). In addition, Tennenholtz and Moses show a reduction from control-theoretic models such as the ones we discuss to the more classical STRIPS-like representations (Tennenholtz & Moses, 1989). They show how a typical STRIPS-like representation can be reduced to a quasi-moderate representation. However, the control-theoretic representations we considered are conceptually different from classical planning models, due to the fact that they model explicitly the possible observations of agents and the effects of actions given different environment behaviors, rather than representing general facts about an environment. The local (or mental) state of an agent, which is the general agent's state discussed in the AI literature (Shoham, 1990), is not represented explicitly in our representation and is built implicitly based on the agent's actions and observations. Hence, the most closely related model of knowledge representation in AI is situated automata (Rosenschein, 1985). Notice that, in general, the number of local states an agent might reach is exponential in the number of its observable states." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [ "b0" ], "table_ref": [], "text": "A useful planning system needs to have three essential properties. First, it should supply a mechanism for the generation of plans. Second, it should supply a concise way for representing plans.
Third, it should supply an efficient mechanism for the verification of plans or for testing candidate plans.
In this paper we concentrate on planning in uncertain territory, where the agent has only partial information on the environment behavior. We show that it is intractable to build a useful planning system even for moderate representations (i.e., representations in which the number of observable states and possible behaviors is polynomial in the actual representation size). However, our positive results show that it is possible, in moderate and quasi-moderate representations (where the number of observable states might be exponential), to satisfy the second and third properties mentioned above. Hence, off-line design becomes tractable, as discussed and demonstrated in the paper.
Notice that if we consider quasi-moderate systems and efficient plans, which is a most natural situation, our results imply that Planning while Learning is as efficient as planning with complete information. Both are off-line tractable and on-line intractable. However, in moderate systems, planning with complete information is quite trivial (this is the case of graph search (Aho, Hopcroft, & Ullman, 1974)), while in that case we show that Planning while Learning is NP-hard. More generally, we obtain a complete classification of Planning while Learning systems based on several representation types and computational categories. In addition, we discuss extensions of Planning while Learning, such as Planning while Learning in multi-agent domains.
The framework of Planning while Learning is a general framework where planning in uncertain territory can be studied. The introduction of this framework, and the related (positive and negative) results, facilitate that study." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank Dan Weld, and three anonymous reviewers, for their helpful comments." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Combining the above lemmas we get: Theorem 6.4: Given a quasi-moderate multi-agent Planning while Learning system, finding an efficient multi-agent satisfactory plan is off-line tractable.
The above proof shows that Planning while Learning is off-line tractable in multi-agent cases such as the ones described above. Given the structure of the above lemmas, it is easy to prove similar results for other contexts where there is polynomially bounded uncertainty about a multi-agent system. For example, if we would like to find a multi-agent plan where crash failures of agents might occur (in that case the faulty agent might not achieve its goal, but we require that the other agent will still be able to achieve its goal), then we can show that this problem is off-line tractable, using the above techniques." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b15", "b11", "b6", "b2", "b4", "b15", "b11", "b16", "b27", "b34", "b33", "b21", "b8", "b9" ], "table_ref": [], "text": "Early work in the area of planning was devoted to various cases of planning with complete information (see (Allen, Hendler, & Tate, 1990) for many papers on that topic).
As research in this area progressed in various directions, several independent works observed that the assumption that a planner has complete information is unrealistic for many situations; the sub-area that treats that aspect of planning is usually referred to as planning in uncertain territories.
Examples of research in this sub-area include work concerning knowledge and action (Moore, 1980; Halpern, 1988), work on conditional and reactive plans (Dean & Wellman, 1991), and work on interleaving planning and execution (Ambros-Ingerson & Steel, 1988). The reactive approach is proposed as a tool in the control of robots operating in uncertain environments, and in the design of real-life control architectures that would be able to react in a satisfactory manner given unpredicted events (Brooks, 1986). The interleaving of planning and execution may sometimes be a useful alternative to conditional planning. However, in many realistic domains there is a need to consider a whole plan, or a large portion of one, before deciding on an action. This is the case in the transportation domain and the trauma-care domain we discussed. Nevertheless, we see the interleaving of execution with Planning while Learning as a promising direction for future research.
Research in the direction of conditional plans deals with plans in which the outcome of the agent's action may affect the next action taken by the agent. Theoretical work on this issue is mainly devoted to aspects of reasoning about knowledge and action (Moore, 1980; Halpern, 1988; Morgenstern, 1987), and to the logical formulation of conditional plans (Rosenschein, 1981). Specific mechanisms to construct conditional plans in which observable events and tests are explicitly declared are discussed as well (Wellman, 1990). These, as well as the more classical work on conditional plans (Warren, 1976) and work that followed and extended it in various directions (Peot & Smith, 1992; Etzioni, Hanks, Weld, Draper, Lesh, & Williamson, 1992), have not concentrated on general computational aspects of Planning while Learning. Our work does not concentrate on specific mechanisms for the construction of conditional plans; rather, it concentrates on general computational aspects of conditional planning. Some recent work has also been concerned with computational aspects of conditional plans, but concentrated on several natural pruning rules that can be used in the construction of conditional plans (Genesereth & Nourbakhsh, 1993)." } ]
[ { "authors": "A V Aho; J V Hopcroft; J Ullman", "journal": "Addison-Wesley Publishing Company", "ref_id": "b0", "title": "The Design and Analysis of Computer Algorithms", "year": "1974" }, { "authors": "", "journal": "Morgan Kaufmann Publishers", "ref_id": "b1", "title": "Readings in Planning", "year": "1990" }, { "authors": "J Ambros-Ingerson; S Steel", "journal": "", "ref_id": "b2", "title": "Interleaving Planning, Execution and Monitoring", "year": "1988" }, { "authors": "A H Bond; L Gasser", "journal": "Ablex Publishing Corporation", "ref_id": "b3", "title": "Readings in Distributed Arti cial Intelligence", "year": "1988" }, { "authors": "R A Brooks", "journal": "IEEE Journal of Robotics and Automation", "ref_id": "b4", "title": "A Robust Layered Control System for a Mobile Robot", "year": "1986" }, { "authors": "T Bylander", "journal": "", "ref_id": "b5", "title": "Complexity Results for Serial Decomposability", "year": "1992" }, { "authors": "T L Dean; M P Wellman", "journal": "Morgan Kaufmann Publishers", "ref_id": "b6", "title": "Planning and Control", "year": "1991" }, { "authors": "K Erol; D Nau; V Subrahmanian", "journal": "", "ref_id": "b7", "title": "On the Complexity of Domain-Independent Planning", "year": "1992" }, { "authors": "O Etzioni; S Hanks; D Weld; D Draper; N Lesh; M Williamson", "journal": "", "ref_id": "b8", "title": "An Approach to Planning with Incomplete Information", "year": "1992" }, { "authors": "M Genesereth; I R Nourbakhsh", "journal": "", "ref_id": "b9", "title": "Time Saving Tips for Problem Solving with Incomplete Information", "year": "1993" }, { "authors": "J Halpern; Y Moses", "journal": "", "ref_id": "b10", "title": "Knowledge and common knowledge in a distributed environment", "year": "1984" }, { "authors": "J Y Halpern", "journal": "", "ref_id": "b11", "title": "Reasoning about knowledge: An overview", "year": "1988" }, { "authors": "J Y Halpern; M Y Vardi", "journal": "", "ref_id": "b12", "title": "Model checking vs. 
theorem proving: a manifesto", "year": "1991" }, { "authors": "J McCarthy; P Hayes", "journal": "Machine Intelligence", "ref_id": "b13", "title": "Some Philosophical Problems from the Standpoint of Artificial Intelligence", "year": "1969" }, { "authors": "D McDermott; E Davis", "journal": "Artificial Intelligence", "ref_id": "b14", "title": "Planning Routes Through Uncertain Territory", "year": "1984" }, { "authors": "R C Moore", "journal": "SRI International", "ref_id": "b15", "title": "Reasoning about Knowledge and Action", "year": "1980" }, { "authors": "L Morgenstern", "journal": "", "ref_id": "b16", "title": "Knowledge Preconditions for Actions and Plans", "year": "1987" }, { "authors": "Y Moses; M Tennenholtz", "journal": "Elsevier Science", "ref_id": "b17", "title": "Cooperation in Uncertain Territory Using a Multi-Entity Model", "year": "1991" }, { "authors": "Y Moses; M Tennenholtz", "journal": "", "ref_id": "b18", "title": "Off-Line Reasoning for On-Line Efficiency", "year": "1993" }, { "authors": "K Natarajan", "journal": "", "ref_id": "b19", "title": "An Algorithmic Approach to the Automatic Design of Parts Orienters", "year": "1986" }, { "authors": "C Papadimitriou; M Yannakakis", "journal": "", "ref_id": "b20", "title": "Shortest Paths Without a Map", "year": "1989" }, { "authors": "M A Peot; D Smith", "journal": "", "ref_id": "b21", "title": "Conditional Nonlinear Planning", "year": "1992" }, { "authors": "P Ramadge; W Wonham", "journal": "", "ref_id": "b22", "title": "The Control of Discrete Event Systems", "year": "1989" }, { "authors": "R L Rivest; R E Schapire", "journal": "", "ref_id": "b23", "title": "Diversity-Based Inference of Finite Automata", "year": "1987" }, { "authors": "R L Rivest; R E Schapire", "journal": "", "ref_id": "b24", "title": "Inference of Finite Automata Using Homing Sequences", "year": "1989" }, { "authors": "S J Rosenschein", "journal": "New Generation Computing", "ref_id": "b25", "title": "Formal Theories of Knowledge in AI and Robotics", "year": "1985" }, { "authors": "S J Rosenschein; L P Kaelbling", "journal": "", "ref_id": "b26", "title": "The synthesis of digital machines with provable epistemic properties", "year": "1986" }, { "authors": "S Rosenschein", "journal": "", "ref_id": "b27", "title": "Plan Synthesis: A Logical Perspective", "year": "1981" }, { "authors": "M Schoppers", "journal": "", "ref_id": "b28", "title": "Universal Plans for Reactive Robots in Unpredictable Environments", "year": "1987" }, { "authors": "Y Shoham; M Tennenholtz", "journal": "", "ref_id": "b29", "title": "Social Laws for Artificial Agent Societies: Off-line Design", "year": "1994" }, { "authors": "Y Shoham", "journal": "", "ref_id": "b30", "title": "Agent Oriented Programming", "year": "1990" }, { "authors": "M Tennenholtz", "journal": "", "ref_id": "b31", "title": "Efficient Representation and Reasoning in Multi-Agent Systems", "year": "1991" }, { "authors": "M Tennenholtz; Y Moses", "journal": "", "ref_id": "b32", "title": "On Cooperation in a Multi-Entity Model", "year": "1989" }, { "authors": "D H D Warren", "journal": "", "ref_id": "b33", "title": "Generating Conditional Plans and Programs", "year": "1976" }, { "authors": "M P Wellman", "journal": "", "ref_id": "b34", "title": "Formulation of Tradeoffs in Planning Under Uncertainty", "year": "1990" } ]
[ { "formula_coordinates": [ 12, 333.6, 253.2, 179.04, 17.64 ], "formula_id": "formula_0", "formula_text": "Q 1 Q 2 B A 2 ! Q 1 Q 2 B is" } ]
On Planning while Learning
This paper introduces a framework for Planning while Learning where an agent is given a goal to achieve in an environment whose behavior is only partially known to the agent. We discuss the tractability of various plan-design processes. We show that for a large natural class of Planning while Learning systems, a plan can be presented and verified in a reasonable time. However, coming up algorithmically with a plan, even for simple classes of systems, is apparently intractable. We emphasize the role of off-line plan-design processes, and show that, in most natural cases, the verification (projection) part can be carried out in an efficient algorithmic manner.
Shmuel Safra; Moshe Tennenholtz
[]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b5", "b21", "b9", "b1", "b2", "b18", "b8", "b15", "b10" ], "table_ref": [], "text": "An information extraction (IE) system analyzes unrestricted, real world text such as newswire stories. In contrast to information retrieval systems which return a pointer to the entire document, an IE system returns a structured representation of just the information from within the text that is relevant to a user's needs, ignoring irrelevant information.\nThe rst stage of an IE system, sentence analysis, identi es references to relevant objects and typically creates a case frame to represent each object. The second stage, discourse analysis, merges together multiple references to the same object, identi es logical relationships between objects, and infers information not explicitly identi ed by sentence analysis. The IE system operates in terms of domain speci cations that prede ne what types of information and relationships are considered relevant to the application. Considerable domain knowledge is used by an IE system: about domain objects, relationships between objects, and how texts typically describe these objects and relationships.\nMuch of the domain knowledge can be automatically acquired by corpus-based techniques. Previous work has centered on knowledge acquisition for some of the lower level processing such as part-of-speech tagging and lexical disambiguation. N-gram statistics have been highly successful in part-of-speech tagging (Church, 1988;DeRose, 1988). Weischedel (1993) has used corpus-based probabilities both for part-of-speech tagging and to guide parsing. Collocation data has been used for lexical disambiguation by Hindle (1989), Brent (1993), and others. Examples from a training corpus have driven both part-of-speech and semantic tagging (Cardie, 1993) and dictionary construction (Rilo , 1993). local linguistic patterns to instantiate case frames, called concept nodes (CN's) used by CIRCUS.\nEach CN de nition has a trigger word and a syntactic pattern relative to that word. Whenever the trigger word occurs in the text, CIRCUS looks in one of the syntactic bu ers for appropriate information to extract. Some CN de nitions will extract information from the subject or from the direct object, rst testing for active or passive voice. Other CN de nitions look for a prepositional phrase with a particular preposition. Examples of CN extraction patterns from a particular domain are shown in Section 2.3.\nDiscourse analysis starts with the output from the sentence analyzer, in this case a set of concept nodes representing locally extracted information. Other work on discourse has often involved tracking shifts in topic and in the speaker/writer's goals (Grosz & Sidner, 1986;Liddy et al., 1993) or in resolving anaphoric references (Hobbs, 1978). Discourse processing in an IE system may concern itself with some of these issues, but only as a means to its main objective of transforming bits and pieces of extracted information into a coherent representation.\nOne of the rst tasks of discourse analysis is to merge together multiple references to the same object. In a domain where company names are important, this will involve recognizing the equivalence of a full company name (\\International Business Machines, Inc.\") with shortened forms of that name (\\IBM\") and generic references (\\the company\", \\the U.S. computer maker\"). Some manually engineered rules seem unavoidable for coreference merging. 
Another example is merging a domain object with a less specific reference to that object. In the microelectronics domain a reference to \"DRAM\" chips may be merged with a reference to \"memory\", or an \"I-line\" process merged with \"lithography\".
Much of the work of discourse analysis is to identify logical relationships between extracted objects, represented as pointers between objects in the output. Discourse analysis must also be able to infer missing objects that are not explicitly stated in the text and in some cases split an object into multiple copies or discard an object that was erroneously extracted.
The current implementation of Wrap-Up begins discourse processing after coreference merging has been done by a separate module. This is primarily because manual engineering seems unavoidable in coreference. Work is underway to extend Wrap-Up to include all of IE discourse processing by incorporating a limited amount of domain-specific code to handle such things as company name aliases and generic references to domain objects.
Wrap-Up divides its processing into six stages, which will be described more fully in Section 3. They are:
1. Filtering out spuriously extracted information
2. Merging objects with their attributes
3. Linking logically related objects
4. Deciding when to split objects into multiple copies
5. Inferring missing objects
6. Adding default slot values
At this point an example from a specific domain might help. The following sections introduce the microelectronics domain, then illustrate sentence analysis and discourse analysis with a short example from this domain." }, { "figure_ref": [], "heading": "The Microelectronics Domain", "publication_ref": [], "table_ref": [], "text": "The microelectronics domain was one of the two domains targeted by the Fifth Message Understanding Conference (MUC-5, 1993). According to the domain and task guidelines developed for the MUC-5 microelectronics corpus, the information to be extracted concerns microchip fabrication processes along with the companies, equipment, and devices associated with these processes. There are seven types of domain objects to be identified: entities (i.e., companies), equipment, devices, and four chip fabrication processes (layering, lithography, etching, and packaging).
In this domain, identifying relationships between objects is as important as identifying the objects themselves. A company must be identified as playing at least one of four possible roles with respect to the microchip fabrication process: developer, manufacturer, distributor, or purchaser/user. Microchip fabrication processes are reported only if they are associated with a specific company in at least one of these roles. Each equipment object must be linked to a process which uses that equipment, and each device object linked to a process which fabricates that device. Equipment objects may point to a company as manufacturer and to other equipment as modules.
The following sample from the MUC-5 microelectronics domain has two companies in the first sentence, which are associated with two lithography processes from the second sentence. GCA and Sematech are developers of both the UV and I-line lithography processes, with GCA playing the additional role of manufacturer. Each lithography process is linked to the stepper equipment mentioned in sentence one.
GCA unveiled its new XLS stepper, which was developed with assistance from Sematech.
The system will be available in deep-ultraviolet and I-line configurations.
Figure 1 shows the five domain objects extracted by sentence analysis and the final representation of the text after discourse analysis has identified relationships between objects. Some of these relationships are directly indicated by pointers between objects. The roles that companies play with respect to a microchip fabrication process are indicated by creating a \"microelectronics-capability\" object with pointers to both the process and the companies." }, { "figure_ref": [], "heading": "Extraction Patterns", "publication_ref": [ "b18" ], "table_ref": [], "text": "How does sentence analysis identify GCA and Sematech as company names, and extract the other domain objects such as stepper equipment, UV lithography, and I-line lithography?
The CN dictionary for this domain includes an extraction pattern \"X unveiled\" to identify company names. The subject of the active verb \"unveiled\" in this domain is nearly always a company developing or distributing a new device or process. However, this pattern will occasionally pick up a company that fails the domain's reportability criteria. A company that unveils a new type of chip should be discarded if the text does not specify the fabrication process.
Extracting the company name \"Sematech\" is more difficult since the pattern \"assistance from X\" is not a reliable predictor of relevant company names. There is always a trade-off between accuracy and complete coverage in deciding what extraction patterns are reliable enough to include in the CN dictionary. Including less reliable patterns increases coverage but does so at the expense of spurious extraction. The more specific pattern \"developed with assistance from X\" is reliable, but was missed by the dictionary construction tool (Riloff, 1993).
Figure 1: Output of (A) sentence analysis and (B) discourse analysis
For many of the domain objects, such as equipment, devices, and microchip fabrication processes, the set of possible objects is predefined and a list of keywords that refer to these objects can be created. The extraction pattern \"unveiled X\" looks in the direct object of the active verb \"unveiled\", instantiating an equipment object if a keyword indicating an equipment type is found. In this example an equipment object with type \"stepper\" is created with the equipment name \"XLS\". The same stepper equipment is also extracted by the pattern \"X was developed\", which looks for equipment in the subject of the passive verb \"developed\". This equipment object is extracted a third time by the keyword \"stepper\" itself, which is sufficient to instantiate a stepper equipment object whether or not it occurs in a reliable extraction pattern.
The keyword \"deep-ultraviolet\" and the extraction pattern \"available in X\" are used to extract a lithography object with type \"UV\" from the second sentence. Another lithography object of type \"I-line\" is similarly extracted. Case frames are created for each of the objects identified by sentence analysis. This set of objects becomes input for the next stage of processing, discourse analysis." }, { "figure_ref": [], "heading": "Discourse Processing", "publication_ref": [], "table_ref": [], "text": "In the full text from which this fragment comes, there are likely to be other references to \"GCA\" or to \"GCA Corp.\" One of the first jobs of discourse analysis is to merge these multiple references.
It is a much harder task to merge pronominal references and generic references such as \"the company\" with the appropriate company name. This is all part of the coreference problem that is handled by processes separate from Wrap-Up.
The main job of discourse analysis is to determine the relationships between the objects passed to it by sentence analysis. Considerable domain knowledge is needed to make these discourse-level decisions. Some of this knowledge concerns writing style, and specific phrases writers typically use to imply relationships between referents in a given domain. Is the phrase \"<company> unveiled <equipment>\" sufficient evidence to infer that the company is the developer of a microelectronics process? The word \"unveiled\" alone is not enough, since a company that unveiled a new DRAM chip may not be the developer of any new process. It may simply be using someone else's microelectronics process to produce its chip. Such inferences, particularly those about what role a company plays in a process, are often so subtle that two human analysts may disagree on the output for a given text. A human performance study for this task found that experienced analysts agreed with each other on only 80% of their text interpretations in this domain (Will, 1993).
World knowledge is also needed about the relationships possible between domain objects. A lithography process may be linked to stepper equipment, but steppers are never used in layering, etching, or packaging processes. There are delicate dependencies concerning which types of process are likely to fabricate which types of devices. Knowledge about the kinds of relationships typically reported in this domain can also help guide discourse processing. Stories about lithography, for example, often give the developer, manufacturer, or distributor of the process, but these roles are hardly ever mentioned for packaging processes. Companies associated with packaging tend to be limited to the purchaser/user of the packaging technology.
A wide range of domain knowledge is needed for discourse processing, some of it related to world knowledge, some to writing style. The next section discusses the need for trainable components at all levels of IE processing, including discourse analysis. Wrap-Up uses machine learning techniques to avoid the months of manual knowledge engineering otherwise required to develop a specific IE application." }, { "figure_ref": [], "heading": "The Need for Trainable IE Components", "publication_ref": [ "b14", "b18", "b14", "b6", "b14" ], "table_ref": [], "text": "The highest performance at the ARPA-sponsored Fifth Message Understanding Conference (MUC-5, 1993) was achieved at the cost of nearly two years of intense programming effort, adding domain-specific heuristics and domain-specific linguistic patterns one by one, followed by various forms of system tuning to maximize performance. For many real-world applications, two years of development time by a team of half a dozen programmers would be prohibitively expensive. To make matters worse, the knowledge used in one domain cannot be readily transferred to other IE applications.
Researchers at the University of Massachusetts have worked to facilitate IE system development through the use of corpus-driven knowledge acquisition techniques (Lehnert et al., 1993). In 1991 a purely hand-crafted UMass system had the highest performance of any site in the MUC-3 evaluation.
The following year UMass ran both a hand-crafted system and an alternate system that replaced a key component with output from AutoSlog, a trainable dictionary construction tool (Riloff, 1993). The AutoSlog variant exhibited performance levels comparable to a dictionary based on 1500 hours of manual coding. Encouraged by the success of this one trainable component, an architecture for corpus-driven system development was proposed which uses machine learning techniques to address a number of natural language processing problems (Lehnert et al., 1993). In the MUC-5 evaluation, output from the CIRCUS sentence analyzer was sent to TTG (Trainable Template Generator), a discourse component developed by Hughes Research Laboratories (Dolan et al., 1991; Lehnert et al., 1993). TTG used machine learning techniques to acquire much of the needed domain knowledge, but still required hand-coded heuristics to turn this acquired knowledge into a fully functioning discourse analyzer.
The remainder of this paper will focus on Wrap-Up, a new IE discourse module now under development which explores the possibility of fully automated knowledge acquisition for discourse analysis. As detailed in the following sections, Wrap-Up builds ID3 decision trees to guide discourse processing and requires no hand-coded customization for a new domain once a training corpus has been provided. Wrap-Up automatically decides what ID3 trees are needed for the domain and derives the feature set for each tree from the output of the sentence analyzer." }, { "figure_ref": [], "heading": "Wrap-Up, a Trainable IE Component", "publication_ref": [], "table_ref": [], "text": "This section describes the Wrap-Up algorithm, how decision trees are used for discourse analysis, and how the trees and tree features are automatically generated. We conclude with a discussion of the requirements of Wrap-Up and our experience porting to a new domain." }, { "figure_ref": [], "heading": "Overview", "publication_ref": [], "table_ref": [], "text": "Wrap-Up is a domain-independent framework for IE discourse processing which is instantiated with automatically acquired knowledge for each new IE application. During its training phase, Wrap-Up builds ID3 decision trees based on a representative set of training texts, paired against hand-coded output keys. These ID3 trees guide Wrap-Up's processing during run time.
At run time Wrap-Up receives as input all objects extracted from the text during sentence analysis. Each of these objects is represented as a case frame along with a list of references in the text, the location of each reference, and the linguistic patterns used to extract it. Multiple references to the same object throughout the text are merged together before passing it on to Wrap-Up. Wrap-Up transforms this set of objects by discarding spurious objects, merging objects that add further attributes to an object, adding pointers between objects, and inferring the presence of any missing objects or slot values.
Wrap-Up has six stages of processing, each with its own set of decision trees designed to transform objects as they are passed from one stage to the next.
Stages in the Wrap-Up Algorithm:" }, { "figure_ref": [], "heading": "Slot Filtering", "publication_ref": [], "table_ref": [], "text": "Each object slot has its own decision tree that judges whether the slot contains reliable information. Discard the slot value from an object if a tree returns \"negative\".
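As a rough sketch of how this stage might be driven, assuming a hypothetical table mapping each (object type, slot) pair to its trained classifier — the object layout and classifier interface are assumptions for illustration, not the actual Wrap-Up data structures:

```python
from dataclasses import dataclass, field

@dataclass
class ExtractedObject:
    type: str                          # e.g. "equipment"
    slots: dict = field(default_factory=dict)

def filter_slots(objects, trees, encode):
    """Slot Filtering sketch: `trees` maps (object type, slot name) to a
    trained classifier with a .classify(features) method, and `encode`
    builds the feature dict for one slot of one object; both interfaces
    are assumed for illustration."""
    for obj in objects:
        for slot in list(obj.slots):           # copy keys: we delete while iterating
            tree = trees.get((obj.type, slot))
            if tree and tree.classify(encode(obj, slot)) == "negative":
                del obj.slots[slot]            # discard the unreliable value
    return objects
```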
}, { "figure_ref": [], "heading": "Slot Merging", "publication_ref": [], "table_ref": [], "text": "Create an instance for each pair of objects of the same type. Merge the two objects if a decision tree for that object type returns \\positive\". This stage can merge an object with separately extracted attributes for that object." }, { "figure_ref": [], "heading": "Link Creation", "publication_ref": [], "table_ref": [], "text": "Consider all possible pairs of objects that might possibly be linked. Add a pointer between objects if a Link Creation decision tree returns \\positive\"." }, { "figure_ref": [], "heading": "Object Splitting", "publication_ref": [], "table_ref": [], "text": "Suppose object A is linked to both object B and to object C. If an Object Splitting decision tree returns \\positive\", split A into two copies with one pointing to B and the other to C." }, { "figure_ref": [], "heading": "Inferring Missing Objects", "publication_ref": [], "table_ref": [], "text": "When an object has no other object pointing to it, an instance is created for a decision tree which returns the most likely parent object. Create such a parent and link it to the \\orphan\" object unless the tree returns \\none\". Then use decision trees from the Link Creation and Object Splitting stages to tie the new parent in with other objects." }, { "figure_ref": [], "heading": "Inferring Missing Slot Values", "publication_ref": [], "table_ref": [], "text": "When an object slot with a closed class of possible values is empty, create an instance for a decision tree which returns a context-sensitive default value for that slot, possibly \\none\"." }, { "figure_ref": [ "fig_1", "fig_3", "fig_3", "fig_1" ], "heading": "Decision Trees for Discourse Analysis", "publication_ref": [ "b17", "b19", "b17" ], "table_ref": [], "text": "A key to making machine learning work for a complex task such as discourse processing is to break the problem into a number of small decisions and build a separate classi er for each. Each of the six stages of Wrap-Up described in Section 3.1 has its own set of ID3 trees, with the exact number of trees depending on the domain speci cations. The Slot Filtering stage has a separate tree for each slot of each object in the domain; the Slot Merging stage has a separate tree for each object type; the Link Creation stage has a tree for each pointer de ned in the output structure; and so forth for the other stages. The MUC-5 microelectronics domain (as explained in Section 2.2) required 91 decision trees: 20 for the Slot Filtering stage, 7 for Slot Merging, 31 for Link Creation, 13 for Object Splitting, 7 for Inferring Missing Objects , and 13 for Inferring Missing Slot Values.\nAn example from the Link Creation stage is the tree that determines pointers from lithography objects to equipment objects. Every pair of lithography and equipment objects found in a text is encoded as an instance and sent to the Lithography-Equipment-Link tree. If the classi er returns \\positive\", Wrap-Up adds a pointer between these two objects in the output to indicate that the equipment was used for that lithography process.\nThe ID3 decision tree algorithm (Quinlan, 1986) was used in these experiments, although any machine learning classi er could be plugged into the Wrap-Up architecture. A vector space approach might seem appropriate, but its performance would depend on the weights assigned to each feature (Salton et al., 1975). 
It is hard to see a principled way to assign weights to the heterogeneous features used in Wrap-Up's classifiers (see Section 3.3), since some features encode attributes of the domain objects and others encode linguistic context or relative position in the text.
Let's look again at the example from Section 2.2 with the \"XLS stepper\" and see how Wrap-Up makes the discourse decision of whether to add a pointer from UV lithography to this equipment object. Wrap-Up encodes this as an instance for the Lithography-Equipment-Link decision tree with features representing attributes of both the lithography and equipment objects, their extraction patterns, and relative position in the text.
During Wrap-Up's training phase, an instance is encoded for every pair of lithography and equipment objects in a training text. Training instances must be classified as positive or negative, so Wrap-Up consults the hand-coded target output provided with the training text and classifies the instance as positive if a pointer is found between matching lithography and equipment objects. The creation of training instances will be discussed more fully in Section 3.4. ID3 tabulates how often each feature value is associated with a positive or negative training instance and encapsulates these statistics at each node of the tree it builds.
Figure 2 shows a portion of a Lithography-Equipment-Link tree, showing the path used to classify the instance for UV lithography and XLS stepper as positive. The parenthetical numbers for each tree node show the number of positive and negative training instances represented by that node. The a priori probability of a pointer from lithography to equipment in the training corpus was 34%, with 282 positive and 539 negative training instances.
ID3 uses an information gain metric to select the most effective feature to partition the training instances (pp. 89-90, Quinlan, 1986), in this case choosing equipment type as the test at the root of this tree. This feature alone is sufficient to classify instances with equipment types such as modular equipment, radiation source, or etching system, which have only negative instances. Apparently these types of equipment are never used by lithography processes (a useful bit of domain knowledge).
The branch for equipment type \"stepper\" leads to a node in the tree representing 202 positive and 174 negative training instances, raising the probability of a link to 54%. ID3 recursively selects a feature to split each partition, in this case selecting lithography type. The branch for UV lithography leads to a partition with 27 positive and 14 negative instances, in contrast to E-beam and optical lithography, which have nearly all negative instances. The next test is distance, with a value of -1 in this case since the equipment reference is one sentence earlier than lithography. This branch leads to a leaf node with 4 positive and no negative instances, so the tree returns a classification of positive and Wrap-Up adds a pointer from UV lithography to the stepper. This example shows how a decision tree can acquire useful domain knowledge: that lithography is never linked to equipment such as etching systems, and that steppers are often used for UV lithography but hardly ever for E-beam or optical lithography.
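The path just walked through can be read as a nested sequence of tests. The sketch below transcribes that single path from the counts reported above; it is a hand-written rendering of one learned path, not the ID3 induction algorithm itself, and the branches not on this path are collapsed for brevity.

```python
def lithography_equipment_link(instance):
    """One path of the learned Lithography-Equipment-Link tree, transcribed
    from the walk-through above (root prior: 282 positive / 539 negative)."""
    if instance["equipment-type"] != "stepper":
        return "negative"       # e.g. etching systems: only negative instances
    # stepper branch: 202 positive / 174 negative (54%)
    if instance["lithography-type"] != "UV":
        return "negative"       # E-beam, optical: nearly all negative
    # UV branch: 27 positive / 14 negative
    if instance["distance"] == -1:
        return "positive"       # leaf: 4 positive / 0 negative
    return "negative"           # other distances fall to other leaves

print(lithography_equipment_link(
    {"equipment-type": "stepper", "lithography-type": "UV", "distance": -1}))
```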
Knowledge of this sort could be manually engineered rather than acquired from machine learning, but the hundreds of rules needed might take weeks or months of effort to create and test.
Consider another fragment of text and the tree in Figure 3 that decides whether to add a pointer from the PLCC packaging process to the ROM chip device. The instance which is to be classified by a Packaging-Device-Link tree includes features for packaging type, device type, distance between the two referents, and the extraction patterns used by sentence analysis. ID3 selects \"distance\" as the root of the tree, a feature that counts the distance in sentences between the packaging and device references in the text. When the closest references were 20 or more sentences apart, hardly any of the training instances were positive. The distance is -1 in the example text, with the ROM device mentioned one sentence earlier than the PLCC packaging process. As Figure 3 shows, the branch for distance of -1 is followed by a test for device type. The branch for device type ROM leads to a partition with only 15 instances, 13 positive and 2 negative. Those with PLCC packaging found in the pattern \"available in X\" (encoded as pp-available-1) were positive instances.
These two trees illustrate how different trees learn different types of knowledge. The most significant features in determining whether an equipment object is linked to a lithography process are real-world constraints on what type of equipment can be used in lithography. This is reflected in the tree in Figure 2 by choosing equipment type as the root node, followed by lithography type. There is no such overriding constraint on what type of device can be linked to a packaging technique. Here linguistic clues play a more prominent role, such as the relative position of references in the text and particular extraction patterns. The following section discusses how these linguistic-based features are encoded." }, { "figure_ref": [ "fig_4" ], "heading": "Generating Features for ID3 Trees", "publication_ref": [], "table_ref": [], "text": "Let's look in more detail at how Wrap-Up encodes ID3 instances, using information available from sentence analysis to automatically derive the features used for each tree. Each ID3 tree handles a discourse decision about a domain object or the relationship between a pair of objects, with different stages of Wrap-Up involving different sorts of decisions.
The information to be encoded about an object comes from concept nodes extracted during sentence analysis. Concept nodes have a case frame with slots for extracted information, and also have the location and extraction patterns of each reference in the text. Consider again the example from Section 2.2.
GCA unveiled its new XLS stepper, which was developed with assistance from Sematech. The system will be available in deep-ultraviolet and I-line configurations.
Sentence analysis extracts five objects from this text: the company GCA, the equipment XLS stepper, the company Sematech, UV lithography, and I-line lithography. One of several discourse decisions to be made is whether the UV lithography uses the XLS stepper mentioned in the previous sentence. Figure 4 shows the two objects that form the basis of an instance for the Lithography-Equipment-Link tree. Each object includes the location of each reference and the patterns used to extract them. An extraction pattern is a combination of a syntactic pattern and a specific lexical item or \"trigger word\" (as explained in Section 2.1).
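Such a pattern can be pictured as a record pairing a syntactic position with its trigger words; the sketch below is one plausible encoding, and the record layout is an assumption for illustration rather than the actual CIRCUS data structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExtractionPattern:
    """A syntactic pattern anchored on trigger words, e.g. the direct object
    of active "unveiled", or a prepositional phrase on "available ... in"."""
    position: str        # "subj", "obj", or "pp"
    triggers: tuple      # one or two trigger words

    def feature_names(self, which):
        """Binary feature names for object 1 or 2 of a paired instance."""
        return [f"{self.position}-{trigger}-{which}" for trigger in self.triggers]

pat = ExtractionPattern(position="pp", triggers=("available", "in"))
print(pat.feature_names(1))   # ['pp-available-1', 'pp-in-1']
```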
The pattern pp-available-in means that a reference to UV lithography was found in a prepositional phrase following the triggers \"available\" and \"in\".
Figure 5 shows the instance for UV lithography and XLS stepper. It encodes the attributes and extraction patterns of each object and their relative position in the text. Wrap-Up encodes each case frame slot of each object using the actual slot value for closed classes such as lithography type. Open class slots such as equipment names are encoded with the value \"t\" to indicate that a name was present, rather than the actual name. Using the exact name would result in an enormous branching factor for this feature and might overly influence the ID3 classification if a low frequency name happened to occur only in positive or only in negative instances.
Extraction patterns are encoded as binary features that include the trigger word and syntactic pattern in the feature name. Patterns with two trigger words such as \"pp-available-in\" are split into two features, \"pp-available\" and \"pp-in\". For instances that encode a pair of objects these features will be encoded as \"pp-available-1\" and \"pp-in-1\" if they refer to the first object. The count of how many such extraction patterns were used is also encoded for each object.
(lithography-type . UV) (extraction-count-1 . 3) (pp-available-1 . t) (pp-in-1 . t) (keyword-deep-ultraviolet-1 . t) (equipment-type . stepper) (equipment-name . t) (extraction-count-2 . 3) (obj-unveiled-2 . t) (subj-passive-developed-2 . t) (keyword-stepper-2 . t) (common-triggers . 0) (common-phrases . 0) (distance . -1)
Figure 5: An instance for the Lithography-Equipment-Link tree.
The feature \"extraction-count\" was motivated by the Slot Filtering stage since objects extracted several times are more likely to be valid than those extracted only once or twice from the text.
Another type of feature, encoded for instances involving pairs of objects, is the relative position of references to the two objects, which may be significant in determining if two objects are related. One feature easily computed is the distance in sentences between references. In this case the feature \"distance\" has a value of -1, since the XLS stepper is found one sentence earlier than the UV lithography process. Another feature that might indicate a strong relationship between objects is the count of how many common phrases contain references to both objects. Other features list \"common triggers\", words included in the extraction patterns for both objects. An example of this would be the word \"using\" if the text had the phrase \"the XLS stepper using UV technology\".
It is important to realize what is not included in this instance. A human making this discourse decision might reason as follows. The sentence with UV lithography indicates that it is associated with \"the system\", which refers back to \"its new XLS stepper\" in the previous sentence. Part of this reasoning involves domain-independent use of a definite article, and part requires domain knowledge that \"system\" can be a nonspecific reference to an equipment object. The current version of Wrap-Up does not look beyond information passed to it by sentence analysis and misses the reference to \"the system\" entirely.
Using specific linguistic patterns resulted in extremely large, sparse feature sets for most trees. The Lithography-Equipment-Link tree had 1045 features, all but 11 of them encoding extraction patterns.
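Because nearly all of those features are absent from any one instance, the natural in-memory form is sparse. A minimal sketch using the Figure 5 values follows; the dictionary layout is an assumed encoding, not the system's actual one.

```python
# Sparse encoding of the Figure 5 instance: only non-default features are
# stored; the thousand-odd absent pattern features are simply omitted.
instance = {
    "lithography-type": "UV", "extraction-count-1": 3,
    "pp-available-1": True, "pp-in-1": True,
    "keyword-deep-ultraviolet-1": True,
    "equipment-type": "stepper", "equipment-name": True,
    "extraction-count-2": 3, "obj-unveiled-2": True,
    "subj-passive-developed-2": True, "keyword-stepper-2": True,
    "common-triggers": 0, "common-phrases": 0, "distance": -1,
}

def feature_value(instance, feature):
    """Absent binary features default to False instead of being stored."""
    return instance.get(feature, False)
```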
Since a typical instance participates in at most a dozen extraction patterns, a serious time and space bottleneck would occur if the hundreds of linguistic patterns that are not present were explicitly listed for each instance. We implemented a sparse vector version of ID3 that was able to efficiently handle large feature spaces by tabulating only the small number of true-valued features for each instance.
As links are added during discourse processing, objects may become complex, including many pointers to other objects. By the time Wrap-Up considers links between companies and microelectronics processes, a lithography object may have a pointer to an equipment object or to a device object, and the equipment object may in turn have pointers to other objects. Wrap-Up allows objects to inherit the linguistic context and position in the text of objects to which they point. When object A has a pointer to object B, the location and extraction patterns of references to B are treated as if they were references to A. This version of inheritance is helpful, but a little too strong, ignoring the distinction between direct references and inherited references.
We have looked at the encoding of instances for isolated discourse decisions in this section. The entire discourse system is a complex series of decisions, each affecting the environment used for further processing. The training phase must reflect this changing environment at run time as well as provide classifications for each training instance based on the target output. These issues are discussed in the next section." }, { "figure_ref": [], "heading": "Creating the Training Instances", "publication_ref": [], "table_ref": [], "text": "ID3 is a supervised learning algorithm that requires a set of training instances, each labeled with the correct classification for that instance. To create these instances Wrap-Up begins its tree-building phase by passing the training texts to the sentence analyzer, which creates a set of objects representing the extracted information. Multiple references to the same object are then merged to form the initial input to Wrap-Up's first stage. Wrap-Up encodes instances and builds trees for this stage, then repeats the process using trees from stage one to build trees for stage two, and so forth until trees have been built for all six stages.
As it encodes instances, Wrap-Up repeatedly consults the target output to assign a classification for each training instance. When building trees for the Slot Filtering stage an instance is classified positive if the extracted information matches a slot in the target output. Consider the example of a reference to an \"Ultratech stepper\" in a microelectronics text. Sentence analysis creates an equipment object with two slots filled, equipment type stepper and equipment name \"Ultratech\". This stage of Wrap-Up has a separate ID3 tree to judge the validity of each slot, equipment type and equipment name.
Suppose that the target output has an equipment object with type \"stepper\" but that \"Ultratech\" is actually the manufacturer's name and not the equipment model name. The equipment type instance will be classified positive and the equipment name instance classified negative since no equipment object in the target output has the name Ultratech. Does this instance include features that capture why a human analyst would not consider \"Ultratech\" to be the equipment name?
The human is probably using world knowledge to recognize Ultratech as a familiar company name and to recognize other names such as \"Precision 5000\" as familiar equipment names. Knowledge such as lists of known company names and known equipment names is not presently included in Wrap-Up, although it could be derived easily from the training corpus.
To create training instances for the second stage of Wrap-Up, the entire training corpus is processed again, this time discarding some slot values as spurious according to the Slot Filtering trees before creating instances for Slot Merging trees. An instance is created for each pair of objects of the same type. If both objects can be mapped to the same object in the target output, the instance is classified as positive. For example, an instance would be created for a pair of device objects, one with device type RAM and the other with size 256 KBits. It is a positive instance if the output has a single device object with type RAM and size 256 KBits.
By the time instances are created for later stages of Wrap-Up, errors will have crept in from previous stages. Errors in filtering, merging, and linking will have resulted in some objects retained that no longer match anything in the target output and some objects that only partially match the target output. Since some degree of error is unavoidable, it is best to let the training instances reflect the state of processing that will occur later when Wrap-Up is used to process new texts. If the training is too perfectly filtered, merged, and linked, it will not be representative of the underlying probabilities during run-time use of Wrap-Up.
In later stages of Wrap-Up objects may become complex and only partially match anything in the target output. To aid in matching complex objects, one slot for each object type is identified in the output structure definition as the key slot. An object is considered to match an object in the output if the key slots match. Thus an object with a missing equipment name or spurious equipment name will still match if equipment type, the key slot, matches. If object A has a pointer to an object B, the object matching A in the output must also have a pointer to an object matching B.
Such recursive matching becomes important during the Link Creation stage. Among the last links considered in microelectronics are the roles a company plays towards a process. A company may be the developer of an x-ray lithography process that uses the ABC stepper, but not the developer of the x-ray lithography process linked to a different equipment object. Wrap-Up needs to be sensitive to such distinctions in classifying training instances for trees in the Link Creation and Object Splitting stages.
Instances in the Inferring Missing Objects stage and the Inferring Missing Slot Values stage have classifications that go beyond a simple positive or negative. An instance for the Inferring Missing Objects stage is created whenever an object is found during training that has no higher object pointing to it. If a matching object indeed exists in the target output, Wrap-Up classifies the instance with the type of the object that points to it in the output. For example, a training text may have a reference to \"stepper\" equipment, but have no mention of any process that uses the stepper. The target output will have a lithography object of type \"unknown\" that points to the stepper equipment. This is a legitimate inference to make, since steppers are a type of lithography equipment.
The instance for the orphaned stepper equipment object will be classified as \"lithography-unknown-equipment\". This classification gives Wrap-Up enough information during run time to create the appropriate object.
An instance for Inferring Missing Slot Values is created whenever a slot with a closed class of possible values is missing from an object, such as the \"status\" slot for equipment objects, which takes the value \"in-use\" or \"in-development\". When a matching object is found in the target output, the actual slot value is used as the classification. If the slot is empty or no such object exists in the output, the instance is classified as negative. As in the Inferring Missing Objects stage, negative is the most likely classification for many trees.
Next we consider the effects of tree pruning and confidence thresholds, which can make the ID3 trees more cautious or more aggressive in their classifications." }, { "figure_ref": [], "heading": "Confidence Thresholds and Tree Pruning", "publication_ref": [], "table_ref": [], "text": "With any machine learning technique there is a tendency toward \"overfitting\", making generalizations based on accidental properties of the training data. In ID3 this is more likely to happen near the leaf nodes of the decision tree, where the partition size may grow too small for ID3 to select features with much predictive power. A feature chosen to discriminate among half a dozen training instances is likely to be particular to those instances and not useful in classifying new instances.
The implementation of ID3 used by Wrap-Up deals with this problem by setting a pruning level and a confidence threshold for each tree empirically. A new instance is classified by traversing the decision tree from the root node until a node is reached where the partition size is below the pruning level. The classification halts at that node and a classification of positive is returned if the proportion of positive instances is greater than or equal to the confidence threshold.
A high confidence threshold will make an ID3 tree cautious in its classifications, while a low confidence threshold will allow more positive classifications. The effect of changing the confidence threshold is more pronounced as the pruning level increases. With a large enough pruning level, nearly all branches will terminate in internal nodes with confidence somewhere between 0.0 and 1.0. A low confidence threshold will classify most of these instances as positive, while a high confidence threshold will classify them as negative.
Wrap-Up automatically sets a pruning level and confidence threshold for each tree using tenfold cross-validation. The training instances are divided into ten sets and each set is tested on a tree built from the remaining nine tenths of the training data. This is done at various settings to find the settings that optimize performance.
The metrics used in this domain are \"recall\" and \"precision\", rather than accuracy. Recall is the percentage of positive instances that are correctly classified, while precision is the percentage of positive classifications that are correct. A metric which combines recall and precision is the f-measure, defined by the formula f = (β² + 1)PR / (β²P + R), where β can be set to 1 to favor balanced recall and precision.
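Putting the traversal rule and the metric together, a minimal sketch follows; the tree-node layout (children, test feature, positive/negative counts) is an assumption for illustration, not the actual implementation.

```python
def classify(node, instance, prune_level, threshold):
    """Traverse from the root until reaching a leaf or a node whose
    partition size falls below the pruning level, then return positive
    if the node's proportion of positive training instances meets the
    confidence threshold."""
    while node.children and (node.pos + node.neg) >= prune_level:
        child = node.children.get(instance.get(node.feature))
        if child is None:
            break                     # unseen feature value: halt at this node
        node = child
    proportion = node.pos / (node.pos + node.neg)
    return "positive" if proportion >= threshold else "negative"

def f_measure(p, r, beta=1.0):
    """f = (beta^2 + 1)PR / (beta^2 P + R); beta = 1 balances P and R."""
    return (beta**2 + 1) * p * r / (beta**2 * p + r)

# e.g. f_measure(0.4, 0.6) == 0.48 with beta = 1
```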
Increasing or decreasing β for selected trees can fine-tune Wrap-Up, causing it to select pruning and confidence thresholds that favor recall or favor precision.
We have seen how Wrap-Up automatically derives the classifiers needed and the feature set for each classifier, and how it tunes the classifiers for recall/precision balance. Now we will look at the requirements for using Wrap-Up, with special attention to the issue of manual labor during system development." }, { "figure_ref": [], "heading": "Requirements of Wrap-Up", "publication_ref": [], "table_ref": [], "text": "Wrap-Up is a domain-independent architecture that can be applied to any domain with a well-defined output structure, where domain objects are represented as case frames and relationships between objects are represented as pointers between objects. It is appropriate for any information extraction task in which it is important to identify logical relationships between extracted pieces of information. The user must supply Wrap-Up with an output definition listing the domain objects to be extracted. Each output object has one or more slots, each of which may contain either extracted information or pointers to other objects in the output. One slot for each object is labeled as the key slot, used during training to match extracted objects with objects in the target output.
If the domain and application are already well defined, a user should be able to create such an output definition in less than an hour. For a new application, whose information needs are not established, there is likely to be a certain amount of trial and error in developing the desired representation. This need for a well-defined domain is not unique to discourse processing or to trainable components such as Wrap-Up. All IE systems require clearly defined specifications of what types of objects are to be extracted and what relationships are to be reported.
The more time-consuming requirement of Wrap-Up is associated with the acquisition of training texts and, most importantly, hand-coded target output. While hand-coded targets represent a labor-intensive investment on the part of domain experts, no knowledge of natural language processing or of machine learning technologies is needed to generate these answer keys, so any domain expert can produce answer keys for use by Wrap-Up. A thousand microelectronics texts were used to provide training for Wrap-Up. The actual number of training instances from these training texts varied considerably for each decision tree. Trees that handled the more common domain objects had ample training instances from only two hundred training texts, while those that dealt with the less frequent objects or relationships were undertrained even with a thousand texts.
It is easier to generate a few hundred answer keys than it is to write down explicit and comprehensive domain guidelines. Moreover, domain knowledge implicitly present in a set of answer keys may go beyond the conventional knowledge of a domain expert when there are reliable patterns of information that transcend a logical domain model. Once available, this corpus of training texts can be used repeatedly for knowledge acquisition at all levels of processing.
The architecture of Wrap-Up does not depend on a particular sentence analyzer or a particular information extraction task. It can be used with any sentence analyzer that uses keywords and local linguistic patterns for extraction.
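To make the output-definition requirement concrete, a hypothetical definition in the spirit of the microelectronics structure might look like the following; the syntax and slot names are invented for illustration, not the actual file format.

```python
# Hypothetical output definition: each object type lists its slots, marks
# the key slot used for matching during training, and declares which slots
# are pointers to other object types.
OUTPUT_DEFINITION = {
    "equipment": {
        "key": "equipment-type",
        "slots": ["equipment-type", "equipment-name", "status"],
        "pointers": {"manufacturer": "entity", "modules": "equipment"},
    },
    "lithography": {
        "key": "lithography-type",
        "slots": ["lithography-type"],
        "pointers": {"equipment": "equipment", "device": "device"},
    },
}
```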
The output representation produced by Wrap-Up could either be used directly to generate database entries in a MUC-like task or could serve as an internal representation to support other information extraction tasks." }, { "figure_ref": [], "heading": "The Joint Ventures Domain", "publication_ref": [], "table_ref": [], "text": "After Wrap-Up had been implemented and tested in the microelectronics domain, we tried it on another domain, the MUC-5 joint ventures domain. The information to be extracted in this domain concerns companies involved in joint business ventures: their products or services, ownership, capitalization, revenue, corporate officers, and facilities. Relationships between companies must be sorted out to identify partners, child companies, and subsidiaries. The output structure is more complex than that of microelectronics, with back-pointers, cycles in the output structure, redundant information, and longer chains of linked objects.
Figure 6 shows a text from the joint ventures domain and a diagram of the target output. With all the pointers and back-pointers, the output for even a moderately complicated text becomes difficult to understand at a glance. This text describes a joint venture between a Japanese company, Rinnai Corp., and an unnamed Indonesian company to build a factory in Jakarta. A tie-up is identified with Rinnai and the Indonesian company as partners and a third company, the joint venture itself, as a child company. The output includes an \"entity-relationship\" object which duplicates much of the information in the tie-up object. A corporate officer, the amount of capital, ownership percentages, the product \"portable cookers\", and a facility are also reported in the output. Some special handling was required for the joint ventures domain since the output structure defined for the MUC-5 evaluation included some slots, such as activity site and ownership percent, whose values had a mixture of extracted information and pointers. These slot values have their own internal structure and can be thought of as pseudo-objects: an activity site object with pointers to a facility object and a company, and an ownership percent object with a pointer to a company and another slot giving a numeric value. These pseudo-objects were reformulated as standard objects conforming to the requirements of Wrap-Up, the activity site slot pointing to an activity site object and so forth. These were then transformed back into the complex slot fills when printing the final representation of the output.
The output specifications for joint ventures were less well-behaved in other ways, with graph cycles, back pointers, and redundant objects whose content must agree with information elsewhere in the output. Modifications to Wrap-Up were needed to relax some implicit requirements for the domain structure, allowing graph cycles and giving special handling to any pointer slot which the user has labeled in the output definition as a back pointer.
Joint ventures also has some implicit constraints on relationships between objects. A company can play only a single role in a tie-up or a joint venture relationship: it cannot be both a joint venture child and also a parent or partner company. Wrap-Up had difficulty learning this constraint and performed better when certain pointer slots were labeled with a \"single-role\" constraint in the output definition.
This strategy of letting the user indicate constraints by annotating slots in the output definition was implemented in an ad hoc fashion.
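As an illustration of this kind of annotation, a hypothetical fragment of an annotated output definition might look like the sketch below; the syntax and slot names are invented for illustration.

```python
# Hypothetical slot annotations: "single-role" marks pointer slots whose
# filler may not also fill another role slot of the same tie-up, and
# "back-pointer" exempts a slot from the usual acyclicity requirement.
TIE_UP_SLOTS = {
    "partners":      {"points-to": "entity", "constraints": ["single-role"]},
    "child-company": {"points-to": "entity", "constraints": ["single-role"]},
}
ENTITY_SLOTS = {
    "parent-tie-up": {"points-to": "tie-up", "constraints": ["back-pointer"]},
}
```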
A more general approach would allow the user to declare several types of constraint on the output. A pointer slot may be required or optional, and may have at most one pointer or allow several. Some slots of an object may be mutually exclusive, an entry in one prohibiting an entry in another slot. There may be a required agreement between the value of a slot in one object and a slot in another object. A fully domain-independent discourse tool needs a mechanism to implement such generalized constraints." }, { "figure_ref": [], "heading": "System Performance", "publication_ref": [], "table_ref": [], "text": "As a point of comparison for the performance of Wrap-Up, the UMass/Hughes system was run with the TTG discourse module, which had been used in the official MUC-5 evaluation. Overall system performance with Wrap-Up was compared to performance with TTG, holding the rest of the system constant.
Wrap-Up takes the idea of TTG and extends it into a fully trainable system. TTG used decision trees to acquire domain knowledge, but often relied on hand-coded heuristics to apply that acquired knowledge, in particular for the decisions about splitting or merging objects, which Wrap-Up handles during its Object Splitting stage; inferring missing objects, which Wrap-Up does in its Inferring Missing Objects stage; and adding context-sensitive default slot values, which Wrap-Up does in its Inferring Missing Slot Values stage.
Several iterations of hand tuning were required to adjust thresholds for the decision trees produced by TTG, whereas Wrap-Up found thresholds and pruning levels to optimize recall and precision for each tree automatically. After a day of CPU time devoted to decision tree training, Wrap-Up produced a working system and no further programming was needed.
The comparison with TTG was made for both the microelectronics domain and the joint ventures domain. The metrics used here are recall and precision. Recall is the percentage of possible information that was reported. Correctly identifying two out of five possible company names gives a recall of 40. Precision is the percent correct of the reported information. If four companies are reported, but only two of them are correct, precision is 50. Recall and precision are combined into a single metric by the f-measure, defined as f = (β² + 1)PR / (β²P + R), with β set to 1 for balanced recall and precision." }, { "figure_ref": [], "heading": "The Microelectronics Domain", "publication_ref": [], "table_ref": [], "text": "Wrap-Up's scores on the official MUC-5 microelectronics test sets were generally a little higher than those of TTG, both in overall recall and precision.
[Table: recall/precision/f-measure triples for Wrap-Up and TTG on the microelectronics test sets; f-measures range from 32.1 to 37.5.]" }, { "figure_ref": [], "heading": "Figure 7: Performance on MUC-5 microelectronics test sets", "publication_ref": [], "table_ref": [], "text": "To put these scores in perspective, the highest scoring systems in the MUC-5 evaluation had f-measures in the high 40's. This was a difficult task both for sentence analysis and discourse analysis.
Another way to assess Wrap-Up is to measure its performance against the baseline provided by output from sentence analysis. Lack of coverage by the sentence analyzer places a ceiling on performance at the discourse level. In test set part 1 there were 208 company names to be extracted.
The CIRCUS analyzer extracted a total of 404 company names, with only 131 correct and 2 partially correct, giving a baseline of 63% recall and 33% precision for that slot. Wrap-Up's Entity-Name-Filter tree managed to discard a little over half of the spurious company names, keeping 77% of the good companies. This resulted in 49% recall and 44% precision for this slot, raising the f-measure by 5 points, but doing so at the expense of recall.

Limited recall for extracted objects is compounded when it comes to links between objects. If half the possible companies and a third of the microelectronics processes are missing, discourse processing has no chance at a large proportion of the possible links between companies and processes.

Although precision is often increased at the expense of recall, Wrap-Up also has mechanisms to increase recall slightly. When the Inferring Missing Objects stage infers a missing process from an equipment object, or the Object Splitting stage splits a process that points to multiple equipment, Wrap-Up can sometimes gain recall above that produced by the sentence analyzer." }, { "figure_ref": [], "heading": "The Joint Ventures Domain", "publication_ref": [], "table_ref": [], "text": "In the joint ventures domain Wrap-Up's scores on the MUC-5 test sets were a little lower than the official UMass/Hughes scores. Wrap-Up tended to have lower recall but slightly higher precision. The performance of Wrap-Up and TTG is roughly comparable for each of the two domains. Both systems tend to favor the domain in which they were first developed: Wrap-Up was developed in microelectronics and then ported to joint ventures, while the opposite was true for TTG. A certain amount of bias has probably crept into design decisions that were meant to be domain independent in each system. The higher scores of TTG for joint ventures are partly due to hand-coded heuristics that altered output from TTG before printing the final output, something that was not done for TTG in microelectronics or for Wrap-Up in either domain.

The most noticeable difference between Wrap-Up and TTG output in the joint ventures domain was in the filtering of spuriously extracted company names. Discourse processing started with 38% recall and 32% precision from sentence analysis for company names. Both systems included a filtering stage that attempted to raise precision by discarding spurious companies, but did so at the expense of discarding some valid companies as well. Each system used threshold settings to control how cautiously or aggressively this discarding is done (as in the example from Section 3.5). TTG's were set by hand and Wrap-Up's were selected automatically by cross-validation on the training set. TTG did only mild filtering on this slot, resulting in a gain of 2 precision points but a drop of 6 recall points. Wrap-Up chose aggressive settings and gained 13 precision points but lost 17 points in recall for this slot.

As a result, Wrap-Up ended up with only two thirds as many correct companies as TTG. This in turn meant two thirds as many pointers to companies in tie-ups and entity relationships. For other objects Wrap-Up scored higher recall than TTG, getting more than three times the total recall for activity, industry, and facility objects." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [ "b11", "b4" ], "table_ref": [], "text": "With the recent accessibility of large on-line text databases and news services, the need for information extraction systems is growing.
Such systems go beyond information retrieval and create a structured summary of selected information contained within relevant documents. This gives the user the ability to skim vast amounts of text, pulling out information on a particular topic. IE systems are knowledge-based, however, and must be individually tailored to the information needs of each application.

Some research laboratories have focused on sophisticated user interfaces to ease the burden of knowledge acquisition. GE's NLToolset is an example of this approach (Jacobs et al., 1993), while BBN typifies systems that combine user input with corpus-based statistics (Ayuso et al., 1993). The University of Massachusetts has been moving in the direction of machine learning to create a fully trainable IE system. The ultimate goal is a turnkey system that can be tailored to new information needs by users who have no special linguistic or technical expertise.

Wrap-Up embodies this goal. The user defines an information need and output structure, and provides a training corpus of representative texts with hand-coded target output for each text. Wrap-Up takes it from there and instantiates a fully functional IE discourse system for the new domain with no further customization needed by the user. Wrap-Up is the first fully trainable system to handle discourse processing, and it does so with no degradation in performance. It automatically decides what classifiers are needed based on the domain output structure and derives the feature set for each classifier from sentence analyzer output.

The most intriguing aspect of Wrap-Up is the automatic generation of features. How effective was this, and what did the trees actually learn? The greatest leverage seems to come from features that encode attributes of domain objects. The trees in microelectronics often based their classification on probabilities conditioned on the device type, equipment type, or process type. The example tree in Section 3.2 first tested the equipment type and lithography type in determining whether a piece of equipment was used for a lithography process. This type of real-world domain knowledge was the most important thing that Wrap-Up learned about microelectronics.

Useful knowledge was also provided by features that encoded the relative position of references in the text. Distance, measured in number of sentences apart, played a prominent role in many classifications, with other trees relying on more fine-grained features such as the number of times both references were in the same noun phrase or had overlapping linguistic context.

An enhancement to Wrap-Up's feature generation would be to increase its expressiveness about relative position. In addition to direct references to object A and object B, Wrap-Up could look for indirect references to A (pronominal or anaphoric) found near references to B and vice versa. The instance shown in Section 3.3 is an example where features for such indirect relationships might be useful.

Wrap-Up currently encodes an instance for each pair of objects that might be related, but is incapable of expressing the rule "attach object B to the most recent object of type A." It is blind to the existence of other objects that are alternate candidates for the relationship being considered. Features could be encoded to reflect whether object A is the most recently mentioned object of its type.

The features that were least successful and most tantalizing were those that encoded the local linguistic context, the extraction patterns.
These included an exact lexical item and were nearly all of such low frequency that they added noise more often than aiding useful discriminations. Tree pruning was only a partial solution, and an experiment in combining semantically similar terms only caused a sharp drop in classification accuracy.

Low-frequency terms are a built-in problem for any system that processes unrestricted text. Dunning (1993) estimated that 20-30% of typical English news wire reports are composed of words of frequency less than one in 50,000 words. Yet the discourse decisions made by a human reader often seem to hinge on the use of one of these infrequent terms. It is a challenging open question to find methods to utilize local linguistic context without drowning in the noise produced by low-frequency terms.

Finding a mechanism for choosing appropriate features is more critical than which machine learning algorithm is applied. ID3 was chosen as easy to implement, although other approaches such as vector spaces are worth trying. It is not obvious, however, how to craft a weighting scheme that gives greatest weight to the most useful features in the vector space and nearly zero to those not useful in making the desired discrimination. Cost and Salzberg (1993) describe a weighting scheme for the nearest neighbor algorithm that looks promising for lexically-based features. Another candidate for an effective classifier is a back-propagation network, which might naturally converge on weights that give most influence to the most useful features.

We hope that Wrap-Up will inspire the machine learning community to consider analysis of unrestricted text as a fruitful application for ML research, while challenging the natural language processing community to consider ML techniques for complex processing tasks. In a broader context, Wrap-Up provides a paradigm for user-customizable system design, where no technological background on the part of the user is assumed. A fully functional system can be brought up in a new domain without the need for months of development time, signifying substantial progress toward fully scalable and portable natural language processing systems." }, { "figure_ref": [], "heading": "Appendix A: Walk-through of a Sample Text", "publication_ref": [], "table_ref": [], "text": "To see the Wrap-Up algorithm in action, consider the sample text in Figure 9. The desired output has the company, Mitsubishi Electronics America, Inc., linked as purchaser/user to two packaging processes, TSOP and SOJ packaging. Each of these processes points to the device, 1 Mbit DRAM. The packaging material, plastic, should be attached to TSOP but not SOJ. All other details from the text are considered extraneous to the domain.

After sentence analysis, followed by the step that merges multiple references, there are eight objects passed as input to Wrap-Up. Sentence analysis did fairly well in identifying the relevant information, only missing "1 M" as a reference to 1 Mbit. Three of the eight objects are spurious and should be discarded during Wrap-Up's Slot Filtering stage. According to domain guidelines, the name "Mitsubishi Electronics America, Inc." should be reported, not "The Semiconductor Division ...". The packaging material EPOXY and the device MEMORY should also be discarded.

The Slot Filtering stage creates an instance for each slot of each object.
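To make the instance encoding concrete, here is a rough sketch of how a slot-filtering instance might be assembled as a feature dictionary, in the spirit of the instance reproduced in the formulas appendix (features such as extraction-count and keyword flags). The helper name, data layout, and feature spellings are illustrative assumptions, not Wrap-Up's actual code.

```python
def make_slot_instance(slot_name, slot_value, references):
    """Encode one (object, slot) pair as features for a filter tree.

    `references` holds one entry per extraction of this slot fill;
    the layout is a hypothetical reconstruction for illustration.
    """
    features = {
        # The domain attribute itself, e.g. packaging type or device
        # type, often the root test of the corresponding filter tree.
        slot_name: slot_value,
        # How many extraction patterns produced this fill: the root
        # feature of the Entity-Name-Filter tree discussed below.
        "extraction-count": len(references),
    }
    for ref in references:
        # Keyword flags for trigger words seen in the local context.
        for word in ref.get("trigger_words", []):
            features["keyword-" + word] = True
    return features

refs = [{"trigger_words": ["offers"]}, {"trigger_words": []},
        {"trigger_words": []}]
print(make_slot_instance("entity-name",
                         "Mitsubishi Electronics America, Inc.", refs))
# {'entity-name': 'Mitsubishi Electronics America, Inc.',
#  'extraction-count': 3, 'keyword-offers': True}
```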
The Entity-Name-Filter tree classifies "Mitsubishi Electronics America, Inc." as a positive instance, but "The Semiconductor Division ..." as negative, and it is discarded. The most reliable discriminator of valid company names is "extraction-count", which was selected as the root feature of this tree. Training instances participating in several extraction patterns were twice as likely to be valid as those extracted only once or twice. This held true in this text.

[The sample text of Figure 9:] The Semiconductor Division of Mitsubishi Electronics America, Inc. now offers 1M CMOS DRAMs in Thin Small-Outline Packaging (TSOP*), providing the highest memory density available in the industry. Developed by Mitsubishi, the TSOP also lets designers increase system memory density with standard and reverse or "mirror image" pin-outs. Mitsubishi's 1M DRAM TSOP provides the density of chip-on-board but with much higher reliability because the plastic epoxy-resin package allows each device to be 100% burned-in and fully tested. *Previously referred to as VSOP (very small-outline package) or USOP (ultra small-outline package). The 1M DRAM TSOP has a height of 1.2 mm, a plane measurement of 16.0 mm x 6.0 mm, and a lead pitch of 0.5 mm, making it nearly three times thinner and four times smaller in volume than the 1M DRAM SOJ package. The SOJ has a height of 3.45 mm, a plane dimension of 17.15 mm x 8.45 mm, and a lead pitch of 1.27 mm. Additionally, the TSOP weighs only 0.22 grams, in contrast with the 0.75 gram weight of the SOJ. Full text available on PTS New Product Announcements.

As the Slot Filtering stage continues, the packaging material EPOXY is classified negative by the Packaging-Material-Filter tree, whose root test is packaging type. It turns out that EPOXY was usually extracted erroneously in the training corpus. This contrasts with the material PLASTIC, which was usually reliable and is classified positive. Both TSOP and SOJ packaging types are classified positive by the Packaging-Type-Filter tree. Instances for these types were usually positive in the training set, particularly when extracted multiple times from the text. The Device-Type-Filter tree, with root feature device type, finds that DRAM is a reliable device type but that MEMORY was usually spurious in the training corpus. It should usually be merged with a more specific device type.

The Slot Merging stage of Wrap-Up then considers each pair of remaining objects of the same type. There are three packaging objects, one with type TSOP, one with material PLASTIC, and one with type SOJ. The Packaging-Slotmerge tree easily rejects the TSOP-SOJ instance, since packaging objects never had multiple types in training. After testing that the second object has no packaging type, the feature "distance" is tested. This led to a positive classification for TSOP-PLASTIC, which are from the same sentence, and negative for SOJ-PLASTIC, with nearest references two sentences apart. At this point four objects remain:" }, { "figure_ref": [], "heading": "Entity", "publication_ref": [], "table_ref": [], "text": "Type: company Name: Mitsubishi Electronics America, Inc. The Link Creation stage considers each pair of objects that could be linked according to the output structure. The first links considered are pointers from packaging to device objects. Separate instances for the Packaging-Device-Link tree are created for the possible TSOP-DRAM link and for the possible SOJ-DRAM link.
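A minimal sketch of this pairwise link-instance generation, assuming candidate pairs are simply read off the pointer slots allowed by the output structure (the schema constant and helper names are illustrative, not Wrap-Up's own code):

```python
from itertools import product

# Pointer slots allowed by the output structure; an illustrative subset.
LINK_SCHEMA = {("packaging", "device")}

def link_instances(objects):
    """Yield one candidate instance per pair of linkable objects."""
    for a, b in product(objects, repeat=2):
        if (a["type"], b["type"]) in LINK_SCHEMA:
            yield {
                "from": a["name"], "to": b["name"],
                # Sentence distance between the nearest references,
                # a prominent feature in the link-creation trees.
                "distance": min(abs(i - j) for i in a["sentences"]
                                for j in b["sentences"]),
            }

objs = [{"type": "packaging", "name": "TSOP", "sentences": [0, 1, 3]},
        {"type": "packaging", "name": "SOJ", "sentences": [3, 4]},
        {"type": "device", "name": "DRAM", "sentences": [0, 2, 3]}]
print(list(link_instances(objs)))
# Two instances: TSOP-DRAM and SOJ-DRAM, each with distance 0.
```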
Although only 25% of the training instances were positive, the tree found that 78% were positive with packaging type TSOP and "distance" of 0 sentences, and 77% were positive with packaging type SOJ and device type DRAM. After testing a few more features, the tree found each of these instances positive and pointers were added in the output. Notice how this tree interleaves knowledge about types of packaging and types of devices with knowledge about the relative position of references in the text." }, { "figure_ref": [], "heading": "Device", "publication_ref": [], "table_ref": [], "text": "The next Link Creation decision concerns the roles Mitsubishi plays towards each of the packaging processes. The output structure has a "microelectronics-capability" object with one slot pointing to a lithography, layering, etching, or packaging process, and four other slots (labeled developer, manufacturer, distributor, and purchaser/user) pointing to companies. Wrap-Up accordingly encodes four instances for Mitsubishi and TSOP packaging, one for each possible role. The same is done for Mitsubishi and SOJ packaging.

Instances for Mitsubishi in the roles of developer, manufacturer, and distributor were all classified as negative. Training instances for these trees had almost no positive instances." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research was supported by NSF Grant no. EEC-9209623, State/Industry/University Cooperative Research on Intelligent Information Retrieval." }, { "figure_ref": [], "heading": "Entity", "publication_ref": [], "table_ref": [], "text": "Type: company Name: Mitsubishi Electronics America, Inc. It seems that stories about packaging processes in this corpus are almost exclusively about companies purchasing or using someone else's packaging technology.

There are seldom explicit linguistic clues about the relationship of a company to a process in this corpus, so the Packaging-User-Link tree tests first for the relative distance between references. Only 20% of training instances were positive, but when distance was 0 it jumped to 43% positive. Mitsubishi is in the same sentence with TSOP, and the Mitsubishi-SOJ instance also has distance of 0 by inheritance: even though the nearest reference to SOJ is two sentences after Mitsubishi, SOJ is linked to DRAM, which occurs in the same sentence as Mitsubishi. Both instances are classified positive after further testing for packaging type and other features.

The last discourse decision in the Link Creation stage is to add pointers to each microelectronics capability from a "template object", created as a dummy root object in this domain's output. The Object Splitting stage finally gets to make a decision, albeit a vacuous one, and decides to let the template object point to multiple objects in its "content" slot. There were no "orphan" objects or missing slot values for the last two stages of Wrap-Up to consider. The final output for this text is shown in Figure 11.
[ { "authors": "D Ayuso; S Boisen; H Fox; H Gish; R Ingria; R Weischedel", "journal": "", "ref_id": "b0", "title": "BBN: Description of the PLUM System as Used for MUC-4", "year": "1992" }, { "authors": "M Brent", "journal": "", "ref_id": "b1", "title": "Robust Acquisition of Subcategorization Frames", "year": "1993" }, { "authors": "C Cardie", "journal": "", "ref_id": "b2", "title": "A Case-Based Approach to Knowledge Acquisition for Domain-Speci c Sentence Analysis", "year": "1993" }, { "authors": "K Church", "journal": "", "ref_id": "b3", "title": "A stochastic parts program and noun phrase parser for unrestricted text", "year": "1988" }, { "authors": "S Cost; S Salzberg", "journal": "Machine Learning", "ref_id": "b4", "title": "A Weighted Nearest Neighbor Algorithm for Learning with Symbolic Features", "year": "1993" }, { "authors": "S Derose", "journal": "Computational Linguistics", "ref_id": "b5", "title": "Grammatical Category Disambiguation by Statistical Optimization", "year": "1988" }, { "authors": "C P Dolan; S R Goldman; T V Cuda; A M Nakamura", "journal": "", "ref_id": "b6", "title": "Hughes Trainable Text Skimmer: Description of the TTS System as used for MUC-3", "year": "1991" }, { "authors": "T Dunning", "journal": "Computational Linguistics", "ref_id": "b7", "title": "Accurate Methods for the Statistics of Surprise and Coincidence", "year": "1993" }, { "authors": "B Grosz; C Sidner", "journal": "Computational Linguistics", "ref_id": "b8", "title": "Attention, intention and the structure of discourse", "year": "1986" }, { "authors": "D Hindle", "journal": "", "ref_id": "b9", "title": "Acquiring Disambiguation Rules from Text", "year": "1989" }, { "authors": "J Hobbs", "journal": "Lingua", "ref_id": "b10", "title": "Resolving Pronoun References", "year": "1978" }, { "authors": "P Jacobs; G Krupka; L Rau; M Mauldin; T Mitamura; T Kitani; I Sider; L Childs", "journal": "", "ref_id": "b11", "title": "GE-CMU: Description of the SHOGUN System used for MUC-5", "year": "1993" }, { "authors": "W Lehnert", "journal": "Ablex Publishing", "ref_id": "b12", "title": "Symbolic/Subsymbolic Sentence Analysis: Exploiting the Best of Two Worlds", "year": "1990" }, { "authors": "W Lehnert; C Cardie; D Fisher; J Mccarthy; E Rilo; S Soderland", "journal": "", "ref_id": "b13", "title": "University of Massachusetts: Description of the CIRCUS System as Used for MUC-4", "year": "1992" }, { "authors": "W Lehnert; J Mccarthy; S Soderland; E Rilo; C Cardie; J Peterson; F Feng; C Dolan; S Goldman", "journal": "", "ref_id": "b14", "title": "UMass/Hughes: Description of the CIRCUS System as Used for MUC-5", "year": "1993" }, { "authors": "L Liddy; K Mcvearry; W Paik; E Yu; M Mckenna", "journal": "", "ref_id": "b15", "title": "Development, Implementation, and Testing of a Discourse Model for Newspaper Texts", "year": "1993" }, { "authors": "", "journal": "Morgan Kaufmann Publishers", "ref_id": "b16", "title": "MUC-3", "year": "1991" }, { "authors": "J R Quinlan", "journal": "Machine Learning", "ref_id": "b17", "title": "Induction of Decision Trees", "year": "1986" }, { "authors": "E Rilo", "journal": "", "ref_id": "b18", "title": "Automatically Constructing a Dictionary for Information Extraction Tasks", "year": "1993" }, { "authors": "G Salton; A Wong; C S Yang", "journal": "Correspondences of the ACM", "ref_id": "b19", "title": "A vector space model for automatic indexing", "year": "1975" }, { "authors": "S Soderland; W Lehnert", "journal": "", "ref_id": "b20", "title": "Corpus-Driven Knowledge 
Acquisition for Discourse Analysis", "year": "1994" }, { "authors": "R Weischedel; M Meteer; R Schwartz; L Ramshaw; J Palmucci", "journal": "Computational Linguistics", "ref_id": "b21", "title": "Coping with Ambiguity and Unknown Words Through Probabilistic Models", "year": "1993" }, { "authors": "C Will", "journal": "Morgan Kaufmann Publishers", "ref_id": "b22", "title": "Comparing human and machine performance for natural language information extraction: Results for English microelectronics from the MUC-5 evaluation", "year": "1993" } ]
[ { "formula_coordinates": [ 13, 167.33, 101.27, 266.77, 108.09 ], "formula_id": "formula_0", "formula_text": "(lithography-type . UV) (extraction-count-1 . 3) (pp-available-1 . t) (pp-in-1 . t) (keyword-deep-ultraviolet-1 . t) (equipment-type . stepper) (equipment-name . t) (extraction-count-2 . 3) (obj-unveiled-2 . t) (subj-passive-developed-2 . t) (keyword-stepper-2 . t) (common-triggers . 0) (common-phrases . 0) (distance . -1)" } ]
Wrap-Up: a Trainable Discourse Module for Information Extraction
The vast amounts of on-line text now available have led to renewed interest in information extraction (IE) systems that analyze unrestricted text, producing a structured representation of selected information from the text. This paper presents a novel approach that uses machine learning to acquire knowledge for some of the higher level IE processing. Wrap-Up is a trainable IE discourse component that makes intersentential inferences and identifies logical relations among information extracted from the text. Previous corpus-based approaches were limited to lower level processing such as part-of-speech tagging, lexical disambiguation, and dictionary construction. Wrap-Up is fully trainable, and not only automatically decides what classifiers are needed, but even derives the feature set for each classifier automatically. Performance equals that of a partially trainable discourse module requiring manual customization for each domain.

This paper describes Wrap-Up (Soderland & Lehnert, 1994), the first system to automatically acquire domain knowledge for the higher level processing associated with discourse analysis. Wrap-Up uses supervised learning to induce a set of classifiers from a training corpus of representative texts, where each text is accompanied by hand-coded target output. We implemented Wrap-Up with the ID3 decision tree algorithm (Quinlan, 1986), although other machine learning algorithms could have been selected. Wrap-Up is a fully trainable system and is unique in that it not only decides what classifiers are needed for the domain, but automatically derives the feature set for each classifier. The user supplies a definition of the objects and relationships of interest to the domain and a training corpus with hand-coded target output. Wrap-Up does the rest, with no further hand coding needed to tailor the system to a new domain.

Section 2 discusses the IE task in more detail, introduces the microelectronics domain, and gives an overview of the CIRCUS sentence analyzer. Section 3 describes Wrap-Up, giving details of how ID3 trees are constructed for each discourse decision, how features are automatically derived for each tree, and requirements for applying Wrap-Up to a new domain. Section 4 shows the performance of Wrap-Up in two domains and compares its performance to that of a partially trainable discourse component. In Section 5 we draw some conclusions about the contribution of this research. A detailed example from the microelectronics domain is given in an appendix.

This section gives an overview of information extraction and illustrates IE processing with a sample text fragment from the microelectronics domain. We then discuss the need for trainable IE components to acquire knowledge for a new domain. An information extraction system operates at two levels. First, sentence analysis identifies information that is relevant to the IE application. Then discourse analysis, which we will focus on in this paper, takes the output from sentence analysis and assembles it into a coherent representation of the entire text. All of this is done according to predefined guidelines that specify what objects from the text are relevant and what relationships between objects are to be reported. Sentence analysis can be further broken down into several stages, each applying different types of domain knowledge. The lowest level is preprocessing, which segments the text into words and sentences. Each word is assigned a part-of-speech tag and possibly a semantic tag in preparation for further processing.
Different IE systems will do varying amounts of syntactic parsing at this point. Most research sites that participated in the ARPA-sponsored Message Understanding Conferences (MUC-3, 1991; MUC-4, 1992; MUC-5, 1993) found that robust, shallow analysis and pattern matching performed better than more elaborate, but brittle, parsing techniques. The CIRCUS sentence analyzer (Lehnert, 1990; Lehnert et al., 1992) does shallow syntactic analysis to identify simple syntactic constituents, and to distinguish active and passive voice verbs. This shallow syntactic analysis is sufficient for the extraction task, which uses
Stephen Soderland; Wendy Lehnert
[ { "figure_caption": "Figure 2 :2Figure 2: A decision tree for pointers from lithography to equipment objects.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": ": : :a new line of 256 Kbit and 1 Mbit ROM chips. They are available in PLCC and priced at : : :", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: A tree for pointers from packaging to device objects.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Two objects extracted from the sample text", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Performance on MUC-5 joint ventures test sets", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "FigureFigure 9: A microelectronics text", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Input to Wrap-Up from the sample text", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" } ]
null
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b30", "b18", "b51", "b25", "b100", "b89", "b12", "b81", "b92", "b71", "b50", "b65", "b8", "b56" ], "table_ref": [], "text": "A probabilistic graphical model is graph where the nodes represent variables and the arcs (directed or undirected) represent dependencies between variables. They are used to de ne a mathematical form for the joint or conditional probability distribution between variables. Graphical models come in various forms: Bayesian networks used to represent causal and probabilistic processes, data-ow diagrams used to represent deterministic computation, in uence diagrams used to represent decision processes, and undirected Markov networks (random elds) used to represent correlation for images and hidden causes.\nGraphical models are used in domains such as diagnosis, probabilistic expert systems, and, more recently, in planning and control (Dean & Wellman, 1991;Chan & Shachter, 1992), dynamic systems and time-series (Kj ru , 1992;Dagum, Galper, Horvitz, & Seiver, 1994), and general data analysis (Gilks et al., 1993a) and statistics (Whittaker, 1990). This paper shows the task of learning can also be modeled with graphical models. This metalevel use of graphical models was rst suggested by Spiegelhalter and Lauritzen (1990) in the context of learning probabilities for Bayesian networks.\nGraphical models provide a representation for the decomposition of complex problems. They also have an associated set of mathematics and algorithms for their manipulation. When graphical models are discussed, both the graphical formalism and the associated algorithms and mathematics are implicitly included. In fact, the graphical formalism is unnecessary for the technical development of the approach, but its use conveys the important c 1994 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved. structural information of a problem in a natural visual manner. Graphical operations manipulate the underlying structure of a problem unhindered by the ne detail of the connecting functional and distributional equations. This structuring process is important in the same way that a high-level programming language leads to higher productivity over assembly language.\nA graphical model can be developed to represent the basic prediction done by linear regression, a Bayesian network for an expert system, a hidden Markov model, or a connectionist feed-forward network (Buntine, 1994). A graphical model can also be used to represent and reason about the task of learning the parameters, weights, and structure of each of these representations. An extension of the standard graphical model that allows this kind of learning to be represented is used here. The extension is the notion of a plate introduced by Spiegelhalter1 (1993). Plates allow samples to be explicitly represented on the graphical model, and thus reasoned about and manipulated. This makes data analysis problems explicit in much the same way that utility and decision nodes are used for decision analysis problems (Shachter, 1986).\nThis paper develops a framework in which the basic computational techniques for learning can be directly applied to graphical models. This forms the basis of a computational theory of Bayesian learning using the language of graphical models. By a computational theory we mean that the approach shows how a wide variety of learning algorithms can be created from graphical speci cations and a few simple algorithmic criteria. 
The basic computational techniques of probabilistic (Bayesian) inference used in this computational theory of learning are widely reviewed (Tanner, 1993; Press, 1989; Kass & Raftery, 1993; Neal, 1993; Bretthorst, 1994). These include various exact methods, Markov chain Monte Carlo methods such as Gibbs sampling, the expectation maximization (EM) algorithm, and the Laplace approximation. More specialized computational techniques also exist for handling missing values (Little & Rubin, 1987), making a batch algorithm incremental, and adapting an algorithm to handle large samples. With creative combination, these techniques are able to address a wide range of data analysis problems.

The paper provides the blueprint for a software toolkit that can be used to construct many data analysis and learning algorithms based on a graphical specification. The conceptual architecture for such a toolkit is given in Figure 1. Probability and decision theory are used to decompose a problem into a computational prescription, and then search and optimization techniques are used to fill the prescription. A version of this toolkit already exists using Gibbs sampling as the general computational scheme (Gilks et al., 1993b). The list of algorithms that can be constructed in one form or another by the scheme in Figure 1 is impressive. But the real gain from the scheme does not arise from the potential re-implementation of existing software, but from the understanding gained by putting these in a common language, the ability to create novel hybrid algorithms, and the ability to tailor special-purpose algorithms for specific problems.

This paper is tutorial in the sense that it collects material from different communities and presents it in the language of graphical models. First, this paper introduces graphical models, used to represent first-order inference and learning. Second, this paper develops and reviews a number of operations on graphical models. Finally, this paper gives some examples." }, { "figure_ref": [], "heading": "Introduction to graphical models", "publication_ref": [ "b90", "b0", "b24", "b27", "b69" ], "table_ref": [], "text": "This section introduces graphical models. The brief tour is necessary before introducing the operations for learning. Graphical models offer a unified qualitative and quantitative framework for representing and reasoning with probabilities and independencies. They combine a representation for uncertain problems with techniques for performing inference. Flexible toolkits and systems exist for applying these techniques (Srinivas & Breese, 1990; Andersen, Olesen, Jensen, & Jensen, 1989; Cowell, 1992). Graphical models are based on the notion of independence, which is worth repeating here.

Definition 2.1. A is independent of B given C if $p(A, B \mid C) = p(A \mid C)\, p(B \mid C)$ whenever $p(C) \neq 0$, for all A, B, C.

The theory of independence as a basic tool for knowledge structuring is developed by Dawid (1979) and Pearl (1988). A graphical model can be equated with the set of probability distributions that satisfy its implied constraints. Two graphical models are equivalent probability models if their corresponding sets of satisfying probability distributions are equivalent." }, { "figure_ref": [], "heading": "Directed graphical models", "publication_ref": [ "b19", "b84", "b69", "b84" ], "table_ref": [], "text": "The basic kind of graphical model is the Bayesian network, also called a belief net, which is most popular in artificial intelligence.
See Charniak (1991), Shachter and Heckerman (1987), and Pearl (1988) for an introduction. This is also a graphical representation for a Markov chain. A Bayesian network is a graphical model that uses directed arcs exclusively to form a directed acyclic graph (DAG), i.e., a directed graph without directed cycles. Figure 2, adapted from (Shachter & Heckerman, 1987), shows a simple Bayesian network for" }, { "figure_ref": [], "heading": "Figure 2: A simplified medical problem", "publication_ref": [ "b55" ], "table_ref": [], "text": "a simplified medical problem (the nodes are Occupation, Climate, Age, Disease, and Symptoms). The graphical model represents a conditional decomposition of the joint probability (see (Lauritzen, Dawid, Larsen, & Leimer, 1990) for more details and interpretations). This decomposition works as follows (full variable names have been abbreviated):

$$p(Age, Occ, Clim, Dis, Symp \mid M) = p(Age \mid M)\, p(Occ \mid M)\, p(Clim \mid M)\, p(Dis \mid Age, Occ, Clim, M)\, p(Symp \mid Dis, M), \qquad (1)$$

where M is the conditioning context, for instance the expert's prior knowledge and the choice of the graphical model in Figure 2. Each variable is written conditioned on its parents, where parents(x) is the set of variables with a directed arc into x. The general form of this equation for a set of variables X is:

$$p(X \mid M) = \prod_{x \in X} p(x \mid \mathrm{parents}(x), M). \qquad (2)$$

This equation is the interpretation of a Bayesian network used in this paper." }, { "figure_ref": [], "heading": "Undirected graphical models", "publication_ref": [ "b69", "b5", "b76", "b37", "b5", "b45", "b55", "b34", "b36", "b5" ], "table_ref": [], "text": "Another popular form of graphical model is an undirected graph, sometimes called a Markov network (Pearl, 1988). This is a graphical model for a Markov random field. Markov random fields became used in statistics with the advent of the Hammersley-Clifford theorem (Besag, York, & Mollie, 1991). A variant of the theorem is given later in Theorem 2.1. Markov random fields are used in imaging and spatial reasoning (Ripley, 1981; Geman & Geman, 1984; Besag et al., 1991) and various stochastic models in neural networks (Hertz, Krogh, & Palmer, 1991). Undirected graphs are also important because they simplify the theory of Bayesian networks (Lauritzen et al., 1990).

Figure 3 shows a simple 4x4 image and an undirected model for the image. This model is based on the first-degree Markov assumption; that is, the current pixel is only directly influenced by pixels positioned next to it, as indicated by the undirected arcs between variables $p_{i,j}$ and $p_{i,j+1}$, $p_{i,j}$ and $p_{i+1,j}$, etc. Each node x (corresponding to a pixel) has its set of neighbors: those nodes it is directly connected to by an undirected arc. For instance, the neighbors of $p_{1,1}$ are $p_{1,2}$, $p_{2,2}$ and $p_{2,1}$. For the variable/node x, denote these by neighbors(x).

In general, there is no formula for undirected graphs in terms of conditional probabilities corresponding to Equation (1) for the Bayesian network of Figure 2. However, a functional decomposition does exist in another form, based on the maximal cliques in Figure 3. Maximal cliques are subgraphs that are fully connected but are not strictly contained in other fully connected subgraphs. These are the 9 sets of 2x2 cliques such as $\{p_{1,2}, p_{1,3}, p_{2,2}, p_{2,3}\}$. The interpretation of the graph is that the joint probability is a product over functions of the maximal cliques.
$$p(p_{1,1}, \ldots, p_{4,4}) = f_1(p_{1,1}, p_{1,2}, p_{2,1}, p_{2,2})\, f_2(p_{1,2}, p_{1,3}, p_{2,2}, p_{2,3})\, f_3(p_{1,3}, p_{1,4}, p_{2,3}, p_{2,4})\, f_4(p_{2,1}, p_{2,2}, p_{3,1}, p_{3,2})\, f_5(p_{2,2}, p_{2,3}, p_{3,2}, p_{3,3})\, f_6(p_{2,3}, p_{2,4}, p_{3,3}, p_{3,4})\, f_7(p_{3,1}, p_{3,2}, p_{4,1}, p_{4,2})\, f_8(p_{3,2}, p_{3,3}, p_{4,2}, p_{4,3})\, f_9(p_{3,3}, p_{3,4}, p_{4,3}, p_{4,4}), \qquad (3)$$

for some functions $f_1, \ldots, f_9$ defined up to a constant. From this formula it follows that $p_{1,3}$ is conditionally independent of its non-neighbors given its neighbors $p_{1,2}, p_{2,2}, p_{2,3}, p_{1,4}, p_{2,4}$.

The general form of Equation (3) for a set of variables X is given in the next theorem. Compare this with Equation (2).

Theorem 2.1. An undirected graph G is on variables in the set X. The set of maximal cliques on G is $\mathrm{Cliques}(G) \subseteq 2^X$. The distribution p(X) (probability or probability density) is strictly positive in the domain $\times_{x \in X} \mathrm{domain}(x)$. Then under the distribution p(X), x is independent of $X - \{x\} - \mathrm{neighbors}(x)$ given $\mathrm{neighbors}(x)$ for all $x \in X$ (Frydenberg (1990) refers to this condition as local G-Markovian) if, and only if, p(X) has the functional representation

$$p(X) = \prod_{C \in \mathrm{Cliques}(G)} f_C(C), \qquad (4)$$

for some functions $f_C > 0$.

The general form of this theorem for finite discrete domains is called the Hammersley-Clifford Theorem (Geman, 1990; Besag et al., 1991). Again, this equation is used as the interpretation of a Markov network." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1", "fig_1", "fig_1", "fig_3" ], "heading": "Conditional probability models", "publication_ref": [ "b7", "b74", "b78", "b68", "b52", "b61", "b28", "b82", "b61" ], "table_ref": [], "text": "Consider the conditional probability $p(Dis \mid Age, Occ, Clim)$ found in the simple medical problem from Figure 2 and Equation (1). This conditional probability models how the disease should vary for given values of age, occupation, and climate. Class probability trees (Breiman, Friedman, Olshen, & Stone, 1984; Quinlan, 1992), graphs and rules (Rivest, 1987; Oliver, 1993; Kohavi, 1994), and feed-forward networks are representations devised to express conditional models in different ways. In statistics, conditional distributions are also represented as regression models and generalized linear models (McCullagh & Nelder, 1989). The models of Figure 2 and Equation (1), and of Figure 3 and Equation (3), show how the joint distribution is composed from simpler components. That is, they give a global model of the variables in a problem. Conditional probability models, in contrast, give a model for a subset of variables conditioned on knowing the values of another subset. In diagnosis the concern may be a particular direction for reasoning, such as predicting the disease given patient details and symptoms, so the full joint model provides unnecessary detail. The full joint model may require extra parameters and thus more data to learn. In supervised learning applications, the general view is that conditional models are superior unless prior knowledge dictates a full joint model is more appropriate. This distinction is sometimes referred to as the diagnostic versus the discriminant approach to classification (Dawid, 1976).

There are a number of ways of explicitly representing conditional probability models. Any joint distribution implicitly gives the conditional distribution for any subset of variables, by definition of conditional probability.
For instance, if there is a model for $p(Age, Occ, Clim, Dis, Symp)$, then by the definition of conditional probability a conditional model follows:

$$p(Dis \mid Age, Occ, Clim, Symp) = \frac{p(Age, Occ, Clim, Dis, Symp)}{\sum_{Dis} p(Age, Occ, Clim, Dis, Symp)}.$$

Conditional distributions can also be represented by a single node that is labeled to identify which functional form the node takes. For instance, in the graphs to follow, labeled Gaussian nodes, linear nodes, and other standard forms are all used. Conditional models such as rule sets and feed-forward networks can be constructed by the use of special deterministic nodes. For instance, Figure 4 shows four model constructs. In each case, the shading indicates that the values of the input variables are known, but the value for c is unknown. Presumably c will be predicted using the inputs. Figure 4(a) represents a rule set. The nodes with double ovals are deterministic functions of their inputs, in contrast to the usual nodes, which are probabilistic functions. This means that the value for rule_1 is a deterministic function of $x_1, \ldots, x_n$. Notice that this implies that the values for rule_1 and the others are known as well. The conditional probability for a deterministic node, as required for Equation (2), is treated as a delta function:

$$p(unit \mid x_1, \ldots, x_n) = \begin{cases} 1 & \text{if } unit = f(x_1, \ldots, x_n), \\ 0 & \text{otherwise,} \end{cases}$$

for some function f not specified. A Bayesian network constructed entirely of double ovals is equivalent to a data-flow graph where the inputs are shaded. The analysis of deterministic nodes in Bayesian networks and, more generally, in influence diagrams is considered by Shachter (1990). For some purposes, deterministic nodes are best treated as intermediate variables and removed from the problem. The method for doing this, variable elimination, is given later in Lemma 6.1.

The logical or conjunctive form of each rule in Figure 4(a) is not expressed in the graph, and presumably would be given in the formulas accompanying the graph; however, the basic functional structure of the rule set exists. In Figure 4(b), a node has been labeled with its functional type. The functional type for this node with a Boolean variable c is the function

$$p(c = 1 \mid x) = \frac{1}{1 + e^{-x}} = \mathrm{Sigmoid}(x) = \mathrm{Logistic}^{-1}(x), \qquad (5)$$

which maps a real value x onto a probability in (0, 1) for the binary variable c. This function is the inverse of the logistic or logit function used in generalized linear models (McCullagh & Nelder, 1989), and is also the sigmoid function used in feed-forward neural networks. Figure 4(c) uses a deterministic node to reproduce a single unit from a connectionist feed-forward network, where the unit's activation is computed via a sigmoid function. Figure 4(d) is a simple univariate Gaussian, which makes y normally distributed with mean $\mu$ and standard deviation $\sigma$. Here the node is labeled in italics to indicate its conditional type.

At a more general level, networks can be conditional. Figure 5 shows two conditional versions of the simple medical problem. If the shading of nodes is ignored, the joint probability models represented by the two graphs are different. Why do these distinct graphs become identical when viewed from the conditional perspective? Because the conditional components of the model corresponding to age, occupation and climate cancel out when the conditional distribution is formed.
However, the symptoms node has the unknown variable disease as a parent, so the arc from age to symptoms is kept.

More generally, the following simple lemma applies; it is derived directly from Equation (2).

Lemma 2.1. Given a Bayesian network G with some nodes shaded representing a conditional probability distribution, if a node X and all its parents have their values given, then the Bayesian network G' created by deleting all the arcs into X represents an equivalent probability model to the Bayesian network G.

This does not mean, for instance in the graphs just discussed, that there are no causal or influential links between the variables age, occupation, and climate, rather that their effects become irrelevant in the conditional model considered because their values are already known. A corresponding result holds for undirected graphs, and follows directly from Theorem 2.1.

Lemma 2.2. Given an undirected graph G with some nodes shaded representing a conditional probability distribution, delete an arc between nodes A and B if all their common neighbors are given. The resultant graph G' represents an equivalent probability model to the graph G." }, { "figure_ref": [ "fig_4", "fig_4", "fig_7", "fig_7" ], "heading": "Mixed graphical models", "publication_ref": [ "b99", "b34", "b34" ], "table_ref": [], "text": "Undirected and directed graphs can also be mixed in a sequence. These mixed graphs are called chain graphs (Wermuth & Lauritzen, 1989; Frydenberg, 1990). Chain graphs are sometimes used here; however, a precise understanding of them is not required for this paper. A simple chain graph is given in Figure 6. In this case, the single disease node is replaced by two dependent disease nodes, Heart-Dis and Lung-Dis, joined by an undirected arc, so the joint probability becomes

$$p(Age, Occ, Clim, HeartDis, LungDis, SympA, SympB, SympC \mid M) = p(Age \mid M)\, p(Occ \mid M)\, p(Clim \mid M)\, p(HeartDis, LungDis \mid Age, Occ, Clim, M)\, p(SympA, SympB, SympC \mid HeartDis, LungDis, M),$$

where the last two conditional probabilities can take on an arbitrary form. Notice that the probabilities now have more than one variable on the left side.

In general, a chain graph consists of a chain of undirected graphs connected by directed arcs. Any cycle through the graph cannot have directed arcs going in opposite directions. Chain graphs can be interpreted as Bayesian networks defined over the components of the chain instead of the original variables. This goes as follows:

Definition 2.2. Given a subgraph G over some variables X, the chain components are subsets of X that are maximal undirected connected subgraphs in a chain graph G (Frydenberg, 1990). Furthermore, let chain-components(A) denote all nodes in the same chain component as at least one variable in A.

The chain components for the graph above, ordered consistently with the directed arcs, are {Age}, {Occ}, {Clim}, {Heart-Dis, Lung-Dis}, and {Symp-A, Symp-B, Symp-C}. Informally, a chain graph over variables X with chain components given by the set T is interpreted first as the decomposition corresponding to the decomposition of Bayesian networks in Equation (2), with each chain component conditioned on its parent components.

Sometimes, to process graphs of this form without having to consider the mathematics of chain graphs, the following device is used.

Comment 2.1. When a set of nodes U in a chain graph forms a clique (a fully connected subgraph), and all have identical children and parents otherwise, then the set of nodes can be replaced by a single node representing the cross product of the variables. For Figure 6, this operation replaces the two disease nodes by a single node over their cross product.

Chain graphs can be decomposed into a chain of directed and undirected graphs. An example is given in Figure 8. Having done this decomposition, the components are analyzed using all the machinery of directed and undirected graphs.
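As a small illustration of Definition 2.2, chain components are simply the connected components of the graph restricted to its undirected arcs; the sketch below computes them with a depth-first search (the graph representation is our own, not from the paper).

```python
from collections import defaultdict

def chain_components(nodes, undirected_edges):
    """Maximal undirected connected subgraphs (Definition 2.2)."""
    adj = defaultdict(set)
    for a, b in undirected_edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for node in nodes:
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:  # depth-first search over undirected arcs only
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

nodes = ["Age", "Occ", "Clim", "Heart-Dis", "Lung-Dis",
         "Symp-A", "Symp-B", "Symp-C"]
undirected = [("Heart-Dis", "Lung-Dis"), ("Symp-A", "Symp-B"),
              ("Symp-B", "Symp-C")]
print(chain_components(nodes, undirected))
# [{'Age'}, {'Occ'}, {'Clim'}, {'Heart-Dis', 'Lung-Dis'},
#  {'Symp-A', 'Symp-B', 'Symp-C'}]
```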
The interpretation of these graphs in terms of independence statements and the implied functional form of the joint probability is a combination of the previous two forms given in Equation (2) and Theorem 2.1, based on (Frydenberg, 1990, Theorem 4.1), and on the interpretation of conditional graphical models in Section 2.3." }, { "figure_ref": [ "fig_8", "fig_8", "fig_9" ], "heading": "Introduction to learning with graphical models", "publication_ref": [ "b33", "b54", "b20", "b62" ], "table_ref": [], "text": "A simplified inference problem is represented in Figure 9. Here, the nodes var_1, var_2 and var_3 are shaded. This represents that the value of these nodes is given, so the inference task is to predict the value of the remaining variable class. This graph matches the so-called "idiot's" Bayes classifier (Duda & Hart, 1973; Langley, Iba, & Thompson, 1992), used in supervised learning for its speed and simplicity. The probabilities on this network are easily learned from data about the three input variables var_1, var_2 and var_3, and class. This graph also matches an unsupervised learning problem where the class is not in the data but is hidden. An unsupervised learning algorithm learns hidden classes (Cheeseman, Self, Kelly, Taylor, Freeman, & Stutz, 1988; McLachlan & Basford, 1988). The implied joint for these variables read from the graph is:

$$p(class, var_1, var_2, var_3) = p(class)\, p(var_1 \mid class)\, p(var_2 \mid class)\, p(var_3 \mid class). \qquad (8)$$

The Bayesian classifier gets its name because it is derived by applying Bayes theorem to this joint to get the conditional formula:

$$p(class \mid var_1, var_2, var_3) = \frac{p(class)\, p(var_1 \mid class)\, p(var_2 \mid class)\, p(var_3 \mid class)}{\sum_{class} p(class)\, p(var_1 \mid class)\, p(var_2 \mid class)\, p(var_3 \mid class)}. \qquad (9)$$

The same formula is used to predict the hidden class for objects in the simple unsupervised learning framework. Again, this formula, and the corresponding formula for more general classifiers, can be found automatically by using exact methods for inference on Bayesian networks.

Consider the simple model given in Figure 9. If the matching unsupervised learning problem for this model was represented, a sample of N cases of the variables would be observed, with the first case being $var_{1,1}, var_{2,1}, var_{3,1}$, and the N-th case being $var_{1,N}, var_{2,N}, var_{3,N}$. The corresponding hidden classes, $class_1$ to $class_N$, would not be observed, but interest would be in performing inference about the parameters needed to specify the hidden classes. The learning problem is represented in Figure 10. This includes two added features: an explicit representation of the model parameters $\pi$ and $\theta$, and a representation of the sample as N repeated subgraphs. The parameter $\pi$ (a vector of class probabilities) gives the proportions for the hidden classes, and the three parameters $\theta_1$, $\theta_2$ and $\theta_3$ give how the variables are distributed within each hidden class. For instance, if there are 10 classes, then $\pi$ is a vector of 10 class probabilities such that the prior probability of a case being in class c is $\pi_c$. If var_1 is a binary variable, then $\theta_1$ would be 10 probabilities, one for each class, such that if the case is known to be in class c, then the probability var_1 is true is given by $\theta_{1,c}$ and the probability var_1 is false is given by $1 - \theta_{1,c}$.
This yields the following equations:

$$p(class = c \mid \pi, M) = \pi_c, \qquad p(var_j = \mathrm{true} \mid class = c, \theta_j, M) = \theta_{j,c}.$$

The unknown model parameters $\pi$, $\theta_1$, $\theta_2$ and $\theta_3$ are included in the graphical model to explicitly represent all unknown variables and parameters in the learning problem." }, { "figure_ref": [ "fig_9", "fig_10", "fig_11", "fig_8", "fig_9", "fig_1", "fig_12", "fig_1", "fig_9", "fig_9", "fig_13" ], "heading": "Introduction to Bayesian learning", "publication_ref": [ "b8", "b71", "b57", "b4", "b21", "b15", "b86", "b50", "b50", "b15", "b97", "b60", "b81", "b12" ], "table_ref": [], "text": "Now is a useful time to introduce the basic terminology of Bayesian learning theory. This is not an introduction to the field. Introductions are given in (Bretthorst, 1994; Press, 1989; Loredo, 1992; Bernardo & Smith, 1994; Cheeseman, 1990). This section reviews notions such as the sample likelihood and Bayes factor, important for subsequent results.

For the above unsupervised learning problem there is the model, M, which is the use of the hidden class and the particular graphical structure of Figure 10. There are data assumed to be independently sampled, and there are the parameters of the model ($\pi$, $\theta_1$, etc.). In order to use the theory, it must be assumed that the model is correct. That is, the "true" distribution for the data can be assumed to come from this model with some parameters. In practice, hopefully the model assumptions are sufficiently close to the truth. Different sets of model assumptions may be tried. Typically, the "true" model parameters are unknown, although there may be some rough idea about their values. Sometimes, several models are considered (for instance different kinds of Bayesian networks), but it is assumed that just one of them is correct. Model selection or model averaging methods are used to deal with them.

For the Bayesian classifier above, a subjective probability is placed over the model parameters, in the form $p(\pi, \theta_1, \theta_2, \theta_3 \mid M)$. This is called the prior probability. Bayesian statistics and decision theory are distinguished from all other statistical approaches in that they place initial probability distributions, the prior probability, over unknown model parameters. If the model is a feed-forward neural network, then a prior probability needs to be placed over the network weights and the standard deviation of the error. If the model is linear regression with Gaussian error, then it is over the linear parameters and the standard deviation of the error. Prior probabilities are an active area of research and are discussed in most introductions to Bayesian methods.

The next important component is the sample likelihood, which, on the basis of the model assumptions M and given a set of parameters $\pi$, $\theta_1$, $\theta_2$ and $\theta_3$, says how likely the sample of data was. This is $p(sample \mid \pi, \theta_1, \theta_2, \theta_3, M)$. The model needs to completely determine the sample likelihood. The sample likelihood is the basis of the maximum likelihood principle and many hypothesis testing methods (Casella & Berger, 1990).
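To make the sample likelihood concrete for the hidden-class model above, the sketch below evaluates $p(sample \mid \pi, \theta, M)$ by summing the hidden class out of each case. The data layout and function name are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def sample_likelihood(cases, pi, theta):
    """p(sample | pi, theta, M) for the hidden-class model.

    cases: (N, J) array of 0/1 values for var_1..var_J
    pi:    class proportions, shape (C,)
    theta: theta[j, c] = p(var_j = true | class = c), shape (J, C)
    """
    likelihood = 1.0
    for case in cases:
        # p(var_j = observed value | class = c), multiplied over j.
        per_class = np.prod(
            np.where(case[:, None] == 1, theta, 1.0 - theta), axis=0)
        # Sum the hidden class out: p(case) = sum_c pi_c * (...)
        likelihood *= float(pi @ per_class)
    return likelihood

pi = np.array([0.6, 0.4])            # two hidden classes
theta = np.array([[0.9, 0.2],        # var_1
                  [0.7, 0.3],        # var_2
                  [0.1, 0.8]])       # var_3
cases = np.array([[1, 1, 0], [0, 0, 1]])
print(sample_likelihood(cases, pi, theta))   # about 0.062
```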
This combines with the prior to form the posterior probability:

$$p(\pi, \theta_1, \theta_2, \theta_3 \mid sample, M) = \frac{p(sample \mid \pi, \theta_1, \theta_2, \theta_3, M)\, p(\pi, \theta_1, \theta_2, \theta_3 \mid M)}{p(sample \mid M)}.$$

This equation is Bayes theorem, and the term $p(sample \mid M)$ is derived from the prior and sample likelihood using an integration or sum that is often difficult to do:

$$p(sample \mid M) = \int_{\pi, \theta_1, \theta_2, \theta_3} p(sample \mid \pi, \theta_1, \theta_2, \theta_3, M)\, p(\pi, \theta_1, \theta_2, \theta_3 \mid M)\, d(\pi, \theta_1, \theta_2, \theta_3). \qquad (10)$$

This term is called the evidence for model M, or model likelihood, and is the basis for most Bayesian model selection, model averaging methods, and Bayesian hypothesis testing methods using Bayes factors (Smith & Spiegelhalter, 1980; Kass & Raftery, 1993). The Bayes factor is a relative quantity used to compare one model M_1 with another M_2:

$$\mathrm{Bayes\text{-}factor}(M_2, M_1) = \frac{p(sample \mid M_2)}{p(sample \mid M_1)}.$$

Kass and Raftery (1993) review the large variety of methods available for computing or estimating the evidence for a model, including numerical integration, importance sampling, and the Laplace approximation. In implementation, the log of the Bayes factor is used to keep the arithmetic within reasonable bounds. The log of the evidence can still produce large numbers, and since rounding errors in floating point arithmetic scale with the order of magnitude, the log Bayes factor is the preferred quantity to consider in implementation.

The evidence is often simpler in mathematical analysis. The Bayes factor is the Bayesian equivalent to the likelihood ratio test used in orthodox statistics and developed by Wilks. See Casella and Berger (1990) for an introduction and Vuong (1989) for a recent review. The evidence and Bayes factors are fundamental to Bayesian methods. It is often the case that a complex "non-parametric" model (a statistical term that loosely translates as "many and varied parameter" model) may be used for a problem, rather than a simple model with some fixed number of parameters. Examples of such models are decision trees, most neural networks, and Bayesian networks. For instance, suppose two models are proposed, with M_1 and M_2 being two Bayesian networks suggested by the domain expert. These are given in Figure 11. They are over two multinomial variables var_1 and var_2 and two Gaussian variables x_1 and x_2. Model M_2 has an additional arc going from the discrete variable var_2 to the real-valued variable x_1. The parameters $\theta_1, \theta_2, \mu_1, \sigma_1, \mu_2, \sigma_2$ for model M_1 parameterize probability distributions for the first Bayesian network, and the parameters $\theta_1, \theta_2, \mu'_1, \sigma'_1, \mu_2, \sigma_2$ for the second. The task is to learn not only a set of parameters, but also to select a Bayesian network from the two. The Bayes factor gives the comparative worth of the two models. This simple example extends in principle to selecting a single decision tree, rule set, or Bayesian network from the huge number available from attributes in the domain. In this case compare the posterior probabilities of the two models $p(M_1 \mid sample)$ and $p(M_2 \mid sample)$.

[Figure 11: the two candidate Bayesian networks, M_1 and M_2, over var_1, var_2 (multinomial, parameters $\theta_1$, $\theta_2$) and x_1, x_2 (Gaussian); in M_2 the added arc from var_2 to x_1 replaces $\mu_1, \sigma_1$ with $\mu'_1, \sigma'_1$.]
Assuming the truth falls in one or other model, the first is computed using Bayes theorem as:
[Figure 11: the two candidate models over var$_1$, var$_2$ (multinomial, parameters $\theta_1, \theta_2$) and the Gaussian variables $x_1$, $x_2$; model $M_2$ adds an arc from var$_2$ to $x_1$, with parameters $\mu'_1, \sigma'_1$ in place of $\mu_1, \sigma_1$.]
$$p(M_1 \mid \text{sample}) = \frac{p(\text{sample} \mid M_1)\, p(M_1)}{p(\text{sample} \mid M_1)\, p(M_1) + p(\text{sample} \mid M_2)\, p(M_2)} = \frac{1}{1 + \text{Bayes-factor}(M_2, M_1)\, \frac{p(M_2)}{p(M_1)}}.$$
More generally, when multiple models exist, it still holds that:
$$\frac{p(M_2 \mid \text{sample})}{p(M_1 \mid \text{sample})} = \text{Bayes-factor}(M_2, M_1)\; \frac{p(M_2)}{p(M_1)}.$$
Notice that the computation requires of each model its prior and its evidence. The second form reduces the computation to a relative quantity, the Bayes factor, and a ratio of the priors. The predictions of the individual models are averaged according to the model posteriors $p(M_1 \mid \text{sample})$ and $p(M_2 \mid \text{sample}) = 1 - p(M_1 \mid \text{sample})$. The general components used in this calculation are the model priors, the evidence for each model or the Bayes factors, and the prediction for the new case made by each model. This process of model averaging happens in general. A typical non-parametric problem would be to learn class probability trees from data. The number of class probability tree models is super-exponential in the number of features. Even when learning Bayesian networks from data, the number of candidate networks grows at least exponentially in the number of features. Doing an exhaustive search of these spaces and doing the full averaging implied by the equation above is computationally infeasible in general. It may be the case that 15 models have posterior probabilities $p(M \mid \text{sample})$ between 0.1 and 0.01, and several thousand more models have posteriors from 0.001 to 0.0000001. Rather than select a single model, a representative set of several models might be chosen and averaged using the identity:
$$p(x \mid \text{sample}) = \sum_i p(M_i \mid \text{sample})\; p(x \mid \text{sample}, M_i).$$
The general averaging process is depicted in Figure 12, where a Gibbs sampler is used to generate a representative subset of models with high posterior. This kind of computation is done for class probability trees, where representative sets of trees are found using a heuristic branch and bound algorithm (Buntine, 1991b), and for learning Bayesian networks (Madigan & Raftery, 1994). A sampling scheme for Bayesian networks is presented in Section 8.3. In their current form, graphical models do not allow the convenient representation of a learning problem. There are four important points to be observed regarding the use of graphical models to improve their suitability for learning. Consider again the unsupervised learning system described in the introduction of Section 3. The unknown model parameters $\theta$, $\phi_1$, $\phi_2$, and $\phi_3$ are included in the graphical model to explicitly represent all variables in the problem, even model parameters. By including these in the probabilistic model, an explicitly Bayesian model is constructed. Every variable in a graphical model, even an unknown model parameter, has a defined prior probability.
The learning sample is a repeated set of measured variables, so the basic model of Figure 9 appears duplicated as many times as there are cases in the sample, as shown in Figure 10. Clearly, this awkward repetition will occur whenever homogeneous data is being modeled (typical in learning). Techniques for handling this repetition form a major part of this paper.
Neither graph in Figures 9 and 10 represents the goal of learning.
For learning to be goal directed, additional information needs to be included in the graph: how is learned knowledge evaluated, or how can subsequent performance be measured? This is the role of decision theory, and it is modeled in graphical form using influence diagrams (Shachter, 1986). This is not discussed here, but is covered in (Buntine, 1994). Finally, it must be possible to take a graphical representation of a learning problem and the goal of learning and construct an algorithm to solve the problem. Subsequent sections discuss techniques for this. Consider a simplified version of the same unsupervised problem. In fact, the simplest possible learning problem containing uncertainty goes as follows: there is a biased coin whose probability of landing heads, $\theta$, is unknown, and a sample of N tosses, heads$_1, \ldots,$ heads$_N$, is observed, as modeled in Figure 13(a). The prior assumes $\theta$ is distributed according to the Beta distribution with parameters $\alpha_1 = 1.5$ and $\alpha_2 = 1.5$,
$$p(\theta \mid \alpha_1, \alpha_2) = \frac{\theta^{\alpha_1 - 1}(1 - \theta)^{\alpha_2 - 1}}{\text{Beta}(\alpha_1, \alpha_2)} \quad (11)$$
where $\text{Beta}(\cdot, \cdot)$ is the standard beta function given in many mathematical tables. This prior is plotted in Figure 14. A Beta(1.0, 1.0) prior, for instance, is uniform in $\theta$, whereas Beta(1.5, 1.5) slightly favors values closer to 0.5, a fairer coin. [Figure 14: the Beta(1.5, 1.5) prior ($\alpha = 1.5$) and other priors on $\theta$.] Figure 13(b) is an equivalent graphical model using the notation of plates. The repeated group, in this case the heads$_i$ nodes, is replaced by a single node with a box around it. The box is referred to as a plate, and implies that the enclosed subgraph is duplicated N times (into a "stack" of plates), the enclosed variables are indexed, and any exterior-interior links are duplicated. In Section 2.1 it was shown that any Bayesian network has a corresponding form for the joint probability of variables in the Bayesian network. The same applies to plates. The plate indicates that a product ($\prod$) will appear in the corresponding form. The probability equation for Figure 10, read directly from the graph, is:
$$p(\theta, \phi_1, \phi_2, \phi_3, \text{class}_1, \text{var}_{1,1}, \text{var}_{2,1}, \text{var}_{3,1}, \ldots, \text{class}_N, \text{var}_{1,N}, \text{var}_{2,N}, \text{var}_{3,N})$$
$$= p(\theta)\, p(\phi_1)\, p(\phi_2)\, p(\phi_3)\; p(\text{class}_1 \mid \theta)\, p(\text{var}_{1,1} \mid \text{class}_1, \phi_1)\, p(\text{var}_{2,1} \mid \text{class}_1, \phi_2)\, p(\text{var}_{3,1} \mid \text{class}_1, \phi_3)$$
$$\cdots\; p(\text{class}_N \mid \theta)\, p(\text{var}_{1,N} \mid \text{class}_N, \phi_1)\, p(\text{var}_{2,N} \mid \text{class}_N, \phi_2)\, p(\text{var}_{3,N} \mid \text{class}_N, \phi_3).$$
The corresponding equation using product notation is:
$$p(\theta, \phi_1, \phi_2, \phi_3, \text{class}_i, \text{var}_{1,i}, \text{var}_{2,i}, \text{var}_{3,i} : i = 1, \ldots, N) = p(\theta)\, p(\phi_1)\, p(\phi_2)\, p(\phi_3) \prod_{i=1}^{N} p(\text{class}_i \mid \theta)\, p(\text{var}_{1,i} \mid \text{class}_i, \phi_1)\, p(\text{var}_{2,i} \mid \text{class}_i, \phi_2)\, p(\text{var}_{3,i} \mid \text{class}_i, \phi_3).$$
These two equations are equivalent. However, the differences in their written form correspond to the differences in their graphical form. Each plate is converted into a product: the joint probability ignoring the plates is written, a product ($\prod$) is introduced for each plate, and the variables enclosed by the plate are indexed accordingly. Many learning problems can be similarly modeled with plates. Write down the graphical model for the full learning problem with only a single case provided. Put a box around the data part of the model, pull out the model parameters (for instance, the weights of the network or the classification parameters), and ensure they are unshaded because they are unknown. Now add the data set size (N) to the bottom left corner.
The notion of a plate is formalized below. This formalization is included for use in subsequent proofs.
Definition 3.1 A chain graph G with plates on variable set X consists of a chain graph $G_0$ on variables X with additional boxes called plates placed around groups of variables. Only directed arcs can cross plate boundaries, and plates can be overlapping.
Each plate P has an integer $N_P$ in the bottom left corner indicating its cardinality. Each plate indexes the variables inside it with values $i = 1, \ldots, N_P$. Each variable $V \in X$ occurs in some subset of the plates. Let indval(V) denote the set of values for indices corresponding to these plates. That is, indval(V) is the cross product of index sets $\{1, \ldots, N_P\}$ for plates P containing V.
A graph with plates can be expanded to remove the plates. Figure 10 is the expanded form of Figure 15. Given a chain graph with plates G on variables X, construct the expanded graph as follows:
For each variable $V \in X$, add a node for $V_i$ for each $i \in \text{indval}(V)$. For each undirected arc between variables U and V, add an undirected arc between $U_i$ and $V_i$ for $i \in \text{indval}(V) = \text{indval}(U)$. For each directed arc between variables U and V, add a directed arc between $U_i$ and $V_j$ for $i \in \text{indval}(U)$ and $j \in \text{indval}(V)$ where i and j have identical values for index components from the same plate.
The parents for indexed variables in a graph with plates are the parents in the expanded graph." }, { "figure_ref": [], "heading": "parents(U_*) = ∪_{i ∈ indval(U)} parents(U_i)", "publication_ref": [], "table_ref": [], "text": "A graph with plates is interpreted using the following product form. If the product form for the chain graph $G_0$ without plates with chain components T is
$$p(X \mid M(G_0)) = \prod_{\tau \in T} p(\tau \mid \text{parents}(\tau), M),$$
then the product form for the chain graph G with plates has a product for each plate:
$$p(X \mid M(G)) = \prod_{\tau \in T} \prod_{i \in \text{indval}(\tau)} p(\tau_i \mid \text{parents}(\tau_i), M). \quad (12)$$
This is given by the expanded version of the graph. Testing for independence on chain graphs with plates involves expanding the plates. In some cases, this can be simplified." }, { "figure_ref": [], "heading": "Exact operations on graphical models", "publication_ref": [ "b47", "b45", "b65", "b26" ], "table_ref": [], "text": "This section introduces basic inference methods on graphs without plates and exact inference methods on graphs with plates. While there are no common machine learning algorithms explained in this section, the operations explained are the mathematical basis of most fast learning algorithms. Therefore, the importance of these basic operations should not be underestimated. Their use within more well-known learning algorithms is explained in later sections.
Once a graphical model is developed to represent a problem, the graph can be manipulated using various exact or approximate transformations to simplify the problem. This section reviews the basic exact transformations available: arc reversal, node removal, and exact removal of plates by recursive arc reversal. The summary of operations emphasizes the computational aspects. A graphical model has an associated set of definitions or tables for the basic functions and conditional probabilities implied by the graph; the operations given below affect both the graphical structure and these underlying mathematical specifications. In both cases, the process of making these transformations should be constructive, so that a graphical specification for a learning problem can be converted into an algorithm.
There are several generic approaches for performing inference on directed and undirected networks without plates. These approaches are mentioned, but will not be covered in detail. The first approach is exact and corresponds to removing independent or irrelevant information from the graph, then attempting to optimize an exact probabilistic computation by finding a reordering of the variables.
The second approach to performing inference is approximate and corresponds to approximate algorithms such as Gibbs sampling, and other Markov chain Monte Carlo methods (Hrycej, 1990; Hertz et al., 1991; Neal, 1993). In some cases, the complexity of the first approach is inherently exponential in the number of variables, so the second can be more efficient. The two approaches can be combined in some cases after appropriate reformulation of the problem (Dagum & Horvitz, 1992)." }, { "figure_ref": [], "heading": "Exact inference without plates", "publication_ref": [ "b83", "b85", "b100", "b44", "b82", "b34" ], "table_ref": [], "text": "The exact inference approach has been highly refined for the case where all variables are discrete. It is not surprising that available algorithms have strong similarities (Shachter, Andersen, & Szolovits, 1994), since the major choice points involve the ordering of the summation and whether this ordering is selected dynamically or statically. Other special classes of inference algorithms include the cases where the model is a multivariate Gaussian (Shachter & Kenley, 1989; Whittaker, 1990), or corresponds to some specific diagnostic structure, such as two-level belief networks with a level of symptoms connected to a level of diseases (Henrion, 1990). This subsection reviews some simple, exact transformations on graphical models without plates. Two representative methods are covered but are by no means optimal: arc reversal and arc removal. They are important, however, because they are the building blocks on which methods for graphs with plates are based. Many more sophisticated variations and combinations of these algorithms exist in the literature, including the handling of deterministic nodes (Shachter, 1990) and chain graphs and undirected graphs (Frydenberg, 1990)." }, { "figure_ref": [ "fig_14" ], "heading": "Arc reversal", "publication_ref": [ "b81" ], "table_ref": [], "text": "Two basic steps for inference are to marginalize nuisance parameters or to condition on new evidence. This may require evaluating probability variables in a different order. The arc reversal operator interchanges the order of two nodes connected by a directed arc (Shachter, 1986). This operator corresponds to Bayes theorem and is used, for instance, to automate the derivation of Equation (9) from Equation (8). The operator applies to directed acyclic graphs and to chain graphs where a and b are adjacent chain components. Suppose nodes a and b need to be reordered. Assume that between a and b there is no directed path of length greater than one. If there were, then reversing the arc between a and b would create a cycle, which is forbidden in a Bayesian network and chain graph. The formula for the variable reordering can be found by applying Bayes theorem to the above equation. The corresponding graph is given in the right of Figure 16. Notice that the effect on the graph is that the nodes for a and b now share their parents. This is an important point. If all of a's parents were also b's, and vice versa, excepting b itself ($\text{parents}(a) = \text{parents}(b) \cup \{b\}$), then the graph would be unchanged except for the direction of the arc between a and b.
Regardless, the probability tables or formulas associated with the graph also need to be updated. If the variables are discrete and full conditional probability tables are maintained, then this operation requires instantiating the set $\{a, b\} \cup \text{parents}(a) \cup \text{parents}(b)$ in all ways, which is exponential in the number of variables."
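A minimal sketch of arc reversal for two discrete nodes with no other parents may help; the probability tables here are invented. Reversing $a \rightarrow b$ is exactly one application of Bayes theorem: form the joint, marginalize to get $p(b)$, and condition to get $p(a \mid b)$. With additional parents, the same computation runs over every instantiation of $\{a, b\} \cup \text{parents}(a) \cup \text{parents}(b)$, which is the exponential cost just noted.

```python
import numpy as np

def reverse_arc(p_a, p_b_given_a):
    """Reverse the arc a -> b for discrete nodes with no other parents.
    Input:  p_a[i] = p(a=i), p_b_given_a[i, j] = p(b=j | a=i).
    Output: p_b[j] = p(b=j), p_a_given_b[j, i] = p(a=i | b=j),
    computed by Bayes theorem."""
    joint = p_a[:, None] * p_b_given_a     # p(a, b)
    p_b = joint.sum(axis=0)                # marginalize out a
    p_a_given_b = (joint / p_b).T          # condition on b
    return p_b, p_a_given_b

p_a = np.array([0.3, 0.7])
p_b_given_a = np.array([[0.9, 0.1],
                        [0.2, 0.8]])
p_b, p_a_given_b = reverse_arc(p_a, p_b_given_a)
print(p_b)           # p(b)
print(p_a_given_b)   # p(a | b)
```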
}, { "figure_ref": [ "fig_9" ], "heading": "Arc and node removal", "publication_ref": [], "table_ref": [], "text": "Some variables in a graph are part of the model, but are not important for the goal of the data analysis. These are called nuisance parameters. An unshaded node y without children (no outward going arcs) that is neither an action node nor a utility node can always be removed from a Bayesian network. This corresponds to leaving out the term $p(y \mid \text{parents}(y))$ in the product of Equation (2). Given
$$p(a, b, y) = p(a)\, p(b \mid a)\, p(y \mid a, b),$$
then y can be marginalized out trivially to yield:
$$p(a, b) = p(a)\, p(b \mid a).$$
More generally, this applies to chain graphs: a chain component whose nodes are all unshaded and have no children can be removed. If y is a node without children, then remove the node y from the graph and the arcs to it, and ignore the factor $p(y \mid \text{parents}(y))$ in the full joint form. Notice that the i-th case in Figure 10 (nodes class$_i$, var$_{1,i}$, var$_{2,i}$, var$_{3,i}$) can be removed from the model without affecting the rest of the graph." }, { "figure_ref": [ "fig_12", "fig_16" ], "heading": "Removal of plates by exact methods", "publication_ref": [ "b46" ], "table_ref": [], "text": "Consider the simple coins problem of Figure 13 again. The graph represents the joint probability $p(\theta, \text{heads}_1, \ldots, \text{heads}_N)$. The main question of interest here is the conditional probability of $\theta$ given the data heads$_1, \ldots,$ heads$_N$. This could be obtained through repeated arc reversals between $\theta$ and heads$_1$, then between $\theta$ and heads$_2$, and so on, until all the data appears before $\theta$ in the directed graph. Doing this repeated series of applications of Bayes theorem yields a fully connected graph with $(N+1)N/2$ arcs. The corresponding formula for the posterior, simplified with Lemma 2.1, is also simple:
$$p(\theta \mid \text{heads}_1, \ldots, \text{heads}_N, \alpha_1 = 1.5, \alpha_2 = 1.5) = \frac{\theta^{\alpha_1 - 1 + p}(1 - \theta)^{\alpha_2 - 1 + n}}{\text{Beta}(\alpha_1 + p, \alpha_2 + n)} \quad (13)$$
where p is the number of heads in the sequence and $n = N - p$ is the number of tails.
This is a worthwhile introductory exercise in Bayesian decision theory (Howard, 1970) that should be familiar to most students of statistics. Compare this with Equation (11). There are several important points to notice about this result: Effectively, this does a parameter update, $\alpha'_1 = \alpha_1 + p$ and $\alpha'_2 = \alpha_2 + n$, requiring no search or numerical optimization. The whole sequence of tosses, irrespective of its length and the ordering of the heads and tails, can be summed up with two numbers. These summary statistics are called sufficient statistics because, assuming the model used is correct, they are sufficient to explain all that is important about $\theta$ in the data.
The corresponding graph can be simplified as shown in Figure 17. The plate is efficiently removed and replaced by the sufficient statistics (two numbers) irrespective of the size of the sample." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "[Figure 17: the plate is replaced by the sufficient statistics n and p, with posterior $\theta \sim \text{Beta}(1.5 + p, 1.5 + n)$.] The posterior distribution has a simple form. Furthermore, all the moments of $\theta$, $\log \theta$, and $\log(1 - \theta)$ for the distribution can be computed as simple functions of the normalizing constant, $\text{Beta}(\alpha'_1, \alpha'_2)$.
For instance:
$$E_{\theta \mid \text{heads}_1, \ldots, \text{heads}_N, \alpha_1, \alpha_2}(\log \theta) = \frac{\partial \log \text{Beta}(\alpha'_1, \alpha'_2)}{\partial \alpha_1},$$
$$E_{\theta \mid \text{heads}_1, \ldots, \text{heads}_N, \alpha_1, \alpha_2}(\theta) = \frac{\text{Beta}(\alpha'_1 + 1, \alpha'_2)}{\text{Beta}(\alpha'_1, \alpha'_2)}, \qquad \operatorname{Var}_{\theta \mid \text{heads}_1, \ldots, \text{heads}_N, \alpha_1, \alpha_2}(\theta) = \frac{\text{Beta}(\alpha'_1 + 2, \alpha'_2)}{\text{Beta}(\alpha'_1, \alpha'_2)} - \frac{\text{Beta}^2(\alpha'_1 + 1, \alpha'_2)}{\text{Beta}^2(\alpha'_1, \alpha'_2)}.$$
This result might seem somewhat obscure, but it is a general property holding for a large class of distributions that allows some averages to be calculated by symbolic manipulation of the normalizing constant." }, { "figure_ref": [ "fig_17", "fig_17", "fig_17", "fig_17", "fig_17" ], "heading": "The exponential family", "publication_ref": [ "b15", "b31", "b100", "b31", "b4", "b48", "b31", "b4", "b6", "b4", "b61" ], "table_ref": [], "text": "This result generalizes to a much larger class of distributions referred to as the exponential family (Casella & Berger, 1990; DeGroot, 1970). This includes standard undergraduate distributions such as Gaussians, Chi-squared, and Gamma, and many more complex distributions constructed from simple components, including class probability trees over discrete input domains (Buntine, 1991b), simple discrete and Gaussian versions of a Bayesian network (Whittaker, 1990), and linear regression with a Gaussian error. Thus, this is a broad and not insignificant class of distributions, given in the definition below.
Their general form has a linear combination of parameters and data in the exponential.
Definition 4.1 A space X is independent of the parameter $\theta$ if the space remains the same when just $\theta$ is changed. If the domains of x, y are independent of $\theta$, then the conditional distribution for x given y, $p(x \mid y, \theta, M)$, is in the exponential family when
$$p(x \mid y, \theta, M) = \frac{h(x, y)}{Z(\theta)} \exp\left(\sum_{i=1}^{k} w_i(\theta)\, t_i(x, y)\right) \quad (14)$$
for some functions $w_i$, $t_i$, h and Z and some integer k, with $h(x, y) > 0$. The normalization constant $Z(\theta)$ is known as the partition function.
Notice the functional form of Equation (14) is similar to the functional form for an undirected graph of Equation (4), as holds in many cases for a Markov random field. For the previous coin tossing example, both the coin tossing distribution (a binomial on heads$_i$) and the posterior distribution on the model parameter ($\theta$) are in the exponential family. To see this, notice the following rewrites of the original probabilities. These make the components $w_i$, $t_i$ and Z explicit.
$$p(\text{heads} \mid \theta) = \exp\left(1_{\text{heads}=\text{true}} \log \theta + 1_{\text{heads}=\text{false}} \log(1 - \theta)\right),$$
$$p(\theta \mid \text{heads}_1, \ldots, \text{heads}_N, \alpha_1, \alpha_2) = \frac{1}{\text{Beta}(\alpha_1 + p, \alpha_2 + n)} \exp\left((\alpha_1 + p - 1)\log \theta + (\alpha_2 + n - 1)\log(1 - \theta)\right).$$
Table 2 in Appendix B gives a selection of distributions and their functional forms. Further details can be found in most textbooks on probability distributions (DeGroot, 1970; Bernardo & Smith, 1994).
The following is a simple graphical reinterpretation of the Pitman-Koopman Theorem from statistics (Jeffreys, 1961; DeGroot, 1970). In Figure 18(a), $T(x_*, y_*)$ is a statistic of fixed dimension independent of the sample size N (corresponding to n, p in the coin tossing example). The theorem says that the sample in Figure 18(a) can be summarized in statistics, as shown in Figure 18(b), if and only if the probability distribution for x given y, $\theta$ is in the exponential family. In this case, $T(x_*, y_*)$ is a sufficient statistic.
Theorem 4.1 (Recursive arc reversal). Consider the model of Figure 18(a). Have x in the domain X and y in the domain Y; both domains are independent of $\theta$, and both domains have components that are real valued or finite discrete. Let the conditional distribution for x given y, $\theta$ be $f(x \mid y, \theta)$, which is positive for all $x \in X$.
If first derivatives exist with respect to all real valued components of x and y, the plate removal operation applies for all samples $x_* = x_1, \ldots, x_N$, $y_* = y_1, \ldots, y_N$, and $\theta$, as given in Figure 18(b), for some sufficient statistics $T(x_*, y_*)$ of dimension independent of N, if and only if the conditional distribution for x given y, $\theta$ is in the exponential family, given by Equation (14). In this case, $T(x_*, y_*)$ is an invertible function of the k averages:
$$\frac{1}{N}\sum_{j=1}^{N} t_i(x_j, y_j) \qquad i = 1, \ldots, k.$$
In some cases, this extends to domains X and Y dependent on $\theta$ (Jeffreys, 1961).
Sufficient statistics for a distribution from the exponential family are easily read from the functional form by taking a logarithm. For instance, for the multivariate Gaussian, the sufficient statistics are $x_i$ for $i = 1, \ldots, d$ and $x_i x_j$ for $1 \le i \le j \le d$, and the normalizing constant $Z(\mu, \Sigma)$ is given by:
$$Z(\mu, \Sigma) = (2\pi)^{d/2}\, \det{}^{1/2}\Sigma\; \exp\left(\frac{1}{2}\,\mu^{\top}\Sigma^{-1}\mu\right).$$
As for coin tossing, it generally holds that if a sampling distribution (a binomial on heads$_i$) is in the exponential family, then the posterior distribution for the model parameters ($\theta$) can also be cast as exponential family. This is only useful when the normalizing constant ($\text{Beta}(\alpha_1 + p, \alpha_2 + n)$ in the coin tossing example) and its derivatives are readily computed.
Lemma 4.1 (The conjugacy property). In the context of Theorem 4.1, assume the distribution for x given y, $\theta$ can be represented by the exponential family. Factor the normalizing constant $Z(\theta)$ into two components, $Z(\theta) = Z_1(\theta)\, Z_2$, where the second is the constant part independent of $\theta$. Assume the prior on $\theta$ takes the form:
$$p(\theta \mid \gamma, M) = \frac{f(\theta)}{Z_\gamma(\gamma)} \exp\left(\gamma_{k+1}\left(\log 1/Z_1(\theta)\right) + \sum_{i=1}^{k} \gamma_i\, w_i(\theta)\right) \quad (15)$$
for some $k+1$ dimensional parameter $\gamma$, where $Z_\gamma(\gamma)$ is the appropriate normalizing constant and $f(\theta)$ is any function. Then the posterior distribution for $\theta$, $p(\theta \mid \gamma, x_1, \ldots, x_N, M)$, is also represented by Equation (15) with the parameters
$$\gamma'_{k+1} = \gamma_{k+1} + N, \qquad \gamma'_i = \gamma_i + \sum_{j=1}^{N} t_i(x_j, y_j) \quad i = 1, \ldots, k.$$
When the function $f(\theta)$ is trivial, for instance uniformly equal to 1, the distribution in Equation (15) is referred to as the conjugate distribution, which means it has a mathematical form mirroring that of the sample likelihood. The prior parameters $\gamma$, by looking at the update equations in the lemma, can be thought of as corresponding to the sufficient statistics from some "prior sample" whose size is given by $\gamma_{k+1}$. This property is useful for analytic and computational purposes. Once the posterior distribution is found, and assuming it is one of the standard distributions, the property can easily be established. Table 3 in Appendix B gives some standard conjugate prior distributions for those in Table 2, and Table 4 gives their matching posteriors. More extensive summaries are given by DeGroot (1970) and Bernardo and Smith (1994). The parameters for these priors can be set using standard reference priors (Box & Tiao, 1973; Bernardo & Smith, 1994) or elicited from a domain expert.
There are several other important consequences of the Pitman-Koopman Theorem or recursive arc reversal that should not go unnoticed.
Comment 4.1 If x, y are discrete and finite valued, then the distribution $p(x \mid y, \theta)$ can be represented as a member of the exponential family.
This holds because a positive finite discrete distribution can always be represented as an extended case statement in the form
$$p(x \mid y, \theta) = \exp\left(\sum_{i=1}^{k} 1_{t_i(x,y)}\, f_i(\theta)\right)$$
where the boolean functions $t_i(x, y)$ are a set of mutually exclusive and exhaustive conditions. The indicator function $1_A$ has the value 1 if the boolean A is true and 0 otherwise. The main importance of the exponential family is in continuous or integer domains. Of course, since a large class of functions $\log p(x \mid y, \theta)$ can always be approximated arbitrarily well by a polynomial in x, y and $\theta$ with sufficiently many terms, the exponential family covers a broad class of distributions.
The application of the exponential family to learning is perhaps the earliest published result on computational learning theory. The following two interpretations of the recursive arc reversal theorem are relevant mainly for distributions involving continuous variables.
Comment 4.2 An incremental learning algorithm with finite memory must compress the information it has seen so far in the training sample into a smaller set of statistics. This can only be done without sacrificing information in the sample, in a context where all probabilities are positive, if the hypothesis or search space of learning is a distribution from the exponential family.
Comment 4.3 The computational requirements for learning an exponential family distribution are guaranteed to be linear in the sample size: first compute the sufficient statistics, and then learning proceeds independently of the sample size. This could be exponential in the dimension of the feature space, however. Furthermore, in the case where the functions $w_i$ are full rank in $\theta$ (the dimension of $\theta$ is k, the same as w, and the Jacobian of w with respect to $\theta$ is invertible, $\det \frac{dw(\theta)}{d\theta} \neq 0$), various moments of the distribution can easily be found. For this situation, the function $w^{-1}$, when it exists, is called the link function (McCullagh & Nelder, 1989).
Lemma 4.2 Consider the notation of Definition 4.1. If the link function $w^{-1}$ for an exponential family distribution exists, then moments of functions of $t_i(x, y)$ and $\exp(t_i(x, y))$ can be expressed in terms of derivatives and direct applications of the functions Z, $t_i$, $w_i$, and $w^{-1}$. If the normalizing constant Z and the link function are in closed form, then so will be the moments.
Techniques for doing these symbolic calculations are given in Appendix B. Exponential family distributions then fall into two groups. There are those where the normalizing constant and link function are known, such as the Gaussian. One can efficiently compute their moments and determine the functional form of their conjugate distributions up to the normalizing constant. For others, such as a Markov random field used in image processing, this is not the case, and moments can generally only be computed by an approximation process like Gibbs sampling, given in Section 7.1." }, { "figure_ref": [ "fig_8", "fig_20" ], "heading": "Linear regression: an example", "publication_ref": [ "b15" ], "table_ref": [], "text": "As an example, consider the problem of linear regression with Gaussian error described in Figure 19. This is an instance of a generalized linear model and has a linear construction at its core. The M basis functions are known deterministic functions of the input variables $x_1, \ldots, x_n$. These would typically be nonlinear orthogonal functions such as Legendre polynomials.
These combine linearly with the parameters $\beta$ to produce the mean m for the Gaussian. The corresponding learning problem represented with plates is given in Figure 20. For this case, the correspondence to the exponential family is drawn as follows. Only the individual data likelihoods, $p(y \mid x_1, \ldots, x_n, \beta, \sigma)$, need be considered. Expand the probability to show it is a linear sum of data terms and parameter terms:
$$p(y \mid x_1, \ldots, x_n, \beta, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{1}{2\sigma^2}\Big(y - \sum_{j=1}^{M} \text{basis}_j(x_\cdot)\,\beta_j\Big)^2\right)$$
$$= \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{y^2}{2\sigma^2} - \sum_{j,k=1}^{M} \frac{\text{basis}_j(x_\cdot)\,\text{basis}_k(x_\cdot)\,\beta_j\,\beta_k}{2\sigma^2} + \sum_{j=1}^{M} \frac{\text{basis}_j(x_\cdot)\, y\, \beta_j}{\sigma^2}\right).$$
The data likelihood in this last line can be seen to be in the same form as the general exponential family, where the sufficient statistics are the various data terms in the exponential. Also, the link function does not exist because there are M parameters and $M(M+1)/2$ sufficient statistics. The model of Figure 20 can therefore be simplified to the graph in Figure 21, where q and S are the usual sample means and covariances obtained from the so-called normal equations of linear regression. S is a matrix of dimension M (the number of basis functions) and q is a vector of dimension M.
$$S_{j,k} = \frac{1}{N}\sum_{i=1}^{N} \text{basis}_j(x_{\cdot,i})\, \text{basis}_k(x_{\cdot,i}), \qquad q_j = \frac{1}{N}\sum_{i=1}^{N} \text{basis}_j(x_{\cdot,i})\, y_i, \qquad ysq = \frac{1}{N}\sum_{i=1}^{N} y_i^2.$$
These three sufficient statistics can be read directly from the data likelihood above. Consider the formula:
$$\int_y p(y \mid x_1, \ldots, x_n, \beta, \sigma)\, dy.$$
Differentiating with respect to $\beta_i$ shows that the expected value of y given $x_1, \ldots, x_n$ and $\beta, \sigma$ is the mean (as expected):
$$E_{y \mid x_1, \ldots, x_n, \beta, \sigma}(y) = m = \sum_{j=1}^{M} \text{basis}_j(x_\cdot)\, \beta_j.$$
Differentiating with respect to $\sigma$ shows that the expected squared error from the mean is $\sigma^2$ (again as expected):
$$E_{y \mid x_1, \ldots, x_n, \beta, \sigma}\left((y - m)^2\right) = \sigma^2.$$
Higher-order derivatives give formulas for higher-order moments such as skewness and kurtosis (Casella & Berger, 1990), which are functions of the second, third and fourth central moments. While these are well known for the Gaussian, the interesting point is that these formulas are constructed by differentiating the component functions in Equation (14) without recourse to integration. Finally, the conjugate distribution for the parameters in this linear regression problem is the multivariate Gaussian distribution for $\beta$ when $\sigma$ is known, and for $\sigma^2$ is the inverted Gamma." }, { "figure_ref": [ "fig_21", "fig_21", "fig_21", "fig_21" ], "heading": "Recognizing and using the exponential family", "publication_ref": [ "b60", "b88", "b42", "b95", "b70" ], "table_ref": [], "text": "How can recursive arc reversal be applied automatically to a graphical model? First, it needs to be identified when a graphical model, or some subset of a graphical model, falls in the exponential family. If each conditional distribution in a Bayesian network, or chain component in a chain graph, is exponential family, then the full joint is exponential family. The following lemma gives this with some additional conditions for deterministic nodes. This applies to Bayesian networks using Comment 2.2. The condition is that the log of each component distribution can be written as
$$\sum_{i=1}^{l} u_i(X)\, v_i(\theta)$$
for some functions $u_i$, $v_i$. Then the conditional distribution $p(X, Y \mid \theta)$ is from the exponential family.
Second, how can these results be used when the model does not fall in the exponential family? There are two categories of techniques available in this context. In both cases, the algorithms concerned can be constructed from the graphical specifications.
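Before turning to these two categories, the fully exponential family case just worked through can be made concrete. The sketch below compresses a regression sample into the sufficient statistics S, q, and ysq in a single pass; everything after that is independent of N. The posterior mean of $\beta$ shown here assumes a zero-mean Gaussian prior with known noise variance; the prior precision and the data are invented for illustration.

```python
import numpy as np

def suff_stats(X_basis, y):
    """One pass over the data: S (M x M), q (M,), and ysq, as in the text."""
    N = len(y)
    S = X_basis.T @ X_basis / N
    q = X_basis.T @ y / N
    ysq = float(y @ y) / N
    return S, q, ysq, N

def posterior_mean_weights(S, q, N, sigma2=1.0, prior_prec=1.0):
    """Posterior mean of beta under an assumed zero-mean Gaussian prior with
    precision prior_prec and known noise variance sigma2.  Once the sufficient
    statistics are formed, the cost no longer depends on the sample size."""
    A = N * S / sigma2 + prior_prec * np.eye(S.shape[0])
    return np.linalg.solve(A, N * q / sigma2)

rng = np.random.default_rng(1)
X_basis = rng.normal(size=(500, 4))        # basis_j(x) already applied
true_beta = np.array([1.0, -2.0, 0.5, 0.0])
y = X_basis @ true_beta + 0.1 * rng.normal(size=500)
S, q, ysq, N = suff_stats(X_basis, y)
print(posterior_mean_weights(S, q, N, sigma2=0.01))
```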
The two new classes, together with the recursive arc reversal case, are given in Figure 22. In each case, (I) denotes the graphical configuration and (II) denotes the operations and simplifications performed by the algorithm. When the various normalization constants are known in closed form and appropriate moments and Bayes factors can be computed quickly, all three algorithm schemas have reasonable computational properties.
The first category is where a useful subset of the model does fall into the exponential family. This is represented by the partial exponential family in Figure 22. The part of the problem that is exponential family is simplified using the recursive arc reversal of Theorem 4.1, and the remaining part of the problem is typically handled approximately. Decision trees and Bayesian networks over multinomial or Gaussian variables also fall into this category. This happens because, when the structure of the tree or Bayesian network is given, the remaining problem is composed of a product of multinomials or Gaussians. This is the basis of various Bayesian algorithms developed for these problems (Buntine, 1991b; Madigan & Raftery, 1994; Buntine, 1991c; Spiegelhalter, Dawid, Lauritzen, & Cowell, 1993; Heckerman, Geiger, & Chickering, 1994). Strictly speaking, decision trees and Bayesian networks over multinomial or Gaussian variables are in the exponential family (see Comment 4.1); however, it is more computationally convenient to treat them this way. This category is discussed more in Section 8. The second category is where, if some hidden variables are introduced into the data, the problem becomes exponential family once the hidden values are known. This is represented by the mixture model in Figure 22. Mixture models (Titterington, Smith, & Makov, 1985; Poland, 1994) are used to model unsupervised learning, incomplete data in classification problems, robust regression, and general density estimation. Mixture models extend the exponential family to a rich class of distributions, so this second category is an important one in practice. General methods for handling these problems correspond to Gibbs sampling (and other Markov chain Monte Carlo methods), discussed in Section 7.2, and its deterministic counterpart, the expectation maximization algorithm, discussed in Section 7.4. As shown in Figure 22, these algorithms cycle back and forth between a process that re-estimates c given $\theta$ using first-order inference and a process that uses the fast exponential family algorithms to re-estimate $\theta$ given c." }, { "figure_ref": [], "heading": "Other operations on graphical models", "publication_ref": [ "b92", "b29", "b50", "b91", "b60" ], "table_ref": [], "text": "The recursive arc reversal theorem of Section 4.2 characterizes when plates can be readily removed and the sample summarized in some statistics. Outside of these cases, more general classes of approximate algorithms exist. Several of these are introduced in Section 7, and more detail is given, for instance, by Tanner (1993). These more general algorithms require a number of basic operations to be performed on graphs:
Decomposition: A learning problem can sometimes be decomposed into simpler sub-problems, each yielding to separate analysis. One form of decomposition of learning problems is considered in Section 6.2. Another related form, applying to undirected graphs, is developed by Dawid and Lauritzen (1993).
Other forms of decomposition can be done at the modeling level, where the initial model is constructed in a manner requiring fewer parameters, as in Heckerman's similarity networks (1991).
Exact Bayes factors: Model selection and averaging methods are used to deal with multiple models (Kass & Raftery, 1993; Buntine, 1991b; Stewart, 1987; Madigan & Raftery, 1994). These require the computation of Bayes factors for models constructed during search. Exact methods for computing Bayes factors are considered in Section 6.3.
Derivatives: Various approximation and search algorithms require derivatives to be calculated, as discussed next." }, { "figure_ref": [ "fig_22", "fig_22", "fig_22", "fig_22", "fig_22", "fig_22" ], "heading": "Derivatives", "publication_ref": [ "b40", "b98", "b14", "b13", "b58", "b92", "b94", "b59", "b66", "b101", "b98", "b14", "b79", "b45" ], "table_ref": [], "text": "An important operation on graphs is the calculation of derivatives of parameters. This is useful after conditioning on the known data to do approximate inference. Numerical optimization using derivatives can be done to search for MAP values of parameters, or to apply the Laplace approximation to estimate moments. This section shows how to compute derivatives using operations local to each node. The computation is therefore easily parallelized, as is popular, for instance, in neural networks.
Suppose a graph is used to compile a function that searches for the MAP values of parameters in the graph conditioned on the known data. In general, this requires the use of numerical optimization methods (Gill, Murray, & Wright, 1981). To use a gradient descent, conjugate gradient, or Levenberg-Marquardt approach requires calculation of first derivatives. To use a Newton-Raphson approach requires calculation of second derivatives as well. While this could be done numerically by difference approximations, more accurate calculations exist. Methods for symbolically differentiating networks of functions, and piecing together the results to produce global derivatives, are well understood (Griewank & Corliss, 1991). For instance, software is available that takes a function defined in Fortran, C++ code, or some other language, and produces a second function that computes the exact derivative. These problems are also well understood for feed-forward networks (Werbos, McAvoy, & Su, 1992; Buntine & Weigend, 1994), and graphical models with plates only add some additional complexity. The basic results are reproduced in this section, and some simple examples are given to highlight special characteristics arising from their use with chain graphs.
Consider the problem of learning a feed-forward network. A simple feed-forward network is given in Figure 23(a). The corresponding learning problem is given in Figure 23(b), representing the feed-forward network as a Bayesian network. Here the sigmoid units of the network are modeled with deterministic nodes, and the network output represents the mean of a bivariate Gaussian with inverse variance matrix $\Sigma$. Because of the nonlinear sigmoid function making the deterministic mapping from inputs $x_1, x_2, x_3$ to the means $m_1, m_2$, this learning problem has no reasonable component falling in the exponential family. A rough fallback method is to calculate a MAP value for the weight parameters. This would be the method used for the Laplace approximation (Buntine & Weigend, 1991; MacKay, 1992) covered in (Tanner, 1993; Tierney & Kadane, 1986).
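A rough sketch of that fallback for a toy version of the network of Figure 23 (three inputs, three sigmoid hidden units, two sigmoid outputs) follows. The log posterior below assumes unit output variance ($\Sigma = I$) and a zero-mean Gaussian weight prior, both invented for the example, and the gradients are crude finite differences; the differentiation machinery developed next replaces these with exact, locally computed derivatives.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_posterior(w, X, O, prior_prec=1.0):
    """Unnormalized log posterior for a toy 3-3-2 network with unit-variance
    Gaussian outputs and an assumed zero-mean Gaussian prior on weights."""
    W1 = w[:9].reshape(3, 3)    # input -> hidden weights
    W2 = w[9:].reshape(3, 2)    # hidden -> output weights
    H = sigmoid(X @ W1)
    M = sigmoid(H @ W2)         # network means m_1, m_2 per case
    log_lik = -0.5 * np.sum((O - M) ** 2)
    log_prior = -0.5 * prior_prec * np.sum(w ** 2)
    return log_lik + log_prior

def map_search(X, O, steps=2000, lr=0.05, eps=1e-5):
    """Gradient ascent toward a MAP weight vector; the gradient here is a
    finite-difference stand-in for backpropagation."""
    rng = np.random.default_rng(2)
    w = 0.1 * rng.normal(size=15)
    for _ in range(steps):
        g = np.zeros_like(w)
        base = log_posterior(w, X, O)
        for i in range(len(w)):
            wp = w.copy()
            wp[i] += eps
            g[i] = (log_posterior(wp, X, O) - base) / eps
        w += lr * g
    return w

X = np.random.default_rng(3).normal(size=(50, 3))
O = sigmoid(X[:, :2])            # arbitrary targets in (0, 1)
w_map = map_search(X, O)
```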
The setting of priors for feed-forward networks is difficult (MacKay, 1993; Nowlan & Hinton, 1992; Wolpert, 1994), and it will not be considered here other than assuming a prior p(w) is used.
[Figure 23: (a) a simple feed-forward network with inputs $x_1, x_2, x_3$, sigmoid hidden units $h_1, h_2, h_3$, and outputs $m_1, m_2$; (b) the corresponding learning problem as a Bayesian network with weights $w_1, \ldots, w_5$, Gaussian outputs $o_1, o_2$, and a plate of size N.]
The graph implies the posterior
$$p(\Sigma, w_1, \ldots, w_5 \mid o_{1,i}, o_{2,i}, x_{1,i}, x_{2,i}, x_{3,i} : i = 1, \ldots, N)$$
$$\propto p(\Sigma)\, p(w_1, \ldots, w_5) \prod_{i=1}^{N} \frac{\det^{1/2}\Sigma}{2\pi} \exp\left(-\frac{1}{2}(o_i - m_i)^{\top}\Sigma\,(o_i - m_i)\right) \quad (16)$$
$$m_i = \text{Sigmoid}(w_i^{\top} h) \quad \text{for } i = 1, 2, \qquad h_i = \text{Sigmoid}(w_{i+2}^{\top} x) \quad \text{for } i = 1, 2, 3.$$
The undirected clique on the parameters w indicates the prior has a term p(w). Suppose the posterior is differentiated with respect to the parameters $w_4$. The result is well known to the neural network community, since this kind of calculation yields the standard backpropagation equations.
Rather than work through this calculation, instead look at the general case. To develop the general formula for differentiating a graphical model, a few more concepts are needed. Deterministic nodes form islands of determinism within the uncertainty represented by the graph. Partial derivatives within each island can be calculated via recursive use of the chain rule, for instance, by forward or backward propagation of derivatives through the equations. For instance, forward propagation for the above network gives:
$$\frac{\partial m_1}{\partial w_4} = \sum_{i=1}^{3} \frac{\partial m_1}{\partial h_i}\, \frac{\partial h_i}{\partial w_4}.$$
This is called forward propagation because the derivatives with respect to $w_4$ are propagated forward in the network. In contrast, backward propagation would propagate derivatives of $m_1$ with respect to different variables backwards. For each island of determinism, the important variables are the output variables, and their derivatives are required. So for the feed-forward network above, the partial derivatives of $m_1$ and $m_2$ with respect to $w_4$ are required.
Definition 6.1 The non-deterministic children of a node x, denoted ndchildren(x), are the set of non-deterministic variables y such that there exists a directed path from x to y given by $x, y_1, \ldots, y_n, y$, with all intermediate variables ($y_1, \ldots, y_n$) being deterministic. The non-deterministic parents of a node x, denoted ndparents(x), are the set of non-deterministic variables y such that there exists a directed path from y to x given by $y, y_1, \ldots, y_n, x$, with all intermediate variables ($y_1, \ldots, y_n$) being deterministic. The deterministic children of a node x, denoted detchildren(x), are the set of deterministic variables y that are children of x. The deterministic parents of a node x, denoted detparents(x), are the set of deterministic variables y that are parents of x.
For instance, in the model in Figure 23, the non-deterministic children of $w_3$ are $o_1$ and $o_2$.
Deterministic nodes can be removed from a graph by rewriting the equations represented into the remaining variables of the graph. Because some graphical operations do not apply to deterministic nodes, this removal is often done implicitly within a theorem. This goes as follows:
Lemma 6.1 A chain graph G with nodes X has deterministic nodes $Y \subseteq X$. The chain graph G' is created by adding to G a directed arc from every node to its non-deterministic children, and by deleting the deterministic nodes Y. The graphs G and G' are equivalent probability models on the nodes $X \setminus Y$.
The general formula for differentiating Bayesian networks with plates and deterministic nodes is given below in Lemma 6.2.
This is nothing more than the chain rule for differentiation, but it is important to notice the network structure of the computation. When partial derivatives are computed over networks, there are local and global partial derivatives that can be different. Consider the feed-forward network of Figure 23 again. On this figure, place an extra arc from $w_4$ to $m_2$. Now consider the partial derivative of $m_2$ with respect to $w_4$. The value of $m_2$ is influenced by $w_4$ directly, as the new arc shows, and indirectly via $h_2$.
When computing a partial derivative involving indirect influences, we need to differentiate between the direct and indirect effects. Various notations are used for this (Werbos et al., 1992; Buntine & Weigend, 1994). Here the notation of a local versus global derivative is used. The local partial derivative is subscripted with an l, $\partial/\partial_l$, and represents the partial derivative computed at the node using only the direct influences, the parents. For the example of the partial derivative of $m_2$ with respect to $w_4$, the various local partial derivatives combine to produce the global partial derivative:
$$\frac{\partial m_2}{\partial w_4} = \frac{\partial m_2}{\partial_l w_4} + \frac{\partial m_2}{\partial_l h_2}\, \frac{\partial h_2}{\partial_l w_4}.$$
This is equivalent to:
$$\frac{\partial m_2}{\partial w_4} = \frac{\partial m_2}{\partial_l w_4} + \frac{\partial m_2}{\partial_l h_2}\, \frac{\partial h_2}{\partial_l w_4} + \frac{\partial m_2}{\partial_l h_1}\, \frac{\partial h_1}{\partial_l w_4},$$
since $\frac{\partial h_1}{\partial_l w_4} = 0$. In general, the (global) partial derivative for an indexed variable $\theta_i$ is the sum of the local partial derivative at the node containing $\theta_i$, the partial derivatives for each child of $\theta_i$ that is also a non-deterministic child, and combinations of (global) partial derivatives for deterministic children found by backward or forward propagation of derivatives.
Lemma 6.2 (Differentiation). A model M is represented by a Bayesian network G with plates and deterministic nodes on variables X. Denote the known variables in X by K and the unknown variables by $U = X \setminus K$. Let the conditional probability represented by the graph G be $p(U \mid K, M)$. Let $\theta$ be some unknown variable in the graph, and let $1_{nd(\theta)}$ be 1 if $\theta$ is non-deterministic and 0 otherwise. If $\theta$ occurs inside a plate then let i be some arbitrary valid index ($i \in \text{indval}(\theta)$), otherwise let i be null. Then:
$$\frac{\partial \log p(U \mid K, M)}{\partial \theta_i} = 1_{nd(\theta_i)}\, \frac{\partial \log p(\theta_i \mid \text{parents}(\theta_i))}{\partial_l \theta_i} \quad (17)$$
$$+ \sum_{x \in \text{ndchildren}(\theta_i) \cap \text{children}(\theta_i)} \frac{\partial \log p(x \mid \text{parents}(x))}{\partial_l \theta_i} + \sum_{x \in \text{ndchildren}(\theta_i)}\; \sum_{y \in \text{detparents}(x),\, y \neq \theta_i} \frac{\partial \log p(x \mid \text{parents}(x))}{\partial_l y}\, \frac{\partial y}{\partial \theta_i}.$$
Furthermore, if $Y \subseteq U$ is some subset of the unknown variables, then the partial derivative of the probability of Y given the known variables, $p(Y \mid K, M)$, is an expected value of the above probabilities:
$$\frac{\partial \log p(Y \mid K, M)}{\partial \theta_i} = E_{U \setminus Y \mid Y, K, M}\left(\frac{\partial \log p(U \mid K, M)}{\partial \theta_i}\right). \quad (18)$$
Equation (17) contains only one global partial derivative, which is inside the double sum on the right side. This is the partial derivative $\partial y/\partial \theta_i$, and it can be computed from its local island of determinism using the chain rule of differentiation, for instance, using forward propagation from $\theta_i$'s deterministic children, or backward propagation from $\theta_i$'s non-deterministic parents.
To apply the Differentiation Lemma to problems like feed-forward networks or unsupervised learning, the lemma needs to be extended to chain graphs. This means differentiating Markov networks as well as Bayesian networks, and handling the expected value in Equation (18). These extensions are explained below after first giving two examples.
As a first example, consider the feed-forward network problem of Figure 23.
By treating the two output units in the feed-forward network as a single variable, a Cartesian product ($o_1, o_2$), the above Differentiation Lemma can now be applied directly to the feed-forward network model of Figure 23. This uses the simplification given in Section 2.4 with Comment 2.1. Let P be the joint probability for the feed-forward network model, given in Equation (16). The non-deterministic children of $w_4$ are the single chain component consisting of the two variables $o_1$ and $o_2$. Its parents are the set $\{m_1, m_2\}$. Consider the Differentiation Lemma. There are no children of $w_4$ that are also non-deterministic, so the middle sum in Equation (17) of the Differentiation Lemma is empty. Then the lemma yields, after expanding out the innermost sum:
$$\frac{\partial \log P}{\partial w_4} = \frac{\partial \log p(w)}{\partial_l w_4} + \sum_{i=1}^{N}\left(\frac{\partial \log p(o_{1,i}, o_{2,i} \mid \Sigma, m_1, m_2)}{\partial_l m_1}\, \frac{\partial m_1}{\partial_l w_4} + \frac{\partial \log p(o_{1,i}, o_{2,i} \mid \Sigma, m_1, m_2)}{\partial_l m_2}\, \frac{\partial m_2}{\partial_l w_4}\right),$$
where $p(o_{1,i}, o_{2,i} \mid \Sigma, m_1, m_2)$ is the two-dimensional Gaussian, and $\partial m_i/\partial w_4$ is from the global derivative but evaluates to a local derivative.
As a second example, reconsider the simple unsupervised learning problem given in the introduction to Section 3. The likelihood for a single datum given the model parameters is a marginal of the form:
$$p(\text{var}_1 = 1, \text{var}_2 = 0, \text{var}_3 = 1 \mid \theta, \phi) = \sum_{c=1}^{10} \theta_c\, \phi_{1,c}\, (1 - \phi_{2,c})\, \phi_{3,c}.$$
Taking the logarithm of the full case probability $p(\text{class}, \text{var}_1, \text{var}_2, \text{var}_3 \mid \theta, \phi)$ reveals the vectors of components w and t of the exponential distribution:
$$\log p(\text{class}, \text{var}_1, \text{var}_2, \text{var}_3 \mid \theta, \phi) = \sum_{c=1}^{10} 1_{\text{class}=c} \log \theta_c + \sum_{j=1}^{3}\sum_{c=1}^{10}\left(1_{\text{class}=c, \text{var}_j=\text{true}} \log \phi_{j,c} + 1_{\text{class}=c, \text{var}_j=\text{false}} \log(1 - \phi_{j,c})\right).$$
Notice that the normalizing constant $Z(\theta, \phi)$ is 1 in this case. Consider finding the partial derivative $\partial \log p(\text{var}_1, \text{var}_2, \text{var}_3 \mid \theta, \phi)/\partial \phi_{2,5}$. This is done for each case when differentiating the posterior or the likelihood of the unsupervised learning model. Applying Equation (18) to this yields:
$$\frac{\partial \log p(\text{var}_1, \text{var}_2, \text{var}_3 \mid \theta, \phi)}{\partial \phi_{2,5}} = \sum_{d=1}^{10}\left(\frac{\partial \log \phi_{2,d}}{\partial \phi_{2,5}}\, E_{\text{class} \mid \text{var}_1, \text{var}_2, \text{var}_3, \theta, \phi}\!\left(1_{\text{class}=d, \text{var}_2=\text{true}}\right) + \frac{\partial \log(1 - \phi_{2,d})}{\partial \phi_{2,5}}\, E_{\text{class} \mid \text{var}_1, \text{var}_2, \text{var}_3, \theta, \phi}\!\left(1_{\text{class}=d, \text{var}_2=\text{false}}\right)\right)$$
$$= \frac{1}{\phi_{2,5}}\, 1_{\text{var}_2=\text{true}}\; p(\text{class}=5 \mid \text{var}_1, \text{var}_2, \text{var}_3, \theta, \phi) - \frac{1}{1 - \phi_{2,5}}\, 1_{\text{var}_2=\text{false}}\; p(\text{class}=5 \mid \text{var}_1, \text{var}_2, \text{var}_3, \theta, \phi).$$
Notice that the derivative is computed by doing first-order inference to find $p(\text{class} = 5 \mid \text{var}_1, \text{var}_2, \text{var}_3, \theta, \phi)$, as noted by Russell, Binder, and Koller (1994). This property holds in general for exponential family models with missing or unknown variables. Derivatives are calculated by some first-order inference followed by a combination with derivatives of the w functions. Consider the notation for the exponential family introduced previously in Definition 4.1, where the functional form is:
$$p(x \mid y, \theta, M) = \frac{h(x, y)}{Z(\theta)} \exp\left(\sum_{i=1}^{k} w_i(\theta)\, t_i(x, y)\right).$$
Consider the partial derivative of a marginal of this probability, $p(x \setminus u \mid y, \theta, M)$, for $u \subseteq x$. Using Equation (18) in the Differentiation Lemma, the partial derivative becomes:
$$\frac{\partial \log p(x \setminus u \mid y, \theta, M)}{\partial \theta} = \sum_{i=1}^{k} \frac{\partial w_i(\theta)}{\partial \theta}\, E_{u \mid x \setminus u, y, \theta}\left(t_i(x, y)\right) - \frac{\partial \log Z(\theta)}{\partial \theta}. \quad (19)$$
If the partition function is not known in closed form (the case with the Boltzmann machine), then the final derivative $\partial \log Z(\theta)/\partial \theta$ is approximated (the key formula for doing this is Equation (28) in Appendix B).
To extend the Differentiation Lemma to chain graphs, use the trick illustrated with the feed-forward network.
First, interpret the chain graph as a Bayesian network on chain components, as done in Equation (7), then apply the Differentiation Lemma. Finally, evaluate the necessary local partial derivatives with respect to $\theta_i$ of each individual chain component. Since undirected graphs are not necessarily normalized, this may present a problem. In general, there is an undirected graph G' on variables $X \cup Y$. Following Theorem 2.1, the general form is:
$$p(X \mid Y) = \frac{\prod_{C \in \text{Cliques}(G')} f_C(C)}{\sum_X \prod_{C \in \text{Cliques}(G')} f_C(C)}.$$
The local partial derivative with respect to x becomes:
$$\frac{\partial \log p(X \mid Y)}{\partial_l x} = \left(\sum_{C \in \text{Cliques}(G'),\, x \in C} \frac{\partial \log f_C(C)}{\partial_l x}\right) - E_{X \mid Y}\left(\sum_{C \in \text{Cliques}(G'),\, x \in C} \frac{\partial \log f_C(C)}{\partial_l x}\right). \quad (20)$$
The difficulty here is computing the expected value in the formula, which comes from the normalizing constant. Indeed, this computation forms the core of the early Boltzmann machine algorithm (Hertz et al., 1991). In general, this must be done using something like Gibbs sampling, and the techniques of Section 7.1 can be applied directly." }, { "figure_ref": [ "fig_10", "fig_1", "fig_1", "fig_1", "fig_1", "fig_3", "fig_3" ], "heading": "Decomposing learning problems", "publication_ref": [ "b34", "b55", "b29", "b69", "b34" ], "table_ref": [], "text": "Learning problems can be decomposed into sub-problems in some cases. While the material in this section applies generally to these sorts of decompositions, this section considers one simple example and then proves some general results on problem decomposition. Problem decompositions can also be recomputed on the fly to create a search through a space of models that takes advantage of the decompositions that exist. A general result is also presented on incremental decomposition. These results are simple applications of known methods for testing independence (Frydenberg, 1990; Lauritzen et al., 1990), with some added complication because of the use of plates.
Consider the simple learning problem given in Section 3, Figure 11, over two multinomial variables var$_1$ and var$_2$, and two Gaussian variables $x_1$ and $x_2$. For this problem we have specified two alternative models, model $M_1$ and model $M_2$. Model $M_2$ has an additional arc going from the discrete variable var$_2$ to the real valued variable $x_1$. We will use this subsequently to discuss local search of these models evaluated by their evidence.
A manipulation of the conditional distribution for this model, making use of Lemma 2.1, yields, for model $M_1$, the conditional distribution given in Figure 24. [Figure 24: a simplification of model $M_1$ into independent subgraphs for $\theta_1$, $\theta_2$, $(\mu_1, \sigma_1)$, and $(\mu_2, \sigma_2)$.] When the parameters $\theta_1$, $\theta_2$ are a priori independent, and their data likelihoods do not introduce cross terms between them, the parameters become a posteriori independent as well. This occurs for $\theta_1$, $\theta_2$, and the set $\{\mu_1, \sigma_1\}$. This model simplification also implies the evidence for model $M_1$ decomposes similarly. Denote the sample of the variable $x_1$ as $x_{1,*} = x_{1,1}, \ldots, x_{1,N}$, and likewise for var$_1$ and var$_2$. In this case, the result is:
$$\text{evidence}(M_1) = p(\text{var}_{1,*} \mid M_1)\; p(\text{var}_{2,*} \mid \text{var}_{1,*}, M_1)\; p(x_{1,*} \mid \text{var}_{1,*}, M_1)\; p(x_{2,*} \mid x_{1,*}, \text{var}_{1,*}, M_1). \quad (21)$$
The evidence for model $M_2$ is similar except that the posterior distribution of $\mu_1$ and $\sigma_1$ is replaced by the posterior distribution for $\mu'_1$ and $\sigma'_1$. This result is general, and applies to Bayesian networks, undirected graphs, and more generally to chain graphs.
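The decomposition can be exercised numerically. The sketch below scores model $M_1$ as the product (here, a sum of logs) of independent local evidences in the spirit of Equation (21), using closed-form Dirichlet-multinomial terms for the discrete variables and a one-dimensional Normal-Inverse-Gamma marginal for the Gaussian ones. All hyperparameters are assumed, the data are synthetic, and $x_2$ is simplified to depend on var$_1$ alone rather than regressing on $x_1$. The local Bayes factor for $M_2$ versus $M_1$ then needs only the $x_1$ terms.

```python
import numpy as np
from scipy.special import gammaln

def dirichlet_mult_log_ev(counts, alpha=1.0):
    """Closed-form log p(data | M) for a multinomial with a symmetric
    Dirichlet(alpha) prior: B(alpha + counts) / B(alpha)."""
    counts = np.asarray(counts, dtype=float)
    a = np.full_like(counts, alpha)
    return (gammaln(a.sum()) - gammaln(a.sum() + counts.sum())
            + np.sum(gammaln(a + counts) - gammaln(a)))

def gauss_nig_log_ev(x, mu0=0.0, nu0=1.0, a0=1.0, b0=1.0):
    """Closed-form log p(data | M) for a 1-D Gaussian under an assumed
    Normal-Inverse-Gamma prior (hyperparameters are illustrative)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if n == 0:
        return 0.0
    xbar = x.mean()
    nun, an = nu0 + n, a0 + n / 2.0
    bn = (b0 + 0.5 * np.sum((x - xbar) ** 2)
          + nu0 * n * (xbar - mu0) ** 2 / (2.0 * nun))
    return (-0.5 * n * np.log(2 * np.pi) + 0.5 * np.log(nu0 / nun)
            + a0 * np.log(b0) - an * np.log(bn) + gammaln(an) - gammaln(a0))

def log_evidence_m1(var1, var2, x1, x2):
    """log evidence(M1) as a sum of independent local terms."""
    le = dirichlet_mult_log_ev(np.bincount(var1, minlength=2))
    for j in (0, 1):
        le += dirichlet_mult_log_ev(np.bincount(var2[var1 == j], minlength=2))
        le += gauss_nig_log_ev(x1[var1 == j])
        le += gauss_nig_log_ev(x2[var1 == j])   # simplified parent set
    return le

def log_bayes_factor_m2_m1(var1, var2, x1):
    """log Bayes-factor(M2, M1): only the x1 terms differ between models."""
    lev1 = sum(gauss_nig_log_ev(x1[var1 == j]) for j in (0, 1))
    lev2 = sum(gauss_nig_log_ev(x1[(var1 == j) & (var2 == k)])
               for j in (0, 1) for k in (0, 1))
    return lev2 - lev1

rng = np.random.default_rng(5)
var1 = rng.integers(0, 2, 300)
var2 = (rng.random(300) < np.where(var1 == 1, 0.8, 0.3)).astype(int)
x1 = rng.normal(loc=var1.astype(float))        # depends on var1 only
x2 = rng.normal(loc=0.5 * x1)
print(log_evidence_m1(var1, var2, x1, x2))
print(log_bayes_factor_m2_m1(var1, var2, x1))  # typically < 0: extra arc unneeded
```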
Similar results are covered by Dawid and Lauritzen (1993) for a family of models they call hyper-Markov. The general result described above is an application of the rules of independence applied to plates. This uses the notion of non-deterministic children and parents introduced in Definition 6.1. It also requires a notion of local dependence, which is called the Markov blanket, following Pearl (1988), since it is a generalization of the equivalent set for Bayesian networks.
Definition 6.2 We have a chain graph G without plates. The Markov blanket of a node u is all neighbors, non-deterministic parents, non-deterministic children, and non-deterministic parents of the children and their chain components:
$$\text{Markov-blanket}(u) = \text{neighbors}(u) \cup \text{ndparents}(u) \cup \text{ndchildren}(u) \quad (22)$$
$$\cup\; \text{ndparents}(\text{chain-components}(\text{ndchildren}(u))).$$
From Frydenberg (1990) it follows that u is independent of the other non-deterministic variables in the graph G given the Markov blanket.
To perform the simplification depicted in Figure 24, it is sufficient then to find the finest partitioning of the model parameters such that they are independent. The decomposition in Figure 24 represents the finest such partition of model $M_1$. The evidence for the model will then factor according to the partition, as given for model $M_1$ in Equation (21). For this task there is the following theorem, depicted graphically in Figure 25.
Theorem 6.1 (Decomposition). A model M is represented by a chain graph G with plates. Let the variables in the graph be X. There are P possibly empty subsets of the variables X, $X_i$ for $i = 1, \ldots, P$, such that unknown($X_i$) is a partition of unknown(X). This induces a decomposition of the graph G into P subgraphs $G_i$ where: the graph $G_i$ contains the nodes $X_i$ and any arcs and plates occurring on these nodes, and the potential functions for cliques in $G_i$ are equivalent to those in G.
The induced decomposition represents the unique finest equivalent independence model to the original graph if and only if $X_i$ for $i = 1, \ldots, P$ is the finest collection of sets such that, when ignoring plates, for every unknown node u in $X_i$, its Markov blanket is also in $X_i$. This finest decomposition takes $O(|X|^2)$ to compute. Furthermore, the evidence for M now becomes a product over each subgraph:
$$\text{evidence}(M) = p(\text{known}(X_*) \mid M) = f_0 \prod_i f_i(\text{known}(X_{i,*})) \quad (23)$$
for some functions $f_i$ (given in the proof).
Figure 25 shows how this decomposition works when there are unknown nodes. [Figure 25: (a) a model with parameters $\theta_1, \ldots, \theta_5$ over the variables var$_1$, var$_2$, $x_1$, $x_2$, $x_3$; (b) its finest decomposition into independent subgraphs.]" }, { "figure_ref": [ "fig_25", "fig_10", "fig_25", "fig_10" ], "heading": "Figure 25: The incremental decomposition of a model", "publication_ref": [ "b23", "b49", "b42" ], "table_ref": [], "text": "In some cases, the functions $f_i$ given in the Decomposition Theorem in Equation (23) have a clean interpretation: they are equal to the evidence for the subgraphs. This result can be obtained from the following corollary.
Corollary 6.1.1 (Local Evidence). In the context of Theorem 6.1, suppose there exists a set of chain components $\tau_j$ from the graph ignoring plates such that $X_j = \tau_j \cup \text{ndparents}(\tau_j)$, where unknown(ndparents($\tau_j$)) = ∅. Then
$$f_j(\text{known}(X_{j,*})) = p(\text{known}(\tau_j)_* \mid \text{ndparents}(\tau_j)_*, M).$$
If we denote the j-th subgraph by model $M_{S_j}$, then this term is the conditional evidence for model $M_{S_j}$ given ndparents($\tau_j$)$_*$.
Denote by $M_{S_0}$ the maximal subgraph on known variables only (induced by cliques $\tau_0$ as given in the proof of the Decomposition Theorem).
If the condition of Corollary 6.1.1 holds for $M_{S_j}$ for $j = 0, 1, \ldots, P$, then it follows that the evidence for the model M is equal to the product of the evidence for each subgraph:
$$\text{evidence}(M) = \prod_{i=0}^{P} \text{evidence}(M_{S_i}). \quad (24)$$
This holds in general if the original graph G is a Bayesian network, as used in learning Bayesian networks (Buntine, 1991c; Cooper & Herskovits, 1992).
Corollary 6.1.2 Equation (24) holds if the parent graph G is a Bayesian network with plates.
In general, we might consider searching through a family of graphical models. Local search (Johnson, Papadimitriou, & Yannakakis, 1985) or numerical optimization can be used to find high posterior models, or Markov chain Monte Carlo methods to select a sample of representative models, as discussed in Section 7.2. To do this, how to represent a family of models must be shown. Figure 26, for instance, is similar to the models of Figure 11 except that some arcs are hatched. This is used to indicate that these arcs are optional. To instantiate a hatched arc, it is either removed or replaced with a full arc. This graphical model then represents many different models, one for each of the $2^4$ possible instantiations of the arcs. Prior probabilities for these models could be generated using a scheme such as in (Buntine, 1991c, p. 54) or (Heckerman et al., 1994), where a prior probability is assigned by a domain expert for different parts of the model, arcs and parameters, and the prior for a full model is found by multiplication. The family of models given by Figure 26 includes those of Figure 11 as instances. During search or sampling, an important property is the Bayes factor for the two models, Bayes-factor($M_2, M_1$), as described in Section 3.1. Because of the Decomposition Theorem and its corollary, the Bayes factor for $M_2$ versus $M_1$ can be found by looking at local Bayes factors. The difference between models $M_1$ and $M_2$ is the parent set for the variable $x_1$:
$$\text{Bayes-factor}(M_2, M_1) = \frac{p(x_{1,*} \mid \text{var}_{1,*}, \text{var}_{2,*}, M_2)}{p(x_{1,*} \mid \text{var}_{1,*}, M_1)}.$$
That is, the Bayes factor can be computed by considering only the models involving $\mu_1, \sigma_1$ and $\mu'_1, \sigma'_1$. This incremental modification of evidence, Bayes factors, and finest decompositions is also general, and follows directly from the independence test. A similar property for undirected graphs is given in (Dawid & Lauritzen, 1993). This is developed below for the case of directed arcs and non-deterministic variables. Handling deterministic variables requires repeated application of these results, because several non-deterministic variables may be affected when adding a single arc between deterministic variables.
Lemma 6.3 (Incremental decomposition). For a graph G in the context of Theorem 6.1, we have two non-deterministic variables U and V such that U is given. Consider adding or removing a directed arc from U to V. To update the finest decomposition of G, there is a unique subgraph containing the unknown variables in ndparents(chain-component(V)). To this subgraph, add or delete an arc from U to V, and add or delete U from the subgraph if required.
Shaded non-deterministic parents can be added at will to nodes in a graph, and the finest decomposition remains unchanged except for a few additional arcs. The use of hatched arcs in these contexts causes no additional trouble to the decomposition process.
Bayes factors for the exponential family

To make use of the decomposition results in a learning system it is necessary to be able to generate Bayes factors or evidence for the component models. For models in the exponential family, whose normalization constant is known in closed form, this turns out to be easy. If these exact computations are not available, various approximation methods can be used to compute the evidence or Bayes factors (Kass & Raftery, 1993); some are discussed in Section 7.3.

If the conjugate distribution for an exponential family model and its derivatives can be readily computed, then the Bayes factor for the model can be found in closed form. Along with the above decomposition methods, this result is an important basis of many fast Bayesian algorithms considering multiple models. It is used explicitly or implicitly in all Bayesian methods for learning decision trees, directed graphical models (with discrete or Gaussian variables), and linear regression (Buntine, 1991b; Spiegelhalter et al., 1993). For instance, if the normalizing constant $Z_\theta(\gamma)$ in Lemma 4.1 is known in closed form, then the Bayes factor can be readily computed.

Lemma 6.4 Consider the context of Lemma 4.1. Then the model likelihood or evidence, given by evidence(M) = p(x_1, ..., x_N | y_1, ..., y_N, M), can be computed as:

$$\text{evidence}(M) = \frac{p(\theta \mid \gamma) \prod_{j=1}^{N} p(x_j \mid y_j, \theta)}{p(\theta \mid \gamma')} = \frac{Z_\theta(\gamma')}{Z_\theta(\gamma)\, Z_2^N},$$

where $\gamma'$ denotes the posterior hyperparameters. For the conditional Gaussian this involves multiplying out the two sets of normalizing constants for the Gaussian and Gamma distributions. The evidence for some common exponential family distributions is given in Appendix B in Table 5.

For instance, consider the learning problem given in Figure 24. Assume that the variables var_1 and var_2 are both binary (0 or 1) and that the parameters θ_1 and θ_2 are interpreted as follows:

$$p(var_1 = 0 \mid \theta_1) = \theta_1, \quad p(var_2 = 0 \mid var_1 = 0, \theta_2) = \theta_{2,0|0}, \quad p(var_2 = 0 \mid var_1 = 1, \theta_2) = \theta_{2,0|1}.$$

If we use Dirichlet priors for these parameters, as shown in Table 3, then the priors are:

$$(\theta_1, 1 - \theta_1) \sim \text{Dirichlet}(\alpha_{1,0}, \alpha_{1,1}), \qquad (\theta_{2,0|j}, 1 - \theta_{2,0|j}) \sim \text{Dirichlet}(\alpha_{2,0|j}, \alpha_{2,1|j}) \text{ for } j = 0, 1,$$

where θ_{2,0|0} is a priori independent of θ_{2,0|1}. The choice of priors for these distributions is discussed in (Box & Tiao, 1973; Bernardo & Smith, 1994). Denote the corresponding sufficient statistics as n_{1,j} (the number of data where var_1 = j) and n_{2,j|i} (the number of data where var_2 = j and var_1 = i). Then the first two terms of the evidence for model M_1, read directly from Table 5, can be written as:

$$p(var_{1,*} \mid M_1) = \frac{\text{Beta}(n_{1,0} + \alpha_{1,0},\ n_{1,1} + \alpha_{1,1})}{\text{Beta}(\alpha_{1,0}, \alpha_{1,1})},$$
$$p(var_{2,*} \mid var_{1,*}, M_1) = \prod_{j=0,1} \frac{\text{Beta}(n_{2,0|j} + \alpha_{2,0|j},\ n_{2,1|j} + \alpha_{2,1|j})}{\text{Beta}(\alpha_{2,0|j}, \alpha_{2,1|j})}.$$

Assume the variables x_1 and x_2 are Gaussian, with the mean of x_1 given by μ_{1|0} when var_1 = 0 and μ_{1|1} when var_1 = 1, the mean of x_2 given by θ_{2|0,1} + θ_{2|0,2} x_1 when var_1 = 0 and θ_{2|1,1} + θ_{2|1,2} x_1 when var_1 = 1, and variances σ_{1|j} and σ_{2|j} respectively. In this case, we split the data set into two parts, those where var_1 = 0 and those where var_1 = 1. Each gets its own parameters, sufficient statistics, and contribution to the evidence. Conjugate priors from Table 3 in Appendix B (using the conditional Gaussian x | y) are indexed accordingly as:

$$\theta_{i|j} \mid \sigma_{i|j} \sim \text{Gaussian}(\theta_{0,i|j},\ \sigma^2_{i|j}\, \nu_{0,i|j}^{-1}) \text{ for } i = 1, 2 \text{ and } j = 0, 1;$$
$$1/\sigma^2_{i|j} \sim \text{Gamma}(\alpha_{0,i|j}/2,\ \beta_{0,i|j}/2) \text{ for } i = 1, 2 \text{ and } j = 0, 1.$$

Notice that θ_{0,i|j} is one-dimensional when i = 1 and two-dimensional when i = 2. Suitable sufficient statistics for this situation are read from Table 4 by looking at the data summaries used there. This can be simplified for x_1 because d = 1 and the covariate vector y for the Gaussian is uniformly 1. Thus the sufficient statistics for x_1 become the means and variances for the different values of var_1. Denote $\bar x_{1|0}$ and $\bar x_{1|1}$ as the sample means of x_1 when var_1 = 0, 1, respectively, and $s^2_{1|0}$ and $s^2_{1|1}$ their corresponding sample variances. This cannot be done for the second case, so we use the notation from Table 4, where the posterior quantities $\bar\nu, \bar\theta, \bar\beta$ become, respectively, $\nu_{2|j}, \theta_{2|j}, \beta_{2|j}$, and the vector y becomes $(1, x_1)$ when making the calculations indicated there. The sufficient statistics are, for each case of var_1 = j:

$$S_{2|j} = \sum_{i=1}^{N} 1_{var_{1,i}=j}\, y_i y_i^\top, \qquad m_{2|j} = \sum_{i=1}^{N} 1_{var_{1,i}=j}\, x_{2,i}\, y_i, \qquad s^2_{2|j} = \sum_{i=1}^{N} 1_{var_{1,i}=j}\, (x_{2,i} - \theta_{2|j}^\top y_i)^2.$$

The evidence for the last two terms can now be read from Table 5. This becomes:

$$p(x_{1,*} \mid var_{1,*}, M_1) = \prod_{j=0,1} \left(\frac{\nu_{0,1|j}}{\nu_{0,1|j}+n_{1,j}}\right)^{1/2} \pi^{-n_{1,j}/2}\; \frac{\Gamma((\alpha_{0,1|j}+n_{1,j})/2)}{\Gamma(\alpha_{0,1|j}/2)}\; \frac{\beta_{0,1|j}^{\alpha_{0,1|j}/2}}{\left(\beta_{0,1|j} + s^2_{1|j} + \frac{\nu_{0,1|j}\, n_{1,j}}{\nu_{0,1|j}+n_{1,j}} (\bar x_{1|j} - \theta_{0,1|j})^2\right)^{(\alpha_{0,1|j}+n_{1,j})/2}},$$

$$p(x_{2,*} \mid x_{1,*}, var_{1,*}, M_1) = \prod_{j=0,1} \frac{\det^{1/2} \nu_{0,2|j}}{\det^{1/2} \nu_{2|j}}\; \pi^{-n_{1,j}/2}\; \frac{\Gamma((\alpha_{0,2|j}+n_{1,j})/2)}{\Gamma(\alpha_{0,2|j}/2)}\; \frac{\beta_{0,2|j}^{\alpha_{0,2|j}/2}}{\beta_{2|j}^{(\alpha_{0,2|j}+n_{1,j})/2}}.$$

The final simplification of the model is given in Figure 27. [Figure 27: the simplified model, in which the posteriors for θ_1, θ_2 and the Gaussian parameters μ_1, σ_1, μ_2, σ_2 depend on the data only through the sufficient statistics $n_{1,j}$, $n_{2,*|*}$, $\bar x_{1|*}$, $s^2_{1|*}$, $m_{2|*}$, $s^2_{2|*}$, $S_{2|*}$.]
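The same closed-form evidence is easy to compute numerically. The sketch below does so for a one-dimensional Gaussian with unknown mean and variance under a conjugate normal-gamma prior, here written in the common shape-rate parameterization rather than the indexing of Tables 3 to 5; the default hyperparameters are illustrative.

```python
import math

def log_evidence_gaussian(xs, mu0=0.0, kappa0=1.0, a0=1.0, b0=1.0):
    """log p(x_1, ..., x_n) for Gaussian data under the conjugate prior
    mu | s2 ~ N(mu0, s2/kappa0), 1/s2 ~ Gamma(a0, b0) (shape, rate).
    The evidence is a ratio of prior and posterior normalizers
    in the spirit of Lemma 6.4."""
    n = len(xs)
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    kappa_n = kappa0 + n
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n)
    return (math.lgamma(a_n) - math.lgamma(a0)
            + a0 * math.log(b0) - a_n * math.log(b_n)
            + 0.5 * (math.log(kappa0) - math.log(kappa_n))
            - 0.5 * n * math.log(2.0 * math.pi))

# Splitting the data by var1, as in the example above: the log Bayes
# factor for "x depends on var1" versus one pooled Gaussian is just a
# difference of local log evidences (toy data assumed).
x_when_0, x_when_1 = [0.1, 0.3, -0.2], [2.9, 3.2, 3.1]
log_bf = (log_evidence_gaussian(x_when_0) + log_evidence_gaussian(x_when_1)
          - log_evidence_gaussian(x_when_0 + x_when_1))
```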
Approximate methods on graphical models

Exact algorithms for learning of any reasonable size invariably involve the recursive arc reversal theorem of Section 4.2. Most learning methods, however, use approximate algorithms at some level. The most common uses of the exponential family within approximation algorithms were summed up in Figure 22. Various other methods for inference on plates can be applied either at the model level or the parameter level: Gibbs sampling, first described in Section 7.1, other more general Markov chain Monte Carlo algorithms, EM style algorithms (Dempster, Laird, & Rubin, 1977), and various closed form approximations such as the mean field approximation and the Laplace approximation (Berger, 1985; Azevedo-Filho & Shachter, 1994). This section summarizes the main families of these approximate methods.

Gibbs sampling

Gibbs sampling is the basic tool of simulation and can be applied to most probability distributions (Geman & Geman, 1984; Gilks et al., 1993a; Ripley, 1987) as long as the full joint has no zeros (all variable instantiations are possible). It is a special case of the general Markov chain Monte Carlo methods for approximate inference (Ripley, 1987; Neal, 1993). Gibbs sampling can be applied to virtually any graphical model, whether there are plates, undirected or directed arcs, and whether the variables are real or discrete. Gibbs sampling does not apply to graphs with deterministic nodes, however, since these put zeroes in the full joint. This section describes Gibbs sampling without plates, as a precursor to discussing Gibbs sampling with plates in Section 7.2. On challenging problems, other forms of Markov chain Monte Carlo sampling can and should be tried. The literature is extensive.

Gibbs sampling corresponds to a probabilistic version of gradient ascent, although their goals of averaging as opposed to maximizing are fundamentally different. Gradient ascent in real valued problems corresponds to simple methods from function optimization (Gill et al., 1981), and in discrete problems it corresponds to local repair or local search (Johnson et al., 1985; Minton, Johnson, Philips, & Laird, 1990; Selman, Levesque, & Mitchell, 1992). Gibbs sampling varies gradient ascent by introducing a random component. The algorithm usually tries to ascend, but will sometimes descend, as a strategy for exploring further around the search space. So the algorithm tends to wander around local maxima with occasional excursions to other regions of the space. Gibbs sampling is also the core algorithm of simulated annealing if the temperature is held equal to one (van Laarhoven & Aarts, 1987).

To sample a set of variables X according to some non-zero distribution p(X), initialize X to some value and then repeatedly resample each variable $x \in X$ according to its conditional probability $p(x \mid X - \{x\})$. For the simple medical problem of Figure 2, suppose the value of symptoms is known, and the remaining variables are to be sampled; then do as follows:

1. Initialize the remaining variables somehow.

2. Repeat the following for i = 1, 2, 3, ..., and record the sample of Age_i, Occ_i, Clim_i, Dis_i at the end of each cycle.

(a) Reassign Age by sampling it according to the conditional p(Age | Occ, Clim, Dis, Symp). That is, take the values of Occ, Clim, Dis, Symp as given and compute the resulting conditional distribution on Age.
Then sample Age according to that distribution.

(b) Reassign Occ by sampling it according to the conditional p(Occ | Age, Clim, Dis, Symp).

(c) Reassign Clim by sampling it according to the conditional p(Clim | Age, Occ, Dis, Symp).

(d) Reassign Dis by sampling it according to the conditional p(Dis | Age, Clim, Occ, Symp).

This sequence of steps is depicted in Figure 28. [Figure 28: Gibbs sampling on the medical example.] In this figure, the basic graph has been rearranged for each step to represent the dependencies that arise during the sampling process. This uses the arc reversal and conditioning operators introduced previously.

The effect of sampling is not immediate. Age_2, Occ_2, Clim_2, Dis_2 is conditionally dependent on Age_1, Occ_1, Clim_1, Dis_1, and in general so is Age_i, Occ_i, Clim_i, Dis_i for any i. However, the effect of the sampling scheme is that in the long run, for large i, Age_i, Occ_i, Clim_i, Dis_i is approximately generated according to p(Age, Occ, Clim, Dis | Symp), independently of Age_1, Occ_1, Clim_1, Dis_1. In Gibbs sampling, all the conditional sampling is done in accordance with the original distribution, and since this is a stationary process, in the long run the samples converge to the stationary distribution or fixed point of the process. Methods for making subsequent samples independent are known as regenerative simulation (Ripley, 1987) and correspond to sending the temperature back to zero occasionally.

With this sample, different quantities, such as the probability that a patient will have Age > 20 and Clim = tropical given Symp, can be estimated. This is done by looking at the frequency of this event in the generated sample. The justification for this is the subject of Markov process theory (Çinlar, 1975, Theorem 2.26). The following result, presented informally, applies:

Comment 7.1 Let x_1, x_2, ..., x_I be a sequence of discrete variables from a Gibbs sampler for the distribution p(x) > 0. Then the average of g(x_i) approaches the expected value with probability 1 as I approaches infinity:

$$\frac{1}{I} \sum_{i=1}^{I} g(x_i) \to \overline{g(x)} = E(g(x)).$$

Further, for a second function h(x_i), the ratio of two sample averages for g and h approaches their "true" ratio:

$$\frac{\sum_{i=1}^{I} g(x_i)}{\sum_{i=1}^{I} h(x_i)} \to \frac{\overline{g(x)}}{\overline{h(x)}}.$$

This is used to approximate conditional expected values.

To complete this procedure it is necessary to know how many Gibbs samples to take, how large to make I, and how to estimate the error in the estimate. Neither question has an easy answer, but heuristic strategies exist (Ripley, 1987; Neal, 1993). For Bayesian networks this scheme is easy in general, since the only requirement when sampling from $p(x \mid X - \{x\})$ is the conditional distribution for nodes connected to x, and the global probabilities do not need to be calculated. Notice, for instance, that in Figure 28 some sampling operations do not require all five variables. The general form for Bayesian networks given in Equation (2) goes as follows:

$$p(x \mid X - \{x\}) = \frac{p(X)}{\sum_x p(X)} = \frac{p(x \mid \text{parents}(x)) \prod_{y : x \in \text{parents}(y)} p(y \mid \text{parents}(y))}{\sum_x p(x \mid \text{parents}(x)) \prod_{y : x \in \text{parents}(y)} p(y \mid \text{parents}(y))}.$$

Notice the product is over a subset of variables: only the conditional distributions for variables that have x as a parent are included. Thus, the formula only involves examining the parents, children, and children's parents of x, the so-called Markov blanket (Pearl, 1988). Also, notice normalization is only required over the single dimension changed in the current cycle, done in the denominator.
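The next paragraph notes that for discrete x these conditionals can simply be enumerated; a minimal Python sketch of the whole scheme follows. For the medical example, query would be the list [Age, Occ, Clim, Dis] with evidence fixing Symp. The table-based CPT encoding (parent-value tuple mapped to a probability list) is an assumption of the sketch.

```python
import random

def blanket_weights(v, state, cpts, parents, children, arity):
    """Unnormalized p(v = k | everything else): only the factors of
    Equation (2) that mention v, i.e. v's own CPT entry and the CPT
    entries of v's children (the Markov blanket computation)."""
    weights = []
    for k in range(arity[v]):
        state[v] = k
        w = cpts[v][tuple(state[p] for p in parents[v])][k]
        for c in children[v]:
            w *= cpts[c][tuple(state[p] for p in parents[c])][state[c]]
        weights.append(w)
    return weights

def gibbs(query, evidence, cpts, parents, children, arity, n_iter=5000):
    """One Gibbs chain: resample each query variable in turn from its
    Markov-blanket conditional, recording one sample per full cycle."""
    state = dict(evidence)
    for v in query:
        state[v] = random.randrange(arity[v])
    samples = []
    for _ in range(n_iter):
        for v in query:
            w = blanket_weights(v, state, cpts, parents, children, arity)
            r, acc = random.random() * sum(w), 0.0
            for k, wk in enumerate(w):
                acc += wk
                if r <= acc:
                    state[v] = k
                    break
        samples.append(dict(state))
    return samples
```

An event probability such as p(Age > 20, Clim = tropical | Symp) is then estimated as the fraction of recorded samples in which the event holds, per Comment 7.1.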
For x discrete, these conditional probabilities can be enumerated and direct sampling done for x.

The kind of simplification above for Bayesian networks also applies to undirected graphs and chain graphs, with or without plates. Here, modify Equation (4) to obtain:

$$p(x \mid X - \{x\}) = \frac{\prod_{C \in \text{Cliques}(G) : x \in C} f_C(C)}{\sum_x \prod_{C \in \text{Cliques}(G) : x \in C} f_C(C)}. \qquad (25)$$

In this formula, ignore all cliques not containing x, so, again, Gibbs sampling only computes with information local to the node. Also, the troublesome normalization constant does not have to be computed, because the probability is a ratio of functions and so it cancels out. As before, normalization is only required over the single dimension x.

Gibbs sampling on plates

Many learning problems can be represented as Bayesian networks. For instance, the simple unsupervised learning problem represented in Figure 10 is a Bayesian network once the plate is expanded out. It follows that Gibbs sampling is readily applied to learning as a general inference algorithm (Gilks et al., 1993a, 1993b). Consider a simplified example of this unsupervised learning problem. In this model, assume that each variable var_1 and var_2 belongs to a mixture of Gaussians of known variance equal to 1.0. This simple model is given in Figure 29. [Figure 29: unsupervised learning in two dimensions; the class proportions are φ and the class means for var_1 and var_2 are μ_1 and μ_2, with the data inside a plate of size N.] For a given class, class = c, the variables var_1 and var_2 are distributed as Gaussian with means μ_{1,c} and μ_{2,c}. In the uniform, unit-variance case the distribution for each sample is given by:

$$p(var_1, var_2 \mid \phi, \mu, M) = \sum_c \phi_c\, N(var_1 - \mu_{1,c})\, N(var_2 - \mu_{2,c}),$$

where $N(\cdot)$ is the one-dimensional Gaussian probability density function with zero mean and a standard deviation of 1. This model might seem trivial, but if the standard deviation were allowed to vary as well, the model would correspond to a kernel density estimate, and so could approximate any other distribution arbitrarily well using a sufficient number of tiny Gaussians.

In this simplified Gaussian mixture model, the sequence of steps for Gibbs sampling goes as follows:

1. Initialize the variables φ_c, μ_{1,c}, μ_{2,c} for each class c.

2. Repeat the following, and record the sample of φ_c, μ_{1,c}, μ_{2,c} for each class c at the end of each cycle.

(a) For i = 1, ..., N, reassign class_i according to the conditional p(class_i | var_{1,i}, var_{2,i}, φ, μ_1, μ_2).

(b) Reassign the vector φ by sampling according to the conditional p(φ | class_i : i = 1, ..., N).

(c) Reassign the vector μ_1 (and μ_2) by sampling according to the conditional p(μ_1 | var_{1,i}, class_i : i = 1, ..., N).

Step 2(a) represents the standard sampling operation using inference on Bayesian networks without plates. Steps 2(b) and 2(c) are also easy to perform because in this case the distributions are exponential family, and the graph matches the conditions of Corollary 6.1.2. Therefore, each of the model parameters φ, μ_1, μ_2 is a posteriori independent, and its distribution is known in closed form, with the sufficient statistics calculated in O(N) time.

One important caveat in the use of Gibbs sampling for learning is the problem of symmetry. In the above description, there is nothing to distinguish class 1 from class 2. Initially, the class centers for the above process will remain distinct. Asymptotically, since there is nothing in the problem definition to distinguish between class 1 and class 2, they will appear indistinguishable. This problem is handled by symmetry breaking: for instance, force μ_{1,1} < μ_{1,2}.
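For concreteness, the following Python sketch implements this cycle for two classes. The conjugate priors (a flat Dirichlet on φ and a N(0, τ²) prior on each mean) are illustrative assumptions of the sketch, not taken from the text.

```python
import math, random

def normal_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def gibbs_mixture(x1, x2, C=2, n_iter=1000, tau2=100.0):
    """Gibbs sampling for the unit-variance Gaussian mixture of Figure 29,
    with assumed priors phi ~ Dirichlet(1, ..., 1), mu_{v,c} ~ N(0, tau2)."""
    N = len(x1)
    phi = [1.0 / C] * C
    mu1 = sorted(random.gauss(0, 1) for _ in range(C))
    mu2 = [random.gauss(0, 1) for _ in range(C)]
    samples = []
    for _ in range(n_iter):
        # (a) resample each class label given the current parameters
        cls = []
        for i in range(N):
            w = [phi[c] * normal_pdf(x1[i], mu1[c]) * normal_pdf(x2[i], mu2[c])
                 for c in range(C)]
            r, acc, k = random.random() * sum(w), 0.0, 0
            for c in range(C):
                acc += w[c]
                if r <= acc:
                    k = c
                    break
            cls.append(k)
        # (b) resample phi from its Dirichlet posterior (via Gammas)
        g = [random.gammavariate(1.0 + sum(1 for z in cls if z == c), 1.0)
             for c in range(C)]
        phi = [gi / sum(g) for gi in g]
        # (c) resample class means from their closed-form Gaussian posteriors
        for c in range(C):
            idx = [i for i in range(N) if cls[i] == c]
            for mu, xs in ((mu1, x1), (mu2, x2)):
                prec = len(idx) + 1.0 / tau2        # unit observation variance
                mean = sum(xs[i] for i in idx) / prec
                mu[c] = random.gauss(mean, 1.0 / math.sqrt(prec))
        if C == 2 and mu1[0] > mu1[1]:              # break symmetry
            mu1[0], mu1[1] = mu1[1], mu1[0]
            mu2[0], mu2[1] = mu2[1], mu2[0]
            phi[0], phi[1] = phi[1], phi[0]
        samples.append((list(phi), list(mu1), list(mu2)))
    return samples
```

The final swap implements the symmetry-breaking constraint μ_{1,1} < μ_{1,2} for the two-class case; without it, long chains would mix over the relabelings of the classes.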
Gibbs sampling applies whenever there are variables associated with the data that are not given. Hidden or latent variables are an example. Incomplete data (or missing values) (Quinlan, 1989), robust methods and modeling of outliers, and various density estimation and non-parametric methods all fall in this family of models (Titterington et al., 1985). Gibbs sampling generalizes to virtually any graphical model with plates and unshaded nodes inside the plate; the sequence of sampling operations will be much the same as in Figure 30. If the underlying distribution is exponential family, for instance, Lemma 5.1 applies after shading all nodes inside the plate; each full cycle is then guaranteed to be linear time in the sample size. The algorithm in the exponential family case is summed up in Figure 31. Thomas, Spiegelhalter, and Gilks (1992) (Gilks et al., 1993b) have taken advantage of this general applicability of sampling to create a compiler that converts a graphical representation of a data analysis problem, with plates, into a matching Gibbs sampling algorithm. This scheme applies to a broad variety of data analysis problems.

What happens to the scheme above when the sample size N is large compared to the number of unknown parameters in the problem? A good way to think of this is: first, if N is sufficiently large, then the samples of the model parameters φ and μ_1, μ_2 will tend to drift around their mean, because their posterior variance would be O(1/N). That is, after the i-th step, the sample is φ_i, μ_{1,i}, μ_{2,i}. The (i+1)-th sample φ_{i+1}, μ_{1,i+1}, μ_{2,i+1} would be conditionally dependent on these, but because N is large, the posterior variance of the (i+1)-th sample given the i-th sample would be small, so that:

$$\phi_{i+1} \approx E_{\phi_i, \mu_{1,i}, \mu_{2,i}}(\phi_{i+1}).$$

This approximation to Markov chain Monte Carlo methods is used by the mean field method from statistical physics, popular in neural networks (Hertz et al., 1991). Rather than sampling a sequence of parameters θ_1, θ_2, ..., θ_i according to some scheme, use the deterministic update:

$$\theta_{i+1} = E_{\theta = \theta_i}(\theta_{i+1}), \qquad (26)$$

where the expected value is taken according to the sampling distribution. This instead generates a deterministic sequence θ_1, θ_2, ..., θ_i that under reasonable conditions converges to some maximum. This kind of approach leads naturally to the EM algorithm, which will be discussed in Section 7.4.

The expectation maximization (EM) algorithm

The expectation maximization algorithm, widely known as the EM algorithm, corresponds to a deterministic version of Gibbs sampling used to search for the MAP estimate for model parameters (Dempster et al., 1977). It is generally considered to be faster than gradient descent. Convergence is slow near a local maximum, so some implementations switch to conjugate gradient or other methods (Meilijson, 1989) when near a solution. The computation used to find the derivative is similar to the computation used for the EM algorithm, so this does not require a great deal of additional code. Also, the determinism means the EM algorithm no longer generates unbiased posterior estimates of model parameters. The intended gain is speed, not accuracy. The EM algorithm can generally be applied to exponential family models wherever Gibbs sampling can be applied. The correspondence between EM and Gibbs is shown below.

Consider again the simple unsupervised learning problem represented in Figure 29 (Section 7.2).
In this case, the sequence of steps for the EM algorithm is similar to that for the Gibbs sampler. The EM algorithm works on the means or modes of unknown variables instead of sampling them. Rather than sampling the set of classes and thereby computing sufficient statistics so that a distribution for φ and μ_1, μ_2 can be found, a sequence of class means is generated and used to compute expected sufficient statistics. Likewise, instead of sampling new parameters φ and μ_1, μ_2, modes are computed from the expected sufficient statistics.

Consider again the unsupervised learning problem in Figure 9. Suppose there are 10 classes and that the three variables var_1, var_2, var_3 are finite valued and discrete and modeled with a multinomial with probabilities conditional on the class value class_i. The sufficient statistics in this case are all counts: n_j is the number of cases where the class is j; n_{v,k|j} is the number of cases where class = j and var_v = k:

$$n_j = \sum_{i=1}^{N} 1_{class_i = j}, \qquad n_{v,k|j} = \sum_{i=1}^{N} 1_{class_i = j}\, 1_{var_{v,i} = k}.$$

The expected sufficient statistics computed from the rules of probability for a given set of parameters φ and θ_1, θ_2, θ_3 are given by:

$$\bar n_j = \sum_{i=1}^{N} p(class_i = j \mid var_{1,i}, var_{2,i}, var_{3,i}, \phi, \theta_1, \theta_2, \theta_3),$$
$$\bar n_{v,k|j} = \sum_{i=1}^{N} p(class_i = j \mid var_{1,i}, var_{2,i}, var_{3,i}, \phi, \theta_1, \theta_2, \theta_3)\; 1_{var_{v,i} = k}.$$

Thanks to Lemma 4.2, these kinds of expected sufficient statistics can be computed for most exponential family distributions. Once sufficient statistics are computed, posterior means or modes of the model parameters (in this case φ and θ_1, θ_2, θ_3) can be found for any of the distributions. The algorithm goes as follows:

1. Initialize the parameters φ and θ_1, θ_2, θ_3.

2. Repeat the following until some convergence criterion is met:

(a) Compute the expected sufficient statistics $\bar n_j$ and $\bar n_{v,k|j}$.

(b) Recompute φ and θ_1, θ_2, θ_3 to be equal to their mode conditioned on the sufficient statistics. For many posterior distributions, these can be found in standard tables, and in most cases found via Lemma 4.2. For instance, using the mean for φ with Dirichlet prior parameters α_j gives:

$$\phi_j = \frac{\bar n_j + \alpha_j}{\sum_{j'} (\bar n_{j'} + \alpha_{j'})}.$$

All of the other Gibbs sampling algorithms discussed in Section 7.2 can be similarly placed in this EM framework. When the mode is used in Step 2(b), ignoring numerical problems, the EM algorithm converges on a local maximum of the posterior distribution for the parameters (Dempster et al., 1977). The general method is summarized in the following comment (Dempster et al., 1977).

Comment 7.2 The conditions of Lemma 5.1 apply with data variables X inside the plate and model parameters θ outside. In addition, some of the variables U ⊆ X are latent, so they are unknown and unshaded. Some of the remaining variables are sometimes missing, so for the data X_i, variables V_i ⊆ (X − U) are not given. This means the data given for the i-th datum is X − U − V_i for i = 1, ..., N. The EM algorithm goes as follows:

E-step: The contribution to the expected sufficient statistics for each datum is

$$ET_i = E_{U,\, V_i \mid X_i - U - V_i,\ \theta}\big(t(X_i)\big).$$

The expected sufficient statistic is then $ET = \sum_{i=1}^{N} ET_i$.

M-step: Maximize the conjugate posterior using the expected sufficient statistics ET in place of the sufficient statistics, using the MAP approach for this distribution.

The fixed point of this algorithm is a local maximum of the posterior for θ. Here, given the sufficient statistics, the mean or mode of the parameters is computed instead of being sampled. EM is therefore Gibbs with a mean/mode approximation done at the two major sampling steps of the algorithm.

In some cases, the expected sufficient statistics can be computed in closed form. Assume the exponential family distribution for p(X | θ) has a known normalization constant Z(θ) and the link function $w^{-1}$ exists. For some i, the normalizing constant for the exponential family distribution p(U_i, V_i | X − U_i − V_i, θ) is known in closed form; denote it by Z_i(θ). Then, using the notation of Theorem 4.1, the expected sufficient statistics follow by differentiating these normalizing constants, as in Equation (28) of Appendix B.
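A short Python sketch of this E-step/M-step cycle for the latent-class model above follows; the Dirichlet pseudo-counts alpha standing in for the conjugate prior are an assumption of the sketch.

```python
import math, random

def em_discrete_mixture(data, C, arity, n_iter=50, alpha=1.0):
    """EM for the latent-class model of Section 7.4: class_i is hidden
    and each discrete var_v is multinomial given the class.  data rows
    are tuples indexed by variable v; arity[v] is the number of values."""
    N, V = len(data), len(arity)
    phi = [1.0 / C] * C
    theta = [[[random.uniform(0.5, 1.5) for _ in range(arity[v])]
              for _ in range(C)] for v in range(V)]
    for v in range(V):
        for j in range(C):
            s = sum(theta[v][j])
            theta[v][j] = [t / s for t in theta[v][j]]
    for _ in range(n_iter):
        # E-step: expected counts \bar n_j and \bar n_{v,k|j}
        nj = [0.0] * C
        nvkj = [[[0.0] * arity[v] for _ in range(C)] for v in range(V)]
        for row in data:
            w = [phi[j] * math.prod(theta[v][j][row[v]] for v in range(V))
                 for j in range(C)]
            tot = sum(w)
            for j in range(C):
                p = w[j] / tot
                nj[j] += p
                for v in range(V):
                    nvkj[v][j][row[v]] += p
        # M-step: parameters set to their posterior mean given the counts
        phi = [(nj[j] + alpha) / (N + C * alpha) for j in range(C)]
        for v in range(V):
            for j in range(C):
                d = nj[j] + arity[v] * alpha
                theta[v][j] = [(nvkj[v][j][k] + alpha) / d
                               for k in range(arity[v])]
    return phi, theta
```

Replacing the two sampling operations of the Gibbs scheme with the expectation in the E-step and the mean/mode in the M-step is exactly the correspondence described in Comment 7.2.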
8. Partial exponential models

In some cases, only an initial, inner part of a learning problem can be handled using the recursive arc reversal theorem of Section 4.3. In this case, simplify what can be simplified, and then solve the remainder of the problem using a generic method like the MAP approximation. This section presents several examples: linear regression with heterogeneous variance, feed-forward networks with a linear output layer, and Bayesian networks. This general process was depicted in the graphical model for the partial exponential family of Figure 22, an abstraction used to represent the general process. Consider the problem of learning a Bayesian network, both structure and parameters, where the distribution is exponential family given the network structure. The variable T is a discrete variable indicating which graphical structure is chosen for the Bayesian network. The variable X represents the full set of variables given for the problem. The variable θ_T represents the distributional parameters for the Bayesian network, and is the part of the model that is conveniently exponential family. That is, p(X | θ_T, T) will be treated as exponential for different T, holding T fixed. The sufficient statistics in this case are given by ss(X, T). Here the subproblem that conveniently falls in the exponential family, p(θ_T | X, T), is simplified, but it is necessary to resort to the more general learning techniques of previous sections to solve the remaining part of the problem, p(T | X).

Linear regression with heterogeneous variance

Consider the heterogeneous variance problem given in Figure 32. This shows a graphical model for the linear regression problem of Section 4.4 modified to the situation where the standard deviation is heterogeneous, so it is a function of the inputs x as well. In this case, the log of the standard deviation is taken to be linear in the inputs; the exponential transformation then guarantees that the standard deviation s will also be positive.

The corresponding learning model can be simplified to the graph in Figure 33. Compare this with the model given in Figure 21. What is the difference? In this case, the sufficient statistics exist, but they are shown to be deterministically dependent on the sample, and ultimately on the unknown parameters for the standard deviation, weights-σ. If the parameters for the standard deviation were known, then the graph could be reduced to Figure 21. Computationally, this is an important gain. It says that, for a given set of values for weights-σ, a calculation linear in the sample size arrives at a characterization of what the parameters for the mean, weights-μ, should be. In short, one half of the problem, p(weights-μ | weights-σ, y_i, x_{·,i} : i = 1, ..., N), is well understood. To search for the MAP solution, search the space of parameters for the standard deviation, weights-σ, since the remainder (weights-μ) is then given. Another variation of linear regression replaces the Gaussian error function with a more robust error function such as Student's t distribution, or an L_q norm for 1 < q < 2. By introducing a convolution, these robust regression models can be handled by combining the EM algorithm with standard least squares (Lange & Sinsheimer, 1993).

Feed-forward networks with a linear output layer

A similar example is the standard feed-forward network where the final output layer is linear. This situation is given by Figure 23 if we change the deterministic functions for m_1 and m_2 to be linear instead of sigmoidal. In this case Lemma 5.1 identifies that when the weight vectors w_3, w_4, and w_5 are assumed given, the distribution is in the exponential family. Thus the simplification to Figure 34 is possible using the standard sufficient statistics for multivariate linear regression. Algorithmically, this implies that given values for the internal weight vectors w_3, w_4, and w_5, and assuming a conjugate prior holds for the output weight vectors w_1 and w_2, the posterior distribution for the output weight vectors w_1 and w_2, and their means and variances, can be found in closed form. The evidence for w_1 and w_2 given w_3, w_4, and w_5, p(y_* | x_{1,*}, ..., x_{n,*}, w_3, w_4, w_5, M), can also be computed using the exact method of Lemma 6.4, so the posterior for w_3, w_4, and w_5,

$$p(w_3, w_4, w_5 \mid y_*, x_{1,*}, \ldots, x_{n,*}, M) \propto p(w_3, w_4, w_5 \mid M)\, p(y_* \mid x_{1,*}, \ldots, x_{n,*}, w_3, w_4, w_5, M),$$

can be computed in closed form up to a constant. This effectively cuts the problem into two pieces, w_3, w_4, and w_5 first, then w_1 and w_2 given w_3, w_4, and w_5, and provides a clean solution to the second piece.
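The closed-form inner piece is just Bayesian linear regression on the hidden activations. A small numpy sketch of that step follows; the single sigmoidal hidden layer and the prior and noise precisions are assumptions of the sketch, not details from Figures 23 and 34.

```python
import numpy as np

def posterior_output_weights(X, y, W_hidden, prior_prec=1.0, noise_prec=1.0):
    """With the hidden-layer weights W_hidden held fixed, the linear
    output layer falls in the exponential family, so the posterior over
    the output weights is Gaussian with a closed (ridge-like) form."""
    H = 1.0 / (1.0 + np.exp(-X @ W_hidden))          # hidden activations
    A = noise_prec * H.T @ H + prior_prec * np.eye(H.shape[1])
    mean = noise_prec * np.linalg.solve(A, H.T @ y)  # posterior mean
    cov = np.linalg.inv(A)                           # posterior covariance
    return mean, cov
```

An outer search over the internal weights can then call this inner solve at each candidate, exactly the split into two pieces described above.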
Bayesian networks with missing variables

Class probability trees and discrete Bayesian networks can be learned efficiently by noticing that their basic form is exponential family (Buntine, 1991a, 1991b, 1991c; Cooper & Herskovits, 1992; Spiegelhalter et al., 1993). Take, for instance, the family of models specified by the Bayesian network given in Figure 26. In this case, the local evidence corollary, Corollary 6.1.1, applies. The evidence for Bayesian networks generated from this graph is therefore a product over the nodes in the Bayesian network. If we change a Bayesian network by adding or removing an arc, the Bayes factor is therefore simply the local Bayes factor for the node, as mentioned in the incremental decomposition lemma, Lemma 6.3. Local search is then quite fast, and Gibbs sampling over the space of Bayesian networks is possible. A similar situation exists with trees (Buntine, 1991b). The same results apply to any Bayesian network with exponential family distributions at each node, such as Gaussian or Poisson. Results for Gaussians are presented, for instance, in (Geiger & Heckerman, 1994).

This local search approach is a MAP approach because it searches for the network structure maximizing posterior probability. More accurate approximation can be done by generating a Markov chain of Bayesian networks from the search space of Bayesian networks. Because the Bayes factors are readily computed in this case, Gibbs sampling or Markov chain Monte Carlo schemes can be used. The scheme given below is the Metropolis algorithm (Ripley, 1987). This only looks at single neighbors until a successor is found. It is done by repeating the following steps:

1. For the initial Bayesian network G, randomly select a neighboring Bayesian network G' differing only by an arc.

2. Compute Bayes-factor(G', G) by making the decompositions described in Theorem 6.1, doing a local computation as described in Lemma 6.3, and using the Bayes factors computed with Lemma 6.4.

3. Accept the new Bayesian network G' with probability given by

$$\min\left(1,\ \text{Bayes-factor}(G', G)\, \frac{p(G')}{p(G)}\right).$$

If accepted, assign G' to G; otherwise G remains unchanged.

A locally maximal Bayesian network could be found concurrently; however, this scheme generates a set of Bayesian networks appropriate for model averaging and for expert evaluation of the space of potential Bayesian networks. Of course, initialization might search for local maxima to use as a reference. This sampling scheme was illustrated in the context of averaging in Figure 12.

This scheme is readily adapted to learn the structure and parameters of a Bayesian network with missing or latent variables. For the Metropolis algorithm, add Step 4, which resamples the missing data and latent variables:

4. For the current complete data and Bayesian network G, compute the predictive distribution for the missing data or latent variables. Use this to resample the missing data or latent variables to construct a new set of complete data (for subsequent use in computing Bayes factors).
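A compact sketch of Steps 1 to 3 for discrete networks follows, reusing the local_log_evidence function from the earlier sketch. A uniform prior over structures is assumed, and acyclicity checks and Step 4 are omitted for brevity.

```python
import math, random

def metropolis_structures(nodes, data, arity, n_steps=1000):
    """Metropolis sampling over Bayesian-network structures.  A proposal
    adds or deletes one arc; by Lemma 6.3 only the child's local evidence
    changes, so the Bayes factor is a single local ratio."""
    parents = {v: [] for v in nodes}
    for _ in range(n_steps):
        child, parent = random.sample(nodes, 2)
        old = parents[child]
        new = ([p for p in old if p != parent] if parent in old
               else old + [parent])
        log_bf = (local_log_evidence(data, child, new, arity)
                  - local_log_evidence(data, child, old, arity))
        if math.log(random.random()) < log_bf:   # accept w.p. min(1, BF)
            parents[child] = new
        yield {v: list(ps) for v, ps in parents.items()}
```

Collecting the yielded structures gives the sample of networks used for model averaging; tracking the best log evidence seen gives the concurrent local maximum mentioned above.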
Conclusion

The marriage of learning and graphical models presented here provides a framework for understanding learning. It also provides a framework for developing a learning or data analysis toolkit, or more ambitiously, a software generator for learning algorithms. Such a toolkit combines two important components: a language for representing a learning problem, together with techniques for generating a matching algorithm. While a working toolbox is not demonstrated, a blueprint is provided to show how it could be constructed, and the construction of some well-known learning algorithms has been demonstrated. Table 1 lists some standard problems, the derivation of algorithms using the operations from the previous sections, and where in the text they are considered.

The notion of a learning toolkit is not new, and can be seen in the BUGS system of Thomas, Spiegelhalter, and Gilks (1992) (Gilks et al., 1993b), in the work of Cohen (1992) for inductive logic programming, and emerging in software for handling generalized linear models (McCullagh & Nelder, 1989; Becker, Chambers, & Wilks, 1988).

There is an important role for a data analysis toolkit. Every problem has its own quirks and requirements. Knowledge discovery, for instance, can vary in many ways depending on the user-defined notion of interestingness. Learning is often an embedded task in a larger system. So while there are some easy applications of learning, learning applications generally require special purpose development of learning systems or related support software. Sometimes this can be achieved by patching together some existing techniques or by decomposing a problem into subproblems. Nevertheless, the decomposition and patching of learning algorithms with inference and decision making can be formalized and understood within graphical models. In some ways the S system plays the role of a toolkit (Chambers & Hastie, 1992). It provides a system for prototyping learning algorithms, includes the ability to handle generalized linear models, does automatic differentiation of expressions, and includes many statistical and mathematical functions useful as primitives. The language of graphical models is best viewed as an additional layer on top of this kind of system. Note, also, that it is impractical to assume that a software generator could create algorithms competitive with current finely tuned algorithms, for instance, for hidden Markov models. However, a software toolkit for learning could be used to prototype an algorithm that could later be refined by hand.

The combination of learning and graphical models shares some of the superior aspects of each of the different learning fields. Consider the philosophy of neural networks. These nonparametric systems are composed of simple computational components, usually readily parallelizable, and often nonlinear. The components can be pieced together to tailor systems for specific applications. Graphical models for learning have these same features. Graphical models also have the expressibility of the probabilistic knowledge representations that were developed in artificial intelligence for use in knowledge acquisition contexts. They therefore form an important basis for knowledge refinement. Finally, graphical models for learning allow the powerful tools of statistics to be applied to the problem.

Once learning problems are specified in the common language of graphical models, their associated learning algorithms, their derivation, and their interrelationships can be explored. This allows commonalities between seemingly diverse pairs of algorithms, such as k-means clustering versus approximate methods for learning hidden Markov models, learning decision trees versus learning Bayesian networks (Buntine, 1991a), and Gibbs sampling versus the expectation maximization algorithm in Section 7.4, to be understood as variations of one another. The framework is important as an educational tool.

Acknowledgements

The general program presented here is shared by many, including Peter Cheeseman, who encouraged this development from its inception. These ideas were presented in formative stages at Snowbird 1993 (Neural Networks for Computing), April 1993, and to the Bayesian Analysis in Expert Systems (BAIES) group in Pavia, Italy, June 1993. Feedback from that group helped further develop these ideas. Graduate students at Stanford and Berkeley have also received various incarnations of these ideas. Thanks also to George John, Ronny Kohavi, Scott Schmidler, Scott Roy, Padhraic Smyth, and Peter Cheeseman for their feedback on drafts, and to the JAIR reviewers. Brian Williams pointed out the extension of the decomposition theorems to the deterministic case. Brian Ripley reminded me of the extensive features of S.

Appendix A. Proofs of Lemmas and Theorems

A.1 Proof of Theorem 2.1

A useful property of independence is that A is independent of B given C if and only if p(A, B, C) = f(A, C) g(B, C) for some functions f and g. The "only if" result follows directly from this property.
The proof of the "if" result makes use of the following simple lemma. If A is independent of B given X − A − B, and $p(X) = \prod_i f_i(X_i)$ for some functions f_i > 0 and variable sets $X_i \subseteq X$, then:

$$p(X) = \prod_i g_i(X_i - B)\, h_i(X_i - A) \qquad (27)$$

for some functions g_i, h_i > 0. Notice that it is known p(X) = g(X − B) h(X − A) for some functions g and h by independence. Instantiate the variables in A to some value a and those in B to some value b; this is defined for all X since the domain is a cross product. The lemma holds, because all functions are strictly positive, if

$$g_i(X_i - B) = \frac{f_i(X_i;\, B = b)}{f_i(X_i;\, B = b, A = a)} \quad\text{and}\quad h_i(X_i - A) = f_i(X_i;\, A = a).$$

The final proof of the "if" result follows by applying Equation (27) repeatedly. Suppose the variables in X are x_1, ..., x_v. Now p(X) = f_0(X) for some strictly positive function f_0. Therefore:

$$p(X) = g_0(X - \{x_1\})\, h_0(\{x_1\} \cup \text{neighbors}(x_1)).$$

Denote $A_{i,0} = X - \{x_i\}$ and $A_{i,1} = \{x_i\} \cup \text{neighbors}(x_i)$. Repeating the application of Equation (27) for each variable yields:

$$p(X) = \prod_{i_1 = 0,1} \cdots \prod_{i_v = 0,1} g_{i_1, \ldots, i_v}\Big(\bigcap_{j = 1, \ldots, v} A_{j, i_j}\Big)$$

for strictly positive functions $g_{i_1, \ldots, i_v}$. Now, consider these functions. It is only necessary to keep the function $g_{i_1, \ldots, i_v}$ if the set $\bigcap_{j=1,\ldots,v} A_{j,i_j}$ is maximal: it is not contained in any other such set. Equivalently, keep the function $g_{i_1, \ldots, i_v}$ if the union of the complements $\bigcup_{j=1,\ldots,v} (X - A_{j,i_j})$ is minimal. These sets correspond exactly to the cliques of the undirected graph. The result follows.

A.2 Proof of Theorem 6.1

It takes $O(|X|^2)$ operations to remove the deterministic nodes from a graph using Lemma 6.1. These nodes can be removed from the graph and then reinserted at the end. Hereafter, assume the graph contains no deterministic nodes. Also, denote the unknown variables in a set Y by unknown(Y) = Y − known(Y). Then, without loss of generality, assume that X_i contains all known variables in the Markov blanket of unknown(X_i).

Showing the independence model is equivalent amounts to showing, for i = 1, ..., P, that unknown(X_i) is independent of $\bigcup_{j \neq i} \text{unknown}(X_j)$ given known(X). To test independence using the method of Frydenberg (1990), each plate must be expanded (that is, duplicated the right number of times), the graph moralized, the given nodes removed, and separability then tested. The Markov blanket for each node in this expanded graph corresponds to those nodes directly connected in the moralized expanded graph. Suppose we have the finest unique partition unknown(X_i) of the unknown nodes. The X_i are then reconstructed by adding known variables in the Markov blankets for variables in unknown(X_i). Suppose V is an unknown variable in a plate, and V_j are its instances once the plate is expanded. Now, by symmetry, every V_j is either in the same element of the finest partition, or they are all in separate elements. If V_j has a certain unknown variable in its Markov boundary outside the plate, then so must V_k for k ≠ j by symmetry. Therefore V_j and V_k are in the same element of the partition. Hence by contradiction, if V_j is in a separate element, that element occurs wholly within the plate boundaries. Therefore, this finest partition can be represented using plates, and the finest partition can be identified from the graph ignoring plates. The operation of finding the finest separated sets in a graph is quadratic in the size of the graph, hence the $O(|X|^2)$ complexity.

Assume the condition holds and consider Equation (23).
Let cliques(τ) denote the subsets of variables in parents(τ) that form cliques in the graph formed by restricting G to parents(τ) and placing an undirected arc between all parents. Let $\mathcal{T}(X)$ be the set of chain components in X. From Frydenberg (1990), the distribution then factors into one term per chain component, each a function of the component and the cliques of its parents. Furthermore, if $u \in X_i$ is not known, then the variables in u's Markov blanket will occur in X_i, and therefore, if $u \in C$ for some clique C, then $C \subseteq X_i$. Therefore cliques containing an unknown variable can be partitioned according to which subgraph they belong in. Let:

$$\text{cliques}'_j = \{C : C \in \text{cliques}(\mathcal{T}(X)),\ \text{unknown}(X_j) \cap C \neq \emptyset\},$$

and add to this any remaining cliques wholly contained in the set so far. Grouping the factors of the evidence by these sets of cliques then yields the functions f_i of Equation (23).

A.4 Proof of Lemma 6.3

Consider the definition of the Markov blanket. If a directed arc is added between the nodes, then the Markov blanket will only change for an unknown node X if U now enters the set of non-deterministic parents of the chain components containing non-deterministic children of X. This will not affect the subsequent graph separability, however, because it will only subsequently add arcs between U, a given node, and other nodes.
}, { "figure_ref": [], "heading": "", "publication_ref": [ "b6", "b4", "b32", "b32" ], "table_ref": [], "text": "For instance, if the normalizing constant Z ( ) in Lemma 4.1 was known in closed form, then the Bayes factor can be readily computed. Lemma 6.4 Consider the context of Lemma 4.1. Then the model likelihood or evidence, given by evidence(M) = p(x 1 ; : : :; x N jy 1 ; : : :; y N ; M), can be computed as: evidence(M) = p( j ) Q N j=1 p(x j jy j ; ) p( j 0 ) = Z ( 0 ) Z ( )Z N 2 :\nFor yjx Gaussian this involves multiplying out the two sets of normalizing constants for the Gaussian and Gamma distributions. The evidence for some common exponential family distributions is given in Appendix B in Table 5 For instance, consider the learning problem given in Figure 24. Assume that the variables var 1 and var 2 are both binary (0 or 1) and that the parameters 1 and 2 are interpreted as follows:\np(var 1 = 0j 1 ) = 1 ; p(var 2 = 0jvar 1 = 0; 2 ) = 2;0j0 ; p(var 2 = 0jvar 1 = 1; 2 ) = 2;0j1 :\nIf we use Dirichlet priors for these parameters, as shown in Table 3, then the priors are:\n( 1 ; 1 1 ) Dirichlet( 1;0 ; 1;1 ) ; ( 2;0jj ; 1 2;0jj ) Dirichlet( 2;0jj ; 2;1jj ) for j = 0; 1 ; where 2;0j0 is a priori independent of 2;0j1 . The choice of priors for these distributions is discussed in (Box & Tiao, 1973;Bernardo & Smith, 1994). Denote the corresponding su cient statistics as n 1;j (equal to the number of data where var 1 = j) and n 2;jji (equal to the number of data where var 2 = j and var 1 = i). Then the rst two terms of the evidence for model M 1 , read directly from Table 5, can be written as: p(var 1; jM 1 ) = Beta(n 1;0 + 1;0 ; n 1;1 + 1;1 ) Beta( 1;0 ; 1;1 ) ; p(var 2; jvar 1; ; M 1 ) = Beta(n 2;0j0 + 2;0j0 ; n 2;1j0 + 2;1j0 ) Beta( 2;0j0 ; 2;1j0 ) Beta(n 2;0j1 + 2;0j1 ; n 2;1j1 + 2;1j1 ) Beta( 2;0j1 ; 2;1j1 ) :\nAssume the variables x 1 and x 2 are Gaussian with means given by 1j0 when var 1 = 0; 1j1 when var 1 = 1; 2j0;1 + 2j0;2 x 1 when var 1 = 0 ; 2j1;1 + 2j1;2 x 1 when var 1 = 1 and variances 1jj and 2jj respectively. In this case, we split the data set into two parts, those when var 1 = 0, and those when var 1 = 1. Each get their own parameters, su cient Buntine All of the other Gibbs sampling algorithms discussed in Section 7.2 can be similarly placed in this EM framework. When the mode is used in Step 2(b), ignoring numerical problems, the EM algorithm converges on a local maxima of the posterior distribution for the parameters (Dempster et al., 1977). The general method is summarized in the following comment (Dempster et al., 1977).\nComment 7.2 The conditions of Lemma 5.1 apply with data variables X inside the plate and model parameters outside. In addition, some of the variables U X are latent, so they are unknown and unshaded. Some of the remaining variables are sometimes missing, so, for the data X i , variables V i (X U) are not given. This means the data given for the i-th datum is X U V i for i = 1; : : :N. The EM algorithm goes as follows: E-step: The contribution to the expected su cient statistics for each datum is:\nThe expected su cient statistic is then ET = P N i=1 ET i . M-step: Maximize the conjugate posterior using the expected su cient statistics ET in place of the su cient statistics using the MAP approach for this distribution. The xed point of this algorithm is a local maxima of the posterior for . Here, given the su cient statistics, the mean or mode of the parameters are computed instead of being sampled. 
Appendix B. The exponential family

The exponential family of distributions was described in Definition 4.1. The common use of the exponential family exists because of Theorem 4.1. Table 2 gives a few exponential family distributions and their functional form. Further details and more extensive tables can be found in most Bayesian textbooks on probability distributions (DeGroot, 1970; Bernardo & Smith, 1994).

Table 2: Distributions and their functional form.
- j ~ C-dim multinomial(θ_1, ..., θ_C): p(j | θ) = θ_j for j ∈ {1, ..., C}.
- x | y ~ Gaussian(θ^⊤y, σ): $\frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{1}{2\sigma^2}(x - \theta^\top y)^2\right)$ for $x \in \mathbb{R}$, $y \in \mathbb{R}^d$.
- x ~ Gamma(α > 0, β > 0): $\frac{\beta^\alpha}{\Gamma(\alpha)}\, x^{\alpha - 1} e^{-\beta x}$ for $x \in \mathbb{R}^+$.

Table 3 gives some standard conjugate prior distributions for those in Table 2 (for the multinomial, θ ~ C-dim Dirichlet(α_1, ..., α_C); for the conditional Gaussian, the normal-gamma prior indexed in Section 6.3), and Table 4 gives the matching posterior distributions given a sample of size N:

Table 4: Distributions and their posteriors.
- j multinomial: θ ~ Dirichlet(n_1 + α_1, ..., n_C + α_C), for $n_c = \sum_{i=1}^N 1_{j_i = c}$, the number of j's equal to c.
- x | y Gaussian: θ | σ ~ d-dim Gaussian($\bar\theta$, $\sigma^2 \bar\nu^{-1}$) and $1/\sigma^2$ ~ Gamma($(\alpha_0 + N)/2$, $\bar\beta/2$), for $\bar\nu = \nu_0 + \sum_{i=1}^N y_i y_i^\top$, $\bar\theta = \bar\nu^{-1}(\nu_0 \theta_0 + \sum_{i=1}^N x_i y_i)$, and $\bar\beta = \sum_{i=1}^N (x_i - \bar\theta^\top y_i)^2 + (\bar\theta - \theta_0)^\top \nu_0 (\bar\theta - \theta_0) + \beta_0$.
- x Gamma (shape α known): β ~ Gamma($N\alpha + \alpha_0$, $\sum_{i=1}^N x_i + \beta_0$).
- x d-dim Gaussian: μ | Σ ~ Gaussian($\bar\mu$, $\Sigma/(N + N_0)$) and $\Sigma^{-1}$ ~ Wishart($N + \alpha_0$, $S + S_0$), for $\bar\mu = \bar x + \frac{N_0}{N + N_0}(\mu_0 - \bar x)$ and $S = \sum_{i=1}^N (x_i - \bar x)(x_i - \bar x)^\top$.

For the distributions in Table 2 with priors in Table 3, Table 5 gives their matching evidence derived using Lemma 6.4 and cancelling a few common terms.

In the case where the functions w_i are full rank in θ (the dimension of θ is k, the same as w, and the Jacobian of w with respect to θ is invertible, $\det \frac{dw(\theta)}{d\theta} \neq 0$), various moments of the distribution can be easily found:

$$E_{x|y,\theta}(t(x, y)) = \left(\frac{dw(\theta)}{d\theta}\right)^{-1} \frac{1}{Z(\theta)} \frac{dZ(\theta)}{d\theta}. \qquad (28)$$

The vector function w(θ) now has an inverse, and it is referred to as the link function (McCullagh & Nelder, 1989). This yields:

$$E_{x|y,\theta}\left(\exp\left(\sum_{i=1}^k \lambda_i t_i(x, y)\right)\right) = \frac{Z(w^{-1}(\lambda + w(\theta)))}{Z(\theta)}. \qquad (29)$$

These are important because, if the normalization constant Z can be found in closed form, then it can be differentiated and divided, for instance symbolically, to construct formulas for various moments of the distribution such as $E_{x|\theta}(t_i(x))$ and $E_{x|\theta}(t_i(x) t_j(x))$. Furthermore, Equation (28) implies that derivatives of the normalization constant, $dZ(\theta)/d\theta$, can be found by estimating moments of the sufficient statistics (for instance, by Markov chain Monte Carlo methods).
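Equation (28) is easy to check numerically. The following self-contained Python example differentiates log Z for the Gamma distribution by finite differences and compares the results against sample moments; the step size and sample size are arbitrary choices of the sketch.

```python
import math, random, statistics

# Gamma(shape a, rate b): t(x) = (log x, x), natural parameters
# (a - 1, -b), and log Z(a, b) = lgamma(a) - a * log(b).  Per Equation
# (28), moments of the sufficient statistics are derivatives of log Z.
a, b, eps = 3.0, 2.0, 1e-5
logZ = lambda a, b: math.lgamma(a) - a * math.log(b)

E_logx = (logZ(a + eps, b) - logZ(a - eps, b)) / (2 * eps)   # d logZ / da
E_x = -(logZ(a, b + eps) - logZ(a, b - eps)) / (2 * eps)     # -d logZ / db

# Compare against Monte Carlo moments (gammavariate takes a scale = 1/rate).
sample = [random.gammavariate(a, 1.0 / b) for _ in range(200000)]
print(E_x, statistics.fmean(sample))                   # both close to a/b
print(E_logx, statistics.fmean(math.log(x) for x in sample))
```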
[ { "authors": "S Andersen; K Olesen; F Jensen; F Jensen", "journal": "Morgan Kaufmann", "ref_id": "b0", "title": "HUGIN|a shell for building Bayesian belief universes for expert systems", "year": "1989" }, { "authors": "A Azevedo-Filho; R Shachter", "journal": "", "ref_id": "b1", "title": "Laplace's method approximations for probabilistic inference in belief networks with continuous variables", "year": "1994" }, { "authors": "R Becker; J Chambers; A Wilks", "journal": "Wadsworth & Brooks/Cole", "ref_id": "b2", "title": "The New S Language", "year": "1988" }, { "authors": "J Berger", "journal": "Springer-Verlag", "ref_id": "b3", "title": "Statistical Decision Theory and Bayesian Analysis", "year": "1985" }, { "authors": "J Bernardo; A Smith", "journal": "John Wiley", "ref_id": "b4", "title": "Bayesian Theory", "year": "1994" }, { "authors": "J Besag; J York; A Mollie", "journal": "Ann. Inst. Statist. Math", "ref_id": "b5", "title": "Bayesian image restoration with two applications in spatial statistics", "year": "1991" }, { "authors": "G Box; G Tiao", "journal": "Addison-Wesley", "ref_id": "b6", "title": "Bayesian Inference in Statistical Analysis", "year": "1973" }, { "authors": "L Breiman; J Friedman; R Olshen; C Stone", "journal": "Wadsworth", "ref_id": "b7", "title": "Classi cation and Regression Trees", "year": "1984" }, { "authors": "G Bretthorst", "journal": "Kluwer Academic", "ref_id": "b8", "title": "An introduction to model selection using probability theory as logic", "year": "1993" }, { "authors": "W Buntine", "journal": "Morgan Kaufmann", "ref_id": "b9", "title": "Classi ers: A theoretical and empirical study", "year": "1991" }, { "authors": "W Buntine", "journal": "Chapman & Hall", "ref_id": "b10", "title": "Learning classi cation trees", "year": "1991" }, { "authors": "W Buntine", "journal": "", "ref_id": "b11", "title": "Theory re nement of Bayesian networks", "year": "1991" }, { "authors": "W Buntine", "journal": "", "ref_id": "b12", "title": "Representing learning with graphical models", "year": "1994" }, { "authors": "W Buntine; A Weigend", "journal": "Complex Systems", "ref_id": "b13", "title": "Bayesian back-propagation", "year": "1991" }, { "authors": "W Buntine; A Weigend", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b14", "title": "Computing second derivatives in feed-forward networks: a review", "year": "1994" }, { "authors": "G Casella; R Berger", "journal": "Wadsworth & Brooks/Cole", "ref_id": "b15", "title": "Statistical Inference", "year": "1990" }, { "authors": "E ", "journal": "Prentice Hall", "ref_id": "b16", "title": "Introduction to Stochastic Processes", "year": "1975" }, { "authors": "", "journal": "Wadsworth & Brooks/Cole", "ref_id": "b17", "title": "Statistical Models in S. 
Paci c", "year": "1992" }, { "authors": "B Chan; R Shachter", "journal": "", "ref_id": "b18", "title": "Structural controllability and observability in in uence diagrams", "year": "1992" }, { "authors": "E Charniak", "journal": "AI Magazine", "ref_id": "b19", "title": "Bayesian networks without tears", "year": "1991" }, { "authors": "P Cheeseman; M Self; J Kelly; W Taylor; D Freeman; J Stutz", "journal": "", "ref_id": "b20", "title": "Bayesian classi cation", "year": "1988" }, { "authors": "P Cheeseman", "journal": "Morgan Kaufmann", "ref_id": "b21", "title": "On nding the most probable model", "year": "1990" }, { "authors": "W Cohen", "journal": "Morgan Kaufmann", "ref_id": "b22", "title": "Compiling prior knowledge into an explicit bias", "year": "1992" }, { "authors": "G Cooper; E Herskovits", "journal": "Machine Learning", "ref_id": "b23", "title": "A Bayesian method for the induction of probabilistic networks from data", "year": "1992" }, { "authors": "R Cowell", "journal": "Oxford University Press", "ref_id": "b24", "title": "BAIES|a probabilistic expert system shell with qualitative and quantitative learning", "year": "1992" }, { "authors": "P Dagum; A Galper; E Horvitz; A Seiver", "journal": "International Journal of Forecasting", "ref_id": "b25", "title": "Uncertain reasoning and forecasting", "year": "1994" }, { "authors": "P Dagum; E Horvitz", "journal": "", "ref_id": "b26", "title": "Reformulating inference problems through selective conditioning", "year": "1992" }, { "authors": "A Dawid", "journal": "SIAM Journal on Computing", "ref_id": "b27", "title": "Conditional independence in statistical theory", "year": "1979" }, { "authors": "A P Dawid", "journal": "Biometrics", "ref_id": "b28", "title": "Properties of diagnostic data distributions", "year": "1976" }, { "authors": "A Dawid; S Lauritzen", "journal": "Annals of Statistics", "ref_id": "b29", "title": "Hyper Markov laws in the statistical analysis of decomposable graphical models", "year": "1993" }, { "authors": "T Dean; M Wellman", "journal": "Morgan Kaufmann", "ref_id": "b30", "title": "Planning and Control", "year": "1991" }, { "authors": "M Degroot", "journal": "McGraw-Hill", "ref_id": "b31", "title": "Optimal Statistical Decisions", "year": "1970" }, { "authors": "A Dempster; N Laird; D Rubin", "journal": "Journal of the Royal Statistical Society B", "ref_id": "b32", "title": "Maximum likelihood from incomplete data via the EM algorithm", "year": "1977" }, { "authors": "R Duda; P Hart", "journal": "John Wiley", "ref_id": "b33", "title": "Pattern Classi cation and Scene Analysis", "year": "1973" }, { "authors": "M Frydenberg", "journal": "Scandinavian Journal of Statistics", "ref_id": "b34", "title": "The chain graph Markov property", "year": "1990" }, { "authors": "D Geiger; D Heckerman", "journal": "", "ref_id": "b35", "title": "Learning Gaussian networks", "year": "1994" }, { "authors": "D Geman", "journal": "Springer-Verlag", "ref_id": "b36", "title": "Random elds and inverse problems in imaging", "year": "1990" }, { "authors": "S Geman; D Geman", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b37", "title": "Stochastic relaxation, Gibbs distributions, and the Bayesian relation of images", "year": "1984" }, { "authors": "W Gilks; D Clayton; D Spiegelhalter; N Best; A Mcneil; L Sharples; A Kirby", "journal": "Journal of the Royal Statistical Society B", "ref_id": "b38", "title": "Modelling complexity: applications of Gibbs sampling in medicine", "year": "1993" }, { "authors": 
"W Gilks; A Thomas; D Spiegelhalter", "journal": "The Statistician", "ref_id": "b39", "title": "A language and program for complex Bayesian modelling", "year": "1993" }, { "authors": "P E Gill; W Murray; M H Wright", "journal": "Academic Press", "ref_id": "b40", "title": "Practical Optimization", "year": "1981" }, { "authors": "", "journal": "SIAM", "ref_id": "b41", "title": "Automatic Di erentiation of Algorithms: Theory, Implementation, and Application", "year": "1991" }, { "authors": "D Heckerman; D Geiger; D Chickering", "journal": "Submitted Machine Learning Journal", "ref_id": "b42", "title": "Learning Bayesian networks: The combination of knowledge and statistical data", "year": "1994" }, { "authors": "D Heckerman", "journal": "MIT Press", "ref_id": "b43", "title": "Probabilistic Similarity Networks", "year": "1991" }, { "authors": "M Henrion", "journal": "Wiley", "ref_id": "b44", "title": "Towards e cient inference in multiply connected belief networks", "year": "1990" }, { "authors": "J Hertz; A Krogh; R Palmer", "journal": "Addison-Wesley", "ref_id": "b45", "title": "Introduction to the Theory of Neural Computation", "year": "1991" }, { "authors": "Buntine Howard; R ", "journal": "", "ref_id": "b46", "title": "Decision analysis: perspectives on inference, decision, and experimentation", "year": "1970" }, { "authors": "T Hrycej", "journal": "Arti cial Intelligence", "ref_id": "b47", "title": "Gibbs sampling in Bayesian networks", "year": "1990" }, { "authors": "H Je Reys", "journal": "Clarendon Press", "ref_id": "b48", "title": "Theory of Probability", "year": "1961" }, { "authors": "D Johnson; C Papdimitriou; M Yannakakis", "journal": "FOCS", "ref_id": "b49", "title": "How easy is local search?", "year": "1985" }, { "authors": "R Kass; A Raftery", "journal": "American Statistical Association", "ref_id": "b50", "title": "Bayes factors and model uncertainty", "year": "1993" }, { "authors": "U Kj Ru", "journal": "", "ref_id": "b51", "title": "A computational scheme for reasoning in dynamic probabilistic networks", "year": "1992" }, { "authors": "R Kohavi", "journal": "", "ref_id": "b52", "title": "Bottom-up induction of oblivious, read-once decision graphs : Strengths and limitations", "year": "1994" }, { "authors": "K Lange; J Sinsheimer", "journal": "Journal of Computational and Graphical Statistics", "ref_id": "b53", "title": "Normal/independent distributions and their applications in robust regression", "year": "1993" }, { "authors": "P Langley; W Iba; K Thompson", "journal": "", "ref_id": "b54", "title": "An analysis of Bayesian classi ers", "year": "1992" }, { "authors": "S Lauritzen; A Dawid; B Larsen; H.-G Leimer", "journal": "Networks", "ref_id": "b55", "title": "Independence properties of directed Markov elds", "year": "1990" }, { "authors": "R Little; D Rubin", "journal": "John Wiley and Sons", "ref_id": "b56", "title": "Statistical Analysis with Missing Data", "year": "1987" }, { "authors": "T Loredo", "journal": "Springer-Verlag", "ref_id": "b57", "title": "The promise of Bayesian inference for astrophysics", "year": "1992" }, { "authors": "D Mackay", "journal": "Neural Computation", "ref_id": "b58", "title": "A practical Bayesian framework for backprop networks", "year": "1992" }, { "authors": "D Mackay", "journal": "", "ref_id": "b59", "title": "Bayesian non-linear modeling for the energy prediction competition", "year": "1993" }, { "authors": "D Madigan; A Raftery", "journal": "Journal of the American Statistical Association", "ref_id": "b60", "title": "Model 
selection and accounting for model uncertainty in graphical models using Occam's window", "year": "1994" }, { "authors": "P Mccullagh; J Nelder", "journal": "Chapman and Hall", "ref_id": "b61", "title": "Generalized Linear Models", "year": "1989" }, { "authors": "G J Mclachlan; K E Basford", "journal": "Marcel Dekker", "ref_id": "b62", "title": "Mixture Models: Inference and Applications to Clustering", "year": "1988" }, { "authors": "I Meilijson", "journal": "J. Roy. Statist. Soc. B", "ref_id": "b63", "title": "A fast improvement to the EM algorithm on its own terms", "year": "1989" }, { "authors": "S Minton; M Johnson; A Philips; P Laird", "journal": "", "ref_id": "b64", "title": "Solving large-scale constraintsatisfaction and scheduling problems using a heuristic repair method", "year": "1990" }, { "authors": "R Neal", "journal": "", "ref_id": "b65", "title": "Probabilistic inference using Markov chain Monte Carlo methods", "year": "1993" }, { "authors": "S Nowlan; G Hinton", "journal": "", "ref_id": "b66", "title": "Simplifying neural networks by soft weight sharing", "year": "1992" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b67", "title": "", "year": "" }, { "authors": "J Oliver", "journal": "", "ref_id": "b68", "title": "Decision graphs { an extension of decision trees", "year": "1993" }, { "authors": "J Pearl", "journal": "", "ref_id": "b69", "title": "Probabilistic Reasoning in Intelligent Systems", "year": "1988" }, { "authors": "Morgan Kaufmann; W Poland", "journal": "", "ref_id": "b70", "title": "Decision Analysis with Continuous and Discrete Variables: A Mixture Distribution Approach", "year": "1994" }, { "authors": "S Press", "journal": "Wiley", "ref_id": "b71", "title": "Bayesian Statistics", "year": "1989" }, { "authors": "J Quinlan", "journal": "", "ref_id": "b72", "title": "Unknown attribute values in induction", "year": "1989" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b73", "title": "", "year": "" }, { "authors": "J Quinlan", "journal": "", "ref_id": "b74", "title": "C4.5: Programs for Machine Learning", "year": "1992" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b75", "title": "", "year": "" }, { "authors": "B Ripley", "journal": "Wiley", "ref_id": "b76", "title": "Spatial Statistics", "year": "1981" }, { "authors": "B Ripley", "journal": "John Wiley & Sons", "ref_id": "b77", "title": "Stochastic Simulation", "year": "1987" }, { "authors": "R Rivest", "journal": "Machine Learning", "ref_id": "b78", "title": "Learning decision lists", "year": "1987" }, { "authors": "S Russell; J Binder; D Koller", "journal": "", "ref_id": "b79", "title": "Adaptive probabilistic networks", "year": "1994-07" }, { "authors": "B Selman; H Levesque; D Mitchell", "journal": "", "ref_id": "b80", "title": "A new method for solving hard satisability problems", "year": "1992" }, { "authors": "R Shachter", "journal": "Operations Research", "ref_id": "b81", "title": "Evaluating in uence diagrams", "year": "1986" }, { "authors": "R Buntine Shachter", "journal": "Networks", "ref_id": "b82", "title": "An ordered examination of in uence diagrams", "year": "1990" }, { "authors": "R Shachter; S Andersen; P Szolovits", "journal": "", "ref_id": "b83", "title": "Global conditioning for probabilistic inference in belief networks", "year": "1994" }, { "authors": "R Shachter; D Heckerman", "journal": "AI Magazine", "ref_id": "b84", "title": "Thinking backwards for knowledge acquisition", "year": "1987" }, { "authors": "R Shachter; C Kenley", "journal": 
"Management Science", "ref_id": "b85", "title": "Gaussian in uence diagrams", "year": "1989" }, { "authors": "A Smith; D Spiegelhalter", "journal": "Journal of the Royal Statistical Society B", "ref_id": "b86", "title": "Bayes factors and choice criteria for linear models", "year": "1980" }, { "authors": "D Spiegelhalter", "journal": "", "ref_id": "b87", "title": "", "year": "1993" }, { "authors": "D Spiegelhalter; A Dawid; S Lauritzen; R Cowell", "journal": "Statistical Science", "ref_id": "b88", "title": "Bayesian analysis in expert systems", "year": "1993" }, { "authors": "D Spiegelhalter; S Lauritzen", "journal": "Networks", "ref_id": "b89", "title": "Sequential updating of conditional probabilities on directed graphical structures", "year": "1990" }, { "authors": "S Srinivas; J Breese", "journal": "", "ref_id": "b90", "title": "IDEAL: A software package for analysis of in uence diagrams", "year": "1990" }, { "authors": "L Stewart", "journal": "The Statistician", "ref_id": "b91", "title": "Hierarchical Bayesian analysis using Monte Carlo integration: computing posterior distributions when there are many possible models", "year": "1987" }, { "authors": "M Tanner", "journal": "Springer-Verlag", "ref_id": "b92", "title": "Tools for Statistical Inference", "year": "1993" }, { "authors": "A Thomas; D Spiegelhalter; W Gilks", "journal": "Oxford University Press", "ref_id": "b93", "title": "BUGS: A program to perform Bayesian inference using Gibbs sampling", "year": "1992" }, { "authors": "L Tierney; J Kadane", "journal": "Journal of the American Statistical Association", "ref_id": "b94", "title": "Accurate approximations for posterior moments and marginal densities", "year": "1986" }, { "authors": "D Titterington; A Smith; U Makov", "journal": "John Wiley & Sons", "ref_id": "b95", "title": "Statistical Analysis of Finite Mixture Distributions", "year": "1985" }, { "authors": "P Van Laarhoven; E Aarts", "journal": "D. Reidel", "ref_id": "b96", "title": "Simulated Annealing: Theory and Applications", "year": "1987" }, { "authors": "T Vuong", "journal": "Econometrica", "ref_id": "b97", "title": "Likelihood ratio tests for model selection and non-nested hypotheses", "year": "1989" }, { "authors": "P J Werbos; T Mcavoy; T Su", "journal": "", "ref_id": "b98", "title": "Neural networks, system identi cation, and control in the chemical process industry", "year": "1992" }, { "authors": "N Wermuth; S Lauritzen", "journal": "Journal of the Royal Statistical Society B", "ref_id": "b99", "title": "On substantive research hypotheses, conditional independence graphs and graphical chain models", "year": "1989" }, { "authors": "J Whittaker", "journal": "Wiley", "ref_id": "b100", "title": "Graphical Models in Applied Multivariate Statistics", "year": "1990" }, { "authors": "D Wolpert", "journal": "Morgan Kaufmann", "ref_id": "b101", "title": "Bayesian backpropagation over functions rather than weights", "year": "1994" } ]
[ { "formula_coordinates": [ 6, 90, 354.9, 432.24, 40.6 ], "formula_id": "formula_0", "formula_text": "representation p(X) = Y C2Cliques(G) f C (C) ;(4)" }, { "formula_coordinates": [ 14, 130.12, 429.38, 344.23, 152.02 ], "formula_id": "formula_1", "formula_text": "var 2 x 1 θ 1 θ 2 µ 2 var 1 N σ 2 x 2 µ 1 σ 1 Model = M 1 Model = M 2 var 2 x 1 θ 1 θ 2 µ 2 var 1 N σ 2 x 2 µ 1 ' σ 1 ' Gaussian Gaussian Gaussian Gaussian" }, { "formula_coordinates": [ 15, 182.88, 592.56, 246.24, 37.9 ], "formula_id": "formula_2", "formula_text": "p(xjsample) = X i p(M i jsample) p(xjsample; M i ) :" }, { "formula_coordinates": [ 17, 227.76, 305.64, 289.56, 32.4 ], "formula_id": "formula_3", "formula_text": "p( j 1 ; 2 ) = 1 1 (1 ) 2 1 Beta( 1 ; 2 ) (11" }, { "formula_coordinates": [ 17, 517.32, 314.52, 4.92, 15.2 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 18, 148.08, 242.64, 360.72, 53.5 ], "formula_id": "formula_5", "formula_text": "; N) = p( ) p( 1 ) p( 2 ) p( 3 ) N Y i=1" }, { "formula_coordinates": [ 19, 184.32, 505.2, 337.92, 38.62 ], "formula_id": "formula_6", "formula_text": "p(XjM(G)) = Y 2T Y i2indval( ) p( i jparents( i ); M) : (12)" }, { "formula_coordinates": [ 22, 294.24, 285.96, 223.08, 32.4 ], "formula_id": "formula_7", "formula_text": "; 2 = 1:5) = 1 1+p (1 ) 2 1+n Beta( 1 + p; 2 + n) (13" }, { "formula_coordinates": [ 22, 517.32, 294.84, 4.92, 15.2 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 23, 235.2, 343.44, 282.12, 45.34 ], "formula_id": "formula_9", "formula_text": "M) = h(x; y) Z( ) exp k X i=1 w i ( )t i (x; y) ! (14" }, { "formula_coordinates": [ 23, 517.32, 362.28, 4.92, 15.2 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 23, 145.2, 529.2, 342.96, 45.48 ], "formula_id": "formula_11", "formula_text": "; heads N ; 1 ; 2 ) = 1 Beta( 1 + p; 2 + n) exp (( 1 + p 1) log + ( 2 + n 1) log(1 )) :" }, { "formula_coordinates": [ 24, 168, 657.36, 349.32, 45.58 ], "formula_id": "formula_12", "formula_text": "p( j ; M) = f( ) Z ( ) exp k+1 (log 1=Z 1 ( )) + k X i=1 i w i ( ) ! 
(15" }, { "formula_coordinates": [ 24, 517.32, 676.2, 4.92, 15.2 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 25, 196.8, 132.6, 87.12, 56.02 ], "formula_id": "formula_14", "formula_text": "0 k+1 = k+1 + N 0 i = i + N X j=1" }, { "formula_coordinates": [ 27, 114.24, 538.32, 393.6, 100.54 ], "formula_id": "formula_15", "formula_text": "x n ; ; ) = 1 p 2 exp 0 B @ 1 2 2 0 @ y M X j=1 basis j (x : ) j 1 A 2 1 C A ; = 1 p 2 exp 0 @ 1 2 2 y 2 M X j;k=1 basis j (x : ) basis k (x : ) j k 2 2 + M X j=1 basis j (x : ) y j 2 2 1 A :" }, { "formula_coordinates": [ 28, 215.04, 278.16, 74.4, 37.9 ], "formula_id": "formula_16", "formula_text": "S j;k = 1 N N X i=1" }, { "formula_coordinates": [ 28, 216.48, 314.4, 136.56, 74.38 ], "formula_id": "formula_17", "formula_text": "q j = 1 N N X i=1 basis j (x :;i ) y i ysq = 1 N N X i=1" }, { "formula_coordinates": [ 29, 283.92, 331.68, 84, 37.9 ], "formula_id": "formula_18", "formula_text": "= l X i=1 u i (X) v i ( )" }, { "formula_coordinates": [ 32, 112.17, 98.39, 387.75, 155.39 ], "formula_id": "formula_19", "formula_text": "w 1 w 2 w 3 w 4 w 5 x 1 x 2 x 3 m 1 m 2 o 1 o 2 h 1 h 2 h 3 Sigmoid Sigmoid Sigmoid Σ N Sigmoid Gaussian Gaussian x 1 x 2 x 3 m 1 m 2 (a) (b)" }, { "formula_coordinates": [ 32, 140.16, 344.16, 322.56, 74.62 ], "formula_id": "formula_21", "formula_text": "; w 5 ) N Y i=1 det 1=2 2 exp 1 2 (o i m i ) y (o i m i ) m i = Sigmoid(w y i h) h i = Sigmoid(w y i+2 x) :" }, { "formula_coordinates": [ 34, 132.96, 226.44, 389.28, 100.68 ], "formula_id": "formula_22", "formula_text": "@ i = 1 nd( i ) @ log p( i jparents( i )) @ l i (17) + X x2ndchildren( i )\\children( i ) @ log p(xjparents(x)) @ l i + X x2ndchildren( i )" }, { "formula_coordinates": [ 35, 100.08, 301.92, 144, 37.9 ], "formula_id": "formula_23", "formula_text": "= 10 X c=1 1 class=c log c + 3 X j=1 10 X c=1" }, { "formula_coordinates": [ 35, 190.8, 657.36, 230.4, 45.58 ], "formula_id": "formula_24", "formula_text": "p(xjy; ; M) = h(x; y) Z( ) exp k X i=1 w i ( )t i (x; y) ! 
:" }, { "formula_coordinates": [ 36, 213.36, 289.2, 185.28, 42.94 ], "formula_id": "formula_25", "formula_text": "p(XjY ) = Q C2Cliques(G 0 ) f C (C) P X Q C2Cliques(G 0 ) f C (C) :" }, { "formula_coordinates": [ 36, 91.2, 347.52, 430.8, 49.18 ], "formula_id": "formula_26", "formula_text": "@ log p(XjY ) @ l x = 0 @ X C2Cliques(G 0 );x2C @ log f C (C) @ l x 1 A E XjY 0 @ X C2Cliques(G 0 );x2C @ log f C (C) @ l x 1 A :" }, { "formula_coordinates": [ 37, 151.16, 104.12, 310.53, 135.4 ], "formula_id": "formula_27", "formula_text": "θ 1 var 1 N var 2 θ 2 var 1 N x 1 var 1 µ 1 σ 1 N x 1 µ 2 N σ 2 x 2 var 1" }, { "formula_coordinates": [ 38, 158.16, 282.72, 364.08, 37.9 ], "formula_id": "formula_28", "formula_text": "evidence(M) = p(known(X )jM) = f 0 Y i f i (known(X i; )) (23)" }, { "formula_coordinates": [ 38, 130.01, 433.49, 350.23, 166.09 ], "formula_id": "formula_29", "formula_text": "2 x 1 θ 1 θ 2 var 1 N x 2 θ 3 θ 4 x 3 θ 5 (a) (b) var 2 θ 1 θ 2 var 1 N var 2 x 1 var 1 N x 2 θ 3 θ 4 N x 2" }, { "formula_coordinates": [ 39, 219.12, 236.88, 303.12, 37.9 ], "formula_id": "formula_30", "formula_text": "evidence(M) = P Y i=0 evidence(M S i ) :(24)" }, { "formula_coordinates": [ 42, 216.72, 328.08, 130.8, 110.62 ], "formula_id": "formula_31", "formula_text": "S 2jj = N X i=1 1 var 1;i =j y i y y i ; m 2jj = N X i=1 1 var 1;i =j x i y i ; s 2 2jj = N X i=1" }, { "formula_coordinates": [ 42, 176.16, 540.6, 327.84, 44.4 ], "formula_id": "formula_32", "formula_text": "; ; M 1 ) = Y j=0;1 det 1=2 0;2jj n 1;j =2 det 1=2 2jj (( 0;2jj + n 1;0 )=2) ( 0;2jj +n 1;j )=2 2jj ( 0;2jj =2) 0;2jj =2 0;2jj :" }, { "formula_coordinates": [ 43, 132.02, 99.93, 339.5, 132.81 ], "formula_id": "formula_33", "formula_text": "θ 1 n 1 n 2,*|* θ 2 x 1|* s 2 1|* µ 1 σ 1 µ 2 σ 2 ,* m 2|* s 2 2|* S 2|*" }, { "formula_coordinates": [ 45, 226.32, 282.72, 160.56, 37.9 ], "formula_id": "formula_34", "formula_text": "1 I I X i=1 g(x i ) ! g(x) = E (g(x)) :" }, { "formula_coordinates": [ 46, 241.75, 293.21, 132.35, 78.2 ], "formula_id": "formula_35", "formula_text": "var 1 var 2 µ 1 µ 2 φ class" }, { "formula_coordinates": [ 49, 229.2, 308.64, 153.6, 74.38 ], "formula_id": "formula_36", "formula_text": "n j = N X i=1 1 class i =j ; n v;kjj = N X i=1 1 class i =j 1 var v;i =k :" }, { "formula_coordinates": [ 49, 143.28, 454.8, 67.44, 37.9 ], "formula_id": "formula_37", "formula_text": "n v;kjj = N X i=1" } ]
Operations for Learning with Graphical Models
This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided, including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. The main original contributions here are the decomposition techniques and the demonstration that graphical models provide a framework for understanding and developing complex learning algorithms. Some example algorithms: the closed-form solutions to learning can sometimes be used to form a fast inner loop of more complex algorithms. Section 8 illustrates how graphical models help here. The conclusion lists some common algorithms and their derivation within the above framework. Proofs of lemmas and theorems are collected in Appendix A.
Wray L Buntine
[ { "figure_caption": "Figure 1 :1Figure 1: A software generator", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Graphical conditional models conditional models represented have been shaded. This shading means the values of these variables are known or given. In Figure 4(b), the shading indicates that the value for x is known, but the value for c is unknown. Presumably c will be predicted using x. Figure4(a)", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 4(c) corresponds to the statement: p(unitjx 1 ; : : :; x n ) = unit = f(x 1 ;:::;xn) =", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Two equivalent conditional models of the medical problem bility, p(Age; Occ; Clim; Dis; Symp) for the two graphs (a) and (b) is: p(Age) p(OccjAge) p(ClimjAge; Occ) p(DisjAge; Occ; Clim) p(SympjAge; Dis) p(Age) p(Occ) p(Clim) p(DisjAge; Occ; Clim) p(SympjAge; Dis) : However, because four of the ve nodes are shaded, this means their values are known. The conditional distributions computed from the above are identical: p(DisjAge; Occ; Clim; Symp) =", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: An expanded medical problem and single symptom node of Figure 2 are expanded to represent the case where there are two possibly co-occurring diseases and three possibly co-occurring symptoms. The medical specialist may have said something like: \\Lung disease and heart disease can in uence each other, or may have some hidden common cause; however, it is often di cult to tell which is the cause of which, if at all.\" In the causal model, join the two disease nodes by an undirected arc to represent direct in uence. Likewise for the symptom nodes. The resultant joint distribution takes the form: p(Age; Occ; Clim; Heart-Dis; Lung-Dis; Symp-A; Symp-B; Symp-C) = (6) p(Age) p(Occ) p(Clim) p(Heart-Dis; Lung-DisjAge; Occ; Clim) p(Symp-A; Symp-B; Symp-CjHeart-Dis; Lung-Dis)", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "p( jparents( ); M) has a form similar to Equation (4).", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: An expanded medical problem used here where Bayesian networks can also be used. In this case: Comment 2.2 The chain components of a Bayesian network are the singleton sets of individual variables in the graph. Furthermore, chain-components(A) = A.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure8: Decomposing a chain graph its directed and undirected components together with the Bayesian network on the right showing how they are pieced together. Having done this decomposition, the components are analyzed using all the machinery of directed and undirected graphs. 
The interpretation of these graphs in terms of independence statements and the implied functional form of the joint probability is a combination of the previous two forms given in Equation (2) and Theorem 2.1, based on(Frydenberg, 1990, Theorem 4.1), and on the interpretation of conditional graphical models in Section 2.3.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: A simple classi cation problem", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Learning the simple classi cation", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Two graphical models. Which should learning select?", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Averaging over multiple Bayesian networks", "figure_data": "", "figure_id": "fig_11", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Tossing a coin: model without and with a plate", "figure_data": "", "figure_id": "fig_12", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "QFigure 15 :15Figure 15: Simple unsupervised learning, with a plate hidden class variable is not shaded so it is not given. The corresponding transformation for the supervised learning problem of Figure 10, where the classes are given, and thus corresponds to the idiot's Bayes classi er, is identical to Figure 15 except that the class variable is shaded because the classes are now part of the training sample.Many learning problems can be similarly modeled with plates. Write down the graphical model for the full learning problem with only a single case provided. Put a box around the data part of the model, pull out the model parameters (for instance, the weights of the network or the classi cation parameters), and ensure they are unshaded because they are unknown. Now add the data set size (N) to the bottom left corner.The notion of a plate is formalized below. 
This formalization is included for use in subsequent proofs.", "figure_data": "", "figure_id": "fig_13", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: Arc reversal: reversing nodes a and b", "figure_data": "", "figure_id": "fig_14", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "(b)) p(ajparents(a)) ; p(bja; A) = p(bjparents(b)) p(ajparents(a)) P b p(bjparents(b)) p(ajparents(a)) :", "figure_data": "", "figure_id": "fig_15", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: Removing the plate in the coin problem", "figure_data": "", "figure_id": "fig_16", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 :18Figure 18: The generalized graph for plate removal", "figure_data": "", "figure_id": "fig_17", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19: Linear regression with Gaussian error", "figure_data": "", "figure_id": "fig_18", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 20: The linear regression problem joint probability for this model is as follows: p( ) p( j ) 1 p 2", "figure_data": "", "figure_id": "fig_19", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 21 :21Figure 21: The linear regression problem with the plate removed", "figure_data": "", "figure_id": "fig_20", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "Figure 22 :22Figure 22: Three categories of algorithms using the exponential family", "figure_data": "", "figure_id": "fig_21", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "Figure 23 :23Figure 23: Learning a feed-forward network", "figure_data": "", "figure_id": "fig_22", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 25 shows how this decomposition works when there are unknown nodes. Figure 25(a) shows the basic problem and Figure 25(b) shows the nest decomposition. Notice the bottom component cannot be further decomposed because the variable x 1 is unknown.", "figure_data": "", "figure_id": "fig_23", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "var", "figure_data": "", "figure_id": "fig_24", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2626Figure 26: A family of models (optional arcs hatched)", "figure_data": "", "figure_id": "fig_25", "figure_label": "26", "figure_type": "figure" }, { "figure_caption": "Figure 27 :27Figure 27: The full simpli cation of model M 1", "figure_data": "", "figure_id": "fig_26", "figure_label": "27", "figure_type": "figure" }, { "figure_caption": "): p(xjX fxg) = exp P C : x2C2cliques(G) f C (C) P x exp P C : x2C2cliques(G) f C (C) :", "figure_data": "", "figure_id": "fig_27", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 30 (Figure 30 :3030Figure 30(a) illustrates Step 2(a) in the language of graphs. Figure 30(b) illustrates", "figure_data": "", "figure_id": "fig_28", "figure_label": "3030", "figure_type": "figure" }, { "figure_caption": "Figure 31(I) shows the general learning problem, extending the mixture model of Figure 22. This same structure appears in Figure 29. However, in Figure 31(I) the su cient statistics T(x ; u ) are also shown. Figure 31(II) shows more of the algorithm, generalizing Figure 30(a) and (b). Again the role of su cient statistics is shown. 
The sampling in Figure 30(b) rst computes the su cient statistics and then sampling applies from that.", "figure_data": "", "figure_id": "fig_29", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 31 :31Figure 31: Gibbs sampling with the exponential family 7.3 A closed form approximation What would happen to Gibbs sampling if the number of cases in the training sample, N,", "figure_data": "", "figure_id": "fig_30", "figure_label": "31", "figure_type": "figure" }, { "figure_caption": "Figure 32 :32Figure32: Linear regression with heterogeneous variance the standard deviation s is not given but is computed via:", "figure_data": "", "figure_id": "fig_31", "figure_label": "32", "figure_type": "figure" }, { "figure_caption": "Figure 33 :33Figure 33: The heterogeneous variance problem with the plate simpli ed", "figure_data": "", "figure_id": "fig_32", "figure_label": "33", "figure_type": "figure" }, { "figure_caption": "Figure 34 :34Figure 34: Simpli ed learning of a feed-forward network with linear output", "figure_data": "", "figure_id": "fig_33", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "A to a, then: g(X B; A = a) h(X A) = Y i f i (X i ; A = a) :Multiplying both sides of the two equalities together, and substitute in g(X B; A = a) h(X A; B = b) = p(X; A = a; B = b)", "figure_data": "", "figure_id": "fig_34", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "C i ) dunknown(X j ; ) :Furthermore, the potential functions on the cliques in G i are well de ned as described.A.3 Proof of Corollary 6.1.1 If X j = j ndparents( j ), then every clique in a chain component in j will occur in cliques j", "figure_data": "", "figure_id": "fig_37", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Lemma 5.1 A chain graph has a single plate. Let the non-deterministic variables inside the plate be X, and the deterministic variables be Y . Let the variables outside the plate be . If:1. All arcs crossing a plate boundary are directed into the plate.2. For all chain components , the conditional distribution p( jparents( )) is from the exponential family with data variables from (X; Y ) and model parameters from ; furthermore log p( jparents( ); ) is a polynomial function of variables in Y . 3. Each variable y 2 Y can be expressed as a deterministic function of the form y", "figure_data": "", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Derivation of learning algorithms", "figure_data": "ProblemMethodSectionsBayesian networks and expo-nential family conditionals Bayesian networks with miss-ing/latent variables, and other unsupervised learning models Feed-forward networks Feed-forward networks with linear output Linear regression and extensions Generalized linear modelsDecomposition of exact Bayes factors, with local search or Gibbs sampling to generate alternative models Gibbs sampling or EM making use of above techniques where possible MAP method by exact computation of derivatives As above with an initial removal of the linear component Least squares, EM, and MAP MAP method by exact computation of derivatives6.2, 6.3, 8.3 7.2, 7.4, 8.3 6.1 8.2 4.4, 8.1 2.3, 6.1", "figure_id": "tab_6", "figure_label": "1", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b33", "b8", "b20", "b28" ], "table_ref": [], "text": "For many years, the superiority of partial-order planners over total-order planners has been tacitly assumed by the planning community. Originally, partial-order planning was introduced by Sacerdoti (1975) as a way to improve planning e ciency by avoiding \\premature commitments to a particular order for achieving subgoals\". The utility of partial-order planning was demonstrated anecdotally by showing how such a planner could e ciently solve blocksworld examples, such as the well-known \\Sussman anomaly\".\nSince partial-order planning intuitively seems like a good idea, little attention has been devoted to analyzing its utility, at least until recently (Minton, Bresina, & Drummond, 1991a;Barrett & Weld, 1994;Kambhampati, 1994c). However, if one looks closely at the issues involved, a number of questions arise. For example, do the advantages of partialorder planning hold regardless of the search strategy used? Do the advantages hold when the planning language is so expressive that reasoning about partially ordered plans is intractable (e.g., if the language allows conditional e ects)?\nOur work (Minton et al., 1991a(Minton et al., , 1992) ) has shown that the situation is much more interesting than might be expected. We have found that there are some \\unstated assumptions\" underlying the supposed e ciency of partial-order planning. For instance, the superiority of partial-order planning can depend critically upon the search strategy and search heuristics employed.\nThis paper summarizes our observations regarding partial-order and total-order planning. We begin by considering a simple total-order planner and a closely related partialorder planner and establishing a mapping between their search spaces. We then examine the relative sizes of their search spaces, demonstrating that the partial-order planner has a fundamental advantage because the size of its search space is always less than or equal to that of the total-order planner. However, this advantage does not necessarily translate into an e ciency gain; this depends on the type of search strategy used. For example, we describe a domain where our partial order planner is more e cient than our total order planner when depth-rst search is used, but the e ciency gain is lost when an iterative sampling strategy is used.\nWe also show that partial-order planners can have a second, independent advantage when certain types of operator ordering heuristics are employed. This \\heuristic advantage\" underlies Sacerdoti's anecdotal examples explaining why least-commitment works. However, in our blocksworld experiments, this second advantage is relatively unimportant compared to the advantage derived from the reduction in search space size.\nFinally, we look at how our results extend to partial-order planners in general. We describe how the advantages of partial-order planning can be preserved even if highly expressive languages are used. We also show that the advantages do not necessarily hold for all partial-order planners, but depend critically on the construction of the planning space." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b14", "b38", "b35", "b40", "b16", "b37", "b31" ], "table_ref": [], "text": "Planning can be characterized as search through a space of possible plans. 
A total-order planner searches through a space of totally ordered plans; a partial-order planner is defined analogously. We use these terms, rather than the terms "linear" and "nonlinear", because the latter are overloaded. For example, some authors have used the term "nonlinear" when focusing on the issue of goal ordering. That is, some "linear" planners, when solving a conjunctive goal, require that all subgoals of one conjunct be achieved before subgoals of the others; hence, planners that can arbitrarily interleave subgoals are often called "nonlinear". This version of the linear/nonlinear distinction is different from the partial-order/total-order distinction investigated here. The former distinction impacts planner completeness, whereas the total-order/partial-order distinction is orthogonal to this issue (Drummond & Currie, 1989; Minton et al., 1991a).
The total-order/partial-order distinction should also be kept separate from the distinction between "world-based planners" and "plan-based planners". The distinction is one of modeling: in a world-based planner, each search state corresponds to a state of the world and in a plan-based planner, each search state corresponds to a plan. While total-order planners are commonly associated with world-based planners, such as Strips, several well-known total-order planners have been plan-based, such as Waldinger's regression planner (Waldinger, 1975), Interplan (Tate, 1974) and Warplan (Warren, 1974). Similarly, partial-order planners are commonly plan-based, but it is possible to have a world-based partial-order planner (Godefroid & Kabanza, 1991). In this paper, we focus solely on the total-order/partial-order distinction in order to avoid complicating the analysis.
We claim that the only significant difference between partial-order and total-order planners is planning efficiency. It might be argued that partial-order planning is preferable because a partially ordered plan can be more flexibly executed. However, execution flexibility can also be achieved with a total-order planner and a post-processing step that removes unnecessary orderings from the totally ordered solution plan to yield a partial order (Backstrom, 1993; Veloso, Perez, & Carbonell, 1990; Regnier & Fade, 1991). The polynomial time complexity of this post-processing is negligible compared to the search time for plan generation.1 Hence, we believe that execution flexibility is, at best, a weak justification for the supposed superiority of partial-order planning.
In the following sections, we analyze the relative efficiency of partial-order and total-order planning by considering a total-order planner and a partial-order planner that can be directly compared. Elucidating the key differences between these planning algorithms reveals some important principles that are of general relevance." }, { "figure_ref": [], "heading": "Terminology", "publication_ref": [ "b6" ], "table_ref": [], "text": "A plan consists of an ordered set of steps, where each step is a unique operator instance. Plans can be totally ordered, in which case every step is ordered with respect to every other step, or partially ordered, in which case steps can be unordered with respect to each other. We assume that a library of operators is available, where each operator has preconditions, deleted conditions, and added conditions. All of these conditions must be nonnegated propositions, and we adopt the common convention that each deleted condition is a precondition.
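To make this terminology concrete, the following is a minimal Python sketch of the representations just described. All names here (Operator, Step, true_before) are our own illustrative choices, not code from the paper; the truth test implements the totally ordered case defined in the next paragraph.

```python
from dataclasses import dataclass, field
from itertools import count

_uids = count()

@dataclass(frozen=True)
class Operator:
    name: str
    pre: frozenset   # preconditions (non-negated propositions)
    add: frozenset   # added conditions
    dele: frozenset  # deleted conditions; by convention each is also a precondition

@dataclass(frozen=True)
class Step:
    op: Operator
    uid: int = field(default_factory=lambda: next(_uids))  # unique step label

def true_before(plan, index, cond):
    """In a totally ordered plan (a list of Steps), a precondition of
    plan[index] is true iff some earlier step adds it and no intervening
    step deletes it afterwards."""
    holds = False
    for step in plan[:index]:
        if cond in step.op.add:
            holds = True
        if cond in step.op.dele:
            holds = False
    return holds
```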
Later in this paper we show how our results can be extended to more expressive languages, but this simple language is sufficient to establish the essence of our argument.
A linearization of a partially ordered plan is a total order over the plan's steps that is consistent with the existing partial order. In a totally ordered plan, a precondition of a plan step is true if it is added by an earlier step and not deleted by an intervening step. In a partially ordered plan, a step's precondition is possibly true if there exists a linearization in which it is true, and a step's precondition is necessarily true if it is true in all linearizations. A step's precondition is necessarily false if it is not possibly true.
A state consists of a set of propositions. A planning problem is defined by an initial state and a set of goals, where each goal is a proposition. For convenience, we represent a problem as a two-step initial plan, where the propositions that are true in the initial state are added by the first step, and the goal propositions are the preconditions of the final step. The planning process starts with this initial plan and searches through a space of possible plans. A successful search terminates with a solution plan, i.e., a plan in which all steps' preconditions are necessarily true. The search space can be characterized as a tree, where each node corresponds to a plan and each arc corresponds to a plan transformation. Each transformation incrementally extends (i.e., refines) a plan by adding additional steps or orderings. Thus, each leaf in the search tree corresponds either to a solution plan or a dead-end, and each intermediate node corresponds to an unfinished plan which can be further extended.
1. Backstrom (1993) formalizes the problem of removing unnecessary orderings in order to produce a "least-constrained" plan. He shows that the problem is polynomial if one defines a least-constrained plan as a plan in which no orderings can be removed without impacting the correctness of the plan. Backstrom also shows that the problem of finding a plan with the fewest orderings over a given operator set is a much harder problem; it is NP-hard.
TO(P, G)
1. Termination check: If G is empty, report success and return solution plan P.
2. Goal selection: Let c = select-goal(G), and let O_need be the plan step for which c is a precondition.
3. Operator selection: Let O_add be an operator in the library that adds c. If there is no such O_add, then terminate and report failure. Choice point: all such operators must be considered for completeness.
4. Ordering selection: Let O_del be the last deleter of c. Insert O_add somewhere between O_del and O_need, call the resulting plan P'. Choice point: all such positions must be considered for completeness.
5. Goal updating: Let G' be the set of preconditions in P' that are not true.
6. Recursive invocation: TO(P', G')." }, { "figure_ref": [ "fig_0", "fig_1", "fig_0" ], "heading": "A Tale of Two Planners", "publication_ref": [ "b38", "b35", "b38" ], "table_ref": [], "text": "In this section we define two simple planning algorithms. The first algorithm, shown in Figure 1, is to, a total-order planner motivated by Waldinger's regression planner (Waldinger, 1975), Interplan (Tate, 1974), and Warplan (Warren, 1974). Our purpose here is to characterize the search space of the to planning algorithm, and the pseudo-code in Figure 1 accomplishes this by defining a nondeterministic procedure that enumerates possible plans. (If the plans are enumerated by a breadth-first search, then the algorithms presented in this section are provably complete, as shown in Appendix A.) to accepts an unfinished plan, P, and a goal set, G, containing preconditions which are currently not true. If the algorithm terminates successfully then it returns a totally ordered solution plan. Note that there are two choice points in this procedure: operator selection and ordering selection. The procedure does not need to consider alternative goal choices.
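The to procedure can be rendered as a short executable sketch; this is our own reconstruction for illustration, not the authors' code. It reuses Operator, Step, and true_before from the earlier sketch, enumerates both choice points depth-first, and adds a depth limit since the search tree is infinite.

```python
def last_deleter_index(plan, need_idx, cond):
    """Index of the last deleter of cond before plan[need_idx];
    defaults to 0 (the initial step) if nothing earlier deletes cond."""
    last = 0
    for i in range(1, need_idx):
        if cond in plan[i].op.dele:
            last = i
    return last

def open_goals(plan):
    """All (precondition, step index) pairs that are not true."""
    return [(c, i) for i, s in enumerate(plan)
            for c in s.op.pre if not true_before(plan, i, c)]

def TO(plan, goals, library, depth=8):
    """Yield totally ordered solution plans (Steps 1-6 of the figure)."""
    if not goals:                                # 1. termination check
        yield plan
        return
    if depth == 0:
        return
    cond, need_idx = goals[0]                    # 2. goal selection (deterministic)
    for op in library:                           # 3. operator selection
        if cond not in op.add:
            continue
        del_idx = last_deleter_index(plan, need_idx, cond)
        for pos in range(del_idx + 1, need_idx + 1):   # 4. ordering selection
            new_plan = plan[:pos] + [Step(op)] + plan[pos:]
            yield from TO(new_plan, open_goals(new_plan),  # 5. goal updating
                          library, depth - 1)             # 6. recursion
```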
For our purposes, the function select-goal can be any deterministic function that selects a member of G.
As used in Step 4, the last deleter of a precondition c for a step O_need is defined as follows. Step O_del is the last deleter of c if O_del deletes c, O_del is before O_need, and there is no other deleter of c between O_del and O_need. In the case that no step before O_need deletes c, the first step is considered to be the last deleter.
Figure 2 illustrates to's plan extension process. This example assumes that steps A and B do not add or delete c. There are three possible insertion points for O_add in plan P, each yielding an alternative extension.
The second planner is ua, a partial-order planner, shown in Figure 3. ua is similar to to in that it uses the same procedures for goal selection and operator selection; however, the procedure for ordering selection is different. Step 4 of ua inserts orderings, but only "interacting" steps are ordered. Specifically, we say that two steps interact if they are unordered with respect to each other and either: one step has a precondition that is added or deleted by the other step, or one step adds a condition that is deleted by the other step. The only significant difference between ua and to lies in Step 4: to orders the new step with respect to all others, whereas ua adds orderings only to eliminate interactions. It is in this sense that ua is less committed than to.
Figure 4 illustrates ua's plan extension process. As in Figure 2, we assume that steps A and B do not add or delete c; however, step A and O_add interact with respect to some other condition. This interaction yields two alternative plan extensions: one in which O_add is ordered before A and one in which O_add is ordered after A.
Since ua orders all steps which interact, the plans that are generated have a special property: each precondition in a plan is either necessarily true or necessarily false. We call such plans unambiguous. This property yields a tight correspondence between the two planners' search spaces. Suppose ua is given the unambiguous plan U and to is given the plan T, where T is a linearization of U. Let us consider the relationship between the way that ua extends U and to extends T. Note that the two planners will have the same set of goals since, by definition, each goal in U is a precondition that is necessarily false, and a precondition is necessarily false if and only if it is false in every linearization. Since the two plans have the same set of goals and since both planners use the same goal selection method, both algorithms pick the same goal; therefore, O_need is the same for both. Similarly, both algorithms consider the same library operators to achieve this goal. Since T is a linearization of U, and O_need is the same in both plans, both algorithms find the same last deleter as well.2
UA(P, G)
1. Termination check: If G is empty, report success and return solution plan P.
2. Goal selection: Let c = select-goal(G), and let O_need be the plan step for which c is a precondition.
3. Operator selection: Let O_add be an operator in the library that adds c. If there is no such O_add, then terminate and report failure. Choice point: all such operators must be considered for completeness.
4. Ordering selection: Let O_del be the last deleter of c.
Order O_add after O_del and before O_need. Repeat until there are no interactions:
- Select a step O_int that interacts with O_add.
- Order O_int either before or after O_add. Choice point: both orderings must be considered for completeness.
Let P' be the resulting plan.
5. Goal updating: Let G' be the set of preconditions in P' that are necessarily false.
6. Recursive invocation: UA(P', G').
When to adds a step to a plan, it orders the new step with respect to all existing steps. When ua adds a step to a plan, it orders the new step only with respect to interacting steps. ua considers all possible combinations of orderings which eliminate interactions; hence, for any plan produced by to, ua produces a corresponding plan that is less-ordered or equivalent.
The following sections exploit this tight correspondence between the search spaces of ua and to. In the next section we analyze the relative sizes of the two planners' search spaces, and later we compare the number of plans actually generated under different search strategies." }, { "figure_ref": [], "heading": "Search Space Comparison", "publication_ref": [], "table_ref": [], "text": "The search space for both to and ua can be characterized as a tree of plans. The root node in the tree corresponds to the top-level invocation of the algorithm, and the remaining nodes each correspond to a recursive invocation of the algorithm. Note that in generating a plan, the algorithms make both operator and ordering choices, and each different set of choices corresponds to a single branch in the search tree.
We denote the search tree for to by tree_TO and, similarly, the search tree for ua by tree_UA. The number of plans in a search tree is equal to the number of times the planning procedure (ua or to) would be invoked in an exhaustive exploration of the search space.
Note that every plan in tree_UA and tree_TO is unique, since each step in a plan is given a unique label. Thus, although two plans in the same tree might both be instances of a particular operator sequence, such as O1 O2 O3, the plans are distinct because their steps have different labels. (We have defined our plans this way to make our proofs more concise.)
We can show that for any given problem, tree_TO is at least as large as tree_UA, that is, the number of plans in tree_TO is greater than or equal to the number of plans in tree_UA. This is done by proving the existence of a function L which maps plans in tree_UA into sets of plans in tree_TO that satisfies the following two conditions.
1. Totality Property: For every plan U in tree_UA, there exists a non-empty set {T_1, ..., T_m} of plans in tree_TO such that L(U) = {T_1, ..., T_m}.
2. Disjointness Property: L maps distinct plans in tree_UA to disjoint sets of plans in tree_TO; that is, if U_1, U_2 ∈ tree_UA and U_1 ≠ U_2, then L(U_1) ∩ L(U_2) = {}.
Let us examine why the existence of an L with these two properties is sufficient to prove that the size of ua's search tree is no greater than that of to. Figure 5 provides a guide for the following discussion. Intuitively, we can use L to count plans in the two search trees. For each plan counted in tree_UA, we use L to count a non-empty set of plans in tree_TO. The totality property means that every time we count a plan in tree_UA, we count at least one plan in tree_TO; this implies that |tree_UA| ≤ |tree_TO|.
[Figure 5: How L maps from tree_UA to tree_TO (panels: to search tree, ua search tree).]
We define L as follows; let parent(P) denote the parent of a plan P in its search tree. Then T ∈ L(U) if and only if (i) T is a linearization of U and (ii) either U and T are both root nodes of their respective search trees or parent(T) ∈ L(parent(U)). Intuitively, L maps a plan U in tree_UA to all linearizations which share common derivation ancestry.3 This is illustrated in Figure 5, where for each plan in tree_UA a dashed line is drawn to the corresponding set of plans in tree_TO.
We can show that L satisfies the totality and disjointness properties by induction on the depth of the search trees. Detailed proofs are in the appendix. To prove the first property, we show that for every plan contained in tree_UA, all linearizations of that plan are contained in tree_TO. To prove the second property, we note that any two plans at different depths in tree_UA have disjoint sets of linearizations, and then show by induction that any two plans at the same depth in tree_UA also have this property.
How much smaller is tree_UA than tree_TO? The mapping described above provides an answer. For each plan U in tree_UA there are |L(U)| distinct plans in to, where |L(U)| is the number of linearizations of U. The exact number depends on how unordered U is. A totally unordered plan has a factorial number of linearizations and a totally ordered plan has only a single linearization. Thus, the only time that the size of tree_UA equals the size of tree_TO is when every plan in tree_UA is totally ordered; otherwise, tree_UA is strictly smaller than tree_TO and possibly exponentially smaller.
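The factorial gap is easy to see computationally. The sketch below is our own code (not the paper's), assuming the plan's ordering is given as a set of precedence pairs; it counts linearizations by repeatedly choosing a minimal step. Counting linear extensions is intractable in general, so this is only for small examples.

```python
from functools import lru_cache

def count_linearizations(steps, before):
    """Count the total orders consistent with a partial order.
    steps: iterable of step ids; before: set of (a, b) pairs, a precedes b."""
    @lru_cache(maxsize=None)
    def count(remaining):
        if not remaining:
            return 1
        total = 0
        for s in remaining:
            # s may be placed first iff no other remaining step must precede it
            if not any((t, s) in before for t in remaining if t != s):
                total += count(remaining - {s})
        return total
    return count(frozenset(steps))

# A totally unordered 3-step plan has 3! = 6 linearizations;
# a totally ordered one has exactly 1.
assert count_linearizations("abc", set()) == 6
assert count_linearizations("abc", {("a", "b"), ("b", "c")}) == 1
```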
5 More speci cally, for both algorithms, Step 3 is executed fewer times than Step 6, and Steps 4 and 5 are executed exactly the same number of times that Step 6 is executed, that is, once for each plan that is generated. Consequently, for both algorithms, no step is executed more than once per plan, as summarized in Table 1. In other words, the number of times each step is executed during the planning process is bounded by the size of the search tree.\nIn examining the costs for each step, we rst note that for both algorithms, Step 1, the termination check, can be accomplished in O(1) time.\nStep 2, goal selection, can also be accomplished in O(1) time; for example, assuming the goals are stored in a list, the select-goal function can simply return the rst member of the list. Each execution of\nStep 3, operator selection, also only requires O(1) time; if we assume the operators are indexed by their e ects, all that is required is to \\pop\" the list of relevant operators on each execution.\nSteps 4 and 5 are less expensive for to than for ua.\nStep 4 of to is accomplished by inserting the new operator, O add , somewhere between O del and O need . If the possible insertion points are considered starting at O need and working towards O del , then each execution of Step 4 can be accomplished in constant time, since each insertion constitutes one execution of the step. In contrast, Step 4 in ua involves carrying out interaction detection and elimination in order to produce a new plan P 0 . This step can be accomplished in O(e) time, where e is the number of edges in the graph required to represent the partially ordered plan. (In the worst case, there may be O(n 2 ) edges in the plan, and in the best case, O(n)\nedges.) The following is the description of ua's ordering step, from Figure 3, with some additional implementation details: 4. Ordering selection: Order Oadd after Odel and before Oneed. Label all steps preceding Oadd and all steps following Oadd. Let stepsint be the unlabeled steps that interact with Oadd. Let Odel be the last deleter of c. Repeat until stepsint is empty:\nLet Oint = Pop(stepsint)\nif Oint is still unlabeled then either:\n{ order Oint before Oadd, and label Oint and the unlabeled steps before Oint ; or { order Oint after Oadd, and label Oint and the unlabeled steps after Oint .\nChoice point: both orderings must be considered for completeness.\nLet P 0 be the resulting plan.\nThe ordering process begins with a preprocessing stage. First, all steps preceding or following O add are labeled as such. The labeling process is implemented by a depth-rst traversal of the plan graph, starting with O add as the root, which rst follows the edges in one direction and then follows edges in the other direction. This requires at most O(e) time. After the labeling process is complete, only steps that are unordered with respect to O add are unlabeled, and thus the interacting steps (which must be unordered with respect to O add ) are identi able in O(n) time. The last deleter is identi able in O(e) time.\nAfter the preprocessing stage, the procedure orders each interacting step with respect to O add , updating the labels after each iteration. Since each edge in the graph need be traversed no more than once, the entire ordering process takes at most O(e) time (as described in Minton et al., 1991b). 
To see this, note that the process of labeling the steps before (or after) O int can stop as soon as a labeled step is encountered.\nHaving shown that Step 4 of to has O(1) complexity and Step 4 of ua has O(e) complexity, we now consider Step 5 of both algorithms, updating the goal set. to accomplishes this by iterating through the steps in the plan, from the head to the tail, which requires O(n) time. ua accomplishes this in a similar manner, but it requires O(e) time to traverse the graph. (Alternatively, ua can use the same procedure as to, provided an O(e) topological sort is rst done to linearize the plan.) To summarize our complexity analysis, the use of a partial order means that ua incurs greater cost for operator ordering (Step 4) and for updating the goal set (Step 5). Overall, ua requires O(e) time per plan, while to only requires O(n) time per plan. Since a totally ordered plan requires a representation of size O(n), and a partially ordered graph requires a representation of size O(e), designing procedures with lower costs would be possible only if the entire plan graph did not need to be examined in the worst case." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "The Role of Search Strategies", "publication_ref": [ "b22", "b28", "b24", "b10", "b12", "b15" ], "table_ref": [], "text": "The previous sections have compared to and ua in terms of relative search space size and relative time cost per node. The extra processing time required by ua for each node would appear to be justi ed since its search space may contain exponentially fewer nodes. However, to complete our analysis, we must consider the number of nodes actually visited by each algorithm under a given search strategy.\nFor breadth-rst search, the analysis is straightforward. After completing the search to a particular depth, both planners will have explored their entire trees up to that depth. 6Both ua and to nd a solution at the same depth due to the correspondence between their search trees. Thus, the degree to which ua will outperform to, under breadth-rst, depends solely on the \\expansion factor\" under L, i.e., on the number of linearizations of ua's plans.\nWe can formalize this analysis as follows. For a node U in tree UA , we denote the number of steps in the plan at U by n u , and the number of edges in U by e u . Then for each node U that ua generates, ua incurs time cost O(e u ); whereas, to incurs time cost O(n u ) j L(U) j, where j L(U) j is the number of linearizations of the plan at node U. Therefore, the ratio of the total time costs of to and ua is as follows, where bf(tree UA ) denotes the subtree considered by ua under breadth-rst search.\ncost(to bf ) cost(ua bf ) = P u2bf(tree UA ) O(n u ) j L(U) j P u2bf(tree UA ) O(e u )\nThe analysis of breadth-rst search is so simple because this search strategy preserves the correspondence between the two planners' search spaces. In breadth-rst search, the two planners are synchronized after exhaustively exploring each level, so that to has explored (exactly) the linearizations of the plans explored by ua. For any other search strategy which similarly preserves the correspondence, such as iterative deepening, a similar analysis can be carried out.\nThe cost comparison is not so clear-cut for depth-rst search, since the correspondence is not guaranteed to be preserved. It is easy to see that, under depth-rst search, to does not necessarily explore all linearizations of the plans explored by ua. 
This is not simply because the planners nondeterministically choose which child to expand. There is a deeper reason: the correspondence L does not preserve the subtree structure of the search space. For a plan U in tree UA , the corresponding linearizations in L(U) may be spread throughout tree TO .\nTherefore, it is unlikely that corresponding plans will be considered in the same order by depth-rst search. Nevertheless, even though the two planners are not synchronized, we might expect that, on average, ua will explore fewer nodes because the size of tree UA is less than or equal to the size of tree TO .\nEmpirically, we have observed that ua does tend to outperform to under depth-rst search, as illustrated by the experimental results in Figure 6. The rst graph compares the mean number of nodes explored by ua and to on 44 randomly generated blocksworld problems; the second graph compares the mean planning time for ua and to on the same problems and demonstrates that the extra time cost per node for ua is relatively insignicant. The problems are partitioned into 4 sets of 11 problems each, according to minimal solution \\length\" (i.e., the number of steps in the plan). For each problem, both planners were given a depth-limit equal to the length of the shortest solution. 7 Since the planners make nondeterministic choices, 25 trials were conducted for each problem. The source code and data required to reproduce these experiments can be found in Online Appendix 1. As we pointed out, one plausible explanation for the observed dominance of ua is that to's search tree is at least as large as ua's search tree. In fact, in the above experiments we often observed that to's search tree was typically much larger. However, the full story is more interesting. Search tree size alone is not su cient to explain ua's dominance; in particular, the density and distribution of solutions play an important role.\nThe solution density of a search tree is the proportion of nodes that are solutions. 8 If the solution density for to's search tree is greater than that for ua's search tree, then to might outperform ua under depth-rst search even though to's search tree is actually larger. For example, it might be the case that all ua solution plans are completely unordered and that the plans at the remaining leaves of tree UA { the failed plans { are totally ordered. In this case, each ua solution plan corresponds to an exponential number of to solution plans, and each ua failed plan corresponds to a single to failed plan. The converse is also possible: the solution density of ua's search tree might be greater than that of to's search tree, thus favoring ua over to under depth-rst search. For example, there might be a single totally ordered solution plan in ua's search tree and a large number of highly unordered failed 7. Since the depth-limit is equal to the length of the shortest solution, an iterative deepening (Korf, 1985) approach would yield similar results. Additionally, we note that increasing the depth-limit past the depth of the shortest solution does not signi cantly change the outcome of these experiments. 8. This de nition of solution density is ill-de ned for in nite trees, but we assume that a depth-bound is always provided, so only a nite subtree is explicitly enumerated. plans. 
Since each of these failed ua plans would correspond to a large number of to failed plans, the solution density for to would be considerably lower.\nFor our blocksworld problems, we found that the solution densities of the two planners' trees does not di er greatly, at least not in such a way that would explain our performance results. We saw no tendency for tree UA to have a higher solution density than tree TO . For example, for the 11 problems with solutions at depth six, the average solution density 9 for to exceeded that of ua on 7 out of the 12 problems. This is not particularly surprising since we see no a priori reason to suppose that the solution densities of the two planners should di er greatly.\nSince solution density is insu cient to explain ua's dominance on our blocksworld experiments when using depth-rst search, we need to look elsewhere for an explanation. We hypothesize that the distribution of solutions provides an explanation. We note that if the solution plans are distributed perfectly uniformly (i.e., at even intervals) among the leaves of the search tree, and if the solution densities are similar, then both planners can be expected to search a similar number of leaves, as illustrated by the schematic search tree in Figure 7. Consequently, we can explain the observed dominance of ua over to by hypothesizing that solutions are not uniformly distributed; that is, solutions tend to cluster.\nTo see this, suppose that tree UA is smaller than tree TO but the two trees have the same solution density. If the solutions are clustered, as in Figure 8, then depth-rst search can be expected to produce solutions more quickly for tree UA than for tree TO . 10 The hypothesis 9. In our experiments, a nondeterministic goal selection procedure was used with our planners, which meant that the solution density could vary from run to run. We compared the average solution density over 25 trials for each problem to obtain our results. 10. Even if the solutions are distributed randomly amongst the leaves of the trees with uniform probability (as opposed to being distributed \\perfectly uniformly\"), there will be some clusters of nodes. Therefore, to will have a small disadvantage. To see this, let us suppose that each leaf of both tree UA and tree TO is a solution with equal probability p. that solutions tend to be clustered seems reasonable since it is easy to construct problems where a \\wrong decision\" near the top of the search tree can lead to an entire subtree that is devoid of solutions.\nOne way to test our hypothesis is to compare ua and to using a randomized search strategy, a type of Monte Carlo algorithm, that we refer to as \\iterative sampling\" (cf. Minton et al., 1992;Langley, 1992;Chen, 1989;Crawford & Baker, 1994). The iterative sampling strategy explores randomly chosen paths in the search tree until a solution is found. A path is selected by traversing the tree from the root to a leaf, choosing randomly at each branch point. If the leaf is a solution then search terminates; if not, the search process returns to the root and selects another path. The same path may be examined more than once since no memory is maintained between iterations.\nIn contrast to depth-rst search, iterative sampling is relatively insensitive to the distribution of solutions. Therefore, the advantage of ua over to should disappear if our hypothesis is correct. 
In our experiments, we did find that when ua and to both use iterative sampling, they expand approximately the same number of nodes on our set of blocksworld problems.¹¹ (For both planners, performance with iterative sampling was worse than with depth-first search.) The fact that there is no difference between ua and to under iterative sampling, but that there is a difference under depth-first search, suggests that solutions are indeed non-uniformly distributed. Furthermore, this result shows that ua is not necessarily superior to to; the search strategy that is employed makes a dramatic difference.

Although our blocksworld domain may be atypical, we conjecture that our results are of general relevance. Specifically, for distribution-sensitive search strategies like depth-first search, one can expect that ua will tend to outperform to. For distribution-insensitive strategies, such as iterative sampling, non-uniform distributions will have no effect. We note that while iterative sampling is a rather simplistic strategy, there are more sophisticated search strategies, such as iterative broadening (Ginsberg & Harvey, 1992), that are also relatively distribution insensitive. We further explore such strategies in Section 8.2.

11. The iterative sampling strategy was depth-limited in exactly the same way that our depth-first strategy was. We note, however, that the performance of iterative sampling is relatively insensitive to the actual depth-limit used.

8. The Role of Heuristics

In the preceding sections, we have shown that a partial-order planner can be more efficient simply because its search tree is smaller. With some search strategies, such as breadth-first search, this size differential obviously translates into an efficiency gain. With other strategies, such as depth-first search, the size differential translates into an efficiency gain, provided we make additional assumptions about the solution density and distribution.

However, it is often claimed that partial-order planners are more efficient due to their ability to make more informed ordering decisions, a rather different argument. For instance, Sacerdoti (1975) argues that this is the reason that noah performs well on problems such as the blocksworld's "Sussman anomaly". By delaying the decision of whether to stack A on B before or after stacking B on C, noah can eventually detect that a conflict will occur if it stacks A on B first, and a critic called "resolve-conflicts" can then order the steps intelligently.

In this section, we show that this argument can be formally described in terms of our two planners. We demonstrate that ua does in fact have a potential advantage over to in that it can exploit certain types of heuristics more readily than to. This advantage is independent of the fact that ua has a smaller search space. Whether or not this advantage is significant in practice is another question, of course.
We also describe some experiments where we evaluate the effect of a commonly-used heuristic on our blocksworld problems.

8.1 Making More Informed Decisions

First, let us identify how it is that ua can make better use of certain heuristics than to. In the ua planning algorithm, step 4 arbitrarily orders interacting plan steps. Similarly, Step 4 of to arbitrarily chooses an insertion point for the new step. It is easy to see, however, that some orderings should be tried before others in a heuristic search. This is illustrated by Figure 9, which compares ua and to on a particular problem. The key in the figure describes the relevant conditions of the library operators, where preconditions are indicated to the left of an operator and added conditions are indicated to the right (there are no deletes in this example). For brevity, the initial step and final step of the plans are not shown. Consider the plan in tree_UA with unordered steps O_1 and O_2. When ua introduces O_3 to achieve precondition p of O_1, Step 4 of ua will order O_3 with respect to O_2, since these steps interact. However, it makes more sense to order O_2 before O_3, since O_2 achieves precondition q of O_3. This illustrates a simple planning heuristic that we refer to as the min-goals heuristic: "prefer the orderings that yield the fewest false preconditions".

Figure 9: Comparison of ua and to on an example.

This heuristic is not guaranteed to produce the optimal search or the optimal plan, but it is commonly used. It is the basis of the "resolve conflicts" critic that Sacerdoti employed in his blocksworld examples. Notice, however, that to cannot exploit this heuristic as effectively as ua because it prematurely orders O_1 with respect to O_2. Due to this inability to postpone an ordering decision, to must choose arbitrarily between the plans O_1 O_2 and O_2 O_1, before the impact of this decision can be evaluated.

In the general case, suppose h is a heuristic that can be applied to both partially ordered plans and totally ordered plans. Furthermore, assume h is a "useful" heuristic; i.e., if h rates one plan more highly than another, a planner that explores the more highly rated plan first will perform better on average. Then, ua will have a potential advantage over to provided that h satisfies the following property: for any ua plan U and corresponding to plan T, h(U) ≥ h(T); that is, a partially ordered plan must be rated at least as high as any of its linearizations. (Note that for unambiguous plans, the min-goals heuristic satisfies this property since it gives identical ratings to a partially ordered plan and its linearizations.) ua has an advantage over to because if ua is expanding plan U and to is expanding a corresponding plan T, then h will rate some child of U at least as high as the most highly rated child of T. This is true since every child of T is a linearization of some child of U, and therefore no child of T can be rated higher than a child of U. Furthermore, there may be a child of U such that none of its linearizations is a child of T, and therefore this child of U can be rated higher than every child of T. Since we assumed that h is a useful heuristic, this means that ua is likely to make a better choice than to.
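As a concrete illustration, the sketch below applies min-goals to a set of candidate plan extensions. The false_preconditions helper is a hypothetical stand-in for the planner's goal-set computation; the pruning variant anticipates the way the heuristic is used as a filter in Section 8.2.

```python
def min_goals_order(extensions, false_preconditions):
    """Sort candidate plan extensions so that the orderings yielding the
    fewest false preconditions are explored first (the min-goals bias)."""
    return sorted(extensions, key=lambda plan: len(false_preconditions(plan)))

def min_goals_prune(extensions, false_preconditions):
    """Stricter use of the same heuristic: keep only the extensions that
    tie for the fewest false preconditions, discarding all others."""
    if not extensions:
        return []
    best = min(len(false_preconditions(p)) for p in extensions)
    return [p for p in extensions if len(false_preconditions(p)) == best]
```

For unambiguous plans the rating of a partially ordered plan equals that of each of its linearizations, so scoring a plan by the size of its false-precondition set satisfies the property h(U) ≥ h(T) required above.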
" }, { "figure_ref": [ "fig_5", "fig_6", "fig_5", "fig_5", "fig_6", "fig_6" ], "heading": "Illustrative Experimental Results", "publication_ref": [ "b24" ], "table_ref": [], "text": "The previous section showed that ua has a potential advantage over to because it can better exploit certain ordering heuristics. We now examine the practical e ects of incorporating one such heuristic into ua and to. First, we note that ordering heuristics only make sense for some search strategies. In particular, for breadth-rst search, heuristics do not improve the e ciency of the search in a meaningful way (except possibly at the last level). Indeed, we need not consider any search strategy in which to and ua are \\synchronized\", as de ned earlier, since ordering heuristics do not signi cantly a ect the relative performance of ua and to under such strategies. Thus, we begin by considering a standard search strategy that is not synchronized: depth-rst search.\nWe use the min-goals heuristic as the basis for our experimental investigation, since it is commonly employed, but presumably we could choose any heuristic that meets the criterion set forth in the previous section. Figure 10 shows the impact of min-goals on the behavior of ua and to under depth-rst search. Although the heuristic biases the order in which the two planners' search spaces are explored (cf. Rosenbloom, Lee, & Unruh, 1993), it appears that its e ect is largely independent of the partial-order/total-order distinction, since both planners are improved by a similar percentage. For example, under depth-rst search on the problems with solutions at depth six, ua improved 88% and to improved 87%. Thus, there is no obvious evidence for any extra advantage for ua, as one might have expected from our analysis in the previous section. On the other hand, this does not contradict our theory, it simply means that the potential heuristic advantage was not signi cant enough to show up. In other domains, the advantage might manifest itself more signi cantly. After all, it is certainly possible to design problems in which the advantage is signi cant, as our example in Figure 9 illustrates. Our results simply illustrate that in our blocksworld domain, making intelligent ordering decisions produces a negligible advantage for ua, in contrast to the signi cant e ect due to search space compression (discussed previously). 12 While the min-goals heuristic did not seem to help ua more than to, the results are nevertheless interesting, since the heuristic had a very signi cant e ect on the performance of both planners, so much so that to with min-goals outperforms ua without min-goals.\nWhile the e ectiveness of min-goals is domain dependent, we nd it interesting that in these experiments, the use of min-goals makes more di erence than the use of partial orders. After all, the blocksworld originally helped motivate the development of partial-order planning and most subsequent planning systems have employed partial orders. While not deeply surprising, this result does help reinforce what we already know: more attention should be paid to speci c planning heuristics such as min-goals.\nIn our analysis of search space compression in Section 7, we described a \\distribution insensitive\" search strategy called iterative sampling and showed that under iterative sampling ua and to perform similarly, although their performance is worse than it is under depth-rst search. 
If we combine min-goals with iterative sampling, we find that this produces a much more powerful strategy, but one in which to and ua still perform about equally. For simplicity, our implementation of iterative sampling uses min-goals as a pruning heuristic; at each choice point, it explores only those plan extensions with the fewest goals. This strategy is powerful, although incomplete.¹³ Because of this incompleteness, we note there was one problem we removed from our sample set because iterative sampling with min-goals would never terminate on this problem. With this caveat in mind, we turn to the results in Figure 11, which, when compared against Figure 10, show that the performance of both ua and to with iterative sampling was, in general, significantly better than their performance under depth-first search. (Note that the graphs in Figures 10 and 11 have very different scales.) Our results clearly illustrate the utility of the planning bias introduced by min-goals in our blocksworld domain, since on 43 of our 44 problems, a solution exists in the very small subspace preferred by min-goals. These experiments do not show any advantage for ua as compared with to under the heuristic, which is consistent with our conclusions above. However, this could equally well be because min-goals was so powerful, leading to solutions so quickly, that smaller influences were obscured.

The dramatic success of combining min-goals with iterative sampling led us to consider another search strategy, iterative broadening, which combines the best aspects of depth-first search and iterative sampling. This more sophisticated search strategy initially behaves like iterative sampling, but evolves into depth-first search as the breadth-cutoff increases (Langley, 1992). Assuming that the solution is within the specified depth bound, iterative broadening is complete. In its early stages iterative broadening is distribution-insensitive; in its later stages it behaves like depth-first search and, thus, becomes increasingly sensitive to solution distribution. As one would expect from our iterative sampling experiments, with iterative broadening, solutions were found very early on, as shown in Figure 11. Thus, it is not surprising that ua and to performed similarly under iterative broadening.

Figure 11: Iterative sampling and iterative broadening, both with min-goals.

We should point out that the results presented in this subsection are only illustrative, since they deal with only a single domain and with a single heuristic. Nevertheless, our experiments do illustrate how the various properties we have identified in this paper can interact.

12. In Section 9.2, we discuss planners that are "less-committed" than ua. For such planners, the advantage due to heuristics might be more pronounced since they "delay" their decisions even longer than ua.

13. Instead of exploring only those plan extensions with the fewest goals at each choice point, an alternative strategy is to assign each extension a probability that is inversely correlated with the number of goals, and pick accordingly. Given a depth bound, this strategy has the advantage of being asymptotically complete. We used the simpler strategy here for pedagogical reasons.

9. Extending our Results

Having established our basic results concerning the efficiency of ua and to under various circumstances, we now consider how these results extend to other types of planners.
9.1 More Expressive Languages

In the preceding sections, we showed that the primary advantage that ua has over to is that ua's search tree may be exponentially smaller than to's search tree, and we also showed that ua only pays a small (polynomial) extra cost per node for this advantage. Thus far we have assumed a very restricted planning language in which the operators are propositional; however, most practical problems demand operators with variables, conditional effects, or conditional preconditions. With a more expressive planning language, will the time cost per node be significantly greater for ua than for to? One might think so, since the work required to identify interacting steps can increase with the expressiveness of the operator language used (Dean & Boddy, 1988; Hertzberg & Horz, 1989). If the cost of detecting step interaction is high enough, the savings that ua enjoys due to its reduced search space will be outweighed by the additional expense incurred at each node.

Consider the case for simple breadth-first search. Earlier we showed that the ratio of the total time costs of to and ua is as follows, where the subtree considered by ua under breadth-first search is denoted by bf(tree_UA), the number of steps in a plan U is denoted by n_u, and the number of edges in U is denoted by e_u:

    cost(to_bf) / cost(ua_bf) = ( Σ_{U ∈ bf(tree_UA)} O(n_u) · |L(U)| ) / ( Σ_{U ∈ bf(tree_UA)} O(e_u) )

This cost comparison is specific to the simple propositional operator language used so far, but the basic idea is more general. ua will generally outperform to whenever its cost per node is less than the product of the cost per node for to and the number of to nodes that correspond under L. Thus, ua could incur an exponential cost per node and still outperform to in some cases. This can happen, for example, if the exponential number of linearizations of a ua partial order is greater than the exponential cost per node for ua. In general, however, we would like to avoid the case where ua pays an exponential cost per node and, instead, consider an approach that can guarantee that the cost per node for ua remains polynomial (as long as the cost per node for to also remains polynomial).

The cost per node for ua is dominated by the cost of updating the goal set (Step 5) and the cost of selecting the orderings (Step 4). Updating the goal set remains polynomial as long as a plan is unambiguous. Since each precondition in an unambiguous plan is either necessarily true or necessarily false, we can determine the truth value of a given precondition by examining its truth value in an arbitrary linearization of the plan. Thus, we can simply linearize the plan and then use the same procedure that to uses for calculating the goal set. As a result, it is only the cost of maintaining the unambiguous property (i.e., Step 4) that is impacted by more expressive languages. One approach for efficiently maintaining this property relies on a "conservative" ordering strategy in which operators are ordered if they even possibly interact.

As an illustration of this approach, consider a simple propositional language with conditional effects, such as "if p and q, then add r".
Hence, an operator can add (or delete) propositions depending on the state in which it is applied. We refer to conditions such as "p" in our example as dependency conditions. (Note that, like preconditions, dependency conditions are simple propositions.) Chapman (1987) showed that with this type of language it is NP-hard to decide whether a precondition is true in a partially ordered plan. However, as we pointed out above, for the special case of unambiguous plans, this decision can be accomplished in polynomial time.

Formally, the language is specified as follows. An operator O, as before, has a list of preconditions, pre(O), a list of (unconditional) adds, adds(O), and a list of (unconditional) deletes, dels(O). In addition, it has a list of conditional adds, cadds(O), and a list of conditional deletes, cdels(O), both containing pairs ⟨D_e, e⟩, where D_e is a conjunctive set of dependency conditions and e is the conditional effect (either an added or a deleted condition). Analogous with the constraint that every delete must be a precondition, every conditional delete must be a member of its dependency conditions; that is, for every ⟨D_e, e⟩ ∈ cdels(O), e ∈ D_e.

Figure 12 shows a version of the ua algorithm, called ua-c, which is appropriate for this language. The primary difference between the ua and ua-c algorithms is that in both Steps 3 and 4b an operator may be specialized with respect to a set of dependency conditions. The function specialize(O, D) accepts a plan step, O, and a set of dependency conditions, D; it returns a new step O′ that is just like O, but with certain conditional effects made unconditional. The effects that are selected for this transformation are exactly those whose dependency conditions are a subset of D. Thus, the act of specializing a plan step is the act of committing to expanding its causal role in a plan.¹⁴ Once a step is specialized, ua-c has made a commitment to use it for a given set of effects. Of course, a step can be further specialized in a later search node, but specializations are never retracted.

Figure 12: The ua-c planning algorithm.

More precisely, the definition of O′ = specialize(O, D), where O is a step, D is a conjunctive set of dependency conditions in O, and \ is the set difference operator, is as follows:

- pre(O′) = pre(O) ∪ D.
- adds(O′) = adds(O) ∪ {e | ⟨D_e, e⟩ ∈ cadds(O) ∧ D_e ⊆ D}.
- dels(O′) = dels(O) ∪ {e | ⟨D_e, e⟩ ∈ cdels(O) ∧ D_e ⊆ D}.
- cadds(O′) = {⟨D_e′, e⟩ | ⟨D_e, e⟩ ∈ cadds(O) ∧ D_e ⊄ D ∧ D_e′ = D_e \ D}.
- cdels(O′) = {⟨D_e′, e⟩ | ⟨D_e, e⟩ ∈ cdels(O) ∧ D_e ⊄ D ∧ D_e′ = D_e \ D}.

The definition of step interaction is generalized for ua-c as follows. We say that two steps in a plan interact if they are unordered with respect to each other and the following disjunction holds: one step has a precondition _or dependency condition_ that is added or deleted by the other step, or one step adds a condition that is deleted by the other step. The difference between this definition of step interaction and the one given earlier is indicated by the emphasized text. This modified definition allows us to detect interacting operators with a simple inexpensive test, as did our original definition. For example, two steps that are unordered interact if one step conditionally adds r and the other has precondition r. Note that the first step need not actually add r in the plan, so ordering the two operators might be unnecessary.

14. For simplicity, the modifications used to create ua-c are not very sophisticated. As a result, ua-c's space may be larger than it needs to be in some circumstances, since it aggressively commits to specializations. A more sophisticated set of modifications is possible; however, the subtleties involved in efficiently planning with dependency conditions (Pednault, 1988; Collins & Pryor, 1992; Penberthy & Weld, 1992) are largely irrelevant to our discussion.
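The set manipulations in this definition translate almost directly into code. The sketch below is illustrative only: it assumes a step is a record of condition sets, with conditional effects stored as (D_e, e) pairs, and it mirrors both specialize(O, D) and the generalized interaction test.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    pre: frozenset                  # preconditions
    adds: frozenset                 # unconditional adds
    dels: frozenset                 # unconditional deletes
    cadds: frozenset = frozenset()  # pairs (frozenset D_e, effect e)
    cdels: frozenset = frozenset()  # pairs (frozenset D_e, effect e)

def specialize(o: Step, d: frozenset) -> Step:
    """Make unconditional exactly those conditional effects whose dependency
    conditions are contained in d; the rest keep the residual conditions D_e \\ d."""
    return Step(
        pre=o.pre | d,
        adds=o.adds | frozenset(e for de, e in o.cadds if de <= d),
        dels=o.dels | frozenset(e for de, e in o.cdels if de <= d),
        cadds=frozenset((de - d, e) for de, e in o.cadds if not de <= d),
        cdels=frozenset((de - d, e) for de, e in o.cdels if not de <= d),
    )

def interact(o1: Step, o2: Step) -> bool:
    """Conservative syntactic test for two unordered steps: a precondition or
    dependency condition of one is touched by any (even conditional) effect of
    the other, or an add of one conflicts with a delete of the other."""
    def conds(o):
        return o.pre | frozenset(c for de, _ in (o.cadds | o.cdels) for c in de)
    def all_adds(o):
        return o.adds | frozenset(e for _, e in o.cadds)
    def all_dels(o):
        return o.dels | frozenset(e for _, e in o.cdels)
    touched1 = all_adds(o1) | all_dels(o1)
    touched2 = all_adds(o2) | all_dels(o2)
    return bool((conds(o1) & touched2) or (conds(o2) & touched1)
                or (all_adds(o1) & all_dels(o2)) or (all_adds(o2) & all_dels(o1)))
```

Because the test inspects only the two steps' own condition sets, it is "local" in the sense discussed below, and it costs time polynomial in the size of the operators.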
In general, our definition of interaction is a sufficient criterion for guaranteeing that the resulting plans are unambiguous, but it is not a necessary criterion.

Figure 13 shows a schematic example illustrating how ua-c extends a plan. The preconditions of each operator are shown on the left of each operator, and the unconditional adds on the right. (We only show the preconditions and effects necessary to illustrate the specialization process; no deletes are used in the example.) Conditional adds are shown underneath each operator. For instance, the first operator in the plan at the top of the page has precondition p. This operator adds q and conditionally adds u if t is true. The figure illustrates two of the plans produced as a result of adding a new conditional operator to the plan. In one plan, the conditional effects [u → s] and [t → u] are selected in the specialization process, and in the other plan they are not. The new step, Step 4b, requires only polynomial time per plan generated, and the time cost of the other steps is the same as for ua. Hence, as with our original ua algorithm, the cost per node for the ua-c algorithm is polynomial.

Figure 13: An example illustrating the ua-c algorithm.

to can also handle this language given the corresponding modifications (changing Step 3 and adding Step 4b), and the time cost per plan also remains polynomial.¹⁵ Moreover, the same relationship holds between the two planners' search spaces: tree_UA is never larger than tree_TO and can be exponentially smaller. This example illustrates that the theoretical advantages that ua has over to can be preserved for a more expressive language. As we pointed out, our definition of interaction is a sufficient criterion for guaranteeing that the resulting plans are unambiguous, but it is not a necessary criterion. Nevertheless, this conservative approach allows interactions to be detected via a simple inexpensive syntactic test. Essentially, we have kept the cost per node for ua-c low by restricting the search space it considers, as shown in Figure 14. ua-c only considers unambiguous plans that can be generated via its "conservative" ordering strategy. ua-c is still a partial-order planner, and it is complete, but it does not consider all partially ordered plans or even all unambiguous partially ordered plans. The same "trick" can be used for other languages as well, provided that we can devise a simple test to detect interacting operators. For example, in previous work (Minton et al., 1991b) we showed how this can be done for a language where operators can have variables in their preconditions and effects.

Figure 14: Hierarchy of plan spaces.

In the general case, for a given ua plan and a corresponding to plan, Steps 1, 2, and 3 of the ua algorithm cost the same as the corresponding steps of the to algorithm.
As long as the plans considered by ua are unambiguous, Step 5 of the ua algorithm can be accomplished with an arbitrary linearization of the plan, in which case it costs at most O(e) more than Step 5 of the to algorithm. Thus, the only possibility for additional cost is in Step 4. In general, if we can devise a "local" criterion for interaction such that the resulting plan is guaranteed to be unambiguous, then the ordering selection step can be accomplished in polynomial time. By "local", we mean a criterion that only considers operator pairs to determine interactions; i.e., it must not examine the rest of the plan.

Although the theoretical advantages that ua has over to can be preserved for more expressive languages, there is a cost. The unambiguous plans that are considered may have more orderings than necessary, and the addition of unnecessary orderings can increase the size of ua's search tree. The magnitude of this increase depends on the specific language, domain, and problem being considered. Nevertheless, we can guarantee that ua's search tree is never larger than to's.

The general lesson here is that the cost of plan extension is not solely dependent on the expressiveness of the operator language; it also depends on the nature of the plans that the planner considers. So, although the extension of partially ordered plans is NP-hard for languages with conditional effects, if the space of plans is restricted (e.g., only unambiguous plans are considered) then this worst-case situation is avoided.

9.2 Less Committed Planners

We have shown that ua, a partial-order planner, can have certain computational advantages over a total-order planner, to, since its ability to delay commitments allows for a more compact search space and potentially more intelligent ordering choices. However, there are many planners that are even less committed than ua. In fact, there is a continuum of commitment strategies that we might consider, as illustrated in Figure 15. Total-order planning lies at one end of the spectrum. At the other extreme is the strategy of maintaining a totally unordered set of steps during search until there exists a linearization of the steps that is a solution plan.

Figure 15: A continuum of commitment strategies.

Compared to many well-known planners, ua is conservative since it requires each plan to be unambiguous. This is not required by noah (Sacerdoti, 1977), NonLin (Tate, 1977), nor Tweak (Chapman, 1987), for example. How do these less-committed planners compare to ua and to? One might expect a less-committed planner to have the same advantages over ua that ua has over to. However, this is not necessarily true. As an example, in this section we introduce a Tweak-like planner, called mt, and show that its search space is larger than even to's in some circumstances.¹⁶

Figure 16 presents the mt procedure. mt is a propositional planner based on Chapman's Modal Truth Criterion (Chapman, 1987), the formal statement that characterizes Tweak's search space. It is straightforward to see that mt is less committed than ua. The algorithms are quite similar; however, in Step 4, whereas ua orders all interacting steps, mt does not.
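Schematically, and with hypothetical plan-manipulation helpers, the distinction is whether the following eager ordering loop runs when a new step is introduced; ua executes it, generating one child plan per consistent combination of orderings, while an mt-style planner leaves the interacting steps unordered.

```python
def ua_step4(plan, new_step, steps, interacts, add_order):
    """ua's Step 4: order new_step with respect to every step it interacts
    with, enumerating all consistent combinations of orderings.
    add_order is assumed to return None for an inconsistent ordering."""
    children = [plan]
    for other in steps(plan):
        if other is new_step or not interacts(other, new_step):
            continue
        children = [child
                    for p in children
                    for child in (add_order(p, new_step, other),
                                  add_order(p, other, new_step))
                    if child is not None]
    # An mt-style planner would skip the loop and keep the steps unordered,
    # deferring the commitment; as discussed next, the price is that it may
    # later generate distinct partial orders that share linearizations.
    return children
```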
Since mt does not immediately order all interacting operators, it may have to add additional orderings between previously introduced operators later in the planning process to produce correct plans.

The proof that ua's search tree is no larger than to's search tree rested on the two properties of L elaborated in Section 5. By investigating the relationship between mt and to, we found that the second property, the disjointness property, does not hold for mt, and its failure illustrates how mt can explore more plans than to (and, consequently, than ua) on certain problems. The disjointness property guarantees that ua does not generate "overlapping" plans. The example in Figure 17 shows that mt fails to satisfy this property because it can generate plans that share common linearizations, leading to considerable redundancy in the search tree. The figure shows three steps, O_1, O_2, and O_3, where each O_i has precondition p_i and added conditions g_i, p_1, p_2, and p_3. The final step has preconditions g_1, g_2, and g_3, but the initial and final steps are not shown in the figure. At the top of the figure, in the plan constructed by mt, goals g_1, g_2, and g_3 have been achieved, but p_1, p_2, and p_3 remain to be achieved. Subsequently, in solving precondition p_1, mt generates plans which share the linearization O_3 O_2 O_1 (among others). In comparison, both to and ua only generate the plan O_3 O_2 O_1 once. In fact, it is simple to show that, under breadth-first search, mt explores many more plans than to on this example (and also more than ua, by transitivity) due to the redundancy in its search space.

Figure 17: "Overlapping" plans.

This result may seem counterintuitive. However, note that the search space size for a partial-order planner is potentially much greater than that of a total-order planner since there are many more partial orders over a set of steps than there are total orders. (Thus, when designing a partial-order planner, one may preclude overlapping linearizations in order to avoid redundancy, as discussed by McAllester & Rosenblitt, 1991, and Kambhampati, 1994c.) Of course, one can also construct examples where mt does have a smaller search space than both ua and to. Our example simply illustrates that although one planner may be less committed than another, its search space is not necessarily smaller. The commitment strategy used by a planner is simply one factor that influences overall performance. In particular, the effect of redundancy in a partial-order planner can overwhelm other considerations. In comparing two planners, one must carefully consider the mapping between their search spaces before concluding that "less committed ⇒ smaller search space".

10. Related Work

For many years, the intuitions underlying partial-order planning were largely taken for granted. Only in the past few years has there been renewed interest in the fundamental principles underlying these issues. Barrett et al. (1991) and Barrett and Weld (1994) describe an interesting and novel analysis of partial-order planning that complements our own work. They compare a partial-order planner with two total-order planners derived from it, one that searches in the space of plans, and the other that searches in the space of world states.
Their study focuses on how the goal structure of the problem affects the efficiency of partial-order planning. Specifically, they examine how partial-order and total-order planning compare for problems with independent, serializable, and non-serializable goals, when using a resource-bounded depth-first search. They refine Korf's work on serializable goals (Korf, 1987), introducing a distinction between trivially serializable subgoals, where the subgoals can be solved in any order without violating a previously solved subgoal, and laboriously serializable subgoals, where the subgoals are serializable, but at least 1/n of the orderings can cause a previously solved subgoal to be violated. Their study describes conditions under which a partial-order planner may have an advantage. For instance, they show that in a domain where the goals are trivially serializable for their partial-order planner and laboriously serializable for their total-order planners, their partial-order planner performs significantly better.

Our study provides an interesting contrast to Barrett and Weld's work, since we investigate the relative efficiencies of partial-order and total-order planning algorithms independent of any particular domain structure. Instead, we focus on the underlying properties of the search space and how the search strategy affects the efficiency of our planners. Nevertheless, we believe there are interesting relationships between the forms of serializability that they investigate, and the ideas of solution density and clustering that we have discussed here.

To illustrate this, consider an artificial domain that Barrett and Weld refer to as D1S1, where, in each problem, the goals are a subset of {G_1, G_2, ..., G_15}, the initial conditions are {I_1, I_2, ..., I_15}, and each operator O_i, i ∈ {1, 2, ..., 15}, has precondition I_i, adds G_i, and deletes I_{i-1}. It follows that if a solution in D1S1 contains operators O_i and O_j where i < j, then O_i must precede O_j. In this domain, the goals are trivially serializable for their partial-order planner and laboriously serializable for their total-order planners; thus, the partial-order planner performs best. But note also that in this artificial domain, there is exactly one solution per problem and it is totally ordered. Therefore, it is immediately clear that, if we give ua and to problems from this domain, then ua's search tree will generally be much smaller than to's search tree. Since there is only a single solution for both planners, the solution density for ua will clearly be greater than that for to. Thus, the properties we discussed in this paper should provide a basis for analyzing how differences in subgoal serializability manifest their effect on the search. This subject, however, is not as simple as it might seem and deserves further study.

In other related work, Kambhampati has written several papers (Kambhampati, 1994a, 1994b, 1994c) that analyze the design space of partial-order planners, including the ua planner presented here. Kambhampati compares ua, Tweak, snlp (McAllester & Rosenblitt, 1991), ucpop (Penberthy & Weld, 1992), and several other planners along a variety of dimensions. He presents a generalized schema for partial order planning algorithms (Kambhampati, 1994c) and shows that the commitment strategy used in ua can be viewed as a way to increase the tractability of the plan extension (or refinement) process.
Kambhampati also carries out an empirical comparison of the various planning algorithms on a particular problem (Kambhampati, 1994a), showing how the differences in commitment strategies affect the efficiency of the planning process. He distinguishes two separate components of the branching factor, b_t and b_e, the former resulting from the commitment strategy for operator ordering (or in his terms, the "tractability refinements") and the latter resulting from the choice of operator ("establishment refinements"). Kambhampati's experiments demonstrate that while "eager" commitment strategies tend to increase b_t, sometimes they also decrease b_e, because the number of possible establishers is reduced when plans are more ordered. This is, of course, closely related to the issues investigated in this paper.

In addition, Kambhampati and Chen (1993) have compared the relative utility of reusing partially ordered and totally ordered plans in "learning planners". They showed that the reuse of partially ordered plans, rather than totally ordered plans, results in "storage compaction" because they can represent a large number of different orderings. Moreover, partial-order planners have an advantage because they can exploit such plans more effectively than total-order planners. In many respects, these advantages are fundamentally similar to the advantages that ua derives from its potentially smaller search space.

11. Conclusions

By focusing our analysis on a single issue, namely, operator ordering commitment, we have been able to carry out a rigorous comparative analysis of two planners. We have shown that the search space of a partial-order planner, ua, is never larger than the search space of a total-order planner, to. Indeed for certain problems, ua's search space is exponentially smaller than to's. Since ua pays only a small polynomial time increment per node over to, it is generally more efficient.

We then showed that ua's search space advantage may not necessarily translate into an efficiency gain, depending in subtle ways on the search strategy and heuristics that are employed by the planner. For example, our experiments suggest that distribution-sensitive search strategies, such as depth-first search, can benefit more from partial orders than can search strategies that are distribution-insensitive.

We also examined a variety of extensions to our planners, in order to demonstrate the generality of these results. We argued that the potential benefits of partial-order planning may be retained even with highly expressive planning languages. However, we showed that partial-order planners do not necessarily have smaller search spaces, since some "less-committed" strategies may create redundancies in the search space. In particular, we demonstrated that a Tweak-like planner, mt, can have a larger search space than our total-order planner on some problems.

How general are these results? Although our analysis has considered only two specific planners, we have examined some important tradeoffs that are of general relevance. The analysis clearly illustrates how the planning language, the search strategy, and the heuristics that are used can affect the relative advantages of the two planning styles.

The results in this paper should be considered as an investigation of the possible benefits of partial-order planning.
ua and to have been constructed in order for us to analyze the total-order/partial-order distinction in isolation. In reality, the comparative behavior of two planners is rarely as clear (as witnessed by our discussion of mt). While the general points we make are applicable to other planners, if we chose two arbitrary planners, we would not expect one planner to so clearly dominate the other.

Our observations regarding the interplay between plan representation and search strategy raise new concerns for comparative analyses of planners. Historically, it has been assumed that representing plans as partial orders is categorically "better" than representing plans as total orders. The results presented in this paper begin to tell a more accurate story, one that is both more interesting and more complex than we initially expected.
Acknowledgements

Most of the work presented in this paper was originally described in two conference papers (Minton et al., 1991a, 1992). We thank Andy Philips for his many contributions to this project. He wrote the code for the planners and helped conduct the experiments. We also thank the three anonymous reviewers for their excellent comments.

Appendix A. Proofs

A.1 Definitions

This section defines the terminology and notation used in our proofs. The notion of plan equivalence is introduced here because each plan step is, by definition, a uniquely labeled operator instance, as noted in Section 3 and Section 5. Thus, no two plans have the same set of steps. Although this formalism simplifies our analysis, it requires us to define plan equivalence explicitly.

A plan is a pair ⟨Σ, ≺⟩, where Σ is a set of steps and ≺ is the "before" relation on Σ, i.e., ≺ is a strict partial order on Σ. Notationally, O_1 ≺ O_2 if and only if (O_1, O_2) ∈ ≺.

For a given problem, we define the search tree tree_TO as the complete tree of plans that is generated by the to algorithm on that problem. tree_UA is the corresponding search tree generated by ua on the same problem.

Two plans, P_1 = ⟨Σ_1, ≺_1⟩ and P_2 = ⟨Σ_2, ≺_2⟩, are said to be equivalent, denoted P_1 ≃ P_2, if there exists a bijective function f from Σ_1 to Σ_2 such that:

- for all O ∈ Σ_1, O and f(O) are instances of the same operator, and
- for all O′, O″ ∈ Σ_1, O′ ≺_1 O″ if and only if f(O′) ≺_2 f(O″).

A plan P_2 is a 1-step to-extension (or 1-step ua-extension) of a plan P_1 if P_2 is equivalent to some plan produced from P_1 in one invocation of to (or ua).

A plan P is a to-extension (or ua-extension) if either:

- P is the initial plan, or
- P is a 1-step to-extension (or 1-step ua-extension) of a to-extension (or ua-extension).

It immediately follows from this definition that if P is a member of tree_TO (or tree_UA), then P is a to-extension (or ua-extension). In addition, if P is a to-extension (or ua-extension), then some plan that is equivalent to P is a member of tree_TO (or tree_UA).

P_1 is a linearization of P_2 = ⟨Σ, ≺_2⟩ if there exists a strict total order ≺_1 such that ≺_2 ⊆ ≺_1 and P_1 ≃ ⟨Σ, ≺_1⟩.

Given a search tree, let parent be a function from a plan to its parent plan in the tree.
Note that P_1 is the parent of P_2, denoted P_1 = parent(P_2), only if P_2 is a 1-step extension of P_1.

Given U ∈ tree_UA and T ∈ tree_TO, T ∈ L(U) if and only if plan T is a linearization of plan U and either both U and T are root nodes of their respective search trees, or parent(T) ∈ L(parent(U)).

The length of the plan is the number of steps in the plan excluding the first and last steps. Thus, the initial plan has length 0. A plan P with n steps has length n − 2.

P_1 is a subplan of P_2 = ⟨Σ_2, ≺_2⟩ if P_1 ≃ ⟨Σ_1, ≺_1⟩, where

- Σ_1 ⊆ Σ_2, and
- ≺_1 is ≺_2 restricted to Σ_1, i.e., ≺_1 = ≺_2 ∩ (Σ_1 × Σ_1).

P_1 is a strict subplan of P_2 if P_1 is a subplan of P_2 and the length of P_1 is less than the length of P_2.

A solution plan P is a compact solution if no strict subplan of P is a solution.

A.2 Extension Lemmas

TO-Extension Lemma: Consider totally ordered plans T_0 = ⟨Σ_0, ≺_0⟩ and T_1 = ⟨Σ_1, ≺_1⟩, such that Σ_1 = Σ_0 ∪ {O_add} and ≺_0 ⊆ ≺_1. Let G be the set of false preconditions in T_0. Then T_1 is a 1-step to-extension of T_0 if:

- c = select-goal(G), where c is the precondition of some step O_need in T_0, and
- O_add adds c, and
- (O_add, O_need) ∈ ≺_1, and
- (O_del, O_add) ∈ ≺_1, where O_del is the last deleter of c in T_1.

Proof Sketch: This lemma follows from the definition of to. Given plan T_0, with false precondition c, once to selects c as the goal, to will consider all operators that achieve c, and for each operator to considers all positions before c and after the last deleter of c.

UA-Extension Lemma: Consider a plan U_0 = ⟨Σ_0, ≺_0⟩ produced by ua and plan U_1 = ⟨Σ_1, ≺_1⟩, such that Σ_1 = Σ_0 ∪ {O_add} and ≺_0 ⊆ ≺_1. Let G be the set of false preconditions of the steps in U_0. Then U_1 is a 1-step ua-extension of U_0 if:

- c = select-goal(G), where c is the precondition of some step O_need in U_0, and
- O_add adds c, and
- ≺_1 is a minimal set of orderings such that (O_add, O_need) ∈ ≺_1, (O_del, O_add) ∈ ≺_1 where O_del is the last deleter of c in U_1, and no step in U_1 interacts with O_add.

Proof Sketch: This lemma follows from the definition of ua. Given plan U_0, with false precondition c, ua considers all operators that achieve c, and for each such operator ua then inserts it in the plan such that it is before c and after the last deleter. ua then considers all consistent combinations of orderings between the new operator and the operators with which it interacts. No other orderings are added to the plan.

A.3 Proof of Search Space Correspondence

Mapping Lemma: Let U_0 = ⟨Σ_0, ≺_u0⟩ be an unambiguous plan and let U_1 = ⟨Σ_1, ≺_u1⟩ be a 1-step ua-extension of U_0. If T_1 = ⟨Σ_1, ≺_t1⟩ is a linearization of U_1, then there exists a plan T_0 such that T_0 is a linearization of U_0 and T_1 is a 1-step to-extension of T_0.

Proof: Since U_1 is a 1-step ua-extension of U_0, there is a step O_add such that Σ_1 = Σ_0 ∪ {O_add}. Let T_0 be the subplan produced by removing O_add from T_1; that is, T_0 = ⟨Σ_0, ≺_t0⟩, where ≺_t0 = ≺_t1 ∩ (Σ_0 × Σ_0). Since ≺_u0 = ≺_u1 ∩ (Σ_0 × Σ_0) ⊆ ≺_t1 ∩ (Σ_0 × Σ_0) = ≺_t0, it follows that T_0 is a linearization of U_0.

Using the TO-Extension Lemma, we can show that T_1 is a 1-step to-extension of T_0. First, T_0 is a linearization of U_0, so the two plans have the same set of goals. Therefore, if ua selects some goal c in expanding U_0, to selects c in extending T_0. Second, it must be the case that O_add adds c since O_add is the step ua inserted into U_0 to make c true. Third, O_add is before O_need in T_1, since O_add is before O_need in U_1 (by definition of ua) and since T_1 is a linearization of U_1. Fourth, O_add is after the last deleter of c, O_del, in T_1, since O_add is after O_del in U_1 (by definition of ua) and since T_1 is a linearization of U_1.
Therefore, the conditions of the TO-Extension Lemma hold and, thus, T_1 is a 1-step to-extension of T_0. Q.E.D.

Totality Property: For every plan U in tree_UA, there exists a non-empty set {T_1, ..., T_m} of plans in tree_TO such that L(U) = {T_1, ..., T_m}.

Proof: It suffices to show that if plan U_1 is a ua-extension and plan T_1 is a linearization of U_1, then T_1 is a to-extension. The proof is by induction on plan length.

Base case: The statement trivially holds for plans of length 0.

Induction step: Under the hypothesis that the statement holds for plans of length n, we now prove that the statement holds for plans of length n + 1. Suppose that U_1 is a ua-extension of length n + 1 and T_1 is a linearization of U_1. Let U_0 be a plan such that U_1 is a 1-step ua-extension of U_0. By the Mapping Lemma, there exists a plan T_0 such that T_0 is a linearization of U_0 and T_1 is a 1-step to-extension of T_0. By the induction hypothesis, T_0 is a to-extension. Therefore, by definition, T_1 is also a to-extension. Q.E.D.

Disjointness Property: L maps distinct plans in tree_UA to disjoint sets of plans in tree_TO; that is, if U_1, U_2 ∈ tree_UA and U_1 ≠ U_2, then L(U_1) ∩ L(U_2) = {}.

Proof: By the definition of L, if T_1, T_2 ∈ L(U), then T_1 and T_2 are at the same tree depth d in tree_TO; furthermore, U is also at depth d in tree_UA. Hence, it suffices to prove that if plans U_1 and U_2 are at depth d in tree_UA and U_1 ≠ U_2, then L(U_1) ∩ L(U_2) = {}.

Base case: The statement vacuously holds for depth 0.

Induction step: Under the hypothesis that the statement holds for plans at depth n, we prove, by contradiction, that the statement holds for plans at depth n + 1. Suppose that there exist two distinct plans, U_1 = ⟨Σ_1, ≺_1⟩ and U_2 = ⟨Σ_2, ≺_2⟩, at depth n + 1 in tree_UA such that T ∈ L(U_1) ∩ L(U_2). Then (by definition of L), parent(T) ∈ L(parent(U_1)) and parent(T) ∈ L(parent(U_2)). Since parent(U_1) ≠ parent(U_2) contradicts the induction hypothesis, suppose that U_1 and U_2 have the same parent U_0. Then, by the definition of ua, either (i) Σ_1 ≠ Σ_2 or (ii) Σ_1 = Σ_2 and ≺_1 ≠ ≺_2. In the first case, since the two plans do not contain the same set of plan steps, they have disjoint linearizations and, hence, L(U_1) ∩ L(U_2) = {}, which contradicts the supposition. In the second case, Σ_1 = Σ_2; hence, both plans resulted from adding plan step O_add to the parent plan. Since ≺_1 ≠ ≺_2, there exists a plan step O_int that interacts with O_add such that in one plan O_int is ordered before O_add and in the other plan O_add is ordered before O_int. Thus, in either case, the linearizations of the two plans are disjoint and, hence, L(U_1) ∩ L(U_2) = {}, which contradicts the supposition. Therefore, the statement holds for plans at depth n + 1. Q.E.D.

A.4 Completeness Proof for TO

We now prove that to is complete under a breadth-first search control strategy. To do so, it suffices to prove that if there exists a solution to a problem, then there exists a to-extension that is a compact solution. Before doing so, we prove the following lemma.

Subplan Lemma: Let totally ordered plan T_0 be a strict subplan of a compact solution T_s. Then there exists a plan T_1 such that T_1 is a subplan of T_s and T_1 is a 1-step to-extension of T_0.

Proof: Since T_0 is a strict subplan of T_s and T_s is a compact solution, the set of false preconditions in T_0, G, must not be empty. Let c = select-goal(G), let O_need be the step in T_0 with precondition c, and let O_add be the step in T_s that achieves c. Consider the totally ordered plan T_1 = ⟨Σ_0 ∪ {O_add}, ≺_1⟩, where ≺_1 ⊆ ≺_s. Clearly, T_1 is a subplan of T_s. Furthermore, by the TO-Extension Lemma, T_1 is a 1-step extension of T_0 by to. To see this, note that O_add is ordered before O_need in T_1 since it is ordered before O_need in T_s. Similarly, O_add is ordered after the last deleter of c in T_0 since any deleter of c in T_0 is a deleter of c in T_s, and O_add is ordered after the deleters of c in T_s. Thus, the conditions of the TO-Extension Lemma hold. Q.E.D.

TO Completeness Theorem: If plan T_s is a totally ordered compact solution, then T_s is a to-extension.

Proof: Let n be the length of T_s. We show that for all k ≤ n, there exists a subplan of T_s with length k that is a to-extension. This is sufficient to prove our result since any subplan of exactly length n is equivalent to T_s. The proof is by induction on k.

Base case: If k = 0 the statement holds since the initial plan, which has length 0, is a subplan of any solution plan.

Induction step: We assume that the statement holds for k and show that if k < n the statement holds for k + 1. By the induction hypothesis, there exists a plan T_0 of length k that is a strict subplan of T_s. By the Subplan Lemma, there exists a plan T_1 that is both a subplan of T_s and a 1-step to-extension of T_0. Thus, there exists a subplan of T_s of length k + 1. Q.E.D.

A.5 Completeness Proof for UA

We now prove that ua is complete under a breadth-first search strategy. The result follows from the search space correspondence defined by L and the fact that to is complete. In particular, we show below that for any to-extension T, there exists a ua-extension U such that T is a linearization of U. Since ua produces only unambiguous plans, it must be the case that if T is a solution, U is also a solution. From this, it follows immediately that ua is complete.

Inverse Mapping Lemma: Let T_0 = ⟨Σ_0, ≺_t0⟩ be a totally ordered plan. Let T_1 = ⟨Σ_1, ≺_t1⟩ be a 1-step to-extension of T_0. Let U_0 = ⟨Σ_0, ≺_u0⟩ be a plan produced by ua such that T_0 is a linearization of U_0. Then there exists a plan U_1 such that T_1 is a linearization of U_1 and U_1 is a 1-step ua-extension of U_0.

Proof: By the definition of to, Σ_1 = Σ_0 ∪ {O_add}, where O_add added some c that is a false precondition of some plan step O_need in U_0. Consider U_1 = ⟨Σ_1, ≺_u1⟩, where ≺_u1 is a minimal subset of ≺_t1 such that:

- ≺_u0 ⊆ ≺_u1, and
- (O_add, O_need) ∈ ≺_u1, and
- (O_del, O_add) ∈ ≺_u1, where O_del is the last deleter of c in U_1, and
- no step in U_1 interacts with O_add.

Since ≺_u1 ⊆ ≺_t1, T_1 is a linearization of U_1. In addition, U_1 is an extension of U_0 since it meets the three conditions of the UA-Extension Lemma, as follows. First, since c must have been the goal selected by to in extending T_0, c must likewise be selected by ua in extending U_0. Second, O_add adds c since O_add achieves c in T_0. Finally, by construction, ≺_u1 satisfies the third condition of the UA-Extension Lemma. Q.E.D.

UA Completeness Theorem: Let T_s be a totally ordered compact solution.
Then there exists a ua-extension U_s such that T_s is a linearization of U_s.

Proof: Since to is complete, it suffices to show that if T_1 is a to-extension, then there exists a ua-extension U_1 such that T_1 is a linearization of U_1. The proof is by induction on plan length.

Base case: The statement trivially holds for plans of length 0.

Induction step: Under the hypothesis that the statement holds for plans of length n, we now prove that the statement holds for plans of length n + 1. Assume T_1 is a to-extension of length n + 1, and let T_0 be a plan such that T_1 is a 1-step to-extension of T_0. By the induction hypothesis, there exists a ua-extension U_0 of length n such that T_0 is a linearization of U_0. By the Inverse Mapping Lemma, there exists a plan U_1 such that T_1 is a linearization of U_1 and U_1 is a 1-step ua-extension of U_0. Since U_1 is a 1-step ua-extension of U_0, it has length n + 1. Q.E.D.
[ { "authors": "C ; P ", "journal": "", "ref_id": "b0", "title": "", "year": "" }, { "authors": "", "journal": "", "ref_id": "b1", "title": "role selection: While there exists a step Ocadd with unmarked conditional add hDc; ci and a step Ouse with precondition c, such that Ouse is after Ocadd and there is no (unconditional) deleter of c in between Ouse and Ocadd. Either mark hDc; ci, or replace Ocadd", "year": "" }, { "authors": "", "journal": "", "ref_id": "b2", "title": "Goal updating: Let G 0 be the set of preconditions in P 0 that are necessarily false. 6 Recursive invocation: UA-C(P 0 ; G 0 )", "year": "" }, { "authors": "", "journal": "", "ref_id": "b3", "title": "Goal selection: Let c = select-goal(G), and", "year": "" }, { "authors": "", "journal": "", "ref_id": "b4", "title": "Let P 0 be the resulting plan", "year": "" }, { "authors": "", "journal": "", "ref_id": "b5", "title": "Goal updating: Let G 0 be the set of preconditions in P 0 that are not necessarily true. 6. Recursive invocation: MT(P 0 ; G 0 )", "year": "" }, { "authors": "C Backstrom", "journal": "", "ref_id": "b6", "title": "Finding least constrained plans and optimal parallel executions is harder than we thought", "year": "1993" }, { "authors": "A Barrett; S Soderland; D Weld", "journal": "", "ref_id": "b7", "title": "The e ect of step-order representations on planning", "year": "1991" }, { "authors": "A Barrett; D Weld", "journal": "Arti cial Intelligence", "ref_id": "b8", "title": "Partial-order planning: Evaluating possible e ciency gains", "year": "1994" }, { "authors": "D Chapman", "journal": "Arti cial Intelligence", "ref_id": "b9", "title": "Planning for conjunctive goals", "year": "1987" }, { "authors": "P Chen", "journal": "", "ref_id": "b10", "title": "Heuristic Sampling on Backtrack Trees", "year": "1989" }, { "authors": "G Collins; L Pryor", "journal": "", "ref_id": "b11", "title": "Achieving the functionality of lter conditions in a partial order planner", "year": "1992" }, { "authors": "J Crawford; A Baker", "journal": "", "ref_id": "b12", "title": "Experimental results on the application of satis ability algorithms to scheduling problems", "year": "1994" }, { "authors": "T Dean; M Boddy", "journal": "Arti cial Intelligence", "ref_id": "b13", "title": "Reasoning about partially ordered events", "year": "1988" }, { "authors": "M Drummond; K Currie", "journal": "", "ref_id": "b14", "title": "Goal-ordering in partially ordered plans", "year": "1989" }, { "authors": "M Ginsberg; W Harvey", "journal": "Arti cial Intelligence", "ref_id": "b15", "title": "Iterative broadening", "year": "1992" }, { "authors": "P Godefroid; F Kabanza", "journal": "", "ref_id": "b16", "title": "An e cient reactive planner for synthesizing reactive plans", "year": "1991" }, { "authors": "J Hertzberg; A Horz", "journal": "", "ref_id": "b17", "title": "Towards a theory of con ict detection and resolution in nonlinear plans", "year": "1989" }, { "authors": "S Kambhampati", "journal": "", "ref_id": "b18", "title": "Design tradeo s in partial order (plan space) planning", "year": "1994" }, { "authors": "S Kambhampati", "journal": "Arti cial Intelligence", "ref_id": "b19", "title": "Multi contributor causal structures for planning: A formalization and evaluation", "year": "1994" }, { "authors": "S Kambhampati", "journal": "", "ref_id": "b20", "title": "Re nement search as a unifying framework for analyzing plan space planners", "year": "1994" }, { "authors": "S Kambhampati; J Chen", "journal": "", "ref_id": "b21", "title": "Relative 
utility of EBG-based plan reuse in partial ordering vs. total ordering planning", "year": "1993" }, { "authors": "R Korf", "journal": "Articial Intelligence", "ref_id": "b22", "title": "Depth-rst iterative deepening: An optimal admissible tree search", "year": "1985" }, { "authors": "R Korf", "journal": "Arti cial Intelligence", "ref_id": "b23", "title": "Planning as search: A quantitative approach", "year": "1987" }, { "authors": "P Langley", "journal": "", "ref_id": "b24", "title": "Systematic and nonsystematic search strategies", "year": "1992" }, { "authors": "D Mcallester; D Rosenblitt", "journal": "", "ref_id": "b25", "title": "Systematic nonlinear planning", "year": "1991" }, { "authors": "S Minton; J Bresina; M Drummond", "journal": "", "ref_id": "b26", "title": "Commitment strategies in planning: A comparative analysis", "year": "1991" }, { "authors": "S Minton; J Bresina; M Drummond; A Philips", "journal": "", "ref_id": "b27", "title": "An analysis of commitment strategies in planning: The details", "year": "1991" }, { "authors": "S Minton; M Drummond; J Bresina; A Philips", "journal": "", "ref_id": "b28", "title": "Total order vs. partial order planning: Factors in uencing performance", "year": "1992" }, { "authors": "E Pednault", "journal": "Computational Intelligence", "ref_id": "b29", "title": "Synthesizing plans that contain actions with context-dependent e ects", "year": "1988" }, { "authors": "J Penberthy; D Weld", "journal": "", "ref_id": "b30", "title": "UCPOP: A sound, complete, partial-order planner for adl", "year": "1992" }, { "authors": "P Regnier; B Fade", "journal": "Springer", "ref_id": "b31", "title": "Complete determination of parallel actions and temporal optimization in linear plans of action", "year": "1991" }, { "authors": "P Rosenbloom; S Lee; A Unruh", "journal": "Morgan Kaufmann Publishers", "ref_id": "b32", "title": "Bias in planning and explanation-based learning", "year": "1993" }, { "authors": "E Sacerdoti", "journal": "", "ref_id": "b33", "title": "The nonlinear nature of plans", "year": "1975" }, { "authors": "E Sacerdoti", "journal": "American Elsivier", "ref_id": "b34", "title": "A Structure for Plans and Behavior", "year": "1977" }, { "authors": "A Tate", "journal": "", "ref_id": "b35", "title": "Interplan: A plan generation system which can deal with interactions between goals", "year": "1974" }, { "authors": "A Tate", "journal": "", "ref_id": "b36", "title": "Generating project networks", "year": "1977" }, { "authors": "M Veloso; M Perez; J Carbonell", "journal": "", "ref_id": "b37", "title": "Nonlinear planning with parallel resource allocation", "year": "1990" }, { "authors": "R Waldinger", "journal": "Machine Intelligence", "ref_id": "b38", "title": "Achieving several goals simultaneously", "year": "1975" }, { "authors": "Ellis Harwood; Ltd ", "journal": "", "ref_id": "b39", "title": "", "year": "" }, { "authors": "D Warren", "journal": "", "ref_id": "b40", "title": "Warplan: A system for generating plans", "year": "1974" } ]
[ { "formula_coordinates": [ 8, 175.92, 100.68, 278.16, 114.96 ], "formula_id": "formula_0", "formula_text": "h h h h h h h h h h h - - - A A A A U ? @ @ @ @ R H H H H j - @ @ R g g g g f f f f L L L L" }, { "formula_coordinates": [ 9, 186.72, 95.4, 238.56, 87.48 ], "formula_id": "formula_1", "formula_text": "Step Executions Per Plan TO Cost UA Cost 1 1 O(1) O(1) 2 1 O(1) O(1) 3 < 1 O(1) O(1) 4 1 O(1) O(e) 5 1 O(n) O(e)" }, { "formula_coordinates": [ 10, 141.36, 268.8, 101.76, 12.8 ], "formula_id": "formula_2", "formula_text": "Let Oint = Pop(stepsint)" }, { "formula_coordinates": [ 11, 202.32, 317.28, 207.36, 49.32 ], "formula_id": "formula_3", "formula_text": "cost(to bf ) cost(ua bf ) = P u2bf(tree UA ) O(n u ) j L(U) j P u2bf(tree UA ) O(e u )" }, { "formula_coordinates": [ 16, 122.33, 87.35, 379.78, 257.73 ], "formula_id": "formula_4", "formula_text": "1 O 3 O 2 O 2 O 1 O 3 O 1 O 3 O 2 O 2 O 1 O 1 O 1 O 2 O 1 O 2 O 1 O 3 O 2 O 1 O 3 O 2 O 3 O 2 O KEY p 1 O p q r q UA TO 1 O Figure 9:" }, { "formula_coordinates": [ 23, 164.24, 129.59, 267.27, 176.06 ], "formula_id": "formula_5", "formula_text": "O u t p q O [ ] p q O t u [ ] O p q O t u O O u r" }, { "formula_coordinates": [ 23, 174.96, 103.13, 256.56, 260.67 ], "formula_id": "formula_6", "formula_text": "r O q s r O q s s O u [ ] r s O u [ ] r s r O q s Figure 13" }, { "formula_coordinates": [ 26, 207.72, 109.91, 207.22, 227.64 ], "formula_id": "formula_7", "formula_text": "1 O 3 O 2 O 1 O 3 O 2 O 3 O 2 O g 2 g 3 1 O 2 O 3 O 1 O 2 O 3 O 2 O 3 O 1 O KEY p 1 1" } ]
Total-Order and Partial-Order Planning: A Comparative Analysis
For many years, the intuitions underlying partial-order planning were largely taken for granted. Only in the past few years has there been renewed interest in the fundamental principles underlying this paradigm. In this paper, we present a rigorous comparative analysis of partial-order and total-order planning by focusing on two specific planners that can be directly compared. We show that there are some subtle assumptions that underlie the widespread intuitions regarding the supposed efficiency of partial-order planning. For instance, the superiority of partial-order planning can depend critically upon the search strategy and the structure of the search space. Understanding the underlying assumptions is crucial for constructing efficient planners.
Steven Minton; Mark Drummond
[ { "figure_caption": "FigureFigure 2 :2Figure 1: The to planning algorithm", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "FigureFigure 4 :4Figure 3: The ua planning algorithm", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6: ua and to Performance Comparison under Depth-First Search", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Uniform solution distribution, with solution density 0.25", "figure_data": "", "figure_id": "fig_3", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Non-uniform solution distribution, with solution density 0.25", "figure_data": "", "figure_id": "fig_4", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Depth rst search with and without min-goals", "figure_data": "", "figure_id": "fig_5", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Iterative sampling & iterative broadening, both with min-goals", "figure_data": "", "figure_id": "fig_6", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Hierarchy of Plan Spaces", "figure_data": "", "figure_id": "fig_7", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 1515Figure 15: A continuum of commitment strategies", "figure_data": "", "figure_id": "fig_8", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 17: \\Overlapping\" plans.", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Termination check: If G is empty, report success and return solution plan P. 2. Goal selection: Let c = select-goal(G), and let Oneed be the plan step for which c is a precondition. 3. Operator selection: Let Oadd be an operator in the library that adds c. If there is no such Oadd, then terminate and report failure. Choice point: all such operators must be considered for completeness.4. Ordering selection: Let Odel be the last deleter of c. Insert Oadd somewhere between Odel and", "figure_data": "TO(P;G) 1.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "That is, if tree UA has N UA leaves, of which k UA are solutions,", "figure_data": "UA Search TreeTO Search Tree*******= Solution plan", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b4", "b25", "b26", "b31", "b19", "b17", "b32", "b20", "b28", "b3", "b14", "b9" ], "table_ref": [ "tab_0", "tab_1" ], "text": "The task of learning from examples is to nd an approximate de nition for an unknown function f (x) given training examples of the form hx i ; f (x i )i. For cases in which f takes only the values f0; 1g|binary functions|there are many algorithms available. For example, the decision-tree methods, such as C4.5 (Quinlan, 1993) and CART (Breiman, Friedman, Olshen, & Stone, 1984) can construct trees whose leaves are labeled with binary values. Most arti cial neural network algorithms, such as the perceptron algorithm (Rosenblatt, 1958) and the error backpropagation (BP) algorithm (Rumelhart, Hinton, & Williams, 1986), are best suited to learning binary functions. Theoretical studies of learning have focused almost entirely on learning binary functions (Valiant, 1984;Natarajan, 1991).\nIn many real-world learning tasks, however, the unknown function f often takes values from a discrete set of \\classes\": fc 1 ; : : : ; c k g. For example, in medical diagnosis, the function might map a description of a patient to one of k possible diseases. In digit recognition (e.g., c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved. LeCun, Boser, Denker, Henderson, Howard, Hubbard, & Jackel, 1989), the function maps each hand-printed digit to one of k = 10 classes. Phoneme recognition systems (e.g., Waibel, Hanazawa, Hinton, Shikano, & Lang, 1989) typically classify a speech segment into one of 50 to 60 phonemes. Decision-tree algorithms can be easily generalized to handle these \\multiclass\" learning tasks. Each leaf of the decision tree can be labeled with one of the k classes, and internal nodes can be selected to discriminate among these classes. We will call this the direct multiclass approach.\nConnectionist algorithms are more di cult to apply to multiclass problems. The standard approach is to learn k individual binary functions f 1 ; : : : ; f k , one for each class. To assign a new case, x, to one of these classes, each of the f i is evaluated on x, and x is assigned the class j of the function f j that returns the highest activation (Nilsson, 1965). We will call this the one-per-class approach, since one binary function is learned for each class.\nAn alternative approach explored by some researchers is to employ a distributed output code. This approach was pioneered by Sejnowski and Rosenberg (1987) in their widelyknown NETtalk system. Each class is assigned a unique binary string of length n; we will refer to these strings as \\codewords.\" Then n binary functions are learned, one for each bit position in these binary strings. During training for an example from class i, the desired outputs of these n binary functions are speci ed by the codeword for class i. With arti cial neural networks, these n functions can be implemented by the n output units of a single network.\nNew values of x are classi ed by evaluating each of the n binary functions to generate an n-bit string s. This string is then compared to each of the k codewords, and x is assigned to the class whose codeword is closest, according to some distance measure, to the generated string s.\nAs an example, consider Table 1, which shows a six-bit distributed code for a ten-class digit-recognition problem. Notice that each row is distinct, so that each class has a unique codeword. 
As in most applications of distributed output codes, the bit positions (columns) have been chosen to be meaningful. Table 2 gives the meanings for the six columns. During learning, one binary function will be learned for each column. Notice that each column is also distinct and that each binary function to be learned is a disjunction of the original classes. For example, f_vl(x) = 1 if f(x) is 1, 4, or 5.\nTo classify a new hand-printed digit, x, the six functions f_vl, f_hl, f_dl, f_cc, f_ol, and f_or are evaluated to obtain a six-bit string, such as 110001. Then the distance of this string to each of the ten codewords is computed. The nearest codeword, according to Hamming distance (which counts the number of bits that differ), is 110000, which corresponds to class 4. Hence, this predicts that f(x) = 4.\nThis process of mapping the output string to the nearest codeword is identical to the decoding step for error-correcting codes (Bose & Ray-Chaudhuri, 1960; Hocquenghem, 1959). This suggests that there might be some advantage to employing error-correcting codes as a distributed representation. Indeed, the idea of employing error-correcting, distributed representations can be traced to early research in machine learning (Duda, Machanik, & Singleton, 1963). " }, { "figure_ref": [], "heading": "Code Word", "publication_ref": [], "table_ref": [], "text": "Class: vl hl dl cc ol or\n0: 0 0 0 1 0 0\n1: 1 0 0 0 0 0\n2: 0 1 1 0 1 0\n3: 0 0 0 0 1 0\n4: 1 1 0 0 0 0\n5: 1 1 0 0 1 0\n6: 0 0 1 1 0 1\n7: 0 0 1 0 0 0\n8: 0 0 0 1 0 0\n9: 0 0 1 1 0 0" }, { "figure_ref": [], "heading": "Code Word Class", "publication_ref": [ "b28", "b15" ], "table_ref": [ "tab_2", "tab_0", "tab_2", "tab_0" ], "text": "Class: f0 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14\n0: 1 1 0 0 0 0 1 0 1 0 0 1 1 0 1\n1: 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0\n2: 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1\n3: 0 0 1 1 0 1 1 1 0 0 0 0 1 0 1\n4: 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1\n5: 0 1 0 0 1 1 0 1 1 1 0 0 0 0 1\n6: 1 0 1 1 1 0 0 0 0 1 0 1 0 0 1\n7: 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1\n8: 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1\n9: 0 1 1 1 0 0 0 0 1 0 1 0 0 1 1\nTable 3 shows a 15-bit error-correcting code for the digit-recognition task. Each class is represented by a code word drawn from an error-correcting code. As with the distributed encoding of Table 1, a separate boolean function is learned for each bit position of the error-correcting code. To classify a new example x, each of the learned functions f_0(x), ..., f_14(x) is evaluated to produce a 15-bit string. This is then mapped to the nearest of the ten codewords. This code can correct up to three errors out of the 15 bits. This error-correcting code approach suggests that we view machine learning as a kind of communications problem in which the identity of the correct output class for a new example is being "transmitted" over a channel. The channel consists of the input features, the training examples, and the learning algorithm. Because of errors introduced by the finite training sample, poor choice of input features, and flaws in the learning process, the class information is corrupted. By encoding the class in an error-correcting code and "transmitting" each bit separately (i.e., via a separate run of the learning algorithm), the system may be able to recover from the errors.\nThis perspective further suggests that the one-per-class and "meaningful" distributed output approaches will be inferior, because their output representations do not constitute robust error-correcting codes.
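To make the decoding step concrete, here is a small runnable Python sketch of nearest-codeword decoding under Hamming distance, using the six-bit code transcribed from the table above. (Classes 0 and 8 come out identical in the extracted table, presumably an extraction artifact, since the text states that every row is distinct; the worked example is unaffected.)

```python
def hamming(a, b):
    """Number of bit positions in which the strings a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(bits, code):
    """Return the class whose codeword is nearest to `bits` in Hamming distance."""
    return min(code, key=lambda cls: hamming(bits, code[cls]))

SIX_BIT_CODE = {  # class -> codeword over (vl, hl, dl, cc, ol, or)
    0: "000100", 1: "100000", 2: "011010", 3: "000010", 4: "110000",
    5: "110010", 6: "001101", 7: "001000", 8: "000100", 9: "001100",
}

# The worked example from the text: the learned functions emit 110001, whose
# nearest codeword is 110000, so the predicted class is 4.
assert decode("110001", SIX_BIT_CODE) == 4
```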
A measure of the quality of an error-correcting code is the minimum Hamming distance between any pair of code words. If the minimum Hamming distance is d, then the code can correct at least floor((d-1)/2) single-bit errors. This is because each single-bit error moves us one unit away from the true codeword (in Hamming distance). If we make only floor((d-1)/2) errors, the nearest codeword will still be the correct codeword. (The code of Table 3 has minimum Hamming distance seven, and hence it can correct errors in any three bit positions.) The Hamming distance between any two codewords in the one-per-class code is two, so the one-per-class encoding of the k output classes cannot correct any errors.\nThe minimum Hamming distance between pairs of codewords in a "meaningful" distributed representation tends to be very low. For example, in Table 1, the Hamming distance between the codewords for classes 4 and 5 is only one. In these kinds of codes, new columns are often introduced to discriminate between only two classes. Those two classes will therefore differ only in one bit position, so the Hamming distance between their output representations will be one. This is also true of the distributed representation developed by Sejnowski and Rosenberg (1987) in the NETtalk task.\nIn this paper, we compare the performance of the error-correcting code approach to the three existing approaches: the direct multiclass method (using decision trees), the one-per-class method, and (in the NETtalk task only) the meaningful distributed output representation approach. We show that error-correcting codes produce uniformly better generalization performance across a variety of multiclass domains for both the C4.5 decision-tree learning algorithm and the backpropagation neural network learning algorithm. We then report a series of experiments designed to assess the robustness of the error-correcting code approach to various changes in the learning task: length of the code, size of the training set, assignment of codewords to classes, and decision-tree pruning. Finally, we show that the error-correcting code approach can produce reliable class probability estimates.\nThe paper concludes with a discussion of the open questions raised by these results. Chief among these questions is the issue of why the errors being made in the different bit positions of the output are somewhat independent of one another. Without this independence, the error-correcting output code method would fail. We address this question, for the case of decision-tree algorithms, in a companion paper (Kong & Dietterich, 1995)." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "This section describes the data sets and learning algorithms employed in this study. It also discusses the issues involved in the design of error-correcting codes and describes four algorithms for code design. The section concludes with a brief description of the methods applied to make classification decisions and evaluate performance on independent test sets." }, { "figure_ref": [], "heading": "Data Sets", "publication_ref": [ "b18", "b7" ], "table_ref": [ "tab_3" ], "text": "Table 4 summarizes the data sets employed in the study. The glass, vowel, soybean, audiologyS, ISOLET, letter, and NETtalk data sets are available from the Irvine Repository of machine learning databases (Murphy & Aha, 1994). The POS (part of speech) data set was provided by C.
Cardie (personal communication); an earlier version of the data set was described by Cardie (1993). We did not use the entire NETtalk data set, which consists of a dictionary of 20,003 words and their pronunciations. Instead, to make the experiments feasible, we chose a training set of 1000 words and a disjoint test set of 1000 words at random from the NETtalk dictionary. In this paper, we focus on the percentage of letters pronounced correctly (rather than whole words). To pronounce a letter, both the phoneme and stress of the letter must be determined. Although there are 54 x 6 syntactically possible combinations of phonemes and stresses, only 140 of these appear in the training and test sets we selected." }, { "figure_ref": [], "heading": "Learning Algorithms", "publication_ref": [ "b23", "b2", "b13", "b5", "b24", "b33", "b16" ], "table_ref": [], "text": "We employed two general classes of learning methods: algorithms for learning decision trees and algorithms for learning feed-forward networks of sigmoidal units (artificial neural networks). For decision trees, we performed all of our experiments using C4.5, Release 1, which is an older (but substantially identical) version of the program described in Quinlan (1993).\nWe have made several changes to C4.5 to support distributed output representations, but these have not affected the tree-growing part of the algorithm. For pruning, the confidence factor was set to 0.25. C4.5 contains a facility for creating "soft thresholds" for continuous features. We found experimentally that this improved the quality of the class probability estimates produced by the algorithm in the "glass", "vowel", and "ISOLET" domains, so the results reported for those domains were computed using soft thresholds.\nFor neural networks, we employed two implementations. In most domains, we used the extremely fast backpropagation implementation provided by the CNAPS neurocomputer (Adaptive Solutions, 1992). This performs simple gradient descent with a fixed learning rate. The gradient is updated after presenting each training example; no momentum term was employed. A potential limitation of the CNAPS is that inputs are only represented to eight bits of accuracy, and weights are only represented to 16 bits of accuracy. Weight update arithmetic does not round, but instead performs jamming (i.e., forcing the lowest order bit to 1 when low order bits are lost due to shifting or multiplication). On the speech recognition, letter recognition, and vowel data sets, we employed the opt system distributed by Oregon Graduate Institute (Barnard & Cole, 1989). This implements the conjugate gradient algorithm and updates the gradient after each complete pass through the training examples (known as per-epoch updating). No learning rate is required for this approach.\nBoth the CNAPS and opt attempt to minimize the squared error between the computed and desired outputs of the network. Many researchers have employed other error measures, particularly cross-entropy (Hinton, 1989) and classification figure-of-merit (CFM, Hampshire II & Waibel, 1990). Many researchers also advocate using a softmax normalizing layer at the outputs of the network (Bridle, 1990). While each of these configurations has good theoretical support, Richard and Lippmann (1991) report that squared error works just as well as these other measures in producing accurate posterior probability estimates.
Furthermore, cross-entropy and CFM tend to overfit more easily than squared error (Lippmann, personal communication; Weigend, 1993). We chose to minimize squared error because this is what the CNAPS and opt systems implement.\nWith either neural network algorithm, several parameters must be chosen by the user. For the CNAPS, we must select the learning rate, the initial random seed, the number of hidden units, and the stopping criteria. We selected these to optimize performance on a validation set, following the methodology of Lang, Hinton, and Waibel (1990). The training set is subdivided into a subtraining set and a validation set. While training on the subtraining set, we observed generalization performance on the validation set to determine the optimal settings of learning rate and network size and the best point at which to stop training. The training set mean squared error at that stopping point is computed, and training is then performed on the entire training set using the chosen parameters and stopping at the indicated mean squared error. Finally, we measure network performance on the test set.\nFor most of the data sets, this procedure worked very well. However, for the letter recognition data set, it was clearly choosing poor stopping points for the full training set. To overcome this problem, we employed a slightly different procedure to determine the stopping epoch. We trained on a series of progressively larger training sets (all of which were subsets of the final training set). Using a validation set, we determined the best stopping epoch on each of these training sets. We then extrapolated from these training sets to predict the best stopping epoch on the full training set.\nFor the "glass" and "POS" data sets, we employed ten-fold cross-validation to assess generalization performance. We chose training parameters based on only one "fold" of the ten-fold cross-validation. This creates some test set contamination, since examples in the validation set data of one fold are in the test set data of other folds. However, we found that there was little or no overfitting, so the validation set had little effect on the choice of parameters or stopping points.\nThe other data sets all come with designated test sets, which we employed to measure generalization performance." }, { "figure_ref": [], "heading": "Error-Correcting Code Design", "publication_ref": [ "b22" ], "table_ref": [ "tab_2", "tab_4" ], "text": "We define an error-correcting code to be a matrix of binary values such as the matrix shown in Table 3. The length of a code is the number of columns in the code. The number of rows in the code is equal to the number of classes in the multiclass learning problem. A "codeword" is a row in the code.\nA good error-correcting output code for a k-class problem should satisfy two properties:\nRow separation. Each codeword should be well-separated in Hamming distance from each of the other codewords.\nColumn separation. Each bit-position function f_i should be uncorrelated with the functions to be learned for the other bit positions f_j, j != i. This can be achieved by insisting that the Hamming distance between column i and each of the other columns be large and that the Hamming distance between column i and the complement of each of the other columns also be large.\nThe power of a code to correct errors is directly related to the row separation, as discussed above. The purpose of the column separation condition is less obvious.
If two columns i and j are similar or identical, then when a deterministic learning algorithm such as C4.5 is applied to learn f_i and f_j, it will make similar (correlated) mistakes. Error-correcting codes only succeed if the errors made in the individual bit positions are relatively uncorrelated, so that the number of simultaneous errors in many bit positions is small. If there are many simultaneous errors, the error-correcting code will not be able to correct them (Peterson & Weldon, 1972).\nThe errors in columns i and j will also be highly correlated if the bits in those columns are complementary. This is because algorithms such as C4.5 and backpropagation treat a class and its complement symmetrically. C4.5 will construct identical decision trees if the 0-class and 1-class are interchanged. The maximum Hamming distance between two columns is attained when the columns are complements. Hence, the column separation condition attempts to ensure that columns are neither identical nor complementary.\nClass: f0 f1 f2 f3 f4 f5 f6 f7\nc0: 0 0 0 0 1 1 1 1\nc1: 0 0 1 1 0 0 1 1\nc2: 0 1 0 1 0 1 0 1\nUnless the number of classes is at least five, it is difficult to satisfy both of these properties. For example, when the number of classes is three, there are only 2^3 = 8 possible columns (see Table 5). Of these, half are complements of the other half. So this leaves us with only four possible columns. One of these will be either all zeroes or all ones, which will make it useless for discriminating among the rows. The result is that we are left with only three possible columns, which is exactly what the one-per-class encoding provides.\nIn general, if there are k classes, there will be at most 2^(k-1) - 1 usable columns after removing complements and the all-zeros or all-ones column. For four classes, we get a seven-column code with minimum inter-row Hamming distance 4. For five classes, we get a 15-column code, and so on.\nWe have employed four methods for constructing good error-correcting output codes in this paper: (a) an exhaustive technique, (b) a method that selects columns from an exhaustive code, (c) a method based on a randomized hill-climbing algorithm, and (d) BCH codes. The choice of which method to use is based on the number of classes, k. Finding a single method suitable for all values of k is an open research problem. We describe each of our four methods in turn." }, { "figure_ref": [], "heading": "Exhaustive Codes", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "When 3 <= k <= 7, we construct a code of length 2^(k-1) - 1 as follows. Row 1 is all ones. Row 2 consists of 2^(k-2) zeroes followed by 2^(k-2) - 1 ones. Row 3 consists of 2^(k-3) zeroes, followed by 2^(k-3) ones, followed by 2^(k-3) zeroes, followed by 2^(k-3) - 1 ones. In row i, there are alternating runs of 2^(k-i) zeroes and ones. Table 6 shows the exhaustive code for a five-class problem:\nRow 1: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\nRow 2: 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1\nRow 3: 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1\nRow 4: 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1\nRow 5: 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0\nThis code has inter-row Hamming distance 8; no columns are identical or complementary." }, { "figure_ref": [], "heading": "Column Selection from Exhaustive Codes", "publication_ref": [ "b29" ], "table_ref": [ "tab_7" ], "text": "When 8 <= k <= 11, we construct an exhaustive code and then select a good subset of its columns. We formulate this as a propositional satisfiability problem and apply the GSAT algorithm (Selman, Levesque, & Mitchell, 1992) to attempt a solution. A solution is required to include exactly L columns (the desired length of the code) while ensuring that the Hamming distance between every two columns is between d and L - d, for some chosen value of d. In the GSAT formulation, each column is represented by a boolean variable, and a pairwise mutual exclusion constraint is placed between any two columns that violate the column separation condition. To support these constraints, we extended GSAT to support mutual exclusion and "m-of-n" constraints efficiently." },
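The exhaustive construction and the two separation conditions are easy to check mechanically. The following Python sketch is our own rendering of the construction described above, not the authors' code; for k = 5 it reproduces the Table 6 rows and the reported inter-row Hamming distance of 8.

```python
def exhaustive_code(k):
    """k x (2^(k-1) - 1) exhaustive code: row 1 is all ones; row i alternates
    runs of 2^(k-i) zeroes and ones, starting with zeroes."""
    length = 2 ** (k - 1) - 1
    code = [[1] * length]
    for i in range(2, k + 1):
        run = 2 ** (k - i)
        code.append([(j // run) % 2 for j in range(length)])
    return code

def separations(code):
    """Return (min row distance, min col distance, max col distance). Row distance
    should be large; column distances should stay away from both 0 and k, since a
    column at distance k from another is its complement."""
    k, n = len(code), len(code[0])
    ham = lambda u, v: sum(x != y for x, y in zip(u, v))
    rows = min(ham(code[a], code[b]) for a in range(k) for b in range(a + 1, k))
    cols = [[r[j] for r in code] for j in range(n)]
    dists = [ham(cols[a], cols[b]) for a in range(n) for b in range(a + 1, n)]
    return rows, min(dists), max(dists)

print(separations(exhaustive_code(5)))   # min row distance is 8, as stated above
```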
{ "figure_ref": [ "fig_0" ], "heading": "Randomized Hill Climbing", "publication_ref": [], "table_ref": [], "text": "For k > 11, we employed a random search algorithm that begins by drawing k random strings of the desired length L. Any pair of such random strings will be separated by a Hamming distance that is binomially distributed with mean L/2. Hence, such randomly generated codes are generally quite good on average. To improve them, the algorithm repeatedly finds the pair of rows closest together in Hamming distance and the pair of columns that have the "most extreme" Hamming distance (i.e., either too close or too far apart). The algorithm then computes the four codeword bits where these rows and columns intersect and changes them to improve the row and column separations, as shown in Figure 1. When this hill climbing procedure reaches a local maximum, the algorithm randomly chooses pairs of rows and columns and tries to improve their separations. This combined hill-climbing/random-choice procedure is able to improve the minimum Hamming distance separation quite substantially." }, { "figure_ref": [], "heading": "BCH Codes", "publication_ref": [ "b3", "b14" ], "table_ref": [], "text": "For k > 11 we also applied the BCH algorithm to design codes (Bose & Ray-Chaudhuri, 1960; Hocquenghem, 1959). The BCH algorithm employs algebraic methods from Galois field theory to design nearly optimal error-correcting codes. However, there are three practical drawbacks to using this algorithm. First, published tables of the primitive polynomials required by this algorithm only produce codes up to length 64, since this is the largest word size employed in computer memories. Second, the codes do not always exhibit good column separations. Third, the number of rows in these codes is always a power of two. If the number of classes k in our learning problem is not a power of two, we must shorten the code by deleting rows (and possibly columns) while maintaining good row and column separations. We have experimented with various heuristic greedy algorithms for code shortening. For most of the codes used in the NETtalk, ISOLET, and Letter Recognition domains, we have used a combination of simple greedy algorithms and manual intervention to design good shortened BCH codes.\nIn each of the data sets that we studied, we designed a series of error-correcting codes of increasing lengths. We executed each learning algorithm for each of these codes. We stopped lengthening the codes when performance appeared to be leveling off." },
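To give the flavor of the hill-climbing code-improvement step described above, here is a simplified Python sketch. It is our reading of the procedure, not the authors' implementation: it picks candidate row and column pairs at random rather than deterministically selecting the closest rows and most extreme columns, and it collapses the two separation criteria into one crude score.

```python
import random

def improve_code(k, L, iters=2000, seed=0):
    """Draw k random rows of length L, then repeatedly flip the four bits where a
    pair of rows and a pair of columns intersect, keeping flips that do not hurt
    a combined row/column separation score."""
    rng = random.Random(seed)
    code = [[rng.randint(0, 1) for _ in range(L)] for _ in range(k)]
    ham = lambda u, v: sum(x != y for x, y in zip(u, v))

    def score():
        rows = min(ham(code[a], code[b]) for a in range(k) for b in range(a + 1, k))
        cols = [[r[j] for r in code] for j in range(L)]
        extreme = max(abs(k / 2.0 - ham(cols[a], cols[b]))
                      for a in range(L) for b in range(a + 1, L))
        return rows - extreme          # larger is better

    best = score()
    for _ in range(iters):
        (i, j), (p, q) = rng.sample(range(k), 2), rng.sample(range(L), 2)
        for r, c in ((i, p), (i, q), (j, p), (j, q)):
            code[r][c] ^= 1            # flip the four intersection bits
        new = score()
        if new >= best:
            best = new                 # keep the improvement (or sideways move)
        else:
            for r, c in ((i, p), (i, q), (j, p), (j, q)):
                code[r][c] ^= 1        # undo the flips
    return code
```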
{ "figure_ref": [], "heading": "Making Classification Decisions", "publication_ref": [], "table_ref": [], "text": "Each approach to solving multiclass problems (direct multiclass, one-per-class, and error-correcting output coding) assumes a method for classifying new examples. For the C4.5 direct multiclass approach, the C4.5 system computes a class probability estimate for each new example. This estimates the probability that the example belongs to each of the k classes. C4.5 then chooses the class having the highest probability as the class of the example.\nFor the one-per-class approach, each decision tree or neural network output unit can be viewed as computing the probability that the new example belongs to its corresponding class. The class whose decision tree or output unit gives the highest probability estimate is chosen as the predicted class. Ties are broken arbitrarily in favor of the class that comes first in the class ordering.\nFor the error-correcting output code approach, each decision tree or neural network output unit can be viewed as computing the probability that its corresponding bit in the codeword is one. Call these probability values B = <b_1, b_2, ..., b_n>, where n is the length of the codewords in the error-correcting code. To classify a new example, we compute the L1 distance between this probability vector B and each of the codewords W_i (i = 1, ..., k) in the error-correcting code. The L1 distance between B and W_i is defined as\nL1(B, W_i) = sum_{j=1}^{n} |b_j - W_{i,j}|.\nThe class whose codeword has the smallest L1 distance to B is assigned as the class of the new example. Ties are broken arbitrarily in favor of the class that comes first in the class ordering. " }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "We now present the results of our experiments. We begin with the results for decision trees. Then, we consider neural networks. Finally, we report the results of a series of experiments to assess the robustness of the error-correcting output code method." }, { "figure_ref": [ "fig_1" ], "heading": "Decision Trees", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows the performance of C4.5 in all eight domains. The horizontal line corresponds to the performance of the standard multiclass decision-tree algorithm. The light bar shows the performance of the one-per-class approach, and the dark bar shows the performance of the ECOC approach with the longest error-correcting code tested. Performance is displayed as the number of percentage points by which each pair of algorithms differ. An asterisk indicates that the difference is statistically significant at the p < 0.05 level according to the test for the difference of two proportions (using the normal approximation to the binomial distribution; see Snedecor & Cochran, 1989, p. 124).\nFrom this figure, we can see that the one-per-class method performs significantly worse than the multiclass method in four of the eight domains and that its behavior is statistically indistinguishable in the remaining four domains. Much more encouraging is the observation that the error-correcting output code approach is significantly superior to the multiclass approach in six of the eight domains and indistinguishable in the remaining two.\nIn the NETtalk domain, we can also consider the performance of the meaningful distributed representation developed by Sejnowski and Rosenberg. This representation gave 66.7% correct classification as compared with 68.6% for the one-per-class configuration, 70.0% for the direct-multiclass configuration, and 74.3% for the ECOC configuration. The differences in each of these figures are statistically significant at the 0.05 level or better except that the one-per-class and direct-multiclass configurations are not statistically distinguishable."
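The L1 decoding rule defined in the classification-decisions section above transcribes directly into code. In the Python sketch below, the three-class code and the probability vector are fabricated purely for illustration.

```python
def l1_decode(b, codewords):
    """b: per-bit probabilities that each output bit is 1.
    codewords: class -> list of 0/1 bits of the same length.
    Returns the class minimizing L1(B, W_i) = sum_j |b_j - W_ij|; min() keeps
    the first of any tied classes, matching the stated tie-breaking rule."""
    return min(codewords,
               key=lambda cls: sum(abs(bj - wj)
                                   for bj, wj in zip(b, codewords[cls])))

W = {"a": [1, 1, 0, 0, 1], "b": [0, 1, 1, 0, 1], "c": [1, 0, 1, 1, 0]}
print(l1_decode([0.9, 0.8, 0.2, 0.1, 0.6], W))   # -> "a" (L1 distance 1.0)
```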
}, { "figure_ref": [ "fig_2" ], "heading": "Backpropagation", "publication_ref": [], "table_ref": [], "text": "Figure 3 shows the results for backpropagation in ve of the most challenging domains. The horizontal line corresponds to the performance of the one-per-class encoding for this method. The bars show the number of percentage points by which the error-correcting output coding representation outperforms the one-per-class representation. In four of the ve domains, the ECOC encoding is superior; the di erences are statistically signi cant in the Vowel, NETtalk, and ISOLET domains. 2In the letter recognition domain, we encountered great di culty in successfully training networks using the CNAPS machine, particularly for the ECOC con guration. Experiments showed that the problem arose from the fact that the CNAPS implementation of backpropagation employs a xed learning rate. We therefore switched to the much slower opt program, which chooses the learning rate adaptively via conjugate-gradient line searches. This behaved better for both the one-per-class and ECOC con gurations.\nWe also had some di culty training ISOLET in the ECOC con guration on large networks (182 units), even with the opt program. Some sets of initial random weights led to local minima and poor performance on the validation set.\nIn the NETtalk task, we can again compare the performance of the Sejnowski-Rosenberg distributed encoding to the one-per-class and ECOC encodings. The distributed encoding yielded a performance of 71.5% correct, compared to 72.9% for the one-per-class encoding, and 74.9% for the ECOC encoding. The di erence between the distributed encoding and the one-per-class encoding is not statistically signi cant. From these results and the previous results for C4.5, we can conclude that the distributed encoding has no advantages over the one-per-class and ECOC encoding in this domain." }, { "figure_ref": [], "heading": "Robustness", "publication_ref": [], "table_ref": [], "text": "These results show that the ECOC approach performs as well as, and often better than, the alternative approaches. However, there are several important questions that must be answered before we can recommend the ECOC approach without reservation: Do the results hold for small samples? We have found that decision trees learned using error-correcting codes are much larger than those learned using the one-per-class or multiclass approaches. This suggests that with small sample sizes, the ECOC method may not perform as well, since complex trees usually require more data to be learned reliably. On the other hand, the experiments described above covered a wide range of training set sizes, which suggests that the results may not depend on having a large training set.\nDo the results depend on the particular assignment of codewords to classes? The codewords were assigned to the classes arbitrarily in the experiments reported above, which suggests that the particular assignment may not be important. However, some assignments might still be much better than others.\nDo the results depend on whether pruning techniques are applied to the decisiontree algorithms? Pruning methods have been shown to improve the performance of multiclass C4.5 in many domains.\nCan the ECOC approach provide class probability estimates? Both C4.5 and backpropagation can be con gured to provide estimates of the probability that a test example belongs to each of the k possible classes. Can the ECOC approach do this as well?" 
}, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "Small sample performance", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "As we have noted, we became concerned about the small sample performance of the ECOC method when we noticed that the ECOC method always requires much larger decision trees than the OPC method. Table 7 compares the sizes of the decision trees learned by C4.5 under the multiclass, one-per-class, and ECOC con gurations for the letter recognition task and the NETtalk task. For the OPC and ECOC con gurations, the tables show the average number of leaves in the trees learned for each bit position of the output representation. For letter recognition, the trees learned for a 207-bit ECOC are more than six times larger than those learned for the one-per-class representation. For the phoneme classi cation part of NETtalk, the ECOC trees are 14 times larger than the OPC trees. Another way to compare the sizes of the trees is to consider the total number of leaves in the trees. The tables clearly show that the multiclass approach requires much less memory (many fewer total leaves) than either the OPC or the ECOC approaches. With backpropagation, it is more di cult to determine the amount of \\network resources\" that are consumed in training the network. One approach is to compare the number of hidden units that give the best generalization performance. In the ISOLET task, for example, the one-per-class encoding attains peak validation set performance with a 78hidden-unit network, whereas the 30-bit error-correcting encoding attained peak validation set performance with a 156-hidden-unit network. In the letter recognition task, peak performance for the one-per-class encoding was obtained with a network of 120-hidden units compared to 200 hidden units for a 62-bit error-correcting output code.\nFrom the decision tree and neural network sizes, we can see that, in general, the errorcorrecting output representation requires more complex hypotheses than the one-per-class representation. From learning theory and statistics, we known that complex hypotheses typically require more training data than simple ones. On this basis, one might expect that the performance of the ECOC method would be very poor with small training sets. To test this prediction, we measured performance as a function of training set size in two of the larger domains: NETtalk and letter recognition.\nFigure 4 presents learning curves for C4.5 on the NETtalk and letter recognition tasks, which show accuracy for a series of progressively larger training sets. From the gure it is clear that the 61-bit error-correcting code consistently outperforms the other two con gurations by a nearly constant margin. Figure 5 shows corresponding results for backpropagation on the NETtalk and letter recognition tasks. On the NETtalk task, the results are the same: sample size has no apparent in uence on the bene ts of error-correcting output coding. However, for the letter-recognition task, there appears to be an interaction. Error-correcting output coding works best for small training sets, where there is a statistically signi cant bene t. With the largest training set|16,000 examples|the one-per-class method very slightly outperforms the ECOC method.\nFrom these experiments, we conclude that error-correcting output coding works very well with small samples, despite the increased size of the decision trees and the increased complexity of training neural networks. 
Indeed, with backpropagation on the letter recognition task, error-correcting output coding worked better for small samples than it did for large ones. This effect suggests that ECOC works by reducing the variance of the learning algorithm. For small samples, the variance is higher, so ECOC can provide more benefit." }, { "figure_ref": [], "heading": "Assignment of Codewords to Classes", "publication_ref": [ "b1" ], "table_ref": [ "tab_7" ], "text": "In all of the results reported thus far, the codewords in the error-correcting code have been arbitrarily assigned to the classes of the learning task. We conducted a series of experiments in the NETtalk domain with C4.5 to determine whether randomly reassigning the codewords to the classes had any effect on the success of ECOC. Table 8 shows the results of five random assignments of codewords to classes. There is no statistically significant variation in the performance of the different random assignments. This is consistent with similar experiments reported in Bakiri (1991)." }, { "figure_ref": [ "fig_5" ], "heading": "Effect of Tree Pruning", "publication_ref": [], "table_ref": [], "text": "Pruning of decision trees is an important technique for preventing overfitting. However, the merit of pruning varies from one domain to another. Figure 6 shows the change in performance due to pruning in each of the eight domains and for each of the three configurations studied in this paper: multiclass, one-per-class, and error-correcting output coding.\nFrom the figure, we see that in most cases pruning makes no statistically significant difference in performance (aside from the POS task, where it decreases the performance of all three configurations). Aside from POS, only one of the statistically significant changes involves the ECOC configuration, while two affect the one-per-class configuration, and one affects the multiclass configuration. These data suggest that pruning only occasionally has a major effect on any of these configurations. There is no evidence to suggest that pruning affects one configuration more than another." }, { "figure_ref": [ "fig_6", "fig_7" ], "heading": "Class Probability Estimates", "publication_ref": [ "b35", "b17" ], "table_ref": [], "text": "In many applications, it is important to have a classifier that can not only classify new cases well but also estimate the probability that a new case belongs to each of the k classes. For example, in medical diagnosis, a simple classifier might classify a patient as "healthy" because, given the input features, that is the most likely class. However, if there is a non-zero probability that the patient has a life-threatening disease, the right choice for the physician may still be to prescribe a therapy for that disease.\nA more mundane example involves automated reading of handwritten postal codes on envelopes. If the classifier is very confident of its classification (i.e., because the estimated probabilities are very strong), then it can proceed to route the envelope. However, if it is uncertain, then the envelope should be "rejected", and sent to a human being who can attempt to read the postal code and process the envelope (Wilkinson, Geist, Janet, et al., 1992). One way to assess the quality of the class probability estimates of a classifier is to compute a "rejection curve". When the learning algorithm classifies a new case, we require it to also output a "confidence" level.
Then we plot a curve showing the percentage of correctly classified test cases whose confidence level exceeds a given value. A rejection curve that increases smoothly demonstrates that the confidence level produced by the algorithm can be transformed into an accurate probability measure.\nFor one-per-class neural networks, many researchers have found that the difference in activity between the class with the highest activity and the class with the second-highest activity is a good measure of confidence (e.g., LeCun et al., 1989). If this difference is large, then the chosen class is clearly much better than the others. If the difference is small, then the chosen class is nearly tied with another class. This same measure can be applied to the class probability estimates produced by C4.5.\nAn analogous measure of confidence for error-correcting output codes can be computed from the L1 distance between the vector B of output probabilities for each bit and the codewords of each of the classes. Specifically, we employ the difference between the L1 distance to the second-nearest codeword and the L1 distance to the nearest codeword as our confidence measure. If this difference is large, an algorithm can be quite confident of its classification decision. If the difference is small, the algorithm is not confident.\nFigure 7 compares the rejection curves for various configurations of C4.5 and backpropagation on the NETtalk task. These curves are constructed by first running all of the test examples through the learned decision trees and computing the predicted class of each example and the confidence value for that prediction. To generate each point along the curve, a value is chosen for a parameter theta, which defines the minimum required confidence. The classified test examples are then processed to determine the percentage of test examples whose confidence level is less than theta (these are "rejected") and the percentage of the remaining examples that are correctly classified. The value of theta is progressively incremented (starting at 0) until all test examples are rejected.\nThe lower left portion of the curve shows the performance of the algorithm when theta is small, so only the least confident cases are rejected. The upper right portion of the curve shows the performance when theta is large, so only the most confident cases are classified. Good class probability estimates produce a curve that rises smoothly and monotonically. A flat or decreasing region in a rejection curve reveals cases where the confidence estimate of the learning algorithm is unrelated or inversely related to the actual performance of the algorithm.\nThe rejection curves often terminate prior to rejecting 100% of the examples. This occurs when the final increment in theta causes all examples to be rejected. This gives some idea of the number of examples for which the algorithm was highly confident of its classifications. If the curve terminates early, this shows that there were very few examples that the algorithm could confidently classify.\nIn Figure 7, we see that, with the exception of the Multiclass configuration, the rejection curves for all of the various configurations of C4.5 increase fairly smoothly, so all of them are producing acceptable confidence estimates. The two error-correcting configurations have smooth curves that remain above all of the other configurations. This shows that the performance advantage of error-correcting output coding is maintained at all confidence levels: ECOC improves classification decisions on all examples, not just the borderline ones.
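The rejection-curve construction just described can be sketched in a few lines of Python; the confidences and correctness flags below are fabricated for illustration only.

```python
def rejection_curve(confidences, correct):
    """Sweep the threshold theta over the observed confidence values and return
    (fraction rejected, accuracy on the retained examples) pairs; the curve ends
    once every example is rejected."""
    points = []
    for theta in sorted(set(confidences)):
        kept = [ok for conf, ok in zip(confidences, correct) if conf >= theta]
        if kept:
            points.append((1.0 - len(kept) / len(confidences),
                           sum(kept) / len(kept)))
    return points

# Five fabricated test cases: confidences and 0/1 correctness flags.
print(rejection_curve([0.1, 0.4, 0.4, 0.7, 0.9], [0, 1, 0, 1, 1]))
# -> [(0.0, 0.6), (0.2, 0.75), (0.6, 1.0), (0.8, 1.0)]
```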
Similar behavior is seen in the rejection curves for backpropagation. Again, all configurations of backpropagation give fairly smooth rejection curves. However, note that the 159-bit code actually decreases at high rejection rates. By contrast, the 61-bit code gives a monotonic curve that eventually reaches 100%. We have seen this behavior in several of the cases we have studied: extremely long error-correcting codes are usually the best method at low rejection rates, but at high rejection rates, codes of "intermediate" length (typically 60-80 bits) behave better. We have no explanation for this behavior.\nFigure 8 compares the rejection curves for various configurations of C4.5 and backpropagation on the ISOLET task. Here we see that the ECOC approach is markedly superior to either the one-per-class or multiclass approaches. This figure illustrates another phenomenon we have frequently observed: the curve for multiclass C4.5 becomes quite flat and terminates very early, and the one-per-class curve eventually surpasses it. This suggests that there may be opportunities to improve the class probability estimates produced by C4.5 on multiclass trees. (Note that we employed "softened thresholds" in these experiments.) In the backpropagation rejection curves, the ECOC approach consistently outperforms the one-per-class approach until both are very close to 100% correct. Note that both configurations of backpropagation can confidently classify more than 50% of the test examples with 100% accuracy.\nFrom these graphs, it is clear that the error-correcting approach (with codes of intermediate length) can provide confidence estimates that are at least as good as those provided by the standard approaches to multiclass problems. " }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [ "b21", "b27", "b10" ], "table_ref": [], "text": "In this paper, we experimentally compared four approaches to multiclass learning problems: multiclass decision trees, the one-per-class (OPC) approach, the meaningful distributed output approach, and the error-correcting output coding (ECOC) approach. The results clearly show that the ECOC approach is superior to the other three approaches. The improvements provided by the ECOC approach can be quite substantial: improvements on the order of ten percentage points were observed in several domains. Statistically significant improvements were observed in six of eight domains with decision trees and three of five domains with backpropagation. The improvements were also robust: ECOC improves both decision trees and neural networks; ECOC provides improvements even with very small sample sizes; and the improvements do not depend on the particular assignment of codewords to classes. The error-correcting approach can also provide estimates of the confidence of classification decisions that are at least as accurate as those provided by existing methods.\nThere are some additional costs to employing error-correcting output codes. Decision trees learned using ECOC are generally much larger and more complex than trees constructed using the one-per-class or multiclass approaches. Neural networks learned using ECOC often require more hidden units and longer and more careful training to obtain the improved performance (see Section 3.2). These factors may argue against using error-correcting output coding in some domains.
For example, in domains where it is important for humans to understand and interpret the induced decision trees, ECOC methods are not appropriate, because they produce such complex trees. In domains where training must be rapid and completely autonomous, ECOC methods with backpropagation cannot be recommended, because of the potential for encountering difficulties during training.\nFinally, we found that error-correcting codes of intermediate length tend to give better confidence estimates than very long error-correcting codes, even though the very long codes give the best generalization performance.\nThere are many open problems that require further research. First and foremost, it is important to obtain a deeper understanding of why the ECOC method works. If we assume that each of the learned hypotheses makes classification errors independently, then coding theory provides the explanation: individual errors can be corrected because the codewords are "far apart" in the output space. However, because each of the hypotheses is learned using the same algorithm on the same training data, we would expect that the errors made by individual hypotheses would be highly correlated, and such errors cannot be corrected by an error-correcting code. So the key open problem is to understand why the classification errors at different bit positions are fairly independent. How does the error-correcting output code result in this independence?\nA closely related open problem concerns the relationship between the ECOC approach and various "ensemble", "committee", and "boosting" methods (Perrone & Cooper, 1993; Schapire, 1990; Freund, 1992). These methods construct multiple hypotheses which then "vote" to determine the classification of an example. An error-correcting code can also be viewed as a very compact form of voting in which a certain number of incorrect votes can be corrected. An interesting difference between standard ensemble methods and the ECOC approach is that in the ensemble methods, each hypothesis is attempting to predict the same function, whereas in the ECOC approach, each hypothesis predicts a different function. This may reduce the correlations between the hypotheses and make them more effective "voters." Much more work is needed to explore this relationship.\nAnother open question concerns the relationship between the ECOC approach and the flexible discriminant analysis technique of Hastie, Tibshirani, and Buja (In Press). Their method first employs the one-per-class approach (e.g., with neural networks) and then applies a kind of discriminant analysis to the outputs. This discriminant analysis maps the outputs into a (k - 1)-dimensional space such that each class has a defined "center point". New cases are classified by mapping them into this space and then finding the nearest "center point" and its class. These center points are similar to our codewords but in a continuous space of dimension k - 1. It may be that the ECOC method is a kind of randomized, higher-dimensional variant of this approach.\nFinally, the ECOC approach shows promise of scaling neural networks to very large classification problems (with hundreds or thousands of classes) much better than the one-per-class method. This is because a good error-correcting code can have a length n that is much less than the total number of classes, whereas the one-per-class approach requires that there be one output unit for each class. Networks with thousands of output units would be expensive and difficult to train.
Future studies should test the scaling ability of these different approaches to such large classification tasks." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors thank the anonymous reviewers for their valuable suggestions which improved the presentation of the paper. The authors also thank Prasad Tadepalli for proof-reading the final manuscript. The authors gratefully acknowledge the support of the National Science Foundation under grants numbered IRI-8667316, CDA-9216172, and IRI-9204129. Bakiri also thanks Bahrain University for its support of his doctoral research." } ]
[ { "authors": "", "journal": "Adaptive Solutions, Inc", "ref_id": "b0", "title": "CNAPS back-propagation guide", "year": "1992" }, { "authors": "G Bakiri", "journal": "", "ref_id": "b1", "title": "Converting English text to speech: A machine learning approach", "year": "1991" }, { "authors": "E Barnard; R A Cole", "journal": "", "ref_id": "b2", "title": "A neural-net training program based on conjugategradient optimization", "year": "1989" }, { "authors": "R C Bose; D K Ray-Chaudhuri", "journal": "Information and Control", "ref_id": "b3", "title": "On a class of error-correcting binary group codes", "year": "1960" }, { "authors": "L Breiman; J H Friedman; R A Olshen; C J Stone", "journal": "Wadsworth International Group", "ref_id": "b4", "title": "Classi cation and Regression Trees", "year": "1984" }, { "authors": "J S Bridle", "journal": "", "ref_id": "b5", "title": "Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters", "year": "1990" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b6", "title": "", "year": "" }, { "authors": "C Cardie", "journal": "", "ref_id": "b7", "title": "Using decision trees to improve case-based learning", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b8", "title": "", "year": "" }, { "authors": "R O Duda; J W Machanik; R C Singleton", "journal": "", "ref_id": "b9", "title": "Function modeling experiments", "year": "1963" }, { "authors": "Y Freund", "journal": "ACM Press", "ref_id": "b10", "title": "An improved boosting algorithm and its implications on learning complexity", "year": "1992" }, { "authors": "I I Hampshire; J B Waibel; A H ", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b11", "title": "A novel objective function for improved phoneme recognition using time-delay neural networks", "year": "1990" }, { "authors": "T Hastie; R Tibshirani; A Buja", "journal": "Journal of the American Statistical Association", "ref_id": "b12", "title": "Flexible discriminant analysis by optimal scoring", "year": "" }, { "authors": "G Hinton", "journal": "Arti cial Intelligence", "ref_id": "b13", "title": "Connectionist learning procedures", "year": "1989" }, { "authors": "A Hocquenghem", "journal": "Chi res", "ref_id": "b14", "title": "Codes corecteurs d'erreurs", "year": "1959" }, { "authors": "E B Kong; T G Dietterich", "journal": "", "ref_id": "b15", "title": "Why error-correcting output coding works with decision trees", "year": "1995" }, { "authors": "K J Lang; G E Hinton; A Waibel", "journal": "Neural Networks", "ref_id": "b16", "title": "A time-delay neural network architecture for isolated word recognition", "year": "1990" }, { "authors": "Y Lecun; B Boser; J S Denker; B Henderson; R E Howard; W Hubbard; L D Jackel", "journal": "Neural Computation", "ref_id": "b17", "title": "Backpropagation applied to handwritten zip code recognition", "year": "1989" }, { "authors": "P Murphy; D Aha", "journal": "", "ref_id": "b18", "title": "UCI repository of machine learning databases machinereadable data repository", "year": "1994" }, { "authors": "B K Natarajan", "journal": "Morgan Kaufmann", "ref_id": "b19", "title": "Machine Learning: A Theoretical Approach", "year": "1991" }, { "authors": "N J Nilsson", "journal": "McGraw-Hill", "ref_id": "b20", "title": "Learning Machines", "year": "1965" }, { "authors": "M P Perrone; L N Cooper", "journal": "Chapman and Hall", "ref_id": "b21", "title": "When networks disagree: Ensemble methods for hybrid 
neural networks", "year": "1993" }, { "authors": "W W Peterson; E J Weldon", "journal": "MIT Press", "ref_id": "b22", "title": "Error-Correcting Codes", "year": "1972" }, { "authors": "J R Quinlan", "journal": "Morgan Kaufmann", "ref_id": "b23", "title": "C4.5: Programs for Empirical Learning", "year": "1993" }, { "authors": "M D Richard; R P Lippmann", "journal": "Neural Computation", "ref_id": "b24", "title": "Neural network classi ers estimate bayesian a posteriori probabilities", "year": "1991" }, { "authors": "F Rosenblatt", "journal": "Psychological Review", "ref_id": "b25", "title": "The perceptron: a probabilistic model for information storage and organization in the brain", "year": "1958" }, { "authors": "D E Rumelhart; G E Hinton; R J Williams", "journal": "MIT Press", "ref_id": "b26", "title": "Learning internal representations by error propagation", "year": "1986" }, { "authors": "R E Schapire", "journal": "Machine Learning", "ref_id": "b27", "title": "The strength of weak learnability", "year": "1990" }, { "authors": "T J Sejnowski; C R Rosenberg", "journal": "Journal of Complex Systems", "ref_id": "b28", "title": "Parallel networks that learn to pronounce english text", "year": "1987" }, { "authors": "B Selman; H Levesque; D Mitchell", "journal": "AAAI/MIT Press", "ref_id": "b29", "title": "A new method for solving hard satis ability problems", "year": "1992" }, { "authors": "G W Snedecor; W G Cochran", "journal": "Iowa State University Press", "ref_id": "b30", "title": "Statistical Methods", "year": "1989" }, { "authors": "L G Valiant", "journal": "Commun. ACM", "ref_id": "b31", "title": "A theory of the learnable", "year": "1984" }, { "authors": "A Waibel; T Hanazawa; G Hinton; K Shikano; K Lang", "journal": "IEEE Transactions on Acoustics, Speech, and Signal Processing", "ref_id": "b32", "title": "Phoneme recognition using time-delay networks", "year": "1989" }, { "authors": "A Weigend", "journal": "", "ref_id": "b33", "title": "Measuring the e ective number of dimensions during backpropagation training", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b34", "title": "", "year": "" }, { "authors": "R A Wilkinson; J Geist; S Janet", "journal": "", "ref_id": "b35", "title": "The rst census optical character recognition systems conference", "year": "1992" } ]
[ { "formula_coordinates": [ 3, 127.44, 536.52, 366.48, 147.2 ], "formula_id": "formula_0", "formula_text": "f 0 f 1 f 2 f 3 f 4 f 5 f 6 f 7 f 8 f 9 f 10 f 11 f 12 f 13 f 14 0 1 1 0 0 0 0 1 0 1 0 0 1 1 0 1 1 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 2 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 3 0 0 1 1 0 1 1 1 0 0 0 0 1 0 1 4 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 5 0 1 0 0 1 1 0 1 1 1 0 0 0 0 1 6 1 0 1 1 1 0 0 0 0 1 0 1 0 0 1 7 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 8 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 9 0 1 1 1 0 0 0 0 1 0 1 0 0 1 1" }, { "formula_coordinates": [ 8, 213.12, 180.36, 192.96, 53.28 ], "formula_id": "formula_1", "formula_text": "f 0 f 1 f 2 f 3 f 4 f 5 f 6 f 7 c 0 0 0 0 0 1 1 1 1 c 1 0 0 1 1 0 0 1 1 c 2 0 1 0 1 0 1 0 1" } ]
Solving Multiclass Learning Problems via Error-Correcting Output Codes
Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k \"classes\"). The definition is acquired by studying collections of training examples of the form ⟨x_i, f(x_i)⟩. Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and application of binary concept learning algorithms with distributed output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that, like the other methods, the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
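The technique the abstract describes is compact enough to sketch. The fragment below is an illustrative Python rendering of error-correcting output coding, not the paper's code: it trains one binary decision tree per column of a code matrix (here the first five rows of the 15-bit code tabulated in the formulas above) and decodes predictions to the nearest code word in Hamming distance. The use of numpy and scikit-learn is an assumption of this sketch.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# First five rows of the 15-bit error-correcting code tabulated above
# (classes 0-4); rows are code words, columns define binary subproblems.
CODE = np.array([
    [1,1,0,0,0,0,1,0,1,0,0,1,1,0,1],
    [0,0,1,1,1,1,0,1,0,1,1,0,0,1,0],
    [1,0,0,1,0,0,0,1,1,1,1,0,1,0,1],
    [0,0,1,1,0,1,1,1,0,0,0,0,1,0,1],
    [1,1,1,0,1,0,1,1,0,0,1,0,0,0,1],
])

def fit_ecoc(X, y):
    # One binary learner per code-word bit: relabel each training example
    # with its class's bit and fit an ordinary two-class tree.
    y = np.asarray(y)
    return [DecisionTreeClassifier().fit(X, CODE[y, b])
            for b in range(CODE.shape[1])]

def predict_ecoc(learners, X):
    # Evaluate every bit classifier, then decode each predicted bit string
    # to the class whose code word is nearest in Hamming distance.
    bits = np.column_stack([clf.predict(X) for clf in learners])
    dists = (bits[:, None, :] != CODE[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)

Ties in the Hamming decode are broken here by taking the first minimizing class, an arbitrary choice consistent with the method's tolerance of individual bit errors.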
Thomas G Dietterich; Ghulum Bakiri
[ { "figure_caption": "Figure 1 :1Figure 1: Hill-climbing algorithm for improving row and column separation. The two closest rows and columns are indicated by lines. Where these lines intersect, the bits in the code words are changed to improve separations as shown on the right.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Performance (in percentage points) of the one-per-class and ECOC methods rel-ative to the direct multiclass method using C4.5. Asterisk indicates di erence is signi cant at the 0.05 level or better.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance of the ECOC method relative to the one-per-class using backpropagation. Asterisk indicates di erence is signi cant at the 0.05 level or better.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Accuracy of C4.5 in the multiclass, one-per-class, and error-correcting output coding con gurations for increasing training set sizes in the NETtalk and letter recognition tasks. Note that the horizontal axis is plotted on a logarithmic scale.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Accuracy of backpropagation in the one-per-class and error-correcting output coding con gurations for increasing training set sizes on the NETtalk and letter recognition tasks.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Change in percentage points of the performance of C4.5 with and without pruning in three con gurations. Horizontal line indicates performance with no pruning.Asterisk indicates that the di erence is signi cant at the 0.05 level or better.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Rejection curves for various con gurations of C4.5 and backpropagation on the NETtalk task. 
The \\Distributed\" curve plots the behavior of the Sejnowski-Rosenberg distributed representation.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Rejection curves for various con gurations of C4.5 and backpropagation on the ISOLET task.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "A distributed code for the digit recognition task.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Meanings of the six columns for the code in Table1.", "figure_data": "Column position Abbreviation 1 vl 2 hl 3 dl 4 cc 5 ol 6 orMeaning contains vertical line contains horizontal line contains diagonal line contains closed curve contains curve open to left contains curve open to right", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Data sets employed in the study.", "figure_data": "Name glass vowel POS soybean audiologyS ISOLET letter NETtalkNumber of Number of Features Classes 9 6 10 11 30 12 35 19 69 24 617 26 16 26 203 54 phonemes 6 stressesNumber of Training Examples Test Examples Number of 214 10-fold xval 528 462 3,060 10-fold xval 307 376 200 26 6,238 1,559 16,000 4,000 1000 words = 1000 words = 7,229 letters 7,242 letters", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "All possible columns for a three-class problem. Note that the last four columns are complements of the rst four and that the rst column does not discriminate among any of the classes.", "figure_data": "ClassCode Word", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Exhaustive code for k=5. Row Column", "figure_data": "", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Size of decision trees learned by C4.5 for the letter recognition task and the NETtalk task.", "figure_data": "Letter Recognition Leaves per bit Total leaves Multiclass 2353 One-per-class 242 6292 207-bit ECOC 1606 332383NETtalk Multiclass One-per-Class 159-bit ECOCLeaves per bit phoneme stress 61 600 901 911Total leaves phoneme stress 1425 1567 3320 3602 114469 29140", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Five random assignments of codewords to classes for the NETtalk task. Each column shows the percentage of letters correctly classi ed by C4.5 decision trees.", "figure_data": "Multiclass One-per-class 70.0 68.661-Bit Error-Correcting Code Replications a b c d e 73.8 73.6 73.5 73.8 73.3", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b16", "b12" ], "table_ref": [], "text": "Planning by adapting previously successful plans is an attractive reasoning paradigm for several reasons. First, cognitive studies suggest that human experts depend on a knowledge of past problems and solutions for good problem-solving performance. Second, computational complexity arguments show that reasoning from rst principles requires time exponential in the size of the problem. Systems that reuse old solutions can potentially avoid this problem by solving a smaller problem: that of adapting a previous solution to the current task. Intuition tells us that many new problem-solving situations closely resemble old situations, therefore there may be advantage to using past successes to solve new problems.\nFor example, case-based planners typically accomplish their task in three phases:\nRETRIEVAL: Given a set of initial conditions and goals, retrieve from the library a similar plan|one that has worked in circumstances that resemble the inputs. The retrieval phase may also involve some supercial modication of the library plan, for example, renaming constants and making the library plan's initial and goal conditions match the input specications.\nADAPTATION: Modify the retrieved plan | e.g., by adding and removing steps, by changing step orders, or by modfying variable bindings | until the resulting plan achieves the input goal.\nThe adaptation process consists of a standard set of plan-renement operators (those that add steps and constraints to the plan) plus the ability to retract the renements made when the library plan was originally generated.\nWe view the general planning problem as a search through a directed graph of partial plans. The graph's root represents the null plan and its leaves represent nished plans. Generative planning starts at the root of the graph and searches for a node (plan) that satises the goal. It generates the graph by successively rening (constraining) the plan. The retrieval phase of an adaptation-based planner, on the other hand, returns an arbitrary node in the graph, and the adapter begins searching from that point. It must be able to search down the graph like a generative planner but also must be able to search backward through the graph by retracting constraints, producing more abstract plans. Our complete and systematic adaptation algorithm is able to search every node in the graph without considering any node more than once.\nWe have implemented our algorithm2 in Common Lisp on UNIX workstations and tested it on several problem domains. Experimental studies compare our algorithm to a similar eort, priar (Kambhampati & Hendler, 1992). Our results show a systematic speedup from plan reuse for certain simple and regular problem classes.\nOur work on spa makes the following contributions:\nOur algorithm captures the essence of the plan-adaptation process within an extremely simple framework. 
As such it is amenable to formal analysis, and it provides a framework with which to evaluate other domain-independent algorithms like the priar system (Kambhampati & Hendler, 1992), and to analyze domain-dependent representations and adaptation strategies.\nWe use the framework to investigate chef's transformational approach to plan adaptation, and show how chef's repair strategies could be added to spa as search-control heuristics.\nWe analyze the tradeoff between plan generation and adaptation, characterizing the similarity required of the plan retrieval routine to produce speedup.\nWe report on empirical experiments and demonstrate, for a simple class of problems, a systematic relationship between computation time and the similarity between the input problem and the library plan the adaptation algorithm begins with.\nThe paper proceeds as follows: we first review previous work in planning by adapting or repairing previous solutions. Next we review the least-commitment generative planning algorithm on which spa is based, in doing so introducing many of spa's data structures. Section 4 then explains the details of our adaptation algorithm. In Section 5 we prove that spa is sound, complete, and systematic.\nSince the speed of adaptation depends on the quality of the plan retrieved from the library, it can be faster to perform generative planning than to adapt an inappropriate library plan; in Section 6 we analyze this tradeoff and also discuss some interesting interactions between the algorithms for adaptation and plan fitting. Then in Section 7 we show how transformational planners such as Hammond's (1990) chef system can be analyzed using our framework. Section 8 reports our empirical studies. After reviewing related work (Section 9), Section 10 discusses our progress and poses questions for future research." }, { "figure_ref": [], "heading": "Previous Work on Adaptive Planning", "publication_ref": [ "b12", "b3", "b29", "b16", "b32", "b33", "b7", "b20", "b30" ], "table_ref": [], "text": "The idea of planning by adaptation has been in the literature for many years, and in many different forms. In this section we review this work briefly, trying to motivate and put into perspective our current work on spa.\nThe basic idea behind planning by adaptation (or similar work in case-based planning, transformational planning, or planning by solution replay) is to solve a new problem by (1) retrieving from memory a problem that had been solved previously, then (2) adapting the old solution to the new problem.\nThe chef system (Hammond, 1990) is a case-based planner that solves problems in the domain of Szechwan cooking. When given a goal to produce a dish with particular properties, chef first tries to anticipate any problems or conflicts that might arise from the new goal, and uses that analysis to retrieve from memory a candidate solution (baseline plan). The baseline plan is then manipulated by a modifying algorithm that tries to satisfy any new goals and repair problems that did not arise in the baseline scenario.
It then executes the plan, and if execution results in failure a repair algorithm analyzes the failure and uses the result of that analysis to improve the index for this solution so that it will not be retrieved in situations where it will fail again.\nchef addresses a wide range of problems important to case-based planning: how to anticipate problems, how to retrieve a solution from the case library, how to adapt or modify an old solution, and how to use execution failure to improve subsequent retrievals. Our spa system primarily addresses the adaptation problem, and in Section 7 we use our framework to analyze chef's modification strategies in some detail.\nThe plexus system (Alterman, 1988) confronts the problem of \"adaptive planning,\" but also addresses the problem of run-time adaptation to plan failure. plexus approaches plan adaptation with a combination of tactical control and situation matching. When a plan failure is detected it is classified as being either a failing precondition, a failing outcome, a case of differing goals, or a step out of order. Ignoring the aspects of plexus that deal with incomplete and incorrect knowledge, the program's main repair strategy involves replacing a failed plan step with one that might achieve the same purpose. plexus uses a semantic network to represent abstraction classes of actions that achieve the same purpose (walking and driving are both instances of transportation actions, for example).\nThe gordius system (Simmons, 1988) is a transformational planner. While the difference between a transformational planner and a case-based planner has not been precisely defined, a major difference concerns how the two types of planners get the starting point for plan adaptation. Case-based systems get this plan via retrieval of a past solution from a library, but gordius combines small plan fragments for different (hopefully independent) aspects of the current problem. gordius differs from chef in two other ways: first of all, gordius does not perform an anticipation analysis on the plan, trying to identify trouble spots before library retrieval. Instead it accepts the fact that the retrieved plan will be flawed, and counts on its repair heuristics to patch it. chef, on the other hand, assumes that the retrieved library plan will be a close enough fit to the new problem that little or no adaptation will be necessary. Second, much of the gordius work is devoted to developing a set of repair operators for quantified and metric variables.\nThe main idea behind the spa system separates it from the three systems mentioned above: that the process of plan adaptation is a fairly simple extension to the process of plan generation. As a consequence we can assume that the algorithm that generates library plans, and the structure of those plans, is the same as the adaptation algorithm and the plan structures it generates. In the spa view, plan generation is just a special case of plan adaptation (one in which there is no retrieved structure to exploit).\nTwo pieces of work developed at the same time as spa adopt similar assumptions: the priar system (Kambhampati & Hendler, 1992) and the NoLimit system (Veloso, 1992, 1994).\nThe main difference between spa and priar is the underlying planning algorithm: spa uses a constraint-posting technique similar to Chapman's (1987) tweak as modified by McAllester and Rosenblitt (1991), whereas priar uses a variant of nonlin (Tate, 1977), a hierarchical planner.
Section 8 compares these two planners in some detail.\nThe NoLimit system also takes a search-oriented approach to planning. It differs from spa in the role a case plays in the problem-solving process. A library plan (case) in a transformational or case-based-planning framework stores a solution to a prior problem along with a summary of what new problems it would be a suitable solution for, but it contains little information about the process that generated the solution. Derivational analogy, on the other hand, stores substantial descriptions of the decisions that resulted in the solution. In particular, Veloso's system records more information at each choice point than does spa: a list of failed alternatives, for example. The relative effectiveness of the two approaches seems to hinge on the extent to which old planning decisions (as opposed to the plans themselves) can be understood and exploited in similar planning episodes.\nIn summary, we consider our work on spa to be complementary to most existing work in transformational or case-based planning. The latter has concentrated on developing heuristically effective problem solvers for particular domains. Case-based-planning research has also explored the problem of how to retrieve cases from the plan library, in particular the problem of how to index them effectively. spa, on the other hand, is a domain-independent algorithm, and does not address the retrieval or indexing problems in any deep way.\nThe main objectives of this work are (1) to explore the idea that plan adaptation is a fairly minor representational and algorithmic variant of the basic problem of plan generation, (2) to provide preliminary evidence that this view of plan adaptation is empirically viable, and (3) to provide to the community an implementation of an algorithm that will allow effective problem solvers to be built on this idea.\nWe now begin the development of our framework with a description of the underlying framework for purely generative planning." }, { "figure_ref": [], "heading": "Generative Planning: the SNLP Algorithm", "publication_ref": [ "b20" ], "table_ref": [], "text": "Since the spa algorithm is an extension of a partial-order, constraint-posting, least-commitment generative planning algorithm, we begin by presenting the generation algorithm itself. However, we do so using the notation of the spa system, and in the process introduce many of the data structures and functions needed to implement the full adaptation algorithm. Our treatment is brief; see elsewhere (McAllester & Rosenblitt, 1991; Barrett & Weld, 1994a) for more detail." }, { "figure_ref": [], "heading": "Data Structures", "publication_ref": [ "b20", "b15" ], "table_ref": [], "text": "An action is a schematic representation of an operator available to the planner. An action consists of a name, a set of preconditions, an add list, a delete list, and a set of binding constraints. The first four are expressions that can contain variables. We use question marks to identify variables, ?x for instance. Binding constraints are used to indicate that a particular variable cannot be bound to a particular constant or to some other variable.\nHere is an action corresponding to a simple blocksworld puton operator:\n(defaction :name '(puton ?x ?y)\n           :preconds '((on ?x ?z) (clear ?x) (clear ?y))\n           :adds '((on ?x ?y) (clear ?z))\n           :deletes '((on ?x ?z) (clear ?y))\n           :constraints '((<> ?x ?y) (<> ?x ?z) (<> ?y ?z)\n                          (<> ?x TABLE) (<> ?y TABLE)))\nAn instance of an action is inserted into a plan as a step.
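The paper's implementation is in Common Lisp; as a rough illustration of the same structures, here is a hypothetical Python transcription of the action schema above and of step instantiation. All names here are invented for the sketch.

from dataclasses import dataclass
from itertools import count

@dataclass(frozen=True)
class Action:
    name: tuple         # e.g. ('puton', '?x', '?y'); '?'-prefixed atoms are variables
    preconds: tuple
    adds: tuple
    deletes: tuple
    constraints: tuple  # e.g. (('<>', '?x', '?y'), ...)

@dataclass(frozen=True)
class Step:
    index: int          # unique within the plan; 0 and 1 are the initial and goal steps
    action: Action      # the renamed-apart instance of the schema

_indices = count(2)

def _rename(expr, idx):
    # '?x' in step 3 becomes '?x.3', so two instances of puton stay distinct.
    return tuple(f"{t}.{idx}" if isinstance(t, str) and t.startswith('?') else t
                 for t in expr)

def instantiate(action):
    idx = next(_indices)
    fresh = lambda exprs: tuple(_rename(e, idx) for e in exprs)
    return Step(idx, Action(_rename(action.name, idx), fresh(action.preconds),
                            fresh(action.adds), fresh(action.deletes),
                            fresh(action.constraints)))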
Instantiating an action involves (1) giving unique names to the variables in the action, and (2) assigning the step a unique index in the plan, so a plan can contain more than one instance of the same action. A step is therefore an instance of an action inserted into a plan with an index that uniquely identifies it.\nA plan also contains a set of constraints, which either constrain the order of two steps in the plan or constrain the bindings of variables in the steps. An ordering constraint takes the form S_i < S_j, where S_i and S_j are steps, and indicates that the step with index i must occur before the step with index j. A binding constraint is of the form (= v1 v2) or (≠ v1 v2), where v1 is a variable appearing in some step in the plan and v2 is either a variable or a constant appearing in the plan.\nWe also annotate every constraint with a record of why it was placed in the plan. Therefore a plan's constraint set is actually a set of pairs of the form ⟨c, r⟩, where c is either an ordering or binding constraint, and r is a reason data structure (defined below).\nThe final component of a plan is a set of causal links, each of the form S_i --Q--> S_j, where S_i and S_j are steps and Q is an expression. The link records the fact that one purpose of S_i in the plan is to make Q true, where Q is a precondition of S_j. If a plan contains a link S_i --Q--> S_j it must also contain the ordering S_i < S_j.\nA plan consists of a set of steps, a set of constraints, and a set of links.\nA planning problem is a triple ⟨I, G, Actions⟩. I is a set of expressions describing the problem's initial conditions, G is a set of expressions describing the problem's goal, and Actions is the set of available actions. We assume that Actions is available to the algorithm as a global variable.\nNext we exploit a standard representational trick and convert a planning problem to a plan by building a plan that contains\n1. a step with name initial, index 0, having no preconditions nor delete list, but with an add list consisting of the problem's initial conditions I,\n2. a step with name goal, index 1, with preconditions consisting of the goal expressions G, but empty add and delete lists,\n3. the single ordering constraint S_0 < S_1,\n4. no variable-binding constraints,\n5. no links.\nEvery plan must contain at least the two steps and the ordering, and we call the plan with only this structure the null plan.\nTwo interesting properties of a plan are its set of open preconditions and its set of threatened links. The former is the set of expressions that appear in some step's preconditions but have no causal support within the plan; the latter is the set of explicit causal relationships that might be nullified by other steps in the plan. Formally, an open condition in a plan, notated --Q--> S_j, is a step S_j in the plan that has precondition Q and for which there is no link in the plan of the form S_i --Q--> S_j for any step S_i. A link of the form S_i --Q--> S_j is threatened just in case there is another step S_t in the plan such that\n1. the plan's ordering constraints would allow S_t to be ordered after S_i and before S_j, and\n2. S_t has a postcondition (either add or delete) that the plan's variable-binding constraints would allow to unify with Q.\nA plan with no open preconditions and no threatened links is called a solution to the associated planning problem.
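As an illustration of these two definitions, the following hypothetical Python sketch computes a plan's flaws. The ordering and unification tests are passed in as stand-in predicates, since their real implementations depend on the constraint machinery; steps are assumed to expose .preconds, .adds, and .deletes.

from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    producer: int   # index of S_i
    prop: tuple     # the proposition Q
    consumer: int   # index of S_j

def open_conditions(plan):
    # A precondition with no supporting causal link is an open condition;
    # plan.steps maps index -> step, plan.links is a list of Link records.
    supported = {(l.consumer, l.prop) for l in plan.links}
    return [(q, j) for j, step in plan.steps.items()
            for q in step.preconds if (j, q) not in supported]

def threats(plan, may_precede, may_unify):
    # S_t threatens S_i --Q--> S_j if the orderings allow it between S_i
    # and S_j and some add or delete of S_t may unify with Q.
    found = []
    for l in plan.links:
        for t, step in plan.steps.items():
            if t in (l.producer, l.consumer):
                continue
            if (may_precede(l.producer, t) and may_precede(t, l.consumer)
                    and any(may_unify(p, l.prop) for p in step.adds + step.deletes)):
                found.append((l, t))
    return found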
Finally, we introduce the reason data structure, which is unnecessary for generative planning but essential for adaptation. Every time a step, link, or constraint is added to a plan, an associated reason records its purpose. A reason consists of two parts: (1) a symbol recording why the constraint was added (either add-step, establish, or protect), and (2) either a link, step, or threat in the plan identifying the part of the plan being repaired. Section 3.4 discusses reasons in more detail.\n4. Some people find it counterintuitive that S_t should threaten S_i --Q--> S_j if it has Q on its add list. After all, the presence of S_t doesn't prevent Q from being true when S_j is executed. Our definition, adopted from McAllester and Rosenblitt (1991), is necessary to ensure systematicity. See (Kambhampati, 1993) for a discussion." }, { "figure_ref": [], "heading": "The Planning Algorithm", "publication_ref": [ "b20", "b26" ], "table_ref": [], "text": "The generative planning algorithm is based on the idea of starting with the null plan and successively refining it by choosing a flaw (an open condition or threatened link) and adding new steps, links, or constraints to fix it. The algorithm terminates either when a complete plan is found (success) or when all possible refinement options have been exhausted (failure).\nConsider a planning problem with initial conditions Initial and goal Goal. We assume throughout the paper that the set of actions available to the planner, Actions, is fixed. We now define a top-level function, PlanGeneratively, which initializes the search and calls a function that performs the refinement process. Refining a plan consists of two parts: selecting a flaw in the plan (an open precondition or threatened link), then generating all possible corrections to the flaw. The selection of which flaw to correct need not be reconsidered, but the manner in which it is corrected might have to be, which is why all possible corrections are added to the search frontier. An open condition can be supported either by choosing an existing step that asserts the proposition or by adding a new step that does so:\nfunction ResolveOpen(--Q--> S_j, P): List of plans\n1 for each step S_i currently in P do\n2   if S_i can be ordered before S_j, and S_i adds a condition unifying with Q then\n3     collect Support(S_i, Q, S_j, P)\n4 for each action A in Actions whose add list contains a condition unifying with Q do\n5   (S_k, P') := AddStep(A, P)\n6   collect Support(S_k, Q, S_j, P')\n7 return the list of plans collected at lines 3 and 6.\nThe function AddStep takes an action and a plan as inputs, makes a copy of the plan, instantiates the action into a step, and adds it to the plan with the required ordering and binding constraints. It returns both the newly added step and the newly copied plan.\nfunction AddStep(A, P): (Step, Plan)\n1 P' := a copy of P\n2 S_k := a new step instantiating A\n3 R := a new reason [add-step S_k]\n4 Add S_k to P'\n5 Add each of A's :constraints to P', each tagged with R\n6 Add the orderings S_0 < S_k and S_k < S_1 to P', both tagged with R\n7 return (S_k, P')
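A direct Python rendering of ResolveOpen might look as follows; support and add_step are assumed helpers mirroring the pseudocode above, and the two predicates stand in for the planner's ordering and unification tests.

def resolve_open(open_cond, plan, actions, may_precede, may_unify):
    # Support the open condition (Q, consumer) either from a step already
    # in the plan or from a freshly instantiated one.
    q, consumer = open_cond
    children = []
    for i, step in plan.steps.items():
        if may_precede(i, consumer) and any(may_unify(p, q) for p in step.adds):
            children.extend(support(i, q, consumer, plan))
    for action in actions:
        if any(may_unify(p, q) for p in action.adds):
            new_step, new_plan = add_step(action, plan)  # tags constraints [add-step]
            children.extend(support(new_step.index, q, consumer, new_plan))
    return children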
Now Support adds a causal link between two existing steps in the plan, S_i and S_j, along with the required ordering and binding constraints. Notice that there might be more than one way to link the two steps, because there might be more than one postcondition of S_i that can unify with the link proposition Q. This operation is identical to the way snlp adds causal links, except that the constraints are annotated with a reason structure.\nfunction Support(S_i, Q, S_j, P): List of plans\n1 for each set of bindings B causing S_i to assert Q do\n2   P' := a copy of P\n3   L := a new link S_i --Q--> S_j\n4   R := a new reason [establish L]\n5   Add L to P'\n6   Add the ordering constraint S_i < S_j to P', tagged with R\n7   Add B to P', tagged with R\n8   collect P'\n9 return the set of plans collected at step 8.\nRecall that a threat to a link S_i --Q--> S_j is a step S_t that can consistently be ordered between S_i and S_j and can consistently assert either Q or ¬Q as a postcondition (i.e., either adds or deletes Q). We use the notation ⟨S_i --Q--> S_j ; S_t⟩ to denote this threat. The three possible ways to resolve a threat (promotion, demotion, and separation) involve adding ordering and binding constraints to the plan:\nfunction ResolveThreat(⟨S_i --Q--> S_j ; S_t⟩, P): List of plans\n1 R := a new reason [protect S_i --Q--> S_j ; S_t]\n2 if S_t can consistently be ordered before S_i then\n3   P' := a copy of P\n4   Add the constraint S_t < S_i to P', tagged with R\n5   collect P'\n6 if S_t can consistently be ordered after S_j then\n7   P' := a copy of P\n8   Add the constraint S_j < S_t to P', tagged with R\n9   collect P'\n10 for each set of bindings B that prevents S_t's effects from unifying with Q do\n11   P' := a copy of P\n12   Add constraints S_i < S_t and S_t < S_j to P', both tagged with R\n13   Add B to P', tagged with R\n14   collect P'\n15 return all new plans collected at lines 5, 9, and 14 above.\nNote that line 10 is a bit subtle because both codesignation and noncodesignation constraints must be added.5 For example, there are two different minimal sets of binding constraints that can be added to protect S_i --(on ?x ?y)--> S_j from a step S_t that deletes (on ?a ?b): {(≠ ?x ?a)}, and {(= ?x ?a), (≠ ?y ?b)}. Line 12 is also interesting: the constraints S_i < S_t and S_t < S_j are added in order to assure systematicity.\n5. But see (Peot & Smith, 1993) for an alternative approach.\n3.3 Formal Properties: Soundness, Completeness, Systematicity\nMcAllester and Rosenblitt (1991) prove three properties of this algorithm:\n1. Soundness: for any input problem with initial conditions I, goal G, and actions Actions, if PlanGeneratively(I, G) successfully returns a plan P, then executing the steps in P in any situation satisfying I will always produce a state in which G is true.\n2. Completeness: PlanGeneratively will find a solution plan if one exists.\n3. Systematicity: PlanGeneratively will never consider the same plan (partial or complete) more than once.\nCompleteness and systematicity can be explained further by viewing PlanGeneratively as searching a directed graph of partial plans. The graph has a unique root, the null plan, and a call to RefinePlan generates a node's children by choosing a flaw and generating its successors (the partial plans resulting from considering all possible ways of fixing it). Figure 2 illustrates how refinement replaces a frontier node with its children.\nThe completeness result means simply that every solution plan appears as a leaf node in this graph, and that PlanGeneratively will eventually visit every leaf node if necessary. Systematicity implies that this directed graph is in fact a tree, as Figure 2 suggests. The graph has this property because the children of a partial plan node are all alternative fixes for a flaw f: each child has a different step, ordering, or binding constraint added to fix f. And since subsequent refinements only add more constraints, each of its children inherits this commitment to how f should be fixed.
Therefore any plan on the frontier will differ from every other plan on the frontier in the way it fixes some flaw, and the same plan will never appear on the frontier more than once." }, { "figure_ref": [], "heading": "Using Reasons to Record Refinement Decisions", "publication_ref": [], "table_ref": [], "text": "As we mentioned above, the reason data structure is unnecessary in a planner that performs only refinement operations. snlp, for example, does not use them. However, reasons provide the basis for retracting past decisions, which is a necessary component of plan adaptation, as discussed in the next section. Before explaining the retraction process, however, we summarize the reason data structures that record how and why a plan was refined. A different reason structure is used for each of the three types of refinement:\nStep addition. When a new step S_i is added to a plan (function AddStep), the variable-binding constraints associated with its action schema are also added, along with two ordering constraints ensuring that the new step occurs after the initial step and before the goal step. The reasons accompanying these constraints are all of the form [add-step S_i].\nCausal link addition. When a link of the form S_i --Q--> S_j is added to the plan (function Support), an ordering constraint S_i < S_j is also added, along with variable-binding constraints ensuring that the selected postcondition of S_i actually asserts the proposition required by the selected precondition of S_j. These constraints are annotated with a reason structure of the form [establish S_i --Q--> S_j].\nThreat resolution. When a link S_i --Q--> S_j is threatened by a step S_t, the threat can be resolved (function ResolveThreat) by adding one of three sorts of constraints: an ordering of the form S_t < S_i, an ordering of the form S_j < S_t, or variable-binding constraints ensuring that the threatening postcondition of S_t does not actually falsify the link's proposition Q. These constraints are annotated with a reason structure of the form [protect S_i --Q--> S_j ; S_t].\nThis completes our review of generative (refinement) planning, so we now turn to the extensions that turn this planner into an adaptive algorithm.
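A minimal sketch of how such tags might be stored, with invented Python structures: every constraint is recorded alongside the reason that produced it, so a later retraction can recover the whole decision at once.

from dataclasses import dataclass

@dataclass(frozen=True)
class Reason:
    kind: str     # 'add-step', 'establish', or 'protect'
    about: tuple  # the step, link, or (link, threatening step) being served

# Example: the ordering constraint S_2 < S_1, added while establishing the
# link S_2 --(clear ?y.2)--> S_1, is stored together with its tag.
link = (2, ('clear', '?y.2'), 1)
tagged = {('order', 2, 1): Reason('establish', link)}

def constraints_tagged(reason, tagged):
    # Retraction will later remove exactly the constraints sharing one tag.
    return [c for c, r in tagged.items() if r == reason]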
" }, { "figure_ref": [], "heading": "Plan Adaptation", "publication_ref": [], "table_ref": [], "text": "There are two major differences between generative and adaptive planning:" }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Plan Extension", "publication_ref": [], "table_ref": [], "text": "[Figure 2: Plan refinement replaces a plan tagged down with a set of new plans, each with an additional step or constraint.]\n1. In adaptive planning there is a library retrieval phase in which the plan library is searched for a plan that closely matches the input initial and goal forms; the library plan is then adjusted to match the current problem.\n2. Adaptive planning begins with this retrieved and adjusted partial plan, and can retract planning constraints added when the plan was originally generated; generative planning begins with a plan with no constraints and can only add new ones.\nIn other words, both generative and adaptive planning are searching for a solution plan in a tree of partial plans, but generative planning starts at the (unique) root of the plan tree whereas adaptive planning begins at some arbitrary place in the tree (possibly at a solution, possibly at the root, possibly at some interior node).\nFigure 1 shows that adaptation starts at an interior node, and a solution might appear \"below\" it in the tree or in a different subtree altogether. As a result the adaptation algorithm must be able to move \"up\" the tree by removing constraints from the plan as well as move \"down\" the tree by adding constraints.\nFigures 2 and 3 show the way this movement is accomplished: plan refinement is the (only) operation performed by a generative planner. It takes a partial plan on the horizon and replaces it with that plan's children, a set of plans identical to the input plan except for having one more flaw repaired.\nPlan retraction takes a plan on the horizon and chooses a causal link or set of constraints to remove. That plan is replaced on the horizon with its parent (which is marked for additional retraction), along with the plan's siblings (representing all alternative ways of re-fixing the flaw whose fix was retracted from the plan). The siblings are then tagged for additional refinement.\nIn Section 5 we show that this simple scheme, augmenting a generative planner's refinement ability with the ability to retract constraints, is sufficient to implement plan adaptation. In other words, we prove that the adaptive planner will still only produce valid solutions (soundness), that it can find a solution no matter where in the plan space the library plan places it initially (completeness), and that it still doesn't explore areas of the plan tree redundantly (systematicity)." }, { "figure_ref": [], "heading": "The Adaptive Planning Algorithm", "publication_ref": [ "b32" ], "table_ref": [], "text": "The adaptation algorithm performs a standard breadth-first search, maintaining the search frontier as a set of pairs, each of the form ⟨P, up⟩ or ⟨P, down⟩. In either case P is a (possibly incomplete) plan, and up or down indicates the way to manipulate the plan to generate its neighbors in the search space: down means generate P's successors by further refining it (adding new steps and/or constraints) exactly as in generative planning; up means generate P's successors by retracting one of the refinements made when the plan was originally constructed.\nfunction PlanAdaptively(Initial, Goal, Library): Plan or failure\n1 LibPlan := retrieve a plan for Initial and Goal from Library\n2 AdjustedPlan := adjust LibPlan to match Initial and Goal exactly\n3 NewPlan := AdaptationLoop(AdjustedPlan)\n4 Store NewPlan in Library\n5 return NewPlan\n4.1.1 Plan Retrieval, Adjustment, and Storage\nOur basic plan-retrieval algorithm is quite simple: we scan the plan library, matching forms in the library plan's goal with the input goal. We retrieve the library plans with the greatest number of matches, then break ties by counting the number of matches between the input initial conditions and the initial conditions of the tied plans. Ties in the number of matches for both goal and initial expressions are broken arbitrarily.
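This retrieval rule is simple enough to state directly in code. A hypothetical Python sketch, where match_count stands in for the paper's form matcher:

def retrieve(library, goal, initial):
    # Prefer the most goal matches, break ties on initial-condition matches.
    def match_count(forms, target):
        return sum(1 for f in forms if f in target)
    return max(library, key=lambda p: (match_count(p.goal, goal),
                                       match_count(p.initial, initial)))

Residual ties fall to whichever plan max encounters first, which is consistent with the arbitrary tie-breaking described above.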
This process selects a single library plan, but its initial and goal conditions need not match the input initial and goal expressions exactly. The adjustment process adds goals to the library plan that appear in the input goal but are not already in the plan, and deletes goals from the library plan that do not appear in the input goal expression. The library plan's initial conditions are changed similarly to match the input problem description. Then causal links are adjusted: a link in the library plan of the form S_i --Q--> S_j where S_i has been deleted becomes an open condition of the form --Q--> S_j; if S_j has been deleted, the link itself can be removed. New open conditions are also added for any new goal forms. This new plan is guaranteed to be a refinement of the null plan for the current problem, but unlike the library plan it is not necessarily complete. See Section 6 for more details on the retrieval and adjustment algorithms.\nThen the adaptation phase is initiated, which modifies the retrieved plan to produce a solution plan for the new problem. This solution is passed to the library storage routine, which decides whether and how to store the plan for use in subsequent planning episodes. The question of whether to store a newly adapted solution back into the plan library is an important one, since having more plans in the plan library makes the library-retrieval process take longer. On the other hand, storing many plans in the library increases the chances that one will be a close match to a subsequent input problem.\nIdeally the plan library should consist of a relatively small set of \"qualitatively different\" solutions to \"commonly occurring\" problems, but a characterization of qualitatively different and of commonly occurring can be hard to come by. spa makes no contribution to the question of what should appear in the plan library, and our empirical work in Section 8 assumes a predetermined plan library which is not augmented during the experimental trials. See (Veloso, 1992) for an illuminating investigation of these issues." }, { "figure_ref": [], "heading": "The Adaptation Loop", "publication_ref": [], "table_ref": [], "text": "The AdaptationLoop function is similar to its generative counterpart RefinementLoop, except that in the latter case every plan selected for refinement is further refined. In the case of adaptation, a partial plan might be marked for refinement or alternatively for retraction, and the algorithm must keep track of which. Thus the frontier becomes a set of pairs of the form ⟨P, d⟩ where P is a partial plan and d is a symbol denoting a direction, either down or up. The down case means refine the plan further, in which case the RefinePlan function is called, exactly as in generation. A direction of up results in a call to RetractRefinement, which is defined below.
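Under these conventions the loop itself is short. A Python sketch, assuming is_complete, refine_plan, and retract_refinement from the surrounding discussion; seeding the frontier with both an up and a down entry follows the initialization described in Section 5.

from collections import deque

def adaptation_loop(lib_plan):
    frontier = deque([(lib_plan, 'up'), (lib_plan, 'down')])
    while frontier:
        plan, direction = frontier.popleft()   # breadth-first order
        if is_complete(plan):                  # no open conditions or threats
            return plan
        if direction == 'down':
            frontier.extend((p, 'down') for p in refine_plan(plan))
        else:
            frontier.extend(retract_refinement(plan))  # yields (plan, dir) pairs
    return None  # failure: the plan graph was exhausted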
" }, { "figure_ref": [ "fig_2" ], "heading": "Retracting Refinements", "publication_ref": [], "table_ref": [], "text": "Instead of adding and protecting causal links, retraction removes choices made when the library plan was originally generated. Just as RefinePlan selects a flaw in the current plan and adds to the frontier all different ways of fixing the flaw, RetractRefinement takes a prior refinement choice, uses the associated reason structure to completely remove that refinement, and adds to the frontier all of the alternative ways that the refinement might have been made.\nAs Figure 3 illustrates, retraction replaces a queue entry of the form ⟨P, up⟩ with a \"parent\" of P (also tagged up) along with a set of P's siblings, each tagged down. A precise definition of \"sibling\" is the set of refinements to P's parent that are not isomorphic to P. We define isomorphism as follows:\nDefinition: Two plans P_1 and P_2 are isomorphic just in case\n1. Steps agree: there is a 1:1 mapping from the steps in P_1 to the steps in P_2 such that corresponding steps have identical names (take the correspondence to be S_1, S_2, ..., S_n to R_1, R_2, ..., R_n);\n2. Links agree: S_i --Q--> S_j ∈ P_1 iff R_i --Q--> R_j ∈ P_2;\n3. Orderings agree: S_i < S_j ∈ P_1 iff R_i < R_j ∈ P_2;\n4. Binding constraints agree: (= ?s_i K) ∈ P_1 iff (= ?r_i K) ∈ P_2, where ?s_i is a variable in step i of P_1, ?r_i is the corresponding variable in step i of P_2, and K is a constant; likewise for (≠ ?s_i K) and (≠ ?r_i K); and (= ?su_i ?sv_j) ∈ P_1 iff (= ?ru_i ?rv_j) ∈ P_2, where ?su_i and ?sv_j are variables in steps i and j of P_1 respectively, and ?ru_i and ?rv_j are the corresponding variables in steps i and j of P_2; likewise for (≠ ?su_i ?sv_j) and (≠ ?ru_i ?rv_j).\nThis definition implies that two isomorphic plans have the same open conditions and threatened links as well. Note that two plans may have corresponding steps and identical orderings and still not be isomorphic, however, since they can differ on one or more causal links.
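A naive Python rendering of this test, assuming the Step and Link structures from the earlier sketches and that step indices already correspond (a faithful implementation would search over all 1:1 mappings of same-named steps):

def isomorphic(p1, p2):
    # Compare operator names only; renamed-apart variables differ by index.
    if sorted(s.action.name[0] for s in p1.steps.values()) != \
       sorted(s.action.name[0] for s in p2.steps.values()):
        return False
    links = lambda p: {(l.producer, l.prop, l.consumer) for l in p.links}
    return (links(p1) == links(p2)
            and p1.orderings == p2.orderings
            and set(p1.bindings) == set(p2.bindings))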
The question now arises as to which decisions can be reversed when moving upward in the space of partial plans. The simplest answer is that RetractRefinement must be able to eliminate any decision that could have been made by RefinePlan. Refinement decisions made by RefinePlan can result in the following elements being added to a plan:\nA single causal link, plus an ordering constraint plus binding constraints, inserted to fix an open condition. In this case all the constraints will be tagged with the reason [establish S_i --Q--> S_j].\nA new step plus a causal link, inserted to fix an open condition. In this case two ordering constraints and a set of binding constraints associated with the step will be tagged with the reason [add-step S], and an ordering constraint and a second set of binding constraints will be added along with the new link, as above.\nAn ordering constraint inserted to fix a threat either by promotion or demotion. This constraint will be tagged with [protect S_i --Q--> S_j ; S_t], where S_t is the threatening step.\nA set of variable-binding constraints plus two ordering constraints inserted to fix a threat by separation. These constraints will be tagged with [protect S_i --Q--> S_j ; S_t].\nA single call to RetractRefinement should therefore retract one such refinement decision, which amounts to removing the associated set of orderings, binding constraints, steps, and links from the plan. Notice that a decision corresponds closely to a set of identical reason structures in a plan, so retracting a decision from a plan really amounts to removing a set of constraints with identical tags, along with their associated links and steps.\nThe one exception to this correspondence is the fact that the decision to add a step to a plan (reason [add-step ...]) is always made as part of a decision to add a link (reason [establish ...]), so these two decisions should be retracted as a pair as well. We will treat the two as separate decisions, but our algorithm will ensure that a step is removed from a plan as soon as its last causal link is retracted.\nAlthough the choice of a decision to retract is made nondeterministically, it cannot be made arbitrarily, since the planner could not have generated the decisions in any order. For example, when building plan P, the planner might have created a link L = S_i --Q--> S_j and later introduced a set of ordering or binding constraints C to protect this link from being threatened by another step S_t. The retraction algorithm must be able to retract either decision (delete the link or the constraints), but these two decisions are not symmetric. If C is deleted, L becomes threatened again, but if L is deleted, then C becomes superfluous.\nTo protect against leaving the plan with superfluous steps, links, or constraints, we allow the algorithm to retract only those decisions that are exposed. Informally, a decision is exposed if no other constraints in the plan depend on the structure added to the plan by that decision. The formal definition of exposed is stated in terms of reasons within a plan, since, as we noted above, decisions add constraints to a plan that are tagged with identical reasons." }, { "figure_ref": [], "heading": "Definition: A reason R is exposed in plan", "publication_ref": [], "table_ref": [], "text": "P if\n1. R is of the form [protect S_i --Q--> S_j ; S_t] for some threatened link, or\n2. R is of the form [establish S_i --Q--> S_j] for some link S_i --Q--> S_j such that (a) P contains no reason of the form [protect S_i --Q--> S_j ; S_t], and (b) if S_i --Q--> S_j is the only link in P whose producer is S_i, then P contains no reason of the form [protect S_x --R--> S_y ; S_i], or\n3. R is of the form [add-step S] for some step S that participates in no causal link of P.\nThe first and third cases are fairly straightforward: constraints that resolve a threat can always be retracted, and a step can only be removed if it no longer participates in any causal links.
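The exposure test can be phrased directly over the reason tags. A sketch, assuming the Reason records sketched earlier, links stored as (producer, proposition, consumer) triples, and protect tags carrying (link, threatening step); the [add-step] case is omitted because, as explained below, such reasons never appear exposed in practice.

def exposed_reasons(plan):
    reasons = set(plan.reasons.values())
    protects = [r for r in reasons if r.kind == 'protect']
    exposed = []
    for r in reasons:
        if r.kind == 'protect':
            exposed.append(r)                       # case 1: always retractable
        elif r.kind == 'establish':
            link = r.about
            if any(p.about[0] == link for p in protects):
                continue                            # 2(a): the link is still protected
            last = sum(1 for l in plan.links if l[0] == link[0]) == 1
            if last and any(p.about[1] == link[0] for p in protects):
                continue                            # 2(b): its step still resolves a threat
            exposed.append(r)
    return exposed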
Obeying this ordering means that the plan will never contain superfluous constraints, links, or steps; equivalently, we might say that retracting only exposed decisions corresponds to the reverse of an order in which a generative planner might have made those decisions originally.\nConstraining retraction to occur in this order might seem overly restrictive, so we make two important observations. First, note that the order of retraction is not constrained to be the reverse of the order used when the library plan was created, only the reverse of one of the decision-orderings that could have been used to create the library plan. Second, we direct the reader to Section 7, which explains how chef repair strategies, encoded as spa heuristics, could sidestep these restrictions by acting as macro operators.\nNext we present the RetractRefinement function. Notice how its definition mirrors that of its generative counterpart RefinePlan: the latter chooses a flaw and returns a list that includes all possible ways of fixing it; the former chooses an exposed decision, removes the constraints that originally fixed it, and enqueues all the alternative ways of fixing it.\nfunction RetractRefinement(P): List of ⟨Plan, Direction⟩\n1 R := select an exposed reason\n2 if there is no exposed reason then return {}\n3 (F, P') := RemoveStructure(R, P)\n4 collect ⟨P', up⟩\n5 for each plan P'' returned by CorrectFlaw(F, P') do\n6   if P'' is not isomorphic to P then collect ⟨P'', down⟩\n7 return all ⟨plan, direction⟩ pairs collected in lines 4 and 6.\nThe way to remove the structure associated with an exposed reason depends on the type of the reason. The function RemoveStructure returns the flaw associated with the input reason as well as the plan produced by removing the appropriate constraints, links, and steps. Notice that the coupling between link and step decisions is made here: when the last link to a step is deleted, the step is deleted too. For this reason we do not have to handle the case of removing a reason of the form [add-step S]: a step becomes exposed only when a link is deleted, but this function removes the step immediately. So a reason of the form [add-step S] will never appear exposed in a plan.\nfunction RemoveStructure(R, P): (Flaw, Plan)\n1 if R is of the form [protect S_i --Q--> S_j ; S_t] then\n2   F := ⟨S_i --Q--> S_j ; S_t⟩\n3   P' := a copy of P\n4   Delete from P' all constraints tagged with R\n5   return (F, P')\n6 else if R is of the form [establish S_i --Q--> S_j] then\n7   F := --Q--> S_j\n8   P' := a copy of P\n9   Delete S_i --Q--> S_j from P'\n10  Delete from P' all constraints tagged with R\n11  if P' contains no link of the form S_i --Q'--> S_k for any step S_k and expression Q' then\n12    delete S_i from P' along with all constraints tagged with [add-step S_i]\n13  return (F, P')\nThis concludes the description of the spa algorithm; we next examine the algorithm's formal properties, proving that it is sound (any plan it returns constitutes a solution to the input planning problem), complete (if there is any solution to the input planning problem, spa will eventually find it, regardless of the library plan it chooses to adapt), and systematic (the adaptation will never consider a partial plan more than once).
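Putting the pieces together, a hypothetical Python rendering of the retraction step, with exposed_reasons, remove_structure, correct_flaw, and isomorphic assumed from the earlier sketches:

def retract_refinement(plan):
    # Undo one exposed decision, re-enqueue the parent for further
    # retraction, and enqueue every alternative fix for refinement.
    candidates = exposed_reasons(plan)
    if not candidates:
        return []
    reason = candidates[0]                  # the paper's choice is nondeterministic
    flaw, parent = remove_structure(reason, plan)
    entries = [(parent, 'up')]
    for sibling in correct_flaw(flaw, parent):
        if not isomorphic(sibling, plan):   # don't regenerate the plan we retracted
            entries.append((sibling, 'down'))
    return entries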
}, { "figure_ref": [ "fig_1" ], "heading": "Soundness, Completeness, and Systematicity", "publication_ref": [ "b20" ], "table_ref": [], "text": "To prove formal properties of the spa algorithm we begin by characterizing a lifted version of the generative algorithm developed by McAllester and Rosenblitt's (1991) algorithm (hereafter called snlp) in terms of a search through the space of partial plans. We then consider retraction as well. This discussion uses many of the concepts and terms from Section 3 describing plans and planning problems.\nConsider a directed graph as in Figure 1 where a node represents a plan and an arc represents a plan-renement operator. We can dene the children of a node (plan) P, subject to a nondeterministic choice, as follows:\nDenition: The children of a plan P are exactly these:\n1. If P is complete then it has no children.\n2. Otherwise select one of P's open conditions or threatened links.\n3. If the choice is the open condition, Q !S j , then P's children are all plans that can be constructed by adding a link S i Q !S j , an ordering S i < S j , and a minimal variable binding constraint , where S i is either an existing step or a newly created step that can consistently be ordered prior to S j , and that adds some proposition R, where R = Q.\n4. Otherwise, if the choice is the threat, S i Q !S j , S t , then the node has the children obtained by (a) adding the ordering S t < S i (b) adding the ordering S j < S t (c) adding the orderings S i < S t and S t < S j in addition to a minimal variable binding constraint, , that forces all forms R in S t 's add and delete list, R doesn't unify with Q.\nprovided these are consistent with the constraints currently in P.\nMcAllester and Rosenblitt (1991) claim three properties of this representation and algorithm:\nSoundness: a leaf node corresponds to a partial plan, any completion of which will in fact satisfy the input goal.\nCompleteness: any plan that solves the planning problem is realized in the graph as a leaf node. Therefore any strategy for searching the graph that is guaranteed to consider every node eventually will nd a solution to the planning problem if one exists.\nSystematicity: two distinct nodes in the graph represent non-isomorphic plans, and furthermore, the graph generated by a planning problem is a tree. Therefore a search of the plan graph that does not repeat a node will never consider a partial plan or any of its renements more than once." }, { "figure_ref": [], "heading": "Soundness", "publication_ref": [], "table_ref": [], "text": "The soundness property for spa follows directly from snlp's soundness, since soundness is not a property of the algorithm's search strategy, but comments only on the nature of leaf nodes (complete plans). Since spa denes plans and solutions in the same way as snlp, spa too is sound." }, { "figure_ref": [], "heading": "Completeness", "publication_ref": [], "table_ref": [], "text": "Completeness, recall, consists of two claims:\n1. that every solution to the planning problem is realized as a leaf node of the graph, and 2. that the search algorithm will eventually visit every leaf node in the graph.\nThe rst condition once again does not depend on the way the graph is searched, therefore it is true of spa because it is true of snlp. 
The second condition is less clear, however: snlp makes sure it covers the entire graph by starting at the root and expanding the graph downward in a systematic fashion, whereas spa starts at an arbitrary point in the graph and traverses it in both directions.\nA proof of completeness amounts to demonstrating that for any partial plan P_i representing the beginning point for spa (the case, or library plan, supplied by the retrieval mechanism), the algorithm will eventually retract constraints from the plan until it visits the root node (null plan), and that doing so also implies that it will visit all subtrees of the root node as well. More formally stated, we have:\nTheorem 1: A call to AdaptPlan with a library plan P will cause every partial plan (every node in the plan graph defined by P's planning problem) to be visited.\nWe use an inductive argument to prove this theorem, showing that the subgraph rooted at P_i is completely explored, and that the algorithm will follow a path up to the root (null plan), exploring every subgraph in the process.\nWe begin by demonstrating informally that spa's method of refining a partial plan (adding constraints as opposed to retracting) is equivalent to the graph search undertaken by snlp. (Recall that spa operates by manipulating a search frontier whose entries are ⟨P, down⟩ and ⟨P, up⟩, corresponding respectively to adding and deleting constraints from P.)\nClaim 1: The entries generated by spa's processing an entry of the form ⟨P, down⟩ correspond exactly to the snlp graph of partial plans rooted at P, assuming the same choice is made as to what condition (open or threat) to resolve at each stage.\nIt suffices to show that the new entries generated by spa in response to an entry of the form ⟨P, down⟩ correspond to the same partial plans that comprise P's children in the graph as defined above. There were three parts to the definition: P complete, P refined by choosing an open condition to satisfy, and P refined by choosing a threat to resolve.\nIn the case that P is complete, P has no children, and likewise spa terminates, generating no new entries. Otherwise spa calls RefinePlan, which chooses a condition to resolve and generates new down entries, one for each possible resolution. Note therefore that a down entry generates only down entries; in other words, refinement will only generate more refinements, just as a directed path in the graph leads to successively more constrained plans.\nIn the second case an open condition is chosen; RefinePlan generates new down entries for all existing steps possibly prior to the open condition and for all actions that add the open condition's proposition. This corresponds exactly to case (3) above.\nIn the last case a threat condition (a link and a threatening step) is chosen; RefinePlan adds the orderings and/or binding constraints that prevent the threat, exactly as in case (4) above.\nHaving verified that spa generates the immediate children of a partial plan in a manner equivalent to snlp, and having noted that it enters these children on the frontier with down tags as well (so their children will also be extended), the following lemma follows directly from Claim 1 above, the completeness of snlp, and a restriction on the search algorithm noted below:\nLemma 1: If spa ever adds to the frontier the entry ⟨P, down⟩ then it will eventually explore all partial plans contained in the graph rooted at P (including P itself).
One must be precise about what it means to "explore" a partial plan, or equivalently to "visit" the corresponding graph node. AdaptPlan contains a loop in which it selects an entry from the frontier (i.e. a plan/direction pair), checks it for completeness (terminating if so), and otherwise refines the plan. So "exploring" or "considering" a plan means selecting the plan's entry on the search frontier. Lemma 1 actually relies on a search-control strategy that is guaranteed eventually to consider every entry on the frontier. This corresponds to a search strategy that will eventually visit every node in a graph given enough time; in other words, one that will not spend an infinite amount of time in a subgraph without exploring other areas of the graph. snlp's iterative-deepening search strategy has this property, as does spa's breadth-first search.
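Lemma 1's restriction is just this fairness condition on how the frontier is consumed. A minimal sketch follows (the refine/retract callbacks are assumed, not spa's actual interface; the FIFO queue stands in for any discipline that eventually selects every enqueued entry):

```python
from collections import deque

def adapt_loop(initial_plan, refine, retract, is_complete):
    """Skeleton of AdaptPlan's frontier loop over (plan, direction) entries."""
    frontier = deque([(initial_plan, "down"), (initial_plan, "up")])
    while frontier:
        plan, direction = frontier.popleft()   # FIFO: every entry is eventually selected
        if is_complete(plan):
            return plan                        # a complete plan is a solution
        if direction == "down":                # refinement generates only down entries
            frontier.extend((child, "down") for child in refine(plan))
        else:                                  # retraction: parent up, siblings down
            parent, siblings = retract(plan)
            if parent is not None:
                frontier.append((parent, "up"))
            frontier.extend((sib, "down") for sib in siblings)
    return None                                # frontier exhausted: no solution
```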
The base case for completeness follows directly from Lemma 1 and the fact that AdaptPlan initially puts both ⟨P_i, up⟩ and ⟨P_i, down⟩ on the frontier:
Lemma 2: The subgraph rooted at P_i will be fully explored.
Now we can state the induction condition as a lemma:
Lemma 3: If a partial plan P is fully explored, and P_p is the partial plan generated as a result of (nondeterministically) retracting a choice from P, then the subgraph rooted at P_p will be fully explored as well.
The fact that P_p is considered as a result of a retraction from P means that the entry ⟨P, up⟩ was considered, resulting in a call to RetractRefinement from which P_p was generated as the parent node P′. To show that P_p's subgraph is fully explored we need to show that
1. P_p is visited,
2. the subgraph beginning at P is fully explored, and
3. all of P_p's children other than P are fully explored.
The first is true because RetractRefinement generates the entry ⟨P_p, up⟩, which means that P_p will eventually be visited. The second condition is the induction hypothesis. The third condition amounts to demonstrating (1) that the children returned by RetractRefinement actually represent P_p's children as defined above, and (2) that these children will themselves be fully explored.
The first is easily verified: RetractRefinement immediately calls CorrectFlaw on the flaw it chooses to retract, which is exactly the function called by RefinePlan to address the flaw in the first place. In other words, the new nodes generated for P_p by RetractRefinement are exactly those that would be generated by RefinePlan, which by Claim 1 are P_p's children.
As for the children being fully explored, all the children except for P itself are put on the frontier with a down tag, and therefore by Lemma 1 will be fully explored. P itself is fully explored by assumption, which concludes the proof of Lemma 3.
Finally we need to demonstrate that the call to AdaptPlan(P_i) eventually retracts to the graph's root. First of all, the call to AdaptPlan generates an entry of the form ⟨P_i, up⟩, and processing an entry of the form ⟨P_i, up⟩ generates an entry of the form ⟨P_{i+1}, up⟩, where P_{i+1} represents the retraction of a single constraint from P_i.
The call to AdaptPlan(P_i) therefore generates a sequence of entries of the form ⟨P_1, up⟩, ⟨P_2, up⟩, …, ⟨P_k, up⟩, where k is the number of decisions in P_i. In this sequence P_1 = P_i and P_k has no constraints. Furthermore, Lemma 2 tells us that the subgraph rooted at P_1 is fully explored and Lemma 3 tells us that the rest of the P_i subgraphs are fully explored as well.
The final question is whether P_k, a plan with no constraints, is necessarily the null plan (defined above to be a plan with just the initial and final steps and the single constraint ordering initial before final). We know that calls to RetractRefinement will eventually delete all causal links and all orderings that were added as the result of protecting a threat. Superfluous steps (steps that have no associated link) and orderings (that were added without a corresponding threat condition) might appear in P_i, however, and RetractRefinement would never find them. P_k, then, would contain no more retraction options, but would not be the null plan.
We can fix this easily enough, either by requiring the library-retrieval machinery to supply plans without superfluous steps and constraints, or by inserting an explicit check in RetractRefinement that removes superfluous steps and constraints when there are no more options to retract.
The former might not be desirable: the library plan might contain steps that don't initially appear to serve the goal, but later come in handy; leaving them in the plan means the planner need not re-introduce them. The latter option is inexpensive, and is actually implemented in our code. See Section 6.3 for further discussion of this issue.
Assuming that P_k is the null plan, the completeness proof is finished: we showed that calling AdaptPlan(P_i) fully explores its own subgraph, and furthermore generates a path to the graph's root (the null plan), ensuring that all nodes below the path are visited in the process.
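The second fix is easy to state in code. Here is an illustrative sketch (simplified data structures, not the actual RetractRefinement): when no retraction options remain, drop every step, other than the initial and goal steps, that participates in no causal link, together with any ordering that mentions a dropped step.

```python
def prune_superfluous(steps, links, orderings, initial="init", goal="goal"):
    """Remove steps with no associated causal link, plus orderings naming them.
    `links` holds (producer, proposition, consumer) triples. Superfluous
    orderings between kept steps would need extra bookkeeping not shown here."""
    linked = {s for (p, _q, c) in links for s in (p, c)}
    keep = {s for s in steps if s in (initial, goal) or s in linked}
    kept_orderings = {(a, b) for (a, b) in orderings if a in keep and b in keep}
    return keep, set(links), kept_orderings
```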
" }, { "figure_ref": [], "heading": "Systematicity", "publication_ref": [], "table_ref": [], "text": "Systematicity, like completeness, is a two-part claim. The first is formal: that the plan graph is a tree; in other words, that the policy of making a nondeterministic but fixed choice of a condition (open or threat) to resolve at each node, then generating the node's children by applying all possible ways to resolve that condition, means that any two distinct plan nodes represent non-isomorphic plans. The second claim is that the strategy for searching the graph never visits a plan node more than once.
The first claim applies just to the formal definition of the plan graph, so the systematicity of snlp suffices to prove the systematicity of spa.
To verify the second claim we need only show that for any partial plan P, spa will generate that plan just once. We demonstrate this in two parts:
Lemma 4: Processing an entry of the form ⟨P, down⟩ will never cause P to be generated again.
This is true because processing ⟨P, down⟩ causes P's children to be generated with down tags, and so on. Every successive node that gets generated will have strictly more constraints or more links than P, and therefore will not be isomorphic.
Lemma 5: Processing an entry of the form ⟨P, up⟩ will never cause P to be generated again.
Processing ⟨P, up⟩ causes P's parent P_p to be generated with an up tag and P's siblings to be generated with a down tag. Note that P is not generated again at this point. No further extension of a sibling of P can ever be isomorphic to P, since they will differ (at least) on the selection of a solution to the condition resolved between P_p and its children. Likewise, no sibling of P_p can ever be refined to be isomorphic to P, since it will differ from P (at least) in the constraint that separates P_p from its siblings.
Therefore, as long as a plan is not explicitly entered on the frontier with both down and up tags, it will never be considered more than once. Actually the fitted library plan, P_i, is initially entered on the queue with both down and up tags, so spa may consider this partial plan more than once, and is therefore not strictly systematic. Every other partial plan, however, is generated during an iteration of the loop in AdaptPlan, which generates each of its plans only once, either up or down. So the spa graph-search algorithm is systematic except for the fact that it might consider its initial plan twice." }, { "figure_ref": [], "heading": "Interactions between Retrieval and Adaptation", "publication_ref": [], "table_ref": [], "text": "While the bulk of our research has been devoted to the adaptation phase of the planning process, it is impossible to consider this phase completely in isolation. In this section we consider the expected benefit of adaptation as well as some subtle interactions between adaptation and retrieval. First we compare the complexity of plan adaptation with that of plan generation from scratch; this ratio provides an estimate of how closely the library plan must match the current situation in order for adaptation to be faster than generation. Next we outline how plans are stored in and retrieved from spa's library. Finally we describe some interesting interactions between the processes of retrieval and adaptation." }, { "figure_ref": [], "heading": "Should one adapt?", "publication_ref": [], "table_ref": [], "text": "All planners that reuse old cases face the fundamental problem of determining which plans are \"good\" to retrieve, i.e., which plans can be cheaply adapted to the current problem. In this section we present a simple analysis of the conditions under which adaptation of an existing case is likely to be more expeditious than generative planning.
The basic idea is that at any node, adaptation has exactly the same options as generative planning, plus the opportunity to retract a previous decision. Thus the search-space branching factor is one greater for adaptation than for generative planning.
Suppose that the generative branching factor is b and a working plan of length n exists for the new problem. In this case, the cost of generation is $b^n$. Now suppose that the library-retrieval module returns a plan that can be extended to a working plan with k adaptations; this corresponds roughly to the addition of k new steps or the replacement of k/2 inappropriate steps. Thus adaptation is likely to be faster than generative planning whenever
$(b + 1)^k < b^n$
This inequality is satisfied whenever
$\frac{k}{n} < \log_{b+1} b$
As the branching factor b increases, the logarithm increases towards a limit of one. Thus small branching factors impose the tightest bound on the k/n ratio. But since generative planning almost always has a branching factor of at least 3, and since $\log_4 3 \approx 0.79$, we conclude that adaptation is likely preferable whenever the retrieval module returns a case that requires at most 80% as many modifications as generative planning would require. A conservative estimate suggests that this corresponds to a fitted library plan in which at most 40% of the actions are inappropriate.
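A quick numeric check of the bound (an illustrative calculation, not a measurement):

```python
import math

def adaptation_threshold(b):
    """Largest k/n ratio for which (b+1)**k < b**n can hold: log base (b+1) of b."""
    return math.log(b) / math.log(b + 1)

for b in (2, 3, 5, 10):
    print(f"branching factor {b:2d}: adapt when k/n < {adaptation_threshold(b):.2f}")
# b = 3 gives log_4 3 ≈ 0.79, the source of the 80% figure above
```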
While we acknowledge that this analysis must be taken loosely, we believe it provides useful intuitions on the case quality required to make adaptation worthwhile." }, { "figure_ref": [], "heading": "The retrieval phase", "publication_ref": [], "table_ref": [], "text": "Our model of retrieval and adaptation is based on the premise that the spa algorithm itself generates its library plans. Plans generated by spa automatically have stored with them all the dependencies introduced in the process of building the plan, i.e. all of its causal links and constraints.
Most of a plan's propositions are variabilized before the plan is stored in the library. We do the variabilization in a problem-specific manner, but the general issue of what parts of a plan to variabilize can be viewed as a problem of explanation-based generalization, and is discussed by Kedar-Cabelli and McCarthy (1987) and by Kambhampati and Kedar (1991).
Library retrieval is a two-step process: given a set of initial and goal conditions, the algorithm first identifies the most promising library plan, then does some shallow modification to make the plan's initial and goal conditions match the inputs." }, { "figure_ref": [], "heading": "Library retrieval", "publication_ref": [ "b16" ], "table_ref": [], "text": "The first phase of the retrieval process uses either an application-supplied method or a domain-independent algorithm similar to the one used by Kambhampati and Hendler (1992) to select candidate plans. First the input goals are matched against each library plan's goals, and the library plans with the greatest number of matches are identified. This can result in many candidates, since several plans can match, and a single plan can match in a number of different ways. To choose among the remaining alternatives the algorithm examines the match between the initial conditions. It computes for each alternative the number of open conditions created by replacing the library plan's initial conditions with the input initial conditions. This is intended to measure the amount of planning work necessary to get the input initial world state to the state expected by the library plan. It counts the number of open conditions for each option and chooses the plan with the minimum, breaking ties arbitrarily." }, { "figure_ref": [], "heading": "Fitting the retrieved plan", "publication_ref": [], "table_ref": [], "text": "Having matched a library plan, fitting it to the new problem is simple (a code sketch follows the list):
1. Instantiate the library plan with the variable bindings produced by the match above.
2. Replace the library plan's goal conditions with the new goal conditions.
3. Create a new open condition for each goal proposition that appears in the new goal set but not in the library plan's goal set.
4. Replace the library plan's initial conditions with the new problem's initial conditions.
5. For each causal link that \"consumes\" a proposition from the old initial conditions, if that proposition is absent from the new initial conditions, then delete the link and add a corresponding new open condition.
6. For each causal link that \"produces\" a proposition for the old goal conditions, if that proposition is absent from the new goals, then delete the link.
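The sketch below renders steps 1 through 6 in Python. It is illustrative only: propositions are treated as opaque tokens, the bindings from step 1 are assumed to be supplied by the matcher, and `init`/`goal` are stand-in names for the plan's dummy initial and goal steps.

```python
def fit_library_plan(plan, bindings, new_initials, new_goals,
                     INIT="init", GOAL="goal"):
    """Fit a retrieved library plan to a new problem (steps 1-6 above).
    `plan` is a dict with 'links' as (producer, prop, consumer) triples,
    'open_conds' as (prop, consumer) pairs, and a 'goals' set."""
    subst = lambda p: bindings.get(p, p)                   # step 1: instantiate
    links = {(s, subst(q), t) for (s, q, t) in plan["links"]}
    open_conds = {(subst(q), t) for (q, t) in plan["open_conds"]}
    old_goals = {subst(q) for q in plan["goals"]}

    goals = set(new_goals)                                 # step 2: new goal conditions
    open_conds |= {(g, GOAL) for g in goals - old_goals}   # step 3: new open conditions
    initials = set(new_initials)                           # step 4: new initial conditions

    for (s, q, t) in list(links):
        if s == INIT and q not in initials:                # step 5: broken initial link
            links.discard((s, q, t))
            open_conds.add((q, t))
        elif t == GOAL and q not in goals:                 # step 6: obsolete goal link
            links.discard((s, q, t))
    return {"links": links, "open_conds": open_conds,
            "initials": initials, "goals": goals}
```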
generous tting", "publication_ref": [], "table_ref": [], "text": "The algorithm above does no pruning of superuous steps: the plan returned can contain steps that existed to \\produce\" causal links for propositions in the library plan's goal set that are not part of the new goals. Hence the tted plan can contain links, steps, and constraints which are (apparently) irrelevant to the current problem. Of course, until the adaptation algorithm actually runs it is impossible to tell whether these parts of the library plan will actually turn out to be useful. If removed during the tting process, the adaptation algorithm might discover that it needs to re-generate the same structures.\nThe question therefore arises as to whether the tting algorithm should delete all such links, potentially removing many steps and constraints (a conservative strategy), or should it leave them in the plan hoping that they will eventually prove useful (a generous approach)? One can easily construct cases in which either strategy performs well and the other performs poorly.\nWe noted above an interesting interaction between the generous strategy and our adaptation algorithm. AdaptPlan's retraction algorithm is the inverse of extension, which means that it can only retract decisions that it might have actually made during extension. AdaptPlan will obviously never generate a superuous plan step, and so a library plan containing superuous links or steps could not have been produced directly by the adapter. If so, AdaptPlan might not be able to retract all the planning decisions in the library plan, and is therefore not complete. (Since it cannot retract all previous planning decisions, it cannot retract all the way back to the null plan, and therefore may fail to explore the entire plan space.) Recall from Section 5.2 that the retraction algorithm presented in Section 4 is only complete when used in conjunction with a conservative tting strategy, or alternatively by modifying the RetractRefinement code so it deletes superuous steps|steps other than the initial and goal steps that do not produce a causal link|from any plan it returns." }, { "figure_ref": [], "heading": "Transformational Adaptation", "publication_ref": [ "b12", "b29" ], "table_ref": [], "text": "Most previous work on case-based planning has concentrated on nding good indexing schemes for the plan library, with the idea that storing and retrieving appropriate cases would minimize the need for adaptation. We can nonetheless use the spa framework to analyze the adaptation component of other systems. The repair strategies included in the chef system (Hammond, 1990), for example, specify transformations that can be decomposed into sequences of spa rene and retract primitives. Our analysis proves useful in two dierent ways:\n1. It shows how chef's indexing and repair strategies could be exploited in the spa framework by providing heuristic search-control information.\n2. It demonstrates how spa's simple structure can be used to analyze more complex adaptation strategies, and ultimately could be used to compare alternative theories of plan repair.\nWe start with a section summarizing chef's design. Then in Section 7.2 we consider its repair strategies sequentially, decomposing them into spa operators. Section 7.3 proves that chef's set of repairs is incomplete, and Section 7.4 discusses ways to encode chef's heuristics in spa's framework. 
Section 7.5 discusses how our analysis could be extended to other transformational planners such as gordius (Simmons, 1988)." }, { "figure_ref": [], "heading": "Plan adaptation in chef", "publication_ref": [ "b27", "b7" ], "table_ref": [], "text": "chef uses a five-stage process for adapting an existing plan to achieve new goals. chef first takes a library plan, fits it to the new problem, and simulates its execution, using the new initial conditions and goals. Roughly speaking, chef's failures correspond to a spa plan with at least one flaw: a threatened link or open precondition. chef next uses forward and backward chaining to analyze the failure, discovering things like what step or steps caused the failure, and what goals those steps were servicing. The result is a causal network corresponding to the causal links constructed by spa in the process of plan generation. (Some of chef's failure explanations are more expressive than spa's causal links, since the latter cannot express metagoals such as avoiding wasteful actions. Here and in the rest of our analysis we consider only the features of chef that pertain to strips planning; we caution the reader that many of chef's innovations are relevant only in non-strips domains, and thus that our analysis, by necessity, is incomplete.) spa therefore performs the first two stages of chef's adaptation process in the process of plan generation.
chef then uses the causal explanation to select one of sixteen prestored diagnoses, called TOPs (Schank, 1982). A TOP also contains a set of repair strategies (plan transformations that might eliminate the failure), and each repair strategy has an associated test to check its applicability. chef's TOPs are divided into five classes: failures due to side effects, desired effects, side features, desired features, and step parameters. The strips action representation used by spa does not distinguish between object features and other propositions and does not allow parameterized steps, so only the first two classes of TOPs are relevant to our analysis. In any case, these two classes are the most important, since they account for more than half of the TOPs. The distinction between a side effect and a desired effect is straightforward: side effects are operator postconditions that don't support a link, while desired effects do have a purpose in the plan. Naturally, the sets of appropriate repairs are different in the two cases.
After choosing a TOP, chef instantiates the associated repair strategies using the details of the current planning task. For each possible repair chef runs a test to see if the repair is applicable, using the result of this test to instantiate the repair. For example, the test for an abstract repair corresponding to insertion of a \"white knight\" (Chapman, 1987) would determine which steps could reassert the desired proposition.
Finally, chef uses a set of heuristic rules to rank the various instantiated repairs, chooses the best, and applies it. Once the plan is transformed, it is simulated once again; detection of a new failure starts the cycle anew." }, { "figure_ref": [], "heading": "Plan transformation in chef", "publication_ref": [ "b6", "b31", "b32" ], "table_ref": [], "text": "Seven of chef's seventeen repair strategies do not apply to our strips-like action representation. For example, the repair that adjusts the duration of an action is inapplicable since all strips actions are assumed to occur instantaneously. The rest of this section describes the ten relevant repairs and reduces them to spa primitives.
Four repairs add new steps to the flawed plan. In each case the plan failure corresponds to a link $S_p \xrightarrow{Q} S_c$ threatened by another step S_t.
1. Recover: Add a new step after S_t that will remove the side-effect proposition ¬Q before the consuming step, S_c, is executed.
2. Alter-feature: Add a new step that changes an undesired trait into one that matches the goal.
3. Remove-feature: Add a new step that deletes an object's undesired characteristic.
These three repairs are identical from spa's perspective, since strips does not distinguish between object features and other types of propositions. Since the repair strategy is the same in the three cases, it is unclear that the distinction chef makes between objects and features provides any useful control knowledge in these cases.
Each of these repairs corresponds to the introduction of a \"white knight.\" Accomplishing these repairs systematically requires retracting the threatened link and then adding a new link produced by a new step (rather than simply adding the new step). Thus spa can simulate these three transformations with a retract-refine sequence, although additional retraction might be needed to eliminate decisions that depended on the original threatened link.
4. Split-and-reform: Divide a step into two distinct steps that together achieve the desired results of the original.
In spa terminology, it is clear that the step to be split, S_p, must be producing two causal links, since it is accomplishing two purposes. Thus spa can effect this repair by retracting the threatened link (which automatically removes some variable-binding and ordering constraints), adding a new step S_p′, and then adding a new link $S_{p'} \xrightarrow{Q} S_c$.
Two transformations replace an existing step in the plan.
5. Alter-plan:side-effect: In this case the failure is a link $S_p \xrightarrow{Q} S_c$ which is threatened by another step S_t whose postcondition ¬Q is not involved in another link. The repair is to replace S_t with another step, S_t′, that doesn't have ¬Q as a postcondition.
6. Alter-plan:precondition: This failure is a step S_c which either has an open precondition Q or whose precondition is supported by a threatened link. The repairing transformation replaces S_c with a new step that does not have Q as a precondition.
These transformations have the best potential for providing spa with search-control heuristics. Both of these repairs make a replacement (retract followed by refine) to a link in the middle of a causal network. Recall, however, that spa only makes changes to the \"fringe\" of the network: spa only retracts decisions that have no other decisions depending on them. For example, consider the following elaboration of the Alter-plan:side-effect example above. Suppose that the current plan contains two additional decisions: the decision to establish a causal link $S_t \xrightarrow{R} S_u$ (this is what caused the inclusion of S_t to begin with) and also a decision to protect this link from another threatening step S_k. Since the latter choice depends on the very existence of the link, the decision to add S_t to the plan as support for $S_t \xrightarrow{R} S_u$ cannot be retracted until the decision to protect it has been retracted. Emulation of the Alter-plan:side-effect and Alter-plan:precondition transformations would result in n+1 spa retract operations followed by n+1 refines, where n is the number of dependent decisions in the causal network.
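To make the decomposition concrete, here is a hypothetical sketch of such a replacement macro expressed over assumed retract/refine callbacks (spa itself provides no such facility, as noted below): peel off the n dependent decisions to bring the target to the fringe, retract and replace the target, then replay the dependents.

```python
def replace_step_macro(plan, target, dependents_of, retract_one, refine_with,
                       replacement):
    """Emulate Alter-plan:side-effect / Alter-plan:precondition as n+1
    retractions followed by n+1 refinements. All callbacks are assumed;
    `dependents_of` must yield currently retractable (fringe) decisions."""
    undone = []
    while True:
        deps = dependents_of(plan, target)
        if not deps:
            break
        decision = deps[0]
        plan = retract_one(plan, decision)      # retractions 1..n
        undone.append(decision)
    plan = retract_one(plan, target)            # retraction n+1: the target itself
    plan = refine_with(plan, replacement)       # refinement 1: the substitute decision
    for decision in reversed(undone):           # refinements 2..n+1: replay dependents,
        plan = refine_with(plan, decision)      # cf. derivational analogy
    return plan
```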
In the current spa implementation there is no facility for this type of macro operator, but these chef transformations suggest the utility of storing the sequence of decisions retracted in the course of bringing S_c to the fringe, and then replaying these decisions with a process similar to derivational analogy (Carbonell, 1983; Veloso & Carbonell, 1993; Veloso, 1992).
One repair changes the plan's variable-binding constraints.
7. Alter-item: A new object is substituted to eliminate unwanted features while maintaining desired characteristics.
This repair can be used to correct a number of spa failures: threatened links, inconsistent constraints, and the inability to support an open condition (due to unresolvable threats). spa could effect the repair by retracting the decision that added the constraints (most likely the addition of a causal link) and refining a similar decision that binds a new object.
Three transformations modify the plan's temporal constraints, reordering existing steps.
8. Alter-placement:after: This repair corresponds to promotion and requires no retraction.
9. Alter-placement:before: This repair corresponds to demotion and also requires no retraction.
10. Reorder: The order in which two steps are to be run is reversed. This can be accomplished by retracting the decision that added the original ordering and asserting the opposite ordering.
This analysis of chef aids our understanding of transformational planners in two ways. First, it clarifies chef's operation, providing a simple explanation of its repair strategies and showing what sorts of transformations it can and cannot accomplish. Second, it lays the groundwork for incorporating chef's strategies into spa's adaptation algorithm in the form of control policies (Section 7.4)." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "The completeness of chef", "publication_ref": [], "table_ref": [], "text": "One result of analyzing chef's repair strategies in spa's framework is a demonstration that chef's particular set of repairs is incomplete; that is, there are some combinations of library plans and input problems for which chef will be unable to generate a suitable sequence of repairs. Consider, for example, the causal structure shown in Figure 4.
Assume that ordering constraints restrict the plan steps to the figure's left-to-right ordering; in other words, suppose that the only consistent linearization of this plan starts with S_a, then S_t, then S_b, and so on. The arrows denote causal links, but only two links have been labeled with the proposition produced (the others are irrelevant). Since S_t deletes P and S_b requires it, it is clear that S_t threatens $S_a \xrightarrow{P} S_b$. Since S_u consumes ¬P, both P and ¬P are useful effects and the threat must match one of chef's desired-effect TOPs. In fact, Figure 4 is a classic example of the blocked-precondition TOP, which has only two repair strategies: Recover and Alter-plan:precondition. In particular, chef is forbidden from trying the Alter-plan:side-effect repair since the threat results from a desired effect (for S_u), not a side effect. This means that chef will never consider replacing S_t with a step that doesn't delete P, even though that may be the only way to achieve a working plan. To see that this transformation is capable of resulting in a working plan, note that the choice of S_u to support the goal may have been incorrect.
In other words, it may be possible to replace S_u with another step that does not require ¬P, which would make the failure a side-effect failure instead of a desired-effect failure, and would enable S_t's replacement.
What are the implications of this result? Probably chef's incompleteness is of minor consequence, especially since that project's goal was to produce a heuristically adequate set of indexing and transformation strategies rather than a formally verifiable algorithm. An analysis like this is nonetheless instructive since it makes precise what tradeoffs chef's algorithm makes. It can be instructive to ask why a particular algorithm is unsound, incomplete, or unsystematic, and what advantages in expressive power or expected performance are gained by sacrificing some formal property. We believe that an algorithm's formal properties provide one of a number of ways to understand the algorithm's behavior, but do not constitute the ultimate standard by which an algorithm's value should be judged.
We next turn to the topic of how to use the chef repair strategies within the spa framework to guide the adaptation algorithm." }, { "figure_ref": [], "heading": "chef transformations as spa heuristics", "publication_ref": [], "table_ref": [], "text": "At the highest level, chef and spa operate in very different ways. chef starts with a complete plan that fails to satisfy the goal and uses transformations to generate a new complete plan. chef admits no notion of a partial plan and no explicit notion of retracting a commitment. Contrast this with the approach taken by spa, which can retract any previous planning decision, resulting in an incompletely specified plan. Thus, to endow spa with search-control heuristics corresponding to chef's transformations, we need to chain together spa's local refine/retract decisions to effect a \"jump\" from one area of the plan space to another.
The simplest way of giving spa this capability is to reformulate spa's top-level control loop from a breadth-first exploration of the space of plans (using a queue or priority queue) to a depth-first or iterative-deepening depth-first search (using a stack). In such a scheme RefinePlan would no longer enqueue all the new plans returned by CorrectFlaw; instead it would choose the \"best\" successor plan (using some heuristic ranking information) and explore it, leaving the alternates on the stack for later exploration if backtracking proved necessary. RetractRefinement would do likewise with the retracted node's siblings. This modification to spa's top-level control loop eliminates the need for a global plan-evaluation heuristic, using instead the following four hooks for heuristic control knowledge (a sketch of the resulting loop follows the list):
1. When the RetractRefinement procedure is given a plan to retract, heuristic information is brought to bear to decide which decision to retract.
2. After RetractRefinement generates its new plans it uses heuristics to choose whether to continue retracting constraints from the parent or whether to refine a child (and if it chooses the latter, which sibling to refine).
3. RefinePlan likewise uses heuristic information to determine which open condition or threatened link should be addressed first.
4. After RefinePlan generates its successor plans it uses heuristics to select which sibling to continue refining.
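A sketch of this depth-first variant with the four hooks exposed as parameters (the hook signatures are ours for illustration, not spa's actual interface):

```python
def adapt_depth_first(fitted_plan, refine, retract, is_complete,
                      pick_retraction,    # hook 1: which decision to retract
                      order_entries,      # hook 2: retract parent vs. refine a sibling
                      pick_flaw,          # hook 3: which flaw to address
                      rank_siblings):     # hook 4: which sibling to refine first
    """Depth-first spa variant: a stack replaces the breadth-first queue and
    heuristic hooks decide what to explore next (backtracking via the stack)."""
    stack = [(fitted_plan, "up"), (fitted_plan, "down")]
    while stack:
        plan, direction = stack.pop()
        if is_complete(plan):
            return plan
        if direction == "down":
            kids = refine(plan, pick_flaw(plan))
            # rank_siblings returns worst first, so the best child is popped next
            stack.extend((k, "down") for k in rank_siblings(kids))
        else:
            parent, siblings = retract(plan, pick_retraction(plan))
            entries = [(parent, "up")] + [(s, "down") for s in siblings]
            # order_entries likewise places the preferred entry on top of the stack
            stack.extend(order_entries(entries))
    return None
```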
Consider the operation of this depth-first variant of spa, given the initial fitted plan tagged both up and down. Rule 2 applies, since this choice requires deciding between retraction and extension. We could encode chef's repair strategies by making Rule 2 examine the current plan's causal structure, use that structure to choose an appropriate TOP, and choose a repair strategy. As described in the previous sections, each repair strategy can be encoded as a macro operator of refines and retracts; these could be written into a globally accessible memory area and \"read off\" by subsequent rules. A Recover repair might expand to a two-step sequence: retract link, refine link. Rule 2 would choose to retract the fitted plan, then Rule 1 would choose the troublesome link to be retracted, then Rule 2 would choose the child corresponding to adding the step specified by the Recover repair. At this point, the macro operator would have been completely executed.
Since this new control structure uses only the standard spa plan-modification operators and only returns when the sets of open conditions and threatened links are null, soundness is maintained. Similarly, as long as depth-first iterative-deepening search is used, this approach preserves spa's completeness. Systematicity is violated by the use of iterative-deepening search, however, and there is another problem with systematicity under this approach as well: multiple repairs cannot necessarily be performed in sequence. The latter problem stems from the fact that all chef repairs involve refines and most involve retracts followed by refines. Yet the only plans returned by a call to RefinePlan are tagged down, and thus they cannot have a transformation involving retraction applied to them (without violating systematicity). There appear to be several possible solutions to this problem:
Delay attempting any repairs that do not involve retraction, such as Alter-placement:after and Alter-placement:before, until another repair that does retract has been applied.
Perform all retractions initially, before trying any extension adaptations.
Ignore the up and down tags and allow both extension and retraction at any node. While this approach sacrifices systematicity, the hope is that the advantages of search control directed by chef-style transformation will offset the increased size of the overall search space. In any case, the approach still guarantees completeness." }, { "figure_ref": [], "heading": "Extending the analysis", "publication_ref": [ "b29", "b25", "b24" ], "table_ref": [], "text": "This section can only sketch the possibilities for integrating the ideas of transformational planners into the spa framework. Future research will implement these ideas and test to see whether they work as the previous section suggests. An implementation would also allow ablation studies evaluating the relative utility of different repair heuristics. We suspect that Alter-plan:side-effect and Alter-plan:precondition would provide the greatest guidance, but we will test this belief.
It would also be interesting to duplicate our analysis of chef for other transformational planners. We believe this would be a straightforward exercise in many cases. For example, the first step that gordius (Simmons, 1988) takes when debugging a plan is to build a causal structure like the one spa builds. Since gordius (like chef) uses a rich set of repair heuristics that match faulty causal structures, we suspect that they can be decomposed into spa-like primitives as were chef's. One difficulty in this analysis would concern gordius's emphasis on actions with metric effects.
Since spa's strips representation does not allow conditional effects (nor effects computed by arithmetic functions), a first step would be to add spa-style retraction to the ucpop (Penberthy & Weld, 1992) or zeno (Penberthy & Weld, 1993) planners. While ucpop handles conditional effects and universal quantification, it does not match gordius in expressiveness. zeno, however, handles metric effects and continuous change." }, { "figure_ref": [], "heading": "Empirical Study", "publication_ref": [ "b16", "b16" ], "table_ref": [], "text": "We had two goals in conducting empirical studies:
1. to make more precise the nature and extent of speedup that could be realized by using library-refit planning, and
2. to compare spa to priar (Kambhampati & Hendler, 1992), which pursues similar ideas within a different planning framework.
The work on priar closely parallels our own: the key idea in both cases is that a generative planning algorithm can be used to adapt library plans, provided that (1) the planner keeps some record of the reasons for its choices, and (2) the planner can retract as well as make refinement choices. Since priar and spa share use of a strips-like representation, we were able to replicate the experiments undertaken by Kambhampati and Hendler (1992) and compare our results with theirs. (See Section 9 for a discussion of the differences between the two systems.)" }, { "figure_ref": [ "fig_5" ], "heading": "Problem statement", "publication_ref": [ "b14", "b16" ], "table_ref": [], "text": "First some background: the priar experiments use two general classes of block-stacking problems, named xBS and xBS1, where x is an integer (ranging from 3 to 12) designating the number of blocks involved in that problem.
The first class, e.g. 3BS, involves an initial configuration in which all the blocks are on the table and clear, and the goal is to build a stack of height x. The goal in the xBS1 problems is also to build a stack of height x, but not all the blocks are clear on the table initially in these problems: some blocks can start out stacked on other blocks. Figure 5 shows initial and final states for two selected problems (complete specifications for the nBS1 problems can be found elsewhere (Hanks & Weld, 1992; Kambhampati & Hendler, 1992)).
The priar experiments involved comparing the planner's performance on a problem when the plan was generated from scratch with its performance when the solution to a smaller problem was used as a library plan. 4BS → 8BS and 3BS → 5BS1 are two example experiments. For example, 3BS → 5BS1 involves comparing the time required to generate a plan for solving the 5BS1 problem from scratch with the time required for solving the 5BS1 problem starting with a solution for 3BS.
Note that these experiments involve the adaptation process only; the problem of selecting an appropriate library plan was not considered." }, { "figure_ref": [], "heading": "Representation language", "publication_ref": [ "b16", "b16" ], "table_ref": [], "text": "We tried to imitate priar's representation language as closely as possible: both representations have two predicates, ON and CLEARTOP, and two primitive actions, PUT-BLOCK-ON-BLOCK and PUT-BLOCK-ON-TABLE. priar uses a hierarchical representation, including non-primitive actions expressing concepts like \"to get A on B, first generate a plan to clear A, then generate a plan to clear B, then execute the (primitive) PUT-BLOCK-ON-BLOCK action.\" spa's representation consists only of descriptions for the two primitive actions.
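For concreteness, here is one way the two primitive actions might be written down in a simple STRIPS-style Python encoding. This is our own rendering for illustration: the predicate and action names follow the text, but the exact syntax of spa's action descriptions (including inequality constraints such as (<> ?x TABLE)) is not reproduced.

```python
# Each operator: parameters, preconditions, add list, delete list.
PUT_BLOCK_ON_BLOCK = {
    "params": ("?x", "?y", "?z"),       # move block ?x from ?z onto block ?y
    "pre":  [("ON", "?x", "?z"), ("CLEARTOP", "?x"), ("CLEARTOP", "?y")],
    "add":  [("ON", "?x", "?y"), ("CLEARTOP", "?z")],
    "del":  [("ON", "?x", "?z"), ("CLEARTOP", "?y")],
}

PUT_BLOCK_ON_TABLE = {
    "params": ("?x", "?z"),             # move block ?x from ?z onto the table
    "pre":  [("ON", "?x", "?z"), ("CLEARTOP", "?x")],
    "add":  [("ON", "?x", "TABLE"), ("CLEARTOP", "?z")],
    "del":  [("ON", "?x", "?z")],
}
```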
The closest analogue in spa to hierarchical domain-specific knowledge is the notion of search-control information: application-supplied functions that determine which node in the graph of partial plans to consider next, what actions to introduce, in what order, how preconditions are to be achieved, and so on.
The domain theory presented in (Kambhampati & Hendler, 1992, Appendix B) also mentions pyramids and blocks, as well as various rules, like the rule that nothing could be ON a pyramid. Since no pyramids figured in the experiments presented in (Kambhampati & Hendler, 1992, Section 7), we omitted them from our representation. priar's representation also includes several domain axioms, e.g. one that defines CLEARTOP as the absence of one block ON another. spa does not provide for domain axioms, so we incorporated that information into the action and problem definitions." }, { "figure_ref": [], "heading": "Control information", "publication_ref": [ "b14", "b21", "b14", "b8", "b5" ], "table_ref": [], "text": "There is no obvious correspondence between priar's hierarchical plan representation and spa's control knowledge, so the question immediately arose as to what control information we should provide in running the experiments. spa can exploit domain-dependent control information in three places:
1. to decide how to match objects in the (given) library plan against the objects in the input problem's initial and goal forms,
2. to decide which partial plan to consider next, and
3. to decide which part of the partial (incomplete) plan to work on next.
The first piece of domain-dependent control information involves how to fit the library plan to the new problem, which involves choosing constants in the input problem to substitute for constants in the library plan. We adopted the same policy as did Kambhampati and Hendler: choose the substitution that maximizes the number of input goal forms that actually appear in the transformed library plan, and in the case of a tie choose the substitution that maximizes the number of initial conditions in the input problem that appear in the transformed library plan.
The problem is that finding the optimal mapping can be quite expensive: if the input problem mentions n objects and the library problem mentions k objects, finding the best mapping may involve examining all $n^k$ possibilities. The analysis in (Hanks & Weld, 1992) demonstrates the potential cost of mapping using the example of solving the 8BS problem with successively larger library plans. The complexity of computing the optimal mapping grows exponentially with the size of the library plan, to the point where solving the 8BS problem using a solution to exactly the same problem as a library plan is actually more expensive than using a smaller library plan (even though it requires no adaptation at all). We note that this is similar to the utility problem addressed by Minton in the context of EBL (Minton, 1988). In subsequent experiments we used a heuristic, domain-dependent, linear-time mapping algorithm, described in (Hanks & Weld, 1992).
A control policy for the second decision requires shifting from breadth-first search to a best-first strategy. The longer paper discusses our ranking function in detail.
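The switch itself is mechanical: replace the FIFO queue with a priority queue keyed by a ranking function. A sketch (`rank` is an assumed application-supplied heuristic, not the ranking function discussed in the longer paper):

```python
import heapq
import itertools

def best_first_frontier(rank):
    """Priority-queue frontier: entries with lower rank(plan, direction) are
    explored first; a counter breaks ties so plans need not be comparable."""
    heap, counter = [], itertools.count()
    def push(plan, direction):
        heapq.heappush(heap, (rank(plan, direction), next(counter), plan, direction))
    def pop():
        _rank, _tie, plan, direction = heapq.heappop(heap)
        return plan, direction
    return push, pop
```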
To control decisions of the third sort (what flaw in the current plan to address next) we built a search-control heuristic that essentially implemented a policy of \"build stacks from the bottom up.\" We acknowledge that the addition of domain-specific heuristics complicates the comparison between spa's performance and that of priar, but we argue that this addition is \"fair\" because priar used heuristic information itself. In priar's case the domain-specific knowledge took the form of a set of task-reduction schemata (Charniak & McDermott, 1984) rather than ranking functions, but both systems use heuristic control knowledge. Unfortunately, it is nearly impossible to assess the correspondence between the two forms of domain knowledge, but preliminary experiments, for example in (Barrett & Weld, 1994b), show that task-reduction schemata can provide planner speedup that is just as significant as that obtained by spa's ranking functions." }, { "figure_ref": [], "heading": "Problem", "publication_ref": [], "table_ref": [], "text": "Table 1 column headings: Problem; Proc. time (msec); Speedup pctg." }, { "figure_ref": [ "fig_6" ], "heading": "Comparative results", "publication_ref": [ "b16", "b18", "b13" ], "table_ref": [ "tab_4", "tab_4" ], "text": "The first three columns of Table 1 show how spa's performance compares to priar's in absolute terms. (All performance numbers for priar appear in Kambhampati & Hendler, 1992, Section 7.) We caution readers against using this information to draw any broad conclusions about the relative merits of the two approaches: the two programs were written in different languages, run on different machines, and neither was optimized to produce the best possible raw performance numbers. (See Langley & Drummond, 1990, and Hanks, Pollack, & Cohen, 1993, for a deeper discussion of the empirical evaluation of planning programs.) Nonetheless we note that the absolute time numbers are comparable: spa tended to work faster on smaller problems, priar better on larger problems, but the data do not suggest that either program is clearly superior.
Kambhampati and Hendler assess priar's performance relative to its own behavior in generating plans from scratch. This number, called the savings percentage, is defined to be $\frac{s - r}{s}$, where s is the time required to solve a problem, e.g. 12BS1, from scratch and r is the time required to solve that same problem using a smaller library plan, e.g. one for 3BS. The fourth and fifth columns of Table 1 compare spa and priar on this metric.
The question therefore arises as to why priar's speedup numbers are consistently so much larger in magnitude than spa's, particularly on larger problems, even though absolute performance is not significantly better. The answer has to do with the systems' relative performance in planning from scratch. As Figure 6 demonstrates, priar's performance degrades much faster than spa's on generative tasks. We have no explanation for priar's behavior, but its effect on the savings-percentage number is clear: these numbers are high because priar's performance on generative tasks degrades much more quickly than does its behavior on refit tasks. Just to emphasize this relationship: for the 3BS → 12BS1 problem priar's processing time is 65% of spa's. For 5BS → 12BS1 it is 52% of spa's. For 10BS → 12BS1 the number is 43%, but for generating 12BS1 from scratch priar runs about 12 times slower.
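The effect is easy to reproduce numerically. The figures below are illustrative, not measurements from Table 1: two systems with identical refit times earn very different savings percentages when their from-scratch times differ.

```python
def savings_percentage(s, r):
    """Savings percentage (s - r) / s, per the definition above."""
    return (s - r) / s

# Same hypothetical refit time r, different generative (scratch) times s:
for label, s in (("fast generator", 2_000), ("slow generator", 24_000)):
    print(f"{label}: savings = {savings_percentage(s, r=1_000):.0%}")
# fast generator: savings = 50%; slow generator: savings = 96%
```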
This result points out that one must use extreme caution in evaluating any system based on these relative speedup figures, since they are actually measuring only the relationship between two separate components of a single system. It also points out that the problem of deciding when to generate plans from scratch instead of adapting them must take into account the effectiveness of the underlying generation mechanism." }, { "figure_ref": [], "heading": "Summary", "publication_ref": [ "b17" ], "table_ref": [], "text": "Our two goals were to establish a systematic relationship between library use and problem-solving effort, and to compare our system's performance to that of the similar priar. In the first case we note that on certain problems, most notably the nBS → mBS refits, there is a regular and systematic relationship between the fit between library and input problems (measured roughly by the difference between n and m) and the time required to solve the problem. We should note, however, that the simple nature of the domain and the problems admits a particularly obvious measure of \"degree of fit,\" so these results may not extend to less regular problem-solving scenarios. In the second case we demonstrated that the performance of the two systems was roughly comparable both in absolute terms and in terms of the relative value of refitting.
We must once again advise caution in interpreting these results. Although we believe they provide a preliminary validation of spa's algorithm both in absolute terms and compared to priar's hierarchical approach, the fact is that the experiments were conducted in a very regular and simple problem domain, which allowed us to characterize the size of a problem or plan using a single number, and further allowed us to characterize the extent to which a library plan would be suitable for use in solving a larger input problem by comparing the numbers associated with the two plans.
Future work must therefore concentrate on two areas: the whole problem of how to retrieve a good plan from the library (which both spa and priar ignore), and the problem of assessing, in a realistic domain, the \"degree of fit\" between a library plan and an input problem. A similar analysis appears in (Koehler, 1994)." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b16", "b7", "b20", "b30", "b12", "b3", "b31", "b12", "b3", "b10" ], "table_ref": [], "text": "We have already mentioned the work on priar (Kambhampati & Hendler, 1992) as close to our own, in particular its use of the generative planner to provide library plans and dependencies that can later be retracted. priar and spa also share the same strips-like action representation. The main difference between the two approaches is the underlying planning algorithm: spa uses a constraint-posting technique similar to Chapman's (1987) tweak, as modified by McAllester and Rosenblitt (1991), whereas priar uses a variant of nonlin (Tate, 1977), a hierarchical planner.
priar's plan representations, and thus the algorithms that manipulate them, are more complicated than spa's. There are three different types of validations (relationships between nodes in the plan graph), for example (filter condition, precondition, and phantom goal), as well as different \"reduction levels\" for the plan that represent a hierarchical decomposition of its structure, along with five different strategies for repairing validation failures.
Contrast this representation with spa's plan representation consisting of causal links and step-order constraints.
priar's more complicated planning and validation structure makes it harder to evaluate the algorithm formally. Kambhampati and Hendler (1992, p. 39) prove a soundness result and argue informally for a property like completeness: \"we claim that our framework covers all possible modifications for plans that are describable within the action representation described in this paper.\" The exact relationship between this property and our completeness property is not clear.
The work on adaptation for case-based planning has mainly been concerned with finding good strategies for applying adaptations. In Section 7 we discussed chef (Hammond, 1990) in detail, analyzing it in terms of spa's adaptation primitives. Since spa uses the strips representation and cannot represent simultaneous actions or actions with temporal extent, we were only able to consider ten of chef's seventeen repair strategies. However, we consider it interesting that nine of these transformations can be encoded simply as either one or two chained spa primitives.
Section 2 also discussed plexus (Alterman, 1988) and NoLimit (Veloso & Carbonell, 1993). Veloso (1992) also describes a mechanism by which case memory is extended during problem solving, including learning techniques for improving the similarity metric used in library retrieval. These issues have been completely ignored in our development of spa, but it is possible that they could be added to our system. Some case-based planning work, for example by Hammond (1990) and Alterman (1988), also addresses situations in which the planner's domain model is incomplete and/or incorrect. Both of these systems generate a plan using a process of retrieval and adaptation, then execute the plan. If execution fails (although the model incorrectly predicted that it would succeed), these systems try to learn the reasons why, and store the failure in memory so the system does not make the same mistake again. spa sidesteps this challenging problem, since it addresses only the problem of ahead-of-time plan generation, not the problem of execution and error recovery. The xii planner (Golden, Etzioni, & Weld, 1994) uses a planning framework similar to spa's, developing a representation and algorithm for generative planning in the presence of incomplete information; the xii planner still assumes that what partial information it has is correct, however.
We mentioned in Section 2 that our goals in building the spa system were somewhat different from most work in adaptive planning: our intent is that as a formal framework spa can be used to analyze case-based planners to understand how they succeed in particular problem domains. As an implemented system we hope that spa can be used to build effective problem solvers. The key is likely to be the addition of domain-dependent case-retrieval algorithms and heuristic control strategies." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We have presented the spa algorithm, an approach to case-based planning based on the idea that the adaptation of previous planning episodes (library plans) is really a process of appropriately retracting old planning decisions and adding new steps, links, and constraints in order to make the library plan skeleton solve the problem at hand.
The algorithm is simple, and has nice formal properties: soundness, completeness, and systematicity.
It also makes clear the distinction between domain-independent algorithms and the application of domain-dependent control knowledge. As such it is an excellent vehicle for studying the problem of case-based planning in the abstract and for analyzing domain-dependent strategies for plan repair.
Our experimental results established a systematic relationship between the computational effort required and the extent to which a library plan resembles the input problem, and also compared our system's performance to that of the similar priar. The system's performance is encouraging, but we noted that the results should be interpreted within the context of the simple and regular problems on which they were obtained." }, { "figure_ref": [], "heading": "On the formal properties of algorithms", "publication_ref": [], "table_ref": [], "text": "We should comment briefly on the implications of our algorithm's formal properties. Having properties like completeness and systematicity does not necessarily make an algorithm good, nor does the absence of these properties necessarily make an algorithm bad. The value of a framework for planning must ultimately be measured in its ability to solve interesting problems: to provide coverage of an interesting domain, to scale to problems of reasonable size, and so on. Soundness, completeness, and systematicity are neither necessary nor sufficient to build an effective planner.
The properties do, however, help us to understand planning algorithms, which is equally important. What is it about chef that made it effective in its cooking domain? What is the essential difference between the priar and the spa frameworks? Formal analysis of an algorithm can provide insight into what makes it effective. We showed that chef's transformation strategies come at the cost of an incomplete algorithm, but understanding what parts of the search space they exclude can help us better understand how they are effective.
Formal properties can also act as an idealization of a desirable property that is more difficult to evaluate. Few would argue, for example, that systematicity is necessary for effective performance. On the other hand, it is obviously important to make sure that a plan-adaptation algorithm does not cycle, and we can at least guarantee that a systematic algorithm will not cycle over partial plans. So systematicity might be too strong a requirement for an algorithm, but at the same time it provides an end point in a spectrum." }, { "figure_ref": [], "heading": "Future work", "publication_ref": [ "b29", "b23", "b25" ], "table_ref": [], "text": "Our work raises many questions that suggest avenues for future research:
Although there are many hooks for domain-dependent information in our adaptation algorithm, we have not seriously explored the quality of the search-control interface. How convenient is it to specify heuristics to guide adaptation in a more realistic domain?
Our analysis of transformational planning systems (Section 7) is preliminary. We hope to implement the approach described there and determine which of chef's transformational repairs provide the greatest computational benefit. It would also be interesting to perform the same type of analysis on gordius (Simmons, 1988) or other transformational planners.
The interplay between decisions made during the plan-retrieval process and the plan-adaptation process has not been well explored.
We need to confront the issues faced by all case-based planners: what makes a good plan to retrieve, and what is the best way to fit that plan for the plan adapter? Our analysis (Section 6.1) is an interesting start, but much is left to consider.
One of the problems with the approach advocated in this paper is its dependence on the strips action representation. It would be especially interesting to extend our ideas to a more expressive language (for example, something like adl (Pednault, 1988), by adding retraction to ucpop (Penberthy & Weld, 1992), or the language used by gordius).
The planning task is closely related to that of design (both are synthesis activities). We may be able to generalize our algorithm to address case-based design of lumped-parameter devices using ideas from system dynamics (Williams, 1990; Neville & Weld, 1992)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research was improved by discussions with Tony Barrett, Paul Beame, Denise Draper, Oren Etzioni, and Rao Kambhampati. Denise Draper cleaned up some of the code, infuriating us, but producing an improved system. David Madigan helped with the empirical analysis. Thanks also to Steve Minton, Alicen Smith, Ying Sun, and the anonymous reviewers, whose suggestions improved the presentation of this paper substantially. This work was funded in part by National Science Foundation Grants IRI-8902010, IRI-8957302, IRI-9008670, and IRI-9303461, by Office of Naval Research Grants 90-J-1904 and N00014-94-1-0060, and by a grant from the Xerox Corporation." } ]
[ { "authors": "", "journal": "Williams", "ref_id": "b0", "title": "parameter devices using ideas from system dynamics", "year": "1990" }, { "authors": "Weld Neville", "journal": "", "ref_id": "b1", "title": "References", "year": "1992" }, { "authors": "", "journal": "Morgan Kaufmann", "ref_id": "b2", "title": "Readings in Planning", "year": "1990" }, { "authors": "R Alterman", "journal": "Cognitive Science", "ref_id": "b3", "title": "Adaptive planning", "year": "1988" }, { "authors": "A Barrett; D Weld", "journal": "Articial Intelligence", "ref_id": "b4", "title": "Partial order planning: Evaluating possible eciency gains", "year": "1994" }, { "authors": "A Barrett; D Weld", "journal": "", "ref_id": "b5", "title": "Task-decomposition via plan parsing", "year": "1994" }, { "authors": "J Carbonell", "journal": "", "ref_id": "b6", "title": "Derivational analogy in problem solving and knowledge acquistion", "year": "1983" }, { "authors": "D Chapman", "journal": "Articial Intelligence", "ref_id": "b7", "title": "Planning for conjunctive goals", "year": "1987" }, { "authors": "E Charniak; D Mcdermott", "journal": "Addison-Wesley Publishing Company", "ref_id": "b8", "title": "Introduction to Articial Intelligence", "year": "1984" }, { "authors": "D Gentner", "journal": "IEEE", "ref_id": "b9", "title": "A structure mapping approach to analogy and metaphor", "year": "1982" }, { "authors": "K Golden; O Etzioni; D Weld", "journal": "", "ref_id": "b10", "title": "Omnipotence without omniscience: Sensor management in planning", "year": "1994" }, { "authors": "K Hammond", "journal": "Academic Press", "ref_id": "b11", "title": "Case-Based Planning: Viewing Planning as a Memory Task", "year": "1989" }, { "authors": "K Hammond", "journal": "Articial Intelligence", "ref_id": "b12", "title": "Explaining and repairing plans that fail", "year": "1990" }, { "authors": "S Hanks; M E Pollack; P R Cohen", "journal": "AI Magazine", "ref_id": "b13", "title": "Benchmarks, testbeds, controlled experimentation, and the design of agent architectures", "year": "1993" }, { "authors": "S Hanks; D S Weld", "journal": "", "ref_id": "b14", "title": "The systematic plan adaptator: A formal foundation for case-based planning", "year": "1992" }, { "authors": "S Kambhampati", "journal": "", "ref_id": "b15", "title": "On the utility of systematicity: Understanding the tradeos between redundancy and commitment in partial-order planning", "year": "1993" }, { "authors": "S Kambhampati; J Hendler", "journal": "Articial Intelligence", "ref_id": "b16", "title": "A validation structure based theory of plan modication and reuse", "year": "1992" }, { "authors": "J Koehler", "journal": "AAAI", "ref_id": "b17", "title": "Avoiding pitfalls in case-based planning", "year": "1994" }, { "authors": "P Langley; M Drummond", "journal": "", "ref_id": "b18", "title": "Toward an Experimental Science of Planning", "year": "1990" }, { "authors": "Morgan Kaufman", "journal": "", "ref_id": "b19", "title": "", "year": "" }, { "authors": "D Mcallester; D Rosenblitt", "journal": "", "ref_id": "b20", "title": "Systematic nonlinear planning", "year": "1991" }, { "authors": "S Minton", "journal": "", "ref_id": "b21", "title": "Quantitative results concerning the utility of explanation-based learning", "year": "1988" }, { "authors": "D Neville; D Weld", "journal": "", "ref_id": "b22", "title": "Innovative design as systematic search", "year": "1992" }, { "authors": "E Pednault", "journal": "Computational Intelligence", "ref_id": "b23", "title": "Synthesizing 
Penberthy, J. S., & Weld, D. S. (1993). A new approach to temporal planning (preliminary report).
Penberthy, J., & Weld, D. (1992). UCPOP: A sound, complete, partial-order planner for ADL.
Peot, M., & Smith, D. (1993). Threat-removal strategies for partial-order planning.
Schank, R. (1982). Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge University Press.
Schank, R., & Abelson, R. (1977). Scripts, Plans, Goals, and Understanding. Erlbaum.
Simmons, R. (1988). A theory of debugging plans and interpretations.
Tate, A. (1977). Generating project networks.
Veloso, M., & Carbonell, J. (1993). Derivational analogy in PRODIGY: Automating case acquisition, storage, and utilization. Machine Learning.
Veloso, M. (1992). Learning by Analogical Reasoning in General Problem Solving.
Veloso, M. (1994). Flexible strategy learning: Analogical replay of problem solving episodes.
Williams, B. (1990). Interaction-based invention: Designing novel devices from first principles.
Yang, H., & Fisher, D. (1992). Similarity-based retrieval and partial reuse of macro-operators.
[ { "formula_coordinates": [ 6, 238.8, 328.56, 166.56, 11.52 ], "formula_id": "formula_0", "formula_text": "(<> ?x TABLE) (<> ?y TABLE)))" }, { "formula_coordinates": [ 9, 95.53, 608.16, 213.36, 43.8 ], "formula_id": "formula_1", "formula_text": "L := a new link S i Q !S j 4 R := a new reason [establish L] 5" }, { "formula_coordinates": [ 13, 95.52, 546.96, 219.6, 25.56 ], "formula_id": "formula_2", "formula_text": "NewPlan := AdaptationLoop(LibPlan) 4" }, { "formula_coordinates": [ 15, 159.36, 445.44, 133.68, 18.96 ], "formula_id": "formula_3", "formula_text": "S i Q !S j 2 P 1 i R i Q !R j 2 P 2" }, { "formula_coordinates": [ 16, 117.36, 184.8, 94.08, 18.96 ], "formula_id": "formula_4", "formula_text": "[establish S i Q !S j ]." }, { "formula_coordinates": [ 17, 120.96, 88.32, 299.52, 65.76 ], "formula_id": "formula_5", "formula_text": "P if 1. R is of the form [protect S i Q !S j ] for some link S i Q !S j , or 2. R is of the form [establish S i Q !S j ] for some link S i Q !S" }, { "formula_coordinates": [ 18, 95.52, 358.08, 264, 62.28 ], "formula_id": "formula_6", "formula_text": "1 if R is of the form [protect S i Q !S j ; S t ] then 2 F := S i Q !S j ; S t 3 P 0 := a copy of P 4" }, { "formula_coordinates": [ 18, 95.52, 432.96, 259.68, 64.92 ], "formula_id": "formula_7", "formula_text": "6 else if R is of the form [establish S i Q !S j ] then 7 F := Q !S j 8 P 0 := a copy of P 9" }, { "formula_coordinates": [ 24, 275.04, 301.92, 61.44, 15.72 ], "formula_id": "formula_8", "formula_text": "(b + 1) k < b n" }, { "formula_coordinates": [ 24, 277.68, 361.32, 57.84, 24.96 ], "formula_id": "formula_9", "formula_text": "k n < log b+1 b" } ]
A Domain-Independent Algorithm for Plan Adaptation
The paradigms of transformational planning, case-based planning, and plan debugging all involve a process known as plan adaptation: modifying or repairing an old plan so that it solves a new problem. In this paper we provide a domain-independent algorithm for plan adaptation, demonstrate that it is sound, complete, and systematic, and compare it to other adaptation algorithms in the literature. Our approach is based on a view of planning as searching a graph of partial plans. Generative planning starts at the graph's root and moves from node to node using plan-refinement operators. In planning by adaptation, a library plan (an arbitrary node in the plan graph) is the starting point for the search, and the plan-adaptation algorithm can apply both the same refinement operators available to a generative planner and can also retract constraints and steps from the plan. Our algorithm's completeness ensures that the adaptation algorithm will eventually search the entire graph, and its systematicity ensures that it will do so without redundantly searching any parts of the graph.

GENERALIZATION: Generalize the newly created plan and store it as a new case in the library (provided it is sufficiently different from plans currently in the library).

This paper focuses on the adaptation process in the general context of a case-based planning system; however, our adaptation algorithm could be useful for transformational and plan-debugging systems as well. Work in case-based planning has historically been conducted in particular application domains, and has tended to focus on representation rather than algorithmic issues. The research addresses problems like what features of a library plan make good indices for subsequent retrieval, how features of the library plan can suggest effective adaptation strategies, and so on. Our work develops a domain-independent algorithm for plan adaptation, and is therefore complementary: it provides a common platform with which one can analyze and compare the various representation schemes and adaptation strategies, as well as explore in the abstract the potential benefits of the case-based approach to planning. Sections 7 and 8.2 discuss the CHEF (Hammond, 1989) and PRIAR (Kambhampati & Hendler, 1992) systems using our framework, and Section 6.1 characterizes precisely the potential benefits of plan adaptation versus plan generation. This paper presents an algorithm, SPA (the "systematic plan adaptor"), for plan adaptation that is sound, complete, and systematic. Soundness means that the output plan is guaranteed to satisfy the goal, completeness means that the planner will always find a solution plan if one exists (regardless of the library plan provided by the retriever), and systematicity means that the algorithm explores the space of adaptations non-redundantly (in short, it will never consider an adaptation more than once). Systematicity is the trickiest property to guarantee, for two reasons. First, the adapter operates in a space of incomplete plans.¹ Each incomplete plan can expand into an exponential number of completions; systematicity requires that the adaptation algorithm never consider two incomplete plans that share even one completion, whereas completeness requires that every potential completion be considered. Second, plan adaptation requires a combination of retracting previous planning decisions (choice and ordering of plan steps, binding of variables within the action schemas), as well as making new decisions.
Systematicity requires that a decision, once retracted, never be considered again. Our framework for planning by adaptation is based on two premises having to do with the nature of stored plans and how they are manipulated:

A library plan or case is stored as a complete and consistent plan for solving the prior problem. This plan contains the steps and orderings that solved the prior problem, along with additional constraints and dependency information that record why the steps and orderings appear there.

Applying a case to a new problem first involves adjusting the library plan to match the initial and goal conditions of the current problem, a process that produces a consistent but incomplete plan. The adaptation process attempts to complete this plan.

[Footnote 1: An incomplete plan may be partially ordered, may contain partially constrained variables, and may require additional steps or constraints for it to achieve the goal.]
Steve Hanks and Daniel S. Weld
[ { "figure_caption": "amounts to resolving an open condition or resolving a threat: function CorrectFlaw(F, P): List of plans 1 if F is an open precondition then 2 return ResolveOpen(F, P) 3 else return ResolveThreat(F, P)", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Plan renement and retraction as search in plan space.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Plan retraction replaces an up tagged plan with another up plan and several sibling plans tagged down.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "j and (a) P contains no reason of the form [protect S i Q !S j ], and (b) either S i participates in another link S i Q !S x , or S i does not appear in any protected threat of the form [protect S x Q !S y ; S i ], or 3. R is of the form [add-step S k ] and P contains no link of the form S k Q !S.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: chef repairs cannot x desired-eect plan failures with this causal structure.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Two Blocksworld Problems", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: System performance for generative planning", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Comparative performance, spaand priar", "figure_data": "spapriarspapriar3BS!4BS11.72.4 59%40%3BS!5BS14.04.3 50%49%4BS!5BS12.93.2 64%62%4BS!6BS16.911.6 53%34%5BS!7BS111.211.1 58%71%4BS1!8BS118.622.2 55%72%4BS!8BS121.315.4 49%81%5BS!8BS119.210.1 54%87%6BS!9BS130.218.1 53%90%7BS!9BS124.911.4 61%94%4BS!10BS161.752.9 40%87%7BS!10BS140.723.4 61%94%8BS!10BS135.014.5 66%96%3BS!12BS1 133.277.1 18%96%5BS!12BS1 114.051.8 30%97%10BS!12BS1 53.121.2 67%99%", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b22", "b29", "b1", "b26", "b11", "b12", "b4", "b28", "b26", "b12", "b22", "b29", "b7", "b19", "b21", "b22", "b22" ], "table_ref": [], "text": "Reinforcement learning (RL, e.g., Sutton, 1984;Watkins, 1989;Barto, 1992;Sutton, Barto, & Williams, 1991;Lin, 1992Lin, , 1993;;Cichosz, 1994) is a machine learning paradigm that relies on evaluative training information. At each step of discrete time a learning agent observes the current state of its environment and executes an action. Then it receives a reinforcement value, also called a payo or a reward (punishment), and a state transition takes place. Reinforcement values provide a relative measure of the quality of actions executed by the agent. Both state transitions and rewards may be stochastic, and the agent does not know either transition probabilities or expected reinforcement values for any state-action combinations. The objective of learning is to identify a decision policy (i.e., a state-action mapping) that maximizes the reinforcement values received by the agent in the long term. A commonly assumed formal model of a reinforcement learning task is a Markovian decision problem (MDP, e.g., Ross, 1983). The Markov property means that state transitions and reinforcement values always depend solely on the current state and the current action: there is no dependence on previous states, actions, or rewards, i.e., the state information supplied to the agent is su cient for making optimal decisions.\nAll the information the agent has about the external world and its task is contained in a series of environment states and reinforcement values. It is never told what actions to execute in particular states, or what actions (if any) would be better than those which c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved. it actually performs. It must learn an optimal policy by observing the consequences of its actions. The abstract formulation and generality of the reinforcement learning paradigm make it widely applicable, especially in such domains as game-playing (Tesauro, 1992), automatic control (Sutton et al., 1991), and robotics (Lin, 1993). To formulate a particular task as a reinforcement learning task, one just has to design appropriate state and action representation, and a reinforcement mechanism specifying the goal of the task. The main limitation of RL applications is that it is by nature a trial-and-error learning method, and it is hardly applicable in domains where making errors costs much.\nA commonly studied performance measure to be maximized by an RL agent is the expected total discounted sum of reinforcement: E \" 1 X t=0 t r t # ;\n(1) where r t denotes the reinforcement value received at step t, and 0 1 is a discount factor, which adjusts the relative signi cance of long-term rewards versus short-term ones. To maximize the sum for any positive , the agent must take into account the delayed consequences of its actions: reinforcement values may be received several steps after the actions that contributed to them were performed. This is referred to as learning with delayed reinforcement (Sutton, 1984;Watkins, 1989). 
Other reinforcement learning performance measures have also been considered (Heger, 1994; Schwartz, 1993; Singh, 1994), but in this work we limit ourselves exclusively to the performance measure specified by Equation 1.

The key problem that must be solved in order to learn an optimal policy under the conditions of delayed reinforcement is known as the temporal credit assignment problem (Sutton, 1984). It is the problem of assigning credit or blame for the overall outcomes of a learning system (i.e., long-term reinforcement values) to each of its individual actions, possibly taken several steps before the outcomes could be observed. In discussing reinforcement learning algorithms, we will concentrate on temporal credit assignment and ignore the issues of structural credit assignment (Sutton, 1984), the other aspect of credit assignment in RL systems.

Temporal Difference Methods

The temporal credit assignment problem in reinforcement learning is typically solved using algorithms based on the methods of temporal differences (TD). They were introduced by Sutton (1988) as a class of methods for learning predictions in multi-step prediction problems. In such problems prediction correctness is not revealed at once, but more than one step after the prediction was made, though some partial information relevant to its correctness is revealed at each step. This information is available and observed as the current state of a prediction problem, and the corresponding prediction is computed as a value of a function of states.

Consider a multi-step prediction problem where at each step it is necessary to learn a prediction of some final outcome. It could be, for example, predicting the outcome of a game of chess in subsequent board situations, predicting Sunday's weather on each day of the week, or forecasting some economic indicators. The traditional approach to learning such predictions would be to wait until the outcome occurs, keeping track of all predictions computed at intermediate steps, and then, for each of them, to use the difference between the actual outcome and the predicted value as the training error. This is supervised learning, where directed training information is obtained by comparing the outcome with the predictions produced at each step. Each of the predictions is modified so as to make it closer to the outcome.

Temporal difference learning makes it unnecessary to always wait for the outcome. At each step the difference between two successive predictions is used as the training error. Each prediction is modified so as to make it closer to the next one. In fact, TD is a class of methods referred to as TD(λ), where $0 \le \lambda \le 1$ is called a recency factor. Using $\lambda > 0$ allows one to incorporate prediction differences from more time steps, to hopefully speed up learning.

Temporal credit assignment in reinforcement learning may be viewed as a prediction problem. The outcome to predict in each state is simply the total discounted reinforcement that will be received starting from that state and following the current policy. Such predictions can be used for modifying the policy so as to optimize the performance measure given by Equation 1. Example reinforcement learning algorithms that implement this idea, called TD-based algorithms, will be presented in Section 2.2.
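As a toy contrast between the two training regimes just described (hypothetical numbers, not from the paper), the sketch below derives both kinds of errors for one observed prediction sequence:

```python
def supervised_errors(predictions, outcome):
    """Wait for the final outcome, then train every intermediate
    prediction toward it (the traditional approach)."""
    return [outcome - p for p in predictions]

def td_errors(predictions, outcome):
    """TD-style: train each prediction toward its successor, using the
    difference between two successive predictions as the error."""
    targets = predictions[1:] + [outcome]
    return [target - p for p, target in zip(predictions, targets)]

week = [0.5, 0.6, 0.4, 0.9]   # e.g., daily predictions of Sunday's weather
print(supervised_errors(week, outcome=1.0))  # [0.5, 0.4, 0.6, 0.1]
print(td_errors(week, outcome=1.0))          # [0.1, -0.2, 0.5, 0.1]
```

Summed along the sequence, the TD errors telescope to the supervised error of the first prediction, which is one way to see that the two procedures are closely related.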
}, { "figure_ref": [], "heading": "Paper Overview", "publication_ref": [ "b23", "b12", "b28", "b16", "b22", "b29", "b30", "b0" ], "table_ref": [], "text": "Much of the research concerning TD-based reinforcement learning algorithms has concentrated on the simplest TD(0) case. However, experimental results obtained with TD( > 0) indicate that it often allows one to obtain a signi cant learning speedup (Sutton, 1988;Lin, 1993;Tesauro, 1992). It has been also suggested (e.g., Peng & Williams, 1994) that TD( > 0) should perform better in non-Markovian environments than TD(0) (i.e., it should be less sensitive to the potential violations of the Markov property). It is thus important to develop e cient and general implementation techniques that would allow TD-based RL algorithms to use arbitrary . This has been the motivation of this work.\nThe remainder of this paper is organized as follows. In Section 2 a formal de nition of TD methods is presented and their application to reinforcement learning is discussed. Three example RL algorithms are brie y described: AHC (Sutton, 1984), Q-learning (Watkins, 1989;Watkins & Dayan, 1992), and advantage updating (Baird, 1993). Section 3 presents the traditional approach to TD( ) implementation, based on so called eligibility traces, which is criticized for ine ciency and lack of generality. In Section 4 the analysis of the e ects of the TD algorithm leads to the formulation of the TTD (Truncated Temporal Di erences) procedure. The two remaining sections are devoted to experimental results and concluding discussion." }, { "figure_ref": [], "heading": "De nition of TD( )", "publication_ref": [ "b23", "b5" ], "table_ref": [], "text": "When Sutton (1988) introduced TD methods, he assumed they would use parameter estimation techniques for prediction representation. According to his original formulation, states of a prediction problem are represented by vectors of real-valued features, and corresponding predictions are computed by the use of a set of modi able parameters (weights). Under such representation learning consists in adjusting the weights appropriately on the basis of observed state sequences and outcomes. Below we present an alternative formula-tion, adopted from Dayan (1992), that simpli es the analysis of the e ects of the TD( ) algorithm. In this formulation states may be elements of an arbitrary nite state space, and predictions are values of some function of states. Transforming Sutton's original de nition of TD( ) to this alternative form is straightforward.\nWhen discussing either the generic or RL-oriented form of TD methods, we consequently ignore the issues of function representation. It is only assumed that TD predictions or functions maintained by reinforcement learning algorithms are represented by a method that allows adjusting function values using some error values, controlled by a learning rate parameter. Whenever we write that the value of an n-argument function ' for arguments p 0 ; p 1 ; : : :; p n 1 should be updated using an error value of , we mean that '(p 0 ; p 1 ; : : :; p n 1 ) should be moved towards '(p 0 ; p 1 ; : : :; p n 1 ) + , to a degree controlled by some learning rate factor . The general form of this abstract update operation is written as update ('; p 0 ; p 1 ; : : :; p n 1 ; ):\n(2) Under this convention, a learning algorithm is de ned by the rule it uses for computing error values." 
}, { "figure_ref": [], "heading": "Basic Formulation", "publication_ref": [ "b5", "b23", "b5" ], "table_ref": [], "text": "Let x 0 ; x 1 ; : : :; x m 1 be a sequence of m states of a multi-step prediction problem. Each state x t can be observed at time step t, and at step m, after passing the whole sequence, a real-valued outcome z can be observed. The learning system is required to produce a corresponding sequence of predictions P(x 0 ); P(x 1 ); : : :; P(x m 1 ), each of which is an estimate of z.\nFollowing Dayan (1992), let us de ne for each state x:\nx (t) =\n( 1 if x t = x 0 otherwise:\nThen the TD( ) prediction error for each state x determined at step t is given by:\nx (t) = (P(x t+1 ) P(x t )) t X k=0 t k x (k);(3)\nwhere 0 1 and P(x m ) = z by de nition, and the total prediction error for state x determined after the whole observed sequence accordingly is:\nx = m 1 X t=0 x (t) = m 1 X t=0 ( (P(x t+1 ) P(x t )) t X k=0 t k x (k) ) :(4)\nThus, learning at each step is driven by the di erence between two temporally successive predictions. When > 0, the prediction di erence at time t a ects not only P(x t ), but also predictions from previous time steps, to an exponentially decaying degree. 1 1. Alternatively, learning the prediction at step t relies not only on the prediction di erence from that step, but also on future prediction di erences. This equivalent formulation will play a signi cant role in Section 4.\nThere are two possibilities of using such de ned errors for learning. The rst is to compute total errors x for all states x, by accumulating the x (t) errors computed at each time step t, and to use them after passing the whole state sequence to update predictions P(x).\nIt corresponds to batch learning mode. The second possibility, called incremental or on-line learning, often more attractive in practice, is to update predictions at each step t using current error values x (t). It is then necessary to modify appropriately Equation 3, so as to take into account that predictions are changed at each step:\nx (t) = (P t (x t+1 ) P t (x t )) t X k=0 t k x (k);\n(5) where P t (x) designates the prediction for state x available at step t. Sutton (1988) proved the convergence of batch TD(0) for a linear representation, with states represented as linearly independent vectors, under the assumption that state sequences are generated by an absorbing Markov process. 2 Dayan (1992) extended his proof to arbitrary . 3" }, { "figure_ref": [], "heading": "TD( ) for Reinforcement Learning", "publication_ref": [ "b23", "b22", "b5", "b6", "b8", "b8", "b22", "b29", "b30", "b0", "b12", "b4", "b3", "b29", "b0" ], "table_ref": [], "text": "So far, this paper has presented TD as a general class of prediction methods for multi-step prediction problems. The most important application of these methods, however, is to reinforcement learning. As a matter of fact, TD methods were formulated by Sutton (1988) as a generalization of techniques he had previously used only in the context of temporal credit assignment in reinforcement learning (Sutton, 1984).\nAs already stated above, the most straightforward way to formulate temporal credit assignment as a prediction problem is to predict at each time step t the discounted sum of future reinforcement z t = 1 X k=0 k r t+k ; called the TD return for time t. The corresponding prediction is designated by U(x t ) and called the predicted utility of state x t . 
TD(λ) for Reinforcement Learning

So far, this paper has presented TD as a general class of prediction methods for multi-step prediction problems. The most important application of these methods, however, is to reinforcement learning. As a matter of fact, TD methods were formulated by Sutton (1988) as a generalization of techniques he had previously used only in the context of temporal credit assignment in reinforcement learning (Sutton, 1984).

As already stated above, the most straightforward way to formulate temporal credit assignment as a prediction problem is to predict at each time step $t$ the discounted sum of future reinforcement

$$z_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k},$$

called the TD return for time $t$. The corresponding prediction is designated by $U(x_t)$ and called the predicted utility of state $x_t$. TD returns obviously depend on the policy being followed; we therefore assume that $U$ values represent predicted state utilities with respect to the current policy. For perfectly accurate predictions we would have:

$$U(x_t) = z_t = r_t + \gamma z_{t+1} = r_t + \gamma U(x_{t+1}).$$

Thus, for inaccurate predictions, the mismatch or TD error is $r_t + \gamma U(x_{t+1}) - U(x_t)$. The resulting RL-oriented TD(λ) equations take the form:

$$\Delta_x(t) = \left(r_t + \gamma U_t(x_{t+1}) - U_t(x_t)\right) \sum_{k=0}^{t} (\gamma\lambda)^{t-k} \chi_x(k) \tag{6}$$

and

$$\Delta_x = \sum_{t=0}^{\infty} \Delta_x(t) = \sum_{t=0}^{\infty} \left\{ \left(r_t + \gamma U_t(x_{t+1}) - U_t(x_t)\right) \sum_{k=0}^{t} (\gamma\lambda)^{t-k} \chi_x(k) \right\}. \tag{7}$$

Note the following additional differences between these equations and Equations 3 and 4: time step subscripts are used with $U$ values to emphasize on-line learning mode; the discount applied in the sum in Equation 6 includes $\gamma$ as well as $\lambda$, for reasons that may be unclear now but will be made clear in Section 4.1; and the summation in Equation 7 extends to infinity, because the predicted final outcome is not, in general, available after any finite number of steps.

TD-based reinforcement learning algorithms may be viewed as more or less direct implementations of the general rule described by Equation 6. To see this, we will consider three algorithms: the well-known AHC (Sutton, 1984) and Q-learning (Watkins, 1989; Watkins & Dayan, 1992), and a recent development of Baird (1993) called advantage updating. All the algorithms rely on learning certain real-valued functions defined over the state or state-action space of a task. The superscript * used with any of the described functions designates its optimal values (i.e., those corresponding to an optimal policy). Simplified versions of the algorithms, corresponding to TD(0), will be presented and related to Equation 6. The presentation below is limited solely to function update rules; for a more elaborate description of the algorithms the reader should consult the original publications of their developers or, for AHC and Q-learning, Lin (1993) or Cichosz (1994). They are all closely related to dynamic programming methods (Barto, Sutton, & Watkins, 1990; Watkins, 1989; Baird, 1993), but these relations, though theoretically and practically important and fruitful, are not essential for the subject of this paper and will not be discussed.

The AHC Algorithm

The variation of the AHC algorithm described here is adopted from Sutton (1990). Two functions are maintained: an evaluation function $V$ and a policy function $f$. The evaluation function evaluates each environment state and is essentially the same as what was called above the $U$ function, i.e., $V(x)$ is intended to be an estimate of the discounted sum of future reinforcement values received starting from state $x$ and following the current policy. The policy function assigns to each state-action pair $(x, a)$ a real number representing the relative merit of performing action $a$ in state $x$, called the action merit.
The actual policy is determined from action merits using some, usually stochastic, action selection mechanism, e.g., according to a Boltzmann distribution (as described in Section 5). The optimal evaluation of state $x$, $V^*(x)$, is the expected total discounted reinforcement that will be received starting from state $x$ and following an optimal policy.

Both functions are updated at each step $t$, after executing action $a_t$ in state $x_t$, according to the following rules:

update($V$; $x_t$; $r_t + \gamma V_t(x_{t+1}) - V_t(x_t)$);
update($f$; $x_t$, $a_t$; $r_t + \gamma V_t(x_{t+1}) - V_t(x_t)$).

The update rule for the V-function directly corresponds to Equation 6 for λ = 0. The update rule for the policy function increases or decreases the action merit of an action depending on whether its long-term consequences appear to be better or worse than expected. We present this simplified form of AHC, corresponding to TD(0), because this paper proposes an alternative way of using TD(λ > 0) to that implemented by the original AHC algorithm presented by Sutton (1984).

The Q-Learning Algorithm

Q-learning learns a single function of states and actions, called a Q-function. To each state-action pair $(x, a)$ it assigns a Q-value or action utility $Q(x, a)$, which is an estimate of the discounted sum of future reinforcement values received starting from state $x$ by executing action $a$ and then following a greedy policy with respect to the current Q-function (i.e., performing in each state actions with maximum Q-values). The current policy is implicitly defined by Q-values. When the optimal Q-function is learned, a greedy policy with respect to action utilities is an optimal policy.

The update rule for the Q-function is:

update($Q$; $x_t$, $a_t$; $r_t + \gamma \max_a Q_t(x_{t+1}, a) - Q_t(x_t, a_t)$).

To show its correspondence to the TD(0) version of Equation 6, we simply assume that predicted state utilities are represented by Q-values, so that $Q_t(x_t, a_t)$ corresponds to $U_t(x_t)$ and $\max_a Q_t(x_{t+1}, a)$ corresponds to $U_t(x_{t+1})$.

The Advantage Updating Algorithm

In advantage updating two functions are maintained: an evaluation function $V$ and an advantage function $A$. The evaluation function has essentially the same interpretation as its counterpart in AHC, though it is learned in a different way. The advantage function assigns to each state-action pair $(x, a)$ a real number $A(x, a)$ representing the degree to which the expected discounted sum of future reinforcement is increased by performing action $a$ in state $x$, relative to the action currently considered best in that state. The optimal action advantages are negative for all suboptimal actions and equal to 0 for optimal actions, and can be related to the optimal Q-values by:

$$A^*(x, a) = Q^*(x, a) - \max_{a'} Q^*(x, a').$$

Similarly to action utilities, action advantages implicitly define a policy.

The evaluation and advantage functions are updated at step $t$ by applying the following rules:

update($A$; $x_t$, $a_t$; $\max_a A_t(x_t, a) - A_t(x_t, a_t) + r_t + \gamma V_t(x_{t+1}) - V_t(x_t)$);
update($V$; $x_t$; $\frac{1}{\alpha}\left[\max_a A_{t+1}(x_t, a) - \max_a A_t(x_t, a)\right]$),

where $\alpha$ denotes the learning rate used for the advantage function. The update rule for the advantage function is somewhat more complex than the AHC or Q-learning rules, but it still contains a term that directly corresponds to the TD(0) form of Equation 6, with $V$ in place of $U$.

Actually, what has been presented above is a simplified version of advantage updating.
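For concreteness, the three TD(0) update rules described above can be sketched as follows (tabular functions; the value-function rule for advantage updating follows the simplified presentation above, with alpha denoting the advantage learning rate, and is a reconstruction rather than Baird's exact formulation):

```python
from collections import defaultdict

GAMMA = 0.95  # discount factor

def ahc_update(V, f, x, a, r, x_next, beta):
    """AHC: one TD(0) error drives both the evaluation and the policy."""
    delta = r + GAMMA * V[x_next] - V[x]
    V[x] += beta * delta
    f[(x, a)] += beta * delta

def q_update(Q, x, a, r, x_next, actions, beta):
    """Q-learning: the successor's utility is its maximum Q-value."""
    delta = r + GAMMA * max(Q[(x_next, b)] for b in actions) - Q[(x, a)]
    Q[(x, a)] += beta * delta

def advantage_update(V, A, x, a, r, x_next, actions, alpha, beta):
    """Advantage updating, simplified (no normalizing updates)."""
    a_max = max(A[(x, b)] for b in actions)
    A[(x, a)] += alpha * (a_max - A[(x, a)] + r + GAMMA * V[x_next] - V[x])
    a_max_new = max(A[(x, b)] for b in actions)
    V[x] += beta * (a_max_new - a_max) / alpha

V, f = defaultdict(float), defaultdict(float)
ahc_update(V, f, "x0", "left", 0.0, "x1", beta=0.5)
```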
The original algorithm differs in two details: the time step duration $\Delta t$ is explicitly included in the update rules, while in this presentation we assumed $\Delta t = 1$; and besides the learning updates described above, so-called normalizing updates are performed.

Eligibility Traces

It is obvious that the direct implementation of the computation described by Equation 6 is not too tempting: it requires maintaining $\Delta_x(t)$ values for each state $x$ and each past time step $t$. Note, however, that one only needs to maintain the whole sums $\sum_{k=0}^{t} (\gamma\lambda)^{t-k} \chi_x(k)$ for all $x$ and only one (current) $t$, which is much easier thanks to a simple trick. Substituting

$$e_x(t) = \sum_{k=0}^{t} (\gamma\lambda)^{t-k} \chi_x(k),$$

we can define the following recursive update rule:

$$e_x(0) = \begin{cases} 1 & \text{if } x_0 = x, \\ 0 & \text{otherwise,} \end{cases} \qquad e_x(t) = \begin{cases} \gamma\lambda\, e_x(t-1) + 1 & \text{if } x_t = x, \\ \gamma\lambda\, e_x(t-1) & \text{otherwise.} \end{cases} \tag{8}$$

The quantities $e_x(t)$ defined this way are called activity or eligibility traces (Barto, Sutton, & Anderson, 1983; Sutton, 1984; Watkins, 1989). Whenever a state is visited, its activity becomes high and then gradually decays until it is visited again. The update to the predicted utility of each state $x$ resulting from visiting state $x_t$ at time $t$ may then be written as

$$\Delta_x(t) = \left(r_t + \gamma U_t(x_{t+1}) - U_t(x_t)\right) e_x(t), \tag{9}$$

which is a direct transformation of Equation 6.

This technique (with minor differences) was already used in the early works of Barto et al. (1983) and Sutton (1984), before the actual formulation of TD(λ). It is especially suitable for use with parameter estimation function representation methods, such as connectionist networks. Instead of having one $e_x$ value for each state $x$, one then has one $e_i$ value for each weight $w_i$. That is how eligibility traces were actually used by Barto et al. (1983) and Sutton (1984), inspired by an earlier work of Klopf (1982). Note that in the case of the AHC algorithm, different λ values may be used for maintaining the traces used by the evaluation and policy functions.

Unfortunately, the technique of eligibility traces is not general enough to be easy to implement with an arbitrary function representation method. It is not clear, for example, how it could be used with such an important class of function approximators as memory-based (or instance-based) function approximators (Moore & Atkeson, 1992). Applied with a pure tabular representation, it has significant drawbacks. First, it requires additional memory locations, one per state. Second, and even more painful, it requires modifying both $U(x)$ and $e_x$ for all $x$ at each time step. This operation dominates the computational complexity of TD-based reinforcement learning algorithms, and makes using TD(λ > 0) much more expensive than TD(0). The eligibility traces implementation of TD(λ) is thus, for large state spaces, absolutely impractical on serial computers, unless a function approximator is used that allows updating function values and eligibility traces for many states concurrently (such as a multi-layer perceptron). But even when such an approximator is used, there are still significant additional computational costs (both memory and time) of using TD(λ) for λ > 0 versus TD(0). Another drawback of this approach will be revealed in Section 4.1.
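For reference, one on-line TD(λ) step with eligibility traces can be sketched as follows for a tabular utility function (hypothetical names); the final loop, which touches every state with a nonzero trace at every time step, is exactly the cost criticized above:

```python
def td_lambda_traces_step(U, e, x, r, x_next, gamma, lam, beta):
    """One step of tabular TD(lambda) with eligibility traces
    (Equations 8 and 9)."""
    for s in e:
        e[s] *= gamma * lam                  # decay all traces
    e[x] = e.get(x, 0.0) + 1.0               # the visited state is eligible
    delta = r + gamma * U.get(x_next, 0.0) - U.get(x, 0.0)
    for s, trace in e.items():               # update U(x) for all x
        U[s] = U.get(s, 0.0) + beta * delta * trace
```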
}, { "figure_ref": [], "heading": "Truncating Temporal Di erences", "publication_ref": [], "table_ref": [], "text": "This section departs from an alternative formulation of TD( ) for reinforcement learning. Then we follow with relating the TD( ) training errors used in this alternative formulation to TD( ) returns. Finally, we propose approximating TD( ) returns with truncated TD( ) returns, and we show how they can be computed and used for on-line reinforcement learning." }, { "figure_ref": [], "heading": "TD Errors and TD Returns", "publication_ref": [ "b29", "b3", "b29", "b3", "b29" ], "table_ref": [], "text": "Let us take a closer look at Equation 7. Consider the e ects of experiencing a sequence of states x 0 ; x 1 ; : : :; x k ; : : : and corresponding reinforcement values r 0 ; r 1 ; : : :; r k ; : : :. For the sake of simplicity, assume for a while that all states in the sequence are di erent (though it is of course impossible for nite state spaces). Applying Equation 7to state x t under this assumption we have:\nxt = r t + U t (x t+1 ) U t (x t ) + h r t+1 + U t+1 (x t+2 ) U t+1 (x t+1 ) i + ( ) 2 h r t+2 + U t+2 (x t+3 ) U t+2 (x t+2 ) i + : : : = 1 X k=0 ( ) k h r t+k + U t+k (x t+k+1 ) U t+k (x t+k ) i :\nIf a state occurs several times in the sequence, each visit to that state yields a similar update. This simple observation opens a way to an alternative (though equivalent) formulation of TD( ), o ering novel implementation possibilities. Let\n0 t = r t + U t (x t+1 ) U t (x t )(10)\nbe the TD(0) error at time step t. We de ne the TD( ) error at time t using TD(0) errors as follows:\nt = 1 X k=0 ( ) k h r t+k + U t+k (x t+k+1 ) U t+k (x t+k ) i = 1 X k=0 ( ) k 0 t+k : (11)\nNow, we can express the overall TD( ) error for state x, x , in terms of t errors:\nx = 1 X t=0 t x (t):(12)\nIn fact, from Equation 7we have:\nx = 1 X t=0 0 t t X k=0 ( ) t k x (k) = 1 X t=0 t X k=0 ( ) t k 0 t x (k): (13)\nSwapping the order of the two summations we get:\nx = 1 X k=0 1 X t=k ( ) t k 0 t x (k):(14)\nFinally, by exchanging k and t with each other, we receive:\nx = 1 X t=0 1 X k=t ( ) k t 0 k x (t) = 1 X t=0 1 X k=0 ( ) k 0 t+k x (t) = 1 X t=0 t x (t):(15)\nNote the following important di erence between x (t) (Equation 6) and t : the former is computed at each time step t for all x and the latter is computed at each step t only for x t . Accordingly, at step t the error value x (t) is used for adjusting U(x) for all x and t is only used for adjusting U(x t ). This is crucial for the learning procedure proposed in Section 4.2. While applying such de ned t errors on-line makes changes to predicted state utilities at individual steps clearly di erent than those described by Equation 6, the overall e ects of experiencing the whole state sequence (i.e., the sums of all individual error values for each state) are equivalent, as shown above.\nHaving expressed TD( ) in terms of t errors, we can gain more insight into its operation and the role of . Some de nitions will be helpful. Recall that the TD return for time t is de ned as\nz t = 1 X k=0 k r t+k :\nThe m-step truncated TD return (Watkins, 1989;Barto et al., 1990) is received by taking into account only the rst m terms of the above sum, i.e.,\nz m] t = m 1 X k=0 k r t+k :\nNote, however, that the rejected terms m r t+m + m+1 r t+m+1 + : : : can be approximated by m U t+m 1 (x t+m ). 
The corrected m-step truncated TD return (Watkins, 1989; Barto et al., 1990) is thus:

$$z^{(m)}_t = \sum_{k=0}^{m-1} \gamma^k r_{t+k} + \gamma^m U_{t+m-1}(x_{t+m}).$$

Equation 11 may be rewritten in the following form:

$$\delta^\lambda_t = \sum_{k=0}^{\infty} (\gamma\lambda)^k \left[ r_{t+k} + \gamma(1-\lambda) U_{t+k}(x_{t+k+1}) + \gamma\lambda\, U_{t+k}(x_{t+k+1}) - U_{t+k}(x_{t+k}) \right] = \sum_{k=0}^{\infty} (\gamma\lambda)^k \left[ r_{t+k} + \gamma(1-\lambda) U_{t+k}(x_{t+k+1}) \right] - U_t(x_t) + \sum_{k=1}^{\infty} (\gamma\lambda)^k \left[ U_{t+k-1}(x_{t+k}) - U_{t+k}(x_{t+k}) \right]. \tag{16}$$

Note that for λ = 1 it yields:

$$\delta^1_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k} - U_t(x_t) + \sum_{k=1}^{\infty} \gamma^k \left[ U_{t+k-1}(x_{t+k}) - U_{t+k}(x_{t+k}) \right] = z_t - U_t(x_t) + \sum_{k=1}^{\infty} \gamma^k \left[ U_{t+k-1}(x_{t+k}) - U_{t+k}(x_{t+k}) \right].$$

If we relax for a moment our assumption about on-line learning mode and leave out the time subscripts of $U$ values, the last term disappears and we simply have:

$$\delta^1_t = z_t - U(x_t).$$

Similarly for general λ, if we define the TD(λ) return (Watkins, 1989) for time $t$ as a weighted average of corrected truncated TD returns:

$$z^\lambda_t = (1-\lambda) \sum_{k=0}^{\infty} \lambda^k z^{(k+1)}_t = \sum_{k=0}^{\infty} (\gamma\lambda)^k \left[ r_{t+k} + \gamma(1-\lambda) U_{t+k}(x_{t+k+1}) \right] \tag{17}$$

and again omit the time subscripts, we will receive:

$$\delta^\lambda_t = z^\lambda_t - U(x_t). \tag{18}$$

The last equation sheds more light on the exact nature of the computation performed by TD(λ). The error at time step $t$ is the difference between the TD(λ) return for that step and the predicted utility of the current state; that is, learning with that error value will bring the predicted utility closer to the return. For λ = 1 the quantity $z^\lambda_t$ is the usual TD return for time $t$, i.e., the discounted sum of all future reinforcement values. For λ < 1 the term $r_{t+k}$ is replaced by $r_{t+k} + \gamma(1-\lambda)U_{t+k}(x_{t+k+1})$, that is, the actual immediate reward is augmented with the predicted future reward.

The definition of the TD(λ) return (Equation 17) may be written recursively as

$$z^\lambda_t = r_t + \gamma\left(\lambda z^\lambda_{t+1} + (1-\lambda) U_t(x_{t+1})\right). \tag{19}$$

This probably best explains the role of λ in TD(λ) learning. It determines how the return used for improving predictions is obtained. When λ = 1, it is exactly the actual observed return, the discounted sum of all rewards. For λ = 0 it is the 1-step corrected truncated return, i.e., the sum of the immediate reward and the discounted predicted utility of the successor state. Using 0 < λ < 1 allows one to interpolate smoothly between these two extremes, relying partially on actual returns and partially on predictions.

Equation 18 holds true only for batch learning mode, but in fact TD methods were originally formulated for batch learning. The incremental version, more useful in practice, introduces an additional term. Let $D_t$ designate that term. By comparing Equations 16 and 17 we get:

$$D_t = \delta^\lambda_t - \left(z^\lambda_t - U_t(x_t)\right) = \sum_{k=1}^{\infty} (\gamma\lambda)^k \left[ U_{t+k-1}(x_{t+k}) - U_{t+k}(x_{t+k}) \right]. \tag{20}$$

The magnitude of this discrepancy term, and consequently its influence on the learning process, obviously depends on the learning rate value. To examine it further, suppose a learning rate β is used when learning $U$ on the basis of $\delta^\lambda_t$ errors. Let the corresponding learning rule be:

$$U_{t+1}(x_t) := U_t(x_t) + \beta \delta^\lambda_t.$$

Then we have

$$U_{t+1}(x_t) - U_t(x_t) = \beta\left(z^\lambda_t - U_t(x_t)\right) + \beta D_t = \beta\left(z^\lambda_t - U_t(x_t)\right) - \beta^2 \sum_{k=1}^{\infty} (\gamma\lambda)^k \delta^\lambda_{t+k-1}, \tag{21}$$

with the last equality holding exactly if and only if $x_{t+k} = x_{t+k-1}$ for all $k$.
A similar result may be obtained for the eligibility traces implementation, with learning driven by the $\Delta_x(t)$ errors defined by Equation 9. We would then have:

$$U_{t+1}(x_t) - U_t(x_t) = \beta\left(z^\lambda_t - U_t(x_t)\right) - \beta^2 \sum_{k=1}^{\infty} (\gamma\lambda)^k \delta^0_{t+k-1}\, e_{x_{t+k}}(t+k-1). \tag{22}$$

This effect may be considered another drawback of the eligibility traces implementation of TD(λ), apart from its inefficiency and lack of generality. Though for small learning rates the effect of $D_t$ is negligible, it may still be harmful in some cases, especially for large β and λ. [Footnote 5: Sutton (1984) presented the technique of eligibility traces as an implementation of the recency and frequency heuristics. In this context, the phenomenon examined above may be considered a harmful effect of the frequency heuristic. Sutton discussed an example finite-state task where this heuristic might be misleading (Sutton, 1984, page 171).]

The TTD Procedure

We have shown that the TD errors $\delta^\lambda_t$, or $z^\lambda_t - U_t(x_t)$, can be used almost equivalently for TD(λ) learning, yielding the same overall results as the eligibility traces implementation, which has, however, important drawbacks in practice. Nevertheless, it is impossible to use either the TD(λ) errors $\delta^\lambda_t$ or the TD(λ) returns $z^\lambda_t$ for on-line learning, since they are not available: at step $t$ the knowledge of both $r_{t+k}$ and $x_{t+k}$ is required for all $k = 1, 2, \ldots$, and there is no way to implement this in practice. Recall, however, the definition of the truncated TD return. Why not define the truncated TD(λ) error and the truncated TD(λ) return? The appropriate definitions are:

$$\delta^{\lambda,m}_t = \sum_{k=0}^{m-1} (\gamma\lambda)^k \delta^0_{t+k} \tag{23}$$

and

$$z^{\lambda,m}_t = \sum_{k=0}^{m-2} (\gamma\lambda)^k \left[ r_{t+k} + \gamma(1-\lambda) U_{t+k}(x_{t+k+1}) \right] + (\gamma\lambda)^{m-1} \left[ r_{t+m-1} + \gamma U_{t+m-1}(x_{t+m}) \right] = \sum_{k=0}^{m-1} (\gamma\lambda)^k \left[ r_{t+k} + \gamma(1-\lambda) U_{t+k}(x_{t+k+1}) \right] + (\gamma\lambda)^m U_{t+m-1}(x_{t+m}). \tag{24}$$

We call $\delta^{\lambda,m}_t$ the m-step truncated TD(λ) error, or simply the TTD(λ, m) error at time step $t$, and $z^{\lambda,m}_t$ the m-step truncated TD(λ) return, or the TTD(λ, m) return for time $t$. Note that $z^{\lambda,m}_t$ as defined by Equation 24 is corrected, i.e., it is not obtained by simply truncating Equation 17. The correction term $(\gamma\lambda)^m U_{t+m-1}(x_{t+m})$ results in multiplying the last prediction $U_{t+m-1}(x_{t+m})$ by γ alone instead of $\gamma(1-\lambda)$, which is virtually equivalent to using λ = 0 for that step. This is done in order to include in $z^{\lambda,m}_t$ all the available information about the expected returns for further time steps ($t+m, t+m+1, \ldots$) contained in $U_{t+m-1}(x_{t+m})$. Without this correction, for large λ this information would be almost completely lost. Defined this way, m-step truncated TD(λ) errors or returns can be used for on-line learning by keeping track of the last $m$ visited states, and updating at each step the predicted utility of the least recent of those $m$ states. This idea leads to what we call the TTD procedure (Truncated Temporal Differences), which can be a good approximation of TD(λ) for sufficiently large $m$. The procedure is parameterized by the λ and $m$ values. An m-element experience buffer is maintained, containing records $\langle x_{t-k}, a_{t-k}, r_{t-k}, U_{t-k}(x_{t-k+1}) \rangle$ for all $k = 0, 1, \ldots, m-1$, where $t$ is the current time step. At each step $t$, by writing $x_{[k]}$, $a_{[k]}$, $r_{[k]}$, and $u_{[k]}$ we refer to the corresponding elements of the buffer, storing $x_{t-k}$, $a_{t-k}$, $r_{t-k}$, and $U_{t-k}(x_{t-k+1})$; this naturally means that the buffer's indices are shifted appropriately on each time tick.
References to $U$ are not subscripted with time steps, since all of them concern the values available at the current time step; in a practical implementation this directly corresponds to restoring a function value from some function approximator or a look-up table. Under this notational convention, the operation of the TTD(λ, m) procedure is presented in Figure 1. It uses TTD(λ, m) returns for learning. An alternative version, using TTD(λ, m) errors instead (based on Equation 11), is also possible and straightforward to formulate, but there is no reason to use a "weaker" version (subject to the harmful effects described by Equations 20 and 21) when a "stronger" one is available at the same cost.

At the beginning of learning, before the first $m$ steps are made, no learning can take place. During these initial steps the operation of the TTD procedure reduces to updating the contents of the experience buffer appropriately. This obvious technical detail was left out of Figure 1 for the sake of simplicity.

At each time step t:
1. observe current state $x_t$; $x_{[0]} := x_t$;
2. select an action $a_t$ for state $x_t$; $a_{[0]} := a_t$;
3. perform action $a_t$; observe new state $x_{t+1}$ and immediate reinforcement $r_t$;
4. $r_{[0]} := r_t$; $u_{[0]} := U(x_{t+1})$;
5. for $k = 0, 1, \ldots, m-1$ do: if $k = 0$ then $z := r_{[k]} + \gamma u_{[k]}$ else $z := r_{[k]} + \gamma(\lambda z + (1-\lambda) u_{[k]})$;
6. update($U$; $x_{[m-1]}$, $a_{[m-1]}$; $z - U(x_{[m-1]})$);
7. shift the indices of the experience buffer.

Figure 1: The TTD(λ, m) procedure.

The TTD(λ, m) return value $z$ is computed in step 5 by repeated application of Equation 19. The computational cost of propagating the return in time this way is acceptable in practice for reasonable values of $m$. For some function representation methods, such as neural networks, the overall time complexity is dominated by the costs of retrieving a function value and learning, performed in steps 4 and 6, and the cost of computing $z$ is negligible. One advantage of such an implementation is that it allows adaptive λ values to be used: in step 5 one can use $\lambda_k$ depending on whether $a_{[k-1]}$ was or was not a non-policy action, or "how much" non-policy it was. This refinement to the TD(λ) algorithm was suggested by Watkins (1989) and recently by Sutton and Singh (1994). Later we will see how the TTD return computation can be performed in a fully incremental way, using constant time at each step for arbitrary $m$.

Note that the function update carried out in step 6 at time $t$ applies to the state and action from time $t-m+1$, i.e., $m-1$ time steps earlier. This delay between an experience event and learning might be considered a potential weakness of the presented approach, especially for large $m$. Note, however, that as the baseline in computing the error value, the current utility $U(x_{[m-1]}) = U_t(x_{t-m+1})$ is used. This is an important point, because it guarantees that learning will have the desired effect of moving the utility (whatever value it currently has) towards the corresponding TTD return. If the error used in step 6 were $z - U_{t-m}(x_{t-m+1})$ instead of $z - U_t(x_{t-m+1})$, then applying it to learning at time $t$ would be problematic. In any case, it seems that $m$ should not be too large.

The TTD procedure is not an exact implementation of TD methods for two reasons. First, it only approximates TD(λ) returns with TTD(λ, m) returns. Second, it introduces the aforementioned delay between experience and learning.
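The following sketch mirrors Figure 1 step by step for a tabular utility function (a minimal illustration; the class and its names are ours, not the paper's):

```python
from collections import deque

class TTD:
    """Sketch of the TTD(lambda, m) procedure of Figure 1."""

    def __init__(self, m, gamma, lam, beta):
        self.m, self.gamma, self.lam, self.beta = m, gamma, lam, beta
        self.U = {}                          # tabular utility function
        self.buf = deque(maxlen=m)           # records <x, a, r, U(x_next)>

    def step(self, x, a, r, x_next):
        # Steps 1-4: store the newest record; maxlen performs step 7.
        self.buf.appendleft((x, a, r, self.U.get(x_next, 0.0)))
        if len(self.buf) < self.m:
            return                           # initial steps: no learning yet
        # Step 5: propagate the TTD(lambda, m) return back in time by
        # repeated application of Equation 19.
        z = 0.0
        for k, (_, _, r_k, u_k) in enumerate(self.buf):
            if k == 0:
                z = r_k + self.gamma * u_k
            else:
                z = r_k + self.gamma * (self.lam * z + (1 - self.lam) * u_k)
        # Step 6: update the least recent state, using the *current*
        # utility as the baseline of the error value.
        x_old = self.buf[-1][0]
        u_old = self.U.get(x_old, 0.0)
        self.U[x_old] = u_old + self.beta * (z - u_old)
```

Unseen states default to a utility of 0 here, which matches the zero-initialized functions used later in the experiments.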
Despite these two approximations, I believe that it is possible to give strict conditions under which the convergence properties of TD(λ) hold true for the TTD implementation.

Choice of m

The reasonable choice of $m$ obviously depends on λ. For λ = 0 the best possible value is $m = 1$, and for λ = 1 and γ = 1 no finite value of $m$ is large enough to accurately approximate TD(λ). Fortunately, this does not seem to be very painful. It is rather unlikely that in any application one would want to use the combination of λ = 1 and γ = 1, the more so as existing empirical results with TD(λ) indicate that λ = 1 is usually not the optimal value to use, and at best is comparable with other, smaller values (Sutton, 1984; Tesauro, 1992; Lin, 1993). Similar conclusions follow from the discussion of the choice of λ presented by Watkins (1989) or Lin (1993). For λ < 1 or γ < 1 we would probably like to have a value of $m$ such that the discount $(\gamma\lambda)^m$ is a small number. One possible definition of "small" here could be, e.g., "much less than γλ". This is obviously a completely informal criterion. Table 1 illustrates the practical effects of this heuristic. On the other hand, for too large $m$, the delay between experience and learning introduced by the TTD procedure might become significant and cause some problems. Some of the experiments described in Section 5 have been designed to test different values of $m$ for fixed 0 < λ < 1.

γλ                                               0.99   0.975   0.95   0.9   0.8   0.6
$\min\{m \mid (\gamma\lambda)^m < \frac{1}{10}\gamma\lambda\}$   231    92      46     23    12    6

Table 1: Choosing m: an illustration.

Reset Operation

Until now, we have assumed that the learning process, once started, continues infinitely long. This is not true for episodic tasks (Sutton, 1984) and for many real-world tasks, where learning must usually stop at some time. This imposes the necessity of designing a special mechanism for the TTD procedure, which will be called the reset operation. The reset operation is invoked after the end of each episode in episodic tasks, or after the overall end of learning.

There is not very much to be done. The only problem that must be dealt with is that the experience buffer contains the record of the last $m$ steps for which learning has not yet taken place, and there will be no further steps that would make learning for these remaining steps possible. The implementation of the reset operation that we find the most natural and coherent with the TTD procedure is then to simulate $m$ additional fictitious steps, so that learning takes place for all the real steps left in the buffer, and their TTD returns remain unaffected by the simulated fictitious steps. The corresponding algorithm, presented in Figure 2, is formulated as a replacement of the original algorithm from Figure 1 for the final time step. At the final step, when there is no successor state, the fictitious successor state utility is assumed to be 0. This corresponds to assigning 0 to $u_{[0]}$. The actual reset operation is performed in step 5.

Incremental TTD

As stated above, the cost of iteratively computing the TTD(λ, m) return is relatively small for reasonable $m$, and with some function representation methods, for which restoring and updating function values is computationally expensive, it may be negligible.
We also argued that reasonable values of $m$ should not be too large. On the other hand, such iterative return computation is easy to understand and reflects well the idea of TTD. That is why we presented the TTD procedure in that form. It is possible, however, to compute the TTD(λ, m) return in a fully incremental manner, using constant time for arbitrary $m$.

At the final time step t:
1. observe current state $x_t$; $x_{[0]} := x_t$;
2. select an action $a_t$ for state $x_t$; $a_{[0]} := a_t$;
3. perform action $a_t$; observe immediate reinforcement $r_t$;
4. $r_{[0]} := r_t$; $u_{[0]} := 0$;
5. for $k' = 0, 1, \ldots, m-1$ do:
(a) for $k = k', k'+1, \ldots, m-1$ do: if $k = k'$ then $z := r_{[k]} + \gamma u_{[k]}$ else $z := r_{[k]} + \gamma(\lambda z + (1-\lambda) u_{[k]})$;
(b) update($U$; $x_{[m-1]}$, $a_{[m-1]}$; $z - U(x_{[m-1]})$);
(c) shift the indices of the experience buffer.

Figure 2: The reset operation for the TTD(λ, m) procedure.

To see this, note that the definition of the TTD(λ, m) return (Equation 24) may be rewritten in the following form:

$$z^{\lambda,m}_t = S^{\lambda,m}_t + T^{\lambda,m}_t, \quad S^{\lambda,m}_t = \sum_{k=0}^{m-1} (\gamma\lambda)^k \left[ r_{t+k} + \gamma(1-\lambda) U_{t+k}(x_{t+k+1}) \right], \quad T^{\lambda,m}_t = (\gamma\lambda)^m U_{t+m-1}(x_{t+m}), \tag{25}$$

with the two components maintained from step to step according to:

$$S^{\lambda,m}_{t+1} = \frac{1}{\gamma\lambda}\left( S^{\lambda,m}_t - r_t - \gamma(1-\lambda) U_t(x_{t+1}) \right) + (\gamma\lambda)^{m-1}\left[ r_{t+m} + \gamma(1-\lambda) U_{t+m}(x_{t+m+1}) \right], \quad T^{\lambda,m}_{t+1} = (\gamma\lambda)^m U_{t+m}(x_{t+m+1}). \tag{26}$$

The above two equations define the algorithm for computing $S^{\lambda,m}_t$ and $T^{\lambda,m}_t$ incrementally, and consequently computing $z^{\lambda,m}_t$ in constant time for arbitrary $m$, with a very small computational expense. This algorithm is strictly mathematically equivalent to the algorithm presented in Figure 1. [Footnote 7: But it is not necessarily numerically equivalent, which may sometimes cause problems in practical implementations.] Modifying the TTD procedure appropriately is straightforward and will not be discussed. A drawback of this modification is that it probably does not allow the learner to use different (adaptive) λ values at each step, i.e., it may not be possible to combine it with the refinements suggested by Watkins (1989) or Sutton and Singh (1994). Despite this, such an implementation might be beneficial if one wanted to use really large $m$.

TTD-Based Implementations of RL Algorithms

To implement particular TD-based reinforcement learning algorithms on the basis of the TTD procedure, one just has to substitute appropriate function values for $U$ and define the updating operation of step 6 in Figure 1 and step 5b in Figure 2. Specifically, for the three algorithms outlined in Section 2.2 one should:

for AHC:
1. replace $U(x_{t+1})$ with $V(x_{t+1})$ in step 4 (Figure 1);
2. implement step 6 (Figure 1) and step 5b (Figure 2) as:
$v := V(x_{[m-1]})$; update($V$; $x_{[m-1]}$; $z - v$); update($f$; $x_{[m-1]}$, $a_{[m-1]}$; $z - v$);

for Q-learning:
1. replace $U(x_{t+1})$ with $\max_a Q(x_{t+1}, a)$ in step 4 (Figure 1);
2. implement step 6 (Figure 1) and step 5b (Figure 2) as:
update($Q$; $x_{[m-1]}$, $a_{[m-1]}$; $z - Q(x_{[m-1]}, a_{[m-1]})$);

for advantage updating:
1. replace $U(x_{t+1})$ with $V(x_{t+1})$ in step 4 (Figure 1);
2. implement step 6 (Figure 1) and step 5b (Figure 2) as:
$A_{\max} := \max_a A(x_{[m-1]}, a)$; update($A$; $x_{[m-1]}$, $a_{[m-1]}$; $A_{\max} - A(x_{[m-1]}, a_{[m-1]}) + z - V(x_{[m-1]})$); update($V$; $x_{[m-1]}$; $\frac{1}{\alpha}[\max_a A(x_{[m-1]}, a) - A_{\max}]$).

Related Work

The simple idea of truncating temporal differences that is implemented by the TTD procedure is not new. It was probably first suggested by Watkins (1989), and this paper owes much to his work. But, to the best of my knowledge, this idea has never been explicitly and exactly specified, implemented, and tested. In this sense the TTD procedure is an original development.
Lin (1993) used a very similar implementation of TD(λ), but only for what he called experience replay, and not for actual on-line reinforcement learning. In his approach a sequence of past experiences is replayed occasionally, and during replay, for each experience, the TD(λ) return (truncated to the length of the replayed sequence) is computed by applying Equation 19, and a corresponding function update is performed. Such a learning method is in some respects more computationally expensive than the TTD procedure (especially one implemented in a fully incremental manner, as suggested above), since it requires updating predictions sequentially for all replayed experiences, besides the "regular" TD(0) updates performed at each step (while TTD always requires only one update per time step), and it does not allow the learner to take full advantage of TD(λ > 0), which is applied only occasionally.

Peng and Williams (1994) presented an alternative way of combining Q-learning and TD(λ), different from the one discussed in Section 2.2. Their motivation was to better estimate TD returns by the use of TD errors. Toward that end, they used the standard Q-learning error

$$r_t + \gamma \max_a Q_t(x_{t+1}, a) - Q_t(x_t, a_t)$$

for one-step updates, and a modified error

$$r_t + \gamma \max_a Q_t(x_{t+1}, a) - \max_a Q_t(x_t, a),$$

propagated using eligibility traces, thereafter. The TTD procedure achieves a similar objective in a more straightforward way, by the use of truncated TD(λ) returns.

Other related work is that of Pendrith (1994). He applied the idea of eligibility traces in a non-standard way to estimate TD returns. His approach is more computationally efficient than the classical eligibility traces technique (it requires one prediction update per time step) and is free of the potentially harmful effect described by Equation 22. The method seems to be roughly equivalent to the TTD procedure with λ = 1 and large $m$, though it is probably much more complex to implement.

Demonstrations

The demonstrations presented in this section use the AHC variant of the TTD procedure. The reason is that the AHC algorithm is the simplest of the three described algorithms, and its update rule for the evaluation function most directly corresponds to TD(λ). Future work will investigate the TTD procedure for the two other algorithms.

A tabular representation of the evaluation and policy functions is used. The abstract function update operation described by Equation 2 is implemented in the standard way as

$$\varphi(p_0, p_1, \ldots, p_{n-1}) := \varphi(p_0, p_1, \ldots, p_{n-1}) + \beta\Delta. \tag{27}$$

Actions to execute at each step are selected using a simple stochastic selection mechanism based on a Boltzmann distribution. According to this mechanism, action $a^*$ is selected in state $x$ with probability

$$\text{Prob}(x, a^*) = \frac{\exp(f(x, a^*)/T)}{\sum_a \exp(f(x, a)/T)}, \tag{28}$$

where the temperature $T > 0$ adjusts the amount of randomness.
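A sketch of this selection mechanism (Equation 28); subtracting the maximum merit before exponentiation is a standard precaution of ours for numerical safety at low temperatures such as T = 0.02, not something prescribed by the paper:

```python
import math
import random

def boltzmann_select(f, x, actions, T=0.02):
    """Select an action in state x with the probabilities of Equation 28."""
    merits = [f.get((x, a), 0.0) for a in actions]
    top = max(merits)
    weights = [math.exp((v - top) / T) for v in merits]
    pick = random.uniform(0.0, sum(weights))
    acc = 0.0
    for a, w in zip(actions, weights):
        acc += w
        if pick <= acc:
            return a
    return actions[-1]        # guard against floating-point round-off

f = {("s0", "left"): 0.3, ("s0", "right"): 0.1}
print(boltzmann_select(f, "s0", ["left", "right", "straight"]))
```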
" }, { "figure_ref": [ "fig_3" ], "heading": "The Car Parking Problem", "publication_ref": [ "b22" ], "table_ref": [], "text": "This section presents experimental results for a learning control problem with a relatively large state space and hard temporal credit assignment. We call this problem the car parking problem, though it does not attempt to simulate any real-world problem at all. Using words such as 'car', 'garage', or 'parking' is just a convention that simplifies problem description and the interpretation of results. The primary purpose of the experiments is neither just to solve the problem nor to provide evidence of the usefulness of the tested algorithm for any particular practical problem. We use this example problem in order to illustrate the performance of the AHC algorithm implemented within the TTD framework and to empirically evaluate the effects of different values of the TTD parameters λ and m.

The car parking problem is illustrated in Figure 3. A car, represented as a rectangle, is initially located somewhere inside a bounded area, called the driving area. A garage is a rectangular area of a size somewhat larger than the car. All important dimensions and distances are shown in the figure. The agent (the driver of the car) is required to park it in the garage, so that the car is entirely inside. The task is episodic, though it is neither a time-until-success nor a time-until-failure task (in Sutton's (1984) terminology), but rather a combination of both. Each episode finishes either when the car enters the garage or when it hits a wall (of the garage or of the driving area). After an episode the car is reset to its initial position." }, { "figure_ref": [], "heading": "State Representation", "publication_ref": [], "table_ref": [], "text": "The state representation consists of three variables: the rectangular coordinates of the center of the car, x and y, and the angle θ between the car's axis and the x axis of the coordinate system. The orientation of the system is shown in the figure. The initial location and orientation of the car is fixed and described by x = 6.15 m, y = 10.47 m, and θ = 3.7 rad. It was chosen so as to make the task neither too easy nor too difficult." }, { "figure_ref": [], "heading": "Action Representation", "publication_ref": [], "table_ref": [], "text": "The admissible actions are 'drive straight on', 'turn left', and 'turn right'. The action of driving straight on has the effect of moving the car forward along its axis, i.e., without changing θ. The actions of turning left and right are equivalent to moving along an arc with a fixed radius. The distance of each move is determined by a constant car velocity v and simulation time step Δt. Exact motion equations and other details are given in Appendix A." }, { "figure_ref": [], "heading": "Reinforcement Mechanism", "publication_ref": [], "table_ref": [], "text": "The design of the reinforcement function is fairly straightforward. The agent receives a reinforcement value of 1 (a reward) whenever it successfully parks the car in the garage, and a reinforcement value of -1 (a punishment) whenever it hits a wall. At all other time steps the reinforcement is 0. That is, non-zero reinforcements are received only at the last step of each episode. This involves a relatively hard temporal credit assignment problem, providing a good experimental framework for testing the efficiency of the TTD procedure. The problem is hard not only because of reinforcement delay, but also because punishments are much more frequent than rewards: it is much easier to hit a wall than to park the car correctly. With such a reinforcement mechanism, an optimal policy for any 0 < γ < 1 is a policy that allows the agent to park the car in the garage in the smallest possible number of steps." }, { "figure_ref": [], "heading": "Function Representation", "publication_ref": [], "table_ref": [], "text": "The car parking problem has a continuous state space. It is artificially discretized: it is divided into a finite number of disjoint regions by quantizing the three state variables, and then a function value for each region is stored in a look-up table. The quantization thresholds are:

for x: -0.5, 0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0 m,
for y: 0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0 m,
for θ: 19π/20, π, 21π/20, ..., 29π/20, 3π/2, 31π/20 rad.

This yields 9 × 10 × 14 = 1260 regions. Of course many of them will never be visited. The threshold values were chosen so as to make the resulting discrete state space of a moderate size. The quantization is dense near the garage, and becomes more sparse as the distance from the garage increases.
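The mapping from a continuous state to a look-up table cell can be sketched in a few lines of Python. The code below is an illustration, not the original implementation; it only transcribes the threshold lists above into bisect-based indexing (the negative sign on the first x threshold is itself an inference, since the minus signs were lost in extraction but the thresholds must be increasing).

import bisect, math

X_T = [-0.5, 0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0]           # 9 regions
Y_T = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0]      # 10 regions
THETA_T = [k * math.pi / 20.0 for k in range(19, 32)]     # 14 regions

def box_index(x, y, theta):
    # Each component counts how many thresholds lie at or below the value.
    return (bisect.bisect(X_T, x),
            bisect.bisect(Y_T, y),
            bisect.bisect(THETA_T, theta))

# 9 * 10 * 14 = 1260 cells, matching the count given above.
assert (len(X_T) + 1) * (len(Y_T) + 1) * (len(THETA_T) + 1) == 1260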
" }, { "figure_ref": [ "fig_5", "fig_7" ], "heading": "Experimental Design and Results", "publication_ref": [ "b23", "b12" ], "table_ref": [], "text": "Our experiments with applying the TTD procedure to the car parking problem are divided into two studies, testing the effects of the two TTD parameters λ and m. The parameter settings for all experiments are presented in Table 2. The symbols α and β are used to designate the learning rates for the evaluation and policy functions, respectively. The initial values of the functions were all set to 0, since we assumed that no knowledge is available about expected reinforcement levels. As stated above, the experiments were designed to test the effects of the two TTD parameters. The other parameters were assigned values according to the following principles: the discount factor γ was fixed and equal to 0.95 in all experiments; the temperature value was also fixed and set to 0.02, which seemed to be equally good for all experiments; the learning rates α and β were roughly optimized in each experiment.8 Each experiment continued for 250 episodes, the number selected so as to allow all or almost all runs of all experiments to converge. The results presented for all experiments are averaged over 25 individual runs, each differing only in the initial seed of the random number generator. This number was chosen as a reasonable compromise between the reliability of results and computational costs. The results are presented as plots of the average reinforcement value per time step for the previous 5 consecutive episodes versus the episode number.

Study 1: Effects of λ. The objective of this study was to examine the effects of various λ values on learning speed and quality, with m set to 25. The value m = 25 was found to be large enough for all the tested λ values (perhaps except λ = 1).9 Smaller m values might be used for small λ (in particular, m = 1 for λ = 0), but it was kept constant for consistency. The learning curves for this study are presented in Figure 4. The observations can be briefly summarized as follows:

- λ = 0 gives the worst performance of all (not all of 25 runs managed to converge within 250 episodes),
- increasing λ improves learning speed,
- λ values above or equal to 0.7 are all similarly effective, greatly outperforming λ = 0 and clearly better than λ = 0.5,
- using large λ caused the necessity of reducing the learning rates (cf. Table 2) to ensure convergence.

The main result is that using large λ with the TTD procedure (including λ = 1) always significantly improved performance.
It is not quite consistent with the empirical results of Sutton (1988), who found the performance of TD(λ) the best for intermediate λ, and the worst for λ = 1. Lin (1993), who used λ > 0 for his experience replay experiments, reported λ close to 1 as the most successful, similarly to this work. He speculated that the difference between his results and Sutton's might have been caused by switching occasionally (for non-policy actions) to λ = 0 in his studies.10 Our results, obtained for λ held fixed all the time,11 suggest that this is not a good explanation. It seems more likely that the optimal λ value simply depends strongly on the particular problem. Another point is that neither our TTD(1, 25) nor Lin's implementation is exactly equivalent to TD(1).

Study 2: Effects of m. This study was designed to investigate the effects of using several different m values for a fixed and relatively large λ value. The best (approximately) λ from study 1 was used, that is 0.9. The smallest tested m value is 5, which we find to be rather a small value.12 The learning curves for this study are presented in Figure 5. The results for m = 25 were taken from study 1 for comparison. The observations can be summarized as follows:

- m = 5 is the worst and m = 25 is the best,
- the differences between intermediate m values do not seem to be very statistically significant,
- even the smallest m = 5 gives a performance level much better than that obtained in study 1 for small λ, i.e., even relatively small m values allow us to have the advantages of large λ, though larger m values are generally better than small ones.

The last observation is probably the most important. It is also very optimistic. It suggests that, at least in some problems, the TTD procedure with λ > 0 allows one to obtain a significant learning speed improvement over traditional TD(0)-based algorithms with practically no additional costs, because for small m both the space and time complexity induced by TTD are always negligible.

8. The optimization procedure in most cases was as follows: some rather large value was tested in a few runs; if it did not give any effects of overtraining and premature convergence, it was accepted; otherwise a (usually twice) smaller value was tried, etc.
9. Note that for λ = 0.9, m = 25, and γ = 0.95 we have (γλ)^m ≈ 0.02, where γλ = 0.855.
10. As a matter of fact, non-policy actions were not replayed at all in Lin's experience replay experiments.
11. Except for using λ = 0 for the most recent time step covered by the TTD return, as it follows from its definition (Equation 24).
12. For γ = 0.95, λ = 0.9, and m = 5 we have (γλ)^m ≈ 0.457, which is by all means comparable with γλ = 0.855." }, { "figure_ref": [ "fig_9" ], "heading": "The Cart-Pole Balancing Problem", "publication_ref": [ "b2", "b22", "b2" ], "table_ref": [], "text": "The experiments of this section have one basic purpose: to verify the effectiveness of the TTD procedure by applying its AHC implementation to a realistic and complex problem, with a long reinforcement delay, for which there exist many previous results for comparison. The cart-pole balancing problem, a classical benchmark of control specialists, is just such a problem. In particular, we would like to see whether it is possible to obtain performance (learning speed and the quality of the final policy) not worse than that reported by Barto et al. (1983) and Sutton (1984) using the eligibility traces implementation.

Figure 6 shows the cart-pole system. The cart is allowed to move along a one-dimensional bounded track. The pole can move only in the vertical plane of the cart and the track. The controller applies either a left or right force of fixed magnitude to the cart at each time step. The task is episodic: each episode finishes when a failure occurs, i.e., the pole falls or the cart hits an edge of the track.
The objective is to delay the failure as long as possible. The problem was realistically simulated by numerically solving a system of differential equations describing the cart-pole system. These equations and other simulation details are given in Appendix B. All parameters of the simulated cart-pole system are exactly the same as used by Barto et al. (1983)." }, { "figure_ref": [], "heading": "State Representation", "publication_ref": [], "table_ref": [], "text": "The state of the cart-pole system is described by four state variables: x (the position of the cart on the track), ẋ (the velocity of the cart), θ (the angle of the pole with the vertical), and θ̇ (the angular velocity of the pole)." }, { "figure_ref": [], "heading": "Action Representation", "publication_ref": [], "table_ref": [], "text": "At each step the agent controlling the cart-pole system chooses one of the two possible actions of applying a left or right force to the cart. The force magnitude is fixed and equal to 10 N." }, { "figure_ref": [], "heading": "Reinforcement Mechanism", "publication_ref": [], "table_ref": [], "text": "The agent receives non-zero reinforcement values (namely -1) only at the end of each episode, i.e., after a failure. A failure occurs whenever |θ| > 0.21 rad (the pole begins to fall) or |x| > 2.4 m (the cart hits an edge of the track). Even at the beginning of learning, with a very poor policy, an episode may continue for hundreds of time steps, and there may be many steps between a bad action and the resulting failure. This makes the temporal credit assignment problem in the cart-pole task extremely hard.
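In code, the failure test and reinforcement function amount to two comparisons; this small Python sketch is illustrative only.

def reinforcement(x, theta):
    # -1 when the pole passes 0.21 rad or the cart passes 2.4 m; 0 otherwise.
    failed = abs(theta) > 0.21 or abs(x) > 2.4
    return (-1.0 if failed else 0.0), failed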
" }, { "figure_ref": [], "heading": "Function Representation", "publication_ref": [ "b13", "b2" ], "table_ref": [], "text": "As in the case of the car parking problem, we deal with the continuous state space of the cart-pole system by dividing it into disjoint regions, called boxes after Michie and Chambers (1968). The quantization thresholds are the same as used by Barto et al. (1983), i.e.:

for x: -0.8, 0.8 m,
for ẋ: -0.5, 0.5 m/s,
for θ: -0.105, -0.0175, 0, 0.0175, 0.105 rad,
for θ̇: -0.8727, 0.8727 rad/s,

which yields 3 × 3 × 6 × 3 = 162 boxes. For each box there is a memory location, storing a function value for that box." }, { "figure_ref": [ "fig_10" ], "heading": "Experimental Design and Results", "publication_ref": [ "b2", "b2", "b2", "b2" ], "table_ref": [], "text": "Computational expense prevented such extensive experimental studies as for the car parking problem. Only one experiment was carried out, intended to be a replication of the experiment presented by Barto et al. (1983). The values of the TTD parameters that seemed the best from the previous experiments were used, that is λ = 0.9 and m = 25. The discount factor γ was set to 0.95. The learning rates for the evaluation and policy functions were roughly optimized by a small number of preliminary runs and equal α = 0.1 and β = 0.05, respectively. The temperature of the Boltzmann distribution action selection mechanism was set to 0.0001, so as to give nearly-deterministic action selection. The initial values of the evaluation and policy functions were set to 0. We did not attempt to strictly replicate the same learning parameter values as in the work of Barto et al. (1983), since they used not only a different TD(λ) implementation,13 but also a different policy representation (based on the fact that there are only two actions, while our representation is general), action selection mechanism (for the same reasons), and function learning rule.

The experiment consisted of 10 runs, differing only in the initial seed of the random number generator, and the presented results are averaged over those 10 runs. Each run continued for 100 episodes. Some of the individual runs were terminated after 500,000 time steps, before completing 100 episodes. To produce reliable averages for all 100 episodes, fictitious remaining episodes were added to such runs, with the duration assigned according to the following principle, used in the experiments of Barto et al. (1983). If the duration of the last, interrupted episode was less than the duration of the immediately preceding (complete) episode, the fictitious episodes were assigned the duration of that preceding episode. Otherwise, the fictitious episodes were assigned the duration of the last (incomplete) episode. This prevented any short interrupted episodes from producing unreliably low averages. The results are presented in Figure 7 as plots of the average duration (the number of time steps) of the previous 5 consecutive episodes versus the episode number, in linear and logarithmic scale.

We can observe that TTD-based AHC achieved a similar (slightly better, to be exact) performance level, both as to learning speed and the quality of the final policy (i.e., the balancing periods), to that reported by Barto et al. (1983). The final balancing periods lasted above 130,000 steps, on the average. This was obtained without using 162 additional memory locations for storing eligibility traces, and without the expensive computation necessary to update all of them at each time step, as well as all evaluation and policy function values." }, { "figure_ref": [], "heading": "Computational Savings", "publication_ref": [], "table_ref": [], "text": "The experiments presented above illustrate the computational savings possible with the TTD procedure over conventional eligibility traces. A direct implementation of eligibility traces requires computation proportional to the number of states, i.e., to 1260 in the car parking task and to 162 in the cart-pole task (potentially many more in larger tasks). Even the straightforward iterative version of TTD may then be beneficial, as it requires computation proportional to m, which may be reasonably assumed to be many times less than the size of the state space. Of course, the incremental version of TTD, which always requires very small computation independent of m, is much more efficient.

In many practical implementations, to improve efficiency, eligibility traces and predictions are updated only for relatively few recently visited states. Traces are maintained only for the n most recently visited states, and the eligibility traces of all other states are assumed to be 0.14 But even for this "efficient" version of eligibility traces, the savings offered by TTD are considerable. For a good approximation to infinite traces in such tasks as considered here, n should be at least as large as m. For conventional eligibility traces, there will always be a concern for keeping n low, by reducing λ, γ, or the accuracy of the approximation. The same problem occurs for iterative TTD,15 but for incremental TTD, on the other hand, none of these are at issue. The same small computation is needed independent of m.

14. This modification cannot be applied when a parameter estimation function representation technique is used (e.g., a multi-layer perceptron), where traces are maintained for weights rather than for states.
15. The relative computational expense of iterative TTD and the "efficient" version of eligibility traces depends on the cost of the function update operation, which is always performed only for one state by the former, and for n states by the latter."
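For contrast, one conventional tabular eligibility-traces step (Equation 8 together with the per-step TD(λ) update) might look like the Python sketch below; the two loops over all states are exactly the per-step cost that TTD avoids. Names and data structures are illustrative assumptions.

def td_lambda_traces_step(U, e, states, x, x_next, r, alpha, gamma, lam):
    # One eligibility-traces step: O(|states|) work per action.
    delta = r + gamma * U[x_next] - U[x]   # the TD(0) error (Equation 10)
    for s in states:                       # decay every trace (Equation 8) ...
        e[s] *= gamma * lam
    e[x] += 1.0                            # ... and increment the visited state
    for s in states:                       # update every state's prediction
        U[s] += alpha * delta * e[s]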
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b23" ], "table_ref": [], "text": "We have informally derived the TTD procedure from the analysis of the updates introduced by TD methods to the predicted utilities of states, and shown that they can be approximated by the use of truncated TD(λ) returns. Truncating temporal differences allows easy and efficient implementation. It is possible to compute TTD returns incrementally in constant time, irrespective of the value of m (the truncation period), so that the computational expense of using TD-based reinforcement learning algorithms with λ > 0 is negligible (cf. Equations 25 and 26). This cannot be achieved with the eligibility traces implementation. The latter, even for such function representation methods to which it is particularly well suited (e.g., neural networks), is always associated with significant memory and time costs. The TTD procedure is probably the most computationally efficient (although approximate) on-line implementation of TD(λ). It is also general, equally good for any function representation method that might be used.

An important question concerning the TTD procedure is whether its computational efficiency is not obtained at the cost of reduced learning efficiency. Having low computational costs per control action may not be attractive if the number of actions necessary to converge becomes large. As for now, no theoretically grounded answer to this important question has been provided, though it is not unlikely that such an answer will eventually be found. Nevertheless, some informal considerations suggest that the TTD-based implementation of TD methods not only does not have to perform worse than the classical eligibility traces implementation, but it can even have some advantages. As follows from Equations 20, 21, and 22, using TD(0) errors for on-line TD(λ) learning, as in the eligibility traces implementation, introduces an additional discrepancy term, whose influence on the learning process is proportional to the square of the learning rate. That term, though often negligible, may still be harmful in certain cases, especially in tasks where the agent is likely to stay in the same states for long periods. The TTD procedure, based on truncated TD(λ) returns, is free of this drawback.

Another argument supporting the TTD procedure is associated with using large λ values, in particular λ = 1. For an exact TD(λ) implementation, such as that provided by eligibility traces, this means that learning relies solely on actually observed outcomes, without any regard to currently available predictions. It may be beneficial at the early stages of learning, when predictions are almost completely inaccurate, but in general it is rather risky: actual outcomes may be noisy and therefore sometimes misleading. The TTD procedure never relies on them entirely, even for λ = 1, since it uses m-step TTD returns for some finite m, corrected by always using λ = 0 for discounting the predicted utility of the most recent step covered by the return (cf. Equation 17).
This deviation of the TTD procedure from TD(λ) may turn out to be advantageous.

The TTD procedure, using TTD returns for learning, is only suitable for the implementation of TD methods applied to reinforcement learning. This is because in RL a part of the predicted outcome is available at each step, as the current reinforcement value. However, it is straightforward to formulate another version of the TTD procedure, using truncated TD(λ) errors instead of truncated TD(λ) returns, that would cover the whole scope of applications of generic TD methods.

The experimental results obtained for the TTD procedure seem very promising. The results presented in Section 5.1 show that using large λ with the TTD procedure can give a significant performance improvement over simple TD(0) learning, even for relatively small m. While this does not say anything about the relative performance of TTD and the eligibility traces implementation of TD(λ), it at least suggests that the TTD procedure can be useful. The best results have been obtained for the largest λ values, including λ = 1. This observation, contradicting the results reported by Sutton (1988), may be a positive consequence of the TTD procedure's deviation from TD(λ) discussed above.

The experiments with the cart-pole balancing problem supplied empirical evidence that for a learning control problem with a very long reinforcement delay the TTD procedure can equal or outperform the eligibility traces implementation of TD(λ), even for a value of m many times less than the average duration of an episode. This performance level is obtained with the TTD procedure at a much lower computational (both memory and time) expense.

To summarize, our informal considerations and empirical results suggest that the TTD procedure may have the following advantages:

- the possibility of the implementation of reinforcement learning algorithms that may be viewed as instantiations of TD(λ), using λ > 0 for faster learning,
- computational efficiency: low memory requirements (for reasonable m) and little computation per time step,
- generality: compatibility with various function representation methods,
- good approximation of TD(λ) for λ < 1 (or for λ = 1 and γ < 1),
- good practical performance, even for relatively small m.

There seems to be one important drawback: the lack of theoretical analysis and a convergence proof. We do not know either what parameter values assure convergence or what values make it impossible. In particular, no estimate is available of the potentially harmful effects of using too large an m. Both the advantages and the drawbacks make the TTD procedure an interesting and promising subject for further work. This work should concentrate, on one hand, on examining the theoretical properties of this technique, and, on the other hand, on empirical studies investigating the performance of various TD-based reinforcement learning algorithms implemented within the TTD framework on a variety of problems, in particular in stochastic domains." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "I wish to thank the anonymous reviewers of this paper for many insightful comments. I was unable to follow all their suggestions, but they contributed much to improving the paper's clarity. Thanks also to Rich Sutton, whose assistance during the preparation of the final version of this paper was invaluable.

This research was partially supported by the Polish Committee for Scientific Research under Grant 8 S503 019 05." }, { "figure_ref": [], "heading": "Appendix A. Car Parking Problem Details", "publication_ref": [], "table_ref": [], "text": "The motion of the car in the experiments of Section 5.1 is simulated by applying motion equations at each time step, where r denotes the turn radius, v the car's velocity, and Δt the simulation time step. In the experiments r = 5 m was used for the 'turn left' action, r = -5 m for 'turn right', and r = 0 for 'drive straight on'. The velocity was constant and set to 1 m/s, and the simulation time step Δt = 0.5 s was used. With these parameter settings, the shortest possible path from the car's initial location (x = 6.15 m, y = 10.47 m, θ = 3.7 rad) to the garage requires 21 steps.

At each step, after determining the current x, y, and θ values, the coordinates of the car's corners are computed. Then the test for intersection of each side of the car with the lines delimiting the driving area and the garage is performed to determine whether a failure occurred. If the result is negative, the test is performed for each corner of the car whether it is inside the garage, to determine if a success occurred.
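Because the exact motion equations did not survive in this text, the following Python sketch shows only one standard discretization of straight-line and fixed-radius arc motion that is consistent with the parameters above; it is an assumption for illustration, not a reconstruction of the paper's equations.

import math

def move(x, y, theta, action, v=1.0, dt=0.5, radius=5.0):
    # One simulation step. 'left' follows an arc of radius +r, 'right' of -r.
    if action == "straight":
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta
    r = radius if action == "left" else -radius
    # Centre of the turning circle lies perpendicular to the car's heading.
    cx, cy = x - r * math.sin(theta), y + r * math.cos(theta)
    theta2 = theta + v * dt / r
    return cx + r * math.sin(theta2), cy - r * math.cos(theta2), theta2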
" }, { "figure_ref": [], "heading": "Appendix B. Cart-Pole Balancing Problem Details", "publication_ref": [], "table_ref": [], "text": "The dynamics of the cart-pole system are described by the standard equations of motion used by Barto et al. (1983):

θ̈ = [g sin θ + cos θ (-F - m_p l θ̇² sin θ) / (m_c + m_p)] / [l (4/3 - m_p cos² θ / (m_c + m_p))],
ẍ = [F + m_p l (θ̇² sin θ - θ̈ cos θ)] / (m_c + m_p),

where F is the applied force, m_c and m_p are the masses of the cart and the pole, l is a half of the pole length, and g is the acceleration due to gravity. The equations were simulated using Euler's method with simulation time step Δt = 0.02 s." } ]
[ { "authors": "L. C. Baird III", "journal": "Wright-Patterson Air Force Base", "ref_id": "b0", "title": "Advantage updating", "year": "1993" }, { "authors": "A. G. Barto", "journal": "Van Nostrand Reinhold", "ref_id": "b1", "title": "Reinforcement learning and adaptive critic methods", "year": "1992" }, { "authors": "A. G. Barto; R. S. Sutton; C. W. Anderson", "journal": "IEEE Transactions on Systems, Man, and Cybernetics", "ref_id": "b2", "title": "Neuronlike adaptive elements that can solve difficult learning control problems", "year": "1983" }, { "authors": "A. G. Barto; R. S. Sutton; C. J. C. H. Watkins", "journal": "The MIT Press", "ref_id": "b3", "title": "Learning and sequential decision making", "year": "1990" }, { "authors": "P. Cichosz", "journal": "", "ref_id": "b4", "title": "Reinforcement learning algorithms based on the methods of temporal differences", "year": "1994" }, { "authors": "P. Dayan", "journal": "Machine Learning", "ref_id": "b5", "title": "The convergence of TD(λ) for general λ", "year": "1992" }, { "authors": "P. Dayan; T. Sejnowski", "journal": "Machine Learning", "ref_id": "b6", "title": "TD(λ) converges with probability 1", "year": "1994" }, { "authors": "M. Heger", "journal": "Morgan Kaufmann", "ref_id": "b7", "title": "Consideration of risk in reinforcement learning", "year": "1994" }, { "authors": "T. Jaakkola; M. I. Jordan; S. P. Singh", "journal": "", "ref_id": "b8", "title": "On the convergence of stochastic iterative dynamic programming algorithms", "year": "1993" }, { "authors": "A. H. Klopf", "journal": "Washington, DC: Hemisphere", "ref_id": "b9", "title": "The Hedonistic Neuron: A Theory of Memory, Learning, and Intelligence", "year": "1982" }, { "authors": "L.-J. Lin", "journal": "Machine Learning", "ref_id": "b11", "title": "Self-improving, reactive agents based on reinforcement learning, planning and teaching", "year": "1992" }, { "authors": "L.-J. Lin", "journal": "PhD thesis, Carnegie Mellon University", "ref_id": "b12", "title": "Reinforcement Learning for Robots Using Neural Networks", "year": "1993" }, { "authors": "D. Michie; R. A. Chambers", "journal": "Machine Intelligence", "ref_id": "b13", "title": "BOXES: An experiment in adaptive control", "year": "1968" }, { "authors": "A. W. Moore; C. G. Atkeson", "journal": "", "ref_id": "b14", "title": "An investigation of memory-based function approximators for learning control", "year": "1992" }, { "authors": "M. Pendrith", "journal": "", "ref_id": "b15", "title": "On reinforcement learning of control actions in noisy and non-Markovian domains", "year": "1994" }, { "authors": "J. Peng; R. J. Williams", "journal": "Morgan Kaufmann", "ref_id": "b16", "title": "Incremental multi-step Q-learning", "year": "1994" }, { "authors": "S. Ross", "journal": "Academic Press", "ref_id": "b18", "title": "Introduction to Stochastic Dynamic Programming", "year": "1983" }, { "authors": "A. Schwartz", "journal": "Morgan Kaufmann", "ref_id": "b19", "title": "A reinforcement learning method for maximizing undiscounted rewards", "year": "1993" }, { "authors": "S. P. Singh", "journal": "", "ref_id": "b21", "title": "Reinforcement learning algorithms for average-payoff Markovian decision processes", "year": "1994" }, { "authors": "R. S. Sutton", "journal": "PhD thesis, University of Massachusetts, Amherst", "ref_id": "b22", "title": "Temporal Credit Assignment in Reinforcement Learning", "year": "1984" }, { "authors": "R. S. Sutton", "journal": "Machine Learning", "ref_id": "b23", "title": "Learning to predict by the methods of temporal differences", "year": "1988" }, { "authors": "R. S. Sutton", "journal": "Morgan Kaufmann", "ref_id": "b24", "title": "Integrated architectures for learning, planning, and reacting based on approximating dynamic programming", "year": "1990" }, { "authors": "R. S. Sutton; A. G. Barto; R. J. Williams", "journal": "", "ref_id": "b26", "title": "Reinforcement learning is direct adaptive optimal control", "year": "1991" }, { "authors": "R. S. Sutton; S. P. Singh", "journal": "", "ref_id": "b27", "title": "On step-size and bias in temporal-difference learning", "year": "1994" }, { "authors": "G. Tesauro", "journal": "Machine Learning", "ref_id": "b28", "title": "Practical issues in temporal difference learning", "year": "1992" }, { "authors": "C. J. C. H. Watkins", "journal": "PhD thesis, King's College, University of Cambridge", "ref_id": "b29", "title": "Learning from Delayed Rewards", "year": "1989" }, { "authors": "C. J. C. H. Watkins; P. Dayan", "journal": "Machine Learning", "ref_id": "b30", "title": "Technical note: Q-learning", "year": "1992" } ]
[ { "formula_id": "formula_0", "formula_text": "\\Delta_x(t) = \\alpha\\,(P(x_{t+1}) - P(x_t)) \\sum_{k=0}^{t} \\lambda^{t-k} \\chi_x(k) \\quad (3)" }, { "formula_id": "formula_1", "formula_text": "\\Delta_x = \\sum_{t=0}^{m-1} \\Delta_x(t) = \\sum_{t=0}^{m-1} \\Big\\{ \\alpha\\,(P(x_{t+1}) - P(x_t)) \\sum_{k=0}^{t} \\lambda^{t-k} \\chi_x(k) \\Big\\} \\quad (4)" }, { "formula_id": "formula_2", "formula_text": "\\Delta_x(t) = \\alpha\\,(r_t + \\gamma U_t(x_{t+1}) - U_t(x_t)) \\sum_{k=0}^{t} (\\gamma\\lambda)^{t-k} \\chi_x(k) \\quad (6)" }, { "formula_id": "formula_3", "formula_text": "\\Delta_x = \\sum_{t=0}^{\\infty} \\Delta_x(t) = \\sum_{t=0}^{\\infty} \\Big\\{ \\alpha\\,(r_t + \\gamma U_t(x_{t+1}) - U_t(x_t)) \\sum_{k=0}^{t} (\\gamma\\lambda)^{t-k} \\chi_x(k) \\Big\\} \\quad (7)" }, { "formula_id": "formula_4", "formula_text": "e_x(t) = \\sum_{k=0}^{t} (\\gamma\\lambda)^{t-k} \\chi_x(k)" }, { "formula_id": "formula_5", "formula_text": "e_x(0) = 1 \\text{ if } x_0 = x, \\text{ else } 0; \\qquad e_x(t) = \\gamma\\lambda\\, e_x(t-1) + 1 \\text{ if } x_t = x, \\text{ else } \\gamma\\lambda\\, e_x(t-1) \\quad (8)" }, { "formula_id": "formula_6", "formula_text": "\\bar{\\delta}_{x_t} = r_t + \\gamma U_t(x_{t+1}) - U_t(x_t) + \\gamma\\lambda\\,[r_{t+1} + \\gamma U_{t+1}(x_{t+2}) - U_{t+1}(x_{t+1})] + (\\gamma\\lambda)^2 [r_{t+2} + \\gamma U_{t+2}(x_{t+3}) - U_{t+2}(x_{t+2})] + \\cdots = \\sum_{k=0}^{\\infty} (\\gamma\\lambda)^k [r_{t+k} + \\gamma U_{t+k}(x_{t+k+1}) - U_{t+k}(x_{t+k})]" }, { "formula_id": "formula_7", "formula_text": "\\delta^0_t = r_t + \\gamma U_t(x_{t+1}) - U_t(x_t) \\quad (10)" }, { "formula_id": "formula_8", "formula_text": "\\delta^\\lambda_t = \\sum_{k=0}^{\\infty} (\\gamma\\lambda)^k [r_{t+k} + \\gamma U_{t+k}(x_{t+k+1}) - U_{t+k}(x_{t+k})] = \\sum_{k=0}^{\\infty} (\\gamma\\lambda)^k \\delta^0_{t+k} \\quad (11)" }, { "formula_id": "formula_9", "formula_text": "\\Delta_x = \\alpha \\sum_{t=0}^{\\infty} \\delta^\\lambda_t \\chi_x(t) \\quad (12)" }, { "formula_id": "formula_10", "formula_text": "\\Delta_x = \\alpha \\sum_{t=0}^{\\infty} \\delta^0_t \\sum_{k=0}^{t} (\\gamma\\lambda)^{t-k} \\chi_x(k) = \\alpha \\sum_{t=0}^{\\infty} \\sum_{k=0}^{t} (\\gamma\\lambda)^{t-k} \\delta^0_t \\chi_x(k) \\quad (13)" }, { "formula_id": "formula_11", "formula_text": "\\Delta_x = \\alpha \\sum_{k=0}^{\\infty} \\sum_{t=k}^{\\infty} (\\gamma\\lambda)^{t-k} \\delta^0_t \\chi_x(k) \\quad (14)" }, { "formula_id": "formula_12", "formula_text": "\\Delta_x = \\alpha \\sum_{t=0}^{\\infty} \\sum_{k=t}^{\\infty} (\\gamma\\lambda)^{k-t} \\delta^0_k \\chi_x(t) = \\alpha \\sum_{t=0}^{\\infty} \\sum_{k=0}^{\\infty} (\\gamma\\lambda)^k \\delta^0_{t+k} \\chi_x(t) = \\alpha \\sum_{t=0}^{\\infty} \\delta^\\lambda_t \\chi_x(t) \\quad (15)" }, { "formula_id": "formula_13", "formula_text": "z_t = \\sum_{k=0}^{\\infty} \\gamma^k r_{t+k}" }, { "formula_id": "formula_14", "formula_text": "z^{[m]}_t = \\sum_{k=0}^{m-1} \\gamma^k r_{t+k}" }, { "formula_id": "formula_15", "formula_text": "z^{(m)}_t = \\sum_{k=0}^{m-1} \\gamma^k r_{t+k} + \\gamma^m U_{t+m-1}(x_{t+m})" }, { "formula_id": "formula_16", "formula_text": "\\delta^\\lambda_t = \\sum_{k=0}^{\\infty} (\\gamma\\lambda)^k [r_{t+k} + \\gamma(1-\\lambda)U_{t+k}(x_{t+k+1}) + \\gamma\\lambda U_{t+k}(x_{t+k+1}) - U_{t+k}(x_{t+k})] = \\sum_{k=0}^{\\infty} (\\gamma\\lambda)^k [r_{t+k} + \\gamma(1-\\lambda)U_{t+k}(x_{t+k+1})] - U_t(x_t) + \\sum_{k=1}^{\\infty} (\\gamma\\lambda)^k [U_{t+k-1}(x_{t+k}) - U_{t+k}(x_{t+k})] \\quad (16)" }, { "formula_id": "formula_17", "formula_text": "\\delta^1_t = \\sum_{k=0}^{\\infty} \\gamma^k r_{t+k} - U_t(x_t) + \\sum_{k=1}^{\\infty} \\gamma^k [U_{t+k-1}(x_{t+k}) - U_{t+k}(x_{t+k})] = z_t - U_t(x_t) + \\sum_{k=1}^{\\infty} \\gamma^k [U_{t+k-1}(x_{t+k}) - U_{t+k}(x_{t+k})]" }, { "formula_id": "formula_18", "formula_text": "z^\\lambda_t = (1-\\lambda) \\sum_{k=0}^{\\infty} \\lambda^k z^{(k+1)}_t = \\sum_{k=0}^{\\infty} (\\gamma\\lambda)^k [r_{t+k} + \\gamma(1-\\lambda)U_{t+k}(x_{t+k+1})] \\quad (17)" }, { "formula_id": "formula_19", "formula_text": "\\delta^\\lambda_t = z^\\lambda_t - U(x_t) \\quad (18)" }, { "formula_id": "formula_21", "formula_text": "D_t = \\delta^\\lambda_t - (z^\\lambda_t - U_t(x_t)) = \\sum_{k=1}^{\\infty} (\\gamma\\lambda)^k [U_{t+k-1}(x_{t+k}) - U_{t+k}(x_{t+k})] \\quad (20)" }, { "formula_id": "formula_22", "formula_text": "U_{t+1}(x_t) - U_t(x_t) = \\alpha(z^\\lambda_t - U_t(x_t)) + \\alpha D_t = \\alpha(z - U_t(x_t)) + \\alpha \\sum_{k=1}^{\\infty} (\\gamma\\lambda)^k [U_{t+k-1}(x_{t+k}) - U_{t+k}(x_{t+k})] \\approx \\alpha(z - U_t(x_t)) - \\alpha^2 \\sum_{k=1}^{\\infty} (\\gamma\\lambda)^k \\delta_{t+k-1} \\quad (21)" }, { "formula_id": "formula_23", "formula_text": "U_{t+1}(x_t) - U_t(x_t) = \\alpha(z - U_t(x_t)) - \\alpha^2 \\sum_{k=1}^{\\infty} (\\gamma\\lambda)^k \\delta_{t+k-1} \\quad (22)" }, { "formula_id": "formula_25", "formula_text": "\\delta^{\\lambda,m}_t = \\sum_{k=0}^{m-1} (\\gamma\\lambda)^k \\delta^0_{t+k} \\quad (23)" }, { "formula_id": "formula_27", "formula_text": "z^{\\lambda,m}_t = \\sum_{k=0}^{m-2} (\\gamma\\lambda)^k [r_{t+k} + \\gamma(1-\\lambda)U_{t+k}(x_{t+k+1})] + (\\gamma\\lambda)^{m-1} [r_{t+m-1} + \\gamma U_{t+m-1}(x_{t+m})] = \\sum_{k=0}^{m-1} (\\gamma\\lambda)^k [r_{t+k} + \\gamma(1-\\lambda)U_{t+k}(x_{t+k+1})] + (\\gamma\\lambda)^m U_{t+m-1}(x_{t+m}) \\quad (24)" }, { "formula_id": "formula_28", "formula_text": "for k' = 0, 1, ..., m-1 do: (a) for k = k', k'+1, ..., m-1 do { if k = k' then z := r[k] + \\gamma u[k] else z := r[k] + \\gamma(\\lambda z + (1-\\lambda)u[k]) }; (b) update(U, x[m-1], a[m-1], z - U(x[m-1]))" }, { "formula_id": "formula_29", "formula_text": "v := V(x[m-1]); update(V, x[m-1], z - v); update(f, x[m-1], a[m-1], z - v)" }, { "formula_id": "formula_30", "formula_text": "update(Q, x[m-1], a[m-1], z - Q(x[m-1], a[m-1]))" } ]
Truncating Temporal Differences: On the Efficient Implementation of TD(λ) for Reinforcement Learning
Temporal difference (TD) methods constitute a class of methods for learning predictions in multi-step prediction problems, parameterized by a recency factor λ. Currently the most important application of these methods is to temporal credit assignment in reinforcement learning. Well known reinforcement learning algorithms, such as AHC or Q-learning, may be viewed as instances of TD learning. This paper examines the issues of the efficient and general implementation of TD(λ) for arbitrary λ, for use with reinforcement learning algorithms optimizing the discounted sum of rewards. The traditional approach, based on eligibility traces, is argued to suffer from both inefficiency and lack of generality. The TTD (Truncated Temporal Differences) procedure is proposed as an alternative, that indeed only approximates TD(λ), but requires very little computation per action and can be used with arbitrary function representation methods. The idea from which it is derived is fairly simple and not new, but probably unexplored so far. Encouraging experimental results are presented, suggesting that using λ > 0 with the TTD procedure allows one to obtain a significant learning speedup at essentially the same cost as usual TD(0) learning.
Paweł Cichosz
[ { "figure_caption": "Figure 2: The reset operation for the TTD(λ, m) procedure.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3: The car parking problem. The scale of all dimensions is preserved: w = 2 m, l = 4 m, x_0 = 1.5 m, x_G = 1.5 m, x_1 = 8.5 m, y_0 = 3 m, y_G = 3 m, y_1 = 13 m.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Table 2: Parameter settings for the experiments with the car parking problem.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4: The car parking problem, learning curves for study 1.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5: The car parking problem, learning curves for study 2.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6: The cart-pole system. F is the force applied to the cart's center, l is a half of the pole length, and d is a half of the length of the track.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7: The cart-pole balancing problem, learning curve in (a) linear and (b) logarithmic scale.", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b26", "b17", "b18", "b34", "b36", "b39", "b12", "b17", "b18", "b34", "b36", "b16", "b3", "b7", "b4", "b20", "b24", "b10", "b37", "b27", "b16", "b29", "b28", "b31", "b24", "b26", "b18", "b34", "b36", "b16" ], "table_ref": [], "text": "The prototypical example of the problem of cost-sensitive classification is medical diagnosis, where a doctor would like to balance the costs of various possible medical tests with the expected benefits of the tests for the patient. There are several aspects to this problem: When does the benefit of a test, in terms of more accurate diagnosis, justify the cost of the test? When is it time to stop testing and make a commitment to a particular diagnosis? How much time should be spent pondering these issues? Does an extensive examination of the various possible sequences of tests yield a significant improvement over a simpler, heuristic choice of tests? These are some of the questions investigated here.

The words "cost", "expense", and "benefit" are used in this paper in the broadest sense, to include factors such as quality of life, in addition to economic or monetary cost. Cost is domain-specific and is quantified in arbitrary units. It is assumed here that the costs of tests are measured in the same units as the benefits of correct classification. Benefit is treated as negative cost.

This paper introduces a new algorithm for cost-sensitive classification, called ICET (Inexpensive Classification with Expensive Tests - pronounced "iced tea"). ICET uses a genetic algorithm (Grefenstette, 1986) to evolve a population of biases for a decision tree induction algorithm (a modified version of C4.5, Quinlan, 1992). The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET has the following features: (1) It is sensitive to test costs. (2) It is sensitive to classification error costs. (3) It combines a greedy search heuristic with a genetic search algorithm. (4) It can handle conditional costs, where the cost of one test is conditional on whether a second test has been selected yet. (5) It distinguishes tests with immediate results from tests with delayed results.

The problem of cost-sensitive classification arises frequently. It is a problem in medical diagnosis (Núñez, 1988, 1991), robotics (Tan & Schlimmer, 1989, 1990; Tan, 1993), industrial production processes (Verdenius, 1991), communication network troubleshooting (Lirov & Yue, 1991), machinery diagnosis (where the main cost is skilled labor), automated testing of electronic equipment (where the main cost is time), and many other areas.

There are several machine learning algorithms that consider the costs of tests, such as EG2 (Núñez, 1988, 1991), CS-ID3 (Tan & Schlimmer, 1989, 1990; Tan, 1993), and IDX (Norton, 1989). There are also several algorithms that consider the costs of classification errors (Breiman et al., 1984; Friedman & Stuetzle, 1981; Hermans et al., 1974; Gordon & Perlis, 1989; Pazzani et al., 1994; Provost, 1994; Provost & Buchanan, in press; Knoll et al., 1994). However, there is very little work that considers both costs together.

There are good reasons for considering both the costs of tests and the costs of classification errors.
An agent cannot rationally determine whether a test should be performed without knowing the costs of correct and incorrect classification. An agent must balance the cost of each test with the contribution of the test to accurate classification. The agent must also consider when further testing is not economically justified. It often happens that the benefits of further testing are not worth the costs of the tests. This means that a cost must be assigned to both the tests and the classification errors.

Another limitation of many existing cost-sensitive classification algorithms (EG2, CS-ID3) is that they use greedy heuristics, which select at each step whatever test contributes most to accuracy and least to cost. A more sophisticated approach would evaluate the interactions among tests in a sequence of tests. A test that appears useful considered in isolation, using a greedy heuristic, may not appear as useful when considered in combination with other tests. Past work has demonstrated that more sophisticated algorithms can have superior performance (Tcheng et al., 1989; Ragavan & Rendell, 1993; Norton, 1989; Schaffer, 1993; Rymon, 1993; Seshu, 1989; Provost, 1994; Provost & Buchanan, in press).

Section 2 discusses why a decision tree is the natural form of knowledge representation for classification with expensive tests and how we measure the average cost of classification of a decision tree. Section 3 introduces the five algorithms that we examine here, C4.5 (Quinlan, 1992), EG2 (Núñez, 1991), CS-ID3 (Tan & Schlimmer, 1989, 1990; Tan, 1993), IDX (Norton, 1989), and ICET. The five algorithms are evaluated empirically on five real-world medical datasets. The datasets are discussed in detail in Appendix A. Section 4 presents three sets of experiments. The first set (Section 4.1) of experiments examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors for the given datasets. The second set (Section 4.2) tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set (Section 4.3) looks at ICET's search in bias space and discovers a way to improve the search. We then discuss related work and future work in Section 5. We end with a summary of what we have learned with this research and a statement of the general motivation for this type of research." }, { "figure_ref": [], "heading": "Cost-Sensitive Classification", "publication_ref": [], "table_ref": [], "text": "This section first explains why a decision tree is the natural form of knowledge representation for classification with expensive tests. It then discusses how we measure the average cost of classification of a decision tree. Our method for measuring average cost handles aspects of the problem that are typically ignored. The method can be applied to any standard classification decision tree, regardless of how the tree is generated. We end with a discussion of the relation between cost and accuracy." }, { "figure_ref": [], "heading": "Decision Trees and Cost-Sensitive Classification", "publication_ref": [ "b22", "b26", "b17", "b18", "b36", "b16", "b22" ], "table_ref": [], "text": "The decision trees used in decision theory (Pearl, 1988) are somewhat different from the classification decision trees that are typically used in machine learning (Quinlan, 1992). When we refer to decision trees in this paper, we mean the standard classification decision trees of machine learning.
The claims we make here about classification decision trees also apply to decision theoretical decision trees, with some modification. A full discussion of decision theoretical decision trees is outside the scope of this paper.

The decision to do a test must be based on both the cost of tests and the cost of classification errors. If a test costs $10 and the maximum penalty for a classification error is $5, then there is clearly no point in doing the test. On the other hand, if the penalty for a classification error is $10,000, the test may be quite worthwhile, even if its information content is relatively low. Past work with algorithms that are sensitive to test costs (Núñez, 1988, 1991; Tan, 1993; Norton, 1989) has overlooked the importance of also considering the cost of classification errors.

When tests are inexpensive, relative to the cost of classification errors, it may be rational to do all tests (i.e., measure all features; determine the values of all attributes) that seem possibly relevant. In this kind of situation, it is convenient to separate the selection of tests from the process of making a classification. First we can decide on the set of tests that are relevant, then we can focus on the problem of learning to classify a case, using the results of these tests. This is a common approach to classification in the machine learning literature. Often a paper focuses on the problem of learning to classify a case, without any mention of the decisions involved in selecting the set of relevant tests.1

When tests are expensive, relative to the cost of classification errors, it may be suboptimal to separate the selection of tests from the process of making a classification. We may be able to achieve much lower costs by interleaving the two. First we choose a test, then we examine the test result. The result of the test gives us information, which we can use to influence our choice for the next test. At some point, we decide that the cost of further tests is not justified, so we stop testing and make a classification.

When the selection of tests is interleaved with classification in this way, a decision tree is the natural form of representation. The root of the decision tree represents the first test that we choose. The next level of the decision tree represents the next test that we choose. The decision tree explicitly shows how the outcome of the first test determines the choice of the second test. A leaf represents the point at which we decide to stop testing and make a classification.

Decision theory can be used to define what constitutes an optimal decision tree, given (1) the costs of the tests, (2) the costs of classification errors, (3) the conditional probabilities of test results, given sequences of prior test results, and (4) the conditional probabilities of classes, given sequences of test results. However, searching for an optimal tree is infeasible (Pearl, 1988). ICET was designed to find a good (but not necessarily optimal) tree, where "good" is defined as "better than the competition" (i.e., IDX, CS-ID3, and EG2)." }, { "figure_ref": [], "heading": "Calculating the Average Cost of Classification", "publication_ref": [], "table_ref": [ "tab_0", "tab_1", "tab_0", "tab_1" ], "text": "In this section, we describe how we calculate the average cost of classification for a decision tree, given a set of testing data.
The method described here is applied uniformly to the decision trees generated by the five algorithms examined here (EG2, CS-ID3, IDX, C4.5, and ICET). The method assumes only a standard classification decision tree (such as generated by C4.5); it makes no assumptions about how the tree is generated. The purpose of the method is to give a plausible estimate of the average cost that can be expected in a real-world application of the decision tree.

We assume that the dataset has been split into a training set and a testing set. The expected cost of classification is estimated by the average cost of classification for the testing set. The average cost of classification is calculated by dividing the total cost for the whole testing set by the number of cases in the testing set. The total cost includes both the costs of tests and the costs of classification errors. In the simplest case, we assume that we can specify test costs simply by listing each test, paired with its corresponding cost. More complex cases will be considered later in this section. We assume that we can specify the costs of classification errors using a classification cost matrix.

Suppose there are c distinct classes. A classification cost matrix is a c × c matrix, where the element C_{i,j} is the cost of guessing that a case belongs in class i, when it actually belongs in class j. We do not need to assume any constraints on this matrix, except that costs are finite, real values. We allow negative costs, which can be interpreted as benefits. However, in the experiments reported here, we have restricted our attention to classification cost matrices in which the diagonal elements are zero (we assume that correct classification has no cost) and the off-diagonal elements are positive numbers.2

To calculate the cost of a particular case, we follow its path down the decision tree. We add up the cost of each test that is chosen (i.e., each test that occurs in the path from the root to the leaf). If the same test appears twice, we only charge for the first occurrence of the test. For example, one node in a path may say "patient age is less than 10 years" and another node may say "patient age is more than 5 years", but we only charge once for the cost of determining the patient's age. The leaf of the tree specifies the tree's guess for the class of the case. Given the actual class of the case, we use the cost matrix to determine the cost of the tree's guess. This cost is added to the costs of the tests, to determine the total cost of classification for the case. This is the core of our method for calculating the average cost of classification of a decision tree. There are two additional elements to the method, for handling conditional test costs and delayed test results.

We allow the cost of a test to be conditional on the choice of prior tests. Specifically, we consider the case where a group of tests shares a common cost. For example, a set of blood tests shares the common cost of collecting blood from the patient. This common cost is charged only once, when the decision is made to do the first blood test. There is no charge for collecting blood for the second blood test, since we may use the blood that was collected for the first blood test. Thus the cost of a test in this group is conditional on whether another member of the group has already been chosen.
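A compact Python sketch of this core computation (setting delayed tests aside for the moment) is given below. The tree and case representations are illustrative assumptions, not ICET's data structures.

def case_cost(tree, case, actual_class, test_cost, group_of, group_cost, C):
    # Walk one case down the tree, charging each test once and charging a
    # group's common cost only for the first test chosen from that group.
    total, done_tests, done_groups = 0.0, set(), set()
    node = tree
    while not node["leaf"]:
        t = node["test"]
        if t not in done_tests:
            total += test_cost[t]
            g = group_of.get(t)
            if g is not None and g not in done_groups:
                total += group_cost[g]          # the shared cost, charged once
                done_groups.add(g)
            done_tests.add(t)
        node = node["branches"][case[t]]        # follow the branch for this case
    guess = node["class"]
    return total + C[guess][actual_class]       # C[i][j]: guess i, actual j

Averaging case_cost over the testing set gives the average cost of classification described above.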
Common costs appear frequently in testing. For example, in diagnosis of an aircraft engine, a group of tests may share the common cost of removing the engine from the plane and installing it in a test cell. In semiconductor manufacturing, a group of tests may share the common cost of reserving a region on the silicon wafer for a special test structure. In image recognition, a group of image processing algorithms may share a common preprocessing algorithm. These examples show that a realistic assessment of the cost of using a decision tree will frequently need to make allowances for conditional test costs.

It often happens that the result of a test is not available immediately. For example, a medical doctor typically sends a blood test to a laboratory and gets the result the next day. We allow a test to be labelled either "immediate" or "delayed". If a test is delayed, we cannot use its outcome to influence the choice of the next test. For example, if blood tests are delayed, then we cannot allow the outcome of one blood test to play a role in the decision to do a second blood test. We must make a commitment to doing (or not doing) the second blood test before we know the results of the first blood test.

Delayed tests are relatively common. For example, many medical tests must be shipped to a laboratory for analysis. In gas turbine engine diagnosis, the main fuel control is frequently shipped to a specialized company for diagnosis or repair. In any classification problem that requires multiple experts, one of the experts might not be immediately available.

We handle immediate tests in a decision tree as described above. We handle delayed tests as follows. We follow the path of a case from the root of the decision tree to the appropriate leaf. If we encounter a node, anywhere along this path, that is a delayed test, we are then committed to performing all of the tests in the subtree that is rooted at this node. Since we cannot make the decision to do tests below this node conditional on the outcome of the test at this node, we must pledge to pay for all the tests that we might possibly need to perform, from this point onwards in the decision tree.

Our method for handling delayed tests may seem a bit puzzling at first. The difficulty is that a decision tree combines a method for selecting tests with a method for classifying cases. When tests are delayed, we are forced to proceed in two phases. In the first phase, we select tests. In the second phase, we collect test results and classify the case. For example, a doctor collects blood from a patient and sends the blood to a laboratory. The doctor must tell the laboratory what tests are to be done on the blood. The next day, the doctor gets the results of the tests from the laboratory and then decides on the diagnosis of the patient. A decision tree does not naturally handle a situation like this, where the selection of tests is isolated from the classification of cases. In our method, in the first phase, the doctor uses the decision tree to select the tests. As long as the tests are immediate, there is no problem. As soon as the first delayed test is encountered, the doctor must select all the tests that might possibly be needed in the second phase.3 That is, the doctor must select all the tests in the subtree rooted at the first delayed test. In the second phase, when the test results arrive the next day, the doctor will have all the information required to go from the root of the tree to a leaf, to make a classification. The doctor must pay for all of the tests in the subtree, even though only the tests along one branch of the subtree will actually be used. The doctor does not know in advance which branch will actually be used, at the time when it is necessary to order the blood tests. The laboratory that does the blood tests will naturally want the doctor to pay for all the tests that were ordered, even if they are not all used in making the diagnosis.
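Extending the earlier sketch to delayed tests only requires charging, at the first delayed node on a path, for every distinct test in that node's subtree; the recursive helper below is again an illustrative assumption.

def subtree_test_cost(node, test_cost, group_of, group_cost, done_tests, done_groups):
    # Cost of all tests that might still be needed at or below a delayed node,
    # charging each test once and each group's common cost once.
    if node["leaf"]:
        return 0.0
    total, t = 0.0, node["test"]
    if t not in done_tests:
        total += test_cost[t]
        done_tests.add(t)
        g = group_of.get(t)
        if g is not None and g not in done_groups:
            total += group_cost[g]
            done_groups.add(g)
    for child in node["branches"].values():
        total += subtree_test_cost(child, test_cost, group_of,
                                   group_cost, done_tests, done_groups)
    return total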
In general, it makes sense to do all of the desired immediate tests before we do any of the desired delayed tests, since the outcome of an immediate test can be used to influence the decision to do a delayed test, but not vice versa. For example, a medical doctor will question a patient (questions are immediate tests) before deciding what blood tests to order (blood tests are delayed tests).4 When all of the tests are delayed (as they are in the BUPA data in Appendix A.1), we must decide in advance (before we see any test results) what tests are to be performed. For a given decision tree, the total cost of tests will be the same for all cases. In situations of this type, the problem of minimizing cost simplifies to the problem of choosing the best subset of the set of available tests (Aha and Bankert, 1994). The sequential order of the tests is no longer important for reducing cost.

Let us consider a simple example to illustrate the method. Table 1 shows the test costs for four tests. Two of the tests are immediate and two are delayed. The two delayed tests share a common cost of $2.00. There are two classes, 0 and 1. Table 2 shows the classification cost matrix. Figure 1 shows a decision tree. Table 3 traces the path through the tree for a particular case and shows how the cost is calculated. The first step is to do the test at the root of the tree (test alpha). In the second step, we encounter a delayed test (delta), so we must calculate the cost of the entire subtree rooted at this node. Note that epsilon only costs $8.00, since we have already selected delta, and delta and epsilon have a common cost. In the third step, we do test epsilon, but we do not need to pay, since we already paid in the second step. In the fourth step, we guess the class of the case. Unfortunately, we guess incorrectly, so we pay a penalty of $50.00.

4. In the real world, there are many factors that can influence the sequence of tests, such as the length of the delay and the probability that the delayed test will be needed. When we ignore these many factors and pay attention only to the simplified model presented here, it makes sense to do all of the desired immediate tests before we do any of the desired delayed tests. We do not know to what extent this actually occurs in the real world. One complication is that medical doctors in most industrialized countries are not directly affected by the cost of the tests they select. In fact, fear of law suits gives them incentive to order unnecessary tests.

In summary, this section presents a method for estimating the average cost of using a given decision tree. The decision tree can be any standard classification decision tree; no special assumptions are made about the tree; it does not matter how the tree was generated. The method requires (1) a decision tree (Figure 1), (2) information on the calculation of test costs (Table 1), (3) a classification cost matrix (Table 2), and (4) a set of testing data (Table 3).
The method is (i) sensitive to the cost of tests, (ii) sensitive to the cost of classification errors, (iii) capable of handling conditional test costs, and (iv) capable of handling delayed tests. In the experiments reported in Section 4, this method is applied uniformly to all five algorithms." }, { "figure_ref": [], "heading": "Cost and Accuracy", "publication_ref": [ "b3", "b7", "b4", "b20", "b24", "b10", "b26", "b24", "b24" ], "table_ref": [], "text": "Our method for calculating cost does not explicitly deal with accuracy; however, we can handle accuracy as a special case. If the test cost is set to $0.00 for all tests and the classification cost matrix is set to a positive constant value k when the guess class i does not equal the actual class j, but it is set to $0.00 when i equals j, then the average total cost of using the decision tree is $pk$, where $p \in [0, 1]$ is the frequency of errors on the testing dataset and $100(1 - p)$ is the percentage accuracy on the testing dataset. Thus there is a linear relationship between average total cost and percentage accuracy, in this situation.

More generally, let C be a classification cost matrix that has cost x on the diagonal, $C_{i,i} = x$, and cost y off the diagonal, $C_{i,j} = y$ for $i \neq j$, where x is less than y, $x < y$. We will call this type of classification cost matrix a simple classification cost matrix. A cost matrix that is not simple will be called a complex classification cost matrix. 5 When we have a simple cost matrix and test costs are zero (equivalently, test costs are ignored), minimizing cost is exactly equivalent to maximizing accuracy.

It follows from this that an algorithm that is sensitive to misclassification error costs but ignores test costs (Breiman et al., 1984; Friedman & Stuetzle, 1981; Hermans et al., 1974; Gordon & Perlis, 1989; Pazzani et al., 1994; Provost, 1994; Provost & Buchanan, in press; Knoll et al., 1994) will only be interesting when we have a complex cost matrix. If we have a simple cost matrix, an algorithm such as CART (Breiman et al., 1984) that is sensitive to misclassification error cost has no advantage over an algorithm such as C4.5 (Quinlan, 1992) that maximizes accuracy (assuming other differences between these two algorithms are negligible). Most of the experiments in this paper use a simple cost matrix (the only exception is Section 4.2.3). Therefore we focus on comparison of ICET with algorithms that are sensitive to test cost (IDX, CS-ID3, and EG2). In future work, we will examine complex cost matrices and compare ICET with algorithms that are sensitive to misclassification error cost.

It is difficult to find information on the costs of misclassification errors in medical practice, but it seems likely that a complex cost matrix is more appropriate than a simple cost matrix for most medical applications. This paper focuses on simple cost matrices because, as a research strategy, it seems wise to start with the simple cases before we attempt the complex cases.

Provost (Provost, 1994; Provost & Buchanan, in press) combines accuracy and classification error cost using the following formula:

(1) $\mathrm{score} = A \cdot \mathrm{accuracy} - B \cdot \mathrm{cost}$

In this formula, A and B are arbitrary weights that the user can set for a particular application. Both "accuracy" and "cost", as defined by Provost (Provost, 1994; Provost & Buchanan, in press), can be represented using classification cost matrices. We can represent "accuracy" using any simple cost matrix. In interesting applications, "cost" will be represented by a complex cost matrix.
Thus "score" is a weighted sum of two classification cost matrices, which means that "score" is itself a classification cost matrix. This shows that equation (1) can be handled as a special case of the method presented here. There is no loss of information in this translation of Provost's formula into a cost matrix. This does not mean that all criteria can be represented as costs. An example of a criterion that cannot be represented as a cost is stability (Turney, in press)." }, { "figure_ref": [], "heading": "Algorithms", "publication_ref": [ "b26", "b18", "b34", "b36", "b16" ], "table_ref": [], "text": "This section discusses the algorithms used in this paper: C4.5 (Quinlan, 1992), EG2 (Núñez, 1991), CS-ID3 (Tan & Schlimmer, 1989, 1990; Tan, 1993), IDX (Norton, 1989), and ICET.

5. We will occasionally say "simple cost matrix" or "complex cost matrix". This should not cause confusion, since test costs are not represented with a matrix.

3.1 C4.5

C4.5 (Quinlan, 1992) builds a decision tree using the standard TDIDT (top-down induction of decision trees) approach, recursively partitioning the data into smaller subsets, based on the value of an attribute. At each step in the construction of the decision tree, C4.5 selects the attribute that maximizes the information gain ratio. The induced decision tree is pruned using pessimistic error estimation (Quinlan, 1992). There are several parameters that can be adjusted to alter the behavior of C4.5. In our experiments with C4.5, we used the default settings for all parameters. We used the C4.5 source code that is distributed with (Quinlan, 1992)." }, { "figure_ref": [], "heading": "EG2", "publication_ref": [ "b18", "b18", "b18" ], "table_ref": [], "text": "EG2 (Núñez, 1991) is a TDIDT algorithm that uses the Information Cost Function (ICF) (Núñez, 1991) for selection of attributes. ICF selects attributes based on both their information gain and their cost. We implemented EG2 by modifying the C4.5 source code so that ICF was used instead of information gain ratio.

ICF for the i-th attribute, $\mathrm{ICF}_i$, is defined as follows: 6

(2) $\mathrm{ICF}_i = \dfrac{2^{\Delta I_i} - 1}{(C_i + 1)^{\omega}}$, where $0 \leq \omega \leq 1$

In this equation, $\Delta I_i$ is the information gain associated with the i-th attribute at a given stage in the construction of the decision tree and $C_i$ is the cost of measuring the i-th attribute. C4.5 selects the attribute that maximizes the information gain ratio, which is a function of the information gain $\Delta I_i$. We modified C4.5 so that it selects the attribute that maximizes $\mathrm{ICF}_i$. The parameter $\omega$ adjusts the strength of the bias towards lower cost attributes. When $\omega = 0$, cost is ignored and selection by $\mathrm{ICF}_i$ is equivalent to selection by $\Delta I_i$. When $\omega = 1$, $\mathrm{ICF}_i$ is strongly biased by cost. Ideally, $\omega$ would be selected in a way that is sensitive to classification error cost (this is done in ICET; see Section 3.5). Núñez (1991) does not suggest a principled way of setting $\omega$. In our experiments with EG2, $\omega$ was set to 1. In other words, we used the following selection measure:

(3) $\dfrac{2^{\Delta I_i} - 1}{C_i + 1}$

In addition to its sensitivity to the cost of tests, EG2 generalizes attributes by using an ISA tree (a generalization hierarchy). We did not implement this aspect of EG2, since it was not relevant for the experiments reported here."
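As an illustration, the selection measure can be stated in a few lines of Python (our own paraphrase; the actual implementation is a modification of the C4.5 C source). Here gains and costs are assumed to hold $\Delta I_i$ and $C_i$ for the attributes available at the current node:

def icf(gain, cost, omega=1.0):
    # Equation (2): (2**gain - 1) / (cost + 1)**omega, with omega in [0, 1].
    return (2.0 ** gain - 1.0) / ((cost + 1.0) ** omega)

def select_attribute(gains, costs, omega=1.0):
    # EG2 selects the attribute that maximizes ICF; with omega = 1 this
    # is exactly the selection measure of equation (3).
    return max(range(len(gains)), key=lambda i: icf(gains[i], costs[i], omega))

Note that when $\omega = 0$ the denominator is 1, so the ranking is by $2^{\Delta I_i} - 1$, which is monotone in $\Delta I_i$; this is why selection then reduces to selection by information gain alone.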
}, { "figure_ref": [], "heading": "CS-ID3", "publication_ref": [ "b34", "b36", "b18" ], "table_ref": [], "text": "CS-ID3 (Tan & Schlimmer, 1989, 1990; Tan, 1993) is a TDIDT algorithm that selects the attribute that maximizes the following heuristic function:

(4) $\dfrac{(\Delta I_i)^2}{C_i}$

6. This is the inverse of ICF, as defined by Núñez (1991). Núñez minimizes his criterion. To facilitate comparison with the other algorithms, we use equation (2). This criterion is intended to be maximized.

We implemented CS-ID3 by modifying C4.5 so that it selects the attribute that maximizes (4).

CS-ID3 uses a lazy evaluation strategy. It only constructs the part of the decision tree that classifies the current case. We did not implement this aspect of CS-ID3, since it was not relevant for the experiments reported here." }, { "figure_ref": [], "heading": "IDX", "publication_ref": [ "b16" ], "table_ref": [], "text": "IDX (Norton, 1989) is a TDIDT algorithm that selects the attribute that maximizes the following heuristic function:

(5) $\dfrac{\Delta I_i}{C_i}$

We implemented IDX by modifying C4.5 so that it selects the attribute that maximizes (5).

C4.5 uses a greedy search strategy that chooses at each step the attribute with the highest information gain ratio. IDX uses a lookahead strategy that looks n tests ahead, where n is a parameter that may be set by the user. We did not implement this aspect of IDX. The lookahead strategy would perhaps make IDX more competitive with ICET, but it would also complicate comparison of the heuristic function (5) with the heuristics (3) and (4) used by EG2 and CS-ID3." }, { "figure_ref": [ "fig_0", "fig_2" ], "heading": "ICET", "publication_ref": [ "b5", "b26", "b5", "b5", "b30", "b14", "b40", "b13", "b8", "b0", "b42", "b43" ], "table_ref": [ "tab_2" ], "text": "ICET is a hybrid of a genetic algorithm and a decision tree induction algorithm. The genetic algorithm evolves a population of biases for the decision tree induction algorithm. The genetic algorithm we use is GENESIS (Grefenstette, 1986). 7 The decision tree induction algorithm is C4.5 (Quinlan, 1992), modified to use ICF. That is, the decision tree induction algorithm is EG2, implemented as described in Section 3.2.

ICET uses a two-tiered search strategy. On the bottom tier, EG2 performs a greedy search through the space of decision trees, using the standard TDIDT strategy. On the top tier, GENESIS performs a genetic search through a space of biases. The biases are used to modify the behavior of EG2. In other words, GENESIS controls EG2's preference for one type of decision tree over another.

ICET does not use EG2 the way it was designed to be used. The n costs, $C_i$, used in EG2's attribute selection function, are treated by ICET as bias parameters, not as costs. That is, ICET manipulates the bias of EG2 by adjusting the parameters, $C_i$. In ICET, the values of the bias parameters, $C_i$, have no direct connection to the actual costs of the tests.

Genetic algorithms are inspired by biological evolution. The individuals that are evolved by GENESIS are strings of bits. GENESIS begins with a population of randomly generated individuals (bit strings) and then it measures the "fitness" of each individual. In ICET, an individual (a bit string) represents a bias for EG2. An individual is evaluated by running EG2 on the data, using the bias of the given individual.
The "fitness" of the individual is the average cost of classification of the decision tree that is generated by EG2. In the next generation, the population is replaced with new individuals. The new individuals are generated from the previous generation, using mutation and crossover (sex). The fittest individuals in the first generation have the most offspring in the second generation. After a fixed number of generations, ICET halts and its output is the decision tree determined by the fittest individual. Figure 2 gives a sketch of the ICET algorithm.

7. We used GENESIS Version 5.0, which is available at URL ftp://ftp.aic.nrl.navy.mil/pub/galist/src/ga/genesis.tar.Z or ftp://alife.santafe.edu/pub/USER-AREA/EC/GA/src/gensis-5.0.tar.gz.

GENESIS has several parameters that can be used to alter its performance. The parameters we used are listed in Table 4. These are essentially the default parameter settings (Grefenstette, 1986). We used a population size of 50 individuals and 1,000 trials, which results in 20 generations. An individual in the population consists of a string of $n + 2$ numbers, where n is the number of attributes (tests) in the given dataset. The numbers are represented in binary format, using a Gray code. 8 This binary string is used as a bias for EG2. The first n numbers in the string are treated as if they were the n costs, $C_i$, used in ICF (equation (2)). The first n numbers range from 1 to 10,000 and are coded with 12 binary digits each. The last two numbers in the string are used to set $\omega$ and CF. The parameter $\omega$ is used in ICF. The parameter CF is used in C4.5 to control the level of pruning of the decision tree. The last two numbers are coded with 8 binary digits each. $\omega$ ranges from 0 (cost is ignored) to 1 (maximum sensitivity to cost) and CF ranges from 1 (high pruning) to 100 (low pruning). Thus an individual is a string of $12n + 16$ bits.

8. A Gray code is a binary code that is designed to avoid "Hamming cliffs". In the standard binary code, 7 is represented as 0111 and 8 is represented as 1000. These numbers are adjacent, yet the Hamming distance from 0111 to 1000 is large. In a Gray code, adjacent numbers are represented with binary codes that have small Hamming distances. This tends to improve the performance of a genetic algorithm (Grefenstette, 1986).

Each trial of an individual consists of running EG2 (implemented as a modification to C4.5) on a given training dataset, using the numbers specified in the binary string to set $C_i$ ($i = 1, \ldots, n$), $\omega$, and CF. The training dataset is randomly split into two equal-sized subsets ($\pm 1$ for odd-sized training sets), a sub-training set and a sub-testing set. A different random split is used for each trial, so the outcome of a trial is stochastic. We cannot assume that identical individuals yield identical outcomes, so every individual must be evaluated. This means that there will be duplicate individuals in the population, with slightly different fitness scores. The measure of fitness of an individual is the average cost of classification on the sub-testing set, using the decision tree that was generated on the sub-training set. The average cost is measured as described in Section 2.2. After 1,000 trials, the most fit (lowest cost) individual is then used as a bias for EG2 with the whole training set as input. The resulting decision tree is the output of ICET for the given training dataset. 9
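The encoding just described can be made concrete with a short sketch. The Gray decoding below is standard; the linear scaling of the 12-bit and 8-bit fields onto [1, 10000], [0, 1], and [1, 100] matches the ranges stated above, but the exact scaling conventions inside GENESIS may differ in detail, so treat this as an illustrative assumption:

def gray_to_int(bits):
    # Decode a Gray-coded bit list (most significant bit first): each
    # binary bit is the XOR of the previous binary bit and the gray bit.
    b = bits[0]
    value = b
    for g in bits[1:]:
        b ^= g
        value = (value << 1) | b
    return value

def scale(x, nbits, lo, hi):
    # Map an integer in [0, 2**nbits - 1] linearly onto [lo, hi].
    return lo + x * (hi - lo) / (2 ** nbits - 1)

def decode_individual(bits, n):
    # bits is a list of 0/1 of length 12n + 16: n pseudo-costs (12 bits
    # each), then omega (8 bits), then CF (8 bits).
    costs = [scale(gray_to_int(bits[12 * i: 12 * (i + 1)]), 12, 1.0, 10000.0)
             for i in range(n)]
    omega = scale(gray_to_int(bits[12 * n: 12 * n + 8]), 8, 0.0, 1.0)
    cf = scale(gray_to_int(bits[12 * n + 8:]), 8, 1.0, 100.0)
    return costs, omega, cf

A trial then amounts to running EG2 with these pseudo-costs, $\omega$, and CF on a random half of the training data and scoring the resulting tree, by the method of Section 2.2, on the other half.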
The n costs (bias parameters), $C_i$, used in ICF, are not directly related to the true costs of the attributes. The 50 individuals in the first generation are generated randomly, so the initial values of $C_i$ have no relation to the true costs. After 20 generations, the values of $C_i$ may have some relation to the true costs, but it will not be a simple relationship. These values of $C_i$ are more appropriately thought of as biases than costs. Thus GENESIS is searching through a bias space for biases for C4.5 that result in decision trees with low average cost.

9. The 50/50 partition of sub-training and sub-testing sets could mean that ICET may not work well on small datasets. The smallest dataset of the five we examine here is the Hepatitis dataset, which has 155 cases. The training sets had 103 cases and the testing sets had 52 cases. The sub-training and sub-testing sets had 51 or 52 cases. We can see from Figure 3 that ICET performed slightly better than the other algorithms on this dataset (the difference is not significant).

The biases $C_i$ range from 1 to 10,000. When a bias $C_i$ is greater than 9,000, the i-th attribute is ignored. That is, the i-th attribute is not available for C4.5 to include in the decision tree, even if it might maximize $\mathrm{ICF}_i$. This threshold of 9,000 was arbitrarily chosen. There was no attempt to optimize this value by experimentation.

We chose to use EG2 in ICET, rather than IDX or CS-ID3, because EG2 has the parameter $\omega$, which gives GENESIS greater control over the bias of EG2. $\mathrm{ICF}_i$ is partly based on the data (via the information gain, $\Delta I_i$) and it is partly based on the bias (via the "pseudo-cost", $C_i$). The exact mix of data and bias can be controlled by varying $\omega$. Otherwise, there is no reason to prefer EG2 to IDX or CS-ID3, which could easily be used instead of EG2.

The treatment of delayed tests and conditional test costs is not "hard-wired" into EG2. It is built into the fitness function used by GENESIS, the average cost of classification (measured as described in Section 2). This makes it relatively simple to extend ICET to handle other pragmatic constraints on the decision trees.

In effect, GENESIS "lies" to EG2 about the costs of the tests. How can lies improve the performance of EG2? EG2 is a hill-climbing algorithm that can get trapped at a local optimum. It is a greedy algorithm that looks only one test ahead as it builds a decision tree.

Because it looks only one step ahead, EG2 suffers from the horizon effect. This term is taken from the literature on chess playing programs. Suppose that a chess playing program has a fixed three-move lookahead depth and it finds that it will lose its queen in three moves, if it follows a certain branch of the game tree. There may be an alternate branch where the program first sacrifices a pawn and then loses its queen in four moves. Because the loss of the queen is over its three-move horizon, the program may foolishly decide to sacrifice its pawn. One move later, it is again faced with the loss of its queen. Analogously, EG2 may try to avoid a certain expensive test by selecting a less expensive test. One test later, it is again faced with the more expensive test. After it has exhausted all the cheaper tests, it may be forced to do the expensive test, in spite of its efforts to avoid the test. GENESIS can prevent this short-sighted behavior by telling lies to EG2. GENESIS can exaggerate the cost of the cheap tests or it can understate the cost of the expensive test.
Based on past trials, GENESIS can find the lies that yield the best performance from EG2.

In ICET, learning (local search in EG2) and evolution (in GENESIS) interact. A common form of hybrid genetic algorithm uses local search to improve the individuals in a population (Schaffer et al., 1992). The improvements are then coded into the strings that represent the individuals. This is a form of Lamarckian evolution. In ICET, the improvements due to EG2 are not coded into the strings. However, the improvements can accelerate evolution by altering the fitness landscape. This phenomenon (and other phenomena that result from this form of hybrid) is known as the Baldwin effect (Baldwin, 1896; Morgan, 1896; Waddington, 1942; Maynard Smith, 1987; Hinton & Nowlan, 1987; Ackley & Littman, 1991; Whitley & Gruau, 1993; Whitley et al., 1994; Anderson, in press). The Baldwin effect may explain much of the success of ICET." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b15" ], "table_ref": [], "text": "This section describes experiments that were performed on five datasets, taken from the Irvine collection (Murphy & Aha, 1994). The five datasets are described in detail in Appendix A. All five datasets involve medical problems. The test costs are based on information from the Ontario Ministry of Health (1992). The main purpose of the experiments is to gain insight into the behavior of ICET. The other cost-sensitive algorithms, EG2, CS-ID3, and IDX, are included mainly as benchmarks for evaluating ICET. C4.5 is also included as a benchmark, to illustrate the behavior of an algorithm that makes no use of cost information. The main conclusion of these experiments is that ICET performs significantly better than its competitors, under a wide range of conditions. With access to the Irvine collection and the information in Appendix A, it should be possible for other researchers to duplicate the results reported here.

Medical datasets frequently have missing values. 10 We conjecture that many missing values in medical datasets are missing because the doctor involved in generating the dataset decided that a particular test was not economically justified for a particular patient. Thus there may be information content in the fact that a certain value is missing. There may be many reasons for missing values other than the cost of the tests. For example, perhaps the doctor forgot to order the test or perhaps the patient failed to show up for the test. However, it seems likely that there is often information content in the fact that a value is missing. For our experiments, this information content should be hidden from the learning algorithms, since using it (at least in the testing sets) would be a form of cheating. Two of the five datasets we selected had some missing data. To avoid accusations of cheating, we decided to preprocess the datasets so that the data presented to the algorithms had no missing values. This preprocessing is described in Appendices A.2 and A.3. Note that ICET is capable of handling missing values without preprocessing; it inherits this ability from its C4.5 component. We preprocessed the data only to avoid accusations of cheating, not because ICET requires preprocessed data.

For the experiments, each dataset was randomly split into 10 pairs of training and testing sets. Each training set consisted of two thirds of the dataset and each testing set consisted of the remaining one third.
The same 10 pairs were used in all experiments, in order to facilitate comparison of results across experiments.

There are three groups of experiments. The first group of experiments examines the baseline performance of the algorithms. The second group considers how robust ICET is under a variety of conditions. The final group looks at how ICET searches bias space." }, { "figure_ref": [], "heading": "Baseline Performance", "publication_ref": [], "table_ref": [], "text": "This section examines the baseline performance of the algorithms. In Section 4.1.1, we look at the average cost of classification of the five algorithms on the five datasets. Averaged across the five datasets, ICET has the lowest average cost. In Section 4.1.2, we study test expenditures and error rates as functions of the penalty for misclassification errors. Of the five algorithms studied here, only ICET adjusts its test expenditures and error rates as functions of the penalty for misclassification errors. The other four algorithms ignore the penalty for misclassification errors. ICET behaves as one would expect, increasing test expenditures and decreasing error rates as the penalty for misclassification errors rises. In Section 4.1.3, we examine the execution time of the algorithms. ICET requires 23 minutes on average on a single-processor Sparc 10. Since ICET is inherently parallel, there is significant room for speed increase on a parallel machine." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_2" ], "heading": "AVERAGE COST OF CLASSIFICATION", "publication_ref": [], "table_ref": [ "tab_3", "tab_3", "tab_3", "tab_3", "tab_3" ], "text": "The experiment presented here establishes the baseline performance of the five algorithms. The hypothesis was that ICET will, on average, perform better than the other four algorithms. The classification cost matrix was set to a positive constant value k when the guess class i does not equal the actual class j, but it was set to $0.00 when i equals j. We experimented with seven settings for k: $10, $50, $100, $500, $1000, $5000, and $10000.

Initially, we used the average cost of classification as the performance measure, but we found that there are three problems with using the average cost of classification to compare the five algorithms. First, the differences in costs among the algorithms become relatively small as the penalty for classification errors increases. This makes it difficult to see which algorithm is best. Second, it is difficult to combine the results for the five datasets in a fair manner. 11 It is not fair to average the five datasets together, since their test costs have different scales (see Appendix A). The test costs in the Heart Disease dataset, for example, are substantially larger than the test costs in the other four datasets. Third, it is difficult to combine average costs for different values of k in a fair manner, since more weight will be given to the situations where k is large than to the situations where it is small.

To address these concerns, we decided to normalize the average cost of classification. We normalized the average cost by dividing it by the standard cost. Let $f_i \in [0, 1]$ be the frequency of class i in the given dataset. That is, $f_i$ is the fraction of the cases in the dataset that belong in class i. We calculate $f_i$ using the entire dataset, not just the training set. Let $C_{i,j}$ be the cost of guessing that a case belongs in class i, when it actually belongs in class j. Let $T$ be the total cost of doing all of the possible tests.
The standard cost is defined as follows:

(6) $T + \min_i (1 - f_i) \cdot \max_{i,j} C_{i,j}$

We can decompose formula (6) into three components:

(7) $T$
(8) $\min_i (1 - f_i)$
(9) $\max_{i,j} C_{i,j}$

We may think of (7) as an upper bound on test expenditures, (8) as an upper bound on error rate, and (9) as an upper bound on the penalty for errors. The standard cost is always less than the maximum possible cost, which is given by the following formula:

(10) $T + \max_{i,j} C_{i,j}$

The point is that (8) is not really an upper bound on error rate, since it is possible to be wrong with every guess. However, our experiments suggest that the standard cost is better for normalization, since it is a more realistic (tighter) upper bound on the average cost. In our experiments, the average cost never went above the standard cost, although it occasionally came very close.

Figure 3 shows the result of using formula (6) to normalize the average cost of classification. In the plots, the x axis is the value of k and the y axis is the average cost of classification as a percentage of the standard cost of classification. We see that, on average (the sixth plot in Figure 3), ICET has the lowest classification cost. The one dataset where ICET does not perform particularly well is the Heart Disease dataset (we discuss this later, in Sections 4.3.2 and 4.3.3).

To come up with a single number that characterizes the performance of each algorithm, we averaged the numbers in the sixth plot in Figure 3. 12 We calculated 95% confidence regions for the averages, using the standard deviations across the 10 random splits of the datasets. The result is shown in Table 5.

11. We want to combine the results in order to summarize the performance of the algorithms on the five datasets. This is analogous to comparing students by calculating the GPA (Grade Point Average), where students are to courses as algorithms are to datasets.
12. Like the GPA, all datasets (courses) have the same weight. However, unlike the GPA, all algorithms (students) are applied to the same datasets (have taken the same courses). Thus our approach is perhaps more fair to the algorithms than GPA is to students.

Table 5 shows the averages for the first three misclassification error costs alone ($10, $50, and $100), in addition to showing the averages for all seven misclassification error costs ($10 to $10000). We have two averages (the two columns in Table 5), based on two groups of data, to address the following argument: As the penalty for misclassification errors increases, the cost of the tests becomes relatively insignificant. With very high misclassification error cost, the test cost is effectively zero, so the task becomes simply to maximize accuracy. As we see in Figure 3, the gap between C4.5 (which maximizes accuracy) and the other algorithms becomes smaller as the cost of misclassification error increases. Therefore the benefit of sensitivity to test cost decreases as the cost of misclassification error increases. It could be argued that one would only bother with an algorithm that is sensitive to test cost when tests are relatively expensive, compared to the cost of misclassification errors. Thus the most realistic measure of performance is to examine the average cost of classification when the cost of tests is the same order of magnitude as the cost of misclassification errors ($10 to $100).
This is why Table 5 shows both averages.

Our conclusion, based on Table 5, is that ICET performs significantly better than the other four algorithms when the cost of tests is the same order of magnitude as the cost of misclassification errors ($10, $50, and $100). When the cost of misclassification errors dominates the test costs, ICET still performs better than the competition, but the difference is less significant. The other three cost-sensitive algorithms (EG2, CS-ID3, and IDX) perform significantly better than C4.5 (which ignores cost). The performance of EG2 and IDX is indistinguishable, but CS-ID3 appears to be consistently more costly than EG2 and IDX." }, { "figure_ref": [], "heading": "TEST EXPENDITURES AND ERROR RATES AS FUNCTIONS OF THE PENALTY FOR ERRORS", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "We argued in Section 2 that expenditures on tests should be conditional on the penalty for misclassification errors. Therefore ICET is designed to be sensitive to both the cost of tests and the cost of classification errors. This leads us to the hypothesis that ICET tends to spend more on tests as the penalty for misclassification errors increases. We also expect that the error rate of ICET should decrease as test expenditures increase. These two hypotheses are confirmed in Figure 4. In the plots, the x axis is the value of k and the y axis is (1) the average expenditure on tests, expressed as a percentage of the maximum possible expenditure on tests, $T$, and (2) the average percent error rate. On average (the sixth plot in Figure 4), test expenditures rise and error rate falls as the penalty for classification errors increases. There are some minor deviations from this trend, since ICET can only guess at the value of a test (in terms of reduced error rate), based on what it sees in the training dataset. The testing dataset may not always support that guess. Note that plots for the other four algorithms, corresponding to the plots for ICET in Figure 4, would be straight horizontal lines, since all four algorithms ignore the cost of misclassification error. They generate the same decision trees for every possible misclassification error cost.

In essence, ICET works by invoking C4.5 1000 times (Section 3.5). Fortunately, Quinlan's (1992) implementation of C4.5 is quite fast. Table 6 shows the run-times for the algorithms, using a single-processor Sun Sparc 10. One full experiment takes about one week (roughly 23 minutes for an average run, multiplied by 5 datasets, multiplied by 10 random splits, multiplied by 7 misclassification error costs equals about one week). Since genetic algorithms can easily be executed in parallel, there is substantial room for speed increase with a parallel machine. Each generation consists of 50 individuals, which could be evaluated in parallel, reducing the average run-time to about half a minute." }, { "figure_ref": [], "heading": "Robustness of ICET", "publication_ref": [], "table_ref": [], "text": "This group of experiments considers how robust ICET is under a variety of conditions. Each section considers a different variation on the operating environment of ICET." }, { "figure_ref": [], "heading": "ALL TESTS IMMEDIATE", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "A critic might object that the previous experiments do not show that ICET is superior to the other algorithms due to its sensitivity to both test costs and classification error costs.
Perhaps ICET is superior simply because it can handle delayed tests, while the other algorithms treat all tests as immediate. 13 That is, the method of estimating the average classification cost (Section 2.2) is biased in favor of ICET (since ICET uses the method in its fitness function) and against the other algorithms. In this experiment, we labelled all tests as immediate. Otherwise, nothing changed from the baseline experiments. Table 7 summarizes the results of the experiment. ICET still performs well, although its advantage over the other algorithms has decreased slightly. Sensitivity to delayed tests is part of the explanation of ICET's performance, but it is not the whole story.

13. While the other algorithms cannot currently handle delayed tests, it should be possible to alter them in some way, so that they can handle delayed tests. This comment also extends to groups of tests that share a common cost. ICET might be viewed as an alteration of EG2 that enables EG2 to handle delayed tests and common costs." }, { "figure_ref": [], "heading": "NO GROUP DISCOUNTS", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Another hypothesis is that ICET is superior simply because it can handle groups of tests that share a common cost. In this experiment, we eliminated group discounts for tests that share a common cost. That is, test costs were not conditional on prior tests. Otherwise, nothing changed from the baseline experiments. Table 8 summarizes the results of the experiment. ICET maintains its advantage over the other algorithms." }, { "figure_ref": [ "fig_5", "fig_5", "fig_2", "fig_5" ], "heading": "COMPLEX CLASSIFICATION COST MATRICES", "publication_ref": [], "table_ref": [ "tab_8", "tab_9" ], "text": "So far, we have only used simple classification cost matrices, where the penalty for a classification error is the same for all types of error. This assumption is not inherent in ICET. In this experiment, we explore ICET's behavior when the classification cost matrix is complex. We use the term "positive error" to refer to a false positive diagnosis, which occurs when a patient is diagnosed as being sick, but the patient is actually healthy. Conversely, the term "negative error" refers to a false negative diagnosis, which occurs when a patient is diagnosed as being healthy, but is actually sick. The term "positive error cost" is the cost that is assigned to positive errors, while "negative error cost" is the cost that is assigned to negative errors. See Appendix A for examples. We were interested in ICET's behavior as the ratio of negative to positive error cost was varied. Table 9 shows the ratios that we examined. Figure 5 shows the performance of the five algorithms at each ratio.
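For concreteness, a hypothetical two-class complex cost matrix with a negative-to-positive error-cost ratio of 4.0 (illustrative only, not necessarily one of the matrices in Table 9) could be written as follows:

# error_cost[guess][actual], with class 0 = healthy and class 1 = sick.
# A ratio of 4.0 means that a false negative (guess healthy, actually
# sick) costs four times as much as a false positive.
positive_error_cost = 50.0
ratio = 4.0
error_cost = [
    [0.0, ratio * positive_error_cost],  # guess healthy: correct / negative error
    [positive_error_cost, 0.0],          # guess sick: positive error / correct
]

Because the two off-diagonal entries differ, this matrix is complex in the sense of Section 2.3.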
The interpretation of these plots is complicated by the fact that the gap between the algorithms tends to decrease as the penalty for classification errors increases (as we can see in Figure 3; in retrospect, we should have held the sum of the negative error cost and the positive error cost at a constant value, as we varied their ratio). However, there is clearly an asymmetry in the plots, which we expected to be symmetrical about a vertical line centered on 1.0 on the x axis. The plots are close to symmetrical for the other algorithms, but they are asymmetrical for ICET. This is also apparent in Table 10, which focuses on a comparison of the performance of ICET and EG2, averaged across all five datasets (see the sixth plot in Figure 5). This suggests that it is more difficult to reduce negative errors (on the right-hand sides of the plots, negative errors have more weight) than it is to reduce positive errors (on the left-hand sides, positive errors have more weight). That is, it is easier to avoid false positive diagnoses (a patient is diagnosed as being sick, but the patient is actually healthy) than it is to avoid false negative diagnoses (a patient is diagnosed as being healthy, but is actually sick). This is unfortunate, since false negative diagnoses usually carry a heavier penalty in real life. Preliminary investigation suggests that false negative diagnoses are harder to avoid because the "sick" class is usually less frequent than the "healthy" class, which makes the "sick" class harder to learn." }, { "figure_ref": [], "heading": "POORLY ESTIMATED CLASSIFICATION COST", "publication_ref": [], "table_ref": [ "tab_10", "tab_10" ], "text": "We believe that it is an advantage of ICET that it is sensitive to both test costs and classification error costs. However, it might be argued that it is difficult to calculate the cost of classification errors in many real-world applications. Thus it is possible that an algorithm that ignores the cost of classification errors (e.g., EG2, CS-ID3, IDX) may be more robust and useful than an algorithm that is sensitive to classification errors (e.g., ICET). To address this possibility, we examine what happens when ICET is trained with a certain penalty for classification errors, then tested with a different penalty.

Our hypothesis was that ICET would be robust to reasonable differences between the penalty during training and the penalty during testing. Table 11 shows what happens when ICET is trained with a penalty of $100 for classification errors, then tested with penalties of $50, $100, and $500. We see that ICET has the best performance of the five algorithms, although its edge is quite slight in the case where the penalty is $500 during testing.

We also examined what happens (1) when ICET is trained with a penalty of $500 and tested with penalties of $100, $500, and $1,000 and (2) when ICET is trained with a penalty of $1,000 and tested with penalties of $500, $1,000, and $5,000. The results show essentially the same pattern as in Table 11: ICET is relatively robust to differences between the training and testing penalties, at least when the penalties have the same order of magnitude. This suggests that ICET is applicable even in those situations where the reliability of the estimate of the cost of classification errors is dubious.

When the penalty for errors on the testing set is $100, ICET works best when the penalty for errors on the training set is also $100.
When the penalty for errors on the testing set is $500, ICET works best when the penalty for errors on the training set is also $500. When the penalty for errors on the testing set is $1,000, ICET works best when the penalty for errors on the training set is $500. This suggests that there might be an advantage in some situations to underestimating the penalty for errors during training. In other words, ICET may have a tendency to overestimate the benefits of tests (this is likely due to overfitting the training data)." }, { "figure_ref": [], "heading": "Searching Bias Space", "publication_ref": [], "table_ref": [], "text": "The final group of experiments analyzes ICET's method for searching in bias space. Section 4.3.1 studies the roles of the mutation and crossover operators. It appears that crossover is mildly beneficial, compared to pure mutation. Section 4.3.2 considers what happens when ICET is constrained to search in a binary bias space, instead of a real bias space. This constraint actually improves the performance of ICET. We hypothesized that the improvement was due to a hidden advantage of searching in binary bias space: When searching in binary bias space, ICET has direct access to the true costs of the tests. However, this advantage can be available when searching in real bias space, if the initial population of biases is seeded with the true costs of the tests. Section 4.3.3 shows that this seeding improves the performance of ICET." }, { "figure_ref": [], "heading": "CROSSOVER VERSUS MUTATION", "publication_ref": [ "b6", "b44", "b44", "b32", "b32" ], "table_ref": [ "tab_11" ], "text": "Past work has shown that a genetic algorithm with crossover performs better than a genetic algorithm with mutation alone (Grefenstette et al., 1990; Wilson, 1987). This section attempts to test the hypothesis that crossover improves the performance of ICET. To test this hypothesis, it is not sufficient to merely set the crossover rate to zero. Since crossover has a randomizing effect, similar to mutation, we must also increase the mutation rate, to compensate for the loss of crossover (Wilson, 1987; Spears, 1992). It is very difficult to analytically calculate the increase in mutation rate that is required to compensate for the loss of crossover (Spears, 1992). Therefore we experimentally tested three different mutation settings. 14 The results are summarized in Table 12. When the crossover rate was set to zero, the best mutation rate was 0.10. For misclassification error costs from $10 to $10,000, the performance of ICET without crossover was not as good as the performance of ICET with crossover, but the difference is not statistically significant. However, this comparison is not entirely fair to crossover, since we made no attempt to optimize the crossover rate (we simply used the default value). The results suggest that crossover is mildly beneficial, but do not prove that pure mutation is inferior.

14. Each of these three experiments took one week on a Sparc 10, which is why we only tried three settings for the mutation rate." }, { "figure_ref": [], "heading": "SEARCH IN BINARY SPACE", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "ICET searches for biases in a space of $n + 2$ real numbers. Inspired by Aha and Bankert (1994), we decided to see what would happen when ICET was restricted to a space of n binary numbers and 2 real numbers. We modified ICET so that EG2 was given the true cost of each test, instead of a "pseudo-cost" or bias. For conditional test costs, we used the no-discount cost (see Section 4.2.2). The n binary digits were used to exclude or include a test.
EG2 was not allowed to use excluded tests in the decision trees that it generated.

To be more precise, let $B_1, \ldots, B_n$ be n binary numbers and let $C_1, \ldots, C_n$ be n real numbers. For this experiment, we set $C_i$ to the true cost of the i-th test. In this experiment, GENESIS does not change $C_i$. That is, $C_i$ is constant for a given test in a given dataset. Instead, GENESIS manipulates the value of $B_i$ for each i. The binary number $B_i$ is used to determine whether EG2 is allowed to use a test in its decision tree. If $B_i = 0$, then EG2 is not allowed to use the i-th test (the i-th attribute). Otherwise, if $B_i = 1$, EG2 is allowed to use the i-th test. EG2 uses the ICF equation as usual, with the true costs $C_i$. Thus this modified version of ICET is searching through a binary bias space instead of a real bias space.

Our hypothesis was that ICET would perform better when searching in real bias space than when searching in binary bias space. Table 13 shows that this hypothesis was not confirmed. It appears to be better to search in binary bias space, rather than real bias space. However, the differences are not statistically significant.

When we searched in binary space, we set $C_i$ to the true cost of the i-th test. GENESIS manipulated $B_i$ instead of $C_i$. When we searched in real space, GENESIS set $C_i$ to whatever value it found useful in its attempt to optimize fitness. We hypothesized that this gives an advantage to binary space search over real space search. Binary space search has direct access to the true costs of the tests, but real space search only learns about the true costs of the tests indirectly, by the feedback it gets from the fitness function.

When we examined the experiment in detail, we found that ICET did well on the Heart Disease dataset when it was searching in binary bias space, although it did poorly when it was searching in real bias space (see Section 4.1.1). We hypothesized that ICET, when searching in real space, suffered most from the lack of direct access to the true costs when it was applied to the Heart Disease dataset. These hypotheses were tested by the next experiment." }, { "figure_ref": [ "fig_7" ], "heading": "SEEDED POPULATION", "publication_ref": [], "table_ref": [ "tab_13", "tab_13", "tab_13" ], "text": "In this experiment, we returned to searching in real bias space, but we seeded the initial population of biases with the true test costs. This gave ICET direct access to the true test costs. For conditional test costs, we used the no-discount cost (see Section 4.2.2). In the baseline experiment (Section 4.1), the initial population consists of 50 randomly generated strings, representing $n + 2$ real numbers. In this experiment, the initial population consists of 49 randomly generated strings and one manually generated string. In the manually generated string, the first n numbers are the true test costs. The last two numbers were set to 1.0 (for $\omega$) and 25 (for CF).
This string is exactly the bias of EG2, as implemented here (Section 3.2).

Our hypotheses were (1) that ICET would perform better (on average) when the initial population is seeded than when it is purely random, (2) that ICET would perform better (on average) searching in real space with a seeded population than when searching in binary space, 15 and (3) that ICET would perform better on the Heart Disease dataset when the initial population is seeded than when it is purely random. Table 14 appears to support the first two hypotheses. Figure 6 appears to support the third hypothesis. However, the results are not statistically significant. 16

This experiment raises some interesting questions: Should seeding the population be built into the ICET algorithm? Should we seed the whole population with the true costs, perturbed by some random noise? Perhaps this is the right approach, but we prefer to modify ICF (equation (2)), the device by which GENESIS controls the decision tree induction. We could alter this equation so that it contains both the true costs and some bias parameters. 17 This seems to make more sense than our current approach, which deprives EG2 of direct access to the true costs. We discuss some other ideas for modifying the equation in Section 5.2.

Incidentally, this experiment lets us answer the following question: Does the genetic search in bias space do anything useful? If we start with the true costs of the tests and reasonable values for the parameters $\omega$ and CF, how much improvement do we get from the genetic search? In this experiment, we seeded the population with an individual that represents exactly the bias of EG2 (the first n numbers are the true test costs and the last two numbers are 1.0 for $\omega$ and 25 for CF). Therefore we can determine the value of genetic search by comparing EG2 with ICET. ICET starts with the bias of EG2 (as a seed in the first generation). The score of EG2 in Table 14 shows the value of the bias built into EG2. The score of ICET in Table 14 shows how genetic search in bias space can improve the built-in bias of EG2. When the cost of misclassification errors has the same order of magnitude as the test costs ($10 to $100), EG2 averages 43% of the standard cost, while ICET averages 25% of the standard cost. When the cost of misclassification errors ranges from $10 to $10,000, EG2 averages 58% of the standard cost, while ICET averages 46% of the standard cost. Both of these differences are significant with more than 95% confidence. This makes it clear that genetic search is adding value.

15. Note that it does not make sense to seed the binary space search, since it already has direct access to the true costs.
16. We would need to go from the current 10 trials (10 random splits of the data) to about 40 trials to make the results significant. The experiments reported here took a total of 63 days of continuous computation on a Sun Sparc 10, so 40 trials would require about six more months.
17. This idea was suggested in conversation by K. De Jong." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "This section compares ICET to related work and outlines some possibilities for future work."
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b17", "b18", "b34", "b36", "b16", "b3", "b7", "b4", "b20", "b24", "b10", "b23", "b23", "b37", "b27", "b16", "b29", "b28", "b31", "b24", "b29", "b2", "b22", "b21", "b12", "b33" ], "table_ref": [], "text": "There are several other algorithms that are sensitive to test costs (Núñez, 1988, 1991; Tan & Schlimmer, 1989, 1990; Tan, 1993; Norton, 1989). As we have discussed, the main limitation of these algorithms is that they do not consider the cost of classification errors. We cannot rationally determine whether a test should be performed until we know both the cost of the test and the cost of classification errors.

There are also several algorithms that are sensitive to classification error costs (Breiman et al., 1984; Friedman & Stuetzle, 1981; Hermans et al., 1974; Gordon & Perlis, 1989; Pazzani et al., 1994; Provost, 1994; Provost & Buchanan, in press; Knoll et al., 1994). None of these algorithms consider the cost of tests. Therefore they all focus on complex classification cost matrices, since, when tests have no cost and the classification error matrix is simple, the problem reduces to maximizing accuracy.

The FIS system (Pipitone et al., 1991) attempts to find a decision tree that minimizes the average total cost of the tests required to achieve a certain level of accuracy. This approach could be implemented in ICET by altering the fitness function. The main distinction between FIS (Pipitone et al., 1991) and ICET is that FIS does not learn from data. The information gain of a test is estimated using a qualitative causal model, instead of training cases. Qualitative causal models are elicited from domain experts, using a special knowledge acquisition tool. When training data are available, ICET can be used to avoid the need for knowledge acquisition. Otherwise, ICET is not applicable and the FIS approach is suitable.

Another feature of ICET is that it does not perform purely greedy search. Several other authors have proposed non-greedy classification algorithms (Tcheng et al., 1989; Ragavan & Rendell, 1993; Norton, 1989; Schaffer, 1993; Rymon, 1993; Seshu, 1989). In general, these results show that there can be an advantage to more sophisticated search procedures. ICET is different from these algorithms in that it uses a genetic algorithm and it is applied to minimizing both test costs and classification error costs.

ICET uses a two-tiered search strategy. At the bottom tier, EG2 performs a greedy search through the space of classifiers. On the second tier, GENESIS performs a non-greedy search through a space of biases. The idea of a two-tiered search strategy (where the first tier is search in classifier space and the second tier is search in bias space) also appears in (Provost, 1994; Provost & Buchanan, in press; Aha & Bankert, 1994; Schaffer, 1993). Our work goes beyond Aha and Bankert (1994) by considering search in a real bias space, rather than search in a binary space. Our work fits in the general framework of Provost and Buchanan (in press), but differs in many details. For example, their method of calculating cost is a special case of ours (Section 2.3).

Other researchers have applied genetic algorithms to classification problems. For example, Frey and Slate (1991) applied a genetic algorithm (in particular, a learning classifier system (LCS)) to letter recognition. However, Fogarty (1992) obtained higher accuracy using a simple nearest neighbor algorithm.
More recent applications of genetic algorithms to classification have been more successful (De Jong et al., 1993). However, the work described here is the first application of genetic algorithms to the problem of cost-sensitive classification.

We mentioned in Section 2.1 that decision theory may be used to define the optimal solution to the problem of cost-sensitive classification. However, searching for the optimal solution is computationally infeasible (Pearl, 1988). We attempted to take a decision theoretic approach to this problem by implementing the AO* algorithm (Pearl, 1984) and designing a heuristic evaluation function to speed up the AO* search (Lirov & Yue, 1991). We were unable to make this approach execute fast enough to be practical.

We also attempted to apply genetic programming (Koza, 1993) to the problem of cost-sensitive classification. Again, we were unable to make this approach execute fast enough to be practical, although it was faster than the AO* approach.

The cost-sensitive classification problem, as we have treated it here, is essentially a problem in reinforcement learning (Sutton, 1992; Karakoulas, in preparation). The average cost of classification, measured as described in Section 2.2, is a reward/punishment signal that could be optimized using reinforcement learning techniques. This is something that might be explored as an alternative approach." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Future Work", "publication_ref": [ "b17", "b18", "b34", "b36", "b16", "b3", "b7", "b4", "b20", "b24", "b10" ], "table_ref": [ "tab_2" ], "text": "This paper discusses two types of costs, the cost of tests and the cost of misclassification errors. These two costs have been treated together in decision theory, but ICET is the first machine learning system that handles both costs together. The experiments in this paper have compared ICET to other machine learning systems that can handle test costs (Núñez, 1988, 1991; Tan & Schlimmer, 1989, 1990; Tan, 1993; Norton, 1989), but we have not compared ICET to other machine learning systems that can handle classification error costs (Breiman et al., 1984; Friedman & Stuetzle, 1981; Hermans et al., 1974; Gordon & Perlis, 1989; Pazzani et al., 1994; Provost, 1994; Provost & Buchanan, in press; Knoll et al., 1994). In future work, we plan to address this omission. A proper treatment of this issue would make this paper too long.

The absence of comparison with machine learning systems that can handle classification error costs has no impact on most of the experiments reported here. The experiments in this paper focussed on simple classification cost matrices (except for Section 4.2.3). When the classification cost matrix is simple and the cost of tests is ignored, minimizing cost is exactly equivalent to maximizing accuracy (see Section 2.3). Therefore, C4.5 (which is designed to maximize accuracy) is a suitable surrogate for any of the systems that can handle classification error costs.

We also did not experiment with setting the test costs to zero. However, the behavior of ICET when the penalty for misclassification errors is very high (the extreme right-hand sides of the plots in Figure 3) is necessarily the same as its behavior when the cost of tests is very low, since ICET is sensitive to the relative differences between test costs and error costs, not the absolute costs.
Therefore (given the behavior we can observe in the extreme right-hand sides of the plots in Figure 3) we can expect that the performance of ICET will tend to converge with the performance of the other algorithms as the cost of tests approaches zero.

One natural addition to ICET would be the ability to output an "I don't know" class. This is easily handled by the GENESIS component, by extending the classification cost matrix so that a cost is assigned to classifying a case as "unknown". We need to also make a small modification to the EG2 component, so that it can generate decision trees with leaves labelled "unknown". One way to do this would be to introduce a parameter that defines a confidence threshold. Whenever the confidence in a certain leaf drops below the confidence threshold, that leaf would be labelled "unknown". This confidence parameter would be made accessible to the GENESIS component, so that it could be tuned to minimize average classification cost.

The mechanism in ICET for handling conditional test costs has some limitations. As it is currently implemented, it does not handle the cost of attributes that are calculated from other attributes. For example, in the Thyroid dataset (Appendix A.5), the FTI test is calculated based on the results of the TT4 and T4U tests. If the FTI test is selected, we must pay for the TT4 and T4U tests. If the TT4 and T4U tests have already been selected, the FTI test is free (since the calculation is trivial). The ability to deal with calculated test results could be added to ICET with relatively little effort.

ICET, as currently implemented, only handles two classes of test results: tests with "immediate" results and tests with "delayed" results. Clearly there can be a continuous range of delays, from seconds to years. We have chosen to treat delays as distinct from test costs, but it could be argued that a delay is simply another type of test cost. For example, we could say that a group of blood tests shares the common cost of a one-day wait for results. The cost of one of the blood tests is conditional on whether we are prepared to commit ourselves to doing one or more of the other tests in the group, before we see the results of the first test. One difficulty with this approach to handling delays is the problem of assigning a cost to the delay. How much does it cost to bring a patient in for two blood samples, instead of one? Do we include the disruption to the patient's life in our estimate of the cost? To avoid these questions, we have not treated delays as another type of test cost, but our approach does not readily handle a continuous range of delays.

The cost of a test can be a function of several things: (1) It can be a function of the prior tests that have been selected. (2) It can be a function of the actual class of the case. (3) It can be a function of other aspects of the case, where information about these other aspects may be available through other tests. (4) It can be a function of the test result. This list seems comprehensive, but there may be some possibilities we have overlooked. Let us consider each of these four possibilities.

First, the cost of a test can be a function of the prior tests that have been selected. ICET handles a special case of this, where a group of tests shares a common cost. As it is currently implemented, ICET does not handle the general case.
Second, the cost of a test can be a function of the actual class of the case. For example, a test for heart disease might involve heavy exercise (Appendix A.2). If the patient actually has heart disease, the exercise might trigger a heart attack. This risk should be included in the cost of this particular test. Thus the cost of this test should vary, depending on whether the patient actually has heart disease. We have not implemented this, although it could easily be added to ICET by modifying the fitness function.

Third, the cost of a test can be a function of the results of other tests. For example, drawing blood from a newborn is more costly than drawing blood from an adult. To assign a cost to a blood test, we need to know the age of the patient. The age of the patient can be represented as the result of another test, the "patient-age" test. This is slightly more complex than the preceding cases, because we must now ensure that the blood test is always accompanied by the patient-age test. We have not implemented this, although it could be added to ICET.

Fourth, the cost of a test can be a function of the test result. For example, injecting a radio-opaque dye for an X-ray might cause an allergic reaction in the patient. This risk should be added to the cost of the test. This makes the cost of the test a function of one of the possible outcomes of the test. In a situation like this, it may be wise to precede the injection of the dye with a screening test for allergies. This could be as simple as asking the patient a question. This question may have no relevance at all for determining the correct diagnosis of the patient, but it may serve to reduce the average cost of classification. This case is similar to the third case, above. Again, we have not implemented this, although it could be added to ICET.

Attribute selection in EG2, CS-ID3, and IDX shares a common form. We may view attribute selection as a function from R^n to {1, ..., n}, which takes as input n information gain values ΔI_1, ..., ΔI_n (one for each attribute) and generates as output the index of one of the attributes. We may view the costs C_1, ..., C_n and the exponent ω as parameters in the attribute selection function. These parameters may be used to control the bias of the attribute selection procedure. In this view, ICET uses GENESIS to tune the parameters of EG2's attribute selection function.

In the future, we would like to investigate more general attribute selection functions. For example, we might use a neural network to implement a function from R^n to {1, ..., n}. GENESIS would then be used to tune the weights in the neural network. 18 The attribute selection function might also benefit from the addition of an input that specifies the depth of the decision tree at the current node, where the information gain values are measured. This would enable the bias for a test to vary, depending on how many tests have already been selected.

18. This idea was suggested in conversation by M. Brooks.

Another area for future work is to explore the parameter settings that control GENESIS (Table 4). There are many parameters that could be adjusted in GENESIS. We think it is significant that ICET works well with the default parameter settings in GENESIS, since it shows that ICET is robust with respect to the parameters. However, it might be possible to substantially improve the performance of ICET by tuning some of these parameters.
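To make the parameterized attribute selection discussed above concrete, here is a small sketch (ours, not ICET's code). It instantiates the selection function with EG2's ICF measure, ICF_i = (2^{ΔI_i} - 1) / (C_i + 1)^ω with 0 ≤ ω ≤ 1; in ICET, the C_i and ω are the evolved biases rather than the true test costs:

    # Sketch (ours) of a parameterized attribute selection function: maps the
    # n information gains to the index of one attribute using EG2's ICF
    # measure, ICF_i = (2**dI_i - 1) / (C_i + 1)**w, with 0 <= w <= 1.
    def select_attribute(delta_I, C, w):
        icf = [(2.0 ** dI - 1.0) / ((c + 1.0) ** w)
               for dI, c in zip(delta_I, C)]
        return max(range(len(icf)), key=icf.__getitem__)

    # w = 0 ignores the costs (selection is by information gain alone);
    # w = 1 gives the costs their full effect.
    print(select_attribute([0.30, 0.25, 0.10], [7.27, 1.00, 102.90], 1.0))  # -> 1

A neural network version would simply replace the closed-form ICF expression with a learned mapping from the same inputs to an attribute index.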
A recent trend in genetic algorithm research is to let the genetic algorithm adjust some of its own parameters, such as mutation rate and crossover rate (Whitley et al., 1993). Another possibility is to stop breeding when the fitness levels stop improving, instead of stopping after a fixed number of generations. Provost and Buchanan (in press) use a goodness measure as a stopping condition for the bias space search.

Conclusions

The central problem investigated here is the problem of minimizing the cost of classification when the tests are expensive. We argued that this requires assigning a cost to classification errors. We also argued that a decision tree is the natural form of knowledge representation for this type of problem. We then presented a general method for calculating the average cost of classification for a decision tree, given a decision tree, information on the calculation of test costs, a classification cost matrix, and a set of testing data. This method is applicable to standard classification decision trees, without regard to how the decision tree is generated. The method is sensitive to test costs, sensitive to classification error costs, capable of handling conditional test costs, and capable of handling delayed tests.

We introduced ICET, a hybrid genetic decision tree induction algorithm. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. Each individual in the population represents one set of biases. The fitness of an individual is determined by using it to generate a decision tree with a training dataset, then calculating the average cost of classification for the decision tree with a testing dataset.

We analyzed the behavior of ICET in a series of experiments, using five real-world medical datasets. Three groups of experiments were performed. The first group looked at the baseline performance of the five algorithms on the five datasets. ICET was found to have significantly lower costs than the other algorithms. Although it executes more slowly, an average time of 23 minutes (for a typical dataset) is acceptable for many applications, and there is the possibility of much greater speed on a parallel machine. The second group of experiments studied the robustness of ICET under a variety of modifications to its input. The results show that ICET is robust. The third group of experiments examined ICET's search in bias space. We discovered that the search could be improved by seeding the initial population of biases.

In general, our research is concerned with pragmatic constraints on classification problems (Provost & Buchanan, in press). We believe that many real-world classification problems involve more than merely maximizing accuracy (Turney, in press). The results presented here indicate that, in certain applications, a decision tree that merely maximizes accuracy (e.g., trees generated by C4.5) may be far from the performance that is possible with an algorithm that considers such realistic constraints as test costs, classification error costs, conditional test costs, and delayed test results. These are just a few of the pragmatic constraints that are faced in real-world classification problems.
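The fitness computation summarized above is compact enough to sketch. The following is a schematic reconstruction from the description in the text, not the actual ICET/GENESIS implementation; all names are placeholders, and the tree inducer (standing in for biased EG2) and the cost procedure of Section 2.2 are assumed to be supplied as functions:

    # Schematic reconstruction (ours) of ICET's fitness loop. `eg2` stands
    # for decision tree induction under a given bias vector; `average_cost`
    # stands for the average-cost calculation (test costs plus classification
    # error costs over a set of cases).
    def fitness(biases, sub_train, sub_test, eg2, average_cost):
        tree = eg2(sub_train, biases)         # grow a tree under these biases
        return -average_cost(tree, sub_test)  # lower average cost = fitter

    def evolve_biases(population, generations, sub_train, sub_test,
                      eg2, average_cost, breed):
        best = None
        for _ in range(generations):
            scored = [(fitness(b, sub_train, sub_test, eg2, average_cost), b)
                      for b in population]
            top = max(scored, key=lambda s: s[0])
            if best is None or top[0] > best[0]:
                best = top
            population = breed(scored)        # selection, crossover, mutation
        return best[1]                        # best bias vector found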
Appendix A. Five Medical Datasets

This appendix presents the test costs for five medical datasets, taken from the Irvine collection (Murphy & Aha, 1994). The costs are based on information from the Ontario Ministry of Health (1992). Although none of the medical data were gathered in Ontario, it is reasonable to assume that other areas have similar relative test costs. For our purposes, the relative costs are important, not the absolute costs.

A.1 BUPA Liver Disorders

The BUPA Liver Disorders dataset was created by BUPA Medical Research Ltd. and it was donated to the Irvine collection by Richard Forsyth. 19 Table 15 shows the test costs for the BUPA Liver Disorders dataset. The tests in group A are blood tests that are thought to be sensitive to liver disorders that might arise from excessive alcohol consumption. These tests share the common cost of $2.10 for collecting blood. The target concept was defined using the sixth column: Class 0 was defined as "drinks < 3" and class 1 was defined as "drinks ≥ 3".

Table 15: Test costs for the BUPA Liver Disorders dataset.

  Test        Description                                    Group  Cost                                              Delayed
  1 mcv       mean corpuscular volume                        A      $7.27 if first test in group A, $5.17 otherwise   yes
  2 alkphos   alkaline phosphotase                           A      $7.27 if first test in group A, $5.17 otherwise   yes
  3 sgpt      alamine aminotransferase                       A      $7.27 if first test in group A, $5.17 otherwise   yes
  4 sgot      aspartate aminotransferase                     A      $7.27 if first test in group A, $5.17 otherwise   yes
  5 gammagt   gamma-glutamyl transpeptidase                  A      $9.86 if first test in group A, $7.76 otherwise   yes
  6 drinks    number of half-pint equivalents of alcoholic
              beverages drunk per day                        -      diagnostic class: "drinks < 3" or "drinks ≥ 3"    -
  7 selector  field used to split data into two sets         -      not used                                          -

Table 16 shows the general form of the classification cost matrix that was used in the experiments in Section 4. For most of the experiments, the positive error cost and the negative error cost are both equal to the classification error cost. The exception is in Section 4.2.3, for the experiments with complex classification cost matrices. The terms "positive error cost" and "negative error cost" are explained in Section 4.2.3.

Table 16: Classification costs for the BUPA Liver Disorders dataset.

  Actual Class     Guess Class      Cost
  0 (drinks < 3)   0 (drinks < 3)   $0.00
  0 (drinks < 3)   1 (drinks ≥ 3)   Positive Error Cost
  1 (drinks ≥ 3)   0 (drinks < 3)   Negative Error Cost
  1 (drinks ≥ 3)   1 (drinks ≥ 3)   $0.00

There are 345 cases in this dataset, with no missing values. Column seven was originally used to split the data into training and testing sets. We did not use this column, since we required ten different random splits of the data. In our ten random splits, the ten training sets all had 230 cases and the ten testing sets all had 115 cases.

A.2 Heart Disease

The Heart Disease dataset was donated to the Irvine collection by David Aha. 20 The principal medical investigator was Robert Detrano, of the Cleveland Clinic Foundation. Table 17 shows the test costs for the Heart Disease dataset. A nominal cost of $1.00 was assigned to the first four tests. The tests in group A are blood tests that are thought to be relevant for heart disease. These tests share the common cost of $2.10 for collecting blood. The tests in groups B and C involve measurements of the heart during exercise. A nominal cost of $1.00 was assigned for tests after the first test in each of these groups.

Table 17: Test costs for the Heart Disease dataset.

  Test        Description                                          Group  Cost                                                Delayed
  1 age       age in years                                         -      $1.00                                               no
  2 sex       patient's gender                                     -      $1.00                                               no
  3 cp        chest pain type                                      -      $1.00                                               no
  4 trestbps  resting blood pressure                               -      $1.00                                               no
  5 chol      serum cholesterol                                    A      $7.27 if first test in group A, $5.17 otherwise     yes
  6 fbs       fasting blood sugar                                  A      $5.20 if first test in group A, $3.10 otherwise     yes
  7 restecg   resting electrocardiograph                           -      $15.50                                              yes
  8 thalach   maximum heart rate achieved                          B      $102.90 if first test in group B, $1.00 otherwise   yes
  9 exang     exercise induced angina                              C      $87.30 if first test in group C, $1.00 otherwise    yes
  10 oldpeak  ST depression induced by exercise relative to rest   C      $87.30 if first test in group C, $1.00 otherwise    yes
  11 slope    slope of peak exercise ST segment                    C      $87.30 if first test in group C, $1.00 otherwise    yes
  12 ca       number of major vessels coloured by fluoroscopy      -      $100.90                                             yes
  13 thal     3 = normal; 6 = fixed defect; 7 = reversible defect  B      $102.90 if first test in group B, $1.00 otherwise   yes
  14 num      diagnosis of heart disease                           -      diagnostic class                                    -

The class variable has the values "buff" (healthy) and "sick". There was a fifteenth column, which specified the class variable as "H" (healthy), "S1", "S2", "S3", or "S4" (four different types of "sick"), but we deleted this column. Table 18 shows the classification cost matrix.

Table 18: Classification costs for the Heart Disease dataset.

  Actual Class  Guess Class  Cost
  buff          buff         $0.00
  buff          sick         Positive Error Cost
  sick          buff         Negative Error Cost
  sick          sick         $0.00

There are 303 cases in this dataset. We deleted all cases for which there were missing values. This reduced the dataset to 296 cases. In our ten random splits, the training sets had 197 cases and the testing sets had 99 cases.

A.3 Hepatitis Prognosis

The Hepatitis Prognosis dataset was donated by Gail Gong. 21
Table 19 shows the test costs for the Hepatitis dataset. Unlike the other four datasets, this dataset deals with prognosis, not diagnosis. With prognosis, the diagnosis is known, and the problem is to determine the likely outcome of the disease. The tests that were assigned a nominal cost of $1.00 either involve asking a question to the patient or performing a basic physical examination on the patient. The tests in group A share the cost of $2.10 for collecting blood. Note that, although performing a histological examination of the liver costs $81.64, asking the patient whether a histology was performed only costs $1.00. Thus the prognosis can exploit the information conveyed by a decision (to perform a histological examination) that was made during the diagnosis. The class variable has the values 1 (die) and 2 (live). Table 20 shows the classification costs. The dataset contains 155 cases, with many missing values. In our ten random splits, the training sets had 103 cases and the testing sets had 52 cases.

21. The Hepatitis Prognosis dataset has the URL ftp://ftp.ics.uci.edu/pub/machine-learning-databases/hepatitis/hepatitis.data.

We filled in the missing values using a simple single nearest neighbor algorithm (Aha et al., 1991). The missing values were filled in using the whole dataset, before the dataset was split into training and testing sets. For the nearest neighbor algorithm, the data were normalized so that the minimum value of a feature was 0 and the maximum value was 1. The distance measure used was the sum of the absolute values of the differences. The difference between two values was defined to be 1 if one or both of the two values was missing.
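The imputation procedure just described is simple enough to sketch. The following is an illustrative reconstruction of ours, not the code actually used; it assumes missing values are represented as None:

    # Sketch (ours) of single nearest neighbor imputation as described above:
    # features are range-normalized to [0, 1]; the difference between two
    # values is 1 if either value is missing.
    def impute_nearest_neighbor(data):
        n = len(data[0])
        lo = [min(r[j] for r in data if r[j] is not None) for j in range(n)]
        hi = [max(r[j] for r in data if r[j] is not None) for j in range(n)]

        def diff(a, b, j):
            if a is None or b is None:
                return 1.0
            if hi[j] == lo[j]:
                return 0.0
            return abs(a - b) / (hi[j] - lo[j])  # normalized absolute difference

        def dist(x, y):
            return sum(diff(x[j], y[j], j) for j in range(n))

        filled = [list(r) for r in data]
        for i, row in enumerate(data):
            if None not in row:
                continue
            nn = min((r for k, r in enumerate(data) if k != i),
                     key=lambda r: dist(row, r))
            for j in range(n):
                if filled[i][j] is None:
                    filled[i][j] = nn[j]  # stays None if the neighbor is also missing
        return filled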
A.4 Pima Indians Diabetes

The Pima Indians Diabetes dataset was donated by Vincent Sigillito. 22 The data were collected by the National Institute of Diabetes and Digestive and Kidney Diseases. Table 21 shows the test costs for the Pima Indians Diabetes dataset. The tests in group A share the cost of $2.10 for collecting blood. The remaining tests were assigned a nominal cost of $1.00. All of the patients were females at least 21 years old of Pima Indian heritage. The class variable has the values 0 (healthy) and 1 (diabetes). Table 22 shows the classification costs. The dataset includes 768 cases, with no missing values. In our ten random splits, the training sets had 512 cases and the testing sets had 256 cases.

A.5 Thyroid Disease

The Thyroid Disease dataset was created by the Garavan Institute, Sydney, Australia. We obtained the data from J.R. Quinlan. 23 Table 23 shows the test costs for the Thyroid Disease dataset. A nominal cost of $1.00 was assigned to the first 16 tests. The tests in group A share the cost of $2.10 for collecting blood. The FTI test involves a calculation based on the results of the TT4 and T4U tests. This complicates the calculation of the costs of these three tests, so we chose not to use the FTI test in our experiments. The class variable has the values 1 (hypothyroid), 2 (hyperthyroid), and 3 (normal). Table 24 shows the classification costs. There are 3772 cases in this dataset, with no missing values. In our ten random splits, the training sets had 2515 cases and the testing sets had 1257 cases.

23. The Thyroid Disease dataset has the URL ftp://ftp.ics.uci.edu/pub/machine-learning-databases/thyroid-disease/ann-train.data.

Acknowledgments

Thanks to Dr. Louise Linney for her help with interpretation of the Ontario Ministry of Health's Schedule of Benefits. Thanks to Martin Brooks, Grigoris Karakoulas, Cullen Schaffer, Diana Gordon, Tim Niblett, Steven Minton, and three anonymous referees of JAIR for their very helpful comments on earlier versions of this paper. This work was presented in informal talks at the University of Ottawa and the Naval Research Laboratory. Thanks to both audiences for their feedback.
References

Ackley, D., & Littman, M. (1991). Interactions between learning and evolution. Addison-Wesley.

Fogarty, T.C. (1992). Technical note: First nearest neighbor classification on Frey and Slate's letter recognition problem. Machine Learning.

Frey, P.W., & Slate, D.J. (1991). Letter recognition using Holland-style adaptive classifiers. Machine Learning.

Friedman, J.H., & Stuetzle, W. (1981). Projection pursuit regression. Journal of the American Statistics Association.

Gordon, D.F., & Perlis, D. (1989). Explicitly biased generalization. Computational Intelligence.

Grefenstette, J.J. (1986). Optimization of control parameters for genetic algorithms. IEEE Transactions on Systems, Man, and Cybernetics.

Grefenstette, J.J., Ramsey, C.L., & Schultz, A.C. (1990). Learning sequential decision rules using simulation models and competition. Machine Learning.

Hermans, J., Habbema, J.D.F., & Van Der Burght, A.T. (1974). Cases of doubt in allocation problems, k populations. Bulletin of the International Statistics Institute.

Hinton, G.E., & Nowlan, S.J. (1987). How learning can guide evolution. Complex Systems.

Karakoulas, G. (in preparation). A Q-learning approach to cost-effective classification.

Knoll, U., Nakhaeizadeh, G., & Tausend, B. (1994). Cost-sensitive pruning of decision trees. Springer-Verlag.

Koza, J.R. (1992). Genetic programming: On the programming of computers by means of natural selection. MIT Press.

Lirov, Y., & Yue, O.-C. (1991). Automated network troubleshooting knowledge acquisition. Journal of Applied Intelligence.

Maynard Smith, J. (1987). When learning guides evolution. Nature.

Morgan, C.L. (1896). On modification and variation. Science.

Murphy, P.M., & Aha, D.W. (1994). UCI repository of machine learning databases.

Norton, S.W. (1989). Generating better decision trees.

Núñez, M. (1988). Economic induction: A case study. Morgan Kaufmann.

Núñez, M. (1991). The use of background knowledge in decision tree induction. Machine Learning.

Ontario Ministry of Health (1992). Schedule of benefits: Physician services under the health insurance act, October 1, 1992.

Pazzani, M., Merz, C., Murphy, P., Ali, K., Hume, T., & Brunk, C. (1994). Reducing misclassification costs: Knowledge-intensive approaches to learning from noisy data.

Pearl, J. (1984). Heuristics: Intelligent search strategies for computer problem solving. Addison-Wesley.

Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. Morgan Kaufmann.

Pipitone, F., De Jong, K.A., & Spears, W.M. (1991). An artificial intelligence approach to analog systems diagnosis. Van Nostrand-Reinhold.

Provost, F.J. (1994). Goal-directed inductive learning: Trading off accuracy for reduced error cost.

Provost, F.J., & Buchanan, B.G. (in press). Inductive policy: The pragmatics of bias selection. Machine Learning.

Quinlan, J.R. (1992). C4.5: Programs for machine learning. Morgan Kaufmann.

Ragavan, H., & Rendell, L. (1993). Lookahead feature construction for learning hard concepts. Morgan Kaufmann.

Rymon, R. (1993). An SE-tree based characterization of the induction problem. Morgan Kaufmann.

Schaffer, C. (1993). Selecting a classification method by cross-validation. Machine Learning.

Schaffer, J.D., Whitley, D., & Eshelman, L.J. (1992). Combinations of genetic algorithms and neural networks: A survey of the state of the art. IEEE Computer Society Press.

Seshu, R. (1989). Solving the parity problem. Morgan Kaufmann.

Spears, W.M. (1992). Crossover or mutation? Foundations of Genetic Algorithms 2. Morgan Kaufmann.

Sutton, R.S. (1992). Introduction: The challenge of reinforcement learning. Machine Learning.

Tan, M., & Schlimmer, J. (1989). Cost-sensitive concept learning of sensor use in approach and recognition.

Tan, M., & Schlimmer, J. (1990). CSL: A cost-sensitive learning system for sensing and grasping objects.

Tan, M. (1993). Cost-sensitive learning of classification knowledge and its applications in robotics. Machine Learning.

Tcheng, D., Lambert, B., Lu, S., & Rendell, L. (1989). Building robust learning systems by combining induction and optimization.

Turney, P.D. (in press). Technical note: Bias and the quantification of stability. Machine Learning.

Verdenius, F. (1991). A method for inductive cost optimization. Springer-Verlag.

Waddington, C.H. (1942). Canalization of development and the inheritance of acquired characters. Nature.

Whitley, D., Dominic, S., Das, R., & Anderson, C.W. (1993). Genetic reinforcement learning for neurocontrol problems. Machine Learning.

Whitley, D., & Gruau, F. (1993). Adding learning to the cellular development of neural networks: Evolution and the Baldwin effect. Evolutionary Computation.

Whitley, D., Gordon, S., & Mathias, K. (1994). Lamarckian evolution, the Baldwin effect and function optimization. Springer-Verlag.

Wilson, S.W. (1987). Classifier systems and the animat problem. Machine Learning.
Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic Decision Tree Induction Algorithm
This paper introduces ICET, a new algorithm for cost-sensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for cost-sensitive classification (EG2, CS-ID3, and IDX) and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five real-world medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICET's search in bias space and discovers a way to improve the search.
Peter D. Turney
Figure 1: Decision tree for a simple example.
Figure 2: A sketch of the ICET algorithm.
Figure 3: Average cost of classification as a percentage of the standard cost of classification for the baseline experiment.
Figure 5: Average cost of classification as a percentage of the standard cost of classification, with complex classification cost matrices.
Figure 6: Average cost of classification as a percentage of the standard cost of classification for the seeded population experiment.

Table 1: Test costs for a simple example.

  Test       Group  Cost                                               Delayed
  1 alpha    -      $5.00                                              no
  2 beta     -      $10.00                                             no
  3 delta    A      $7.00 if first test in group A, $5.00 otherwise    yes
  4 epsilon  A      $10.00 if first test in group A, $8.00 otherwise   yes

Table 2: Classification costs for a simple example.

  Actual Class  Guess Class  Cost
  0             0            $0.00
  0             1            $50.00
  1             0            $50.00
  1             1            $0.00

Table 3: Calculating the cost for a particular case.

  Step  Action           Result            Cost
  1     do alpha         alpha = 6         $5.00
  2     do delta         delta = 3         $7.00 + $10.00 + $8.00 = $25.00
  3     do epsilon       epsilon = 2       already paid, in step #2
  4     guess class = 0  actual class = 1  $50.00
        total cost                         $80.00

Table 4: Parameter settings for GENESIS.

  Parameter          Setting
  Total Trials       1000
  Population Size    50
  Structure Length   12n + 16
  Crossover Rate     0.6
  Mutation Rate      0.001
  Generation Gap     1.0
  Scaling Window     5
  Report Interval    100
  Structures Saved   1
  Max Gens w/o Eval  2
  Dump Interval      0
  Dumps Saved        0
  Options            acefgl
  Random Seed        123456789
  Rank Min           0.75

Table 5: Average percentage of standard cost for the baseline experiment (average classification cost as a percentage of standard, ± 95% confidence).

  Algorithm  Error costs $10.00 to $10,000.00  Error costs $10.00 to $100.00
  ICET       49 ± 7                            29 ± 7
  EG2        58 ± 5                            43 ± 3
  CS-ID3     61 ± 6                            49 ± 4
  IDX        58 ± 5                            43 ± 3
  C4.5       77 ± 5                            82 ± 4

Table 6: Elapsed run-time for the five algorithms (minutes:seconds).

  Algorithm  BUPA   Heart  Hepatitis  Pima   Thyroid  Average
  ICET       15:43  13:14  10:29      28:19  45:25    22:38

Table 7: Average percentage of standard cost for the no-delay experiment.

  Algorithm  Error costs $10.00 to $10,000.00  Error costs $10.00 to $100.00
  ICET       47 ± 6                            28 ± 4
  EG2        54 ± 4                            36 ± 2
  CS-ID3     54 ± 5                            39 ± 3
  IDX        54 ± 4                            36 ± 2
  C4.5       64 ± 6                            59 ± 4

Table 8: Average percentage of standard cost for the no-discount experiment.

  Algorithm  Error costs $10.00 to $10,000.00  Error costs $10.00 to $100.00
  ICET       46 ± 6                            25 ± 5
  EG2        56 ± 5                            42 ± 3
  CS-ID3     59 ± 5                            48 ± 4
  IDX        56 ± 5                            42 ± 3
  C4.5       75 ± 5                            80 ± 4

Table 9: Actual error costs for each ratio of negative to positive error cost.

  Ratio  Negative Error Cost  Positive Error Cost
  0.125  50                   400
  0.25   50                   200
  0.5    50                   100
  1.0    50                   50
  2.0    100                  50
  4.0    200                  50
  8.0    400                  50

Table 10: Comparison of ICET and EG2 with various ratios of negative to positive error cost.

  Ratio            0.125    0.25    0.5     1.0     2.0     4.0     8.0
  ICET             25 ± 10  25 ± 8  29 ± 6  29 ± 4  34 ± 6  39 ± 6  39 ± 6
  EG2              39 ± 5   40 ± 4  41 ± 4  44 ± 3  42 ± 3  41 ± 4  40 ± 5
  ICET/EG2 (as %)  64       63      71      66      81      95      98

Table 11: Performance when training set classification error cost is $100, for testing set classification error costs of $50, $100, and $500.

  Algorithm  $50      $100     $500
  ICET       33 ± 10  41 ± 10  62 ± 9
  EG2        44 ± 3   49 ± 4   63 ± 6
  CS-ID3     49 ± 5   54 ± 6   65 ± 7
  IDX        43 ± 3   49 ± 4   63 ± 6
  C4.5       82 ± 5   82 ± 5   78 ± 7

Table 12: Average percentage of standard cost for the mutation experiment.

  Crossover Rate  Mutation Rate  Error costs $10.00 to $10,000.00  Error costs $10.00 to $100.00
  0.6             0.001          49 ± 7                            29 ± 7
  0.0             0.05           51 ± 8                            32 ± 9
  0.0             0.10           50 ± 8                            29 ± 8
  0.0             0.15           51 ± 8                            30 ± 9

Table 13: Average percentage of standard cost for the binary search experiment.

  Algorithm            Error costs $10.00 to $10,000.00  Error costs $10.00 to $100.00
  ICET - Binary Space  48 ± 6                            26 ± 5
  ICET - Real Space    49 ± 7                            29 ± 7
  EG2                  58 ± 5                            43 ± 3
  CS-ID3               61 ± 6                            49 ± 4
  IDX                  58 ± 5                            43 ± 3
  C4.5                 77 ± 5                            82 ± 4

Table 14: Average percentage of standard cost for the seeded population experiment.

  Algorithm                               Error costs $10.00 to $10,000.00  Error costs $10.00 to $100.00
  ICET - Seeded Search in Real Space      46 ± 6                            25 ± 5
  ICET - Unseeded Search in Real Space    49 ± 7                            29 ± 7
  ICET - Unseeded Search in Binary Space  48 ± 6                            26 ± 5
  EG2                                     58 ± 5                            43 ± 3
  CS-ID3                                  61 ± 6                            49 ± 4
  IDX                                     58 ± 5                            43 ± 3
  C4.5                                    77 ± 5                            82 ± 4
1 Introduction

Inductive learners normally use training examples, but they can also use background knowledge. Effectively integrating this knowledge into induction has been a widely studied research problem. Most work to date has been in the area of theory revision, in which the knowledge given is a coarse, perhaps incomplete or incorrect, theory of the problem domain, and training examples are used to shape this initial theory into a refined, more accurate theory (Ourston & Mooney, 1990; Thompson, Langley, & Iba, 1991; Cohen, 1992; Pazzani & Kibler, 1992; Baffes & Mooney, 1993; Mooney, 1993). We develop a more flexible and more robust approach to the problem of learning from both data and theory knowledge by addressing the two following desirable qualities: flexibility of representation and flexibility of structure. Before giving more precise definitions of our terms, we motivate our work intuitively.

1.1 Intuitive Motivation

The first desirable quality, flexibility of representation, arises because the theory representation most appropriate for describing the coarse, initial domain theory may be inadequate for the final, revised theory. While the initial domain theory may be compact and concise in one representation, an accurate theory may be quite bulky and cumbersome in that representation. Furthermore, the representation that is best for expressing the initial theory may not be the best for carrying out refinements. A helpful refinement step may be clumsy to make in the initial representation yet be carried out quite simply in another representation.

As a simple example, a coarse domain theory may be expressed as the logical conjunction of N conditions that should be met. The most accurate theory, though, is one in which any M of these N conditions holds. Expressing this more accurate theory in the DNF representation used to describe the initial theory would be cumbersome and unwieldy (Murphy & Pazzani, 1991). Furthermore, arriving at the final theory using the refinement operators most suitable for DNF (drop-condition, add-condition, modify-condition) would be a cumbersome task. But when an M-of-N representation is adopted, the refinement simply involves empirically finding the appropriate M, and the final theory can be expressed concisely (Baffes & Mooney, 1993).

Similarly, the second desirable quality, flexibility of structure, arises because the theory structure that was suitable for a coarse domain theory may be insufficient for a fine-tuned theory. In order to achieve the desired accuracy, a restructuring of the initial theory may be necessary. Many theory revision systems act by making a series of local changes, but this can lead to behavior at two extremes. The first extreme is to rigidly retain the backbone structure of the initial domain theory, only allowing small, local changes. Figure 1 illustrates this situation. Minor revisions have been made (conditions have been added, dropped, and modified), but the refined theory is trapped by the backbone structure of the initial theory. When only local changes are needed, these techniques have proven useful (Ourston & Mooney, 1990), but often more is required. When more is required, these systems often move to the other extreme; they drop entire rules and groups of rules and then build entire new rules and groups of rules from scratch to replace them.
Thus they restructure, but they forfeit valuable knowledge in the process. An ideal theory revision system would glean knowledge from theory substructures that cannot be fixed with small, local changes and use this in a restructured theory.

As an intuitive illustration, consider a piece of software that "almost works." Sometimes it can be made useful through only a few local operations: fixing a couple of bugs, adding a needed subroutine, and so on. In other cases, though, a piece of software that "almost works" is in fact far from full working order. It may need to be redesigned and restructured. A mistake at one extreme is to try to fix a program like this by making a series of patches in the original code. A mistake at the other extreme is to discard the original program without learning anything from it and start from scratch. The best approach would be to examine the original program to see what can be learned from its design and to use this knowledge in the redesign. Likewise, attempting to improve a coarse domain theory through a series of local changes may yield little improvement because the theory is trapped by its initial structure. This does not render the original domain theory useless; careful analysis of the initial domain theory can give valuable guidance for the design of the best final theory. This is illustrated in Figure 2, where many substructures have been taken from the initial theory and adapted for use in the refined theory. Information from the initial theory has been used, but the structure of the revised theory is not restricted by the structure of the initial theory.

Figure 1: Typical theory revision allows only limited structural flexibility. Although conditions have been added, dropped, and modified, the revised theory is much constrained by the structure of the initial theory.

Figure 2: The revised theory has taken many substructures from the initial theory and adapted and recombined them for its use, but the structure of the revised theory is not restricted by the structure of the initial theory.

1.2 Terminology

In this paper, all training data consist of examples which are classified vectors of feature/value pairs. We assume that an initial theory is a set of conditions combined using the operators AND, OR, and NOT and indicating one or more classes. While it is unreasonable to believe that all theories will always be of this form, it covers much existing theory revision research.

Our work is intended as an informal exploration of flexible representation and flexible structure. Flexible representation means allowing the theory to be revised using a representation language other than that of the initial theory. An example of flexible representation is the introduction of a new operator for combining features, an operator not used in the initial theory. In Section 1.1 the example was given of introducing the M-of-N operator to represent a theory originally expressed in DNF. Flexible structure means not limiting revision of the theory to a series of small, incremental modifications. An example of this is breaking the theory down into its components and using them as building blocks in the construction of a new theory.

Constructive induction is a process whereby the training examples are redescribed using a new set of features. These new features are combinations of the original features. Bias or knowledge may be used in the construction of the new features. A subtle point is that when we speak of flexible representation, we are referring only to the representation of the domain theory, not the training data. Although the phrase "change of representation" is often applied to constructive induction, this refers to a change of the data. In our paper, the term flexible representation is reserved for a change of theory representation. Thus a system can be performing constructive induction (changing the feature language of the data) without exhibiting flexible representation (changing the representation of the theory).
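To make these terms concrete, the following small sketch (ours, with invented conditions and data) constructs a single M-of-N feature from existing conditions, in the style of constructive induction. An initial conjunctive theory corresponds to M = N = 3; revising in the M-of-N representation amounts to lowering M empirically:

    # Sketch (ours): an M-of-N operator used to construct a new boolean
    # feature from existing conditions. Conditions and example are invented.
    def m_of_n(conditions, m):
        """New feature: true when at least m of the conditions hold."""
        return lambda example: sum(1 for c in conditions if c(example)) >= m

    conds = [
        lambda e: e["color"] == "red",
        lambda e: e["size"] > 4,
        lambda e: e["shape"] == "round",
    ]
    two_of_three = m_of_n(conds, 2)  # a revision of the 3-of-3 initial theory
    print(two_of_three({"color": "red", "size": 5, "shape": "square"}))  # True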
1.3 Overview

Theory revision and constructive induction embody complementary aspects of the machine learning research community's ultimate goals. Theory revision uses data to improve a theory; constructive induction can use a theory to improve data to facilitate learning. In this paper we present a theory-guided constructive induction approach which addresses the two desirable qualities discussed in Section 1.1. The initial theory is analyzed, and new features are constructed based on the components of the theory. The constructed features need not be expressed in the same representational language as the initial theory and can be refined to better match the training examples. Finally, a standard inductive learning algorithm, C4.5 (Quinlan, 1993), is applied to the redescribed examples.

We begin by analyzing how landmark theory revision and learning systems have exhibited flexibility in handling a domain theory and what part this has played in their performance. From this analysis, we extract guidelines for system design and apply them to the design of our own limited system. In an effort to integrate learning from theory and data, we borrow heavily from the theory revision, multistrategy learning, and constructive induction communities, but our guidelines for system design fall closest to classical constructive induction methods. The central focus of this paper is not the presentation of "another new system" but rather a study of flexible representation and structure, their manifestation in previous work, and their guidance for future design.

Section 2 gives the context of our work by analyzing previous research and its influence on our work. Section 3 explores the Promoter Recognition domain and demonstrates how related theory revision systems behave in this domain. In Section 4, guidelines for theory-guided constructive induction are presented. These guidelines are a synthesis of the positive aspects of related research, and they address the two desirable qualities, flexibility of representation and flexibility of structure. Section 4 also presents a specific theory-guided constructive induction algorithm which is an instantiation of the guidelines set forth earlier in that section. Results of experiments in three domains are given in Section 5, followed by a discussion of the strengths of theory-guided constructive induction in Section 6. Section 7 presents an experimental analysis of the limits of applicability of our simple algorithm, followed by a discussion of limitations and future directions of our work in Section 8.
2 Context and Related Work

Although our work bears some resemblance in form and objective to many papers in constructive induction (Michalski, 1983; Fu & Buchanan, 1985; Utgoff, 1986; Schlimmer, 1987; Drastal & Raatz, 1989; Matheus & Rendell, 1989; Pagallo & Haussler, 1990; Ragavan & Rendell, 1993; Hirsh & Noordewier, 1994), theory revision (Ourston & Mooney, 1990; Feldman, Serge, & Koppel, 1991; Thompson et al., 1991; Cohen, 1992; Pazzani & Kibler, 1992; Baffes & Mooney, 1993), and multistrategy approaches (Flann & Dietterich, 1989; Towell, Shavlik, & Noordewier, 1990; Dzeroski & Lavrac, 1991; Bloedorn, Michalski, & Wnek, 1993; Clark & Matwin, 1993; Towell & Shavlik, 1994), we focus only upon a handful of these systems, those that have significant, underlying similarities to our work. In this section we analyze Miro, Either, Focl, Labyrinth-K, Kbann, Neither-MofN, and Grendel to discuss their related underlying contributions in relationship to our perspective.

2.1 Miro

Miro (Drastal & Raatz, 1989) is a seminal work in knowledge-guided constructive induction. It takes knowledge about how low-level features interact and uses this knowledge to construct high-level features for its training examples. A standard learning algorithm is then run on these examples described using the new features. The domain theory is used to shift the bias of the induction problem (Utgoff, 1986). Empirical results showed that describing the examples in these high-level, abstract terms improved learning accuracy.

The Miro approach provides a means of utilizing knowledge in a domain theory without being restricted by the structure of that theory. Substructures of the domain theory can be used to construct high-level features that a standard induction algorithm will arrange into a concept. Some constructed features will be used as they are, others will be ignored, others will be combined with low-level features, and still others may be used differently in multiple contexts. The end result is that knowledge from the domain theory is utilized, but the structure of the final theory is not restricted by the structure of the initial theory. Miro provides flexible structure.

Another benefit is that Miro-like techniques can be applied even when only a partial domain theory exists, i.e., a domain theory that only specifies high-level features but does not link them together, or a domain theory that specifies some high-level features but not others. One of Miro's shortcomings is that it provided no means of making minor changes in the domain theory but rather constructed the features exactly as the domain theory specified. Also the representation of Miro's constructed features was primitive: either an example met the conditions of a high-level feature or it did not. An example of Miro's behavior is given in Section 3.2.

2.2 Either, Focl, and Labyrinth-K

The Either (Ourston & Mooney, 1990), Labyrinth-K (Thompson et al., 1991), and Focl (Pazzani & Kibler, 1992) systems represent a broad spectrum of theory revision work. They make steps toward effective integration of background knowledge and inductive learning.
Although these systems have many superficial differences with regard to supervised/unsupervised learning, concept description language, etc., they share the underlying principle of incrementally revising an initial domain theory through a series of local changes.

We will discuss Either as a representative of this class of systems. Either's theory revision operators include: removing unwanted conditions from a rule, adding needed conditions to a rule, removing rules, and adding totally new rules. Either first classifies its training examples according to the current theory. If any are misclassified, it seeks to repair the theory by applying a theory revision operator that will result in the correct classification of some previously misclassified examples without losing any of the correct examples. Thus a series of local changes are made that allow for an improvement of accuracy on the training set without losing any of the examples previously classified correctly.

Either-type methods provide simple yet powerful tools for repairing many important and common faults in domain theories, but they fail to meet the qualities of flexible representation and flexible structure. Because the theory revision operators make small, local modifications in the existing domain theory, the final theory is constrained to be similar in structure to the initial theory. When an accurate theory is significantly different in structure from the initial theory, these systems are forced to one of the two extremes discussed in Section 1. The first extreme is to become trapped at a local maximum similar to the initial theory, unable to reach the global maximum because only local changes can be made. The other extreme is to drop entire rules and groups of rules and replace them with new rules built from scratch, thus forfeiting the knowledge contained in the domain theory.

Also, Either carries out all theory revision steps in the representation of the initial theory. Consequently, the representation of the final theory is the same as that of the initial theory. Another representation may be more appropriate for the revised theory than the one in which the initial theory comes, but facilities are not provided to accommodate this. An advanced theory revision system would combine the locally acting strengths of Either-type systems with flexibility of structure and flexibility of representation. An example of Either's behavior is given in Section 3.3.

2.3 Kbann and Neither-MofN

The Kbann system (Towell et al., 1990; Towell & Shavlik, 1994) makes unique contributions to theory revision work. Kbann takes an initial domain theory described symbolically in logic and creates a neural network whose structure and initial weights encode this theory. Backpropagation (Rumelhart, Hinton, & McClelland, 1986) is then applied as a refinement tool for fine-tuning the network weights. Kbann has been empirically shown to give significant improvement over many theory revision systems for the widely-used Promoter Recognition domain. Although our work is different in implementation from Kbann, our abstract ideologies are similar.

One of Kbann's important contributions is that it takes a domain theory in one representation (propositional logic) and translates it into a less restricting representation (neural network).
While logic is an appropriate representation for the initial domain theory for the promoter problem, the neural network representation is more convenient both for refining this theory and for expressing the best revised theory. This change of representation is Kbann's real source of power. Much attention has been given to the fact that Kbann combines symbolic knowledge with a subsymbolic learner, but this combination can be viewed more generally as a means of implementing the important change of representation. It may be the change of representation that gives Kbann its power, not necessarily its specific symbolic/subsymbolic implementation. Thus the Kbann system embodies the higher-level principle of allowing refinement to occur in an appropriate representation.

If an alternative representation is Kbann's source of power, the question must be raised as to whether the actual Kbann implementation is always the best means of achieving this goal. The neural network representation may be more expressive than is required. Accordingly, backpropagation often has more refinement power than is needed. Thus Kbann may carry excess baggage in translating into the neural net representation, performing expensive backpropagation, and extracting symbolic rules from the refined network. Although the full extent of Kbann's power may be needed for some problems, many important problems may be solvable by applying Kbann's principles at the symbolic level using less expensive tools.

Neither-MofN (Baffes & Mooney, 1993), a descendant of Either, is a second example of a system that allows a theory to be revised in a representation other than that of the initial theory. The domain theory input into Neither-MofN is expressed in propositional logic as an AND/OR tree. Neither-MofN interprets the theory less rigidly: an AND rule is true any time any M of its N conditions are true. Initially M is set equal to N (all conditions must be true for the rule to be true), and one theory refinement operator is to lower M for a particular rule. The end result is that examples that are a close enough partial match to the initial theory are accepted. Neither-MofN, since it is built upon the Either framework, also includes Either-like theory revision operators: add-condition, drop-condition, etc.

Thus Neither-MofN allows revision to take place in a representation appropriate for revision and appropriate for concisely expressing the best refined theory. Neither-MofN has achieved results comparable to Kbann on the Promoter Recognition domain, which suggests that it is the change of representation which these two systems share that gives them their power rather than any particular implementation. Neither-MofN also demonstrates that a small amount of representational flexibility is sometimes enough. The M-of-N representation it employs is not as big a change from the original representation as the neural net representation which Kbann employs, yet it achieves similar results and arrives at them much more quickly than Kbann (Baffes & Mooney, 1993).

A shortcoming of Neither-MofN is that since it acts by making local changes in an initial theory, it can still become trapped by the structure of the initial theory. An advanced theory revision system would incorporate Neither-MofN's and Kbann's flexibility of representation and allow knowledge-guided theory restructuring. Examples of Kbann's and Neither-MofN's behavior are given in Sections 3.4 and 3.5.

2.4 Grendel

Cohen (1992) analyzes a class of theory revision systems and draws some insightful conclusions.
One is that "generality [in theory interpretation] comes at the expense of power." He draws this principle from the fact that a system such as Either or Focl treats every domain theory the same and therefore must treat every domain theory in the most general way. He argues that rather than just applying the most general refinement strategy to every problem, a small set of refinement strategies should be available that are narrow enough to gain leverage yet not so narrow that they only apply to a single problem. Cohen presents Grendel, a toolbox of translators, each of which transforms a domain theory into an explicit bias. Each translator interprets the domain theory in a different way, and the most appropriate interpretation is applied to a given problem.

We apply Cohen's principle to the representation of domain theories. If all domain theories are translated into the same representation, then the most general, adaptable representation has to be used in order to accommodate the most general case. This comes at the expense of higher computational costs and possibly lower accuracy due to overfit stemming from unbridled adaptability. The neural net representation into which Kbann translates domain theories allows (1) a measure of partial match to the domain theory, (2) different parts of the domain theory to be weighted differently, and (3) conditions to be added to and dropped from the domain theory. All these options of adaptability are probably not necessary for most problems and may even be detrimental. These options in Kbann also require the computationally expensive backpropagation method.

The representation used in Neither-MofN is not as adaptable as Kbann's: it does not allow individual parts of the domain theory to be weighted differently. Neither-MofN runs more quickly than Kbann on small problems and probably matches or even surpasses Kbann's accuracy for many domains, namely domains for which fine-grained weighting is unfruitful or even detrimental. A toolbox of theory rerepresentation translators analogous to Grendel would allow a domain theory to be translated into a representation having the most appropriate forms of adaptability.

2.5 Outlook and Summary

In summary, we briefly reexamine flexible representation and flexible structure, the two desirable qualities set forth in Section 1. We consider how the various systems exemplify some subset of these desirable qualities.

Kbann and Neither-MofN both interpreted a theory more flexibly than its original representation allowed and revised the theory in this more adaptable representation. A final, refined theory often has many exceptions to the rule; it may tolerate partial matches and missing pieces of evidence; it may weight some evidence more heavily than other evidence. Kbann's and Neither-MofN's new representation may not be the most concise, appropriate representation for the initial theory, but the new representation allows concise expression of an otherwise cumbersome final theory. These are cases of the principle of flexible representation.

Standard induction programs have been quite successful at building concise theories with high predictive accuracy when the target concept can be concisely expressed using the original set of features. When it can't, constructive induction is a means of creating new features such that the target concept can be concisely expressed.
Miro uses constructive induction to take advantage of the strengths of both a domain theory and standard induction. Knowledge from the theory guides the construction of appropriate new features, and standard induction structures these into a concise description of the concept. Thus Miro-like construction coupled with standard induction provides a ready and powerful means of flexibly restructuring the knowledge contained in an initial domain theory. This is a case of the principle of flexible structure.
In the following section we introduce the DNA Promoter Recognition domain in order to illustrate tangibly how some of the systems discussed above integrate knowledge and induction." }, { "figure_ref": [], "heading": "Demonstrations of Related Work", "publication_ref": [ "b12", "b29", "b21", "b28", "b32", "b5", "b23", "b2", "b29" ], "table_ref": [], "text": "This section introduces the Promoter Recognition domain (Harley, Reynolds, & Noordewier, 1990) and briefly illustrates how a Miro-like system, Either, Kbann, and Neither-MofN behave in this domain. We implemented a Miro-like system for the promoter domain; versions of Either and Neither-MofN were available from Ray Mooney's group; Kbann's behavior is described by analyzing Towell and Shavlik (1994). We chose the promoter domain because it is a non-trivial, real-world problem which a number of theory revision researchers have used to test their work (Ourston & Mooney, 1990; Thompson et al., 1991; Wogulis, 1991; Cohen, 1992; Pazzani & Kibler, 1992; Baffes & Mooney, 1993; Towell & Shavlik, 1994). The promoter domain is one of three domains in which we evaluate our work, theory-guided constructive induction, in Section 5." }, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "The Promoter Recognition Domain", "publication_ref": [], "table_ref": [], "text": "A promoter sequence is a region of DNA that marks the beginning of a gene. Each example in the promoter recognition domain is a region of DNA classified either as a promoter or a non-promoter. As illustrated in Figure 3, examples consist of 57 features representing a sequence of 57 DNA nucleotides. Each feature can take on the values A, G, C, or T representing adenine, guanine, cytosine, and thymine at the corresponding DNA position. The features are labeled according to their position from p-50 to p+7 (there is no zero position). The notation \"p-N\" denotes the nucleotide that is N positions upstream from the beginning of the gene. The goal is to predict whether a sequence is a promoter from its nucleotides. A total of 106 examples are available: 53 promoters and 53 non-promoters.
The promoter recognition problem comes with the initial domain theory shown in Figure 4 (quoted almost verbatim from Towell and Shavlik's entry in the UCI Machine Learning Repository). The theory states that promoter sequences must have two regions that make contact with a protein and must also have an acceptable conformation pattern. There are four possibilities for the contact region at minus 35 (35 nucleotides upstream from the beginning of the gene). A match of any of these four possibilities will satisfy the minus 35 contact condition, thus they are joined by disjunction. Similarly, there are four possibilities for the contact region at minus 10 and four acceptable conformation patterns. Figure 5 gives a more pictorial presentation of portions of the theory. Of the 106 examples in the dataset, none matched the domain theory exactly, yielding an accuracy of 50%.
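To make the theory's all-or-none semantics concrete before turning to the individual systems, the following minimal Python sketch (our illustration, not code from any system discussed here; the segment values are hypothetical) shows how a rule and a DNA segment can be represented and how strict matching is computed. It is this strict matching that produces the 50% figure: no example satisfies every condition of the theory.

# All-or-none matching of promoter rules (illustrative sketch only).
# A rule maps position labels to required nucleotides; a segment maps
# position labels to observed nucleotides (hypothetical values below).
segment = {'p-37': 'c', 'p-36': 't', 'p-35': 't', 'p-34': 'g',
           'p-33': 'c', 'p-32': 'a', 'p-31': 'a', 'p-30': 't'}

minus_35_rules = [  # two of the four minus 35 disjuncts from Figure 4
    {'p-37': 'c', 'p-36': 't', 'p-35': 't', 'p-34': 'g', 'p-33': 'a', 'p-32': 'c'},
    {'p-36': 't', 'p-35': 't', 'p-34': 'g', 'p-32': 'c', 'p-31': 'a'},
]

def rule_matches(rule, segment):
    # A conjunctive rule holds only if every one of its conditions holds.
    return all(segment.get(pos) == nuc for pos, nuc in rule.items())

def group_matches(rules, segment):
    # A disjunctive group of rules holds if any single rule holds.
    return any(rule_matches(r, segment) for r in rules)

print(group_matches(minus_35_rules, segment))  # False: no rule matches exactly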
" }, { "figure_ref": [], "heading": "Miro in the Promoter Domain", "publication_ref": [ "b16", "b24" ], "table_ref": [], "text": "A Miro-like system in the promoter domain would use the rules in Figure 4 to construct new high-level features for each DNA segment; Figure 6 shows the features constructed from the minus 35 rules. New features would similarly be created for the minus 10 rules and the conformation rules, and a standard induction algorithm could then be applied. We implemented a Miro-like system; Figure 7 gives an example theory created by it. (Drastal's original Miro used the candidate elimination algorithm (Mitchell, 1977) as its underlying induction algorithm. We used C4.5 (Quinlan, 1993).) As opposed to theory revision systems that incrementally modify the domain theory, Miro has broken the theory down into its components and has fashioned these components into a new theory using a standard induction program. Thus Miro has exhibited the flexible structure principle for this domain: it was not restricted in any way by the structure of the initial theory. Rather, Miro exploited the strengths of standard induction to concisely characterize the training examples using the new features.
Promoters have a region where a protein (RNA polymerase) must make contact and the helical DNA sequence must have a valid conformation so that the two pieces of the contact region spatially align. Prolog notation is used.
promoter :- contact, conformation.
There are two regions \"upstream\" from the beginning of the gene at which the RNA polymerase makes contact.
contact :- minus_35, minus_10.
The following rules describe the compositions of possible contact regions.
minus_35 :- p-37=c, p-36=t, p-35=t, p-34=g, p-33=a, p-32=c.
minus_35 :- p-36=t, p-35=t, p-34=g, p-32=c, p-31=a.
minus_35 :- p-36=t, p-35=t, p-34=g, p-33=a, p-32=c, p-31=a.
minus_35 :- p-36=t, p-35=t, p-34=g, p-33=a, p-32=c.
minus_10 :- p-14=t, p-13=a, p-12=t, p-11=a, p-10=a, p-9=t.
minus_10 :- p-13=t, p-12=a, p-10=a, p-8=t.
minus_10 :- p-13=t, p-12=a, p-7=t.
minus_10 :- p-12=t, p-11=a, p-7=t.
The following rules describe sequences that produce acceptable conformations.
conformation :- p-47=c, p-46=a, p-45=a, p-43=t, p-42=t, p-40=a, p-39=c, p-22=g, p-18=t, p-16=c, p-8=g, p-7=c, p-6=g, p-5=c, p-4=c, p-2=c, p-1=c.
conformation :- p-45=a, p-44=a, p-41=a.
conformation :- p-49=a, p-44=t, p-27=t, p-22=a, p-18=t, p-16=t, p-15=g, p-1=a.
conformation :- p-45=a, p-41=a, p-28=t, p-27=t, p-23=t, p-21=a, p-20=a, p-17=t, p-15=t, p-4=t.
Figure 4: The initial domain theory for recognizing promoters (from Towell and Shavlik).
A weakness Miro displays in this example is that it allows no flexibility of representation of the theory. The representation of the features constructed by Miro is basically the same all-or-none representation of the initial theory; either a DNA segment matched a rule, or it did not." }, { "figure_ref": [], "heading": "Either in the Promoter Domain", "publication_ref": [], "table_ref": [], "text": "An Either-like system refines the initial promoter theory by dropping and adding conditions and rules. We simulated Either by turning off the M-of-N option in Neither and ran it in the promoter domain. Figure 8 shows the refined theory produced using a randomly selected training set of size 80. Because the initial promoter domain theory does not lend itself to revision through small, local changes, Either has only limited success. The conformation portion of the theory is too spread out to display pictorially.
In this run, the program exhibited the second behavioral extreme discussed in Section 1; it entirely removed groups of rules and then tried to build new rules to replace what was lost. The minus 10 and conformation rules have essentially been removed, and new rules have been added to the minus 35 group.
These new minus 35 rules contain the condition p-12=t previously found in the minus 10 group and the condition p-44=a previously found in the conformation group.
promoter :- contact, conformation.
contact :- minus_35, minus_10.
minus_35 :- p-35=t, p-34=g.
minus_35 :- p-36=t, p-33=a, p-32=c.
minus_35 :- p-36=t, p-32=c, p-50=c.
minus_35 :- p-34=g, p-12=t.
minus_35 :- p-34=g, p-44=a.
minus_35 :- p-35=t, p-47=g.
Figure 8: The revised theory produced by the Either-like run (the conformation portion is not shown).
Either's behavior in this example is a direct result of its lack of flexibility of representation and flexibility of structure. It is difficult to transform the minus 10 and conformation rules into something useful in their initial representation using Either's locally-acting operators. Either handles this by dropping these sets of rules, losing their knowledge, and attempting to rediscover the lost knowledge empirically. The end result of this loss of knowledge is the lower than optimal accuracy shown later in Section 5." }, { "figure_ref": [], "heading": "Kbann in the Promoter Domain", "publication_ref": [ "b29" ], "table_ref": [], "text": "Figure 9, modeled after a figure by Towell and Shavlik (1994), shows the setup of a Kbann network for the promoter theory. Each slot along the bottom represents one nucleotide in the DNA sequence. Each node at the first level up from the bottom embodies a single domain rule, and higher levels encode groups of rules with the final concept at the top. The links shown in the figure are the ones that are initially high-weighted. The net is next filled out to be fully connected with low-weight links. Backpropagation is then applied to refine the network's weights.
Figure 6: A DNA segment fragment (... p-38=g, p-37=c, p-36=t, p-35=t, p-34=g, p-33=a, p-32=c, p-31=t, p-30=t ...) and the minus 35 group of rules with corresponding constructed features.
The neural net representation is more appropriate for this domain than the propositional logic representation of the initial theory. It allows for a measurement of partial match by weighting the links in such a way that a subset of a rule's conditions are enough to surpass a node's threshold. It also allows for variable weightings of different parts of the theory; therefore, more predictive nucleotides can be weighted more heavily, and only slightly predictive nucleotides can be weighted less heavily. Kbann has only limited flexibility of structure. Because the refined network is the result of a series of incremental modifications in the initial network, a fundamental restructuring of the theory it embodies is unlikely. Kbann is limited to finding the best network with the same fundamental structure imposed on it by the initial theory.
One of Kbann's advantages is that it uses a standard learning algorithm as its foundation. Backpropagation has been widely used and consequently improved by previous researchers. Theory refinement tools that are built from the ground up or use a standard tool only tangentially suffer from having to invent their own methods of handling standard problems such as overfit, noisy data, etc. A wealth of neural net experience and resources is available to the Kbann user; as neural net technology advances, Kbann technology will passively advance with it." }, { "figure_ref": [], "heading": "Neither-MofN in the Promoter Domain", "publication_ref": [], "table_ref": [], "text": "Neither-MofN refines the initial promoter theory not only by dropping and adding conditions and rules but also by allowing conjunctive rules to be true if only a subset of their conditions are true.
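As a concrete illustration of these M-of-N semantics, here is a minimal Python sketch (our own, not Neither-MofN's implementation; the DNA fragment is hypothetical) that evaluates a rule which fires when at least M of its N conditions hold. The example rule is one of the revised minus 10 rules shown in Figure 10 below. Setting M = N recovers ordinary conjunction, which is why lowering M for a rule is a natural refinement operator.

# M-of-N rule evaluation (illustrative sketch only).
def m_of_n_holds(m, conditions, segment):
    # conditions: list of (position, nucleotide) pairs; the rule fires
    # when at least m of its n conditions are satisfied by the segment.
    satisfied = sum(1 for pos, nuc in conditions if segment.get(pos) == nuc)
    return satisfied >= m

# The revised rule 'minus_10 :- 2 of (p-12=t, p-11=a, p-7=t)'.
conditions = [('p-12', 't'), ('p-11', 'a'), ('p-7', 't')]
segment = {'p-12': 't', 'p-11': 'a', 'p-7': 'g'}  # hypothetical fragment

print(m_of_n_holds(2, conditions, segment))  # True: 2 of the 3 conditions hold
print(m_of_n_holds(3, conditions, segment))  # False: with M = N, a strict AND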
We ran Neither-MofN with a randomly selected training set of size 80, and Figure 10 shows a refined promoter theory produced. The theory expressed here with 9 M-of-N rules would require 30 rules using propositional logic, the initial theory's representation. More importantly, it is unclear how any system using the initial representation would reach the 30-rule theory from the initial theory. Thus the M-of-N representation adopted not only allows for the concise expression of the final theory but also facilitates the refinement process.
promoter :- 2 of ( contact, conformation ).
contact :- 2 of ( minus_35, minus_10 ).
minus_35 :- 2 of ( p-36=t, p-35=t, p-34=g, p-32=c, p-31=a ).
minus_35 :- 5 of ( p-36=t, p-35=t, p-34=g, p-33=a, p-32=c ).
minus_10 :- 2 of ( p-12=t, p-11=a, p-7=t ).
minus_10 :- 2 of ( p-13=t, p-12=a, p-10=a, p-8=t ).
minus_10 :- 6 of ( p-14=t, p-13=a, p-12=t, p-11=a, p-10=a, p-9=t ).
minus_10 :- 2 of ( p-13=t, p-12=a, p-10=a, p-34=g ).
conformation :- true.
Figure 10: A revised theory produced by Neither-MofN.
Neither-MofN displays flexibility of representation by allowing an M-of-N interpretation of the original propositional logic, but it does not allow for as fine-grained refinement as Kbann. Both allow for a measure of partial match, but Kbann could weight more predictive features more heavily. For example, in the minus 35 rules, perhaps p-36=t is more predictive of a DNA segment being a promoter than p-34=g and therefore should be weighted more heavily. Neither-MofN simply counts the number of true conditions in a rule; therefore, every condition is weighted equally. Kbann's fine-grained weighting may be needed in some domains and not in others. It may actually be detrimental in some domains. An advanced theory revision system should offer a range of representations.
Like Kbann, Neither-MofN has only limited flexibility of structure. The refined theory is reached through a series of small, incremental modifications in the initial theory, precluding a fundamental restructuring. Neither-MofN is therefore limited to finding the best theory with the same fundamental structure as the initial theory." }, { "figure_ref": [], "heading": "Theory-Guided Constructive Induction", "publication_ref": [], "table_ref": [], "text": "In the first half of this section we present guidelines for theory-guided constructive induction that summarize the work discussed in Sections 2 and 3. The remainder of the section presents an algorithm that instantiates these guidelines. We evaluate the algorithm in Section 5." }, { "figure_ref": [], "heading": "Guidelines", "publication_ref": [], "table_ref": [], "text": "The following guidelines are a synthesis of the strengths of the previously discussed related work.
As in Miro, new features should be constructed using components of the domain theory. These new features are combinations of existing features, and a final theory is created by applying a standard induction algorithm to the training examples described using the new features. This allows knowledge to be gleaned from the initial theory without forcing the final theory to conform to the initial theory's backbone structure.
It takes full advantage of the domain theory by building high-level features from the original low-level features.
It also takes advantage of a strength of standard induction: building concise theories having high predictive accuracy when the target concept can be concisely expressed using the given features.
As in Either, the constructed features should be modifiable by various operators that act locally, such as adding or dropping conjuncts from a constructed feature.
As in Kbann and Neither-MofN, the representation of the constructed features need not be the exact representation in which the initial theory is given. For example, the initial theory may be given as a set of rules written in propositional logic. A new feature can be constructed for each rule, but it need not be a boolean feature telling whether all the conditions are met; for example, it may be a count of how many conditions of that rule are met. This allows the final theory to be formed and expressed in a representation that is more suitable than the representation of the initial theory. Like Grendel, a complete system should offer a library of interpreters allowing the domain theory to be translated into a range of representations with differing adaptability. One interpreter might emulate Miro, strictly translating a domain theory into boolean constructed features. Another interpreter might construct features that count the number of satisfied conditions of the corresponding component of the domain theory, thus providing a measure of partial match. Still another interpreter might construct features that are weighted sums of the satisfied conditions. The weights could be refined empirically by examining a set of training examples. Thus the most appropriate amount of expressive power can be applied to a given problem without incurring unnecessary expense." }, { "figure_ref": [ "fig_1", "fig_8", "fig_1", "fig_1" ], "heading": "A Specific Interpreter", "publication_ref": [], "table_ref": [], "text": "This section describes an algorithm which is a limited instantiation of the guidelines just described. The algorithm is intended as a demonstration of the distillation and synthesis of the principles embodied in previous landmark systems. It contains a main module, Tgci, described in Figure 12, and a specific interpreter, Tgci1, described in Figure 11.
The main module Tgci redescribes the training and testing examples by calling Tgci1 and then applies C4.5 to the redescribed examples (just as Miro applied the candidate elimination algorithm to examples after redescribing them). Tgci1 can be viewed as a single interpreter from a potential Grendel-like toolbox. It takes as input a single example and a domain theory expressed as an AND/OR tree such as the one shown in Figure 13. It returns a new vector of features for that example that measure the partial match of the example to the theory. Thus it creates new features from components of the domain theory as in Miro, but because it measures partial match, it allows flexibility in representing the information contained in the initial theory as in Kbann and Neither-MofN. One aspect of the guidelines in 4.1 that does not appear in this algorithm is Either's locally acting operators such as adding and dropping conditions from a portion of the theory.
The following two paragraphs explain in more detail the workings of Tgci1 and Tgci respectively.
Given: An example E and a domain theory with root node R. The domain theory is an AND/OR/NOT tree in which the leaves are conditions which can be tested to be true or false.
1. If R is a directly testable condition, return P = (1, <>) if R is true for E and P = (-1, <>) if R is false for E.
2. n = the number of children of R.
3. For each child R_j of R, call Tgci1(R_j, E) and store the respective results in P_j = (F*_j, F_j).
4. If the major operator of R is OR, F_new = MAX(F*_1, ..., F*_n). Return P = (F_new, concatenate(F_new, F_1, ..., F_n)).
5. If the major operator of R is AND, F_new = (F*_1 + ... + F*_n)/n. Return P = (F_new, concatenate(F_new, F_1, ..., F_n)).
6. If the major operator of R is NOT, F_new = -F*_1. Return P = (F_new, concatenate(F_new, F_1)).
Figure 11: The Tgci1 algorithm
The Tgci1 algorithm, given in Figure 11, is recursive.
Its inputs are an example E and a domain theory with root node R. It ultimately returns a redescription of E in the form of a vector of new features F. It also returns a value F* called the top feature, which is used in intermediate calculations described below. The base case occurs if the domain theory is a single leaf node (i.e., R is a simple condition). In this case (Line 1), Tgci1 returns the top feature 1 if the condition is true and -1 if the condition is false. No new features are returned in the base case because they would simply duplicate the existing features. If the domain theory is not a single leaf node, Tgci1 recursively calls itself on each of R's children (Line 3). When a child of R, R_j, is processed, it returns a vector of new features F_j (which measures the partial match of the example to the jth child of R and its various subparts). It also returns the top feature F*_j, which is included in F_j but is marked as special because it measures the partial match of the example to the whole of the jth child of R. If there are n children, the result of Line 3 is n vectors of new features, F_1 to F_n, and n top features, F*_1 to F*_n. If the operator at node R is OR (Line 4), then F_new, the new feature created for that node, is the maximum of the F*_j. Thus F_new measures how closely the best of R's children comes to having its conditions met by the example. The vector of new features returned in this case is a concatenation of F_new and all the new features from R's children. If the operator at node R is AND (Line 5), then F_new is the average of the F*_j. Thus F_new measures how closely all of R's children as a group come to having their conditions met by the example. The vector of new features returned in this case is again a concatenation of F_new and all the new features from R's children. If the operator at node R is NOT (Line 6), R should have only one child, and F_new is F*_1 negated. Thus F_new measures the extent to which the conditions of R's child are not met by the example.
Given: A domain theory, a set of training examples, and a set of testing examples.
1. Redescribe each example in the training set by calling Tgci1, producing a new training set.
2. Redescribe each example in the testing set in the same way, producing a new testing set.
3. Run the standard induction program C4.5 on the redescribed example sets.
Figure 12: The Tgci algorithm
If Tgci1 is called twice with two different examples but with the same domain theory, the two vectors of new features will be the same size. Furthermore, corresponding features measure the match of corresponding parts of the domain theory. The Tgci main module in Figure 12 takes advantage of this by creating redescribed example sets from the input example sets. Line 1 redescribes each example in the training set, producing a new training set. Line 2 does the same for the testing set. Line 3 runs the standard induction program C4.5 on these redescribed example sets. The returned decision tree can be easily interpreted by examining which new features were used and what part of the domain theory they correspond to." }, { "figure_ref": [ "fig_8" ], "heading": "Tgci1 Examples", "publication_ref": [], "table_ref": [], "text": "As an example of how the Tgci1 interpreter works, consider the toy theory shown in Figure 13. Tgci1 redescribes the input example by constructing a new feature for each node in the input theory. Consider the situation where the input example matches conditions A, B, and D but not C and E. When Tgci1 evaluates the children of Node 6, it gets the values F*_1 = 1, F*_2 = 1, F*_3 = -1, F*_4 = 1, and F*_5 = -1. Since the operator at Node 6 is AND, F_new is the average of the values received from the children, 0.20 ((1 + 1 + (-1) + 1 + (-1))/5 = 0.20). Likewise, if condition G matches but not F and H, F_new for Node 5 will have the value
0.33: the three conditions at Node 7 give (1 + (-1) + (-1))/3 = -0.33, and this is negated by the NOT at Node 5. Since Node 2 is a disjunction, its new feature measures the best partial match of its two children and has the value 0.33 (MAX(0.20, 0.33)), and so on.
Figure 14 shows how Tgci1 redescribes a particular DNA segment using the minus 35 rules of the promoter theory. A partial DNA segment is shown along with the four minus 35 rules and the new feature constructed for each rule (we have given the new features names here to simplify our illustration). For the first rule, four of the six nucleotides match; therefore, for that DNA segment feat_minus35_A has the value 0.33 ((1 + 1 + 1 + 1 + (-1) + (-1))/6). For the second rule, four of the five nucleotides match; therefore, feat_minus35_B has the value 0.60. Because these and the other two minus 35 rules are joined by disjunction in the original domain theory, feat_minus35_all, the new feature constructed for this group, takes the maximum value of its four children; therefore, feat_minus35_all has the value 0.60 because feat_minus35_B has the value 0.60, the highest in the group. Intuitively, feat_minus35_all represents the best partial match of this grouping: the extent to which the disjunction is partially satisfied. The result of running Tgci1 on each DNA sequence is a set of redescribed training examples. Each redescribed example has a value for feat_minus35_A through feat_minus35_D, feat_minus35_all, and all other nodes in the promoter domain theory. The training set is essentially redescribed using a new feature vector derived from information contained in the domain theory. In this form, any off-the-shelf induction program can be applied to the new example set.
A DNA segment fragment:
... p-38=g, p-37=c, p-36=t, p-35=t, p-34=g, p-33=c, p-32=a, p-31=a, p-30=t ...
The minus 35 group of rules and corresponding constructed features:
minus_35 :- p-37=c, p-36=t, p-35=t, p-34=g, p-33=a, p-32=c. feat_minus35_A = 0.33
minus_35 :- p-36=t, p-35=t, p-34=g, p-32=c, p-31=a. feat_minus35_B = 0.60
minus_35 :- p-36=t, p-35=t, p-34=g, p-33=a, p-32=c, p-31=a. feat_minus35_C = 0.33
minus_35 :- p-36=t, p-35=t, p-34=g, p-33=a, p-32=c. feat_minus35_D = 0.20
feat_minus35_all = max(feat_minus35_A, feat_minus35_B, feat_minus35_C, feat_minus35_D) = 0.60
Figure 14: An example of how Tgci1 generates constructed features from a portion of the promoter domain theory and a DNA segment. Four of the conditions in the first minus 35 rule match the DNA segment; therefore, the constructed feature for that rule has the value 0.33 ((1 + 1 + 1 + 1 + (-1) + (-1))/6). Feat_minus35_all, the new feature for the entire minus 35 group, takes the maximum value of its children, thus embodying the best partial match of the group.
Anomalous situations can be created in which Tgci1 gives a \"good score\" to a seemingly bad example and a bad score to a good example. Situations can also be created where logically equivalent theories give different scores for a single example. These occur because Tgci1 is biased to favor situations where more matched conditions of an AND are desirable, but more matched conditions of an OR are not necessarily better. Eliminating these anomalies would remove this bias.
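The scoring just illustrated can be captured in a few lines. The following Python sketch is our reading of the Tgci1 recursion (the tree encoding and names are ours, not the original implementation): leaves score +1 or -1, AND nodes average their children's top features, OR nodes take the maximum, NOT nodes negate, and each internal node contributes one constructed feature.

# Tgci1-style partial-match scoring (illustrative sketch only).
def tgci1(node, example, features):
    # node: ('leaf', name) | ('not', child) | ('and'|'or', child, ...).
    # Returns the node's top feature; appends constructed features in place.
    kind = node[0]
    if kind == 'leaf':
        return 1.0 if example[node[1]] else -1.0  # no new feature at leaves
    if kind == 'not':
        top = -tgci1(node[1], example, features)
    else:
        tops = [tgci1(child, example, features) for child in node[1:]]
        top = max(tops) if kind == 'or' else sum(tops) / len(tops)
    features.append(top)  # one constructed feature per internal node
    return top

# Node 6 of the toy theory in Figure 13: AND(A, B, C, D, E).
node6 = ('and', ('leaf', 'A'), ('leaf', 'B'), ('leaf', 'C'),
                ('leaf', 'D'), ('leaf', 'E'))
example = {'A': True, 'B': True, 'C': False, 'D': True, 'E': False}
features = []
print(tgci1(node6, example, features))  # 0.2 = (1 + 1 - 1 + 1 - 1) / 5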
" }, { "figure_ref": [], "heading": "Experiments and Analysis", "publication_ref": [ "b12", "b20", "b6", "b24" ], "table_ref": [], "text": "This section presents the results of applying theory-guided constructive induction to three domains: the promoter domain (Harley et al., 1990), the primate splice-junction domain (Noordewier, Shavlik, & Towell, 1992), and the gene identification domain (Craven & Shavlik, 1995). In each case the Tgci1 interpreter was applied to the domain's theory and examples in order to redescribe the examples using new features. Then C4.5 (Quinlan, 1993) was applied to the redescribed examples." }, { "figure_ref": [ "fig_9" ], "heading": "The Promoter Domain", "publication_ref": [ "b29", "b21", "b28", "b28", "b29", "b17" ], "table_ref": [], "text": "Figure 15 shows a learning curve for theory-guided constructive induction in the promoter domain accompanied by curves for Either, Labyrinth_K, Kbann, and Neither-MofN. Following the methodology described by Towell and Shavlik (1994), the curves for Either, Labyrinth_K, and Kbann were taken from Ourston and Mooney (1990), Thompson, Langley, and Iba (1991), and Towell and Shavlik (1994), respectively, and were obtained by a similar methodology.[1] The curve for Tgci is the average of 50 independent random data partitions and is given along with 95% confidence ranges. The Neither-MofN program was obtained from Ray Mooney's group and was used in generating the Neither-MofN curve using the same 50 data partitions as were used for Tgci.[2] Tgci showed improvement over Either and Labyrinth_K for all portions of the curve and also performed better than Kbann and Neither-MofN for all except the smallest training sets. Confidence intervals were not available for Either, Labyrinth_K, and Kbann.
1. Either used a testing set of size 25 and did not use the conformation portion of the domain theory. The testing set in Labyrinth_K always consisted of 13 promoters and 13 non-promoters.
2. Baffes and Mooney (1993)" }, { "figure_ref": [], "heading": "Structure of the Initial and Revised Promoter Theories", "publication_ref": [], "table_ref": [], "text": "Figure 16: The revised theory produced by theory-guided constructive induction has borrowed substructures from the initial theory, but as a whole has not been restricted by its structure.
Figure 16 compares the initial promoter theory with a theory created by Tgci. Reasons for Tgci's improvement can be inferred from this figure. Tgci has extracted the components of the original theory that are most helpful and restructured them into a more concise theory. Neither Kbann nor Neither-MofN facilitates this radical extraction and restructuring. As seen in the leaf nodes, the new theory also measures the partial match of an example to components of the original theory. This aspect is similar in Kbann and Neither-MofN.
Part of Tgci's improvement over Kbann and Neither-MofN may be due to a knowledge/bias conflict in the latter two systems, a situation where revision biases conflict with knowledge in such a way as to undo some of the knowledge's benefits. This can occur whenever detailed knowledge is opened up to revision using a set of examples. The revision is not guided only by the examples but rather by the examples as interpreted by a set of algorithmic biases. Biases that are useful in the absence of knowledge may undo good knowledge when improperly applied.
Yet these biases, developed and perfected for pure induction, are often unquestioningly applied to the revision of theories. The biases governing the dropping of conditions in Neither-MofN and the reweighting of conditions in Kbann may be neutralizing the promoter theory's potential. We speculate this because we conducted some experiments that allowed bias-guided dropping and adding of conditions within Tgci. We found that these techniques actually reduced accuracy in this domain." }, { "figure_ref": [ "fig_10" ], "heading": "The Primate Splice-junction Domain", "publication_ref": [ "b20", "b29" ], "table_ref": [], "text": "The primate splice-junction domain (Noordewier et al., 1992) involves analyzing a DNA sequence and identifying boundaries between introns and exons. Exons are the parts of a DNA sequence kept after splicing; introns are spliced out. The task then involves placing a given boundary into one of three classes: an intron/exon boundary, an exon/intron boundary, or neither. An imperfect domain theory is available which has a 39.0% error rate on the entire set of available examples. Figure 17 shows learning curves for C4.5, backpropagation, Kbann, and Tgci in the primate splice-junction domain. The results for Kbann and backpropagation were taken from Towell and Shavlik (1994). The curves for plain C4.5 and the Tgci algorithm were created by training on sets of size 10, 20, 30, ..., 90, 100, 120, ..., 200 and testing on a set of size 800. The curves for C4.5 and Tgci are the average of 40 independent data partitions. No comparison was made with Neither-MofN because the implementation we obtained could handle only two-class concepts. For training sets larger than 200, Kbann, Tgci, and backpropagation all performed similarly.
The accuracy of Tgci appears slightly worse than that of Kbann but perhaps not significantly so. Kbann's advantage over Tgci is its ability to assign fine-grained weightings to individual parts of a domain theory. Tgci's advantage over Kbann is its ability to more easily restructure the information contained in a domain theory. We speculate that Kbann's capability to assign fine-grained weights outweighed its somewhat rigid structuring of this domain theory. Theory-guided constructive induction has an advantage of speed over Kbann because C4.5, its underlying learner, runs much more quickly than backpropagation, Kbann's underlying learning algorithm." }, { "figure_ref": [ "fig_11" ], "heading": "The Gene Identification Domain", "publication_ref": [ "b6" ], "table_ref": [], "text": "The gene identification domain (Craven & Shavlik, 1995) involves classifying a given DNA segment as a coding sequence (one that codes a protein) or a non-coding sequence. No domain theory was available in the gene identification domain; therefore, we created an artificial domain theory using the information that organisms may favor certain nucleotide triplets over others in gene coding. The domain theory embodies the knowledge that a DNA segment is likely to be a gene-coding segment if its triplets are coding-favoring triplets or if its triplets are not noncoding-favoring triplets. The decision of which triplets were coding-favoring, which were noncoding-favoring, and which favored neither was made empirically by analyzing the makeup of 2500 coding and 2500 noncoding sequences. The specific artificial domain theory used is described in Online Appendix 1.
Figure 18 shows learning curves for C4.5 and Tgci in the gene identification domain. The original domain theory yields 31.5% error.
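To make the artificial triplet theory concrete, the following Python sketch gives our reading of its spirit; the triplet sets shown are invented placeholders, since the real, empirically derived sets appear only in Online Appendix 1. Each feature measures how strongly a segment's triplets support one piece of the knowledge.

# Triplet-based constructed features (sketch; triplet sets are invented).
CODING_FAVORING = {'atg', 'gaa', 'ctg'}
NONCODING_FAVORING = {'taa', 'tag', 'tga'}

def triplet_features(segment):
    # Fraction of the segment's triplets supporting each piece of knowledge.
    triplets = [segment[i:i + 3] for i in range(0, len(segment) - 2, 3)]
    coding = sum(t in CODING_FAVORING for t in triplets) / len(triplets)
    noncoding = sum(t in NONCODING_FAVORING for t in triplets) / len(triplets)
    return {'coding_favoring': coding, 'not_noncoding_favoring': 1.0 - noncoding}

print(triplet_features('atggaactgtaactg'))
# {'coding_favoring': 0.8, 'not_noncoding_favoring': 0.8}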
The curves were created by training on example sets of size 50, 200, 400, ..., 2000 and testing on a separate example set of size 1000. The curves are the average of 40 independent data partitions.
Only a partial curve is given for Neither-MofN because it became prohibitively slow for larger training sets. In the promoter domain, where training sets were smaller than 100, Tgci and Neither-MofN ran at comparable speeds (approximately 10 seconds on a Sun4 workstation). In this domain Tgci ran in approximately 2 minutes for larger training sets. Neither-MofN took 21 times as long as Tgci on training sets of size 400, 69 times as long for size 800, and 144 times as long for size 1200. Consequently, Neither-MofN's curve only extends to 1200 and only represents five randomly selected data partitions. For these reasons, a solid comparison of Neither-MofN and Tgci cannot be made from these curves, but it appears that Tgci's accuracy is slightly better. We speculate that Neither-MofN's slightly lower accuracy is partially due to the fact that it revises the theory to correctly classify all the training examples. The result is a theory which likely overfits the training examples. Tgci does not need to explicitly avoid overfit because this is handled by its underlying learner." }, { "figure_ref": [], "heading": "Summary of Experiments", "publication_ref": [ "b18", "b29", "b29" ], "table_ref": [], "text": "Our goal in this paper has not been to present a new technique but rather to understand the behavior of landmark systems, distill their strengths, and synthesize them into a simple system, Tgci. The evaluation of this algorithm shows that its accuracy roughly matches or exceeds that of its predecessors. In the promoter domain, Tgci showed sizable improvement over many published results. In the splice-junction domain, Tgci narrowly falls short of Kbann's accuracy. In the gene identification domain, Tgci outperforms Neither-MofN. In all these domains Tgci greatly improves on the original theory alone and C4.5 alone.
Tgci is faster than its closest competitors. Tgci runs as much as 100 times faster than Neither-MofN on large datasets. A strict quantitative comparison of the speeds of Tgci and Kbann was not made because 1) backpropagation is known to be much slower than decision trees (Mooney, Shavlik, Towell, & Gove, 1989), 2) Kbann uses multiple hidden layers, which makes its training time even longer (Towell & Shavlik, 1994), and 3) Towell and Shavlik (1994) point out that each run of Kbann must be made multiple times with different initial random weights, whereas a single run of Tgci is sufficient.
Overall, our experiments support two claims of this paper: First, the accuracy of Tgci substantiates our delineation of system strengths in terms of flexible theory representation and flexible theory structure, since this characterization is the basis for this algorithm's design. Second, Tgci's combination of speed and accuracy suggests that unnecessary computational complexity can be avoided in synthesizing the strengths of landmark systems.
In the following section we take a closer look at the strengths of theory-guided constructive induction." }, { "figure_ref": [], "heading": "Discussion of Strengths", "publication_ref": [], "table_ref": [], "text": "Below a number of strengths of theory-guided constructive induction are discussed within the context of the Tgci algorithm used in our experiments."
}, { "figure_ref": [], "heading": "Flexible Representation", "publication_ref": [], "table_ref": [], "text": "As discussed in Section 1, for many domains the representation most appropriate for an initial theory may not be most appropriate for a refined theory. Because theory-guided constructive induction allows the translation of the initial theory into a different representation, it can accommodate such domains. In the experiments in this paper a representation was needed which allowed for a measurement of partial match to the domain theory. Tgci1 accomplished this by simply counting the matching features and propagating this information up the theory appropriately. Either and Labyrinth_K do not easily afford this measure of partial match and therefore are more appropriate for problems in which the best representation of the final theory is the same as that of the initial theory. Kbann allows a finer-grained measurement of partial match than both Neither-MofN and our work, but a price is paid in computational complexity. The theory-guided constructive induction framework allows for a variety of potential tools with varying degrees of granularity of partial match, although just one tool is used in our experiments." }, { "figure_ref": [], "heading": "Flexible Structure", "publication_ref": [], "table_ref": [], "text": "As discussed in Section 2.5, a strength of existing induction programs is fashioning a concise and highly predictive description of a concept when the target concept can be concisely described with the given features. Consequently, the value of a domain theory lies not in its overall structure. If the feature language is sufficient, any induction program can build a good overall theory structure. Instead, the value of a domain theory lies in the information it contains about how to redescribe examples using high-level features. These high-level features facilitate a concise description of the target concept. Systems such as Either and Neither-MofN that reach a final theory through a series of modifications in the initial theory hope to gain something by keeping the theory's overall structure intact. If the initial theory is sufficiently close to an accurate theory, this method works, but often clinging to the structure hinders full exploitation of the domain theory. Theory-guided constructive induction provides a means of fully exploiting both the information in the domain theory and the strengths of existing induction programs. Figure 16 in Section 5.1 gives a comparison of the structure of the initial promoter theory to the structure of a revised theory produced by theory-guided constructive induction. Substructures have been borrowed, but the revised theory as a whole has been restructured." }, { "figure_ref": [], "heading": "Use of Standard Induction as an Underlying Learner", "publication_ref": [], "table_ref": [], "text": "Because theory-guided constructive induction uses a standard induction program as its underlying learner, it does not need to reinvent solutions to overfit avoidance, multi-class concepts, noisy data, etc. Overfit avoidance has been widely studied for standard induction, and many standard techniques exist. Any system which modifies a theory to accommodate a set of training examples must also address the issue of overfit to the training examples. In many theory revision systems existing overfit avoidance techniques cannot be easily adapted, and the problem must be addressed from scratch.
Theory-guided constructive induction can take advantage of the full range of previous work in overfit avoidance for standard induction.
When multiple theory parts are available for multi-class concepts, the interpreter is run on the multiple theory parts, and the resulting new feature sets are combined. The primate splice-junction domain presented in Section 5.2 has three classes: intron/exon boundaries, exon/intron boundaries, and neither. Theories are given for both intron/exon and exon/intron. Both theories are used to create new features, and then all new features are concatenated together for learning. Interpreters such as Tgci1 also trivially handle negation in a domain theory." }, { "figure_ref": [ "fig_12", "fig_12" ], "heading": "Use of Theory Fragments", "publication_ref": [], "table_ref": [], "text": "Theory-guided constructive induction is not limited to using full domain theories. If only part of a theory is available, this can be used. To demonstrate this, three experiments were run in which only fragments of the promoter domain theory were used. In the first experiment, only the four minus 35 rules were used. Five features were constructed: one feature for each rule and then an additional feature for the group. Similar experiments were run for the minus 10 group and the conformation group.
Figure 19 gives learning curves for these three experiments along with curves for the entire theory and for no theory (C4.5 using the original features). Although the conformation portion of the theory gives no significant improvement over C4.5, both the minus 35 and minus 10 portions of the theory give significant improvements in performance. Thus even partial theories and theory fragments can be used by theory-guided constructive induction to yield sizable performance improvements.
The use of theory fragments should be explored as a means of evaluating the contribution of different parts of a theory. In Figure 19, the conformation portion of the theory is shown to yield no improvement. This could signal a knowledge engineer that the knowledge that should be conveyed through that portion of the theory is not useful to the learner in its present form." }, { "figure_ref": [], "heading": "Use of Multiple Theories", "publication_ref": [], "table_ref": [], "text": "Theory-guided constructive induction can use multiple competing and even incompatible domain theories. If multiple theories exist, theory-guided constructive induction provides a natural means of integrating them in such a way as to extract the best from all theories. Tgci1 would be called for each input theory, producing new features. Next, all the new features are simply pooled together, and the induction program selects from among them in fashioning the final theory; a sketch of this pooling is given below. This is seen on a very small scale in the promoter domain.
In Figure 4 some minus 35 rules subsume other minus 35 rules. According to the entry in the UCI Database, this is because \"the biological evidence is inconclusive with respect to the correct specificity.\" This is handled by simply using all four possibilities, and selection of the most useful knowledge is left to the induction program.
Tgci could also be used to evaluate the contributions of competing theories just as it was used to evaluate theory fragments above. A knowledge engineer could use this evaluation to guide his own revision and synthesis of competing theories.
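A minimal Python sketch of this pooling (ours; it assumes an interpreter with the shape of the tgci1 sketch given earlier, and the names are illustrative):

# Pooling constructed features from theory fragments or competing theories.
def redescribe(example, theories, interpreter):
    pooled = []
    for theory in theories:
        features = []
        interpreter(theory, example, features)
        pooled.extend(features)  # concatenate each theory's new features
    return pooled

# e.g. fragments = [minus_35_tree, minus_10_tree]  (or rival full theories)
# new_features = redescribe(example, fragments, tgci1)
# The induction program then selects among the pooled features.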
" }, { "figure_ref": [ "fig_13" ], "heading": "Easy Adoption of New Techniques", "publication_ref": [ "b25" ], "table_ref": [], "text": "Since theory-guided constructive induction can use any standard induction method as its underlying learner, as improvements are made in standard induction, theory-guided constructive induction passively improves. To demonstrate this, tests were also run with Lfc (Ragavan & Rendell, 1993) as the underlying induction program. Lfc is a decision tree learner that performs example-based constructive induction by looking ahead at combinations of features. Characteristically, Lfc improves accuracy for a moderate number of examples. Figure 20 shows the resulting learning curve along with the C4.5 Tgci curve. Both curves are the average of 50 separate runs with the same data partitions used for each program. In a pairwise comparison the improvement of Lfc over C4.5 was signi cant at the 0.025 level of con dence for training sets of size 72 and 80. More sophisticated underlying induction programs can further improve accuracy." }, { "figure_ref": [ "fig_14", "fig_14", "fig_14", "fig_15", "fig_15" ], "heading": "Testing the Limits of Tgci", "publication_ref": [], "table_ref": [], "text": "The purpose of this section is to explore the performance of theory-guided constructive induction on theory revision problems ranging from easy to di cult. In easy problems the underlying concept embodied in the training and testing examples matches the domain theory fairly closely; therefore, the examples themselves match the domain theory fairly closely. In di cult problems the underlying concept embodied in the examples does not match the domain theory very well so the examples do not either. Although many other factors determine the di culty of an individual problem, this aspect is an important component and worth exploring. Our experiment in this section is intended to relate ranges of di culty to the amount of improvement produced by Tgci. Since a number of factors a ect problem di culty we chose that the theory revision problems for the experiment should all be variations of a single problem. By doing this we are able to hold all other factors constant and vary the closeness of match to the domain theory. Because we wanted to avoid totally arti cial domains, we chose to start with the promoter domain and create \\new\" domains by perverting the example set.\nThese \\new\" domains were created by perverting the examples in the original promoter problem to either more closely match the promoter domain theory or less closely match the promoter domain theory. Only the positive examples were altered. For example, one domain was created with 30% fewer matches to the domain theory than the original promoter domain as follows: Each feature value in a given example was examined to see if it matched part of the theory. If so, with a 30% probability, it was randomly reassigned a new value from the set of possible values for that feature. The end result is a set of examples with 30% fewer matches to the domain theory than the original example set3 . For our experiment new domains such as this were created with 10%, 30%, 60%, and 90% fewer matches.\nFor some features, multiple values may match the theory because di erent disjuncts of the theory specify di erent values for a single feature. For example, referring back to Figure 4, feature p-12 matches two of the minus 10 rules if it has the value a and another two rules if it has the value t. 
So a single feature might accidentally match one part of a theory when in fact the example as a whole more closely matches another part of the theory. For cases such as these, true matches were separated from accidental matches by examining which part of the theory most clearly matched the example as a whole and expecting a match from that part of the theory.
New domains that more closely matched the theory were created in a similar manner. For example, a domain was created with 30% fewer mismatches to the domain theory than the original promoter domain as follows: Each feature value in a given example was examined to see if it matched its corresponding part of the theory. If not, with a 30% probability, it was reassigned a value that matched the theory. The end result is a set of examples in which 30% of the mismatches with the domain theory are eliminated. For our experiment new domains such as this were created with 30%, 60%, and 90% fewer mismatches.
Ten different example sets were created for each level of closeness to the domain theory: 10%, 30%, 60%, and 90% fewer matches, and 30%, 60%, and 90% fewer mismatches. In total, forty example sets were created which matched the original theory less closely than the original example set, and thirty example sets were created which matched the original theory more closely than the original example set. Each of these example sets was tested using a leave-one-out methodology using C4.5 and the Tgci algorithm. The results are summarized in Figure 21. The x-axis is a measure of theory proximity: the closeness of an example set to the domain theory. \"0\" on the x-axis indicates no change in the original promoter examples. \"100\" on the x-axis means that each positive example exactly matches the domain theory. \"-100\" on the x-axis means that any match of a feature value of a positive example to the domain theory is totally by chance.[4] Each datapoint in Figure 21 is the result of averaging the accuracies of the ten example sets for each level of theory proximity (except for the point at zero, which is the accuracy of the exact original promoter examples). One notable portion of Figure 21 is the section between 0 and 60 on the x-axis. Domains in this region have a greater than trivial level of mismatch with the domain theory but not more than moderate mismatch. This is the region of Tgci's best performance. On these domains, Tgci achieves high accuracy while a standard learner, C4.5, using the original feature set gives mediocre performance. A second region to examine is between -60 and 0 on the x-axis, where the level of mismatch ranges from moderate to extreme. In this region Tgci's performance falls off, but its improvement over the original feature set remains high, as shown in Figure 22, which plots the improvement of Tgci over C4.5. The final two regions to notice are greater than 60 and less than -60 on the x-axis. As the level of mismatch between theory and examples becomes trivially small (x-axis greater than 60), C4.5 is able to pick out the theory's patterns, leading to high accuracy that approaches Tgci's. As the level of mismatch becomes extreme (x-axis less than -60), the theory gives little help in problem-solving, resulting in similarly poor accuracy for both methods. In summary, as shown in Figure 22, for variants of the promoter problem there is a wide range of theory proximity (centered around the real promoter problem) for which theory-guided constructive induction yields sizable improvement over standard learners." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b31" ], "table_ref": [], "text": "Our goal in this paper has not been just to present another new system, but rather to study the two qualities flexible representation and flexible structure. These capabilities are intended as a frame of reference for analyzing theory-guided systems. These two principles provide guidelines for purposeful design. Once we had distilled the essence of systems such as Miro, Kbann, and Neither-MofN, theory-guided constructive induction was a natural synthesis of their strengths. Our experiments have demonstrated that even a simple application of the two principles can effectively integrate theory knowledge with training examples. Yet there is much room for improvement; the two principles could be quantified and made more precise, and the implementations that proceed from them should be explored and refined.
Quantifying representational flexibility is one step. Section 4 gave three degrees of flexibility: one measured the exact match to a theory, one counted the number of matching conditions, and one allowed for a weighted sum of the matching conditions. The amount of flexibility should be quantified, and finer-grained degrees of flexibility should be explored. The accuracy in assorted domains should be evaluated as a function of representational flexibility.
Finer-grained structural flexibility would be advantageous. We have presented systems that make small, incremental modifications in a theory as lacking structural flexibility. Yet theory-guided constructive induction falls at the other extreme, perhaps allowing excessive structural flexibility. Fortunately, existing induction tools are capable of fashioning simple yet highly predictive theory structures when the problem features are suitably high-level. Nevertheless, approaches should be explored that take advantage of the structure of the initial theory without being unduly restricted by it.
The strength discussed in Section 6.5 should be given further attention. Although the promoter domain gives a very small example of synthesizing competing theories, this should be explored in a domain in which entire competing, inconsistent theories are available, such as synthesizing the knowledge given by multiple experts. The point was made in Section 6.4 that Tgci can use theory fragments to evaluate the contribution of different parts of a theory. This should also be explored further.
In an exploration of bias in standard induction, Utgoff (1986) refers to biases as ranging from weak to strong and from incorrect to correct. A strong bias restricts the concepts that can be represented more than a weak bias, thus providing more guidance in learning. But as a bias becomes stronger, it may also become incorrect by ruling out useful concept descriptions. A similar situation arises in theory revision: a theory representation language that is inappropriately rigid may impose a strong, incorrect bias on revision. A language that allows adaptability along too many dimensions may provide too weak a bias. A Grendel-like toolbox would allow a theory to be translated into a range of representations with varying dimensions of adaptability. Utgoff advocates starting with a strong, possibly incorrect bias and shifting to an appropriately weak and correct bias. Similarly, a theory could be translated into successively more adaptable representations until an appropriate bias is found.
We have implemented only a single tool; many open problems remain along this line of research.
The converse relationship of theory revision and constructive induction warrants further examination: theory revision uses data to improve a theory; constructive induction can use theory to improve data to facilitate learning. Since the long-term goal of machine learning is to use data, inference, and theory to improve any and all of them, we believe that a consideration of these related methods can be beneficial, particularly because each research area has some strengths that the other lacks.
An analysis of landmark theory revision and theory-guided learning systems has led to the two principles of flexible representation and flexible structure. Because theory-guided constructive induction was based upon these high-level principles, it is simple yet achieves good accuracy. These principles provide guidelines for future work, yet as discussed above, the principles themselves are imprecise and call for further exploration." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank Geoff Towell, Kevin Thompson, Ray Mooney, and Jeff Mahoney for their assistance in getting the datapoints for Kbann, Labyrinth_K, and Either. We would also like to thank Paul Baffes for making the Neither program available and for advice on setting the program's parameters. We thank the anonymous reviewers for their constructive criticism of an earlier draft of this paper. We gratefully acknowledge the support of this work by a DoD Graduate Fellowship and NSF grant IRI-92-04473." } ]
[ { "authors": "P Baffes; R Mooney", "journal": "", "ref_id": "b2", "title": "Symbolic revision of theories with M-of-N rules", "year": "1993" },
{ "authors": "E Bloedorn; R Michalski; J Wnek", "journal": "", "ref_id": "b3", "title": "Multistrategy constructive induction: AQ17-MCI", "year": "1993" },
{ "authors": "P Clark; S Matwin", "journal": "", "ref_id": "b4", "title": "Using qualitative models to guide inductive learning", "year": "1993" },
{ "authors": "W Cohen", "journal": "", "ref_id": "b5", "title": "Compiling prior knowledge into an explicit bias", "year": "1992" },
{ "authors": "M W Craven; J W Shavlik", "journal": "", "ref_id": "b6", "title": "Investigating the value of a good input representation", "year": "1995" },
{ "authors": "G Drastal; S Raatz", "journal": "", "ref_id": "b7", "title": "Empirical results on learning in an abstraction space", "year": "1989" },
{ "authors": "S Džeroski; N Lavrač", "journal": "", "ref_id": "b8", "title": "Learning relations from noisy examples: An empirical comparison of LINUS and FOIL", "year": "1991" },
{ "authors": "R Feldman; A Serge; M Koppel", "journal": "", "ref_id": "b9", "title": "Incremental refinement of approximate domain theories", "year": "1991" },
{ "authors": "N Flann; T Dietterich", "journal": "Machine Learning", "ref_id": "b10", "title": "A study of explanation-based methods for inductive learning", "year": "1989" },
{ "authors": "L M Fu; B G Buchanan", "journal": "", "ref_id": "b11", "title": "Learning intermediate concepts in constructing a hierarchical knowledge base", "year": "1985" },
{ "authors": "C Harley; R Reynolds; M Noordewier", "journal": "", "ref_id": "b12", "title": "Creators of original promoter dataset", "year": "1990" },
{ "authors": "H Hirsh; M Noordewier", "journal": "", "ref_id": "b13", "title": "Using background knowledge to improve inductive learning of DNA sequences", "year": "1994" },
{ "authors": "C J Matheus; L A Rendell", "journal": "", "ref_id": "b14", "title": "Constructive induction on decision trees", "year": "1989" },
{ "authors": "R S Michalski", "journal": "Artificial Intelligence", "ref_id": "b15", "title": "A theory and methodology of inductive learning", "year": "1983" },
{ "authors": "T Mitchell", "journal": "", "ref_id": "b16", "title": "Version spaces: A candidate elimination approach to rule learning", "year": "1977" },
{ "authors": "R J Mooney", "journal": "Machine Learning", "ref_id": "b17", "title": "Induction over the unexplained: Using overly-general domain theories to aid concept learning", "year": "1993" },
{ "authors": "R J Mooney; J W Shavlik; G G Towell; A Gove", "journal": "", "ref_id": "b18", "title": "An experimental comparison of symbolic and connectionist learning algorithms", "year": "1989" },
{ "authors": "P Murphy; M Pazzani", "journal": "", "ref_id": "b19", "title": "ID2-of-3: Constructive induction of M-of-N concepts for discriminators in decision trees", "year": "1991" },
{ "authors": "M Noordewier; J Shavlik; G Towell", "journal": "", "ref_id": "b20", "title": "Donors of original primate splice-junction dataset", "year": "1992" },
{ "authors": "D Ourston; R Mooney", "journal": "", "ref_id": "b21", "title": "Changing the rules: A comprehensive approach to theory refinement", "year": "1990" },
{ "authors": "G Pagallo; D Haussler", "journal": "Machine Learning", "ref_id": "b22", "title": "Boolean feature discovery in empirical learning", "year": "1990" },
{ "authors": "M Pazzani; D Kibler", "journal": "Machine Learning", "ref_id": "b23", "title": "The utility of knowledge in inductive learning", "year": "1992" },
{ "authors": "J R Quinlan", "journal": "Morgan Kaufmann", "ref_id": "b24", "title": "C4.5: Programs for Machine Learning", "year": "1993" },
{ "authors": "H Ragavan; L Rendell", "journal": "", "ref_id": "b25", "title": "Lookahead feature construction for learning hard concepts", "year": "1993" },
{ "authors": "D E Rumelhart; G E Hinton; J L McClelland", "journal": "MIT Press", "ref_id": "b26", "title": "A general framework for parallel distributed processing", "year": "1986" },
{ "authors": "J C Schlimmer", "journal": "", "ref_id": "b27", "title": "Learning and representation change", "year": "1987" },
{ "authors": "K Thompson; P Langley; W Iba", "journal": "", "ref_id": "b28", "title": "Using background knowledge in concept formation", "year": "1991" },
{ "authors": "G Towell; J Shavlik", "journal": "Artificial Intelligence", "ref_id": "b29", "title": "Knowledge-based artificial neural networks", "year": "1994" },
{ "authors": "G Towell; J Shavlik; M Noordewier", "journal": "", "ref_id": "b30", "title": "Refinement of approximately correct domain theories by knowledge-based neural networks", "year": "1990" },
{ "authors": "P E Utgoff", "journal": "Morgan Kaufmann", "ref_id": "b31", "title": "Shift of bias for inductive concept learning", "year": "1986" },
{ "authors": "J Wogulis", "journal": "", "ref_id": "b32", "title": "Revising relational domain theories", "year": "1991" } ]
[ { "formula_coordinates": [ 20, 90, 124.32, 47.76, 14.4 ], "formula_id": "formula_0", "formula_text": ": : : p-38=g," } ]
Rerepresenting and Restructuring Domain Theories: A Constructive Induction Approach
training examples with a coarse domain theory to produce a more accurate theory. There are two challenges that theory revision and other theory-guided systems face. First, a representation language appropriate for the initial theory may be inappropriate for an improved theory. While the original representation may concisely express the initial theory, a more accurate theory forced to use that same representation may be bulky, cumbersome, and difficult to reach. Second, a theory structure suitable for a coarse domain theory may be insufficient for a fine-tuned theory. Systems that produce only small, local changes to a theory have limited value for accomplishing complex structural alterations that may be required. Consequently, advanced theory-guided learning systems require flexible representation and flexible structure. An analysis of various theory revision systems and theory-guided learning systems reveals specific strengths and weaknesses in terms of these two desired properties. Designed to capture the underlying qualities of each system, a new system uses theory-guided constructive induction. Experiments in three domains show improvement over previous theory-guided systems. This leads to a study of the behavior, limitations, and potential of theory-guided constructive induction. Flexible Representation. A theory-guided system should utilize the knowledge contained in the initial domain theory without having to adhere closely to the initial theory's representation language. Flexible Structure. A theory-guided system should not be unnecessarily restricted by the structure of the initial domain theory.
Steven K Donoho; Larry A Rendell
[ { "figure_caption": "Figure 2 :2Figure2: More exible structural modi cation. The revised theory has taken many substructures from the initial theory and adapted and recombined them for its use, but the structure of the revised theory is not restricted by the structure of the initial theory.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: An instance in the promoter domain consists of a sequence of 57 nucleotides labeled from p-50 to p+7. Each nucleotide can take on the values A,G,C, or T representing adenine, guanine, cytosine, and thymine.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "shows an example of this. A DNA segment is shown from position p-38 through position p-30. The minus 35 rules from the theory are also shown, and four new features (feat minus35 A through feat minus35 D) have been constructed for that DNA segment, one for each minus 35 rule. The new fea- tures feat minus35 A and feat minus35 D both have the value 1 because the DNA fragment matches the rst and fourth minus 35 rules. Likewise, feat minus35 B and feat minus35 C both have the value 0 because the DNA fragment does not match the second and third rules. Furthermore, since the four minus 35 rules are joined by disjunction, a new feature, feat minus35 all, is created for the group that would have the value 1 because at least one of the minus 35 rules matches.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The contact portion of the theory. There are four possibilities for both the minus 35 and minus 10 portions of the theory. A \\*\" matches any nucleotide.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "1 Figure 6 :Figure 7 :167Figure 6: An example of feature construction in a Miro-like system. The constructed features for the rst and fourth rules in the minus 35 group are true (value = 1) because the DNA segment matches these rules. 
The constructed feature for the entire group, feat minus35 all, is true because the four minus 35 rules are joined by disjunction.", "figure_data": "", "figure_id": "fig_5", "figure_label": "167", "figure_type": "figure" }, { "figure_caption": "Figure 8 :Figure 9 :89Figure 8: A revised theory produce by Either.", "figure_data": "", "figure_id": "fig_7", "figure_label": "89", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: An example theory in the form of an AND/OR tree that might be used by the interpreter to generate constructed features.", "figure_data": "", "figure_id": "fig_8", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Learning curves for theory-guided constructive induction and other systems in the promoter domain.", "figure_data": "", "figure_id": "fig_9", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: Learning curves for Tgci and other systems in the primate splice-junction domain.", "figure_data": "", "figure_id": "fig_10", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 :18Figure 18: Learning curves for Tgci and other systems in the gene identi cation domain.", "figure_data": "", "figure_id": "fig_11", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19 :19Figure 19: Learning curves for theory-guided constructive induction with only fragments of the promoter domain theory. The minus 35 portion of the theory, the minus 10 portion of the theory, and the conformation portion of the theory were used separately in feature construction. Curves are also given for the full theory and for C4.5 alone for comparison.", "figure_data": "", "figure_id": "fig_12", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Figure 20 :20Figure 20: Theory-guided constructive induction with Lfc and C4.5 as the underlying learning system. Theory-guided constructive induction can use any inductive learner as its underlying learning component. Therefore, more sophisticated underlying induction programs can further improve accuracy.", "figure_data": "", "figure_id": "fig_13", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "Figure 21 :21Figure 21: Seven altered promoter domains were created, three that more closely matched the theory than the original domains and four that less closely matched. A 100 on the x-axis indicates a domain in which the positive examples match the domain theory 100%. A negative 100 indicates a domain in which any match of the positive examples to the domain theory is purely chance. The accuracy of C4.5 and Tgci are plotted for di erent levels of proximity to the domain theory.", "figure_data": "", "figure_id": "fig_14", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "Figure 22 :22Figure22: The di erence in error between C4.5 and Tgci for di erent levels of proximity of the example set to the domain theory.", "figure_data": "", "figure_id": "fig_15", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": ": A set of training examples E train , a set of testing examples E test , and a domain theory with root node R. . For each example E i 2 E test , call Tgci1(R,E i ). which returns P i = (F i ; F i ). E test new = fF i g. 3. Call C4.5 with training examples E train new and testing examples E test new . 
Return decision tree and accuracy on E test new .", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": ", the set of 106 examples was randomly divided into a training set of size 80 and a testing set of size 26. A learning", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "report a slightly better learning curve for Neither-MofN than we obtained, but after communication with Paul Ba es, we think the di erence is caused by the random selection of data partitions. , but in a pairwise comparison with Neither-MofN, the improvement of Tgci was signi cant at the 0.0005 level of con dence for training sets of size 48 and larger.", "figure_data": "Donoho & Rendell100% of100% of100% of100% offirstsecondthirdfourthconf.conf.conf.conf.rulerulerulerule100% of100% of100% of100% of100% of100% of100% of100% offirstsecondthirdfourthfirstsecondthirdfourthminus_35minus_35minus_35minus_35minus_10minus_10minus_10minus_10rulerulerulerulerulerulerulerule>20% ofsecondminus_35rule>33% of>33% of>33% of>33% offirstsecondthirdfourthminus_10minus_10minus_10minus_10rulerulerulerule", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" } ]
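The Tgci procedure outlined in the table caption above recurses over the AND/OR domain theory; its helper Tgci1 surfaces, garbled, among the first two reference entries: a directly testable condition scores 1 or -1, an OR node takes the maximum of its children's scores, and an AND node their average. The following minimal Python sketch reconstructs that recursion under our own, hypothetical naming; the per-node feature-list bookkeeping that Tgci1 also returns is omitted for brevity.

```python
# A sketch of the Tgci1 recursion; combination rules (OR -> max, AND -> mean)
# follow the fragments quoted above. All identifiers are illustrative.

def tgci1(rule, example):
    """Return a graded truth value in [-1, 1] for `rule` on `example`.

    A rule is either a directly testable condition (a callable) or a pair
    ("OR" | "AND", [children]).
    """
    if callable(rule):                       # directly testable condition
        return 1.0 if rule(example) else -1.0
    op, children = rule
    values = [tgci1(child, example) for child in children]
    if op == "OR":                           # OR node: best-matching child
        return max(values)
    return sum(values) / len(values)         # AND node: mean over children

# Example: a tiny AND/OR theory over dict-valued examples.
theory = ("AND", [lambda e: e["p-12"] == "t",
                  ("OR", [lambda e: e["p-36"] == "t",
                          lambda e: e["p-35"] == "t"])])
print(tgci1(theory, {"p-12": "t", "p-36": "a", "p-35": "t"}))  # -> 1.0
```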
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b19", "b16", "b17", "b1", "b12", "b7", "b11", "b21", "b6", "b10", "b5", "b8", "b25", "b18", "b23", "b24", "b14", "b3", "b13", "b25", "b2", "b20", "b9" ], "table_ref": [], "text": "Improving search e ciency is one of the main goals of arti cial intelligence. E ciency is particularly important for researchers interested in constraint satisfaction problems (csps), also called constraint networks. In order to achieve this aim, various approaches have been tried, such as ltering techniques (mainly arc and path consistencies) (Montanari, 1974;Mackworth, 1977;Mohr & Henderson, 1986;Bessi ere, 1994) or improvements to the backtrack process (Haralick & Elliot, 1980;Dechter & Pearl, 1988;Ginsberg, 1993;Prosser, 1993). Other work concerned the characterization of classes of polynomial problems, based on the size of their domains (Dechter, 1992) or on the structure of the constraint network (Freuder, 1978), leading to the presentation of decomposition methods such as the cyclecutset (Dechter, 1990) or the tree clustering (Dechter & Pearl, 1989) methods. A more recent approach consists of taking into account semantic properties of the constraints (as opposed to structural or topological properties of the network) to achieve arc consistency e ciently for speci c classes of constraints (Van Hentenryck, Deville, & Teng, 1992;Mohr & Masini, 1988), or to characterize some tractable classes of problems (van Beek, 1992;van Beek & Dechter, 1994;Kirousis, 1993;Cooper, Cohen, & Jeavons, 1994). We present these results further in this paper.\nSome frequently encountered constraints are functional constraints, for instance in peptide synthesis (Janssen, J egou, Nouguier, Vilarem, & Castro, 1990) or in Constraint Logic c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.\nProgramming (Van Hentenryck et al., 1992). Many constraint-based systems and languages use functional constraints, including thinglab (Borning, 1981), garnet (Myers, Giuse, & Vander Zanden, 1992) and kaleidoscope (Freeman-Benson & Borning, 1992).\nIn this paper, we study functional constraints: the relations they represent are partial functions (notice those are not the same as bijective constraints: see sections 2.1 and 2.2). More precisely, we study the properties given to them by a new local consistency, pivot consistency. We notably show that, under some conditions, this local consistency guarantees the existence of solutions and makes them easier to nd: rst, we characterize a subset of the variables, called a root set and denoted R; we then show that if the network is pivot consistent, then any consistent instantiation of R can be linearly extended to a solution, provided the instantiation order is R-compatible (i.e., it possesses some topological properties associated with R and the functional constraints).\nWe then introduce a new method for solving any functional csp by decomposing it into two subproblems: rst, nding a consistent instantiation of the root set, and second, extending this partial instantiation to a solution (in a backtrack-free manner).\nThe main di culty is clearly to nd a consistent instantiation of the root set, which remains exponential in its size. 
The method we present here is therefore all the more efficient when the root set is small.
Another aspect we wish to point out is that, unlike most of the work dealing with constraints that possess some specific properties, this method not only applies when all the constraints possess the given property (or properties), but also when only some of them do.
This paper is organized as follows: section 2 first introduces the basic definitions and notations regarding CSPs which will be used in the following. We then define functional constraints and discuss previous work and its relations with this paper. The last part of the section presents notions associated with functional constraints, which are mainly found in graph theory. We then introduce pivot consistency and an O(n²d²) algorithm for achieving it in section 3, before presenting in section 4 the properties supplied to a CSP by this consistency. We finally propose a method based on those properties for solving CSPs composed of functional constraints (with or without additional non-functional constraints)." }, { "figure_ref": [ "fig_1" ], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "Definition 2.1 A binary CSP P is defined by (X, D, C, R), where X = {X1, X2, ..., Xi, ..., Xn} is the set of its n variables. D = {D1, ..., Di, ..., Dn} is the set of its n domains, where each Di = {ai, bi, ...} is the set of the possible values for variable Xi; the size of the largest domain of D is called d.
C is the set of its e constraints {C1, ..., Ck, ..., Ce}; for the sake of simplicity, a constraint Ck = {Xi, Xj} involving variables Xi and Xj is denoted Cij.
R is the set of the e relations associated with the constraints, where the relation Rij is the subset of the Cartesian product Di × Dj specifying the pairs of mutually compatible values.
A graph G = (X, C) called the constraint graph can be associated with P, where the vertices and the edges (or arcs) respectively represent the variables and the constraints. Another graph, representing the relations of the CSP, can also be defined: the consistency graph.
Figure 1 represents the constraint and consistency graphs of the following example:
A travel agency offers five destinations to its customers: Paris, London, Washington, New-York and Madrid, and employs three guides to accompany them: Alice, Bob and Chris. These guides provide the information below:
Alice wishes to go to Paris or New-York, but she only speaks French.
Bob speaks English and French, and does not want to go to Madrid (but accepts any other city).
Chris only speaks Spanish, and refuses to go to any city but New-York.
The manager would like to find all the possibilities left to him, i.e., which guide he can send to which city, with which currency, knowing that each guide must of course speak the language of the country he (or she) visits, and the currency must be that of the country.
This problem can be encoded by the CSP Pex, composed of five variables and five constraints:
The set of the five variables is X = {GUIDES, CITIES, COUNTRIES, CURRENCIES, LANGUAGES}. The set of the five constraints is:
C = {C_GUIDES-CITIES, C_CITIES-COUNTRIES, C_GUIDES-LANGUAGES, C_COUNTRIES-CURRENCIES, C_COUNTRIES-LANGUAGES}
where these constraints respectively represent the cities the guides wish (or accept) to visit, the countries the cities belong to, the languages spoken by the guides, the currencies used in the countries, and the official languages of the countries.
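For concreteness, Pex can be written down directly in Python. The sketch below is illustrative only: the domain values follow the statements above, the relations shown are excerpts of the full tables depicted in Figure 1, and all identifiers are ours rather than the paper's.

```python
# A minimal encoding of the travel-agency CSP P_ex (illustrative values).

domains = {
    "GUIDES":     {"Alice", "Bob", "Chris"},
    "CITIES":     {"Paris", "London", "Washington", "New-York", "Madrid"},
    "COUNTRIES":  {"France", "GB", "USA", "Spain"},
    "CURRENCIES": {"FrF", "Pound", "Dollar", "Peseta"},
    "LANGUAGES":  {"French", "English", "Spanish"},
}

# Each relation R_ij is the set of mutually compatible value pairs.
relations = {
    ("GUIDES", "CITIES"): {("Alice", "Paris"), ("Alice", "New-York"),
                           ("Bob", "Paris"), ("Bob", "London"),
                           ("Bob", "Washington"), ("Bob", "New-York"),
                           ("Chris", "New-York")},
    ("CITIES", "COUNTRIES"): {("Paris", "France"), ("London", "GB"),
                              ("Washington", "USA"), ("New-York", "USA"),
                              ("Madrid", "Spain")},
    ("GUIDES", "LANGUAGES"): {("Alice", "French"), ("Bob", "French"),
                              ("Bob", "English"), ("Chris", "Spanish")},
    # C_COUNTRIES-CURRENCIES and C_COUNTRIES-LANGUAGES are built the same way.
}

def satisfies(assignment):
    """Check a (partial) assignment {variable: value} against the relations."""
    for (xi, xj), rij in relations.items():
        if xi in assignment and xj in assignment:
            if (assignment[xi], assignment[xj]) not in rij:
                return False
    return True

print(satisfies({"GUIDES": "Chris", "CITIES": "New-York"}))  # -> True
```

Note that C_CITIES-COUNTRIES is functional (each city determines its country), which is exactly the kind of constraint the rest of the paper exploits.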
Notice there is no explicit constraint between the currencies and the guides: this constraint will be induced by the cities (and thus the countries) visited by the guides on the one hand, and by the currencies of the countries on the other hand.
The five associated relations are depicted in the consistency graph of Figure 1. If there is no constraint between variables Xi and Xj (Cij ∉ C), all pairs of values are allowed: Rij = Di × Dj. Rij is then called a universal relation.
The notion of support is related to the pairs that belong (or not) to the relations: we say that a value ak ∈ Dk is a support of (or supports) a value ai ∈ Di for the constraint Cik if and only if (ai, ak) ∈ Rik. Also, the value ak ∈ Dk is a support of the pair (ai, aj) ∈ Rij if and only if ak both supports ai for the constraint Cik and aj for the constraint Cjk. Y stands for any subset of X, and Yk represents the set of the first k variables of X, w.r.t. an order explained in the context. The default order will be X1, X2, ..., Xn-1, Xn. Also, the order for Pex will be GUIDES, CITIES, COUNTRIES, CURRENCIES, LANGUAGES.
Given Yt = {X1, ..., Xt} a subset of X, the assignment to each variable of Yt of a value from its domain defines an instantiation It = (a1, ..., at) of Yt; It is consistent if and only if it satisfies every constraint included in Yt, and a solution of P is a consistent instantiation of the whole set X. We now define functional constraints (Figure 2 presents two examples of such constraints), before discussing related work.
Definition 2.2 Given two variables Xi and Xk, we denote Xi → Xk iff for all ai in Di, there is at most one ak in Dk such that (ai, ak) ∈ Rik. We then denote ak = fik(ai) (or ak = f_i→k(ai)) if this value exists, ε otherwise.
A constraint Cik is functional iff Xi → Xk or Xk → Xi. In the following, we will call Xi the origin and Xk the target of the functional constraint Xi → Xk.
From this definition, we can deduce a partition of the constraint set of the CSP: on the one hand Cf, the set of the functional constraints, on the other hand Co, the others (the non-functional constraints). A CSP is said to be functional if it contains some functional constraints (Cf ≠ ∅), and strictly functional if it only contains such constraints: C = Cf (Co = ∅)." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b18", "b25", "b23", "b24", "b14" ], "table_ref": [], "text": "In the field of constraint satisfaction problems, one of the most important objectives is to reduce computation time. Many studies have been carried out to achieve this (see for instance the references in the introduction of this paper). They permit to reduce, sometimes drastically, the time required to solve CSPs. However, an obstacle limits their efficiency: they are general methods, and, as such, they do not take into account the specificities of the problems to be solved, notably w.r.t. the semantics of the constraints. This deficiency was overcome by various (and, for most of them, recent) results. They exhibit some classes of constraints which possess particular properties. These results fall into two classes: on the one hand, those proposing algorithms specifically adapted to some classes of constraints, more efficient than general algorithms, and on the other hand, those characterizing classes of \"tractable\" problems. In this section as in the rest of this paper, we will suppose that, unless otherwise stated, the networks are binary.
The first class contains those techniques which make use of the semantics of the constraints to propose specific processings, namely arc consistency filterings.
Mohr and Masini (1988) propose to modify AC-4 to improve its efficiency on equality, inequality and disequality constraints. They reduce the time complexity of AC-4 from O(ed²) in the general case to O(ed) for those specific constraints.
AC-5, a new arc-consistency algorithm, is presented by Van Hentenryck et al. (1992). Actually, AC-5 is not an algorithm: it should rather be considered as a model with several possible specializations, according to the classes of constraints to process (e.g., functional 1, anti-functional and monotonic constraints; notice that these constraints respectively correspond to Mohr and Masini's equality, disequality and inequality constraints). AC-5 achieves arc consistency in O(ed) for those constraints, which is consistent with the former result. 2
The second class contains those techniques which identify constraints for which some local consistency is sufficient to guarantee global consistency, or propose polynomial algorithms to solve some classes of problems.
van Beek (1992) shows that path consistency induces global consistency if the constraints are row convex. 3 van Beek and Dechter (1994) show that if the constraints are m-tight 4, then strong (m + 2)-consistency induces global consistency. 5
A tractable class of binary constraints is identified by Cooper et al. (1994): 0/1/all constraints. If for any pair of variables Xi, Xj, every value of Di is supported by either 0, 1 or all values of Dj, then there exists an algorithm, called zoa (Zero/One/All), which either finds a solution or reports that no solution exists, in polynomial time (namely O(ed(n + d))). It is also shown that any other class of constraints closed under permutation and restriction of the domains is NP-complete. Kirousis (1993) presents a polynomial algorithm to solve CSPs with implicational constraints (which, for binary constraints, are the same as 0/1/all constraints 6).
1. Which should in fact be called bijective.
2. It is also shown that, in the field of constraint logic programming over finite domains, for a restricted class of basic constraints, node- and arc-consistency algorithms provide a decision procedure whose complexity is the same as the complexity of AC-5, namely O(ed).
3. A binary constraint Cij is row convex iff for any ai ∈ Di, all its supports are consecutive in Dj, provided there exists an (implicit) ordering of the domains. 5. They furthermore show how this result applies to non-binary networks: for any CSP whose constraints have arity r or less and are at most m-tight (an r-ary constraint is m-tight iff all its binary projections are m-tight), if the CSP is strongly ((m + 1)(r - 1) + 1)-consistent, then it is globally consistent. 6. For the sake of simplicity we will not present non-binary implicational constraints (for which an O(d²r²ne) algorithm is presented), nor a parallel algorithm, also studied in his paper." }, { "figure_ref": [], "heading": "A binary constraint", "publication_ref": [ "b23", "b25", "b4" ], "table_ref": [], "text": "A first comment about those results: a class of binary constraints is a subclass of all those presented above, namely bijective constraints: they are 1-tight, 0/1(/all), and row convex. Bijective constraints are known to be tractable. Notice that the constraints called functional by van Beek (1992) and by Van Hentenryck et al. (1992) are actually bijective constraints 7. On the other hand, a functional constraint Xi →
Xj is neither necessarily 1-tight, nor 0/1/all (see for example the constraint w = v² in Figure 2), but Rij remains row convex (even though Rji may not be). But achieving path consistency in a network whose constraints are all functional may create non-functional constraints (or transform some functional constraints into non-functional ones). van Beek's result then no longer applies (since there are non row convex constraints). Functional constraints, unlike bijective constraints, are intractable in the general case.
The second comment is more important: all of those results assume that all of the constraints of the network belong to a given class (e.g., m-tight, 0/1/all, row convex, etc.), and only apply in this case. The difference, and, in our opinion, an interesting improvement, is that the results we present in this paper also apply when only some of the constraints are functional.
In a previous paper (David, 1993), it is shown that in a path consistent CSP, any consistent instantiation can be extended to a solution if, for every non-instantiated variable, there exists a sequence of functional or bijective constraints from an (any) instantiated variable to this non-instantiated variable. A subset of the variables is thus defined, such that any of its consistent instantiations can be linearly extended to a solution. Actually, path consistency is not necessary: a weaker local consistency, pivot consistency, is sufficient, as we now propose to show." }, { "figure_ref": [ "fig_6" ], "heading": "Functional Constraints and Directed Graphs", "publication_ref": [ "b0" ], "table_ref": [], "text": "If a CSP P is functional, its constraint graph can be divided into two subsets. Cf contains the (directed) arcs representing the functional constraints, where an arc is directed from Xi to Xk iff Xi → Xk, and Co contains the edges (i.e., undirected), for non-functional constraints. We denote G = (X, Cf ∪ Co) this graph, and Gf = (X, Cf) its directed subgraph.
Before going further, we need to recall some graph theory notions: the descendant of a vertex (variable), and the root of a directed graph:
Definition 2.3 (Berge, 1970) A vertex Xk is a descendant of Xi in a directed graph (X, Cf) iff there exists in Cf a path (Xi, Xj1, ..., Xjq, ..., Xjp, Xk) such that Xi → Xj1, ..., Xjq → Xjq+1, ..., Xjp → Xk, for all q ∈ 1..p-1.
A vertex Xr is a root of a directed graph (X, Cf) iff any other vertex is a descendant of Xr. By extension, we define a root set of a directed graph. It is easy to see that a root is a root set which contains a single vertex.
Definition 2.4 A subset R of X is a root set of a directed graph (X, Cf) iff any vertex of X - R is a descendant of an element of R. R is called minimal iff there does not exist a root set R′ such that R′ ⊂ R. R is called minimum iff there does not exist a root set R′ such that |R′| < |R|.
Figure 4: R-compatible and non-R-compatible orderings of Pex (the second ordering is not R-compatible: it is not prefixed by R).
Figure 3 illustrates the notions of directed subgraph and of root set (here, a minimum root set) applied to our example Pex. Notice that the arc is only directed from COUNTRIES to CURRENCIES, and not from CURRENCIES to COUNTRIES, though we have both COUNTRIES → CURRENCIES and CURRENCIES → COUNTRIES.
This is only for the sake of simplicity: this information will not be useful in the following.
With a root set R, we associate R-compatibility, a property that can be verified by an ordering of X:
Definition 2.5 An ordering X1, X2, ..., Xn of X is called R-compatible iff:
1. ∀i ≤ |R|, Xi ∈ R, and
2. ∀k > |R|, ∃j < k such that Xj → Xk.
In other words, the first variables in this ordering are those which belong to R (we then say that this ordering is prefixed by R), and any other variable Xk has at least one ancestor Xj in the ordering (Xj is ordered before Xk) such that Xj → Xk (see Figure 4)." }, { "figure_ref": [], "heading": "Pivot Consistency", "publication_ref": [ "b7" ], "table_ref": [], "text": "First of all, why did we call it pivot consistency? Pivot consistency depends on the assignment ordering 8. At every step of the assignment, two elements have to be distinguished: firstly, the set of the formerly instantiated variables (the instantiated set), and secondly the next variable to be instantiated. One can consider pivot consistency as a property between each instantiated set and the next variable to be instantiated. For each of these sets, a constraint has a particular role to play: the consistency checks turn on it. This constraint is thus called the pivot of the set.
8. Similarly to directional path consistency and adaptive consistency (Dechter & Pearl, 1988)." }, { "figure_ref": [], "heading": "Presentation", "publication_ref": [], "table_ref": [], "text": "Further in this section, we introduce the definition of pivot consistency in three steps: consistency between three variables, then pivot of a subset of X, and finally pivot consistency of the CSP.
Definition 3.1 Let Xi, Xj, Xk ∈ X such that Cik, Cjk ∈ C. Cik and Cjk are Xk-compatible iff any tuple of Rij has at least one support in Dk for Cik and Cjk. Formally, ∀(ai, aj) ∈ Rij, ∃ak ∈ Dk s.t. (ai, ak) ∈ Rik and (aj, ak) ∈ Rjk.
Four comments arise from this definition:
1. The existence of a constraint between Xi and Xj is not compulsory: the relation Rij may be universal.
2. Cik and Cjk are not necessarily different. If they are not, Cik must be Xk-compatible with itself; in that case we assimilate the pair (ai, ai) of the relation Rii to the value ai of the domain Di.
3. This definition may be seen as a local version of strong 3-consistency (2- and 3-consistency). 2-consistency: this is due to the remark above: any value in Di must have a support in Dk. 3-consistency: any consistent instantiation of {Xi, Xj} may be extended to a third variable, here Xk.
4. Knowing that path consistency is equivalent to 3-consistency, we can deduce that path consistency can be rewritten in terms of Xk-compatibility: a CSP is path consistent iff for all Xi, Xj, Xk pairwise distinct, {Xi, Xk} and {Xj, Xk} are both Xk-compatible.
We already said that some constraints called the pivots have a particular part to play; we now define them:
Definition 3.2 Let Y ⊂ X and Cik ∈ C s.t. Xi → Xk, Xk ∈ X - Y and Xi ∈ Y. Xi → Xk is a pivot of Y iff ∀Xj ∈ Y s.t. Cjk ∈ C, Cik and Cjk are Xk-compatible.
In other words, given any proper subset Y of X, a functional constraint Xi →
Xk \"coming out\" of Y (Xi ∈ Y and Xk ∈ X - Y) is a pivot of Y if and only if, for any consistent instantiation (ai, aj) of Xi and of any other variable Xj in Y, there exists at least one value ak in Dk such that (ai, aj) can be extended to a consistent instantiation (ai, aj, ak) of {Xi, Xj, Xk}. As for functional constraints, we call Xi the origin and Xk the target of the pivot Xi → Xk.
Figure 5: A pivot consistent CSP. Domains: X1 = {a1, b1}, X2 = {a2, b2, c2}, X3 = {a3, b3}; R = {X1, X2}; the pivot of {X1, X2} is X2 → X3. This CSP is pivot consistent: the three pairs of R12, that is (a1, a2), (b1, a2) and (b1, c2), all have a support in D3 (respectively a3, a3 and b3). On the other hand, it is neither path consistent, since (b2, b3) has no support in D1, nor arc consistent, since b2 has no support in D1.
Figure 6: A non pivot consistent CSP. Same domains and root set; the pivot candidate of {X1, X2} is X2 → X3. This CSP is not pivot consistent: (a1, b2) has no support in D3.
Now that the pivot of a subset Y of X has been defined, we can introduce the notion of pivot consistency of a CSP:
Definition 3.3 Given a CSP P = (X, D, C, R), a root set R of (X, C) and an R-compatible assignment ordering X1, X2, ..., Xn, we say P is pivot consistent w.r.t. this ordering iff ∀k > r = |R|, ∃h < k s.t. Xh → Xk is a pivot of Yk-1 = {X1, X2, ..., Xk-1}.
Informally, every variable which does not belong to R is the target of a pivot whose origin is before it in the assignment ordering. Figures 5 and 6 show a pivot consistent CSP and a non pivot consistent CSP.
Pivot consistency of a CSP relies on the existence of a set of functional constraints possessing particular properties: the pivots (Definition 3.3 above). Thus, a minimum set of functional constraints can be characterized, so that the CSP is pivot consistent. This is the purpose of the three conditions stated in Definition 3.4: condition 1 ensures there exists a pivot for each Yk-1, k > r: the network is therefore pivot consistent; conditions 2 and 3 leave unnecessary constraints out: only one pivot needs to \"target\" each Xk ∈ X - R, and no pivot is required inside R.
Definition 3.4 Given a CSP P = (X, D, C = Cf ∪ Co, R), a root set R, an R-compatible assignment ordering X1, X2, ..., Xn, and a set of functional constraints P ⊆ Cf, if the following three conditions are satisfied:
1. ∀Xk ∈ X - R, ∃Xh → Xk ∈ P s.t. h < k and Xh → Xk is a pivot of Yk-1 (any variable of X - R is the target of a pivot)
2. ∀Xk ∈ X - R, {Xh → Xk ∈ P and Xj → Xk ∈ P} ⇒ h = j (any variable of X - R is the target of at most one pivot)
3. ∀Xj ∈ R, there is no Xi → Xj ∈ P (no variable of R is the target of a pivot)
then P is called a pivot set of the CSP, and P is pivot consistent.
We have introduced a new local consistency, pivot consistency. However, as for any local consistency, a CSP generally does not satisfy it. The problem then has to be filtered in order to obtain its pivot consistent closure, which is presented in the next section. This filtering is achieved w.r.t. a given pivot set P: this is why we now propose the definition of pivot consistency with respect to a pivot set P:
Definition 3.5 Given a CSP P = (X, D, C, R), a root set R of (X, C), an R-compatible assignment ordering X1, X2, ..., Xn and a constraint set P ⊆ C, we say P is pivot consistent with respect to P and this ordering iff P is a pivot set of P.
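Definitions 3.1 and 3.2 translate almost literally into code. The sketch below reuses the dict-based encoding of the travel-agency example; it is an illustration under assumed conventions (missing constraints are treated as universal relations, and Rii is taken to be the diagonal, as comment 2 above prescribes), not the paper's implementation.

```python
def xk_compatible(r_ij, r_ik, r_jk, d_k):
    # Definition 3.1: every pair of R_ij needs a common support a_k in D_k.
    return all(any((ai, ak) in r_ik and (aj, ak) in r_jk for ak in d_k)
               for (ai, aj) in r_ij)

def is_pivot(xi, xk, y, relations, domains):
    # Definition 3.2: X_i -> X_k is a pivot of Y iff it is X_k-compatible
    # with every constraint C_jk whose other variable X_j lies in Y.
    def rel(a, b):
        if a == b:                                   # R_ii: the diagonal
            return {(v, v) for v in domains[a]}
        if (a, b) in relations:
            return relations[(a, b)]
        if (b, a) in relations:                      # stored the other way round
            return {(v, u) for (u, v) in relations[(b, a)]}
        return {(u, v) for u in domains[a] for v in domains[b]}  # universal
    return all(xk_compatible(rel(xi, xj), rel(xi, xk), rel(xj, xk), domains[xk])
               for xj in y
               if (xj, xk) in relations or (xk, xj) in relations)
```

On the CSP of Figure 5, for instance, is_pivot would confirm that X2 → X3 is a pivot of {X1, X2}.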
}, { "figure_ref": [], "heading": "Pivot Consistent Closure", "publication_ref": [], "table_ref": [], "text": "Let us assume a CSP P is not pivot consistent, and that we wish to make it so. We then obtain a new problem, say Pp, but we want this problem to meet some properties. The purpose of this section is to specify these properties. We first define the pivot consistent closure of a given CSP:
Definition 3.6 Pp = (X, Dp, Cp, Rp) is called the pivot consistent closure of the CSP P = (X, D, C, R) iff
1. Pp ≤ P (i.e., ∀i ∈ 1..n, Dp_i ⊆ Di; C ⊆ Cp; and ∀i, j ∈ 1..n, Rp_ij ⊆ Rij or Cij ∉ C), and
2. Pp is pivot consistent, and
3. Pp is maximal: there does not exist P′ pivot consistent such that Pp ≤ P′ ≤ P and Pp ≠ P′.
Before presenting two properties of the pivot consistent closure Pp of a CSP P, we introduce the following lemma, which we will use later in one of the proofs.
Lemma 3.1 Let P = (X, D, C, R) be a CSP. Let two CSPs P1 = (X, D1, C1, R1) ≤ P and P2 = (X, D2, C2, R2) ≤ P be pivot consistent w.r.t. a root set R, a pivot set P = {X_O(k) → Xk, ∀k > |R|} and an R-compatible assignment ordering. Let the CSP P3 = (X, D3 = D1 ⊔ D2, C3 = C1 ∩ C2, R3 = R1 ⊔ R2), with D3 = D1 ⊔ D2 = {D3_i = D1_i ∪ D2_i, ∀i ∈ 1..n} and R3 = R1 ⊔ R2 = {R3_ij = R1_ij ∪ R2_ij, ∀i, j ∈ 1..n s.t. C3_ij ∈ C3}. Then firstly P3 is pivot consistent w.r.t. these same sets R and P and this same R-compatible assignment ordering, and secondly P1, P2 ≤ P3 ≤ P.
Proof: We first show that P3 is pivot consistent. Let us denote X_Origin(k) → Xk (or, more briefly, X_O(k) → Xk) the pivot whose target is Xk in P1 and P2, for all k > |R|. Let us show that for all k > |R|, X_O(k) → Xk is the pivot of Yk-1 in P3. Let Xj ∈ Yk-1 such that Cjk ∈ C1 ∩ C2. For all (aj, aO(k)) ∈ R3_jO(k), either (aj, aO(k)) ∈ R1_jO(k) or (aj, aO(k)) ∈ R2_jO(k). If (aj, aO(k)) ∈ R1_jO(k): as P1 is pivot consistent, there exists ak in D1_k such that (aj, ak) ∈ R1_jk (therefore R3_jk) and (aO(k), ak) ∈ R1_O(k)k (therefore R3_O(k)k). The same holds if (aj, aO(k)) ∈ R2_jO(k). Cjk and X_O(k) → Xk are consequently Xk-compatible: X_O(k) → Xk is a pivot of Yk-1 in P3.
For all k > |R|, there exists X_O(k) → Xk pivot of Yk-1: P3 is pivot consistent.
Let us now show that P1, P2 ≤ P3 ≤ P:
∀i ∈ 1..n, D3_i = D1_i ∪ D2_i, so D1_i ⊆ D3_i and D2_i ⊆ D3_i. (1) Furthermore, we have D1_i ⊆ Di and D2_i ⊆ Di; consequently, D3_i ⊆ Di. (1′)
C3 = C1 ∩ C2, so C3 ⊆ C1 and C3 ⊆ C2. (2) Also, since C ⊆ C1 and C ⊆ C2, we have C ⊆ C3 = C1 ∩ C2. (2′)
∀i, j ∈ 1..n such that C3_ij ∈ C3, R3_ij = R1_ij ∪ R2_ij, so R1_ij ⊆ R3_ij and R2_ij ⊆ R3_ij. (3) ∀i, j ∈ 1..n such that Cij ∈ C, do we have R3_ij ⊆ Rij? Let Cij ∈ C. Then C3_ij ∈ C3 (C ⊆ C3). By construction of R3, we therefore have R3_ij = R1_ij ∪ R2_ij. Now R1_ij ⊆ Rij and R2_ij ⊆ Rij, which implies R3_ij ⊆ Rij. (3′)
We deduce from (1), (2) and (3) that P1 ≤ P3 and P2 ≤ P3, and P3 ≤ P from (1′), (2′) and (3′).
}, { "figure_ref": [], "heading": "□", "publication_ref": [], "table_ref": [], "text": "We can now present the properties we announced at the beginning of this section.
Property 3.2 There exists only one maximal Pp ≤ P pivot consistent, for a given root set R, a given pivot set and a given assignment ordering.
Proof: Suppose that Pp is not unique: then there exists P′ = (X, D′, C′, R′) ≤ P such that P′ is maximal and pivot consistent, and P′ ≠ Pp. Let us build the CSP P″ = (X, Dp ⊔ D′, Cp ∩ C′, Rp ⊔ R′). This CSP is pivot consistent and Pp ≤ P″ ≤ P (Lemma 3.1): Pp is not maximal, which contradicts the previous assumption." }, { "figure_ref": [], "heading": "□", "publication_ref": [], "table_ref": [], "text": "The second property of Pp is perhaps the most important:
Property 3.3 Both CSPs P and Pp are equivalent, in that they have the same set of solutions.
Proof: Let us denote S(P) and S(Pp) their respective sets of solutions. Obviously, S(Pp) ⊆ S(P). Let us now show that S(P) ⊆ S(Pp): suppose there exists In = (s1, ..., sn) a solution of P which is not a solution of Pp. Then ∃i, j ∈ 1..n such that (si, sj) ∉ Rp_ij. Now (si, sj) ∈ Rij (In is a solution of P): as Pp is maximal, that means that (si, sj) has no support for at least one constraint: ∃k s.t. ∀ak ∈ Dk, (si, ak) ∉ Rik or (sj, ak) ∉ Rjk. We therefore have (si, sk) ∉ Rik or (sj, sk) ∉ Rjk: In is not a solution of P, which is a contradiction; hence S(P) ⊆ S(Pp). □
We now know that there exists a unique CSP P′, called the pivot consistent closure of P, such that P′ ≤ P, P′ is maximal and P′ is pivot consistent. Moreover, this CSP has the same set of solutions as P, the problem it comes from. But, even though it is interesting to know that such a CSP exists, it seems reasonable to wish to compute it: that is the purpose of the next section, which proposes an algorithm achieving a pivot consistent filtering." }, { "figure_ref": [], "heading": "A Filtering Algorithm", "publication_ref": [], "table_ref": [], "text": "First, a notation: the set of the functional constraints Xh → Xk chosen to be the pivots of each Yk is called a set of pivot candidates, and denoted PC. After the filtering, PC = P, the pivot set. We will suppose in the course of this section that both the assignment ordering and PC are known; we explain how they are obtained in appendix A.2.
The algorithm is composed of several procedures. Their different levels represent the three steps followed to define pivot consistency. We present these procedures in an ascending way:
1. Make the constraints Cjk and Chk = Xh → Xk Xk-compatible.
2. Compute a pivot Xh → Xk for all Xk in X - R.
3. Achieve pivot consistency for the CSP.
Procedure Compatible(Xh, Xk, Xj) makes the constraints Xh → Xk and Cjk Xk-compatible: it removes from Rhj those tuples which do not have a common support in Dk for these two constraints. If necessary, it creates the constraint Chj between the variables Xh and Xj.
Procedure Compatible(Xh, Xk, Xj)
begin
  for all (ah, aj) ∈ Rhj do
    ak ← fhk(ah) 9
    if (ah, ak) ∉ Rhk or (aj, ak) ∉ Rjk then
      suppress (ah, aj) from Rhj
  end for
end
Procedure Pivot(Xh, Xk) makes all constraints Cjk containing Xk and such that Xj ∈ Yk-1 Xk-compatible with Xh → Xk, by successive calls to the subroutine Compatible(Xh, Xk, Xj). After its computation, Xh →
Xk is therefore a pivot of Yk-1.
Procedure Pivot(Xh, Xk)
begin
  for all Xj ∈ Yk-1 s.t. Cjk ∈ C do
    Compatible(Xh, Xk, Xj)
  end for
end
The CSP is pivot consistent iff any variable Xk of X - R is the target of a pivot of Yk-1. The main procedure therefore simply calls Pivot(Xh, Xk) once for each pivot candidate Xh → Xk of PC, following the assignment ordering.
9. By convention, if ah has no support in Dk for Chk, we denote fhk(ah) = ε, and, of course, (ah, ε) ∉ Rhk and (aj, ε) ∉ Rjk.
Correctness: after the computation of Pivot(Xh, Xk), Xh → Xk is a pivot of Yk-1.
Moreover, only the necessary suppressions are performed: according to the definition of Compatible, a pair is suppressed from a relation only if it has no support in the target of the current pivot candidate.
We consequently simply have to show that any suppression due to Pivot(Xh, Xk) has no influence on the pivots Xg → Xl for l > k. More precisely, if Xg → Xl is a pivot of Yl-1 before computing Pivot(Xh, Xk), then it will remain so.
For the sake of brevity, we will denote in the course of this proof Pv(k) the computation of the procedure Pivot(Xh, Xk). Let l be such that Xg → Xl is a pivot of Yl-1 before Pv(k). Suppose Xg → Xl is no longer a pivot of Yl-1 after Pv(k). Consequently, there exists j < l such that Cjl and Cgl are not Xl-compatible: ∃(aj, ag) ∈ Rjg s.t. (aj, al) ∉ Rjl or (ag, al) ∉ Rgl. Pv(k) has thus suppressed (aj, al) from Rjl or (ag, al) from Rgl. This is impossible, since by definition procedure Pivot(Xh, Xk) only modifies constraints Chi with h, i < k, so a fortiori h, i < l (k < l). So, Pv(k) does not influence the pivots Xg → Xl for l > k. This algorithm therefore computes the pivot consistent closure of a CSP." }, { "figure_ref": [], "heading": "□", "publication_ref": [], "table_ref": [], "text": "Notice that unlike arc or path consistencies, which only need knowledge about the CSP, the pivot consistent closure is in addition defined w.r.t. a root set, a pivot set and an assignment ordering: Figure 7 shows how the choice of the pivot(s) may influence the filtered problem." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Application to the Example", "publication_ref": [], "table_ref": [], "text": "In this section, we illustrate the filtering algorithm using the travel agency example. We first recall the initial problem and give the data necessary to the algorithm (root set R, set of pivot candidates PC and R-compatible ordering).
R = {GUIDES, CITIES}
PC = {CITIES → COUNTRIES, COUNTRIES → CURRENCIES, COUNTRIES → LANGUAGES}
An R-compatible ordering is: GUIDES, CITIES, COUNTRIES, CURRENCIES, LANGUAGES.
In order to achieve pivot consistency, we need to compute Pivot(COUNTRIES, LANGUAGES), Pivot(COUNTRIES, CURRENCIES) and Pivot(CITIES, COUNTRIES). We now detail these computations:
Computation of Pivot(COUNTRIES, LANGUAGES) (Figure 9): We perform Compatible(COUNTRIES, LANGUAGES, COUNTRIES), which does not modify anything, and Compatible(COUNTRIES, LANGUAGES, GUIDES), which creates the constraint {GUIDES, COUNTRIES}.
Computation of Pivot(COUNTRIES, CURRENCIES): We perform Compatible(COUNTRIES, CURRENCIES, COUNTRIES), which does not modify anything.
Computation of Pivot(CITIES, COUNTRIES) (Figure 10): We perform Compatible(CITIES, COUNTRIES, CITIES), which does not change anything, and Compatible(CITIES, COUNTRIES, GUIDES), which modifies the constraint {GUIDES, CITIES}.
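A runnable transcription of the filtering may help. The sketch below follows the pseudocode of Compatible and Pivot and adds the main loop over the pivot candidates; the encoding (relations as dicts of pair-sets, f[(h, k)] as the partial function of Xh → Xk, with None playing the role of ε) is our own assumption, not the paper's notation.

```python
def allowed(a, b, va, vb, relations):
    """Membership test in R_ab, whichever way the relation is stored;
    an absent constraint is a universal relation."""
    if (a, b) in relations: return (va, vb) in relations[(a, b)]
    if (b, a) in relations: return (vb, va) in relations[(b, a)]
    return True

def remove_pair(a, b, va, vb, relations, domains):
    """Suppress (va, vb) from R_ab, creating the constraint first if absent."""
    if (a, b) not in relations and (b, a) not in relations:
        relations[(a, b)] = {(u, v) for u in domains[a] for v in domains[b]}
    if (a, b) in relations: relations[(a, b)].discard((va, vb))
    else: relations[(b, a)].discard((vb, va))

def compatible(h, k, j, relations, domains, f):
    """Prune from R_hj every pair without a common support in D_k."""
    pairs = [(ah, aj) for ah in domains[h] for aj in domains[j]
             if allowed(h, j, ah, aj, relations)]
    for (ah, aj) in pairs:
        ak = f[(h, k)].get(ah)            # f_hk(ah); None plays the role of epsilon
        if ak is None or not (allowed(h, k, ah, ak, relations)
                              and allowed(j, k, aj, ak, relations)):
            remove_pair(h, j, ah, aj, relations, domains)

def pivot(h, k, order, relations, domains, f):
    """Make X_h -> X_k a pivot of Y_{k-1} (the variables preceding X_k).
    The self-call Compatible(X_h, X_k, X_h), which amounts to arc
    consistency on D_h, is omitted from this sketch for brevity."""
    for j in order[:order.index(k)]:
        if j != h and ((j, k) in relations or (k, j) in relations):
            compatible(h, k, j, relations, domains, f)

def pivot_consistency(order, pc, relations, domains, f):
    """Main loop: one call of Pivot per pivot candidate, in ordering order."""
    for (h, k) in sorted(pc, key=lambda hk: order.index(hk[1])):
        pivot(h, k, order, relations, domains, f)
```

Run on Pex with the PC above, this sketch should reproduce the two modifications just described: the creation of {GUIDES, COUNTRIES} and the pruning of {GUIDES, CITIES}.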
}, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Pivot Consistency vs Path Consistency", "publication_ref": [], "table_ref": [], "text": "We previously said that arc-and-path consistency is not necessary to give some properties to a functional CSP, and that pivot consistency is sufficient. We first show in this section that pivot consistency is a weakened form of path consistency, and in section 4 we will present some properties and a method for solving functional CSPs based upon them. Note that, since pivot consistency is directed, one could wish to compare it not only with \"full\" path consistency, but also with directional path consistency. We therefore propose to present relationships between pivot consistency and both path consistency and directional path consistency. We will thus see that pivot consistency is a restricted version of directional path consistency as well. First, let us recall a remark we made when defining Xk-compatibility:
A CSP is path consistent iff for all Xk in X and for all Xh, Xi in X, {Xh, Xk} and {Xi, Xk} are Xk-compatible.
A slight difference appears with directional path consistency:
A CSP is directional path consistent iff for all Xk in X and for all Xh, Xi in X such that h, i < k, {Xh, Xk} and {Xi, Xk} are Xk-compatible.
Pivot consistency does not need so many requirements. First, Xk-compatibility is only required for variables in X - R. Furthermore, for each of these Xk, not all of the possible pairs of constraints are filtered: as this consistency is directed, Xk-compatibility is only achieved for constraints whose first variable precedes Xk (the second one) in the assignment ordering. Finally, the filtering is only achieved w.r.t. the pivot candidate whose target is Xk (say X_hk → Xk). In other words:
A CSP is pivot consistent iff for all Xk in X - R and for all Xi in X such that i < k, {X_hk, Xk} and {Xi, Xk} are Xk-compatible, provided that X_hk → Xk ∈ PC.
To sum up, one can roughly say that for each Xk, path consistency needs to consider O(n²) triangles, directional path consistency O(k²) and pivot consistency O(k). The difference between path and pivot consistencies clearly appears on the travel agency example: only two constraints are altered. The first one ({GUIDES, COUNTRIES}) is created, and the second one ({GUIDES, CITIES}) is modified, whereas path consistency modified the five existing constraints and created the five other possible constraints, making the constraint graph complete (see Figure 11). However, this example does not highlight any difference between pivot consistency and directional path consistency. Figure 12 presents how pivot consistency and directional path consistency may differently modify the constraint graph of a CSP when processing a node Xk.
Time complexity is lower for pivot consistency than for path consistency or directional path consistency, as will now be shown." }, { "figure_ref": [], "heading": "Complexity", "publication_ref": [], "table_ref": [], "text": "Procedure Pivot is computed exactly once for each variable Xk in X - R; it then calls procedure Compatible at most k - 1 times (once for each of the k - 1 former variables).
Achieving Xk-compatibility between the two constraints C_{hk,k} and C_{i,k} corresponds to achieving path consistency for the relation R_{hk,i} w.r.t. variable Xk. This operation needs O(d³) in general. But since the constraint C_{hk,k} is functional, this complexity is now O(d²).
The overall complexity of the filtering is therefore O(Σ_{k=r+1..n} (k - 1)d²) = O((n² - r²)d²) (instead of O(n³d³) for path consistency or directional path consistency 10; n is the number of variables, d is the size of the domains and r is the size of the root set)." }, { "figure_ref": [], "heading": "From a Local Instantiation to a Solution", "publication_ref": [], "table_ref": [], "text": "The intrinsic characteristic of a local consistency is to ensure that a partial instantiation can be extended to a new variable. We first present the properties of pivot consistency, in particular the conditions under which a consistent partial instantiation may be extended to a solution. We then explain how to compute the data required by the filtering algorithm, and finally present a method for solving functional CSPs." }, { "figure_ref": [], "heading": "Properties", "publication_ref": [], "table_ref": [], "text": "We proceed in two stages: first, the addition of a new variable to the current instantiation, then the extension from the root set to a solution.
Property 4.1 Let P = (X, D, C, R) be a CSP, Ik-1 a consistent instantiation of Yk-1, and Xk a new variable to instantiate. If there exists Xh ∈ Yk-1 such that Xh → Xk is a pivot of Yk-1, then Ik-1 may be extended into Ik, a consistent instantiation of Yk.
Proof: Let us show that any constraint included in Yk is satisfied. Let Xh ∈ Yk-1 such that Xh → Xk is a pivot of Yk-1, and ah its value in Ik-1. Consequently, there exists ak = f_h→k(ah). Let us denote Ik = (a1, ..., ah, ..., ak-1, ak).
1. Any constraint satisfied by Ik-1 and which does not contain Xk is obviously also satisfied by Ik.
2. For all i < k s.t. Cik ∈ C: first, Xi ∈ Yk-1; let ai be its value in Ik-1. Second, Xh → Xk is a pivot of Yk-1. Cik and Xh → Xk are consequently Xk-compatible. We thus have (ai, ak) ∈ Rik; so, Cik is satisfied by Ik.
Every constraint included in Yk is satisfied by Ik. Hence Ik is a consistent instantiation. □
We now present the theorem at the center of the method we introduce further on.
Theorem 1 Let P = (X, D, C, R) be a CSP, R ⊆ X a root set, and an R-compatible assignment ordering such that P is pivot consistent w.r.t. them. If a consistent instantiation of R exists, then it can be extended to a backtrack-free solution.
Proof: The variables of X - R only need to be instantiated along the R-compatible ordering: as each of them is the target of a pivot (since P is pivot consistent), Property 4.1 is verified step by step from the instantiation of R to a solution. □
4.2 A Solving-by-Decomposing Method
In section 4.1 (Theorem 1), we saw that, given a consistent instantiation of R, and provided that the problem is pivot consistent w.r.t. an R-compatible assignment ordering and a pivot set, we can not only guarantee there is a solution to the whole problem, but also find it without any backtracking. We can therefore deduce a method from this property. We now introduce this method, decomposed into four phases, and for each of them, we present the time complexity and the result of its computation on the example.
Phase 1: Computation of a root set R, a pivot candidate set PC and an R-compatible assignment ordering. This phase can roughly be outlined as finding the sources of a graph where each strongly connected component is reduced to one node (computation of R), finding a set of directed arcs covering all nodes in X - R (PC) and then performing a topological sort (assignment ordering).
For more details, see appendices A.1 and A.2.
Complexity: O(e + n) to compute R (appendix A.1), and also O(e + n) for PC and the assignment ordering (appendix A.2). So, for the whole phase: O(e + n).
The root set is R = {GUIDES, CITIES}.
The pivot candidate set is PC = {CITIES → COUNTRIES, COUNTRIES → CURRENCIES, COUNTRIES → LANGUAGES}.
The R-compatible assignment ordering we chose is GUIDES, CITIES, COUNTRIES, CURRENCIES, LANGUAGES.
Phase 2: Pivot consistent filtering. Complexity: O((n² - r²)d²) (see section 3.6). The pivot consistent problem is computed in section 3.4.
Phases 3 and 4 then search for the consistent instantiations of the root subproblem and extend each of them, without backtracking, to a solution (Theorem 1). For Pex, the solutions obtained are:
(Alice, Paris, France, FrF, French)
(Bob, Paris, France, FrF, French)
(Bob, London, GB, £, English)
(Bob, Washington, USA, $, English)
(Bob, New-York, USA, $, English)
The overall time complexity of this method is therefore O((n² - r²)d² + e_R d^r). This method thus reduces the search for one solution (or all) of a functional CSP to the one induced by its root set R. As the time required obviously depends on the size of R, we immediately see how interesting it is to compute a minimum-sized root set. A direct consequence is that the smaller R is, the more efficient this method will be. It consequently seems worthwhile to know its size as soon as possible: this is done in phase 1.
A last remark before concluding: the reader probably noticed that the number of consistent instantiations of R is the same as the number of solutions of the whole problem. This is not a coincidence, as we see below:
Property 4.2 Let P be pivot consistent, and let R be its root set. The number of solutions of P equals the number of consistent instantiations of R: each solution is the extension of exactly one of those instantiations.
Proof: Let Ir = (a1, ..., ar) be a consistent instantiation of the root set R = {X1, ..., Xr}. According to Theorem 1, Ir can be extended to at least one solution of P. R is a root set of G; any Xk in X - R is therefore a descendant of a variable X_rk of R. There consequently exists a unique value ak ∈ Dk such that (a_rk, ak) ∈ R_{rk,k}, and, so, a single consistent instantiation of R ∪ {Xk} including Ir, therefore only one solution including Ir. Moreover, as two different instantiations of R obviously cannot be extended to the same solution, there consequently exists a one-to-one mapping between the set of the consistent instantiations of R and the set S of the solutions of P. □" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b15" ], "table_ref": [], "text": "Taking into account semantic properties of the constraints is a still recent approach to increasing the efficiency of finding solutions to CSPs. This paper belongs to this frame. We first introduced a new local consistency, pivot consistency, to deal with functional constraints for a lower cost than path consistency (we furthermore showed that pivot consistency is a weak form of path consistency). We then proposed an algorithm to achieve pivot consistent filtering. Later, we studied some properties that pivot consistency provides a functional CSP with, in particular conditions under which a consistent instantiation can be extended to a solution.
We were then led to present a decomposition method based on those properties, which decreases the complexity of solving functional CSPs.
An interesting point is that the search for solutions to the former problem is reduced to the search for solutions to the subproblem induced by a particular subset, the root set; this new problem can be solved by any method, including heuristics.
Furthermore, we must add that this method deals with problems in which not all the constraints have to be functional (as we saw in the example). This is in our opinion an interesting improvement: most previous work dealing with properties of specific classes of constraints assumes that all the constraints of the network possess a given property, and only applies in this case.
However, some problems remain. First, and this is a classical problem as far as preprocessings are concerned: to what extent is the application of the method we present here useful? A partial answer can be given to this question: we indeed saw that one of the advantages of this method is that the cost of the search is known early in the process; one can thus estimate whether the continuation of the process is worthwhile or not.
On the other hand, pivot consistency and the associated consistency properties can of course be generalized to non-binary CSPs; yet, some problems appear: first of all, finding a minimum root set becomes exponential (it is the same problem as finding a key of minimum size in the field of relational databases, Lucchesi & Osborn, 1978); moreover, as we have seen, some constraints may be induced by the filtering processing; if this is not a problem for binary CSPs (the new constraints are still binary), in n-ary CSPs, some constraints may induce new ones of greater arity, which may from step to step lead to an explosion of the CSP arity.
Finding and characterizing classes of problems having root sets of small size, as well as problems whose arity does not grow when achieving pivot consistency, are therefore in the continuity of the work we presented in this paper.
To conclude, taking into account the semantics of constraints seems to be an interesting research area, and, as such, we can think of extending this kind of study (characterization of properties and of processings specific to a class of constraints) to other classes of constraints." }, { "figure_ref": [ "fig_6" ], "heading": "Appendix A. Computation of Structural Data", "publication_ref": [], "table_ref": [], "text": "A.1 Computation of a Minimum Root Set R
The example used in this paper is not suited to the illustration of the computation we present here; we therefore use another one (Figures 13 to 15).
Computation of Gq, the reduced graph of Gf:
1: Compute T = {T1, ..., Tp}, the set of the strongly connected components of Gf.
2: Compute the graph Gq = (T, Cq), where each vertex ti of T is the reduction of a strongly connected component Ti, and there exists an arc from ti to tj if and only if an arc existed from a vertex of Ti to a vertex of Tj.
Computation of the sources of Gq, and of the root set of Gf:
3: Compute the set Sq = {ts1, ..., tsr} of the sources of Gq.
4: Choose for each source tsk of Sq any vertex of Tsk, which will represent it.
5: The set R obtained in that way is a root set of G_f.\n[Figure 13 shows the graph G_f and its strongly connected components: T_1 = {X_1}, T_2 = {X_2, X_3, X_4}, T_3 = {X_5}, T_4 = {X_6}, T_5 = {X_7, X_8, X_9}.]\nFigure 13: Step 1: the strongly connected components.\n[Figure 14 shows the reduced graph G_q and its sources: S_q = {t_1, t_2}.]\nFigure 14: Steps 2 and 3: the reduced graph and its sources.\nThe only possible vertex to represent t_1 is X_1. A vertex is chosen amongst T_2 = {X_2, X_3, X_4} to represent t_2, for instance X_2. We thus obtain R = {X_1, X_2}.\nFigure 15: Steps 4 and 5: a minimum root set.\nA way to prevent circuits is to mark every selected variable, and to select a new variable among the unmarked ones which is the target of a functional constraint whose origin is already marked. This functional constraint is then included into PC, and its target (the new variable) is then marked. The set PC we obtain this way is the set of the pivot candidates used by the pivot consistency algorithm (section 3.3). This set induces a partial order on X (we only have to add the transitivity constraints). In order to satisfy the conditions of Definition 3.3, the assignment ordering consequently has to be a linear extension of this partial order, prefixed by R. Actually, computing the partial order is not necessary: PC is sufficient to compute the linear extension. There are no conditions on the variables of R; the algorithm can thus be decomposed into two steps:\n1. Ordering of R: number from 1 to r = |R| the variables of R and mark them.\n2. Ordering of X \ R: repeat { choose a new unmarked variable X_k in X \ R s.t. there exists a marked X_h s.t. X_h → X_k; number and mark X_k } until all variables are marked.\nFigure 16 presents an algorithm computing both the set of pivot candidates PC and an R-compatible assignment ordering (a small illustrative sketch is given below). Let us repeat that any linear extension issued from PC is an R-compatible ordering.\nThe set Marked contains the variables that have already been treated. The set NextPossible contains the next variables that can be chosen, that is, the unmarked ones which are the target of a functional constraint whose origin is marked. The sets Origin[j] contain the origins of the functional constraints whose targets are described above (NextPossible). Num represents the number of the current variable in the assignment ordering.\nCorrectness of the set PC:\nCondition 1: a functional constraint X_h → X_k is added to PC each time a new variable X_k is chosen: we therefore need to prove that any variable of X \ R is selected exactly once, and that none is selected from R. The variables are selected from NextPossible, which only contains unmarked variables; since no marked variable may be unmarked back, and every selected variable is marked, every variable can be selected at most once.\nLet us now prove that every variable of X \ R is actually (at least once) selected: at any time, NextPossible is the set of all direct descendants of the marked variables that do not belong to R; at each outer loop of step 2, a new variable is extracted from NextPossible and marked; from the definition of R, all variables of X \ R will therefore be reached and inserted in NextPossible." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "I wish to thank the \"constraints\" team of the Computer Science Laboratory of Montpellier for their helpful advice, and the anonymous reviewers for their useful comments and suggestions. I also thank Anne Bataller and Pascal Jappy for polishing the English. They all helped improve this paper. This research was partially supported by the prc-ia research project \"csp flex\" of the cnrs."
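The sketch announced above mirrors the computations of appendices A.1 and A.2. It is ours, not the paper's code: the functional arcs of G_f are assumed to be given as a dict mapping each variable to the targets of its outgoing arcs, and, for brevity, the origin lookup in the ordering step is quadratic, whereas a faithful implementation would maintain the Origin[j] sets of Figure 16 to stay within O(e + n).

```python
def strongly_connected_components(vertices, arcs):
    """Tarjan's algorithm (Tarjan, 1972); O(e + n)."""
    index, low, on_stack, stack, sccs = {}, {}, set(), [], []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in arcs.get(v, ()):
            if w not in index:
                visit(w); low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in vertices:
        if v not in index:
            visit(v)
    return sccs

def minimum_root_set(vertices, arcs):
    """Steps 1-5 of appendix A.1: one representative per source of G_q."""
    sccs = strongly_connected_components(vertices, arcs)
    comp_of = {v: i for i, comp in enumerate(sccs) for v in comp}
    has_incoming = set()
    for v in vertices:
        for w in arcs.get(v, ()):
            if comp_of[v] != comp_of[w]:     # inter-component arc in G_q
                has_incoming.add(comp_of[w])
    return {next(iter(sccs[i]))              # any vertex may represent a source
            for i in range(len(sccs)) if i not in has_incoming}

def pivot_candidates_and_ordering(arcs, R):
    """Appendix A.2: build PC and an R-compatible assignment ordering."""
    ordering, marked = list(R), set(R)       # step 1: R comes first, any order
    PC = []
    frontier = {w for v in R for w in arcs.get(v, ()) if w not in marked}
    while frontier:                           # step 2: extend the ordering
        x = frontier.pop()
        origin = next(v for v in marked if x in arcs.get(v, ()))
        PC.append((origin, x))                # exactly one pivot per x in X\R
        marked.add(x); ordering.append(x)
        frontier |= {w for w in arcs.get(x, ()) if w not in marked}
    return PC, ordering
```

Any linear extension produced this way satisfies the two pivot conditions of appendix A.2, since a variable is only appended once one of its functional origins has already been numbered.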
}, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "R = {X_1, X_2, X_3}. The three possible pivots are X_1 → X_4, X_2 → X_4 and X_3 → X_4.\nThe assignment ordering we have chosen is X_1, X_2, X_3, X_4." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b22" ], "table_ref": [], "text": "[Figure: the example network over the variables X_1, X_2, X_3, X_4.] If we choose the pivot X_1 → X_4, the filtering does not suppress anything; the resulting problem P^1 is therefore the same as the initial one P.\nFigure 12: Processing pivot consistency and directional path consistency on node X_k (with g, h_k, i, j < k): the bold edges represent the constraints possibly created by pivot consistency (w.r.t. X_{h_k} → X_k), the dotted edges the extra ones possibly created by directional path consistency.\nA question arises: is the set R really a minimum root set? There are actually two questions: first, is R a root set? And if so, is this set minimum?\nWe first show that the set R we computed is a root set. Let us show that any vertex is a descendant of an element of R. Let X_i be a vertex of X \ R. There are two possibilities:\n1. An element of R belongs to the same strongly connected component as X_i. By definition, X_i is then a descendant of this element.\n2. No element of R belongs to the same strongly connected component as X_i. Let us denote by T_i the strongly connected component which contains X_i, and by t_i its reduction in the graph G_q. So, t_i is not a source of this graph; consequently, there exists a source t_{s_i} in G_q such that t_i is a descendant of t_{s_i}. Let X_{s_i} be the vertex of X we chose to represent t_{s_i} in R. According to the definition of the reduced graph, knowing that t_i is a descendant of t_{s_i}, any element of T_i is a descendant of any element of T_{s_i}, and therefore of X_{s_i}. X_i is consequently a descendant of an element of R. R is a root set. □\nWe now show that R is minimum (i.e., there does not exist any root set of smaller size). Let us denote r = |R|. Assume there exists a root set R' whose size is r' < r. Let SCC(R') be the set of the strongly connected components that contain all the elements of R', and let T(R') be the set of their reductions in G_q. By definition of R, the reduced graph G_q has r sources. There exists at least one source t_{s_k} in G_q which does not belong to T(R') (since |T(R')| ≤ r' < r). Consequently, there exists no path from an element of T(R') to t_{s_k}.\nLet us denote by SCC(k) the strongly connected component reduced to t_{s_k}. From the definition of the reduced graph, there exists no path from an element of R' to an element of SCC(k). Consequently, R' is not a root set. R is thus a minimum root set. □\nComplexity: it is the same as the complexity required for computing the strongly connected components, that is, O(e + n) (Tarjan, 1972), where e is the number of edges and n the number of vertices." }, { "figure_ref": [], "heading": "A.2 Choice of the Pivots and Computation of an R-compatible Order", "publication_ref": [], "table_ref": [], "text": "We now describe how to choose the pivot candidates and how to compute an R-compatible ordering. We first present the conditions the pivot candidates must satisfy; we then present an algorithm that computes both PC and an R-compatible ordering. The definition of pivot consistency implies two conditions on the pivots:\n1.
Any X_k in X \ R must be the target of one and only one pivot, and no variable of R is the target of a pivot.\n2. If X_h → X_k is a pivot, then X_h comes before X_k in the ordering.\nFor each X_k in X \ R, we have to choose a variable X_h in X so that X_h → X_k (X_h necessarily exists, since X_k does not belong to R). Moreover, PC must not contain any circuit, which would be in contradiction with condition 2. A way to prevent circuits is to mark every selected variable and to select new variables only among the unmarked ones, as described above.\nCondition 2: by construction of the sets Origin[j], a variable X_i is inserted in Origin[j] immediately after X_i has been numbered; this only applies to the unmarked X_j. So, X_j is necessarily numbered after X_i.\nIs this assignment ordering R-compatible? The first variables are obviously those taken from R (step 1). The other condition is a direct consequence of condition 2 above.\nComplexity: both steps have the same structure: choose a new variable X_i, mark and number it, and for each of its (unmarked) direct descendants, do some O(1) operations (during step 2, a constraint is added to PC, which also requires O(1)). The whole computation therefore requires O(n + Σ_i |Γ+(i)|) = O(n + e) time, where Γ+(i) is the set of the direct descendants of X_i." } ]
[ { "authors": "C Berge", "journal": "Dunod", "ref_id": "b0", "title": "Graphes et Hypergraphes", "year": "1970" }, { "authors": "C Bessière", "journal": "Artificial Intelligence", "ref_id": "b1", "title": "Arc-consistency and arc-consistency again", "year": "1994" }, { "authors": "A Borning", "journal": "ACM Transactions on Programming Languages and Systems", "ref_id": "b2", "title": "The Programming Language Aspects of ThingLab, A Constraint-Oriented Simulation Laboratory", "year": "1981" }, { "authors": "M C Cooper; D A Cohen; P G Jeavons", "journal": "Artificial Intelligence", "ref_id": "b3", "title": "Characterising tractable constraints", "year": "1994" }, { "authors": "P David", "journal": "Chambéry", "ref_id": "b4", "title": "When functional and bijective constraints make a CSP polynomial", "year": "1993" }, { "authors": "R Dechter", "journal": "Artificial Intelligence", "ref_id": "b5", "title": "Enhancement Schemes for Constraint Processing: Backjumping, Learning, and Cutset Decomposition", "year": "1990" }, { "authors": "R Dechter", "journal": "Artificial Intelligence", "ref_id": "b6", "title": "From local to global consistency", "year": "1992" }, { "authors": "R Dechter; J Pearl", "journal": "Artificial Intelligence", "ref_id": "b7", "title": "Network-based heuristics for constraint satisfaction problems", "year": "1988" }, { "authors": "R Dechter; J Pearl", "journal": "Artificial Intelligence", "ref_id": "b8", "title": "Tree clustering for Constraint Networks", "year": "1989" }, { "authors": "B N Freeman-Benson; A Borning", "journal": "", "ref_id": "b9", "title": "Integrating constraints with an object-oriented language", "year": "1992" }, { "authors": "E C Freuder", "journal": "Communications of the ACM", "ref_id": "b10", "title": "Synthesizing Constraint Expressions", "year": "1978" }, { "authors": "M L Ginsberg", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b11", "title": "Dynamic Backtracking", "year": "1993" }, { "authors": "R M Haralick; G L Elliot", "journal": "Artificial Intelligence", "ref_id": "b12", "title": "Increasing tree search efficiency for constraint satisfaction problems", "year": "1980" }, { "authors": "P Janssen; P Jégou; B Nouguier; M.-C Vilarem; B Castro", "journal": "New Journal of Chemistry", "ref_id": "b13", "title": "SYNTHIA: Assisted design of peptide synthesis plans", "year": "1990" }, { "authors": "L M Kirousis", "journal": "Artificial Intelligence", "ref_id": "b14", "title": "Fast parallel constraint satisfaction", "year": "1993" }, { "authors": "C L Lucchesi; S L Osborn", "journal": "J.
Computer and System Sciences", "ref_id": "b15", "title": "Candidate keys for relations", "year": "1978" }, { "authors": "A K Mackworth", "journal": "Artificial Intelligence", "ref_id": "b16", "title": "Consistency in Networks of Relations", "year": "1977" }, { "authors": "R Mohr; T C Henderson", "journal": "Artificial Intelligence", "ref_id": "b17", "title": "Arc and Path Consistency Revisited", "year": "1986" }, { "authors": "R Mohr; G Masini", "journal": "Springer-Verlag", "ref_id": "b18", "title": "Running efficiently arc consistency", "year": "1988" }, { "authors": "U Montanari", "journal": "Information Sciences", "ref_id": "b19", "title": "Networks of Constraints: fundamental properties and applications to picture processing", "year": "1974" }, { "authors": "B A Myers; D A Giuse; B Vander Zanden", "journal": "", "ref_id": "b20", "title": "Declarative programming in a prototype-instance system: object-oriented programming without writing methods", "year": "1992" }, { "authors": "P Prosser", "journal": "Computational Intelligence", "ref_id": "b21", "title": "Hybrid algorithms for the constraint satisfaction problem", "year": "1993" }, { "authors": "R Tarjan", "journal": "SIAM Journal on Computing", "ref_id": "b22", "title": "Depth-first search and linear graph algorithms", "year": "1972" }, { "authors": "P Van Beek", "journal": "", "ref_id": "b23", "title": "On the Minimality and Decomposability of Constraint Networks", "year": "1992" }, { "authors": "P Van Beek; R Dechter", "journal": "", "ref_id": "b24", "title": "Constraint Tightness versus Global Consistency", "year": "1994" }, { "authors": "P Van Hentenryck; Y Deville; C.-M Teng", "journal": "Artificial Intelligence", "ref_id": "b25", "title": "A generic arc-consistency algorithm and its specializations", "year": "1992" } ]
[ { "formula_coordinates": [ 3, 117.24, 502.62, 307.2, 31.2 ], "formula_id": "formula_0", "formula_text": "C = {C_GUIDES-CITIES, C_CITIES-COUNTRIES, C_GUIDES-LANGUAGES, C_COUNTRIES-CURRENCIES, C_COUNTRIES-LANGUAGES}" }, { "formula_coordinates": [ 9, 90, 584.52, 431.28, 31.02 ], "formula_id": "formula_1", "formula_text": "Definition 3.2 Let Y ⊆ X and C_ik ∈ C s.t. X_i → X_k, X_k ∈ X \ Y and X_i ∈ Y. X_i → X_k is a pivot of Y iff ∀X_j ∈ Y s.t. C_jk ∈ C, C_ik and C_jk are X_k-compatible." }, { "formula_coordinates": [ 10, 100.7, 90.54, 340.66, 132.27 ], "formula_id": "formula_2", "formula_text": "R = {X_1, X_2}; the pivot of {X_1, X_2} is X_2 → X_3" }, { "formula_coordinates": [ 10, 100.7, 229.08, 389.38, 156.14 ], "formula_id": "formula_3", "formula_text": "A pivot consistent csp: R = {X_1, X_2}; the pivot candidate of {X_1, X_2} is X_2 → X_3." }, { "formula_coordinates": [ 11, 102.84, 135.54, 352.44, 17.58 ], "formula_id": "formula_4", "formula_text": "1. ∀X_k ∈ X \ R, ∃X_h → X_k ∈ P s.t. h < k and X_h → X_k is pivot of Y_{k-1}" }, { "formula_coordinates": [ 11, 90, 649.2, 431.52, 30.78 ], "formula_id": "formula_5", "formula_text": "P^3 = (X, D^3 = D^1 ⊔ D^2, C^3 = C^1 ∩ C^2, R^3 = R^1 ⊔ R^2), with D^3 = D^1 ⊔ D^2 = {D^3_i = D^1_i ∪ D^2_i, ∀i ∈ 1..n} and R^3 = R^1 ⊔ R^2 = {R^3_ij = R^1_ij ∪ R^2_ij, ∀i, j ∈ 1..n s.t. C^3_ij ∈ C^3}" }, { "formula_coordinates": [ 12, 90, 160.92, 432.12, 49.08 ], "formula_id": "formula_6", "formula_text": "... ∈ R^1_{O(k)k} (therefore ∈ R^3_{O(k)k}). It is the same if (a_j, a_O(k)) ∈ R^2_{jO(k)}. C_jk and X_O(k) → X_k are consequently X_k-compatible: X_O(k) → X_k is pivot of Y_{k-1} in P^3." }, { "formula_coordinates": [ 12, 172.32, 247.44, 113.16, 17.04 ], "formula_id": "formula_7", "formula_text": "D^3_i = D^1_i ∪ D^2_i, so D^1_i ⊆ D^3_i" }, { "formula_coordinates": [ 12, 117.24, 287.76, 196.92, 17.22 ], "formula_id": "formula_8", "formula_text": "C^3 = C^1 ∩ C^2, so C^3 ⊆ C^1 and C^3 ⊆ C^2." }, { "formula_coordinates": [ 12, 117.24, 328.08, 404.76, 66.6 ], "formula_id": "formula_9", "formula_text": "∀i, j ∈ 1..n such that C^3_ij ∈ C^3, R^3_ij = R^1_ij ∪ R^2_ij, so R^1_ij ⊆ R^3_ij and R^2_ij ⊆ R^3_ij. (3) ∀i, j ∈ 1..n such that C_ij ∈ C, do we have R^3_ij ⊆ R_ij? Let C_ij ∈ C. Then C^3_ij ∈ C^3 (C ⊆ C^3). By construction of R^3, we therefore have R^3_ij = R^1_ij ∪ R^2_ij. Now R^1_ij ⊆ R_ij and R^2_ij ⊆ R_ij, which implies R^3_ij ⊆ R_ij. (3')" }, { "formula_coordinates": [ 14, 338.88, 225.42, 183.12, 17.58 ], "formula_id": "formula_10", "formula_text": "(X_h, X_k), X_h → X_k is a pivot of Y_{k-1}." }, { "formula_coordinates": [ 19, 221.76, 219.36, 172.08, 45 ], "formula_id": "formula_11", "formula_text": "O(Σ_{k=r+1..n} (k-1)d^2) = O((n^2 - r^2)d^2)" }, { "formula_coordinates": [ 20, 117.24, 589.8, 179.52, 16.96 ], "formula_id": "formula_12", "formula_text": "Complexity: O((n^2 - r^2)d^2) (see 3.6)" }, { "formula_coordinates": [ 23, 165.12, 114.18, 135.72, 177 ], "formula_id": "formula_13", "formula_text": "[Figure residue: vertex labels X_7, T_3, T_1, T_2, X_3, X_2, X_4, X_8, X_5, T_4 from the strongly connected components figure]" } ]
Using Pivot Consistency to Decompose and Solve Functional CSPs
Many studies have been carried out in order to increase the search efficiency of constraint satisfaction problems; among them, some make use of structural properties of the constraint network; others take into account semantic properties of the constraints, generally assuming that all the constraints possess the given property. In this paper, we propose a new decomposition method benefiting from both semantic properties of functional constraints (not bijective constraints) and structural properties of the network; furthermore, not all the constraints need to be functional. We show that under some conditions, the existence of solutions can be guaranteed. We first characterize a particular subset of the variables, which we name a root set. We then introduce pivot consistency, a new local consistency which is a weak form of path consistency and can be achieved in O(n^2 d^2) complexity (instead of O(n^3 d^3) for path consistency), and we present associated properties; in particular, we show that any consistent instantiation of the root set can be linearly extended to a solution, which leads to the presentation of the aforementioned new method for solving functional csps by decomposition.
Philippe David
[ { "figure_caption": "X = {GUIDES, CITIES, COUNTRIES, CURRENCIES, LANGUAGES}. Their five domains are: D_GUIDES = {Alice (A), Bob (B), Chris (C)}, D_CITIES = {Paris, London, Washington, New-York, Madrid}, D_COUNTRIES = {France, GB, USA, Spain}, D_CURRENCIES = {FrF, £, $, Pes}, D_LANGUAGES = {French, English, Spanish}", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1: Constraint graph and Consistency graph of P_ex", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "C_ij is m-tight iff every value a_i ∈ D_i is supported by either at most m values in D_j, or all values of D_j, and, conversely, every value a_j ∈ D_j either has at most m supports in D_i or is supported by all values of D_i.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3: Subgraph and root set of P_ex", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9: After Pivot (COUNTRIES, LANGUAGES)", "figure_data": "", "figure_id": "fig_4", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10: After Pivot (CITIES, COUNTRIES): end of the filtering", "figure_data": "", "figure_id": "fig_5", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Phase 3: Instantiation of R. Complexity: O(e_R d^r), where e_R is the number of constraints included in R. There are five consistent instantiations of R = {GUIDES, CITIES}: {(Alice, Paris), (Bob, Paris), (Bob, London), (Bob, Washington), (Bob, New-York)}. Phase 4: Instantiation of X \ R. Complexity: O(n - r): extending a consistent instantiation of R to a solution only needs linear time. The solutions of the problem are:", "figure_data": "", "figure_id": "fig_6", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 14: Steps 2 and 3: the reduced graph and its sources", "figure_data": "", "figure_id": "fig_7", "figure_label": "14", "figure_type": "figure" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b34" ], "table_ref": [], "text": "This article investigates multi-agent reinforcement learning in the context of a concrete problem of undisputed importance: load balancing. Real life provides us with many examples of emergent, uncoordinated load balancing: traffic on alternative highways tends to even out over time; members of the computer science department tend to use the most powerful of the networked workstations, but eventually find the lower load on other machines more inviting; and so on. We would like to understand the dynamics of such emergent load-balancing systems and apply the lesson to the design of multi-agent systems.\nWe define a formal yet concrete framework in which to study the issues, called a multi-agent multi-resource stochastic system, which involves a set of agents, a set of resources, probabilistically changing resource capacities, probabilistic assignment of new jobs to agents, and probabilistic job sizes. An agent must select a resource for each new job, and the efficiency with which the resource handles the job depends on the capacity of the resource over the lifetime of the job as well as the number of other jobs handled by the resource over that period of time. Our performance measure for the system aims at globally optimizing the resource usage in the system while ensuring fairness (that is, a system shouldn't be made efficient at the expense of any particular agent), two common criteria for load balancing. How should an agent choose an appropriate resource in order to optimize these measures? Here we make an important assumption, in the spirit of reinforcement learning (Sutton, 1992): the information available to the agent is only its prior experience. In particular, the agent does not necessarily know the past, present, or future capacities of the resources,¹ and is unaware of past, current, or future jobs submitted by the various agents, not even the relevant probability distributions. The goal of each agent is thus to adapt its resource-selection behavior to the behavior of the other agents as well as to the changing capacities of the resources and to the changing load, without explicitly knowing what they are.\nWe are interested in several basic questions:\nWhat are good resource-selection rules?\nHow does the fact that different agents may use different resource-selection rules affect the system behavior?\nCan communication among agents improve the system efficiency?\nIn the following sections we show illuminating answers to these questions. The contribution of this paper is therefore twofold. We apply multi-agent reinforcement learning to the domain of adaptive load balancing and we use this basic domain in order to demonstrate basic phenomena in multi-agent reinforcement learning.\nThe structure of this paper is as follows. In Section 2 we discuss our general setting. The objective of this section is to motivate our study and point to its impact. The formal framework is defined and discussed in Section 3. Section 4 completes the discussion of this framework by introducing the resource selection rule and its parameters, which function as the \"control knobs\" of the adaptive process. In Section 5 we present experimental results on adaptive behavior within our framework and show how various parameters affect the efficiency of adaptive behavior. The case of heterogeneous populations is investigated in Section 6, and the case of communicating populations is discussed in Section 7.
In Section 8 we discuss the impact of our results. In Section 9 we put our work in the perspective of related work. Finally, in Section 10 we conclude with a brief summary." }, { "figure_ref": [], "heading": "The General Setting", "publication_ref": [ "b30", "b26", "b5", "b14", "b27", "b41", "b9", "b23", "b28", "b42", "b21", "b23", "b2" ], "table_ref": [], "text": "This paper applies reinforcement learning to the domain of adaptive load balancing. However, before presenting the model we use and our detailed study, we need to clarify several points about our general setting. In particular, we need to explain the interpretation of reinforcement learning and the interpretation of load balancing we adopt.\nMuch work has been devoted in recent years to distributed and adaptive load balancing. One can find related work in the field of distributed computer systems (e.g., Pulidas, Towsley, & Stankovic, 1988; Mirchandaney & Stankovic, 1986; Billard & Pasquale, 1993; Glockner & Pasquale, 1993; Mirchandaney, Towsley, & Stankovic, 1989; Zhou, 1988; Eager, Lazowska, & Zahorjan, 1986), in organization theory and management science (e.g., Malone, 1987), and in distributed AI (e.g., Bond & Gasser, 1988). Although some motivations of the above-mentioned lines of research are similar, the settings discussed have some essential differences.\nWork on distributed computer systems adopts the view of a set of computers each of which controls certain resources, has an autonomous decision-making capability, and to which jobs arrive in a dynamic fashion. The decision-making agents of the different computers (also called nodes) try to share the system load and coordinate their activities by means of communication. The actual action to be performed, based on the information received from other computers, may be controlled in various ways. One of the ways adopted to control the related decisions is through learning automata (Narendra & Thathachar, 1989).\nIn the above-mentioned work each agent is associated with a set of resources, where both the agent and the related resources are associated with a node in the distributed system. Much work in management science and in distributed AI adopts a somewhat complementary view. In contrast to classical work in distributed operating systems, an agent is not associated with a set of resources that it controls. The agents are autonomous entities which negotiate among themselves (Zlotkin & Rosenschein, 1993; Kraus & Wilkenfeld, 1991) on the use of shared resources. Alternatively, the agents (called managers in this case) may negotiate the task to be executed with the processors which may execute it (Malone, 1987).\nThe model we adopt has the flavor of models used in distributed AI and organization theory. We assume a strict separation between agents and resources. Jobs arrive to agents, who make decisions about where to execute them. The resources are passive (i.e., do not make decisions). A typical example of such a setting in a computerized framework is a set of PCs, each of which is controlled by a different user and submits jobs to be executed on one of several workstations. The workstations are assumed to be independent of each other and shared among all the users. The above example is a real-life situation which motivated our study, and the terminology we adopt is taken from such a framework.
However, there are other real-life situations related to our model in areas different from classical distributed computer systems.\nA canonical problem related to our model is the following one (Arthur, 1994): an agent, embedded in a multi-agent system, has to select among a set of bars (or a set of restaurants). Each agent makes an autonomous decision, but the performance of the bar (and therefore of the agents that use it) is a function of its capacity and of the number of agents that use it. The decision of going to a bar is a stochastic process, but the decision of which bar to use is an autonomous decision of the respective agent. A similar situation arises when a product manager decides which processor to use in order to perform a particular task. The model we present in Section 3 is a general model where such situations can be investigated. In these situations a job arrives to an agent (rather than to a node consisting of particular resources) who decides upon the resource (e.g., restaurant) where his job should be executed; there is a priori no association between agents and resources.\nWe now discuss the way the agents behave in such a framework. The common theme among the above-mentioned lines of research is that load balancing is achieved by means of communication among active agents or active resources (through the related decision-making agents). In our study we adopt a complementary view. We consider agents who act in a purely local fashion, based on purely local information as described in the recent reinforcement learning literature. As we mentioned, learning automata were used in the field of distributed computer systems in order to perform adaptive load balancing. Nevertheless, the related learning procedures rely heavily on communication among agents (or among decision-making agents of autonomous computers). Our work applies recent work on reinforcement learning in AI where the information the agent gets is purely local. Hence, an agent will know how efficient the service in a restaurant has been only by choosing it as a place to eat. We don't assume that agents may be informed by other agents about the load in other restaurants or that the restaurants will announce their current load. This makes our work strictly different from other work applying reinforcement learning to adaptive load balancing.\nThe above features make our model and study both basic and general. Moreover, the above discussion raises the question of whether reinforcement learning (based on purely local information and feedback) can guarantee useful load balancing. The combination of the model we use and our perspective on reinforcement learning makes our contribution novel. Nevertheless, as we mentioned above (and as we discuss in Section 9) the model we use is not original to us and captures many known problems and situations in distributed load balancing. We apply reinforcement learning, as discussed in the recent AI literature, to that model and investigate the properties of the related process." }, { "figure_ref": [], "heading": "The Multi-Agent Multi-Resource Stochastic System", "publication_ref": [ "b26" ], "table_ref": [], "text": "In this section we define the concrete framework in which we study dynamic load balancing. The model we present captures adaptive load balancing in the general setting mentioned in Section 2. We restrict the discussion to discrete, synchronous systems (and thus the definition below will refer to ℕ, the natural numbers); similar definitions are possible in the continuous case.
We concentrate on the case where a job can be executed using any of the resources. Although somewhat restricting, this is a common practice in much work in distributed systems (Mirchandaney & Stankovic, 1986).\nDefinition 3.1 A multi-agent multi-resource stochastic system is a 6-tuple ⟨A, R, P, D, C, SR⟩, where A = {a_1, ..., a_N} is a set of agents, R = {r_1, ..., r_M} is a set of resources, P : A × ℕ → [0, 1] is a job submission function, D : A × ℕ → ℝ is a probabilistic job size function, C : R × ℕ → ℝ is a probabilistic capacity function, and SR is a resource-selection rule.\nThe intuitive interpretation of the system is as follows. Each of the resources has a certain capacity, which is a real number; this capacity changes over time, as determined by the function C. At each time point each agent is either idle or engaged. If it is idle, it may submit a new job with probability given by P. Each job has a certain size, which is also a real number. The size of any submitted job is determined by the function D. (We will use the unit token when referring to job sizes and resource capacities, but we do not mean that tokens come only in integer quantities.) For each new job the agent selects one of the resources. This choice is made according to the rule SR; since there is much to say about this rule, we discuss it separately in the next section.\nIn our model, any job may run on any resource. Furthermore, there is no limit on the number of jobs served simultaneously by a given resource (and thus no queuing occurs). However, the quality of the service provided by a resource at a given time deteriorates with the number of agents using it at that time. Specifically, at every time point the resource distributes its current capacity (i.e., its tokens) equally among the jobs being served by it. The size of each job is reduced by this amount and, if it drops to (or below) zero, the job is completed, the agent is notified of this, and becomes idle again. Thus, the execution time of a job j depends on its size, on the capacity over time of the resource processing it, and on the number of other agents using that resource during the execution of j.\nOur measure of the system's performance will be twofold: we aim to minimize time-per-token, averaged over all jobs, as well as to minimize the standard deviation of this random variable. Minimizing both quantities will ensure overall system efficiency as well as fairness. The question is which selection rules yield efficient behavior; so we turn next to the definition of these rules." }, { "figure_ref": [], "heading": "Adaptive Resource-Selection Rules", "publication_ref": [ "b36", "b6", "b28", "b17" ], "table_ref": [], "text": "The rule by which agents select a resource for a new job, the selection rule (SR), is the heart of our adaptive scheme and the topic of this section. Throughout this section and the following one we make an assumption of homogeneity. Namely, we assume that all the agents use the same SR. Notice that although the system is homogeneous, each agent will act based only on its local information. In Sections 6 and 7 we relax the homogeneity assumption and discuss heterogeneous and communicating populations.\nAs we have already emphasized, among all possible adaptive SRs we are interested in purely local SRs, ones that have access only to the experience of the particular agent.
In our setting this experience consists of the results of previous job submissions; for each job submitted by the agent and already completed, the agent knows the name r of the resource used, the point in time, t_start, the job started, the point in time, t_stop, the job was finished, and the job size S. Therefore, the input to the SR is, in principle, a list of elements of the form (r, t_start, t_stop, S). Notice that this type of input captures the general type of systems we are interested in. Basically, we wish to assume as little as possible about the information available to an agent in order to capture real loosely-coupled systems where more global information is unavailable.\nWhenever agent i selects a resource for its job execution, i may get its feedback after non-negligible time, and this feedback may depend on decisions made by other agents before and after agent i's decision. This forces the agent to rely on a non-trivial portion of its history and makes the problem much harder.\nThere are uncountably many possible adaptive SRs and our aim is not to gain exhaustive understanding of them. Rather, we have experimented with a family of intuitive and relatively simple SRs and have compared them with some non-adaptive ones. The motivation for choosing our particular family of SRs is partially due to observations made by cognitive psychologists on how people tend to behave in multi-agent stochastic and recurrent situations. In principle, our set of SRs captures the two most robust aspects of these observations: the \"law of effect\" (Thorndike, 1898) and the \"power law of practice\" (Blackburn, 1936). In our family of rules, which partially resembles the learning rules discussed in the learning automata literature (Narendra & Thathachar, 1989), and partially resembles the interval estimation algorithm (Kaelbling, 1993), agents do not maintain a complete history of their experience. Instead, each agent, A, condenses this history into a vector, called the efficiency estimator, and denoted by ee_A. The length of this vector is the number of resources, and the i-th entry in the vector represents the agent's evaluation of the current efficiency of resource i (specifically, ee_A(R) is a positive real number). This vector can be seen as the state of a learning automaton. In addition to ee_A, agent A keeps a vector jd_A, which stores the number of completed jobs which were submitted by agent A to each of the resources, since the beginning of time. Thus, within this family, we need only specify two elements:\n1. How agent A updates ee_A when a job is completed\n2. How agent A selects a resource for a new job, given ee_A and jd_A\nLoosely speaking, ee_A will be maintained as a weighted sum of the new feedback and the previous value of ee_A, and the resource selected will most probably be the one with the best ee_A entry, except that with low probability some other resource will be chosen. These two steps are explained more precisely in the following two subsections."
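Before filling in the two steps, the Definition 3.1 dynamics can be made concrete with a minimal sketch of our own (not the authors' simulator; the names Job, tick and on_done are ours). Each resource splits its current capacity equally among the jobs it is serving, and a completed job produces exactly the feedback tuple described above:

```python
class Job:
    def __init__(self, agent, resource, size, start):
        self.agent, self.resource = agent, resource
        self.remaining, self.size, self.start = size, size, start

def tick(t, jobs, capacity, on_done):
    """Advance the system by one time unit.

    capacity(r, t) gives the current capacity of resource r;
    on_done(agent, feedback) delivers (r, t_start, t_stop, S) to the agent.
    """
    serving = {}
    for job in jobs:
        serving.setdefault(job.resource, []).append(job)
    for r, js in serving.items():
        share = capacity(r, t) / len(js)      # equal token sharing
        for job in js:
            job.remaining -= share
    still_running = []
    for job in jobs:
        if job.remaining <= 0:                 # completed: agent becomes idle
            on_done(job.agent, (job.resource, job.start, t + 1, job.size))
        else:
            still_running.append(job)
    return still_running
```

An idle agent a would submit a new job at time t with probability P(a, t), with size drawn from D; the time-per-token of a completed job is then (t_stop - t_start)/S.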
}, { "figure_ref": [], "heading": "Updating the Efficiency Estimator", "publication_ref": [], "table_ref": [], "text": "We take the function updating ee_A to be\nee_A(R) := W·T + (1 - W)·ee_A(R)\nwhere T represents the time-per-token of the newly completed job and is computed from the feedback (R, t_start, t_stop, S) in the following way:²\nT = (t_stop - t_start)/S\nWe take W to be a real value in the interval [0, 1], whose actual value depends on jd_A(R). This means that we take a weighted average between the new feedback value and the old value of the efficiency estimator, where W determines the weights given to these pieces of information. The value of W is obtained from the following function:\nW = w + (1 - w)/jd_A(R)\nIn the above formula w is a real-valued constant. The term (1 - w)/jd_A(R) is a correcting factor, which has a major effect only when jd_A(R) is low; when jd_A(R) increases, reaching a value of several hundred, this term becomes negligible with respect to w." }, { "figure_ref": [], "heading": "Selecting the Resource", "publication_ref": [], "table_ref": [], "text": "The second ingredient of adaptive SRs in our family is a function pd_A selecting the resource for a new job based on ee_A and jd_A. This function is probabilistic. We first define the following function:\npd'_A(R) := ee_A(R)^(-n) if jd_A(R) > 0, and E[ee_A]^(-n) if jd_A(R) = 0\nwhere n is a positive real-valued parameter and E[ee_A] represents the average of the values of ee_A(R) over all resources satisfying jd_A(R) > 0. To turn this into a probability function, we define pd_A as the normalized version of pd'_A:\npd_A(R) := pd'_A(R)/μ, where μ = Σ_R pd'_A(R) is a normalization factor.³\nThe function pd_A clearly biases the selection towards resources that have performed well in the past. The strength of the bias depends on n; the larger the value of n, the stronger the bias. In extreme cases, where the value of n is very high (e.g., 20), the agent will always choose the resource with the best record. This strategy of \"always choosing the best\", although perhaps intuitively appealing, is in general not a good one; it does not allow the agent to exploit improvements in the capacity or load on other resources. We discuss this SR in the following subsection, and expand on the issue of exploration versus exploitation in Sections 6 and 7.\nTo summarize, we have defined a general setting in which to investigate emergent load balancing. In particular, we have defined a family of adaptive resource-selection rules, parameterized by a pair (w, n). These parameters serve as knobs with which we tune the system so as to optimize its performance. In the next section we turn to experimental results obtained with this system." }, { "figure_ref": [], "heading": "The Best Choice SR (BCSR)", "publication_ref": [ "b32", "b2" ], "table_ref": [], "text": "The Best Choice SR (BCSR) is a learning rule that assumes a high value of n, i.e., one which always chooses the best resource at a given point. We will assume w is fixed to a given value while discussing BCSR. In our previous work (Shoham & Tennenholtz, 1992, 1994), we showed that learning rules that strongly resemble BCSR are useful for several natural multi-agent learning settings. This suggests that we need to carefully study it in the case of adaptive load balancing.
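The two ingredients can be put together as follows. This is a sketch under our reading of the garbled selection formula: since ee_A stores estimated time-per-token, where smaller is better, we take the bias to be ee_A(R)^(-n), so that a large n concentrates the choice on the best-looking resource and degenerates into the BCSR just introduced.

```python
import random

class Agent:
    def __init__(self, resources, w, n):
        self.w, self.n = w, n
        self.ee = {r: 0.0 for r in resources}   # efficiency estimator
        self.jd = {r: 0 for r in resources}     # completed jobs per resource

    def job_done(self, feedback):
        r, t_start, t_stop, size = feedback
        T = (t_stop - t_start) / size            # time-per-token of this job
        self.jd[r] += 1
        W = self.w + (1 - self.w) / self.jd[r]   # correcting factor at low jd
        self.ee[r] = W * T + (1 - W) * self.ee[r]

    def select_resource(self):
        tried = [r for r in self.ee if self.jd[r] > 0]
        if not tried:                            # no experience yet
            return random.choice(list(self.ee))
        mean = sum(self.ee[r] for r in tried) / len(tried)
        weights = {r: (self.ee[r] if self.jd[r] > 0 else mean) ** (-self.n)
                   for r in self.ee}
        total = sum(weights.values())            # normalization factor mu
        x, acc = random.uniform(0, total), 0.0
        for r, wt in weights.items():
            acc += wt
            if x <= acc:
                return r
        return r
```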
As we will demonstrate, BCSR is not always useful in the load balancing setting.\nThe difference between BCSR and a learning rule where the value of n is low is that in the latter case the agent gives relatively high probability to the selection of a resource that didn't give the best results in the past. In that case the agent might be able to notice that the behavior of one of the resources has improved due to changes in the system.\nNote that the exploration of \"non-best\" resources is crucial when the dynamics of the system includes changes in the capacities of the resources. In such cases, the agent could not take advantage of possible increases in the capacity of resources if it uses the BCSR. One might wonder, however, whether in cases where the main dynamic changes of the system stem from load changes, relying on BCSR is sufficient. If the latter is true, we will be able to ignore the parameter n and to concentrate only on the BCSR, in systems where the capacity of resources is fixed. In order to clarify this point, we consider the following example.\nSuppose there are only two resources, R_1 and R_2, whose respective (fixed) capacities, c_R1 and c_R2, satisfy the equality c_R1 = 2·c_R2. Assume now that the load of the system varies between a certain low value and a certain high one.\nIf the system's load is low and the agents adopt BCSR, then the system will evolve in a way where almost all of the agents would be preferring R_1 to R_2. This is due to the fact that, in the case of low load, there are only few overlaps of jobs, hence R_1 is much more efficient. On the other hand, when the system's load is high, R_1 could be very busy and some of the agents would then prefer R_2, since the performance obtained using the less crowded resource R_2 could be better than the one obtained using the overly crowded resource R_1. In the extreme case of a very high load, we expect the agents to use R_2 one third of the time.\nAssume now that the load of the system starts from a low level, then increases to a high value, and then decreases to reach its original value. When the load increases, the agents, who were mostly using R_1, will start observing that R_1's performance is becoming worse and, therefore, following the BCSR they will start using R_2 too. Now, when the load decreases, the agents which were using R_2 will observe an improvement in the performance of R_2, but the value they have stored for R_1 (i.e., ee_A(R_1)) will still reflect the previous situation. Hence, the agents will keep on using R_2, ignoring the possibility of obtaining much better results if they moved back to R_1. In this situation, the randomized selection makes the agents able to use R_1 (with a certain probability) and therefore some of them may discover that the performance of R_1 is better than that of R_2 and switch back to R_1. This will improve the system's efficiency in a significant manner.\nThe above example shows that the BCSR is, in the general case, not a good choice. This is in general true when the value of n is too high.\nIn the above discussion we have assumed that the changes in the load are unforeseen. If we are able to predict the changes in the load, the agents can simply use the BCSR while the load is fixed and then use a low value of n during the changes. In our case, instead, without even realizing that the system has changed in some way, the agents would need to (and, as we will see, would be able to) adapt to dynamic changes as well as to each other."
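A back-of-the-envelope version of this example (our numbers; the text fixes only the ratio c_R1 = 2·c_R2): a job sharing a resource of capacity c with k - 1 others receives c/k tokens per tick, so its time-per-token is roughly k/c.

```python
# Rough steady-state time-per-token when k jobs share capacity c.
def time_per_token(k, c):
    return k / c if k else 0.0

c1, c2 = 20.0, 10.0              # hypothetical capacities with c1 = 2*c2

# High load: 30 concurrent jobs. The 2:1 split matches the capacities...
print(time_per_token(20, c1), time_per_token(10, c2))   # 1.0 1.0
# ...while piling everyone onto R1 is worse:
print(time_per_token(30, c1))                           # 1.5

# Low load: a single job. R1 is twice as fast, but an agent whose stale
# estimate still favours R2 never re-samples R1 under the BCSR:
print(time_per_token(1, c1), time_per_token(1, c2))     # 0.05 0.1
```

The last two lines capture the trap described above: once the load drops, only occasional randomized exploration (a finite n) lets agents rediscover that R_1 is now the better choice.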
}, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "In this section we compare SRs in our family to one another, as well as to some non-adaptive, benchmark selection rules.\nThe non-adaptive SRs we consider in this paper are those in which the agents partition themselves according to the capacities and the load of the system in a fixed, predetermined manner, and each agent always uses the same resource. Later in the paper, an SR of this kind is identified by a configuration vector, which specifies, for each resource, how many agents use it. When we test our adaptive SRs, we compare their performance against the non-adaptive SRs that perform best on the particular problem. This creates a highly competitive set of benchmarks for our adaptive SRs.\nIn addition, we compare our adaptive SRs to the load-querying SR, which is defined as follows: each agent, when it has a new job, asks all the resources how busy they are and always chooses the least crowded one." }, { "figure_ref": [], "heading": "An Experimental Setting", "publication_ref": [ "b2" ], "table_ref": [], "text": "We now introduce a particular experimental setting, in which many of the results described below were obtained. We present it in order to be concrete about the experiments; however, the qualitative results of our experiments were observed in a variety of other experimental settings.\nOne motivation of our particular setting stems from the PCs and workstations problem mentioned in Section 2. For example, part of our study is related to a set of computers located at a single site. These computers have a relatively high load, with some peak hours during the day, and a low load at night (i.e., the chance that a user of a PC submits a job is higher during the daytime of the week days than at night and on weekends). Another part of our study is related to a set of computers spread all around the world, where the load has a quite random structure (i.e., due to differences in time zones, users may use PCs at unpredictable hours).\nAnother motivation of our particular setting stems from the restaurant problem mentioned in Section 2 (for a discussion of the related \"bar problem\" see Arthur, 1994). For example, we can consider a set of snack bars located at an industrial park. These snack bars have relatively high loads, with some peak hours during the day, and a low load at night (i.e., the chance that an employee will choose to go to a snack bar is higher during the day because there are more employees present during the day). Conversely, we can assume a set of bars near an airport where the load has a quite random structure (i.e., the airport employees may like to use these snack bars at quite unpredictable hours).\nAlthough these are particular real situations, we would like to emphasize the general motivation of our study and the fact that the related phenomena have been observed in various different settings.\nWe take N, the number of agents, to be 100, and M, the number of resources, to be 5. In the first set of experiments we take the capacities of the resources to be fixed. In particular, we take them to be c_1 = 40, c_2 = 20, c_3 = 20, c_4 = 10, c_5 = 10. We assume that all agents have the same probability of submitting a new job.
We also assume that all agents have the same distribution over the size of jobs they submit; specifically, we assume it to be a uniform distribution over the integers in the range [50, 150].\nFor ease of exposition, we will assume that each point in time corresponds to a second, and we consequently count the time in minutes, hours, days, and weeks. The hour is our main point of reference; we assume, for simplicity, that the changes in the system (i.e., load change and capacity change) happen only at the beginning of a new hour. The probability of submitting a job at each second, which corresponds to the load of the system, can vary over time; this is the crucial factor to which the agents must adapt. Note that agents can submit jobs at any second, but the probability of such submission may change. In particular we concentrate on three different values of this quantity, called L_lo, L_hi and L_peak, and we assume that the system load switches between those values. The actual values of L_lo, L_hi and L_peak in the following quantitative results are 0.1%, 0.3% and 1%, which roughly correspond to each agent submitting 3.6, 10.8, and 36 jobs per hour, respectively. In the following, when measuring success, we will refer only to the average time-per-token.⁴ However, the adaptive SRs that give the best average time-per-token were also found to be fair." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Fixed Load", "publication_ref": [], "table_ref": [], "text": "We start with the case in which the load is fixed. This case is not the most interesting for adaptive behavior; however, a satisfactory SR should show reasonably efficient behavior in this basic case, in order to be useful when the system stabilizes.\nWe start by showing the behavior of non-adaptive benchmark SRs in the case of fixed load.⁵ Figure 1 shows those that give the best results, for each of the three loads.\nAs we can see, there is a big difference between the three loads mentioned above. When the load is particularly high, the agents should scatter across all the resources at a rate proportional to their capacities; when the load is low they should all use the best resource. Given the above, it is easy to see that an adaptive SR can be effective only if it enables moving quickly from one configuration to the other.\nIn a static setting such as this, we can expect the best non-adaptive SRs to perform better than adaptive ones, since the information gained by the exploration of the adaptive SRs can be built into the non-adaptive ones. The experimental results confirm this intuition, as shown in Figure 2 for L_hi. The figure shows the performance obtained by the population when the value of n varies between 2 and 10, and for three values of w: 0.1, 0.3, and 0.5. Note that for the values of (n, w) that are good choices in the dynamic cases (see later in the paper, values in the intervals [3, 5] and [0.1, 0.5], respectively), the deterioration in the performance of the adaptive SRs with respect to the non-adaptive ones is small. This is an encouraging result, since adaptive SRs are meant to be particularly suitable for dynamic systems. In the following subsections we see that indeed they are." }, { "figure_ref": [ "fig_3" ], "heading": "Changing Load", "publication_ref": [ "b37" ], "table_ref": [], "text": "We now begin to explore more dynamic settings. Here we consider the case in which the load on the system (that is, the probability of agents submitting a job at any time) changes over time.
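Before describing the dynamic load patterns, the fixed ingredients of this setup can be transcribed as follows (our transcription; the helper names are ours, and the configuration vector shown is the one reported for the random-load experiment below):

```python
import random

N_AGENTS = 100
CAPACITIES = [40, 20, 20, 10, 10]        # resources r1..r5
JOB_SIZE_RANGE = (50, 150)               # uniform over the integers
L_LO, L_HI, L_PEAK = 0.001, 0.003, 0.01  # per-second submission probabilities

def job_size():
    return random.randint(*JOB_SIZE_RANGE)

# The load-querying benchmark: ask every resource how many jobs it is
# currently serving and pick the least crowded one.
def load_querying(current_jobs_per_resource):
    return min(range(len(CAPACITIES)),
               key=lambda r: current_jobs_per_resource[r])

# A non-adaptive benchmark is just a fixed partition of the agents, e.g.
CONFIGURATION = [52, 22, 22, 2, 2]
```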
In this paper we present two dynamic settings: one in which the load changes according to a fixed pattern with only a few random perturbations, and another in which the load varies in a random fashion. Specifically, in the first case we fix the load to be L_hi for ten consecutive hours, for five days a week, with two randomly chosen hours in which it is L_peak, and to be L_lo for the rest of the week. In the second case, we fix the number of hours in a week for each load as in the first case, and we distribute them completely randomly over the week. The results obtained for the two cases are similar. Figure 3 shows the results obtained by the adaptive SRs in the case of random load. The best non-adaptive deterministic SR gives a time-per-token value of 69.201, obtained with the configuration (partition of agents) {52, 22, 22, 2, 2}; the adaptive SRs are superior. The load-querying SR instead gets a time-per-token value of 48.116, which is obviously better, but is not so far from the performance of the adaptive SRs.\nWe also observe the following phenomenon: given a fixed n (resp. a fixed w), the average time-per-token is non-monotonic in w (resp. in n). This phenomenon is strongly related to the issue of exploration versus exploitation mentioned before, and to phenomena observed in the study of Q-learning (Watkins, 1989).\nWe also notice how the two parameters n and w interact. In fact, for each value of w the minimum of the time-per-token value is obtained with a different value of n. More precisely, the higher w is, the lower n must be in order to obtain the best results. This means that, in order to obtain high performance, highly exploratory activity (low n) should be matched with giving greater weight to the more recent experience (high w). This \"parameter matching\" can be intuitively explained in the following qualitative way: the exploration activity pays off because it allows the agent to detect changes in the system. However, it is more effective if, when a change is detected, it can significantly affect the efficiency estimator (i.e., if w is high). Otherwise, the cost of the exploration activity is greater than its gain." }, { "figure_ref": [ "fig_4" ], "heading": "Changing Capacities", "publication_ref": [], "table_ref": [], "text": "We now consider the case in which the capacity of the resources can vary over time. In particular, we will demonstrate our results in the case of the previously mentioned setting.\nWe will assume the capacities rotate randomly among the resources and that, in five consecutive days, each resource gets the capacity of 40 for one day, 20 for 2 days, and 10 for the other 2 days.⁶ The load also varies randomly. The results of this experiment are shown in Figure 4. The best non-adaptive SR in this case gives a time-per-token value of 118.561, obtained with the configuration {20, 20, 20, 20, 20}.⁷ The adaptive SRs give much better results, which are only slightly worse than those of the load-querying SR.\n6. Usually the capacities will change in a less dramatic fashion. We use the above-mentioned setting in order to demonstrate the applicability of our approach under severe conditions.\n7. The load-querying SR gives the same results as in the case of fixed capacities, because such an SR is obviously not influenced by the change." }, { "figure_ref": [ "fig_5", "fig_5", "fig_6", "fig_7", "fig_5" ], "heading": "Heterogeneous Populations", "publication_ref": [ "b32" ], "table_ref": [], "text": "Throughout the previous section we have assumed that all the agents use the same SR, i.e., the Homogeneity Assumption.
Such an assumption models the situation in which there is a sort of centralized off-line controller which, in the beginning, tells the agents how to behave and then leaves the agents to make their own decisions. The situation described above is very different from having an on-line centralized controller which makes every decision. However, we would now like to move even further from that and investigate the situation in which each agent is able to make its own decision about which strategy to use and, maybe, adjust it over time.\nAs a step toward the study of systems of this kind, we drop the Homogeneity Assumption and consider the situation in which part of the population uses one SR and the other part uses a second one.\nIn the first set of experiments, we consider the setting discussed in Subsection 5.1 and we confront with each other two populations (called 1 and 2) of the same size (50 agents each). Each population uses a different SR in our family. The SR of population i (for i = 1, 2) will be determined by the pair of parameters (w_i, n_i). The measure of success of population i will be defined as the average time-per-token of its members, and will be denoted by T_i.\nFigure 5 shows the results obtained for w_1 = w_2 = 0.3 and n_1 = 4, and for different values of n_2, in the case of randomly varying load.\nOur results expose the following phenomenon: the two populations obtain different outcomes from the ones they obtain in the homogeneous case. More specifically, for 4 ≤ n_2 ≤ 6, the results obtained by the agents which use n_2 are generally better than the results obtained by the ones which use n_1, despite the fact that a homogeneous population which uses n_1 gets better results than a homogeneous population which uses n_2.\nThe phenomenon described above has the following intuitive explanation. For n_2 in the above-mentioned range, the population which uses n_2 is less \"exploring\" (i.e., more \"exploiting\") than the other one, and when it is left on its own it might not be able to adapt to the changes in a satisfactory manner. However, when it is joined with the other population, it gets the advantages of the experimental activity of the agents in that population, without paying for it. In fact, the more exploring agents, in trying to unload the most crowded resources, do a service to the other agents as well.\nIt is worth observing in Figure 5 that when n_2 is low (e.g., n_2 ≤ 3) the agents that use n_2 take the role of explorers and lose a lot, while the agents that use n_1 gain from that situation. Conversely, for high values of n_2 (e.g., n_2 ≥ 7) the performances of the exploiters worsen.\nFor a better understanding of the phenomena involved, we have experimented with an asymmetric population, composed of one large group and one small one, instead of two groups of similar size. Figure 6 shows the results obtained using a setting similar to the one above, but where population 1 is composed of 90 members while population 2 consists of only 10 members. In this case, for every value of n_2 ≥ 4, the exploiters do better than the explorers. The experiments also show that in this case, the higher n_2 is, the better T_2 is, i.e., the more the exploiters exploit, the more they gain.\nThe above results suggest that a single agent gets the best results for itself by being non-cooperative and always adopting the resource with the best performance (i.e., using BCSR), given that the rest of the agents use an adaptive (i.e., cooperative) SR. However, if all of the agents are non-cooperative then all of them will lose.
In conclusion, the selfish interest of an agent does not match the interest of the population. This is contrary to results obtained in other basic contexts of multi-agent learning (Shoham & Tennenholtz, 1992). In such cases, the agents adopting the lower value of w are in general the winners, as shown in Figure 7 for n_1 = n_2 = 4 and w_1 = 0.3. When w is very low, the corresponding agents get poor results and they are no longer the winners, as in the case of very high n in Figure 5.\nAnother interesting phenomenon is obtained when confronting adaptive agents with load-querying agents. Load-querying agents are agents who are able to consult the resources about where they should submit their jobs. A load-querying agent will submit its job to the most unloaded resource at the given point. When confronting load-querying agents with adaptive ones, the results obtained by the adaptive agents are obviously worse than the results obtained by the load-querying ones, but are better than the results obtained by a complete population of adaptive agents. This means that load-querying agents do not play the role of \"parasites\", as the above-mentioned \"exploiters\" do; the load-querying agents help in maintaining the load balancing among the resources, and therefore help the rest of the agents. Another result we obtain is that agents who adopt deterministic SRs may behave as parasites and worsen the performance of adaptive agents.\nThese assertions are supported by the experiments described in Figure 8, where a population of 90 agents, each of which uses an adaptive SR with parameters (n, w), is faced with a minority of 10 agents which use different SRs, as stated above. In particular, in the four cases we consider, the minority behaves in the following ways: (i) they choose the resource" }, { "figure_ref": [ "fig_8", "fig_1" ], "heading": "Communication among Agents", "publication_ref": [ "b32", "b35", "b22" ], "table_ref": [], "text": "Up to this point, we have assumed that there is no direct communication among the agents. The motivation for this was that we considered situations in which there were absolutely no transmission channels and protocols. This assumption is in agreement with the idea of multi-agent reinforcement learning. In systems where massive communication is feasible we are not so much concerned with multiple agent adaptation, and the problem reduces to supplying satisfactory communication mechanisms. Multi-agent reinforcement learning is most interesting where real life forces agents to act without a priori arranged communication channels and we must rely on action-feedback mechanisms. However, it is of interest to understand the effects of communication on the system efficiency (as in Shoham & Tennenholtz, 1992; Tan, 1993), where the agents are augmented with some sort of communication capabilities. Our study of this extension led to some illuminating results, which we will now present.\nWe assume that each agent can communicate only with some of the other agents, which we call its neighbors. We therefore consider a relation neighbor-of and assume it is reflexive, symmetric and transitive. As a consequence, the relation neighbor-of partitions the population into equivalence classes, that we call neighborhoods.\nThe form of communication we consider is based on the idea that the efficiency estimators of agents within a neighborhood will be shared among them when a decision is made (i.e., when an agent chooses a resource).
The reader should notice that this is a naive form of communication and that more sophisticated types of communication are possible. However, the above form of communication is most natural when we concentrate on agents that update their behavior based only on past information. In particular, this type of communication is similar to the ones used in the above-mentioned work on incorporating communication into the framework of multi-agent reinforcement learning.
We suppose that different SRs may be used by different agents in the same population, but we impose the condition that within a single neighborhood, the same SR is used by all its members.
We also assume that each agent keeps its own history and updates it by itself in the usual way. The choice, instead, is based not only on the agent's own efficiency estimator, but on the neighborhood efficiency estimator, obtained by averaging the efficiency estimators of all the members of the neighborhood. The neighborhood efficiency estimator has no physical storage: its value is recalculated each time a member needs it.
In order to compare the behavior of communicating agents and non-communicating ones, we assume that in a single population there might be, aside from the neighborhoods defined above, also some neighborhoods that do not allow the sharing of efficiency estimators among their members. The members of these neighborhoods behave as described in the previous sections, i.e., each agent relies only on its own history. The only thing that is common among the members of such a neighborhood is that all its members use the same SR.
We call a neighborhood in which the efficiency estimators are shared when a decision is taken a communicating neighborhood (CN), and a neighborhood in which this is not done a non-communicating neighborhood (NCN).
The first set of experiments we ran regards a population composed of only CNs, all of the same size. In particular, we considered CNs of various sizes, starting from 50 CNs of size 2 and going to 5 CNs of size 20. The load profile exploited is the random load change defined in Subsection 5.3, the value of w is taken to be 0.3, and n is taken to have various values. The results obtained are shown in Figure 9.
The results show that such communicating populations do not get good results. The reason for this is that members of a CN tend to be very conservative, in the sense that they mostly use the best resource. In fact, since they rely on an average over several agents, the picture they have of the system tends to be much more static. In particular, the bigger the CN is, the more conservative its members tend to be. For example, consider the values of (n, w) that give the best results for non-communicating agents; those values give quite bad performance for CNs, since CNs turn out to be too conservative.
Using more adaptive values of (n, w), the behavior of a communicating population improves and reaches a performance that is just slightly worse than the performance of a non-communicating population. Tuning the parameters at a finer grain, it is possible to obtain a performance that is equal to the one obtained by a non-communicating population. However, it seems clear that no obvious gain is achieved from this form of communication capability. The intuitive explanation is that there are two opposite effects caused by the communication. On the one hand, the agents get a fairer picture of the system, which prevents them from using bad resources and therefore getting bad performance. On the other hand, since all of the agents in a CN have a "better" picture of the system, they all tend to use the best resources and thus they all compete for them. 
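To make the mechanism concrete, here is a small sketch (reusing the illustrative agent representation from the earlier snippet) of how a member of a communicating neighborhood could choose a resource: the neighborhood efficiency estimator is not stored anywhere, but is recomputed from the members' private estimators at decision time.

```python
import random

def neighborhood_ee(members, r):
    # Recomputed on demand: the average of the members' private estimators.
    return sum(a["ee"][r] for a in members) / len(members)

def select_resource_cn(agent, members):
    n = agent["n"]
    weights = [neighborhood_ee(members, r) ** n
               for r in range(len(agent["ee"]))]
    total = sum(weights)
    return random.choices(range(len(weights)), [w / total for w in weights])[0]
```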
In fact, the agents behave selfishly and their selfish interest may not agree with the interest of the population as a whole.
The interesting message that we get is that the fact that some agents may have a "distorted" picture of the system (which is typical for non-communicating populations) turns out to be an advantage for the population as a whole.
Sharing the data among agents leads to poorer performances also because in this case the agents have a common view of the loads and target their jobs toward the same (lightly loaded) resources, which quickly become overloaded. In order to profitably use the shared data, we should allow for some form of reasoning about the fact that the data is shared. This problem, however, is out of the scope of this paper (see e.g., Lesser, 1991).
In order to understand the behavior of the system when CNs and NCNs face each other, we consider an NCN of 80 agents together with a set of CNs of equal size, for different values of that size. The results of the corresponding experiments are shown in Figure 10. The members of the CNs, being more inclined to use the best resources, behave as parasites in the sense explained in Section 6. They exploit the adaptiveness of the rest of the population to obtain good performance from the best resources. For this reason they get better results than the rest of the population, as shown by the experimental results.
It is interesting to observe that when the NCN uses a very conservative selection rule, the CNs obtain even better results. The intuitive explanation for this behavior is that although all groups, i.e., both the communicating ones and the one with a high value of n, tend to be conservative, the communicating ones "win" because they are conservative in a more "clever" way, that is, making use of a better picture of the situation.
The conclusion we draw in this section is that the proposed form of communication between agents may not provide useful means to improve the performance of a population in our setting. However, we do not claim that communication between agents is completely useless. Nevertheless, we have observed that it does not provide a straightforward significant improvement. Our results support the claim that the sole past history of an agent is a reasonable source of information on which to base its decision, assuming we do not consider available any kind of real-time information (e.g., the current load of the resources)." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b26", "b16", "b32", "b2", "b4" ], "table_ref": [], "text": "The previous sections were devoted to a report on our experimental study. We now synthesize our observations in view of our motivation, as discussed in Sections 1 and 2.
As we mentioned, our model is a general model where active autonomous agents have to select among several resources in a dynamic fashion and based on local information. The fact that the agents use only local information makes the possibility of efficient load balancing questionable. However, we showed that adaptive load balancing based on purely local feedback is a feasible task. Hence, our results are complementary to the ones obtained in the distributed computer systems literature. 
As Mirchandaney and Stankovic (1986) put it: "…what is significant about our work is that we have illustrated that it is possible to design a learning controller that is able to dynamically acquire relevant job scheduling information by a process of trial and error, and use that information to provide good performance."
The study presented in our paper supplies a complementary contribution, in which we are able to show that useful adaptive load balancing can be obtained using purely local information and in the framework of a general organizational-theoretic model.
In our study we identified various parameters of the adaptive process and investigated how they affect the efficiency of adaptive load balancing. This part of our study supplies useful guidelines for a systems designer who may force all the agents to work based on a common selection rule. Our observations, although somewhat related to previous observations made in other contexts and models (Huberman & Hogg, 1988), enable us to demonstrate aspects of purely local adaptive behavior in a non-trivial model.
Our results about the disagreement between the selfish interest of agents and the common interest of the population are in sharp contrast to previous work on multi-agent learning (Shoham & Tennenholtz, 1992, 1994) and to the dynamic programming perspective of earlier work on distributed systems (Bertsekas & Tsitsiklis, 1989). Moreover, we explore how the interaction between different agent types affects the system's efficiency as well as the individual agent's efficiency. The related results can also be interpreted as guidelines for a designer who may have only partial control of a system.
The synthesis of the above observations teaches us about adaptive load balancing when one adopts a reinforcement learning perspective where the agents rely only on their local information and activity. An additional step we performed attempts to bridge some of the gap between our local view and previous work on adaptive load balancing by communicating agents, whose decisions may be controlled by learning automata or by other means. We therefore rule out the possibility of communication about the current status of resources and of joint decision-making, but enable a limited sharing of previous history. We show that such limited communication may not help, and may even deteriorate system efficiency. This leaves us with a major gap between previous work, where communication among agents is the basic tool for adaptive load balancing, and our work. Much is left to be done in attempting to bridge this gap. We see this as a major challenge for further research." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b26", "b5", "b14", "b27", "b41", "b9", "b26", "b25", "b4", "b5", "b35", "b11", "b7", "b20", "b15", "b38", "b32", "b2", "b28", "b35", "b40", "b31", "b18", "b19", "b16", "b1", "b28", "b10", "b0", "b29", "b39", "b13", "b23", "b8", "b23", "b2" ], "table_ref": [], "text": "In Section 2 we mentioned some related work in the field of distributed computer systems (Mirchandaney & Stankovic, 1986; Billard & Pasquale, 1993; Glockner & Pasquale, 1993; Mirchandaney et al., 1989; Zhou, 1988; Eager et al., 1986). A typical example of such work is the paper by Mirchandaney and Stankovic (1986). In this work learning automata are used in order to decide on the action to be taken. However, the suggested algorithms heavily rely on communication and information sharing among agents. This is in sharp contrast to our work. 
In addition, there are differences between the type of model we use and the model presented in the above-mentioned work and in other work on distributed computer systems.
Applications of learning algorithms to load balancing problems are given by Mehra (1992) and Mehra and Wah (1993). However, in that work as well, the agents (sites, in the authors' terminology) have the ability to communicate and to exchange workload values, even though such values are subject to uncertainty due to delays. In addition, differently from our work, the learning activity is done off-line. In particular, in the learning phase the whole system is dedicated to the acquisition of workload indices. Such load indices are then used in the running phase as threshold values for job migration between different sites.
In spite of the differences, there are some similarities between our work and the above-mentioned work. One important similarity is the use of learning procedures. This differs from the more classical work on parallel and distributed computation (Bertsekas & Tsitsiklis, 1989), which applies numerical and iterative methods to the solution of problems in network flow and parallel computing. Other similarities are related to our study of the division of the society into groups. This somewhat resembles work on group formation (Billard & Pasquale, 1993) in distributed computer systems. The information sharing we allow in Section 7 is similar to the limited communication discussed by Tan (1993). In the classification of load-balancing problems given by Ferrari (1985), our work falls into the category of load-independent and non-preemptive pure load balancing. The problems we investigate can also be seen as sender-initiated problems, although in our case the sender is the agent and not the (overloaded) resource.
One may wonder how our work differs from other work on adaptive load balancing in Operations Research (OR) (e.g., queuing theory; Bonomi, Doshi, Kaufmann, Lee, & Kumar, 1990). Indeed, there are some commonalities. In both OR and our work, individual decisions are made locally, based on information obtained dynamically during runtime. And in both cases the systems constructed are sufficiently complex that the most interesting results tend to be obtained experimentally. However, a careful look at the relevant OR literature reveals an essential difference between the perspective of OR on the topic and our reinforcement-learning perspective: OR permits free communication within the system, and thus there is no significant element of uncertainty in that framework. In particular, the issue of exploration versus exploitation, which lies at the heart of our approach, is completely absent from work in OR.
Some work on adaptive load balancing and related topics has been carried out also by the Artificial Intelligence community (see e.g., Kosoresow, 1993; Gmytrasiewicz, Durfee, & Wehe, 1991; Wellman, 1993). This work too, however, tends to be based on some form of communication among the agents, whereas in our case the load balancing is obtained purely from a learning activity.
This article is related to our previous work on co-learning (Shoham & Tennenholtz, 1992, 1994). 
The framework of co-learning is a framework for multi-agent learning, which differs from other frameworks discussed in multi-agent reinforcement learning (Narendra & Thathachar, 1989; Tan, 1993; Yanco & Stein, 1993; Sen, Sekaran, & Hale, 1994) due to the fact that it considers the case of stochastic interactions among subsets of the agents, where purely local feedback is revealed to the agents based on these interactions. The framework of co-learning is similar in some respects to a number of dynamic frameworks in economics (Kandori, Mailath, & Rob, 1991), physics (Kinderman & Snell, 1980), computational ecologies (Huberman & Hogg, 1988), and biology (Altenberg & Feldman, 1987). Our study of adaptive load balancing can be treated as a study in co-learning.
Relevant to our work is also the literature in the field of Learning Automata (see Narendra & Thathachar, 1989). In fact, an agent in our setting can be seen as a learning automaton. Therefore, one may hope that theoretical results on interconnected automata and N-player games (see e.g., El-Fattah, 1980; Abdel-Fattah, 1983; Narendra & Wheeler Jr., 1983; Wheeler Jr. & Narendra, 1985) could be imported into our framework. Unfortunately, due to the stochastic nature of job submissions (i.e., agent interactions) and the real-valued (instead of binary) feedback, our problem does not fit completely into the theoretical framework of learning automata. Hence, results concerning optimality, convergence or expediency of learning rules such as Linear Reward-Penalty or Linear Reward-Inaction cannot be easily adapted to our setting. The fact that we use a stochastic model for the interaction among agents makes our work closely related to the above-mentioned work on co-learning. Nevertheless, our work is largely influenced by learning automata theory, and our resource-selection rules closely resemble reinforcement schemes for learning automata.
Last but not least, our work is related to work applying organization theory and management techniques to the field of Distributed AI (Fox, 1981; Malone, 1987; Durfee, Lesser, & Corkill, 1987). Our model is closely related to models of decision-making in management and organization theory (e.g., Malone, 1987) and applies a reinforcement learning perspective to that context. This makes our work related to psychological models of decision-making (Arthur, 1994)." }, { "figure_ref": [], "heading": "Summary", "publication_ref": [], "table_ref": [], "text": "This work applies the idea of multi-agent reinforcement learning to the problem of load balancing in a loosely-coupled multi-agent system, in which agents need to adapt to one another as well as to a changing environment. We have demonstrated that adaptive behavior is useful for efficient load balancing in this context and identified a pair of parameters that affect that efficiency in a non-trivial fashion. Each parameter, holding the other parameter fixed, gives rise to a certain tradeoff, and the two parameters interplay in a non-trivial and illuminating way. We have also exposed illuminating results regarding heterogeneous populations, such as how a group of parasitic, less adaptive agents can gain from the flexibility of other agents. In addition, we showed that naive use of communication may not improve, and might even deteriorate, the system efficiency." 
}, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We thank the anonymous reviewers and Steve Minton, whose stimulating comments helped us in improving on an earlier version of this paper." } ]
[ { "authors": "Y M Abdel-Fattah", "journal": "IEEE Transactions on Systems, Man, and Cybernetics", "ref_id": "b0", "title": "Stochastic automata modeling of certain problems of collective behavior", "year": "1983" }, { "authors": "L Altenberg; M W Feldman", "journal": "Genetics", "ref_id": "b1", "title": "Selection, generalized transmission, and the evolution of modi er genes. I. The reduction principle", "year": "1987" }, { "authors": "W Arthur", "journal": "", "ref_id": "b2", "title": "Inductive reasoning, bounded rationality and the bar problem", "year": "1994" }, { "authors": "R Axelrod", "journal": "Basic Books", "ref_id": "b3", "title": "The Evolution of Cooperation", "year": "1984" }, { "authors": "D Bertsekas; J Tsitsiklis", "journal": "Prentice Hall", "ref_id": "b4", "title": "Parallel and Distributed Computation: Numerical Methods", "year": "1989" }, { "authors": "E Billard; J Pasquale", "journal": "IEEE Transactions on Systems, Man, and Cybernetics", "ref_id": "b5", "title": "E ects of delayed communication in dynamic group formation", "year": "1993" }, { "authors": "J M Blackburn", "journal": "Ablex Publishing Corporation", "ref_id": "b6", "title": "Acquisition to skill: An analysis of learning curves", "year": "1936" }, { "authors": "F Bonomi; B Doshi; J Kaufmann; T Lee; A Kumar", "journal": "Queuing Systems", "ref_id": "b7", "title": "A case study of adaptive load balancing algorithm", "year": "1990" }, { "authors": "E H Durfee; V R Lesser; D D Corkill", "journal": "IEEE Transactions on Computers", "ref_id": "b8", "title": "Coherent cooperation among communicating problem solvers", "year": "1987" }, { "authors": "D Eager; E Lazowska; J Zahorjan", "journal": "IEEE Transactions on Software Engineering", "ref_id": "b9", "title": "Adaptive load sharing in homogeneous distributed systems", "year": "1986" }, { "authors": "Y M El-Fattah", "journal": "IEEE Transactions on Systems, Man, and Cybernetics", "ref_id": "b10", "title": "Stochastic automata modeling of certain problems of collective behavior", "year": "1980" }, { "authors": "D Ferrari", "journal": "", "ref_id": "b11", "title": "A study of load indices for load balancing schemes", "year": "1985" }, { "authors": "D Ferrari; G Serazzi; A Zeigner", "journal": "Prentice Hall", "ref_id": "b12", "title": "Measurement and Tuning of Computer Systems", "year": "1983" }, { "authors": "M S Fox", "journal": "IEEE Transactions on Systems, Man, and Cybernetics", "ref_id": "b13", "title": "An organizational view of distributed systems", "year": "1981" }, { "authors": "A Glockner; J Pasquale", "journal": "IEEE Transactions on Systems, Man, and Cybernetics", "ref_id": "b14", "title": "Coadaptive behavior in a simple distributed job scheduling system", "year": "1993" }, { "authors": "P Gmytrasiewicz; E Durfee; D Wehe", "journal": "", "ref_id": "b15", "title": "The utility of communication in coordinating intelligent agents", "year": "1991" }, { "authors": "B A Huberman; T Hogg", "journal": "Elsevier Science", "ref_id": "b16", "title": "The behavior of computational ecologies", "year": "1988" }, { "authors": "L Kaelbling", "journal": "MIT Press", "ref_id": "b17", "title": "Learning in Embedded Systems", "year": "1993" }, { "authors": "M Kandori; G Mailath; R Rob", "journal": "Mimeo. 
University of Pennsylvania", "ref_id": "b18", "title": "Learning, mutation and long equilibria in games", "year": "1991" }, { "authors": "R Kinderman; S L Snell", "journal": "American Mathematical Society", "ref_id": "b19", "title": "Markov Random Fields and their Applications", "year": "1980" }, { "authors": "A P Kosoresow", "journal": "", "ref_id": "b20", "title": "A fast rst-cut protocol for agent coordination", "year": "1993" }, { "authors": "S Kraus; J Wilkenfeld", "journal": "", "ref_id": "b21", "title": "The function of time in cooperative negotiations", "year": "1991" }, { "authors": "V R Lesser", "journal": "IEEE Transactions on Systems, Man, and Cybernetics", "ref_id": "b22", "title": "A retrospective view of FA/C distributed problem solving", "year": "1991" }, { "authors": "T W Malone", "journal": "Management Science", "ref_id": "b23", "title": "Modeling coordination in organizations and markets", "year": "1987" }, { "authors": "P Mehra", "journal": "", "ref_id": "b24", "title": "Automated Learning of Load-Balancing Strategies For A Distributed Computer System", "year": "1992" }, { "authors": "P Mehra; B W Wah", "journal": "AIAA", "ref_id": "b25", "title": "Population-based learning of load balancing policies for a distributed computer system", "year": "1993" }, { "authors": "R Mirchandaney; J Stankovic", "journal": "Journal of Parallel and Distributed Computing", "ref_id": "b26", "title": "Using stochastic learning automata for job scheduling in distributed processing systems", "year": "1986" }, { "authors": "R Mirchandaney; D Towsley; J Stankovic", "journal": "IEEE Transactions on Computers", "ref_id": "b27", "title": "Analysis of the e ects of delays on load sharing", "year": "1989" }, { "authors": "K Narendra; M A L Thathachar", "journal": "Prentice Hall", "ref_id": "b28", "title": "Learning Automata: An Introduction", "year": "1989" }, { "authors": "K Narendra; R M Wheeler", "journal": "IEEE Transactions on Systems, Man, and Cybernetics", "ref_id": "b29", "title": "An N-player sequential stochastic game with identical payo s", "year": "1983" }, { "authors": "S Pulidas; D Towsley; J Stankovic", "journal": "IEEE", "ref_id": "b30", "title": "Imbedding gradient estimators in load balancing algorithms", "year": "1988" }, { "authors": "S Sen; M Sekaran; J Hale", "journal": "", "ref_id": "b31", "title": "Learning to coordinate without sharing information", "year": "1994" }, { "authors": "Y Shoham; M Tennenholtz", "journal": "", "ref_id": "b32", "title": "Emergent conventions in multi-agent systems: initial experimental results and observations", "year": "1992" }, { "authors": "Y Shoham; M Tennenholtz", "journal": "", "ref_id": "b33", "title": "Co-learning and the evolution of social activity", "year": "1994" }, { "authors": "R Sutton", "journal": "Machine Learning", "ref_id": "b34", "title": "Special issue on reinforcement learning", "year": "1992" }, { "authors": "M Tan", "journal": "", "ref_id": "b35", "title": "Multi-agent reinforcement learning: Independent vs. 
cooperative agents", "year": "1993" }, { "authors": "E L Thronkide", "journal": "Psychological Monographs", "ref_id": "b36", "title": "Animal intelligence: An experimental study of the associative processes in animals", "year": "1898" }, { "authors": "C Watkins", "journal": "", "ref_id": "b37", "title": "Learning With Delayed Rewards", "year": "1989" }, { "authors": "M P Wellman", "journal": "Journal of Arti cial Intelligence Research", "ref_id": "b38", "title": "A market-oriented programming environment and its application to distributed multicommodity ow problems", "year": "1993" }, { "authors": "R M Wheeler; K Narendra", "journal": "Automatica", "ref_id": "b39", "title": "Learning models for decentralized decision making", "year": "1985" }, { "authors": "H Yanco; L Stein", "journal": "", "ref_id": "b40", "title": "An adaptive communication protocol for cooperating mobile robots", "year": "1993" }, { "authors": "S Zhou", "journal": "IEEE Transactions on Software Engineering", "ref_id": "b41", "title": "A trace-driven simulation study of dynamic load balancing", "year": "1988" }, { "authors": "G Zlotkin; J S Rosenschein", "journal": "", "ref_id": "b42", "title": "A domain theory for task oriented negotiation", "year": "1993" } ]
[ { "formula_coordinates": [ 6, 270, 351.6, 114.96, 17.04 ], "formula_id": "formula_0", "formula_text": "= WT + (1 W)ee A (R)" }, { "formula_coordinates": [ 6, 244.56, 504.48, 123.12, 17.04 ], "formula_id": "formula_1", "formula_text": "W = w + (1 w)=jd A (R)" }, { "formula_coordinates": [ 6, 90, 631.56, 312.24, 41.88 ], "formula_id": "formula_2", "formula_text": "function pd 0 A (R) := ( ee A (R) n if jd A (R) > 0 E ee A ] n if jd A (R) = 0" }, { "formula_coordinates": [ 7, 90, 144.36, 259.2, 49.8 ], "formula_id": "formula_3", "formula_text": "pd A (R) := pd 0 A (R)= where = R pd 0 A (R) is a normalization factor. 3" } ]
Adaptive Load Balancing: A Study in Multi-Agent Learning
We study the process of multi-agent reinforcement learning in the context of load balancing in a distributed system, without use of either central coordination or explicit communication. We first define a precise framework in which to study adaptive load balancing, important features of which are its stochastic nature and the purely local information available to individual agents. Given this framework, we show illuminating results on the interplay between basic adaptive behavior parameters and their effect on system efficiency. We then investigate the properties of adaptive load balancing in heterogeneous populations, and address the issue of exploration vs. exploitation in that context. Finally, we show that naive use of communication may not improve, and might even harm, system efficiency.
Andrea Schaerf; Yoav Shoham
[ { "figure_caption": "c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Best non-adaptive SRs for xed load", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Performance of the adaptive Selection Rules for xed load", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance of the adaptive Selection Rules for random load", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Performance of the adaptive Selection Rules for changing capacities", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Performance of 2 populations of 50 agents with n 1 = 4 and w 1 = w 2 = 0:3", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Performance of 2 populations of 90/10 agents with n 1 = 4 and w 1 = w 2 = 0:3", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Performance of 2 populations of 50 agents with n 1 = n 2 = 4 and w 1 = 0:3", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Performance of the adaptive Selection Rules for random load pro le for communicating agents", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Performance of CNs and NCNs together reasonable information on which to base its decision, assuming we do not consider available any kind of real-time information (e.g., current load of the resources).", "figure_data": "80 agents20 agentsT 1T 2(.3,4) 1 NCN (.3,4) 1 CN 65.287 63.054 (.3,4) 1 NCN (.3,4) 2 CNs 65.069 63.307 (.3,4) 1 NCN (.3,4) 5 CNs 65.091 62.809 (.3,4) 1 NCN (.3,4) 10 CNs 64.895 63.840(.3,10) 1 NCN (.3,4) 1 CN 68.419 60.018 (.3,10) 1 NCN (.3,4) 2 CNs 68.319 59.512 (.3,10) 1 NCN (.3,4) 5 CNs 68.529 60.674 (.3,10) 1 NCN (.3,4) 10 CNs 68.351 61.711Figure 10:", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b12" ], "table_ref": [], "text": "Since before the beginning of arti cial intelligence, philosophers, control theorists and economists have looked for a satisfactory de nition of rational behaviour. This is needed to underpin theories of ethics, inductive learning, reasoning, optimal control, decision-making, and economic modelling. Doyle (1983) has proposed that AI itself be de ned as the computational study of rational behaviour|e ectively equating rational behaviour with intelligence. The role of such de nitions in AI is to ensure that theory and practice are correctly aligned. If we de ne some property P, then we hope to be able to design a system that provably possesses property P. Theory meets practice when our systems exhibit P in reality. Furthermore, that they exhibit P in reality should be something that we actually care about. In a sense, the choice of what P to study determines the nature of the eld.\nThere are a number of possible choices for P: Perfect rationality: the classical notion of rationality in economics and philosophy.\nA perfectly rational agent acts at every instant in such a way as to maximize its expected utility, given the information it has acquired from the environment. Since action selection requires computation, and computation takes time, perfectly rational agents do not exist for non-trivial environments.\nCalculative rationality: the notion of rationality studied in AI. A calculatively rational agent eventually returns what would have been the rational choice at the beginning of its deliberation. There exist systems such as in uence diagram evaluators that exhibit this property for a decision-theoretic de nition of rational choice, and systems such as nonlinear planners that exhibit it for a logical de nition of rational choice. This is assumed to be an interesting property for a system to exhibit since it constitutes an \\in-principle\" capacity to do the right thing. Calculative rationality is of limited value in practice, because the actual behaviour exhibited by such systems is absurdly far from being rational; for example, a calculatively rational chess program will choose the right move, but may take 10 50 times too long to do so. As a result, AI systembuilders often ignore theoretical developments, being forced to rely on trial-and-error engineering to achieve their goals. Even in simple domains such as chess, there is little theory for designing and analysing high-performance programs.\nMetalevel rationality: a natural response to the problems of calculative rationality. A metalevel rational system optimizes over the object-level computations to be performed in the service of selecting actions. In other words, for each decision it nds the optimal combination of computation-sequence-plus-action, under the constraint that the action must be selected by the computation. Full metalevel rationality is seldom useful because the metalevel computations themselves take time, and the metalevel decision problem is often more di cult than the object-level problem. Simple approximations to metalevel rationality have proved useful in practice|for example, metalevel policies that limit lookahead in chess programs|but these engineering expedients merely serve to illustrate the lack of a theoretical basis for agent design. Bounded optimality: a bounded optimal agent behaves as well as possible given its computational resources. 
Bounded optimality specifies optimal programs rather than optimal actions or optimal computation sequences. Only by the former approach can we avoid placing constraints on intelligent agents that cannot be met by any program. Actions and computations are, after all, generated by programs, and it is over programs that designers have control. We make three claims:
1. A system that exhibits bounded optimality is desirable in reality.
2. It is possible to construct provably bounded optimal programs.
3. Artificial intelligence can be usefully characterized as the study of bounded optimality, particularly in the context of complex task environments and reasonably powerful computing devices.
The first claim is unlikely to be controversial. This paper supports the second claim in detail. The third claim may, or may not, stand the test of time.
We begin in section 2 with a necessarily brief discussion of the relationship between bounded optimality and earlier notions of rationality. We note in particular that some important distinctions can be missed without precise definitions of terms. Thus in section 3 we provide formal definitions of agents, their programs, their behaviour and their rationality.
Together with formal descriptions of task environments, these elements allow us to prove that a given agent exhibits bounded optimality. Section 4 examines a class of agent architectures for which the problem of generating bounded optimal configurations is efficiently soluble. The solution involves a class of interesting and practically relevant optimization problems that do not appear to have been addressed in the scheduling literature. We illustrate the results by showing how the throughput of an automated mail-sorting facility might be improved. Section 5 initiates a discussion of how bounded optimal configurations might be learned from experience in an environment. In section 6, we define a weaker property, asymptotic bounded optimality (ABO), that may be more robust and tractable than the strict version of bounded optimality. In particular, we can construct universal ABO programs. A program is universally ABO if it is ABO regardless of the specific form of time dependence of the utility function. Universal ABO programs can therefore be used as building blocks for more complex systems. We conclude with an assessment of the prospects for further development of this approach to artificial intelligence." }, { "figure_ref": [], "heading": "Historical Perspective", "publication_ref": [ "b30", "b26", "b27", "b24", "b14", "b37", "b28", "b29", "b18", "b36", "b5", "b8", "b16", "b9", "b6", "b33", "b15", "b34" ], "table_ref": [], "text": "The classical idea of perfect rationality, which developed from Aristotle's theories of ethics, work by Arnauld and others on choice under uncertainty, and Mill's utilitarianism, was put on a formal footing in decision theory by Ramsey (1931) and von Neumann and Morgenstern (1947). It stipulates that a rational agent always act so as to maximize its expected utility. The expectation is taken according to the agent's own beliefs; thus, perfect rationality does not require omniscience.
In artificial intelligence, the logical definition of rationality, known in philosophy as the "practical syllogism", was put forward by McCarthy (1958), and reiterated strongly by Newell (1981). Under this definition, an agent should take any action that it believes is guaranteed to achieve any of its goals. 
If AI can be said to have had a theoretical foundation, then this definition of rationality has provided it. McCarthy believed, probably correctly, that in the early stages of the field it was important to concentrate on "epistemological adequacy" before "heuristic adequacy", that is, on capability in principle rather than in practice. The methodology that has resulted involves designing programs that exhibit calculative rationality, and then using various speedup techniques and approximations in the hope of getting as close as possible to perfect rationality. Our belief, albeit unproven, is that the simple agent designs that fulfill the specification of calculative rationality may not provide good starting points from which to approach bounded optimality. Moreover, a theoretical foundation based on calculative rationality cannot provide the necessary guidance in the search.
It is not clear that AI would have embarked on the quest for calculative rationality had it not been operating in the halcyon days before formal intractability results were discovered. One response to the spectre of complexity has been to rule it out of bounds. Levesque and Brachman (1987) suggest limiting the complexity of the environment so that calculative and perfect rationality coincide. Doyle and Patil (1991) argue strongly against this position.
Economists have used perfect rationality as an abstract model of economic entities, for the purposes of economic forecasting and designing market mechanisms. This makes it possible to prove theorems about the properties of markets in equilibrium. Unfortunately, as Simon (1982) pointed out, real economic entities have limited time and limited powers of deliberation. He proposed the study of bounded rationality, investigating "… the shape of a system in which effectiveness in computation is one of the most important weapons of survival." Simon's work focussed mainly on satisficing designs, which deliberate until reaching some solution satisfying a preset "aspiration level." The results have descriptive value for modelling various actual entities and policies, but no general prescriptive framework for bounded rationality was developed. Although it proved possible to calculate optimal aspiration levels for certain problems, no structural variation was allowed in the agent design.
In the theory of games, bounds on the complexity of players have become a topic of intense interest. For example, it is a troubling fact that defection is the only equilibrium strategy for unbounded agents playing a fixed number of rounds of the Prisoners' Dilemma game. Neyman's theorem (Neyman, 1985), recently proved by Papadimitriou and Yannakakis (1994), shows that an essentially cooperative equilibrium exists if each agent is a finite automaton with a number of states that is less than exponential in the number of rounds. This is essentially a bounded optimality result, where the bound is on space rather than on speed of computation. This type of result is made possible by a shift from the problem of selecting actions to the problem of selecting programs. I. J. Good (1971) distinguished between perfect or "type I" rationality, and metalevel or "type II" rationality. 
He defines this as "the maximization of expected utility taking into account deliberation costs." Simon (1976) also says: "The global optimization problem is to find the least-cost or best-return decision, net of computational costs." Although type II rationality seems to be a step in the right direction, it is not entirely clear whether it can be made precise in a way that respects the desirable intuition that computation is important. We will try one interpretation, although there may be others. The key issue is the space over which the "maximization" or "optimization" occurs. Both Good and Simon seem to be referring to the space of possible deliberations associated with a particular decision. Conceptually, there is an "object-level machine" that executes a sequence of computations under the control of a "meta-level machine." The outcome of the sequence is the selection of an external action. An agent exhibits type II rationality if, at the end of its deliberation and subsequent action, its utility is maximized compared to all possible deliberate/act pairs in which it could have engaged. For example, Good discusses one possible application of type II rationality in chess programs. In this case, the object-level steps are node expansions in the game tree, followed by backing up of leaf node evaluations to show the best move. For simplicity we will assume a per-move time limit. Then a type II rational agent will execute whichever sequence of node expansions chooses the best move, of all those that finish before the time limit. Unfortunately, the computations required in the "metalevel machine" to select the object-level deliberation may be extremely expensive. Good actually proposes a fairly simple (and nearly practical) metalevel decision procedure for chess, but it is far from optimal. It is hard to see how a type II rational agent could justify executing a suboptimal object-level computation sequence if we limit the scope of the optimization problem to a single decision. The difficulty can only be resolved by thinking about the design of the agent program, which generates an unbounded set of possible deliberations in response to an unbounded set of circumstances that may arise during the life of the agent.
Philosophy has also seen a gradual evolution in the definition of rationality. There has been a shift from consideration of act utilitarianism (the rationality of individual acts) to rule utilitarianism, or the rationality of general policies for acting. This shift has been caused by difficulties with individual versus societal rationality, rather than any consideration of the difficulty of computing rational acts. Some consideration has been given more recently to the tractability of general moral policies, with a view to making them understandable and usable by persons of average intelligence (Brandt, 1953). Cherniak (1986) has suggested a definition of "minimal rationality", specifying lower bounds on the reasoning powers of any rational agent, instead of upper bounds. A philosophical proposal generally consistent with the notion of bounded optimality can be found in Dennett's "Moral First Aid Manual" (1986). Dennett explicitly discusses the idea of reaching equilibrium within the space of decision procedures. He uses as an example the PhD admissions procedure of a philosophy department. He concludes, as do we, that the best procedure may be neither elegant nor illuminating. 
The existence of such a procedure, and the process of reaching it, are the main points of interest.
Many researchers in AI, some of whose work is discussed below, have worked on the problem of designing agents with limited computational resources. The 1989 AAAI Symposium on AI and Limited Rationality (Fehling & Russell, 1989) contains an interesting variety of work on the topic. Much of this work is concerned with metalevel rationality.
Metareasoning (reasoning about reasoning) is an important technique in this area, since it enables an agent to control its deliberations according to their costs and benefits. Combined with the idea of anytime (Dean & Boddy, 1988) or flexible (Horvitz, 1987) algorithms, which return better results as time goes by, a simple form of metareasoning allows an agent to behave well in a real-time environment. A simple example is provided by iterative-deepening algorithms used in game-playing. Breese and Fehling (1990) apply similar ideas to controlling multiple decision procedures. Russell and Wefald (1989) give a general method for precompiling certain aspects of metareasoning so that a system can efficiently estimate the effects of individual computations on its intentions, giving fine-grained control of reasoning. These techniques can all be seen as approximating metalevel rationality; they provide useful insights into the general problem of control of reasoning, but there is no reason to suppose that the approximations used are optimal in any sense.
The intuitive notion of bounded optimality seems to have become current in the AI community in the mid-1980's. Horvitz (1987) uses the term bounded optimality to refer to "the optimization of computational utility given a set of assumptions about expected problems and constraints in reasoning resources." Russell and Wefald (1991) say that an agent exhibits bounded optimality for a given task environment "if its program is a solution to the constrained optimization problem presented by its architecture." Recent work by Etzioni (1989) and Russell and Zilberstein (1991) can be seen as optimizing over a well-defined set of agent designs, thereby making the notion of bounded optimality more precise. In the next section, we build a suitable set of general definitions from the ground up, so that we can begin to demonstrate examples of provably bounded optimal agents." }, { "figure_ref": [], "heading": "Agents, Architectures and Programs", "publication_ref": [ "b0", "b7", "b17" ], "table_ref": [], "text": "Intuitively, an agent is just a physical entity that we wish to view in terms of its perceptions and actions. What counts in the first instance is what it does, not necessarily what it thinks, or even whether it thinks at all. This initial refusal to consider further constraints on the internal workings of the agent (such as that it should reason logically, for example) helps in three ways: first, it allows us to view such "cognitive faculties" as planning and reasoning as occurring in the service of finding the right thing to do; second, it makes room for those among us (Agre & Chapman, 1987; Brooks, 1986) who take the position that systems can do the right thing without such cognitive faculties; third, it allows more freedom to consider various specifications, boundaries and interconnections of subsystems.
We begin by defining agents and environments in terms of the actions and percepts that they exchange, and the sequence of states they go through. The agent is described by an agent function from percept sequences to actions. 
This treatment is fairly standard (see, e.g., Genesereth & Nilsson, 1987). We then go "inside" the agent to look at the agent program that generates its actions, and define the "implementation" relationship between a program and the corresponding agent function. We consider performance measures on agents, and the problem of designing agents to optimize the performance measure." }, { "figure_ref": [], "heading": "Specifying agents and environments", "publication_ref": [], "table_ref": [], "text": "An agent can be described abstractly as a mapping (the agent function) from percept sequences to actions. Let O be the set of percepts that the agent can receive at any instant, and A be the set of possible actions the agent can carry out in the external world. Since we are interested in the behaviour of the agent over time, we introduce a set of time points or instants, T. The set T is totally ordered by the < relation with a unique least element. Without loss of generality, we let T be the set of non-negative integers.
The percept history of an agent is a sequence of percepts indexed by time. We define the set of percept histories to be 𝒪_T = {O_T : T → O}. The prefix of a history O_T ∈ 𝒪_T up to time t is denoted O^t and is the projection of O_T on [0..t]. We can define the set of percept history prefixes as 𝒪^t = {O^t | t ∈ T, O_T ∈ 𝒪_T}. Similarly, we define the set of action histories 𝒜_T = {A_T : T → A}. The set of action history prefixes is 𝒜^t, defined as the set of projections A^t of histories A_T ∈ 𝒜_T.
Definition 1 Agent function: a mapping
f : 𝒪^t → A where A_T(t) = f(O^t)
Note that the agent function is an entirely abstract entity, unlike the agent program that implements it. Note also that the "output" of the agent function for a given percept sequence may be a null action, for example if the agent is still thinking about what to do. The agent function specifies what the agent does at each time step. This is crucial to the distinction between perfect rationality and calculative rationality.
Agents live in environments. The states of an environment E are drawn from a set X. The set of possible state trajectories is defined as 𝒳_T = {X_T : T → X}. The agent does not necessarily have full access to the current state X_T(t), but the percept received by the agent does depend on the current state through the perceptual filtering function f_p. The effects of the agent's actions are represented by the environment's transition function f_e, which specifies the next state given the current state and the agent's action. An environment is therefore defined as follows:
Definition 2 Environment E: a set of states X with a distinguished initial state X_0, a transition function f_e and a perceptual filter function f_p such that
X_T(0) = X_0
X_T(t + 1) = f_e(A_T(t), X_T(t))
O_T(t) = f_p(X_T(t))
The state history X_T is thus determined by the environment and the agent function. We use the notation effects(f, E) to denote the state history generated by an agent function f operating in an environment E. We will also use the notation [E, A^t] to denote the state history generated by applying the action sequence A^t starting in the initial state of environment E. Notice that the environment is discrete and deterministic in this formulation. We can extend the definitions to cover non-deterministic and continuous environments, but at the cost of additional complexity in the exposition. None of our results depend in a significant way on discreteness or determinism." 
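Definitions 1 and 2 translate directly into a simulation loop. The following sketch is merely an illustration of the coupled equations; the toy environment, horizon, and all names are invented for the example:

```python
def run_episode(f_e, f_p, x0, agent_fn, horizon):
    """agent_fn maps a percept-history prefix O^t to the action A_T(t)."""
    x, percepts, states, actions = x0, [], [x0], []
    for t in range(horizon):
        percepts.append(f_p(x))           # O_T(t) = f_p(X_T(t))
        a = agent_fn(tuple(percepts))     # A_T(t) = f(O^t)
        actions.append(a)
        x = f_e(a, x)                     # X_T(t+1) = f_e(A_T(t), X_T(t))
        states.append(x)
    return states, actions

# Toy instantiation: a fully observable counter environment.
states, actions = run_episode(
    f_e=lambda a, x: x + a,
    f_p=lambda x: x,
    x0=0,
    agent_fn=lambda obs: 1 if obs[-1] < 5 else 0,
    horizon=8)
print(states)   # [0, 1, 2, 3, 4, 5, 5, 5, 5]
```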
}, { "figure_ref": [], "heading": "Specifying agent implementations", "publication_ref": [], "table_ref": [], "text": "We will consider a physical agent as consisting of an architecture and a program. The architecture is responsible for interfacing between the program and the environment, and for running the program itself. With each architecture M, we associate a finite programming language L_M, which is just the set of all programs runnable by the architecture. An agent program is a program l ∈ L_M that takes a percept as input and has an internal state drawn from a set I with initial state i_0. (The initial internal state depends on the program l, but we will usually suppress this argument.) The set of possible internal state histories is ℐ_T = {I_T : T → I}. The prefix of an internal state history I_T ∈ ℐ_T up to time t is denoted I^t and is the projection of I_T on [0..t].
Definition 3 An architecture M is a fixed interpreter for an agent program that runs the program for a single time step, updating its internal state and generating an action:
M : L_M × I × O → I × A where ⟨I_T(t + 1), A_T(t)⟩ = M(l, I_T(t), O_T(t))
Thus, the architecture generates a stream of actions according to the dictates of the program. Because of the physical properties of the architecture, running the program for a single time step results in the execution of only a finite number of instructions. The program may often fail to reach a "decision" in that time step, and as a result the action produced by the architecture may be null (or the same as the previous action, depending on the program design)." }, { "figure_ref": [], "heading": "Relating agent specifications and implementations", "publication_ref": [], "table_ref": [], "text": "We can now relate agent programs to the corresponding agent functions. We will say that an agent program l running on a machine M implements the agent function Agent(l, M). The agent function is constructed in the following definition by specifying the action sequences produced by l running on M for all possible percept sequences. Note the importance of the "Markovian" construction using the internal state of the agent to ensure that actions can only be based on the past, not the future.
Definition 4 A program l running on M implements the agent function f = Agent(l, M), defined as follows. For any environment E = (X, f_e, f_p),
f(O^t) = A_T(t) where
⟨I_T(t + 1), A_T(t)⟩ = M(l, I_T(t), O_T(t))
O_T(t) = f_p(X_T(t))
X_T(t + 1) = f_e(A_T(t), X_T(t))
X_T(0) = X_0
I_T(0) = i_0
Although every program l induces a corresponding agent function Agent(l, M), the action that follows a given percept is not necessarily the agent's "response" to that percept; because of the delay incurred by deliberation, it may only reflect percepts occurring much earlier in the sequence. Furthermore, it is not possible to map every agent function to an implementation l ∈ L_M. We can define the subset of agent functions f that are implementable on a given architecture M and language L_M:
Feasible(M) = {f | ∃ l ∈ L_M, f = Agent(l, M)}
Feasibility is related to, but clearly distinct from, the notion of computability. Computability refers to the existence of a program that eventually returns the output specified by a function, whereas feasibility refers to the production of the output at the appropriate point in time. The set of feasible agent functions is therefore much smaller than the set of computable agent functions." 
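The distinction between a program and the agent function it induces can be seen in a few lines of code. In the sketch below (all names invented for illustration), the architecture M runs the program for exactly one step per percept; a program that needs several steps of deliberation emits null actions in the meantime, so the induced agent function lags behind the percept stream:

```python
def M(program, internal_state, percept):
    # One bounded step: <I_T(t+1), A_T(t)> = M(l, I_T(t), O_T(t)).
    return program(internal_state, percept)

def agent_function(program, i0, percept_seq):
    """The induced Agent(l, M): replay the whole percept prefix through M."""
    state, action = i0, None
    for o in percept_seq:
        state, action = M(program, state, o)
    return action

def slow_program(state, percept):
    # Needs three time steps of "deliberation" before committing to an action.
    counter, pending = state
    if pending is None:
        pending = percept                     # start deliberating about this percept
    if counter < 2:
        return (counter + 1, pending), None   # still thinking: null action
    return (0, None), f"act-on-{pending}"

print([agent_function(slow_program, (0, None), [1, 2, 3][:k]) for k in (1, 2, 3)])
# -> [None, None, 'act-on-1']: the action lags the percept that triggered it
```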
}, { "figure_ref": [], "heading": "Performance measures for agents", "publication_ref": [], "table_ref": [], "text": "To evaluate an agent's performance in the world, we define a real-valued utility function U on state histories:
U : 𝒳_T → ℝ
The utility function should be seen as external to the agent and its environment. It defines the problem to be solved by the designer of the agent. Some agent designs may incorporate an explicit representation of the utility function, but this is by no means required. We will use the term task environment to denote the combination of an environment and a utility function.
Recall that the agent's actions drive the environment E through a particular sequence of states in accordance with the function effects(f, E). We can define the value of an agent function f in the environment E as the utility of the state history it generates:
V(f, E) = U(effects(f, E))
If the designer has a set ℰ of environments with a probability distribution p over them, instead of a single environment E, then the value of the agent in ℰ is defined as the expected value over the elements of ℰ. By a slight abuse of notation,
V(f, ℰ) = Σ_{E∈ℰ} p(E) V(f, E)
We can assign a value V(l, M, E) to a program l executed by the architecture M in the environment E simply by looking at the effect of the agent function implemented by the program:
V(l, M, E) = V(Agent(l, M), E) = U(effects(Agent(l, M), E))
As above, we can extend this to a set of possible environments as follows:
V(l, M, ℰ) = Σ_{E∈ℰ} p(E) V(l, M, E)" }, { "figure_ref": [], "heading": "Perfect rationality and bounded optimality", "publication_ref": [ "b13", "b19" ], "table_ref": [], "text": "As discussed in Section 2, a perfectly rational agent selects the action that maximizes its expected utility, given the percepts so far. In our framework, this amounts to an agent function that maximizes V(f, ℰ) over all possible agent functions.
Definition 5 A perfectly rational agent for a set ℰ of environments has an agent function f_opt such that
f_opt = argmax_f V(f, ℰ)
This definition is a persuasive specification of an optimal agent function for a given set of environments, and underlies several recent projects in intelligent agent design (Dean & Wellman, 1991; Doyle, 1988; Hansson & Mayer, 1989). A direct implementation of this specification, which ignores the delay incurred by deliberation, does not yield a reasonable solution to our problem: the calculation of expected utilities takes time for any real agent. In terms of our simple formal description of agents introduced above, it is easy to see where the difficulty has arisen. In designing the agent program, logicists and decision theorists have concentrated on specifying an optimal agent function f_opt in order to guarantee the selection of the best action history. The function f_opt is independent of the architecture M.
Unfortunately, no real program in L_M implements this function in a non-trivial environment, because optimal actions cannot usually be computed before the next percept arrives. That is, quite frequently, f_opt ∉ Feasible(M).
Suppose the environment consists of games of chess under tournament rules against some population of human grandmasters, and suppose M is some standard personal computer. Then f_opt describes an agent that always plays in such a way as to maximize its total expected points against the opposition, where the maximization is over the moves it makes. We claim that no possible program can play this way. 
It is quite possible, using depth-first alpha-beta search to termination, to execute the program that chooses (say) the optimal minimax move in each situation, but the agent function induced by this program is not the same as f_opt. In particular, it ignores such percepts as the dropping of its flag indicating a loss on time.
The trouble with the perfect rationality definition arose because of unconstrained optimization over the space of f's in the determination of f_opt, without regard to feasibility. (Similarly, metalevel rationality assumes unconstrained optimization over the space of deliberations.) To escape this quandary, we propose a machine-dependent standard of rationality, in which we maximize V over the implementable set of agent functions Feasible(M). That is, we impose optimality constraints on programs rather than on agent functions or deliberations.
Definition 6 A bounded-optimal agent with architecture M for a set ℰ of environments has an agent program l_opt such that
l_opt = argmax_{l ∈ L_M} V(l, M, ℰ)
We can see immediately that this specification avoids the most obvious problems with Type I and Type II rationality. Consider our chess example, and suppose the computer has a total program memory of 8 megabytes. Then there are 2^(2^26) possible programs that can be represented in the machine, of which a much smaller number play legal chess. Under tournament conditions, one or more of these programs will have the best expected performance. Each is a suitable candidate for l_opt. Thus bounded optimality is, by definition, a feasible specification; moreover, a program that achieves it is highly desirable. We are not yet ready to announce the identity of l_opt for chess on an eight-megabyte PC, so we will begin with a more restricted problem." }, { "figure_ref": [], "heading": "Provably Bounded-Optimal Agents", "publication_ref": [], "table_ref": [], "text": "In order to construct a provably bounded optimal agent, we must carry out the following steps:
Specify the properties of the environment in which actions will be taken, and the utility function on the behaviours.
Specify a class of machines on which programs are to be run.
Propose a construction method.
Prove that the construction method succeeds in building bounded optimal agents.
The methodology is similar to the formal analysis used in the field of optimal control, which studies the design of controllers (agents) for plants (environments). In optimal control theory, a controller is viewed as an essentially instantaneous implementation of an optimal agent function. In contrast, we focus on the computation time required by the agent, and the relation between computation time and the dynamics of the environment." }, { "figure_ref": [ "fig_0" ], "heading": "Episodic, real-time task environments", "publication_ref": [], "table_ref": [], "text": "In this section, we will consider a restricted class of task environments which we call episodic environments. In an episodic task environment, the state history generated by the actions of the agent can be considered as divided into a series of episodes, each of which is terminated by an action. Let A⋆ ⊆ A be a distinguished set of actions that terminate an episode.
The utility of the complete history is given by the sum of the utilities of the individual episodes, each of which is determined in turn by the state sequence. After each A ∈ A⋆, the environment "resets" to a state chosen at random from a stationary probability distribution P_init. 
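Before developing the episodic machinery further, it is worth seeing Definition 6 at toy scale by literally enumerating L_M. In the sketch below the "programs" are lookup tables from percepts to actions, and the value function is an invented stand-in for V(l, M, ℰ); the point is only the shape of the optimization, an argmax over programs rather than over actions:

```python
import itertools

def value(program, env_seq):
    # Toy V(l, M, E): reward 1 whenever the program's table matches the
    # environment's "correct" response, 0 otherwise.
    return sum(1 for (percept, correct) in env_seq if program[percept] == correct)

def bounded_optimal(percepts, actions, envs, probs):
    best, best_v = None, float("-inf")
    # L_M = all lookup tables from percepts to actions.
    for table in itertools.product(actions, repeat=len(percepts)):
        program = dict(zip(percepts, table))
        v = sum(p * value(program, env) for env, p in zip(envs, probs))
        if v > best_v:
            best, best_v = program, v
    return best, best_v

envs = [[(0, "a"), (1, "b")], [(0, "a"), (1, "a")]]
print(bounded_optimal([0, 1], ["a", "b"], envs, [0.7, 0.3]))
# -> ({0: 'a', 1: 'b'}, 1.7)
```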
A real-time task environment is one in which the utility of an action depends on the time at which it is executed. Usually, this dependence will be sufficiently strong to make calculative rationality an unacceptably bad approximation to perfect rationality.
An automated mail sorter provides an illustrative example of an episodic task environment (see Figure 1). Such a machine scans handwritten or printed addresses (zipcodes) on mail pieces and dispatches them to appropriate bins. Each episode starts with the arrival of a new mail piece and terminates with the execution of the physical action recommended by the sorter: routing of the piece to a specific bin. The “configuration part” of the environment corresponds to the letter feeder side, which provides a new, randomly selected letter after the previous letter is sorted. The “value part” of the state corresponds to the state of the receiving bins, which determines the utility of the process. The aim is to maximize the accuracy of sorting while minimizing the reject percentage and avoiding jams. A jam occurs if the current piece is not routed to the appropriate bin, or rejected, before the arrival of the next piece.
We now provide formal definitions for three varieties of real-time task environments: fixed deadlines, fixed time cost and stochastic deadlines. The simplest and most commonly studied kind of real-time task environment contains a deadline at a known time. In most work on real-time systems, such deadlines are described informally and systems are built to meet the deadline. Here, we need a formal specification in order to connect the description of the deadline to the properties of agents running in deadline task environments. One might think that deadlines are part of the environment description, but in fact they are mainly realized as constraints on the utility function. One can see this by considering the opposite of a deadline, the “starter's pistol.” The two are distinguished by differing constraints on the utilities of acting before or after a specific time.
Definition 7 Fixed deadline: The task environment ⟨E, U⟩ has a fixed deadline at time t_d if the following conditions hold.
Taking an action in A^? at any time before the deadline results in the same utility:
U([E, A_1^t]) = U([E, A_2^{t_d−1} · A_1^T(t)])
where “·” denotes sequence concatenation, t ≤ t_d, A_1^T(t) ∈ A^?, and A_1^{t−1} and A_2^{t_d−1} contain no action in A^?.
Actions taken after t_d have no effect on utility:
U([E, A_1^t]) ≥ U([E, A_2^t]) if U([E, A_1^{t_d}]) ≥ U([E, A_2^{t_d}])
Definition 8 Fixed time cost: The task environment ⟨E, U⟩ has a fixed time cost c if, for any two action histories A_1^{t_1} and A_2^{t_2} such that
(1) A_1^T(t_1) ∈ A^? and A_2^T(t_2) = A_1^T(t_1)
(2) A_1^{t_1−1} and A_2^{t_2−1} contain no action in A^?
the utilities differ by the difference in time cost:
U([E, A_2^{t_2}]) = U([E, A_1^{t_1}]) − c(t_2 − t_1)
Strictly speaking, there are no task environments with fixed time cost. 
Utility values have a finite range, so one cannot continue incurring time costs indefinitely. For reasonably short times and reasonably small costs, a linear utility penalty is a useful approximation." }, { "figure_ref": [], "heading": "Stochastic deadlines", "publication_ref": [ "b9" ], "table_ref": [], "text": "While fixed-deadline and fixed-cost task environments occur frequently in the design of real-time systems, uncertainty about the time-dependence of the utility function is more common. It also turns out to be more interesting, as we see below.
A stochastic deadline is represented by uncertainty concerning the time of occurrence of a fixed deadline. In other words, the agent has a probability distribution p_d for the deadline time t_d. We assume that the deadline must come eventually, so that Σ_{t∈T} p_d(t) = 1. We also define the cumulative deadline distribution P_d.
If the deadline does not occur at a known time, then we need to distinguish between two cases:
The agent receives a percept, called a herald (Dean & Boddy, 1988), which announces an impending deadline. We model this using a distinguished percept O_d:
O^T(t_d) = O_d
If the agent responds immediately, then it “meets the deadline.”
No such percept is available, in which case the agent is walking blindfolded towards the utility cliff. By deliberating further, the agent risks missing the deadline but may improve its decision quality. An example familiar to most readers is that of deciding whether to publish a paper in its current form, or to embellish it further and risk being “scooped.” We do not treat this case in the current paper.
Formally, the stochastic deadline case is similar to the fixed deadline case, except that t_d is drawn from the distribution p_d. The utility of executing an action history prefix A^t in E is the expectation of the utilities of that state history prefix over the possible deadline times.
Definition 9 Stochastic deadline: A task environment class ⟨𝓔, U⟩ of fixed-deadline task environments has a stochastic deadline distributed according to p_d if, for any action history prefix A^t,
U([E, A^t]) = Σ_{t′∈T} p_d(t′) U([E_{t′}, A^t])
where ⟨E_{t′}, U⟩ is a task environment in ⟨𝓔, U⟩ with a fixed deadline at t′.
The mail sorter example is well described by a stochastic deadline. The time between the arrival of mail pieces at the image processing station is distributed according to a density function p_d, which will usually be Poisson." }, { "figure_ref": [], "heading": "Agent programs and agent architecture", "publication_ref": [], "table_ref": [], "text": "We consider simple agent programs for episodic task environments, constructed from elements of a set R = {r_1, ..., r_n} of decision procedures or rules. Each decision procedure recommends (but does not execute) an action A_i ∈ A^?, and an agent program is a fixed sequence of decision procedures. For our purposes, a decision procedure is a black box with two parameters:
a run time t_i ≥ 0, which is an integer that represents the time taken by the procedure to compute an action.
a quality q_i ≥ 0, which is a real number. This gives the expected reward resulting from executing its action A_i at the start of an episode:
q_i = U([E, A_i])   (1)
Let M_J denote an agent architecture that executes decision procedures in the language J. Let t_M denote the maximum runtime of the decision procedures that can be accommodated in M. For example, if the runtime of a feedforward neural network is proportional to its size, then t_M will be the runtime of the largest neural network that fits in M.
The architecture M executes an agent program s = s_1 ... s_m by running each decision procedure in turn, providing the same input to each as obtained from the initial percept. When a deadline arrives (at a fixed time t_d, or heralded by the percept O_d), or when the entire sequence has been completed, the agent selects the action recommended by the highest-quality procedure it has executed:
M(s, I^T(t_d), O^T(t_d)) = ⟨i_0, action(I^T(t_d))⟩
M(s, I^T(t_s), O^T(t_s)) = ⟨i_0, action(I^T(t_s))⟩ where t_s = Σ_{s_i∈s} t_i
M(s, I^T(t), O_d) = ⟨i_0, action(I^T(t))⟩   (2)
where M updates the agent's internal state history I^T(t) such that action(I^T(t)) is the action recommended by a completed decision procedure with the highest quality. When this action is executed, the internal state of the agent is re-initialized to i_0. This agent design works in all three of the task environment categories described above.
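The execution regime of Equation 2 is just a loop with an interrupt check. A minimal sketch (ours; rules are assumed to be given as (quality, runtime, action) triples, and `deadline` plays the role of t_d or the arrival time of the herald O_d):

```python
def run_sequence(rules, deadline=None):
    """Execute decision procedures in turn, as in Equation 2.  Returns the
    action recommended by the highest-quality procedure that completes
    before `deadline`; procedures interrupted mid-run contribute nothing.
    `deadline=None` means no deadline: the whole sequence runs."""
    clock = 0.0
    best_q, best_action = 0.0, None            # i_0: no recommendation yet
    for q, t, action in rules:
        if deadline is not None and clock + t > deadline:
            break                              # would not finish in time
        clock += t
        if q > best_q:                         # keep best completed result
            best_q, best_action = q, action
    return best_action
```

Calling run_sequence(rules) with no deadline corresponds to the fixed-time-cost regime, where the entire sequence always runs to completion.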
Next we derive the value V(s, M, E) of an agent program s in environment E running on M for the three real-time regimes and show how to construct bounded optimal agents for these task environments." }, { "figure_ref": [], "heading": "Bounded optimality with fixed deadlines", "publication_ref": [], "table_ref": [], "text": "From Equation 2, we know that the agent picks the action in A^? recommended by the decision procedure r with the highest quality that is executed before the deadline t_d arrives.
Let s_1 ... s_j be the longest prefix of the program s such that Σ_{i=1}^{j} t_i ≤ t_d. From Definition 7 and Equation 1, it follows that
V(s, M, E) = Q_j   (3)
where Q_i = max{q_1, ..., q_i}. Given this expression for the value of the agent program, we can easily show the following:
Theorem 1 Let r = argmax_{r_i∈R, t_i≤t_d} q_i. The singleton sequence r is a bounded optimal program for M in an episodic task environment with a known deadline t_d.
That is, the best program is the single decision procedure of maximum quality whose runtime is less than the deadline." }, { "figure_ref": [], "heading": "Bounded optimality with fixed time cost", "publication_ref": [], "table_ref": [], "text": "From Equation 2, we know that the agent picks the action in A^? recommended by the best decision procedure in the sequence, since M runs the entire sequence s = s_1 ... s_m when there is no deadline. From Definition 8 and Equation 1, we have
V(s, M, E) = Q_m − c Σ_{i=1}^{m} t_i   (4)
Given this expression for the value of the agent program, we can easily show the following:
Theorem 2 Let r = argmax_{r_i∈R} (q_i − c t_i). The singleton sequence r is a bounded optimal program for M in an episodic task environment with a fixed time cost c. That is, the optimal program is the single decision procedure whose quality, net of time cost, is highest." }, { "figure_ref": [], "heading": "Bounded optimality with stochastic deadlines", "publication_ref": [], "table_ref": [], "text": "With a stochastic deadline distributed according to p_d, the value of an agent program s = s_1 ... s_m is an expectation. From Definition 9, we can calculate this as Σ_{t∈T} p_d(t) V(s, M, E_t), where ⟨E_t, U⟩ is a task environment with a fixed deadline at t. 
After substituting for V(s, M, E_t) from Equation 3, this expression simplifies to a summation, over the procedures in the sequence, of the probability of interruption after the i-th procedure in the sequence multiplied by the quality of the best completed decision procedure:
V(s) ≡ V(s, M, E) = Σ_{i=1}^{m} [P_d(Σ_{j=1}^{i+1} t_j) − P_d(Σ_{j=1}^{i} t_j)] Q_i   (5)
where P_d(t) = ∫_{−∞}^{t} p_d(t′) dt′ and P_d(t) = 1 for t ≥ Σ_{i=1}^{m} t_i.
A simple example serves to illustrate the value function. Consider R = {r_1, r_2, r_3}. The rule r_1 has a quality of 0.2 and needs 2 seconds to run: we will represent this by r_1 = (0.2, 2). The other rules are r_2 = (0.5, 5), r_3 = (0.7, 7). The deadline distribution function p_d is a uniform distribution over 0 to 10 seconds. The value of the sequence r_1 r_2 r_3 is
V(r_1 r_2 r_3) = [0.7 − 0.2] · 0.2 + [1 − 0.7] · 0.5 + [1 − 1] · 0.7 = 0.25
A geometric intuition is given by the notion of a performance profile:
Definition 10 Performance profile: For a sequence s, the performance profile Q_s(t) gives the quality of the action returned if the agent is interrupted at t:
Q_s(t) = max{q_i : Σ_{j=1}^{i} t_j ≤ t}
For a uniform deadline density function, the value of a sequence is proportional to the area under the performance profile up to the last possible interrupt time. Note that the height of the profile during the interval of length t_i while rule i is running is the quality of the best of the previous rules.
From Definition 10, we have the following obvious property:
Lemma 1 The performance profile of any sequence is monotonically nondecreasing.
It is also the case that a sequence with higher quality decisions at all times is a better sequence:
Lemma 2 If ∀t. Q_{s_1}(t) ≥ Q_{s_2}(t), then V(s_1) ≥ V(s_2).
In this case we say that Q_{s_1} dominates Q_{s_2}. We can use the idea of performance profiles to establish some useful properties of optimal sequences.
Lemma 3 There exists an optimal sequence that is sorted in increasing order of q's.
Without Lemma 3, there are Σ_{i=1}^{n} C(n, i) i! possible sequences to consider. The ordering constraint eliminates all but 2^n sequences. It also means that in proofs of properties of sequences, we now need consider only ordered sequences. In addition, we can replace Q_i in Equation 5 by q_i.
The following lemma establishes that a sequence can always be improved by the addition of a better rule at the end:
Lemma 4 For every sequence s = s_1 ... s_m sorted in increasing order of quality, and single step z with q_z ≥ q_{s_m}, V(sz) ≥ V(s).
Corollary 1 There exists an optimal sequence ending with the highest-quality rule in R.
The following lemma reflects the obvious intuition that if one can get a better result in less time, there's no point spending more time to get a worse result:
Lemma 5 There exists an optimal sequence whose rules are in nondecreasing order of t_i.
We now apply these preparatory results to derive algorithms that construct bounded optimal programs for various deadline distributions.
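Equation 5 is straightforward to evaluate mechanically, which is useful for checking the constructions that follow. A minimal sketch (ours; sequences are assumed to be lists of (quality, runtime) pairs and P_d a cumulative distribution function clamped at 1):

```python
def value(seq, P_d):
    """Evaluate Equation 5 for a rule sequence `seq` of (quality, runtime)
    pairs, executed in order, under cumulative deadline distribution `P_d`.
    The i-th term weights Q_i = max{q_1..q_i} by the probability that the
    deadline interrupts rule i+1 (or, for the last rule, never arrives)."""
    v, elapsed, best_q = 0.0, 0.0, 0.0
    for i, (q, t) in enumerate(seq):
        elapsed += t
        best_q = max(best_q, q)
        if i + 1 < len(seq):
            upper = P_d(elapsed + seq[i + 1][1])
        else:
            upper = 1.0        # P_d taken as 1 beyond the whole sequence
        v += (upper - P_d(elapsed)) * best_q
    return v

# The worked example above: uniform deadline on [0, 10] seconds.
P_uniform = lambda t: min(t / 10.0, 1.0)
r1, r2, r3 = (0.2, 2), (0.5, 5), (0.7, 7)
print(value([r1, r2, r3], P_uniform))   # 0.25, matching the text
```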
" }, { "figure_ref": [], "heading": "General distributions", "publication_ref": [ "b3" ], "table_ref": [], "text": "For a general deadline distribution, the dynamic programming method can be used to obtain an optimal sequence of decision rules in pseudo-polynomial time. We construct an optimal sequence by using the definition of V(s, M, E) in Equation 5. Optimal sequences generated by the methods are ordered by q_i, in accordance with Lemma 3.
We construct the table S(i, t), where each entry in the table is the highest value of any sequence that ends with rule r_i at time t. We assume the rule indices are arranged in increasing order of quality, and t ranges from the start time 0 to the end time L = Σ_{r_i∈R} t_i. The update rule is:
S(i, t) = max_{k∈[0...i−1]} [ S(k, t − t_i) + (q_i − q_k)(1 − P_d(t)) ]
with boundary condition S(i, 0) = 0 for each rule i and S(0, t) = 0 for each time t.
From Corollary 1, we can read off the best sequence from the highest value in row n of the matrix S.
Theorem 3 The DP algorithm computes an optimal sequence in time O(n^2 L) where n is the number of decision procedures in R.
The dependence on L in the time complexity of the DP algorithm means that the algorithm is not polynomial in the input size. Using standard rounding and scaling methods, however, a fully polynomial approximation scheme can be constructed. Although we do not have a hardness proof for the problem, John Binder (1994) has shown that if the deadline distribution is used as a constant-time oracle for finding values of P_d(t), any algorithm will require an exponential number of calls to the oracle in the worst case.
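The update rule translates directly into a table-filling procedure. A sketch (ours; it assumes integer runtimes and a dummy rule 0 of zero quality and zero runtime, and recovers the sequence with backpointers):

```python
def dp_optimal_sequence(rules, P_d):
    """Pseudo-polynomial DP for a general deadline distribution.  `rules`
    is a list of (quality, runtime) pairs with integer runtimes.  S[i][t]
    is the best value of any sequence ending with rule i at time t, via
    S(i,t) = max_k [S(k, t - t_i) + (q_i - q_k)(1 - P_d(t))]."""
    rules = [(0.0, 0)] + sorted(rules)       # rule 0 = dummy; sort by quality
    n, L = len(rules) - 1, sum(t for _, t in rules)
    NEG = float("-inf")
    S = [[NEG] * (L + 1) for _ in range(n + 1)]
    S[0] = [0.0] * (L + 1)                   # boundary: empty sequence
    back = {}
    for i in range(1, n + 1):
        q_i, t_i = rules[i]
        for t in range(t_i, L + 1):
            for k in range(i):               # k in [0 .. i-1]
                if S[k][t - t_i] == NEG:
                    continue
                v = S[k][t - t_i] + (q_i - rules[k][0]) * (1.0 - P_d(t))
                if v > S[i][t]:
                    S[i][t], back[(i, t)] = v, k
    # By Corollary 1, read off the best sequence from row n.
    t = max(range(L + 1), key=lambda t: S[n][t])
    seq, i = [], n
    while i != 0:
        seq.append(rules[i])
        i, t = back[(i, t)], t - rules[i][1]
    return list(reversed(seq)), max(S[n])
```

On the three-rule example above with the uniform [0, 10] deadline, this returns a sequence of value 0.25, agreeing with the direct evaluation of Equation 5.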
" }, { "figure_ref": [], "heading": "Long uniform distributions", "publication_ref": [], "table_ref": [], "text": "If the deadline is uniformly distributed over a time interval greater than the sum of the running times of the rules, we will call the distribution a long uniform distribution. Consider the rule sequence s = s_1 ... s_m drawn from the rule set R. With a long uniform distribution, the probability that the deadline arrives during rule s_i of the sequence s is independent of the time at which s_i starts. This permits a simpler form of Equation 5:
V(s, M, E) = Σ_{i=1}^{m−1} P_d(t_{i+1}) q_i + q_m (1 − Σ_{i=1}^{m} P_d(t_i))   (6)
To derive an optimal sequence under a long uniform distribution, we obtain a recursive specification of the value of a sequence, with a ∈ R and s = s_1 ... s_m being some sequence in R:
V(as, M, E) = V(s, M, E) + q_a P_d(t_1) − q_m P_d(t_a)   (7)
This allows us to define a dynamic programming scheme for calculating an optimal sequence using a state function S(i, j) denoting the highest value of a rule sequence that starts with rule i and ends in rule j. From Lemma 3 and Equation 7, the update rule is:
S(i, j) = max_{i<k≤j} [ S(k, j) + P_d(t_k) q_i − P_d(t_i) q_j ]   (8)
with boundary condition
S(i, i) = (1 − P_d(t_i)) q_i   (9)
From Corollary 1, we know that an optimal sequence for the long uniform distribution ends in r_n, the rule with the highest quality in R. Thus, we only need to examine S(i, n), 1 ≤ i ≤ n. Each entry requires O(n) computation, and there are n entries to compute. Thus, the optimal sequence for the long uniform case can be calculated in O(n^2).
Theorem 4 An optimal sequence of decision procedures for a long uniform deadline distribution can be determined in O(n^2) time where n is the number of decision procedures in R." }, { "figure_ref": [], "heading": "Short uniform distributions", "publication_ref": [], "table_ref": [], "text": "When Σ_{i=1}^{n} P_d(t_i) > 1, for a uniform deadline distribution P_d, we call it short. This means that some sequences are longer than the last possible deadline time, and therefore some rules in those sequences have no possibility of executing before the deadline. For such sequences, we cannot use Equation 7 to calculate V(s). However, any such sequence can be truncated by removing all rules that would complete execution after the last possible deadline. The value of the sequence is unaffected by truncation, and for truncated sequences the use of Equation 7 is justified. Furthermore, there is an optimal sequence that is a truncated sequence.
Since the update rule (8) correctly computes S(i, j) for truncated sequences, we can use it with short uniform distributions provided we add a check to ensure that the sequences considered are truncated. Unlike the long uniform case, however, the identity of the last rule in an optimal sequence is unknown, so we need to compute all n^2 entries in the S(i, j) table. Each entry computation takes O(n) time, thus the time to compute an optimal sequence is O(n^3).
Theorem 5 An optimal sequence of decision procedures for a short uniform deadline distribution can be determined in O(n^3) time where n is the number of decision procedures in R." }, { "figure_ref": [], "heading": "Exponential distributions", "publication_ref": [], "table_ref": [], "text": "For an exponential distribution, P_d(t) = 1 − e^{−λt}. Exponential distributions allow an optimal sequence to be computed in polynomial time. Let p_i stand for the probability that rule i is interrupted, assuming it starts at 0. Then p_i = P_d(t_i) = 1 − e^{−λ t_i}. For the exponential distribution, V(s, M, E) simplifies out as:
V(s, M, E) = Σ_{i=1}^{m−1} [Π_{j=1}^{i} (1 − p_j)] p_{i+1} q_i + [Π_{j=1}^{m} (1 − p_j)] q_m
This yields a simple recursive specification of the value V(as, M, E) of a sequence that begins with the rule a:
V(as, M, E) = (1 − p_a) p_1 q_a + (1 − p_a) V(s, M, E)
We will use the state function S(i, j) which represents the highest value of any rule sequence starting with i and ending in j:
S(i, j) = max_{i<k≤j} [ (1 − p_i) p_k q_i + (1 − p_i) S(k, j) ]
with boundary condition S(i, i) = q_i (1 − p_i). For any given j, S(i, j) can be calculated in O(n^2). From Corollary 1, we know that there is an optimal sequence whose last element is the highest-valued rule in R.
Theorem 6 An optimal sequence of decision procedures for an exponentially distributed stochastic deadline can be determined in O(n^2) time where n is the number of decision procedures in R.
The proof is similar to the long uniform distribution case.
" }, { "figure_ref": [ "fig_3", "fig_4", "fig_5", "fig_5" ], "heading": "Simulation results for a mail-sorter", "publication_ref": [ "b4" ], "table_ref": [], "text": "The preceding results provide a set of algorithms for optimizing the construction of an agent program for a variety of general task environment classes. In this section, we illustrate these results and the possible gains that can be realized in a specific task environment, namely, a simulated mail-sorter.
First, let us be more precise about the utility function U on episodes. There are four possible outcomes; the utility of outcome i is u_i.
1. The zipcode is successfully read and the letter is sent to the correct bin for delivery.
2. The zipcode is misread and the letter goes to the wrong bin.
3. The letter is sent to the reject bin.
4. The next letter arrives before the recognizer has finished, and there is a jam.
Since letter arrival is heralded, jams cannot occur with the machine architecture given in Equation 2. Without loss of generality, we set u_1 = 1.0 and u_2 = 0.0. If the probability of a rule recommending a correct destination bin is p_i, then q_i = p_i u_1 + (1 − p_i) u_2 = p_i. We assume that u_2 ≤ u_3, hence there is a threshold probability below which the letter should be sent to the reject bin instead. We will therefore include in the rule set R a rule r_reject that has zero runtime and recommends rejection. The sequence construction algorithm will then automatically exclude rules with quality lower than q_reject = u_3. The overall utility for an episode is chosen to be a linear combination of the quality of sorting (q_i), the probability of rejection or the rejection rate (given by P_d(t_1), where t_1 is the runtime of the first non-reject rule executed), and the speed of sorting (measured by the arrival time mean).
The agent program in (Boser et al., 1992) uses a single neural network on a chip. We show that under a variety of conditions an optimized sequence of networks can do significantly better than any single network in terms of throughput or accuracy. We examine the following experimental conditions:
We assume that a network that executes in time t has a recognition accuracy p that depends on t. We consider p = 1 − e^{−λt}. The particular choice of λ is irrelevant because the scale chosen for t is arbitrary. We choose λ = 0.9, for convenience (Figure 3(a)). We include r_reject with q_reject = u_3 and t_reject = 0. We consider arrival time distributions that are Poisson with varying means.
We compare four program types: (a) the bounded optimal (BO) sequence; (b) the best single rule; (c) the 50% rule: the rule whose execution time guarantees that it will complete in 50% of cases; (d) the 90% rule: the rule whose execution time guarantees that it will complete in 90% of cases.
In the last three cases, we add r_reject as an initial step; the BO sequence will include it automatically. We measure the utility per second as a function of the mean arrival rate (Figure 4). This shows that there is an optimal setting of the sorting machinery at 6 letters per minute (inter-arrival time = 10 seconds) for the bounded optimal program, given that we have fixed λ at 0.9. Finally, we investigate the effect of the variance of the arrival time on the relative performance of the four program types. For this purpose, we use a uniform distribution centered around 20 seconds but with different widths to vary the variance without affecting the mean (Figure 5).
We notice several interesting things about these results:
The policy of choosing a rule with a 90% probability of completion performs poorly for rapid arrival rates (inter-arrival means of 3 seconds or less), but catches up with the performance of the best single rule for slower arrival rates (means greater than 4 seconds). This is an artifact of the exponential accuracy profile: for any λ > 0.5, the difference in quality of the rules with run times greater than 6 seconds is quite small.
The policy of choosing a rule with a 50% probability of completion fares as well as the best single rule for very high arrival rates (inter-arrival means of 2 seconds or less), but rapidly diverges from it thereafter, performing far worse for arrival time means greater than 5 seconds.
Both the best sequence and the best single rule give their best overall performance at an arrival rate of around 6 letters per minute. The performance advantage of the optimal sequence over the best single rule is about 7% at this arrival rate. It should be noted that this is a significant performance advantage that is obtainable with no extra computational resources.
For slower arrival rates (inter-arrival means of 7 seconds or more), the difference between the performance of the best rule and the best sequence arises from the decreased rejection rate of the best sequence. With the exponential accuracy profile (λ ≥ 0.5) the advantage of running a rule with a shorter completion time ahead of a longer rule is the ability to reduce the probability of rejecting a letter.
For high arrival rates (inter-arrival times of 1 to 4 seconds), it is useful to have a few short rules instead of a longer single rule.
Figure 5 shows that the best sequence performs better than the best single rule as the variance of the arrival time increases.⁵ The performance of the optimal sequence also appears to be largely unaffected by variance. This is exactly the behaviour we expect to observe: the ability to run a sequence of rules instead of committing to a single one gives it robustness in the face of increasing variance. Since realistic environments can involve unexpected demands of many kinds, the possession of a variety of default behaviours of graded sophistication would seem to be an optimal design choice for a bounded agent.
5. The performance of the 50% rule is flat because the uniform distributions used in this experiment have fixed mean and are symmetric, so that the 50% rule is always the rule that runs for 20 seconds. The 90% rule changes with the variance, and the curve exhibits some discretization effects. These could be eliminated using a finer-grained set of rules.
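The experimental setup is easy to reproduce in miniature. A Monte-Carlo sketch (ours; it simplifies the utility to recognition accuracy with an assumed reject floor u_3 = 0.5, uses p = 1 − e^{−0.9t} for rule qualities, and exponential inter-arrival times as a stand-in for the heralded Poisson arrival process):

```python
import math, random

LAM = 0.9                                     # accuracy parameter, as above
RULES = [(1 - math.exp(-LAM * t), t) for t in range(1, 16)]  # (quality, runtime)
U_REJECT = 0.5                                # assumed utility u_3 of rejection

def sort_one(seq, mean_arrival):
    """One episode: run `seq` until the next letter arrives (the herald),
    then act on the best completed recommendation; r_reject (zero runtime)
    provides the fallback, so jams never occur."""
    deadline = random.expovariate(1.0 / mean_arrival)
    best, clock = U_REJECT, 0.0
    for q, t in seq:
        if clock + t > deadline:
            break
        clock += t
        best = max(best, q)
    return best

def utility_per_second(seq, mean_arrival, episodes=20000):
    total = sum(sort_one(seq, mean_arrival) for _ in range(episodes))
    return total / (episodes * mean_arrival)

best_single = max((utility_per_second([r], 10.0), r) for r in RULES)
short_to_long = utility_per_second([RULES[1], RULES[4], RULES[9]], 10.0)
print(best_single[0], short_to_long)          # compare the two policies
```

The optimized sequences reported above were produced by the constructions of Section 4; the hand-picked short-to-long sequence here merely illustrates why prefixing shorter rules reduces the rejection rate.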
" }, { "figure_ref": [], "heading": "Learning Approximately Bounded-Optimal Programs", "publication_ref": [ "b21" ], "table_ref": [], "text": "The above derivations assume that a suitable rule set R is available ab initio, with correct qualities q_i and runtimes t_i, and that the deadline distribution is known. In this section, we study ways in which some of this information can be learned, and the implications of this for the bounded optimality of the resulting system. We will concentrate on learning rules and their qualities, leaving runtimes and deadline distributions for future work.
The basic idea is that the learning algorithms will converge, over time, to a set of optimal components: the most accurate rules and the most accurate quality estimates for them. As this happens, the value of the agent constructed from the rules, using the quality estimates, converges to the value of l_opt. Thus there are two sources of suboptimality in the learned agent:
The rules in R may not be the best possible rules: they may recommend actions that are of lower utility than those that would be recommended by some other rules.
There may be errors in estimating the expected utility of the rule. This can cause the algorithms given above to construct suboptimal sequences, even if the best rules are available.
Our notional method for constructing bounded optimal agents (1) learns sets of individual decision procedures from episodic interactions, and (2) arranges them in a sequence using one of the algorithms described earlier so that the performance of an agent using the sequence is at least as good as that of any other such agent. We assume a parameterized learning algorithm L_{J,k} that will be used to learn one rule for each possible runtime k ∈ {1, ..., t_M}. Since there is never a need to include two rules with the same runtime in R, this obviates the need to consider the entire rule language J in the optimization process.
Our setting places somewhat unusual requirements on the learning algorithm. Like most learning algorithms, L_{J,k} works by observing a collection T of training episodes in E, including the utility obtained for each episode. We do not, however, make any assumptions about the form of the correct decision rule. Instead, we make assumptions about the hypotheses, namely that they come from some finite language J_k, the set of programs in J of complexity at most k. 
This setting has been called the agnostic learning setting by Kearns, Schapire and Sellie (1992), because no assumptions are made about the environment at all. It has been shown (Theorems 4 and 5 in Kearns, Schapire and Sellie, 1992) that, for some languages J, the error in the learned approximation can be bounded to within ε of the best rule in J_k that fits the examples, with probability 1 − δ. The sample size needed to guarantee these bounds is polynomial in the complexity parameter k, as well as 1/ε and 1/δ. In addition to constructing the decision procedures, L_{J,k} outputs estimates of their quality q_i. Standard Chernoff-Hoeffding bounds can be used to limit the error in the quality estimate to be within ε_q with probability 1 − δ_q. The sample size for the estimation of quality is also polynomial in 1/ε_q and 1/δ_q.
Thus the error in each agnostically learned rule is bounded to within ε of the best rule in its complexity class with probability 1 − δ. The error in the quality estimation of these rules is bounded by ε_q with probability 1 − δ_q. From these bounds, we can calculate a bound on the utility deficit in the agent program that we construct, in comparison to l_opt:
Theorem 7 Assume an architecture M_J that executes sequences of decision procedures in an agnostically learnable language J whose runtimes range over [1...t_M]. For real-time task environments with fixed time cost, fixed deadline, and stochastic deadline, we can construct a program l such that V(l_opt, M, E) ≤ V(l, M, E) + ε + 2ε_q with probability greater than 1 − m(δ + δ_q), where m is the number of decision procedures in l_opt.
Proof: We prove this theorem for the stochastic deadline regime, where the bounded optimal program is a sequence of decision procedures. The proofs for the fixed cost and fixed deadline regimes, where the bounded optimal program is a singleton, follow as a special case. Let the best decision procedures for E be the set R* = {r_1*, ..., r_n*}, and let l_opt = s_1* ... s_m* be an optimal sequence constructed from R*. Let R = {r_1, ..., r_n} be the set of decision procedures returned by the learning algorithm. With probability greater than 1 − mδ, q_i* − q_i ≤ ε for all i, where q_i refers to the true quality of r_i. The error in the estimated quality q̂_i of decision procedure r_i is also bounded: with probability greater than 1 − mδ_q, |q̂_i − q_i| ≤ ε_q for all i. Let s = s_1 ... s_m be those rules in R that come from the same runtime classes as the rules s_1* ... s_m* in R*. Then, by Equation 5, we have V(l_opt, M, E) ≤ V(s, M, E) + ε, because the error in V is a weighted average of the errors in the individual q_i. Similarly, we have |V̂(s, M, E) − V(s, M, E)| ≤ ε_q. Now suppose that the sequence construction algorithm applied to R produces a sequence l = s_1′ ... s_l′. By definition, this sequence appears to be optimal according to the estimated value function V̂. Hence V̂(l, M, E) ≥ V̂(s, M, E). As before, we can bound the error on the estimated value: |V̂(l, M, E) − V(l, M, E)| ≤ ε_q. Combining the above inequalities, we have V(l_opt, M, E) ≤ V(l, M, E) + ε + 2ε_q. □
Although the theorem has practical applications, it is mainly intended as an illustration of how a learning procedure can converge on a bounded optimal configuration. With some additional work, more general error bounds can be derived for the case in which the rule execution times t_i and the real-time utility variation (time cost, fixed deadline, or deadline distribution) are all estimated from the training episodes. We can also obtain error bounds for the case in which the rule language J is divided up into a smaller number of coarser runtime classes, rather than the potentially huge number that we currently use.
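For the quality-estimation step, the required sample sizes follow from the standard Hoeffding bound. A small sketch (ours; episode rewards are assumed normalized to [0, 1], and `run_episode` is a stand-in for executing the rule's recommendation in a training episode):

```python
import math

def hoeffding_sample_size(eps_q, delta_q):
    """Episodes needed so that an empirical mean of [0,1]-valued rewards
    is within eps_q of the true mean with probability >= 1 - delta_q
    (two-sided Hoeffding: n >= ln(2/delta_q) / (2 * eps_q**2))."""
    return math.ceil(math.log(2.0 / delta_q) / (2.0 * eps_q ** 2))

def estimate_quality(rule, run_episode, eps_q, delta_q):
    """Estimate q_i for a decision procedure by averaging episode rewards."""
    n = hoeffding_sample_size(eps_q, delta_q)
    return sum(run_episode(rule) for _ in range(n)) / n

print(hoeffding_sample_size(0.05, 0.01))   # 1060 episodes per rule
```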
" }, { "figure_ref": [], "heading": "Asymptotic Bounded Optimality", "publication_ref": [], "table_ref": [], "text": "The strict notion of bounded optimality may be a useful philosophical landmark from which to explore artificial intelligence, but it may be too strong to allow many interesting, general results to be obtained. The same observation can be made in ordinary complexity theory: although absolute efficiency is the aim, asymptotic efficiency is the game. That a sorting algorithm is O(n log n) rather than O(n^2) is considered significant, but replacing a “multiply by 2” by a “shift-left 1 bit” is not considered a real advance. The slack allowed by the definitions of complexity classes is essential in building on earlier results, in obtaining robust results that are not restricted to specific implementations, and in analysing the complexity of algorithms that use other algorithms as subroutines. In this section, we begin by reviewing classical complexity. We then propose definitions of asymptotic bounded optimality that have some of the same advantages, and show that classical optimality is a special case of asymptotic bounded optimality. Lastly, we report on some preliminary investigations into the use of asymptotic bounded optimality as a theoretical tool in constructing universal real-time systems." }, { "figure_ref": [], "heading": "Classical complexity", "publication_ref": [], "table_ref": [], "text": "A problem, in the classical sense, is defined by a pair of predicates φ and ψ such that output z is a solution for input x if and only if φ(x) and ψ(x, z) hold. A problem instance is an input satisfying φ, and an algorithm for the problem class always terminates with an output z satisfying ψ(x, z) given an input x satisfying φ(x). Asymptotic complexity describes the growth rate of the worst-case runtime of an algorithm as a function of the input size. We can define this formally as follows. Let T_a(x) be the runtime of algorithm a on input x, and let T_a(n) be the maximum runtime of a on any input of size n. Then algorithm a has complexity O(f(n)) if
∃k, n_0 ∀n. n > n_0 ⇒ T_a(n) ≤ k f(n)
Intuitively, a classically optimal algorithm is one that has the lowest possible complexity. For the purposes of constructing an asymptotic notion of bounded optimality, it will be useful to have a definition of classical optimality that does not mention the complexity directly. This can be done as follows:
Definition 11 Classically optimal algorithm: An algorithm a is classically optimal if and only if
∃k, n_0 ∀a′, n. n > n_0 ⇒ T_a(n) ≤ k T_{a′}(n)
To relate classical complexity to our framework, we will need to define the special case of task environments in which traditional programs are appropriate. 
In such task environments, an input is provided to the program as the initial percept, and the utility function on environment histories obeys the following constraint:
Definition 12 Classical task environment: ⟨E_P, U⟩ is a classical task environment for problem P if
V(l, M, E_P) = u(T(l, M, E_P)) if l outputs a correct solution for P, and 0 otherwise,
where T(l, M, E_P) is the running time for l in E_P on M, M is a universal Turing machine, and u is some positive decreasing function.
The notion of a problem class in classical complexity theory thus corresponds to a class of classical task environments of unbounded complexity. For example, the Traveling Salesperson Problem contains instances with arbitrarily large numbers of cities." }, { "figure_ref": [], "heading": "Varieties of asymptotic bounded optimality", "publication_ref": [], "table_ref": [], "text": "The first thing we will need is a complexity measure on environments. Let n(E) be a suitable measure of the complexity of an environment. We will assume the existence of environment classes that are of unbounded complexity. Then, by analogy with the definition of classical optimality, we can define a worst-case notion of asymptotic bounded optimality (ABO).
Letting V*(l, M, n, 𝓔) be the minimum value of V(l, M, E) for all E in 𝓔 of complexity n, we have
Definition 13 Worst-case asymptotic bounded optimality: an agent program l is timewise (or spacewise) worst-case asymptotically bounded optimal in 𝓔 on M iff
∃k, n_0 ∀l′, n. n > n_0 ⇒ V*(l, kM, n, 𝓔) ≥ V*(l′, M, n, 𝓔)
where kM denotes a version of the machine M speeded up by a factor k (or with k times more memory).
In English, this means that the program is basically along the right lines if it just needs a faster (larger) machine to have worst-case behaviour as good as that of any other program in all environments.
If a probability distribution is associated with the environment class 𝓔, then we can use the expected value V(l, M, 𝓔) to define an average-case notion of ABO:
Definition 14 Average-case asymptotic bounded optimality: an agent program l is timewise (or spacewise) average-case asymptotically bounded optimal in 𝓔 on M iff
∃k ∀l′. V(l, kM, 𝓔) ≥ V(l′, M, 𝓔)
For both the worst-case and average-case definitions of ABO, we would be happy with a program that was ABO for a nontrivial environment on a nontrivial architecture M, unless k were enormous.⁶ In the rest of the paper, we will use the worst-case definition of ABO.
Almost identical results can be obtained using the average-case definition. The first observation that can be made about ABO programs is that classically optimal programs are a special case of ABO programs:⁷
6. The classical definitions allow for optimality up to a constant factor k in the runtime of the algorithms. One might wonder why we chose to use the constant factor to expand the machine capabilities, rather than to increase the time available to the program. In the context of ordinary complexity theory, the two alternatives are exactly equivalent, but in the context of general time-dependent utilities, only the former is appropriate. It would not be possible to simply “let l run k times longer,” because the programs we wish to consider control their own execution time, trading it off against solution quality. One could imagine slowing down the entire environment by a factor of k, but this is merely a less realistic version of what we propose.
7. 
This connection was suggested by Bart Selman.
Theorem 8 A program is classically optimal for a given problem P if and only if it is timewise worst-case ABO for the corresponding classical task environment class ⟨E_P, U⟩. This observation follows directly from Definitions 11, 12, and 13.
In summary, the notion of ABO will provide the same degree of theoretical robustness and machine-independence for the study of bounded systems as asymptotic complexity does for classical programs. Having set up a basic framework, we can now begin to exercise the definitions." }, { "figure_ref": [ "fig_6" ], "heading": "Universal asymptotic bounded optimality", "publication_ref": [ "b34", "b23", "b39", "b40" ], "table_ref": [], "text": "Asymptotic bounded optimality is defined with respect to a specific value function V. In constructing real-time systems, we would prefer a certain degree of independence from the temporal variation in the value function. We can achieve this by defining a family 𝓥 of value functions, differing only in their temporal variation. By this we mean that the value function preserves the preference ordering of external actions over time, with all value functions in the family having the same preference ordering.⁸ For example, in the fixed-cost regime we can vary the time cost c to generate a family of value functions; in the stochastic deadline case, we can vary the deadline distribution P_d to generate another family. Also, since each of the three regimes uses the same quality measure for actions, then the union of the three corresponding families is also a family. What we will show is that a single program, which we call a universal program, can be asymptotically bounded-optimal regardless of which value function is chosen within any particular family.
Definition 15 Universal asymptotic bounded optimality (UABO): An agent program l is UABO in environment class 𝓔 on M for the family of value functions 𝓥 iff l is ABO in 𝓔 on M for every V_i ∈ 𝓥.
A UABO program must compete with the ABO programs for every individual value function in the family. A UABO program is therefore a universal real-time solution for a given task. Do UABO programs exist? If so, how can we construct them?
It turns out that we can use the scheduling construction from (Russell & Zilberstein, 1991) to design UABO programs. This construction was designed to reduce task environments with unknown interrupt times to the case of known deadlines, and the same insight applies here. The construction requires the architecture M to provide program concatenation (e.g., the LISP prog construct), a conditional-return construct, and the null program Λ. The universal program l_U has the form of a concatenation of individual programs of increasing runtime, with an appropriate termination test after each. It can be written as
l_U = [l_0 l_1 ... l_j]
where each l_j consists of a program and a termination test. The program part in l_j is any program in L_M that is ABO in 𝓔 for a value function V_j that corresponds to a fixed deadline at t_d = 2^j ε, where ε is a time increment smaller than the execution time of any non-null program in L_M.
8. The value function must therefore be separable (Russell & Wefald, 1989), since this preservation of rank order allows a separate time cost to be defined. See chapter 9 of (Keeney & Raiffa, 1976) for a thorough discussion of time-dependent utility. 
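The doubling construction can be written down directly. A sketch (ours; `abo_for_deadline(t_d)` stands for any procedure returning a fixed-deadline ABO program, for which Theorem 1 supplies the single best rule, and `eps` is the increment ε above):

```python
def universal_program(abo_for_deadline, eps, horizon):
    """Construct l_U = [l_0, l_1, ..., l_j]: the j-th component is an ABO
    program for a fixed deadline at t_d = 2**j * eps.  At run time the
    components execute in turn; on interruption the agent returns the
    best completed result."""
    components, j = [], 0
    while (2 ** j) * eps <= horizon:
        components.append(abo_for_deadline((2 ** j) * eps))
        j += 1
    return components

# Three-rule example from the text:
RULES = [(0.2, 2), (0.5, 5), (0.7, 7)]        # (quality, runtime)
def best_rule_for(t_d):
    fits = [r for r in RULES if r[1] <= t_d]
    return max(fits) if fits else None        # None stands for the null program
print(universal_program(best_rule_for, 1, 8))
# -> [None, (0.2, 2), (0.2, 2), (0.7, 7)], matching the l_U example given later
```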
Lemma 6 If l_U is a universal program in 𝓔 for 𝓥, and l_i is ABO on M in 𝓔 for V_i ∈ 𝓥, then Q*(t, l_U, kM, n, 𝓔) dominates Q*(t, l_i, M, n, 𝓔) for k ≥ 4 max_j k_j, n > max_j n_j.
This lemma establishes that, for a small constant penalty, we can ignore the specific real-time nature of the task environment in constructing bounded optimal programs. However, we still need to deal with the issue of termination. It is not possible in general for l_U to terminate at an appropriate time without access to information concerning the time-dependence of the utility function. For example, in a fixed-time-cost task environment, the appropriate termination time depends on the value of the time cost c.
For the general case with deterministic time-dependence, we can help out l_U by supplying, for each V_i, an “aspiration level” Q*_i(t_i, l_i, M, n, 𝓔), where t_i is the time at which l_i acts. l_U terminates when it has completed an l_j such that q_j ≥ Q*_i(t_i, l_i, M, n, 𝓔). By construction, this will happen no later than t_i because of Lemma 6.
Theorem 9 In task environments with deterministic time-dependence, an l_U with a suitable aspiration level is UABO in 𝓔 on M.
With deadline heralds, the termination test is somewhat simpler and does not require any additional input to l_U.
Theorem 10 In a task environment with stochastic deadlines, l_U is UABO in 𝓔 on M if it terminates when the herald arrives.
Returning to the mail-sorting example, it is fairly easy to see that l_U (which consists of a sequence of networks, like the optimal programs for the stochastic deadline case) will be ABO in the fixed-deadline regime. It is not so obvious that it is also ABO in any particular stochastic deadline case; recall that both regimes can be considered as a single family. We have programmed a constructor function for universal programs, and applied it to the mail-sorter environment class. Varying the letter arrival distribution gives us different value functions V_i ∈ 𝓥. Figure 7 shows that l_U (on 4M) has higher throughput and accuracy than l_opt^i across the entire range of arrival distributions.
Given the existence of UABO programs, it is possible to consider the behaviour of compositions thereof. The simplest form of composition is functional composition, in which the output of one program is used as input by another. More complex, nested compositional structures can be entertained, including loops and conditionals (Zilberstein, 1993). The main issue in constructing UABO compositions is how to allocate time among the components. Provided that we can solve the time allocation problem when we know the total runtime allowed, we can use the same construction technique as used above to generate composite UABO programs, where optimality is among all possible compositions of the components. Zilberstein and Russell (1993) show that the allocation problem can be solved in linear time in the size of the composite system, provided the composition is a tree of bounded degree." }, { "figure_ref": [], "heading": "Conclusions And Further Work", "publication_ref": [ "b2" ], "table_ref": [], "text": "We examined three possible formal bases for artificial intelligence, and concluded that bounded optimality provides the most appropriate goal in constructing intelligent systems. 
We also noted that similar notions have arisen in philosophy and game theory for more or less the same reason: the mismatch between classically optimal actions and what we have called feasible behaviours, those that can be generated by an agent program running on a computing device of finite speed and size.
We showed that with careful specification of the task environment and the computing device one can design provably bounded-optimal agents. We exhibited only very simple agents, and it is likely that bounded optimality in the strict sense is a difficult goal to achieve when a larger space of agent programs is considered. More relaxed notions such as asymptotic bounded optimality (ABO) may provide more theoretically robust tools for further progress. In particular, ABO promises to yield useful results on composite agent designs, allowing us to separate the problem of designing complex ABO agents into a discrete structural problem and a continuous temporal optimization problem that is tractable in many cases. Hence, we have reason to be optimistic that artificial intelligence can be usefully characterized as the study of bounded optimality. We may speculate that provided the computing device is neither too small (so that small changes in speed or size cause significant changes in the optimal program design) nor too powerful (so that classically optimal decisions can be computed feasibly), ABO designs should be stable over reasonably wide variations in machine speed and size and in environmental complexity. The details of the optimal designs may be rather arcane, and learning processes will play a large part in their discovery; we expect that the focus of this type of research will be more on questions of convergence to optimality for various structural classes than on the end result itself.
Perhaps the most important implication, beyond the conceptual foundations of the field itself, is that research on bounded optimality applies, by design, to the practice of artificial intelligence in a way that idealized, infinite-resource models may not. We have given, by way of illustrating this definition, a bounded optimal agent: the design of a simple system consisting of sequences of decision procedures that is provably better than any other program in its class. A theorem that exhibits a bounded optimal design translates, by definition, into an agent whose actual behaviour is desirable.
There appear to be plenty of worthwhile directions in which to continue the exploration of bounded optimality. From a foundational point of view, one of the most interesting questions is how the concept applies to agents that can incorporate a learning component. (Note that in Section 5, the learning algorithm was external to the agent.) In such a case, there will not necessarily be a largely stable bounded optimal configuration if the agent program is not large enough; instead, the agent will have to adapt to a shorter-term horizon and rewrite itself as it becomes obsolete.
With results on the preservation of ABO under composition, we can start to examine much more interesting architectures than the simple production system studied above. For example, we can look at optimal search algorithms, where the algorithm is constrained to apply a metalevel decision procedure at each step to decide which node to expand, if any (Russell & Wefald, 1989). 
We can also extend the work on asymptotic bounded optimality to provide a utility-based analogue to “big-O” notation for describing the performance of agent designs, including those that are suboptimal.
In the context of computational learning theory, it is obvious that the stationarity requirement on the environment, which is necessary to satisfy the preconditions of PAC results, is too restrictive. The fact that the agent learns may have some effect on the distribution of future episodes, and little is known about learning in such cases (Aldous & Vazirani, 1990). We could also relax the deterministic and episodic requirement to allow non-immediate rewards, thereby making connections to current research on reinforcement learning.
The computation scheduling problem we examined is interesting in itself, and does not appear to have been studied in the operations research or combinatorial optimization literature. Scheduling algorithms usually deal with physical rather than computational tasks, hence the objective function usually involves summation of outputs rather than picking the best. We would like to resolve the formal question of its tractability in the general case, and also to look at cases in which the solution qualities of individual processes are interdependent (such as when one can use the results of another). Practical extensions include computation scheduling for parallel machines or multiple agents, and scheduling combinations of computational and physical (e.g., job-shop and flow-shop) processes, where objective functions are a combination of summation and maximization. The latter extension broadens the scope of applications considerably. An industrial process, such as designing and manufacturing a car, consists of both computational steps (design, logistics, factory scheduling, inspection, etc.) and physical processes (stamping, assembling, painting, etc.). One can easily imagine many other applications in real-time financial, industrial, and military contexts.
It may turn out that bounded optimality is found wanting as a theoretical framework. If this is the case, we hope that it is refuted in an interesting way, so that a better framework can be created in the process." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to acknowledge stimulating discussions with Michael Fehling, Michael Genesereth, Russ Greiner, Eric Horvitz, Henry Kautz, Daphne Koller, and Bart Selman on the subject of bounded optimality; with Dorit Hochbaum, Nimrod Megiddo, and Kevin Glazebrook on the subject of dynamic programming for scheduling problems; and with Nick Littlestone and Michael Kearns on the subject of agnostic learning. We would also like to thank the reviewers for their many constructive suggestions. Many of the early ideas on which this work is based arose in discussions with the late Eric Wefald. Thanks also to Ron Parr for his work on the uniform-distribution case, Rhonda Righter for extending the results to the exponential distribution, and Patrick Zieske for help in implementing the dynamic programming algorithm. The first author was supported by NSF grants IRI-8903146, IRI-9211512 and IRI-9058427, by a visiting fellowship from the SERC while on sabbatical in the UK, and by the NEC Research Institute. The second author was supported by NSF grant IRI-8902721." 
}, { "figure_ref": [], "heading": "", "publication_ref": [ "b34" ], "table_ref": [], "text": "Before proceeding to a statement that l U is indeed UABO, let us look at an example. Consider the simple, sequential machine architecture described earlier. Suppose we can select rules from a three-rule set with r 1 = (0:2; 2), r 2 = (0:5; 5) and r 3 = (0:7; 7). Since the shortest runtime of these rules is 2 seconds, we let = 1. Then we look at the optimal programs l 0 ; l 1 ; l 2 ; l 3 ; : : : for the xed-deadline task environments with t d = 1; 2; 4; 8; : : :. These are: l 0 = ; l 1 = r 1 ; l 2 = r 1 ; l 3 = r 3 ; : : : Hence the sequence of programs in l U is ; r 1 ; r 1 ; r 3 ; : : :]. Now consider a task environment class with a value function V i that speci es a stochastic deadline uniformly distributed over the range 0: : : 10]. For this class, l opt = r 1 r 2 is a bounded optimal sequence. 9 It turns out that l U has higher utility than l opt provided it is run on a machine that is four times faster. We can see this by plotting the two performance pro les: Q U for l U on 4M and Q opt for l opt on M. Q U dominates Q opt , as shown in Figure 6.\nTo establish that the l U construction yields UABO programs in general, we need to de ne a notion of worst-case performance pro le. Let Q (t; l; M; n; E) be the minimum value obtained by interrupting l at t, over all E in E of complexity n. We know that each l j in l U satis es the following: 8l 0 ; n n > n j ) V j (l j ; k j M; n; E) V j (l 0 ; M; n; E)\nfor constants k j , n j . The aim is to prove that 8V i 2 V 9k; n 0 8l 0 ; n n > n 0 ) V i (l U ; kM; n; E) V i (l 0 ; M; n; E)\nGiven the de nition of worst-case performance pro le, it is fairly easy to show the following lemma (the proof is essentially identical to the proof of Theorem 1 in Russell and Zilberstein, 1991):\n9. Notice that, in our simple model, the output quality of a rule depends only on its execution time and not on the input complexity. This also means that worst-case and average-case behaviour are the same." }, { "figure_ref": [], "heading": "Appendix: Additional Proofs", "publication_ref": [], "table_ref": [], "text": "This appendix contains formal proofs for three subsidiary lemmata in the main body of the paper.\nLemma 3 There exists an optimal sequence that is sorted in increasing order of q's.\nProof: Suppose this is not the case, and s is an optimal sequence. Then there must be two adjacent rules i, i + 1 where q i > q i+1 (see Figure 8). Removal of rule i + 1 yields a sequence s 0 such that Q s 0 (t) Q s (t), from Lemma 1 and the fact that t i+2 t i+1 +t i+2 . By Lemma 2, s 0 must also be optimal. We can repeat this removal process until s 0 is ordered by q i , proving the theorem by reductio ad absurdum.2\nLemma 4 For every sequence s = s 1 : : : s m sorted in increasing order of quality, and single step z with q z q s m , V (sz) V (s).\nProof: We calculate V (sz) V (s) using Equation 5and show that it is non-negative:\nV (sz) V (s) = q z 1 P d (( P m j=1 t j ) + t z )] q m 1 P d (( P m j=1 t j ) + t z )] = (q z q m ) 1 P d (( P m j=1 t j ) + t z )]\nwhich is non-negative since q z q m .2\nFigure 8: Proof for ordering by q i ; lower dotted line indicates original pro le; upper dotted line indicates pro le after removal of rule i + 1.\nLemma 5 There exists an optimal sequence whose rules are in nondecreasing order of t i . Proof: Suppose this is not the case, and s is an optimal sequence. 
Then there must be two adjacent rules i, i+1 where q_i ≤ q_{i+1} and t_i > t_{i+1} (see Figure 9). Removal of rule i yields a sequence s′ such that Q_{s′}(t) ≥ Q_s(t), from Lemma 1. By Lemma 2, s′ must also be optimal. We can repeat this removal process until s′ is ordered by t_i, proving the theorem by reductio ad absurdum. □
Figure 9: Proof for ordering by t_i; dotted line indicates profile after removal of rule i." } ]
[ { "authors": "P Agre; D Chapman", "journal": "", "ref_id": "b0", "title": "Pengi: An implementation of a theory of activity", "year": "1987" }, { "authors": "Morghan Kaufmann", "journal": "", "ref_id": "b1", "title": "", "year": "" }, { "authors": "D Aldous; U Vazirani", "journal": "IEEE Comput. Soc. Press", "ref_id": "b2", "title": "A markovian extension of valiant's learning model", "year": "1990" }, { "authors": "J Binder", "journal": "", "ref_id": "b3", "title": "On the complexity of deliberation scheduling with stochastic deadlines", "year": "1994" }, { "authors": "B E Boser; E Sackinger; J Bromley; Y Lecun", "journal": "IEEE Micro", "ref_id": "b4", "title": "Hardware requirements for neural network pattern classi ers | a case study and implementation", "year": "1992" }, { "authors": "R Brandt", "journal": "", "ref_id": "b5", "title": "In search of a credible form of rule utilitarianism", "year": "1953" }, { "authors": "J S Breese; M R Fehling", "journal": "", "ref_id": "b6", "title": "Control of problem-solving: Principles and architecture", "year": "1990" }, { "authors": "R A Brooks", "journal": "IEEE Journal of Robotics and Automation", "ref_id": "b7", "title": "A robust, layered control system for a mobile robot", "year": "1986" }, { "authors": "C Cherniak", "journal": "MIT Press", "ref_id": "b8", "title": "Minimal rationality", "year": "1986" }, { "authors": "T Dean; M Boddy", "journal": "", "ref_id": "b9", "title": "An analysis of time-dependent planning", "year": "1988" }, { "authors": "T L Dean; M P Wellman", "journal": "Morgan Kaufmann", "ref_id": "b10", "title": "Planning and control", "year": "1991" }, { "authors": "D Dennett", "journal": "", "ref_id": "b11", "title": "The moral rst aid manual", "year": "1986" }, { "authors": "J Doyle", "journal": "AI Magazine", "ref_id": "b12", "title": "What is rational psychology? 
Toward a modern mental philosophy", "year": "1983" }, { "authors": "J Doyle", "journal": "", "ref_id": "b13", "title": "Artificial intelligence and rational self-government", "year": "1988" }, { "authors": "J Doyle; R Patil", "journal": "Artificial Intelligence", "ref_id": "b14", "title": "Two theses of knowledge representation: language restrictions, taxonomic classification, and the utility of representation services", "year": "1991" }, { "authors": "O Etzioni", "journal": "", "ref_id": "b15", "title": "Tractable decision-analytic control", "year": "1989" }, { "authors": "M Fehling; S J Russell", "journal": "AAAI", "ref_id": "b16", "title": "Proceedings of the AAAI Spring Symposium on Limited Rationality", "year": "1989" }, { "authors": "M R Genesereth; N J Nilsson", "journal": "Morgan Kaufmann", "ref_id": "b17", "title": "Logical Foundations of Artificial Intelligence", "year": "1987" }, { "authors": "I J Good", "journal": "Holt, Rinehart", "ref_id": "b18", "title": "Twenty-seven principles of rationality", "year": "1971" }, { "authors": "O Hansson; A Mayer", "journal": "", "ref_id": "b19", "title": "Heuristic search as evidential reasoning", "year": "1989" }, { "authors": "E J Horvitz", "journal": "", "ref_id": "b20", "title": "Reasoning about beliefs and actions under computational resource constraints", "year": "1988" }, { "authors": "M Kearns; R Schapire; L Sellie", "journal": "", "ref_id": "b21", "title": "Toward efficient agnostic learning", "year": "1992" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b22", "title": "", "year": "" }, { "authors": "R Keeney; H Raiffa", "journal": "Wiley", "ref_id": "b23", "title": "Decisions with multiple objectives: Preferences and value tradeoffs", "year": "1976" }, { "authors": "H Levesque; R Brachman", "journal": "Computational Intelligence", "ref_id": "b24", "title": "Expressiveness and tractability in knowledge representation and reasoning", "year": "1987" }, { "authors": "M Luby; A Sinclair; D Zuckerman", "journal": "Information Processing Letters", "ref_id": "b25", "title": "Optimal speedup of Las Vegas algorithms", "year": "1993" }, { "authors": "J McCarthy", "journal": "HMSO", "ref_id": "b26", "title": "Programs with common sense", "year": "1958" }, { "authors": "A Newell", "journal": "AI Magazine", "ref_id": "b27", "title": "The knowledge level", "year": "1981" }, { "authors": "A Neyman", "journal": "Economics Letters", "ref_id": "b28", "title": "Bounded complexity justifies cooperation in the finitely repeated prisoners' dilemma", "year": "1985" }, { "authors": "C Papadimitriou; M Yannakakis", "journal": "", "ref_id": "b29", "title": "On complexity as bounded rationality", "year": "1994" }, { "authors": "F P Ramsey", "journal": "Harcourt Brace Jovanovich", "ref_id": "b30", "title": "Truth and probability", "year": "1931" }, { "authors": "S J Russell; E H Wefald", "journal": "", "ref_id": "b31", "title": "On optimal game tree search using rational metareasoning", "year": "1989" }, { "authors": "S J Russell; E H Wefald", "journal": "", "ref_id": "b32", "title": "Principles of metareasoning", "year": "1989" }, { "authors": "S J Russell; E H Wefald", "journal": "MIT Press", "ref_id": "b33", "title": "Do the right thing: Studies in limited rationality", "year": "1991" }, { "authors": "S J Russell; S Zilberstein", "journal": "", "ref_id": "b34", "title": "Composing real-time systems", "year": "1991" }, { "authors": "E Sackinger; B E Boser; J Bromley; Y Lecun", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b35", "title": 
"Application of the anna neural network chip to high-speed character recognition", "year": "1992" }, { "authors": "H A Simon", "journal": "", "ref_id": "b36", "title": "On how to decide what to do", "year": "1976" }, { "authors": "H A Simon", "journal": "MIT Press", "ref_id": "b37", "title": "Models of bounded rationality", "year": "1982" }, { "authors": "J Von Neumann; O Morgenstern", "journal": "Princeton", "ref_id": "b38", "title": "Theory of games and economic behavior", "year": "1947" }, { "authors": "S Zilberstein", "journal": "", "ref_id": "b39", "title": "Operational Rationality Through Compilation of Anytime Algorithms", "year": "1993" }, { "authors": "S Zilberstein; S Russell", "journal": "Arti cial Intelligence", "ref_id": "b40", "title": "Optimal composition of real-time systems", "year": "1993" } ]
[ { "formula_coordinates": [ 6, 117.36, 688.95, 55.08, 18.3 ], "formula_id": "formula_0", "formula_text": "f : O t ! A where A T (t) = f(O t )" }, { "formula_coordinates": [ 8, 154.26, 448.2, 157.68, 68.27 ], "formula_id": "formula_1", "formula_text": "O T (t) = f p (X T (t)) X T (t + 1) = f e (A T (t); X T (t)) X T (0) = X 0 I T (0) = i 0" }, { "formula_coordinates": [ 9, 117.36, 348.21, 130.68, 37.26 ], "formula_id": "formula_2", "formula_text": "V (f; E) = X E2E p(E)V (f; E)" }, { "formula_coordinates": [ 9, 117.36, 476.91, 157.86, 37.44 ], "formula_id": "formula_3", "formula_text": "V (l; M; E) = X E2E p(E)V (l; M; E)" }, { "formula_coordinates": [ 9, 90, 605.84, 145.8, 39.91 ], "formula_id": "formula_4", "formula_text": "f opt such that f opt = argmax f (V (f; E))" }, { "formula_coordinates": [ 12, 144.54, 520.07, 170.46, 20.34 ], "formula_id": "formula_5", "formula_text": "U( E; A t 1 ]) = U( E; A (t d 1) 2 A T 1 (t)])" }, { "formula_coordinates": [ 12, 144.54, 604.53, 248.58, 20.48 ], "formula_id": "formula_6", "formula_text": "U( E; A t 1 ]) U( E; A t 2 ]) if U( E; A t d 1 ]) U( E; A t d 2 ])" }, { "formula_coordinates": [ 13, 144.54, 439.56, 60.3, 19.17 ], "formula_id": "formula_7", "formula_text": "O T (t d ) = O d" }, { "formula_coordinates": [ 13, 117.36, 650.61, 162.18, 37.62 ], "formula_id": "formula_8", "formula_text": "U( E; A t ]) = X t 0 2T p d (t 0 )U( E t 0 ; A t ])" }, { "formula_coordinates": [ 14, 144.54, 325.21, 377.64, 16.7 ], "formula_id": "formula_9", "formula_text": "q i = U( E; A i ])(1)" }, { "formula_coordinates": [ 15, 117.36, 310.77, 404.82, 37.44 ], "formula_id": "formula_10", "formula_text": "V (s; M; E) = Q m c m X i=1 t i (4)" }, { "formula_coordinates": [ 15, 117.36, 544.59, 400.14, 37.44 ], "formula_id": "formula_11", "formula_text": "V (s) V (s; M; E) = m X i=1 P d ( P i+1 j=1 t j ) P d ( P i j=1 t j )]Q i (5" }, { "formula_coordinates": [ 15, 517.5, 555.35, 4.68, 15.3 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 16, 117.36, 307.71, 133.02, 37.44 ], "formula_id": "formula_13", "formula_text": "Q s (t) = maxfq i : i X j=1 t j tg" }, { "formula_coordinates": [ 18, 117.36, 278.78, 404.82, 16.88 ], "formula_id": "formula_14", "formula_text": "S(i; i) = (1 P d (t i ))q i (9)" }, { "formula_coordinates": [ 19, 117.36, 171.45, 283.32, 38.7 ], "formula_id": "formula_15", "formula_text": "V (s; M; E) = m 1 X i=1 h i j=1 (1 p j ) i p i+1 q i + h m j=1 (1 p j ) i q m" } ]
Provably Bounded-Optimal Agents
Since its inception, artificial intelligence has relied upon a theoretical foundation centred around perfect rationality as the desired property of intelligent systems. We argue, as others have done, that this foundation is inadequate because it imposes fundamentally unsatisfiable requirements. As a result, there has arisen a wide gap between theory and practice in AI, hindering progress in the field. We propose instead a property called bounded optimality. Roughly speaking, an agent is bounded-optimal if its program is a solution to the constrained optimization problem presented by its architecture and the task environment. We show how to construct agents with this property for a simple class of machine architectures in a broad class of real-time environments. We illustrate these results using a simple model of an automated mail sorting facility. We also define a weaker property, asymptotic bounded optimality (ABO), that generalizes the notion of optimality in classical complexity theory. We then construct universal ABO programs, i.e., programs that are ABO no matter what real-time constraints are applied. Universal ABO programs can be used as building blocks for more complex systems. We conclude with a discussion of the prospects for bounded optimality as a theoretical basis for AI, and relate it to similar trends in philosophy, economics, and game theory. (1. This usage of the term "universal" derives from its use in the scheduling of randomized algorithms by Luby, Sinclair and Zuckerman (1993).)
Stuart J. Russell and Devika Subramanian
[ { "figure_caption": "Figure 1 :1Figure 1: An automated mail-sorting facility provides a simple example of an episodic, real-time task environment.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Performance pro le for r 1 r 2 r 3 , with p d superimposed.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: (a) Accuracy pro le (1 e x ), for = 0:9. (b) Poisson arrival distribution, for mean = 9 sec", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Graph showing the achievable utility per second as a function of the average time per letter, for the four program types. = 0:9.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Graphs showing the utility gain per second as a function of the arrival time variance, for the four program types for the uniform distribution with a mean of 20 seconds.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Throughput and accuracy improvement of l U over l i opt , as a function of mean arrival time, = 0.2, Poisson arrivals.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "nition 8 Fixed time cost: The task environment hE; Ui has a xed time cost if, for", "figure_data": "any action history pre xes A t 1 1 and A t 2 2 satisfyingand t t d4.1.2 Fixed time cost", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b27", "b21", "b45", "b43", "b4", "b44", "b1", "b2", "b26", "b0", "b19", "b23", "b9" ], "table_ref": [], "text": "One active area of research in machine learning is learning concepts expressed in rstorder logic. Since most researchers have used some variant of Prolog to represent learned concepts, this subarea is sometimes called inductive logic programming (ILP) (Muggleton, 1992;Muggleton & De Raedt, 1994).\nWithin ILP, researchers have considered two broad classes of learning problems. The rst class of problems, which we will call here logic based relational learning problems, are rst-order variants of the sorts of classi cation problems typically considered within AI machine learning community: prototypical examples include Muggleton et al.'s (1992) formulation of -helix prediction, King et al.'s (1992) formulation of predicting drug activity, and Zelle and Mooney's (1994) use of ILP techniques to learn control heuristics for deterministic parsers. Logic-based relational learning often involves noisy examples that reect a relatively complex underlying relationship; it is a natural extension of propositional machine learning, and has already enjoyed a number of experimental successes.\nIn the second class of problems studied by ILP researchers, the target concept is a Prolog program that implements some common list-processing or arithmetic function; prototypical problems from this class might be learning to append two lists, or to multiply two numbers. These learning problems are similar in character to those studied in the area of automatic programming from examples (Summers, 1977;Biermann, 1978), and hence might be appropriately called automatic logic programming problems. Automatic logic programming problems are characterized by noise-free training data and recursive target concepts. Thus a problem that is central to the enterprise of automatic logic programming|but not, perhaps, logic-based relational learning|is the problem of learning recursive logic programs. c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.\nThe goal of this paper is to formally analyze the learnability of recursive logic programs in Valiant's (1984) model of pac-learnability, thus hopefully shedding some light on the task of automatic logic programming. To summarize our results, we will show that some simple recursive programs are pac-learnable from examples alone, or from examples plus a small number of additional \\hints\". The largest learnable class we identify in a standard learning model is the class of one-clause constant-depth determinate programs with at most a constant number of \\closed\" recursive literals. The largest learnable class we identify that requires extra \\hints\" is the class of constant-depth determinate programs consisting of a single nonrecursive base clause and a single recursive clause from the class described above. All of our results are proved in the model of identi cation from equivalence queries (Angluin, 1988(Angluin, , 1989)), which is somewhat stronger than pac-learnability. Identi cation from equivalence queries requires that the target concept be exactly identi ed, in polynomial time, and using only a polynomial number of equivalence queries. An equivalence query asks if a hypothesis program H is equivalent to the target program C; the answer to a query is either \\yes\" or an adversarily chosen example on which H and C di er. 
This model of learnability is arguably more appropriate for automatic logic programming tasks than the weaker model of pac-learnability, as it is unclear how often an approximately correct recursive program will be useful.

Interestingly, the learning algorithms analyzed are different from most existing ILP learning methods; they all employ an unusual method of generalizing examples called forced simulation. Forced simulation is a simple and analytically tractable alternative to other methods for generalizing recursive programs against examples, such as n-th root finding (Muggleton, 1994), sub-unification (Aha, Lapointe, Ling, & Matwin, 1994) and recursive anti-unification (Idestam-Almquist, 1993), but it has been only rarely used in experimental ILP systems (Ling, 1991).

The paper is organized as follows. After presenting some preliminary definitions, we begin by presenting (primarily for pedagogical reasons) a procedure for identifying from equivalence queries a single non-recursive constant-depth determinate clause. Then, in Section 4, we extend this learning algorithm, and the corresponding proof of correctness, to a simple class of recursive clauses: the class of "closed" linear recursive constant-depth determinate clauses. In Section 5, we relax some assumptions made to make the analysis easier, and present several extensions to this algorithm: we extend the algorithm from linear recursion to k-ary recursion, and also show how a k-ary recursive clause and a non-recursive clause can be learned simultaneously given an additional "basecase" oracle. We then discuss related work and conclude.

Although the learnable class of programs is large enough to include some well-known automatic logic programming benchmarks, it is extremely restricted. In a companion paper (Cohen, 1995), we provide a number of negative results, showing that relaxing any of these restrictions leads to difficult learning problems: in particular, learning problems that are either as hard as learning DNF (an open problem in computational learning theory), or as hard as cracking certain presumably secure cryptographic schemes. Thus, taken together with the results of the companion paper, our results delineate a boundary of learnability for recursive logic programs.

Although the two papers are independent, we suggest that readers wishing to read both this paper and the companion paper read this paper first.

Background

In this section we will present the technical background necessary to state our results. We will assume, however, that the reader is familiar with the basic elements of logic programming; readers without this background are referred to one of the standard texts, for example (Lloyd, 1987).

Logic Programs

Our treatment of logic programs is standard, except that we will usually consider the body of a clause to be an ordered set of literals.

For most of this paper, we will consider logic programs without function symbols, i.e., programs written in Datalog. The purpose of such a logic program is to answer certain questions relative to a database, DB, which is a set of ground atomic facts. (When convenient, we will also think of DB as a conjunction of ground unit clauses.) The simplest use of a Datalog program is to check the status of a simple instance.
A simple instance (for a program P and a database DB) is a fact f. The pair (P, DB) is said to cover f iff DB ∧ P ⊢ f. The set of simple instances covered by (P, DB) is precisely the minimal model of the logic program P ∧ DB.

In this paper, we will primarily consider extended instances, which consist of two parts: an instance fact f, which is simply a ground fact, and a description D, which is a finite set of ground unit clauses. An extended instance e = (f, D) is covered by (P, DB) iff

    DB ∧ D ∧ P ⊢ f

If extended instances are allowed, then function-free programs are expressive enough to encode surprisingly interesting programs. In particular, many programs that are usually written with function symbols can be re-written as function-free programs, as the example below illustrates.

Example. Consider the usual program for appending two lists.

    append([],Ys,Ys).
    append([X|Xs1],Ys,[X|Zs1]) :- append(Xs1,Ys,Zs1).

One could use this program to classify atomic facts containing function symbols, such as append([1,2],[3],[1,2,3]). This program can be rewritten as a Datalog program that classifies extended instances as follows:

Program P:
    append(Xs,Ys,Ys) :- null(Xs).
    append(Xs,Ys,Zs) :- components(Xs,X,Xs1), components(Zs,X,Zs1), append(Xs1,Ys,Zs1).

Database DB:
    null(nil).

The predicate components(A,B,C) means that A is a list with head B and tail C; thus an extended instance equivalent to append([1,2],[3],[1,2,3]) would be

Instance fact f:
    append(list12,list3,list123).

Description D:
    components(list2,2,nil). components(list123,1,list23).
    components(list23,2,list3). components(list3,3,nil).
    components(list12,1,list2).

We note that using extended instances as examples is closely related to using ground clauses entailed by the target clause as examples: specifically, the instance e = (f, D) is covered by (P, DB) iff P ∧ DB ⊢ (f ← D). As the example above shows, there is also a close relationship between extended instances and literals with function symbols that have been removed by "flattening" (Rouveirol, 1994; De Raedt & Džeroski, 1994). We have elected to use Datalog programs and the model of extended instances in this paper for several reasons. Datalog is relatively easy to analyze. There is a close connection between Datalog and the restrictions imposed by certain practical learning systems, such as FOIL (Quinlan, 1990; Quinlan & Cameron-Jones, 1993), FOCL (Pazzani & Kibler, 1992), and GOLEM (Muggleton & Feng, 1992). Finally, using extended instances addresses the following technical problem. The learning problems considered in this paper involve restricted classes of logic programs. Often, the restrictions imply that the number of simple instances is polynomial; we note that with only a polynomial-size domain, questions about pac-learnability are usually trivial. Requiring learning algorithms to work over the domain of extended instances precludes trivial learning techniques, however, as the number of extended instances of size n is exponential in n even for highly restricted programs.
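Operationally, coverage of an extended instance is easy to test: load DB and the program P, temporarily add the ground facts of the description D, and pose the instance fact as a query. The following is a minimal sketch in Prolog; the predicate covers/2 and the use of assert/retract are our own illustrative choices, not part of the formal model, and the predicates occurring in descriptions must be declared dynamic.

    :- dynamic components/3.    % description predicates of the append example

    % covers(+Fact, +Description): Fact is provable from the loaded
    % program P and database DB once the facts in Description are
    % (temporarily) added.
    covers(Fact, Description) :-
        forall(member(F, Description), assertz(F)),
        ( call(Fact) -> Result = yes ; Result = no ),
        forall(member(F, Description), retract(F)),
        Result == yes.

For instance, with the program P and database DB above loaded, the query covers(append(list12,list3,list123), [components(list12,1,list2), components(list2,2,nil), components(list123,1,list23), components(list23,2,list3), components(list3,3,nil)]) succeeds.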
Restrictions on Logic Programs

In this paper, we will consider the learnability of various restricted classes of logic programs. Below we will define some of these restrictions; however, we will first introduce some terminology.

If A ← B1 ∧ ... ∧ Br is an (ordered) definite clause, then the input variables of the literal Bi are those variables appearing in Bi which also appear in the clause A ← B1 ∧ ... ∧ B(i−1); all other variables appearing in Bi are called output variables. Also, if A ← B1 ∧ ... ∧ Br is a definite clause, then Bi is said to be a recursive literal if it has the same predicate symbol and arity as A, the head of the clause.

Types of Recursion

The first set of restrictions concerns the type of recursion that is allowed in a program. If every clause in a program has at most one recursive literal, then the program is linear recursive. If every clause in a program has at most k recursive literals, then the program is k-ary recursive. Finally, if every recursive literal in a program contains no output variables, then we will say that the program is closed recursive.

Determinacy and Depth

The second set of restrictions are variants of restrictions originally introduced by Muggleton and Feng (1992). If A ← B1 ∧ ... ∧ Br is an (ordered) definite clause, the literal Bi is determinate iff for every possible substitution σ that unifies A with some fact e such that DB ⊢ (B1 ∧ ... ∧ B(i−1))σ, there is at most one maximal substitution θ such that DB ⊢ Biσθ. A clause is determinate if all of its literals are determinate. Informally, determinate clauses are those that can be evaluated without backtracking by a Prolog interpreter.

We also define the depth of a variable appearing in a clause A ← B1 ∧ ... ∧ Br as follows. Variables appearing in the head of a clause have depth zero. Otherwise, let Bi be the first literal containing the variable V, and let d be the maximal depth of the input variables of Bi; then the depth of V is d+1. The depth of a clause is the maximal depth of any variable in the clause.

Muggleton and Feng define a logic program to be ij-determinate if it is determinate, of constant depth i, and contains literals of arity j or less. In this paper we use the phrase "constant-depth determinate" instead to denote this class of programs. Below are some examples of constant-depth determinate programs, taken from Džeroski, Muggleton and Russell (1992).

Example. Assuming successor is functional, the following program is determinate. The maximum depth of a variable is one, for the variable C in the second clause, and hence the program is of depth one.

The program GOLEM (Muggleton & Feng, 1992) learns constant-depth determinate programs, and related restrictions have been adopted by several other practical learning systems (Quinlan, 1991; Lavrač & Džeroski, 1992; Cohen, 1993c). The learnability of constant-depth determinate clauses has also received some formal study, which we will review in Section 6.
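Determinacy, as defined above, is a semantic property, but for a single literal whose input positions are already bound it can be tested directly against a concrete database by counting solutions. The sketch below is our own illustrative helper, not part of GOLEM or of the formal development.

    % determinate_literal(+Goal): Goal, with its input arguments
    % bound, has at most one solution against the loaded facts, so
    % it could be evaluated without backtracking.
    determinate_literal(Goal) :-
        findall(Goal, call(Goal), Solutions),
        length(Solutions, N),
        N =< 1.

For instance, with the facts of the append example loaded, determinate_literal(components(list12,H,T)) succeeds, since each list constant has exactly one head and one tail.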
Mode Constraints and Declarations

We define the mode of a literal L appearing in a clause C to be a string s such that the initial character of s is the predicate symbol of L, and for j > 1 the j-th character of s is a "+" if the (j−1)-th argument of L is an input variable and a "−" if the (j−1)-th argument of L is an output variable. (This definition coincides with the usual definition of Prolog modes only when all arguments to the head of a clause are inputs. This simplification is justified, however, as we are considering only how clauses behave in classifying extended instances, which are ground.) A mode constraint is simply a set of mode strings R = {s1, ..., sk}, and a clause C is said to satisfy a mode constraint R for p if for every literal L in the body of C, the mode of L is in R.

Example. In the following append program, every literal has been annotated with its mode.

    append(Xs,Ys,Ys) :- null(Xs).    % mode: null+

Mode constraints are commonly used in analyzing Prolog code; for instance, they are used in many Prolog compilers. We will sometimes use an alternative syntax for mode constraints that parallels the syntax used in most Prolog systems: for instance, we may write the mode constraint "components+−−" as "components(+,−,−)".

We define a declaration to be a tuple (p, a0, R) where p is a predicate symbol, a0 is an integer, and R is a mode constraint. We will say that a clause C satisfies a declaration if the head of C has arity a0 and predicate symbol p, and if for every literal L in the body of C the mode of L appears in R.
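Computing the mode of a body literal is mechanical once we know which variables are bound by the head and by earlier literals. A small sketch follows; representing modes as a list headed by the predicate symbol is our own choice.

    % mode_string(+Lit, +BoundVars, -Mode): Mode is [Pred|Flags],
    % where each flag is '+' for an argument already bound earlier
    % in the clause (an input variable) and '-' otherwise.
    mode_string(Lit, BoundVars, [Pred|Flags]) :-
        Lit =.. [Pred|Args],
        maplist(arg_flag(BoundVars), Args, Flags).

    arg_flag(BoundVars, Arg, '+') :-
        member(V, BoundVars), V == Arg, !.
    arg_flag(_, _, '-').

For example, mode_string(components(Xs,X,Xs1), [Xs,Ys,Zs], M) yields M = [components,'+','-','-'], matching the mode components(+,−,−) above.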
A Model of Learnability

In this section, we will present our model of learnability. We will first review the necessary definitions for a standard learning model, the model of learning from equivalence queries (Angluin, 1988, 1989), and discuss its relationship to other learning models. We will then introduce an extension to this model which is necessary for analyzing ILP problems.

Identification From Equivalence Queries

Let X be a set. We will call X the domain, and call the elements of X instances. Define a concept C over X to be a representation of some subset of X, and define a language Lang to be a set of concepts. In this paper, we will be rather casual about the distinction between a concept and the set it represents; when there is a risk of confusion we will refer to the set represented by a concept C as the extension of C. Two concepts C1 and C2 with the same extension are said to be (semantically) equivalent.

Associated with X and Lang are two size complexity measures, for which we will use the following notation: the size complexity of a concept C ∈ Lang is written ||C||, and the size complexity of an instance e ∈ X is written ||e||. If S is a set, S^n stands for the set of all elements of S of size complexity no greater than n. For instance, X^n = {e ∈ X : ||e|| ≤ n} and Lang^n = {C ∈ Lang : ||C|| ≤ n}. We will assume that all size measures are polynomially related to the number of bits needed to represent C or e.

The first learning model that we consider is the model of identification with equivalence queries. The goal of the learner is to identify some unknown target concept C ∈ Lang, that is, to construct some hypothesis H ∈ Lang such that H ≡ C. Information about the target concept is gathered only through equivalence queries. The input to an equivalence query for C is some hypothesis H ∈ Lang. If H ≡ C, then the response to the query is "yes". Otherwise, the response to the query is an arbitrarily chosen counterexample: an instance e that is in the symmetric difference of C and H.

A deterministic algorithm Identify identifies Lang from equivalence queries iff for every C ∈ Lang, whenever Identify is run (with an oracle answering equivalence queries for C) it eventually halts and outputs some H ∈ Lang such that H ≡ C. Identify polynomially identifies Lang from equivalence queries iff there is a polynomial poly(n_t, n_e) such that at any point in the execution of Identify the total running time is bounded by poly(n_t, n_e), where n_t = ||C|| and n_e is the size of the largest counterexample seen so far, or 0 if no equivalence queries have been made.

Relation to Pac-Learnability

The model of identification from equivalence queries has been well studied (Angluin, 1988, 1989). It is known that if a language is learnable in this model, then it is also learnable in Valiant's (1984) model of pac-learnability. (The basic idea behind this result is that an equivalence query for the hypothesis H can be emulated by drawing a set of random examples of a certain size. If any of them is a counterexample to H, then one returns the found counterexample as the answer to the equivalence query. If no counterexamples are found, one can assume with high confidence that H is approximately equivalent to the target concept.) Thus identification from equivalence queries is a strictly stronger model than pac-learnability.

Most existing positive results on the pac-learnability of logic programs rely on showing that every concept in the target language can be emulated by a boolean concept from some pac-learnable class (Džeroski et al., 1992; Cohen, 1994). While such results can be illuminating, they are also disappointing, since one of the motivations for considering first-order representations in the first place is that they allow one to express concepts that cannot be easily expressed in boolean logic. One advantage of studying the exact identification model and considering recursive programs is that it essentially precludes use of this sort of proof technique: while many recursive programs can be approximated by boolean functions over a fixed set of attributes, few can be exactly emulated by boolean functions.
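The sampling emulation of equivalence queries, sketched parenthetically above, is easy to spell out. In the sketch below, random_example/1 and disagree/2 stand for the example oracle and for a comparison of the hypothesis and target labels on an example; both are assumed interfaces, not definitions from this paper.

    % eq_query(+Hyp, +N, -Answer): emulate an equivalence query by
    % drawing up to N random labelled examples. Answer is
    % counterexample(E) if some drawn example E is classified
    % differently by Hyp and by the target, and yes otherwise.
    eq_query(_, 0, yes).
    eq_query(Hyp, N, Answer) :-
        N > 0,
        random_example(E),
        (   disagree(Hyp, E)
        ->  Answer = counterexample(E)
        ;   N1 is N - 1,
            eq_query(Hyp, N1, Answer)
        ).

Choosing N large enough (as a function of the accuracy and confidence parameters) makes a "yes" answer approximately correct with high probability, which is exactly the reduction from equivalence queries to pac-learning.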
Background Knowledge in Learning

The framework described above is standard, and is one possible formalization of the usual situation in inductive concept learning, in which a user provides a set of examples (in this case counterexamples to queries) and the learning system attempts to find a useful hypothesis. However, in a typical ILP system, the setting is slightly different, as usually the user provides clues about the target concept in addition to the examples. In most ILP systems the user provides a database DB of "background knowledge" in addition to a set of examples; in this paper, we will assume that the user also provides a declaration. To account for these additional inputs it is necessary to extend the framework described above to a setting where the learner accepts inputs other than training examples.

To formalize this, we introduce the following notion of a "language family". If Lang is a set of clauses, DB is a database and Dec is a declaration, we will define Lang[DB, Dec] to be the set of all pairs (C, DB) such that C ∈ Lang and C satisfies Dec. Semantically, such a pair will denote the set of all extended instances (f, D) covered by (C, DB). Next, if DB is a set of databases and DEC is a set of declarations, then define

    Lang[DB, DEC] = { Lang[DB, Dec] : DB ∈ DB and Dec ∈ DEC }

This set of languages is called a language family.

We will now extend the definition of identification from equivalence queries to language families as follows. A language family Lang[DB, DEC] is identifiable from equivalence queries iff every language in the set is identifiable from equivalence queries. A language family Lang[DB, DEC] is uniformly identifiable from equivalence queries iff there is a single algorithm Identify(DB, Dec) that identifies any language Lang[DB, Dec] in the family given DB and Dec. Uniform polynomial identifiability of a language family is defined analogously: Lang[DB, DEC] is uniformly polynomially identifiable from equivalence queries iff there is a polynomial-time algorithm Identify(DB, Dec) that identifies any language Lang[DB, Dec] in the family given DB and Dec. Note that Identify must run in time polynomial in the size of the inputs Dec and DB as well as the target concept.

Restricted Types of Background Knowledge

We will now describe a number of restricted classes of databases and declarations.

One restriction which we will make throughout this paper is to assume that all of the predicates of interest are of bounded arity. We will use the notation a-DB for the set of all databases that contain only facts of arity a or less, and the notation a-DEC for the set of all declarations (p, a0, R) such that every string s ∈ R is of length a+1 or less.

For technical reasons, it will often be convenient to assume that a database contains an equality predicate, that is, a predicate symbol equal such that equal(ti, ti) ∈ DB for every constant ti appearing in DB, and equal(ti, tj) ∉ DB for any ti ≠ tj. Similarly, we will often wish to assume that a declaration allows literals of the form equal(X,Y), where X and Y are input variables. If DB (respectively DEC) is any set of databases (declarations) we will use DB^= (DEC^=) to denote the corresponding set, with the additional restriction that the database (declaration) must contain an equality predicate (respectively the mode equal(+,+)).

It will sometimes also be convenient to assume that a declaration (p, a0, R) allows only a single valid mode for each predicate: i.e., that for each predicate q there is in R only a single mode constraint of the form qs. Such a declaration will be called a unique-mode declaration. If DEC is any set of declarations we will use DEC_1 to denote the corresponding set of declarations with the additional restriction that the declaration is unique-mode.

Finally, we note that in a typical setting, the facts that appear in a database DB and descriptions D of extended instances are not arbitrary: instead, they are representative of some "real" predicate (e.g., the relationship of a list to its components in the example above). One way of formalizing this is to assume that all facts will be drawn from some restricted set F; using this assumption one can define the notion of a determinate mode.

If f = p(t1, ..., tk) is a fact with predicate symbol p and ps is a mode, then define inputs(f, ps) to be the tuple ⟨t_{i1}, ..., t_{ik}⟩, where i1, ..., ik are the indices of s containing a "+". Also define outputs(f, ps) to be the tuple ⟨t_{j1}, ..., t_{jl}⟩, where j1, ..., jl are the indices of s containing a "−". A mode string ps for a predicate p is determinate for F iff the relation {⟨inputs(f, ps), outputs(f, ps)⟩ : f ∈ F} is a function. Informally, a mode is determinate if the input positions of the facts in F functionally determine the output positions. The set of all declarations containing only modes determinate for F will be denoted DetDEC_F. However, in this paper, the set F will be assumed to be fixed, and thus we will generally omit the subscript.

A program consistent with a determinate declaration Dec ∈ DetDEC must be determinate, as defined above; in other words, consistency with a determinate declaration is a sufficient condition for semantic determinacy. It is also a condition that can be verified with a simple syntactic test.
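The functionality condition itself is also easy to check for a concrete, finite F: project each fact onto its input and output positions and test that no input tuple maps to two distinct output tuples. A sketch, with modes represented as lists of '+'/'-' flags (our own encoding):

    % in_out(+Fact, +Flags, -Ins-Outs): project a fact's arguments
    % onto its input and output positions according to the mode.
    in_out(Fact, Flags, Ins-Outs) :-
        Fact =.. [_|Args],
        split(Args, Flags, Ins, Outs).

    split([], [], [], []).
    split([A|As], ['+'|Fs], [A|Is], Os) :- split(As, Fs, Is, Os).
    split([A|As], ['-'|Fs], Is, [A|Os]) :- split(As, Fs, Is, Os).

    % determinate_mode(+Facts, +Flags): over the given facts, the
    % input positions functionally determine the output positions.
    determinate_mode(Facts, Flags) :-
        findall(I-O, (member(F, Facts), in_out(F, Flags, I-O)), Pairs),
        \+ ( member(I-O1, Pairs), member(I-O2, Pairs), O1 \== O2 ).

For the components facts of the append example, determinate_mode succeeds for ['+','-','-'], but fails for ['-','-','+'], since the two lists list2 and list3 share the tail nil.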
Size Measures for Logic Programs

Assuming that all predicates are of arity a or less for some constant a also allows very simple size measures to be used. In this paper, we will measure the size of a database DB by its cardinality; the size of an extended instance (f, D) by the cardinality of D; the size of a declaration (p, a0, R) by the cardinality of R; and the size of a clause A ← B1 ∧ ... ∧ Br by the number of literals in its body.

Learning a Nonrecursive Clause

The learning algorithms presented in this paper all use a generalization technique which we call forced simulation. By way of an introduction to this technique, we will consider a learning algorithm for non-recursive constant-depth clauses. While this result is presented primarily for pedagogical reasons, it may be of interest on its own: it is independent of previous proofs of the pac-learnability of this class (Džeroski et al., 1992), and it is also somewhat more rigorous than previous proofs.

Although the details and analysis of the algorithm for non-recursive clauses are somewhat involved, the basic idea behind the algorithm is quite simple. First, a highly specific "bottom clause" is constructed, using two operations that we call DEEPEN and CONSTRAIN. Second, this bottom clause is generalized by deleting literals so that it covers the positive examples: the algorithm for generalizing a clause to cover an example is (roughly) to simulate the clause on the example, and delete any literals that would cause the clause to fail. In the remainder of this section we will describe and analyze this learning algorithm in detail.
Constructing a "Bottom Clause"

Let Dec = (p, a0, R) be a declaration and let A ← B1 ∧ ... ∧ Br be a definite clause. We define

    DEEPEN_Dec(A ← B1 ∧ ... ∧ Br) ≡ A ← B1 ∧ ... ∧ Br ∧ (∧_{Li ∈ L_D} Li)

where L_D is a maximal set of literals Li that satisfy the following conditions: the clause A ← B1 ∧ ... ∧ Br ∧ Li satisfies the mode constraints given in R; if Li ∈ L_D has the same mode and predicate symbol as some other Lj ∈ L_D, then the input variables of Li are different from the input variables of Lj; every Li has at least one output variable, and the output variables of Li are all different from each other, and are also different from the output variables of any other Lj ∈ L_D.

As an extension of this notation, we define DEEPEN^i_Dec(C) to be the result of applying the function DEEPEN_Dec repeatedly i times to C, i.e.,

    DEEPEN^i_Dec(C) ≡ C if i = 0, and DEEPEN_Dec(DEEPEN^(i−1)_Dec(C)) otherwise.

We define the function CONSTRAIN_Dec as

    CONSTRAIN_Dec(A ← B1 ∧ ... ∧ Br) ≡ A ← B1 ∧ ... ∧ Br ∧ (∧_{Li ∈ L_C} Li)

where L_C is the set of all literals Li such that A ← B1 ∧ ... ∧ Br ∧ Li satisfies the mode constraints given in R, and Li contains no output variables.

Example. Let D0 be the declaration (p, 2, R) where R contains the mode constraints mother(+,−), father(+,−), male(+), female(+), and equal(+,+). Then CONSTRAIN_D0(DEEPEN_D0(p(X,Y) ←)) is

    p(X,Y) ← mother(X,XM) ∧ father(X,XF) ∧ mother(Y,YM) ∧ father(Y,YF)
        ∧ male(X) ∧ female(X) ∧ male(Y) ∧ female(Y)
        ∧ male(XM) ∧ female(XM) ∧ male(XF) ∧ female(XF)
        ∧ male(YM) ∧ female(YM) ∧ male(YF) ∧ female(YF)
        ∧ equal(X,X) ∧ equal(X,XM) ∧ equal(X,XF) ∧ equal(X,Y) ∧ equal(X,YM) ∧ equal(X,YF)
        ∧ equal(XM,X) ∧ equal(XM,XM) ∧ equal(XM,XF) ∧ equal(XM,Y) ∧ equal(XM,YM) ∧ equal(XM,YF)
        ∧ equal(XF,X) ∧ equal(XF,XM) ∧ equal(XF,XF) ∧ equal(XF,Y) ∧ equal(XF,YM) ∧ equal(XF,YF)
        ∧ equal(Y,X) ∧ equal(Y,XM) ∧ equal(Y,XF) ∧ equal(Y,Y) ∧ equal(Y,YM) ∧ equal(Y,YF)
        ∧ equal(YM,X) ∧ equal(YM,XM) ∧ equal(YM,XF) ∧ equal(YM,Y) ∧ equal(YM,YM) ∧ equal(YM,YF)
        ∧ equal(YF,X) ∧ equal(YF,XM) ∧ equal(YF,XF) ∧ equal(YF,Y) ∧ equal(YF,YM) ∧ equal(YF,YF)

Let us say that clause C1 is a subclause of clause C2 if the heads of C1 and C2 are identical, if every literal in the body of C1 also appears in C2, and if the literals in the body of C1 appear in the same order as they do in C2. The functions DEEPEN and CONSTRAIN allow one to easily describe a clause with an interesting property.

Theorem 1. Let Dec = (p, a0, R) be a declaration in a-DetDEC^=, let X1, ..., X_{a0} be distinct variables, and define the clause BOTTOM_d as follows:

    BOTTOM_d(Dec) ≡ CONSTRAIN_Dec(DEEPEN^d_Dec(p(X1, ..., X_{a0}) ←))

For any constants d and a, the following are true: the size of BOTTOM_d(Dec) is polynomial in ||Dec||; and every nonrecursive determinate clause of depth d or less that satisfies Dec is equivalent to some subclause of BOTTOM_d(Dec).

Proof: See Appendix A. A related result also appears in Muggleton and Feng (1992).

Example. Below C1 and D1 are equivalent, as are C2 and D2. Notice that D1 and D2 are subclauses of BOTTOM_1(D0). For C1 and D1, p(X,Y) is true when X is Y's brother. For C2 and D2, p(X,Y) is true when X is Y's daughter, and Y is X's father.
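The subclause relation used in Theorem 1 is purely syntactic and easy to operationalize when clause bodies are kept as ordered lists of literals. The following sketch, which is illustrative code rather than material from the paper's appendix, checks that one body is an order-preserving subsequence of another; it compares literals with ==, so both bodies are assumed to share the same variables (e.g., both are subclauses of one bottom clause).

    % subclause(+Body1, +Body2): every literal of Body1 occurs in
    % Body2, in the same relative order (heads assumed identical).
    subclause([], _).
    subclause([L|Ls], [M|Ms]) :- L == M, subclause(Ls, Ms).
    subclause(Ls, [_|Ms]) :- Ls = [_|_], subclause(Ls, Ms).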
The Learning Algorithm

Theorem 1 suggests that it may be possible to learn non-recursive constant-depth determinate clauses by searching the space of subclauses of BOTTOM_d in some efficient manner. Figures 1 and 2 present an algorithm called Force1_NR that does this when Dec is a unique-mode declaration.

Figure 1 presents the top-level learning algorithm, Force1_NR. Force1_NR takes as input a database DB and a declaration Dec, and begins by hypothesizing the clause BOTTOM_d(Dec). After each positive counterexample e+, the current hypothesis is generalized as little as possible in order to cover e+. This strategy means that the hypothesis is always the least general hypothesis that covers the positive examples; hence, if a negative counterexample e− is ever seen, the algorithm will abort with a message that no consistent hypothesis exists.

    begin subroutine ForceSim_NR(H, f, Dec, DB):   % "forcibly simulate" H on fact f
        if f ∈ DB then return H
        elseif the head of H and f cannot be unified then return FAILURE
        else
            let H′ ← H
            let θ be the mgu of f and the head of H′
            for each literal L in the body of H′ do
                if there is a substitution θ′ such that Lθθ′ ∈ DB
                then θ ← θθ′, where θ′ is the most general such substitution
                else delete L from the body of H′, together with all literals L′
                     supported (directly or indirectly) by L
                endif
            endfor
            return H′
        endif
    end

Figure 2: Forced simulation for nonrecursive depth-d determinate clauses

To minimally generalize a hypothesis H, the function ForceSim_NR is used. This subroutine is shown in Figure 2. In the figure, the following terminology is used. If some output variable of L is an input variable of L′, then we say that L directly supports L′. We will say that L supports L′ iff L directly supports L′, or if L directly supports some literal L″ that supports L′. (Thus "supports" is the transitive closure of "directly supports".) ForceSim_NR deletes from H the minimal number of literals necessary to let H cover e+. To do this, ForceSim_NR simulates the action of a Prolog interpreter in evaluating H, except that whenever a literal L in the body of H would fail, that literal is deleted, along with all literals L′ supported by L.
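The heart of ForceSim_NR can be rendered as a short Prolog meta-interpreter over the clause body. The sketch below is not Figure 2 verbatim: it tests each literal against the loaded facts, keeps the ones that succeed (their bindings play the role of the substitution θ), and simply drops the ones that fail, leaving the cascading deletion of supported literals to the mode bookkeeping described above.

    % force_sim_nr(+Body, -KeptBody): walk the body left to right,
    % calling each literal against the loaded database; retain the
    % literals that succeed and delete those that fail.
    force_sim_nr([], []).
    force_sim_nr([L|Ls], [L|Ks]) :-
        call(L), !,                 % literal succeeds: retain it
        force_sim_nr(Ls, Ks).
    force_sim_nr([_|Ls], Ks) :-     % literal fails: delete it
        force_sim_nr(Ls, Ks).

To forcibly simulate H on a fact f, one unifies f with the head of H and calls force_sim_nr on the body. Because the clauses considered are determinate, the first solution of each retained literal is its only solution, so committing with the cut is harmless.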
The idea of learning by repeated generalization is an old one; in particular, previous methods exist for learning a definite clause by generalizing a highly specific one. For example, CLINT (De Raedt & Bruynooghe, 1992) generalizes a "starting clause" guided by queries made to the user; PROGOL (Srinivasan, Muggleton, King, & Sternberg, 1994) guides a top-down generalization process with a known bottom clause; and Rouveirol (1994) describes a method for generalizing bottom clauses created by saturation. The Force1_NR algorithm is thus of interest not for its novelty, but because it is provably correct and efficient, as noted in the theorem below.

In particular, let d-DepthNonRec be the language of nonrecursive clauses of depth d or less (and hence i-DepthNonRec[DB, j-DetDEC] is the language of nonrecursive ij-determinate clauses). We have the following result:

Theorem 2. For any constants a and d, the language family d-DepthNonRec[DB^=, a-DetDEC^=_1] is uniformly identifiable from equivalence queries.

Proof: We will show that Force1_NR uniformly identifies this language family with a polynomial number of queries. We begin with the following important lemma, which characterizes the behavior of ForceSim_NR.

Lemma 3. Let Dec be a declaration in DetDEC^=_1, let DB be a database, let f be a fact, and let H be a determinate nonrecursive clause that satisfies Dec. Then one of the following conditions must hold: ForceSim_NR(H, f, Dec, DB) returns FAILURE, and no subclause H′ of H satisfies both Dec and the constraint H′ ∧ DB ⊢ f; or, ForceSim_NR(H, f, Dec, DB) returns a clause H′, and H′ is the unique syntactically largest subclause of H that satisfies both Dec and the constraint H′ ∧ DB ⊢ f.

Proof of lemma: To avoid repetition, we will refer to the syntactically maximal subclauses H′ of H that satisfy both Dec and the constraint H′ ∧ DB ⊢ f as "admissible subclauses" in the proof below.

Clearly the lemma is true if H or FAILURE is returned by ForceSim_NR. In the remaining cases the for loop of the algorithm is executed, and we must establish these two claims (under the assumptions that A and f unify, and that f ∉ DB):

Claim 1. If L is retained, then every admissible subclause contains L.
Claim 2. If L is deleted, then no admissible subclause contains L.

First, however, observe that deleting a literal L may cause the mode of some other literals to violate the mode declarations of Dec. It is easy to see that if L is deleted from a clause C, then the mode of all literals L′ directly supported by L will change. Thus if C satisfies a unique-mode declaration prior to the deletion of L, then after the deletion of L all literals L′ that are directly supported by L will have invalid modes.

Now, to see that Claim 1 is true, suppose instead that it is false. Then there must be some maximal subclause C′ of H that satisfies Dec, covers the fact f, and does not contain L. By the argument above, if C′ does not contain L but satisfies Dec, then C′ contains no literals L′ from H that are supported by L. Hence the output variables of L are disjoint from the variables appearing in C′. This means that if L were to be added to C′, the resulting clause would still satisfy Dec and cover f, which leads to a contradiction, since C′ was assumed to be maximal.

To verify Claim 2, let us introduce the following terminology. If C = (A ← B1 ∧ ... ∧ Br) is a clause and DB is a database, we will say that the substitution σ is a (DB, f)-witness for C iff σ is associated with a proof that C ∧ DB ⊢ f (or more precisely, iff Aσ = f and, for all i with 1 ≤ i ≤ r, Biσ ∈ DB). We claim that the following condition is an invariant of the for loop of the ForceSim_NR algorithm.

Invariant 1. Let C be any admissible subclause that contains all the literals in H′ preceding L (i.e., that contains all those literals of H that were retained on previous iterations of the algorithm). Then every (DB, f)-witness for C is a superset of θ.

This can be easily established by induction on the number of iterations of the for loop. The condition is true when the loop is first entered, since θ is initially the most general unifier of A and f. The condition remains true after an iteration in which L is deleted, since θ is unchanged. Finally, the condition remains true after an iteration in which L is retained: because θ′ is maximally general, it may only assign values to the output variables of L, and by determinacy only one assignment to the output variables of L can make Lθ true. Hence every (DB, f)-witness for C must contain the bindings in θθ′.

Next, with an inductive argument and Claim 1, one can show that every admissible subclause C must contain all the literals that have been retained in previous iterations of the loop, leading to the following strengthening of Invariant 1:

Invariant 1′. Let C be any admissible subclause. Then every (DB, f)-witness for C is a superset of θ.

Now, notice that only two types of literals are deleted: (a) literals L such that no superset of θ can make Lθ true, and (b) literals L′ that are supported by a literal L of the preceding type. In case (a), clearly L cannot be part of any admissible subclause, since no superset of θ makes L succeed, and only such supersets can be witnesses of admissible clauses.
In case (b), again L′ cannot be part of any admissible subclause, since its declaration is invalid unless L is present in the clause, and by the argument above L cannot be in the clause.

This concludes the proof of the lemma.

To prove the theorem, we must now establish the following properties of the identification algorithm.

Correctness. By Theorem 1, if the target program is in d-DepthNonRec[DB, Dec], then there is some clause C_T that is equivalent to the target, and is a subclause of BOTTOM_d(Dec). H is initially BOTTOM_d and hence a superclause of C_T. Now consider invoking ForceSim_NR on any positive counterexample e+. By Lemma 3, if this invocation is successful, H will be replaced by H′, the longest subclause of H that covers e+. Since C_T is a subclause of H that covers e+, this means that H′ will again be a superclause of C_T. Inductively, then, the hypothesis is always a superclause of the target. Further, since the counterexample e+ is always an instance that is not covered by the current hypothesis H, every time the hypothesis is updated, the new hypothesis is a proper subclause of the old. This means that Force1_NR will eventually identify the target clause.

Efficiency. The number of queries made is polynomial in ||Dec|| and ||DB||, since H is initially of size polynomial in ||Dec||, and is reduced in size each time a counterexample is provided. To see that each counterexample is processed in time polynomial in n_r, n_e, and n_t, notice that since the length of H is polynomial, the number of repetitions of the for loop of ForceSim_NR is also polynomial; further, since the arity of literals L is bounded by a, only a·n_b + a·n_e constants exist in DB ∪ D, and hence there are at most (a·n_b + a·n_e)^a substitutions θ′ to check inside the for loop, which is again polynomial. Thus each execution of ForceSim_NR requires only polynomial time.

This concludes the proof.

Learning a Linear Closed Recursive Clause

Recall that if a clause has only one recursive literal, then the clause is linear recursive, and that if no recursive literal contains output variables, then the clause is closed linear recursive. In this section, we will describe how the Force1 algorithm can be extended to learn a single linear closed recursive clause. Before presenting the extension, however, we would first like to discuss a reasonable-sounding approach that, on closer examination, turns out to be incorrect.

A Remark on Recursive Clauses

One plausible first step toward extending Force1 to recursive clauses is to allow recursive literals in hypotheses, and treat them the same way as other literals: that is, to include recursive literals in the initial clause BOTTOM_d, and delete these literals gradually as positive examples are received. A problem with this approach is that there is no simple way to check if a recursive literal in a clause succeeds or fails on a particular example. This makes it impossible to simply run ForceSim_NR on clauses containing recursive literals.

A straightforward (apparent) solution to this problem is to assume that an oracle exists which can be queried as to the success or failure of any recursive literal. For closed recursive clauses, it is sufficient to assume that there is an oracle MEMBER_{C_t}(DB, f) that answers the question: does DB ∧ C_t ⊢ f?
Here C_t is the unknown target concept, f is a ground fact, and DB is a database. Given such an oracle, one can determine if a closed recursive literal L_r should be retained by checking if MEMBER_{C_t}(DB, L_rθ) is true. Such an oracle is very close to the notion of a membership query as used in computational learning theory. This is a natural extension of the Force1_NR learning algorithm to recursive clauses; in fact, an algorithm based on similar ideas has previously been conjectured to pac-learn closed recursive constant-depth determinate clauses (Džeroski et al., 1992). Unfortunately, this algorithm can fail to return a clause that is consistent with a positive counterexample. To illustrate this, consider the following example.

Example. Consider using the extension of ForceSim_NR sketched above on an append program whose recursive clause is

    append(Xs,Ys,Zs) :- components(Xs,X1,Xs1), components(Zs,Z1,Zs1),
                        equal(X1,Z1), append(Xs1,Ys,Zs1).

This program is determinate, has depth 1, and satisfies the following set of declarations:

    components(+,−,−).  null(+).  equal(+,+).  odd(+).  append(+,+,+).

We will also assume a database DB that defines the predicate null to be true for empty lists, and odd to be true for the constants 1 and 3.

To see how the forced simulation can fail, consider the following positive instance e = (f, D):

    f = append(l12, l3, l123)
    D = { components(l123,1,l23), components(l23,2,l3), components(l3,3,nil),
          components(l12,1,l2), components(l2,2,nil), append(nil,l3,l3) }

This is simply a "flattened" form of append([1,2],[3],[1,2,3]), together with the appropriate base case append([],[3],[3]). Now consider beginning with the clause BOTTOM_1 and generalizing it using ForceSim_NR to cover this positive instance. This process is illustrated in Figure 3. The clause on the left in the figure is BOTTOM_d(Dec); the clause on the right is the output of forcibly simulating this clause on f with ForceSim_NR. (For clarity we've assumed that only the single correct recursive call remains after forced simulation.)

The resulting clause is incorrect, in that it does not cover the given example e. This can be easily seen by stepping through the actions of a Prolog interpreter with the generalized clause of Figure 3. The nonrecursive literals will all succeed, leading to the subgoal append(l2,l3,l23) (or, in the usual Prolog notation, append([2],[3],[2,3])). This subgoal will fail at the literal odd(X1), because X1 is bound to 2 for this subgoal, and the fact odd(2) is not true in DB ∪ D.

This example illustrates a pitfall in the policy of treating recursive and non-recursive literals in a uniform manner. (For more discussion, see also Bergadano & Gunetti, 1993; De Raedt, Lavrač, & Džeroski, 1993.) Unlike nonrecursive literals, the truth of the fact L_rθ (corresponding to the recursive literal L_r) does not imply that a clause containing L_r will succeed; it may be that while the first subgoal L_rθ succeeds, deeper subgoals fail.
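The failure can be reproduced concretely. The sketch below loads the flattened example and a simplified version of the incorrectly generalized clause that retains odd(X1); the names app/3 and append_db/3 are ours, to keep the hypothesis clause separate from the base-case fact supplied in D.

    % Flattened lists from the description D.
    components(l123, 1, l23).   components(l23, 2, l3).
    components(l3, 3, nil).     components(l12, 1, l2).
    components(l2, 2, nil).
    append_db(nil, l3, l3).     % base case supplied in D
    odd(1).  odd(3).            % background database DB

    % Simplified version of the over-general clause of Figure 3.
    app(Xs, Ys, Zs) :- append_db(Xs, Ys, Zs).
    app(Xs, Ys, Zs) :-
        components(Xs, X1, Xs1),
        components(Zs, X1, Zs1),   % equal(X1,Z1) folded into one variable
        odd(X1),                   % the residual literal that breaks recursion
        app(Xs1, Ys, Zs1).

    % ?- app(l12, l3, l123).
    % fails: the recursive subgoal app(l2, l3, l23) needs odd(2).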
Forced Simulation for Recursive Clauses

A solution to this problem is to replace the calls to the membership oracle in the algorithm sketched above with a call to a routine that forcibly simulates the actions of a top-down theorem-prover on a recursive clause. In particular, the following algorithm is suggested. First, build a nonrecursive "bottom clause", as was done in ForceSim_NR. Second, find some recursive literal L_r such that appending L_r to the bottom clause yields a recursive clause that can be generalized to cover the positive examples.

As in the nonrecursive case, a clause is generalized by deleting literals, using a straightforward generalization of the procedure for forced simulation of nonrecursive clauses. During forced simulation, any failing nonrecursive subgoals are simply deleted; however, when a recursive literal L_r is encountered, one forcibly simulates the hypothesis clause recursively on the corresponding subgoal. The relevant steps of the subroutine are:

    let A be the head of H′
    let θ be the mgu of A and e
    for each literal L in the body of H′ do
        if there is a substitution θ′ such that Lθθ′ ∈ DB
        then θ ← θθ′, where θ′ is the most general such substitution
        else delete L from the body of H′, together with all literals L′
             supported (directly or indirectly) by L
        endif
    endfor
    % generalize H′ on the recursive subgoal L_rθ
    if L_rθ is ground
    then return ForceSim(H′ ∪ {L_r}, L_rθ, Dec, DB, h−1)
    else return FAILURE
    endif

The extended algorithm is similar to ForceSim_NR, but differs in that when the recursive literal L_r is reached in the simulation of H, the corresponding subgoal L_rθ is created, and the hypothesized clause is recursively forcibly simulated on this subgoal. This ensures that the generalized clause will also succeed on the subgoal. For reasons that will become clear shortly, we would like this algorithm to terminate, even if the original clause H enters an infinite loop when used in a top-down interpreter. In order to ensure termination, an extra argument h is passed to ForceSim. The argument h represents a depth bound for the forced simulation.

To summarize, the basic idea behind the algorithm of Figure 4 is to simulate the hypothesized clause H on f, and generalize H by deleting literals whenever H would fail on f or on any subgoal of f.

Example. Consider using ForceSim to forcibly simulate a recursive clause consisting of BOTTOM_1(Dec) extended with the recursive literal L_r = append(Xs1,Ys,Zs1). We will also assume that f is taken from the extended instance e = (f, D), which is again the flattened version of the instance append([1,2],[3],[1,2,3]) used in the previous example; that Dec is the set of declarations of the previous example; and that the database DB again defines null (with null(nil)) and odd (true of 1 and 3).

After executing steps 1-4 of ForceSim, a number of failing literals are deleted, leading to the substitution θ = {Xs = [1,2], Ys = [3], Zs = [1,2,3], X1 = 1, Xs1 = [2], Y1 = 3, Ys1 = [], Z1 = 1, Zs1 = [2,3]} and the following reduced clause:

    append(Xs,Ys,Zs) :- components(Xs,X1,Xs1), components(Ys,Y1,Ys1),
                        components(Zs,Z1,Zs1), null(Ys1),
                        odd(X1), odd(Y1), odd(Z1), equal(X1,Z1),
                        append(Xs1,Ys,Zs1).

Hence the recursive subgoal is

    L_rθ = append(Xs1,Ys,Zs1)θ = append([2],[3],[2,3])

Recursively applying ForceSim to this goal produces the substitution {Xs = [2], Ys = [3], Zs = [2,3], X1 = 2, Xs1 = [], Y1 = 3, Ys1 = [], Z1 = 2, Zs1 = [3]} and also results in deleting the additional literals odd(X1) and odd(Z1). The next recursive subgoal is L_rθ = append([],[3],[3]); since this fact is included in the database, ForceSim will terminate. The final clause returned by ForceSim in this case is the following:

    append(Xs,Ys,Zs) :- components(Xs,X1,Xs1), components(Ys,Y1,Ys1),
                        components(Zs,Z1,Zs1), null(Ys1),
                        odd(Y1), equal(X1,Z1),
                        append(Xs1,Ys,Zs1).

Notice that this clause does cover e.
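The recursive forced simulation can be sketched in Prolog by threading two extra pieces of state through the recursion: the set of still-retained body literals (here represented by their indices) and the depth bound h. The encoding below (the clause/3 term, the fact/1 wrapper for the ground facts of DB and D, and keep_idx/4) is our own simplification: it omits the deletion of supported literals and the determinacy bookkeeping, but it shows how deletions accumulate across recursive subgoals.

    % force_sim(+Goal, +Clause, +H, +Idx0, -Idx): Clause is
    % clause(Head, BodyList, RecLit); Idx0/Idx are the indices of
    % body literals retained before/after simulating Goal to depth H.
    force_sim(Goal, _, _, Idx, Idx) :-
        fact(Goal), !.                      % base case: Goal is in DB or D
    force_sim(Goal, clause(Head, Body, Rec), H, Idx0, Idx) :-
        H > 0,
        copy_term(clause(Head, Body, Rec), clause(Goal, Body1, Rec1)),
        keep_idx(Body1, 1, Idx0, Idx1),     % delete literals failing here
        ground(Rec1),                       % closed recursion only
        H1 is H - 1,
        force_sim(Rec1, clause(Head, Body, Rec), H1, Idx1, Idx).

    % keep_idx(+Lits, +N, +Live, -Kept): look up each still-live
    % literal among the stored facts; drop the index of any literal
    % that fails.
    keep_idx([], _, _, []).
    keep_idx([L|Ls], N, Live, Kept) :-
        (   memberchk(N, Live), fact(L)
        ->  Kept = [N|Rest]
        ;   Kept = Rest
        ),
        N1 is N + 1,
        keep_idx(Ls, N1, Live, Rest).

    % Typical call: length(Body, Len), numlist(1, Len, All),
    %               force_sim(F, clause(Head, Body, Rec), H, All, Kept).

The indices retained at the end determine the generalized clause; ForceSim proper additionally deletes the literals supported by deleted ones, so that the result stays mode-correct.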
As in Section 3, we begin our analysis by showing the correctness of the forced simulation algorithm, i.e., by showing that forced simulation does indeed produce a unique maximally specific generalization of the input clause that covers the example.

This proof of correctness uses induction on the depth of a proof. Let us introduce again some additional notation, and write P ∧ DB ⊢_h f if the Prolog program (P, DB) can be used to prove the fact f in a proof of depth h or less. (The notion of depth of a proof is the usual one; we will define looking up f in the database DB to be a proof of depth zero.) We have the following result concerning the ForceSim algorithm.

Theorem 4. Let Dec be a declaration in DetDEC^=_1, let DB be a database, let f be a fact, and let H be a determinate closed linear recursive clause that satisfies Dec. Then one of the following conditions must hold: ForceSim(H, f, Dec, DB, h) returns FAILURE, and no recursive subclause H′ of H satisfies both Dec and the constraint H′ ∧ DB ⊢_h f; or, ForceSim(H, f, Dec, DB, h) returns a clause H′, and H′ is the unique syntactically largest recursive subclause of H that satisfies both Dec and the constraint H′ ∧ DB ⊢_h f.

Proof: Again to avoid repetition, we will refer to syntactically maximal recursive (nonrecursive) subclauses H′ of H that satisfy both Dec and the constraint H′ ∧ DB ⊢_h f as "admissible recursive (nonrecursive) subclauses" respectively.

The proof largely parallels the proof of Lemma 3; in particular, similar arguments show that the clause returned by ForceSim satisfies the conditions of the theorem whenever FAILURE is returned and whenever H is returned. Note that the correctness of ForceSim when H is returned establishes the base case of the theorem for h = 0.

For the case of depth h > 0, let us assume the theorem holds for depth h−1 and proceed using mathematical induction. The arguments of Lemma 3 show that the following condition is true after the for loop terminates.

Invariant 1′. H′ is the unique maximal nonrecursive admissible subclause of H, and every (DB, f)-witness for H′ is a superset of θ.

Now, let us assume that there is some admissible recursive subclause H*. Clearly H* must contain the recursive literal L_r of H, since L_r is the only recursive literal of H. Further, the nonrecursive clause Ĥ = H* − {L_r} must certainly satisfy Dec and also Ĥ ∧ DB ⊢ f, so it must (by the maximality of H′) be a subclause of H′. Hence H* must be a subclause of H′ ∪ {L_r}. Finally, if L_rθ is ground (i.e., if L_r is closed in the clause H′ ∪ {L_r}), then by Invariant 1′ the clause H* must also satisfy H* ∧ DB ⊢ L_rθ by a proof of depth h−1. (This is simply equivalent to saying that the recursive subgoal of L_r generated in the proof must succeed.)

By the inductive hypothesis, then, the recursive call must return the unique maximal admissible recursive subclause of H′ ∪ {L_r}, which by the argument above must also be the unique maximal admissible recursive subclause of H.

Thus by induction the theorem holds.

A Learning Algorithm for Linear Recursive Clauses

Given this method for generalizing recursive clauses, one can construct a learning algorithm for recursive clauses as follows.
First, guess a recursive literal L_r, and make H = BOTTOM_d ∪ {L_r} the initial hypothesis of the learner. Then, ask a series of equivalence queries. After a positive counterexample e+, use forced simulation to minimally generalize H to cover e+. After a negative example, choose another recursive literal L_r', and reset the hypothesis to H = BOTTOM_d ∪ {L_r'}. Figure 5 presents an algorithm that operates along these lines. Let d-DepthLinRec denote the language of linear closed recursive clauses of depth d or less. We have the following result:

Theorem 5 For any constants a and d, the language family

    d-DepthLinRec[DB_=, a-DetDEC_=1]

is uniformly identifiable from equivalence queries.

Proof: We will show that Force1 uniformly identifies this language family with a polynomial number of queries.

Correctness and query efficiency. There are at most a||D|| + a||DB|| constants in any set DB ∪ D, at most (a||D|| + a||DB||)^a₀ a₀-tuples of such constants, and hence at most (a||D|| + a||DB||)^a₀ distinct recursive subgoals L_r θ that might be produced in proving that a linear recursive clause C covers an extended instance (f, D). Thus every terminating proof of a fact f using a linear recursive clause C must be of depth (a||D|| + a||DB||)^a₀ or less; i.e., for h = (a||D|| + a||DB||)^a₀,

    C ∧ DB ∧ D ⊢_h f   iff   C ∧ DB ∧ D ⊢ f

Thus Theorem 4 can be strengthened: for the value of h used in Force1, the subroutine ForceSim returns the syntactically largest subclause of H that covers the example (f, D) whenever any such subclause exists, and returns FAILURE otherwise.

We now argue the correctness of the algorithm as follows. Assume that the hypothesized recursive literal is "correct", i.e., that the target clause C_T is some subclause of BOTTOM_d ∪ {L_r}. In this case it is easy to see that Force1 will identify C_T, using an argument that parallels the one made for Force1_NR. Again by analogy to Force1_NR, it is easy to see that only a polynomial number of equivalence queries will be made involving the correct recursive literal.

Next assume that L_r is not the correct recursive literal. Then C_T need not be a subclause of BOTTOM_d ∪ {L_r}, and the response to an equivalence query may be either a positive or negative counterexample. If a positive counterexample e+ is received and ForceSim is called, then the result may be FAILURE, or it may be a proper subclause of H that covers e+. Thus the result of choosing an incorrect L_r will be a (possibly empty) sequence of positive counterexamples followed by either a negative counterexample or FAILURE. Since all equivalence queries involving the correct recursive literal will be answered by either a positive counterexample or "yes", then if a negative counterexample or FAILURE is obtained, it must be that L_r is incorrect.

The number of variables in BOTTOM_d can be bounded by a||BOTTOM_d(Dec)||, and as each closed recursive literal is completely defined by an a₀-tuple of variables, the number of possible closed recursive literals L_r can be bounded by

    p = (a||BOTTOM_d(Dec)||)^a₀

Since ||BOTTOM_d(Dec)|| is polynomial in ||Dec||, p is also polynomial in ||Dec||. This means that only a polynomial number of incorrect L_r's need to be discarded. Further, since each successive hypothesis using a single incorrect L_r is a proper subclause of the previous hypothesis, only a polynomial number of equivalence queries are needed to discard an incorrect L_r.
Thus only a polynomial number of equivalence queries can be made involving incorrect recursive literals. Thus Force1 needs only a polynomial number of queries to identify C_T.

Efficiency. ForceSim runs in time polynomial in its arguments H, f, Dec, DB ∪ D and h. When ForceSim is called from Force1, h is always polynomial in n_e and ||DB||, and H is always no larger than ||BOTTOM_d(Dec)|| + 1, which in turn is polynomial in the size of Dec. Hence every invocation of ForceSim requires time polynomial in n_e, Dec, and DB, and hence Force1 processes each query in polynomial time.

This completes the proof.

This result is somewhat surprising, as it shows that recursive clauses can be learned even given an adversarial choice of training examples. In contrast, most implemented ILP systems require well-chosen examples to learn recursive clauses.

This formal result can also be strengthened in a number of technical ways. One of the more interesting strengthenings is to consider a variant of Force1 that maintains a fixed set of positive and negative examples, and constructs the set of all least general clauses that are consistent with these examples: this could be done by taking each of the clauses BOTTOM_d ∪ {L_r1}, ..., BOTTOM_d ∪ {L_rp}, forcibly simulating them on each of the positive examples in turn, and then discarding those clauses that cover one or more negative examples. This set of clauses could then be used to tractably encode the version space of all consistent programs, using the [S, N] representation for version spaces (Hirsh, 1992).

Extending the Learning Algorithm

We will now consider a number of ways in which the result of Theorem 5 can be extended.

The Equality-Predicate and Unique-Mode Assumptions

Theorem 5 shows that the language family d-DepthLinRec[DB_=, a-DetDEC_=1] is identifiable from equivalence queries. It is natural to ask if this result can be extended by dropping the assumptions that an equality predicate is present and that the declaration contains a unique legal mode for each predicate: that is, if the result can be extended to the language family

    d-DepthLinRec[DB, a-DetDEC]

This extension is in fact straightforward. Given a database DB and a declaration Dec = (p, a₀, R) that do not satisfy the equality-predicate and unique-mode assumptions, one can modify them as follows.

1. For every constant c appearing in DB, add the fact equal(c, c) to DB.

2. For every predicate q that has k valid modes qs₁, ..., qs_k in R:

   (a) remove the mode declarations for q, and replace them with k mode strings for the k new predicates q_s1, ..., q_sk, letting q_si s_i be the unique legal mode for the predicate q_si;

   (b) remove every fact q(t₁, ..., t_a) of the predicate q from DB, and replace it with the k facts q_s1(t₁, ..., t_a), ..., q_sk(t₁, ..., t_a).

Note that if the arity of predicates is bounded by a constant a, then the number of modes k for any predicate q is bounded by the constant 2^a, and hence these transformations can be performed in polynomial time, and with only a polynomial increase in the size of Dec and DB.
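The transformation just described is easy to implement. Here is a minimal Python sketch; it is our own illustration, and the representation (facts as tuples whose first element is the predicate symbol, modes as sign strings such as '+--') and the renaming scheme for the new predicates q_si are assumptions of the sketch, not the paper's notation.

    def add_equality(db):
        """Step 1: add equal(c,c) for every constant c appearing in DB."""
        constants = {t for fact in db for t in fact[1:]}
        return db | {('equal', c, c) for c in constants}

    def split_modes(db, modes):
        """Step 2: give every predicate a unique legal mode by making one
        renamed copy q_s of q for each legal mode s of q.
        modes: predicate symbol -> set of sign strings, e.g. {'q': {'+-'}}."""
        new_modes = {}
        for q, sign_set in modes.items():
            for signs in sign_set:
                new_modes[q + '_' + signs] = signs     # unique legal mode
        new_db = set()
        for fact in db:
            # one copy of the fact per legal mode of its predicate;
            # facts whose predicate has no declared mode are dropped
            for signs in modes.get(fact[0], ()):
                new_db.add((fact[0] + '_' + signs,) + fact[1:])
        return new_db, new_modes

After this rewriting, Force1 can be run unchanged, since every predicate now has exactly one legal mode and equal/2 is available in the database.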
Clearly any target clause C_t ∈ d-DepthLinRec[DB, Dec] is equivalent to some clause C'_t ∈ d-DepthLinRec[DB', Dec'], where DB' and Dec' are the modified versions of DB and Dec constructed above. Using Force1 it is possible to identify C'_t. (In learning C'_t, one must also perform steps 1 and 2b above on the description part D of every counterexample (f, D).) Finally, one can convert C'_t to an equivalent clause in d-DepthLinRec[DB, Dec] by repeatedly resolving against the clause equal(X,X) ←, and also replacing every predicate symbol q_si with q.

This leads to the following strengthening of Theorem 5:

Proposition 6 For any constants a and d, the language family

    d-DepthLinRec[DB, a-DetDEC]

is uniformly identifiable from equivalence queries.

The Datalog Assumption

So far we have assumed that the target program contains no function symbols, and that the background knowledge provided by the user is a database of ground facts. While convenient for formal analysis, these assumptions can be relaxed.

Examination of the learning algorithm shows that the database DB is used in only two ways.

In forcibly simulating a hypothesis on an extended instance (f, D), it is necessary to find a substitution θ' that makes a literal Lθ true in the database DB ∪ D. While this can be done algorithmically if DB and D are sets of ground facts, it is also plausible to assume that the user has provided an oracle that answers in polynomial time any mode-correct query Lθ to the database DB. Specifically, the answer of the oracle will be either

    – the (unique) most-general substitution θ' such that DB ∧ D ⊢ Lθθ' and Lθθ' is ground; or

    – "no" if no such θ' exists.

Such an oracle would presumably take the form of an efficient theorem-prover for DB.

When calling ForceSim, the top-level learning algorithm uses DB and D to determine a depth bound on the length of a proof made using the hypothesis program. Again, it is reasonable to assume that the user can provide this information directly, in the form of an oracle. Specifically, this oracle would provide for any fact f a polynomial upper bound on the depth of the proof for f in the target program.

Finally we note that if efficient (but non-ground) background knowledge is allowed, then function symbols can always be removed via flattening (Rouveirol, 1994). This transformation also preserves determinacy, although it may increase depth; in general, the depth of a flattened clause depends also on term depth in the original clause. Thus, the assumption that the target program is in Datalog can be replaced by assumptions that the term depth is bounded by a constant, and that two oracles are available: an oracle that answers queries to the background knowledge, and a depth-bound oracle. Both types of oracles have been frequently assumed in the literature (Shapiro, 1982; Page & Frisch, 1992; Džeroski et al., 1992).

Learning k-ary Recursive Clauses

It is also natural to ask if Theorem 5 can be extended to clauses that are not linear recursive. One interesting case is the case of closed k-ary recursive clauses for constant k. It is straightforward to extend Force1 to guess a tuple of k recursive literals L_r1, ..., L_rk, and then to extend ForceSim to recursively generalize the hypothesis clause on each of the facts L_r1 θ, ..., L_rk θ. The arguments of Theorems 4 and 5 can be modified to show that this extension will identify the target clause after a polynomial number of equivalence queries.

Unfortunately, however, it is no longer the case that ForceSim runs in polynomial time.
This is easily seen if one considers a tree of all the recursive calls made by ForceSim; in general, this tree will have branching factor k and polynomial depth, and hence exponential size. This result is unsurprising, as the implementation of ForceSim described forcibly simulates a depth-bounded top-down interpreter, and a k-ary recursive program can take exponential time to interpret with such an interpreter.

There are at least two possible solutions to this problem. One possible solution is to retain the simple top-down forced simulation procedure, and require the user to provide a depth bound tighter than (a||D|| + a||DB||)^a₀, the maximal possible depth of a tree. For example, in learning a 2-ary recursive sort such as quicksort, the user might specify a logarithmic depth bound, thus guaranteeing that ForceSim is polynomial-time. This requires additional input from the user, but would be easy to implement. It also has the advantage (not shared by the approach described below) that the hypothesized program can be executed using a simple depth-bounded Prolog interpreter, and will always have shallow proof trees. This seems to be a plausible bias to impose when learning k-ary recursive Prolog programs, as many of these tend to have shallow proof trees.

A second solution to the possible high cost of forced simulation for k-ary recursive programs is to forcibly simulate a "smarter" type of interpreter: one which can execute a k-ary recursive program in polynomial time. One sound and complete theorem-prover for closed k-ary recursive programs can be implemented as follows.

Construct a top-down proof tree in the usual fashion, i.e., using a depth-first left-to-right strategy, but maintain a list of the ancestors of the current subgoal, and also a list VISITED that records, for each previously visited node in the tree, the subgoal associated with that node. Now, suppose that in the course of constructing the proof tree one generates a subgoal f' that is on the VISITED list. Since the traversal of the tree is depth-first left-to-right, the node associated with f' is either an ancestor of the current node, or is a descendant of some left sibling of an ancestor of the current node. In the former case, the proof tree contains a loop, and cannot produce a successful proof; in this case the theorem-prover should exit with failure. In the latter case, a proof must already exist for f', and hence nodes below the current node in the tree need not be visited; instead the theorem prover can simply assume that f' is true.

This top-down interpreter can be easily extended into a forced simulation procedure: one simply traverses the tree in the same order, generalizing the current hypothesis H as needed to justify each inference step in the tree. The only additional point to note is that if one is performing forced simulation and revisits a previously proved subgoal f' at a node n, the current clause H need not be further generalized in order to prove f', and hence it is again permissible to simply skip the portion of the tree below n.
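The following Python sketch illustrates this interpreter for a program consisting of ground facts plus a single closed k-ary recursive clause; it reuses match() and subst() from the earlier forced-simulation sketch. The representation and names are our own, and as simplifications we memoize only successfully proved subgoals and detect loops with an ancestor set rather than an explicit tree.

    def prove(goal, clause, db, ancestors=frozenset(), visited=None):
        """Top-down, depth-first, left-to-right proof of a ground `goal`,
        with loop detection (ancestors) and reuse of proved subgoals."""
        if visited is None:
            visited = set()
        if goal in db or goal in visited:
            return True                     # fact, or already proved elsewhere
        if goal in ancestors:
            return False                    # loop: this branch cannot succeed
        head, body = clause
        theta = match(head, goal, {})
        if theta is None:
            return False
        for lit in body:
            if lit[0] == head[0]:           # recursive literal: the subgoal is
                sub = subst(lit, theta)     # ground, since the clause is closed
                if not prove(sub, clause, db, ancestors | {goal}, visited):
                    return False
            else:                           # nonrecursive literal: determinate
                for fact in db:             # database lookup extends theta
                    t = match(lit, fact, theta) if fact[0] == lit[0] else None
                    if t is not None:
                        theta = t
                        break
                else:
                    return False
        visited.add(goal)                   # record success for later branches
        return True

Sharing the VISITED set across sibling branches is what brings the running time down from exponential to polynomial: each of the polynomially many distinct ground subgoals is expanded at most once.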
We thus have the following result.

Theorem 7 Let d-Depth-k-Rec be the set of k-ary closed recursive clauses of depth d. For any constants a, d, and k the language family

    d-Depth-k-Rec[DB, a-DetDEC]

is uniformly identifiable from equivalence queries.

Proof: Omitted, but following the informal argument made above.

Note that we give this result without the restrictions that the database contains an equality relation and that the declaration is unique-mode, since the tricks used to relax these restrictions in Proposition 6 are still applicable.

Learning Recursive and Base Cases Simultaneously

So far, we have analyzed the problem of learning single clauses: first a single nonrecursive clause, and then a single recursive clause. However, every useful recursive program contains at least two clauses: a recursive clause, and a nonrecursive base case. It is natural to ask if it is possible to learn a complete recursive program by simultaneously learning both a recursive clause, and its associated nonrecursive base case.

In general, this is not possible, as is demonstrated elsewhere (Cohen, 1995). However, there are several cases in which the positive result can be extended to two-clause programs. In this section, we will first discuss learning a recursive clause and base clause simultaneously, assuming that any determinate base clause is possible, but also assuming that an additional "hint" is available, in the form of a special "basecase" oracle. We will then discuss various alternative types of "hints".

Let P be a target program with base clause C_B and recursive clause C_R. A basecase oracle for P takes as input an extended instance (f, D) and returns "yes" if C_B ∧ DB ∧ D ⊢ f, and "no" otherwise. In other words, the oracle determines if f is covered by the nonrecursive base clause alone. As an example, for the append program, the basecase oracle should return "yes" for an instance append(Xs,Ys,Zs) when Xs is the empty list, and "no" otherwise.

Given the existence of a basecase oracle, the learning algorithm can be extended as follows. As before, all possible recursive literals L_ri of the clause BOTTOM_d are generated; however, in this case, the learner will test two-clause hypotheses that are initially of the form (BOTTOM_d ∪ {L_ri}, BOTTOM_d). To forcibly simulate such a hypothesis on a fact f, the following procedure is used. After checking the usual termination conditions, the forced simulator checks to see if BASECASE(f) is true. If so, it calls ForceSim_NR (with appropriate arguments) to generalize the current hypothesis for the base case. If BASECASE(f) is false, then the recursive clause H_r is forcibly simulated on f, a subgoal L_r θ is generated as before, and the generalized program is recursively forcibly simulated on the subgoal. Figures 6 and 7 present a learning algorithm Force2 for two-clause programs consisting of one linear recursive clause C_R and one nonrecursive clause C_B, under the assumption that both equivalence and basecase oracles are available. It is straightforward to extend the arguments of Theorem 5 to this case, leading to the following result.

Theorem 8 Let d-Depth-2-Clause be the set of 2-clause programs consisting of one clause in d-DepthLinRec and one clause in d-DepthNonRec. For any constants a and d the language family

    d-Depth-2-Clause[DB, a-DetDEC]

is uniformly identifiable from equivalence and basecase queries.

Proof: Omitted.

A companion paper (Cohen, 1995) shows that something like the basecase oracle is necessary: in particular, without any "hints" about the base clause, learning a two-clause linear recursive program is as hard as learning boolean DNF. However, there are several situations in which the basecase oracle can be dispensed with.

Case 1. The basecase oracle can be replaced by a polynomial-sized set of possible base clauses. The learning algorithm in this case is to enumerate pairs of base clauses C_Bi and "starting clauses" BOTTOM ∪ {L_rj}, generalize the starting clause with forced simulation, and mark a pair as incorrect if overgeneralization is detected.

Case 2. The basecase oracle can be replaced by a fixed rule that determines when the base clause is applicable. For example, consider the rule that says that the base clause is applicable to any atom p(X₁, ..., X_a) such that no X_i is a non-null list. Adopting such a rule leads immediately to a learning procedure that pac-learns exactly those two-clause linear recursive programs for which the rule is correct.

Case 3. The basecase oracle can also be replaced by a polynomial-sized set of rules for determining when a base clause is applicable.
The learning algorithm in this case is to pick an unmarked decision rule and run Force2 using that rule as a basecase oracle. If Force2 returns "no consistent hypothesis" then the decision rule is marked incorrect, and a new one is chosen. This algorithm will learn those two-clause linear recursive programs for which any of the given decision rules is correct. Even though the general problem of determining a basecase decision rule for an arbitrary Datalog program may be difficult, it may be that a small number of decision procedures apply to a large number of common Prolog programs. For example, the recursion for most list-manipulation programs halts when some argument is reduced to a null list or to a singleton list. Thus Case 3 above seems likely to cover a large fraction of the automatic logic programming programs of practical interest.

We also note that heuristics have been proposed for finding such basecase decision rules automatically using typing restrictions (Stahl, Tausend, & Wirth, 1993).

Combining the Results

Finally, we note that all of the extensions described above are compatible. This means that if we let kd-MaxRecLang be the language of two-clause programs consisting of one clause C_R that is k-ary closed recursive and depth-d determinate, and one clause C_B that is nonrecursive and depth-d determinate, then the following holds.

Proposition 9 For any constants a, k and d the language family

    kd-MaxRecLang[DB, a-DetDEC]

is uniformly identifiable from equivalence and basecase queries.

Further Extensions

The notation kd-MaxRecLang may seem at this point to be unjustified; although it is the most expressive language of recursive clauses that we have proven to be learnable, there are numerous extensions that may be efficiently learnable. For example, one might generalize the language to allow an arbitrary number of recursive clauses, or to include clauses that are not determinate. These generalizations might very well be pac-learnable, given the results that we have presented so far. However, a companion paper (Cohen, 1995) presents a series of negative results showing that most natural generalizations of kd-MaxRecLang are not efficiently learnable, and further that kd-MaxRecLang itself is not efficiently learnable without the basecase oracle. Specifically, the companion paper shows that eliminating the basecase oracle leads to a problem that is as hard as learning boolean DNF, an open problem in computational learning theory. Similarly, learning two linear recursive clauses simultaneously is as hard as learning DNF, even if the base case is known. Finally, the following learning problems are all as hard as breaking certain (presumably) secure cryptographic codes: learning n linear recursive determinate clauses, learning one n-ary recursive determinate clause, or learning one linear recursive "k-local" clause. All of these negative results hold not only for the model of identification from equivalence queries, but also for the weaker models of pac-learnability and pac-predictability.

Related Work

In discussing related work we will concentrate on previous formal analyses that employ a learning model similar to that considered here: namely, models that (a) require all computation to be polynomial in natural parameters of the problem, and (b) assume either a neutral or adversarial source of examples, such as equivalence queries or stochastically presented examples. We note, however, that much previous formal work exists that relies on different assumptions. For instance, there has been much work in which membership or subset queries are allowed (Shapiro, 1982; De Raedt & Bruynooghe, 1992), or where examples are chosen in some non-random manner that is helpful to the learner (Ling, 1992; De Raedt & Džeroski, 1994). There has also been some work in which the efficiency requirements imposed by the pac-learnability model are relaxed (Nienhuys-Cheng & Polman, 1994). If the requirement of efficiency is relaxed far enough, very general positive results can be obtained using very simple learning algorithms. For example, in the model of learnability in the limit (Gold, 1967), any language that is both recursively enumerable and decidable (which includes all of Datalog) can be learned by a simple enumeration procedure; in the model of U-learnability (Muggleton & Page, 1994) any language that is polynomially enumerable and polynomially decidable can be learned by enumeration.

The most similar previous work is that of Frazier and Page (1993a, 1993b). They analyze the learnability from equivalence queries of recursive programs with function symbols but without background knowledge. The positive results they provide are for program classes that satisfy the following property: given a set of positive examples S+ that requires all clauses in the target program to prove the instances in S+, only a polynomial number of recursive clauses are possible; further, the base clause must have a certain highly constrained form. Thus the concept class is "almost" bounded in size by a polynomial. The learning algorithm for such a program class is to interleave a series of equivalence queries that test every possible target program. In contrast, our positive results are for exponentially large classes of recursive clauses. Frazier and Page also present a series of negative results suggesting that the learnable languages that they analyzed are difficult to generalize without sacrificing efficient learnability.

Previous results also exist on the pac-learnability of nonrecursive constant-depth determinate programs, and on the pac-learnability of recursive constant-depth determinate programs in a model that also allows membership and subset queries (Džeroski et al., 1992).

The basis for the intelligent search used in our learning algorithms is the technique of forced simulation. This method finds the least implicant of a clause C that covers an extended instance e. Although when we developed this method we believed it to be original, subsequently we discovered that this was not the case: an identical technique had been previously proposed by Ling (1991).
Since an extended instance e can be converted (via saturation) to a ground Horn clause, there is also a close connection between forced simulation and recent work on "inverting implication" and "recursive anti-unification"; for instance, Muggleton (1994) describes a nondeterministic procedure for finding all clauses that imply a clause C, and Idestam-Almquist (1993) describes a means of constraining such an implicant-generating procedure to produce the least common implicant of two clauses. However, while both of these techniques have obvious applications in learning, both are extremely expensive in the worst case.

The CRUSTACEAN system (Aha et al., 1994) uses inverting implication in constrained settings to learn certain restricted classes of recursive programs. The class of programs efficiently learned by this system is not formally well-understood, but it appears to be similar to the classes analyzed by Frazier and Page. Experimental results show that these systems perform well on inferring recursive programs that use function symbols in certain restricted ways. This system cannot, however, make use of background knowledge.

Finally, we wish to direct the reader to several pieces of our own research that are relevant. As noted above, a companion paper exists which presents negative learnability results for several natural generalizations of the language kd-MaxRecLang (Cohen, 1995). Another related paper investigates the learnability of non-recursive Prolog programs (Cohen, 1993b); this paper also contains a number of negative results which strongly motivate the restriction of constant-depth determinacy. A final prior paper which may be of interest presents some experimental results with a Prolog implementation of a variant of the Force2 algorithm (Cohen, 1993a). This paper shows that forced simulation can be the basis of a learning program that outperforms state-of-the-art heuristic methods such as FOIL (Quinlan, 1990; Quinlan & Cameron-Jones, 1993) in learning from randomly chosen examples.

Conclusions

Just as it is often desirable to have guarantees of correctness for a program, in many plausible contexts it would be highly desirable to have an automatic programming system offer some formal guarantees of correctness. The topic of this paper is the learnability of recursive logic programs using formally well-justified algorithms. More specifically, we have been concerned with the development of algorithms that are provably sound and efficient in learning recursive logic programs from equivalence queries. We showed that one constant-depth determinate closed k-ary recursive clause is identifiable from equivalence queries; this implies immediately that this language is also learnable in Valiant's (1984) model of pac-learnability. We also showed that a program consisting of one such recursive clause and one constant-depth determinate nonrecursive clause is identifiable from equivalence queries given an additional "basecase oracle", which determines if a positive example is covered by the non-recursive base clause of the target program alone.

In obtaining these results, we have introduced several new formal techniques for analyzing the learnability of recursive programs. We have also shown the soundness and efficiency of several instances of generalization by forced simulation. This method may have applications in practical learning systems.
The Force2 algorithm compares quite well experimentally with modern ILP systems on learning problems from the restricted class that it can identify (Cohen, 1993a); thus sound learning methods like Force2 might be useful as a filter before a more general ILP system like FOIL (Quinlan, 1990; Quinlan & Cameron-Jones, 1993). Alternatively, forced simulation could be used in heuristic programs. For example, although forced simulation for programs with many recursive clauses is nondeterministic and hence potentially inefficient, one could introduce heuristics that would make the forced simulation efficient, at the cost of completeness.

A companion paper (Cohen, 1995) shows that the positive results of this paper are not likely to be improved: either eliminating the basecase oracle for the language above or learning two recursive clauses simultaneously is as hard as learning DNF, and learning n linear recursive determinate clauses, one n-ary recursive determinate clause, or one linear recursive "k-local" clause is as hard as breaking certain cryptographic codes. With the positive results of this paper, these negative results establish the boundaries of learnability for function-free recursive programs in the pac-learnability model. These results thus not only give a prescription for building a formally justified system for learning recursive programs; taken together, they also provide upper bounds on what one can hope to achieve with an efficient, formally justified system that learns recursive programs from random examples alone.

Appendix A. Additional Proofs

Theorem 1 states: Let Dec = (p, a₀, R) be a declaration in a-DetDEC_=, let n_r = ||R||, let X₁, ..., X_a₀ be distinct variables, and define the clause BOTTOM_d as follows:

    BOTTOM_d(Dec) ≡ CONSTRAIN_Dec(DEEPEN^d_Dec(p(X₁, ..., X_a₀) ←))

For any constants d and a, the following are true: the size of BOTTOM_d(Dec) is polynomial in n_r; every depth-d clause that satisfies Dec is equivalent to some subclause of BOTTOM_d(Dec).

Proof: Let us first establish the polynomial bound on the size of BOTTOM_d. Let C be a clause of size n. As the number of variables in C is bounded by an, the size of the set L^D of new literals added by DEEPEN_Dec is at most (an)^(a−1) n_r. Thus for any clause C

    ||DEEPEN_Dec(C)|| ≤ n + (an)^(a−1) n_r    (1)

By a similar argument

    ||CONSTRAIN_Dec(C)|| ≤ n + (an)^a n_r    (2)

Since both of the functions DEEPEN_Dec and CONSTRAIN_Dec give outputs that are polynomially larger in size than their inputs, it follows that composing these functions a constant number of times, as was done in computing BOTTOM_d for constant d, will also produce only a polynomial increase in the size.

Next, we wish to show that every depth-d determinate clause C that satisfies Dec is equivalent to some subclause of BOTTOM_d. Let C be some depth-d determinate clause, and without loss of generality let us assume that no pair of literals L_i and L_j in the body of C have the same mode, predicate symbol, and sequence of input variables.

Given C, let us now define the substitution σ_C as follows:

1. Initially set σ_C = {X̄₁ = X₁, ..., X̄_a₀ = X_a₀}, where X̄₁, ..., X̄_a₀ are the arguments to the head of BOTTOM_d and X₁, ..., X_a₀ are the arguments to the head of C. Notice that because the variables in the head of BOTTOM_d are distinct, this mapping is well-defined.

2. Next, examine each of the literals in the body of C in left-to-right order. For each literal L, let variables T₁, ..., T_k be its input variables. For each literal L̄ in the body of BOTTOM_d with the same mode and predicate symbol whose input variables T̄₁, ..., T̄_k are such that ∀i: 1 ≤ i ≤ k, T̄_i σ_C = T_i, modify σ_C as follows:

    σ_C ← σ_C ∪ {Ū₁ = U₁, ..., Ū_l = U_l}

where U₁, ..., U_l are the output variables of L and Ū₁, ..., Ū_l are the output variables of L̄.
Notice that because we assume that C contains only one literal L with a given predicate symbol and sequence of input variables, and because the output variables of literals L̄ in BOTTOM_d are distinct, this mapping is again well-defined. It is also easy to verify (by induction on the length of C) that in executing this procedure some variable in BOTTOM_d is always mapped to each input variable T_i, and that at least one L̄ meeting the requirements above exists. Thus the mapping σ_C is onto the variables appearing in C.

Let Ā be the head of BOTTOM_d, and consider the clause C' which is defined as follows:

    The head of C' is Āσ_C.

    The body of C' contains all literals L̄ from the body of BOTTOM_d such that either L̄σ_C is in the body of C, or L̄ is the literal equal(X̄_i, X̄_j) and X̄_i σ_C = X̄_j σ_C.

We claim that C' is a subclause of BOTTOM_d that is equivalent to C. Certainly C' is a subclause of BOTTOM_d. One way to see that it is equivalent to C is to consider the clause Ĉ and the substitution σ̂_C which are generated as follows. Initially, let Ĉ = C' and let σ̂_C = σ_C. Then, for every literal L̄ = equal(X̄_i, X̄_j) in the body of Ĉ, delete L̄ from Ĉ, and finally replace Ĉ with Ĉθ_ij and replace σ̂_C with (σ̂_C)θ_ij, where θ_ij is the substitution {X̄_i = X̄_ij, X̄_j = X̄_ij} and X̄_ij is some new variable not previously appearing in Ĉ. (Note: by (σ̂_C)θ_ij we refer to the substitution formed by replacing every occurrence of X̄_i or X̄_j appearing in σ̂_C with X̄_ij.) Ĉ is semantically equivalent to C' because the operation described above is equivalent to simply resolving each possible L̄ in the body of C' against the clause "equal(X,X) ←".

The following are now straightforward to verify:

σ̂_C is a one-to-one mapping. To see that this is true, notice that for every pair of assignments X̄_i = Y and X̄_j = Y in σ_C there must be a literal equal(X̄_i, X̄_j) in C'. Hence, following the process described above, the assignments X̄_i = Y and X̄_j = Y in σ̂_C would eventually be replaced with X̄_ij = Y and X̄_ij = Y.

σ̂_C is onto the variables in C. Notice that σ_C was onto the variables in C, and for every assignment X̄_i = Y in σ_C there is some assignment in σ̂_C with a right-hand side of Y (and this assignment is either of the form X̄_i = Y or X̄_ij = Y). Thus σ̂_C is also onto the variables in C.

A literal L̄ is in the body of Ĉ iff L̄σ̂_C is in the body of C. This follows from the definition of C' and from the fact that for every literal L̄ from C' that is not of the form equal(X̄_i, X̄_j) there is a corresponding literal in Ĉ.

Thus Ĉ is an alphabetic variant of C, and hence is equivalent to C. Since Ĉ is also equivalent to C', it must be that C' is equivalent to C, which proves our claim.

Acknowledgements

The author wishes to thank three anonymous JAIR reviewers for a number of useful suggestions on the presentation and technical content.
References

Aha, D., Lapointe, S., Ling, C. X., & Matwin, S. (1994). Inverting implication with small training sets. Springer-Verlag.
Angluin, D. (1988). Queries and concept learning. Machine Learning.
Angluin, D. (1989). Equivalence queries and approximate fingerprints.
Bergadano, F., & Gunetti, D. (1993). An interactive system to learn functional logic programs.
Biermann, A. (1978). The inference of regular LISP programs from examples. IEEE Transactions on Systems, Man and Cybernetics.
Cohen, W. W. (1993a). A pac-learning algorithm for a restricted class of recursive logic programs.
Cohen, W. W. (1993b). Pac-learning non-recursive Prolog clauses.
Cohen, W. W. (1993c). Rapid prototyping of ILP systems using explicit bias.
Cohen, W. W. (1994). Pac-learning nondeterminate clauses.
Cohen, W. W. (1995). Pac-learning recursive logic programs: negative results. Journal of AI Research.
De Raedt, L., & Bruynooghe, M. (1992). Interactive concept-learning and constructive induction by analogy. Machine Learning.
De Raedt, L., & Džeroski, S. (1994). First-order jk-clausal theories are PAC-learnable. In Proceedings of the Fourth International Workshop on Inductive Logic Programming.
De Raedt, L., Lavrač, N., & Džeroski, S. (1993). Multiple predicate learning.
Džeroski, S., Muggleton, S., & Russell, S. (1992). Pac-learnability of determinate logic programs.
Frazier, M., & Page, C. D. (1993a). Learnability in inductive logic programming: Some basic results and techniques.
Frazier, M., & Page, C. D. (1993b). Learnability of recursive, non-determinate theories: Some basic results and techniques.
Gold, M. (1967). Language identification in the limit. Information and Control.
Hirsh, H. (1992). Polynomial-time learning with version spaces. MIT Press.
Idestam-Almquist, P. (1993). Generalization under implication by recursive anti-unification. Morgan Kaufmann.
King, R. D., Muggleton, S., Lewis, R. A., & Sternberg, M. J. E. (1992). Drug design by machine learning: the use of inductive logic programming to model the structure-activity relationships of trimethoprim analogues binding to dihydrofolate reductase. Proceedings of the National Academy of Sciences.
Lavrač, N., & Džeroski, S. (1992). Background knowledge and declarative bias in inductive concept learning. Springer-Verlag.
Ling, C. (1991). Inventing necessary theoretical terms in scientific discovery and inductive logic programming.
Ling, C. (1992). Logic program synthesis from good examples. Academic Press.
Lloyd, J. W. (1987). Foundations of Logic Programming: Second Edition. Springer-Verlag.
Muggleton, S. (1994). Inverting implication. Artificial Intelligence.
Muggleton, S., & De Raedt, L. (1994). Inductive logic programming: Theory and methods. Journal of Logic Programming.
Muggleton, S., & Feng, C. (1992). Efficient induction of logic programs. Academic Press.
Muggleton, S., King, R. D., & Sternberg, M. J. E. (1992). Protein secondary structure prediction using logic-based machine learning. Protein Engineering.
Muggleton, S., & Page, C. D. (1994). A learnability model for universal representations.
Muggleton, S. (Ed.) (1992). Inductive Logic Programming. Academic Press.
Nienhuys-Cheng, S., & Polman, M. (1994). Sample pac-learnability in model inference. Springer-Verlag.
Page, C. D., & Frisch, A. M. (1992). Generalization and learnability: A study of constrained atoms. Academic Press.
Pazzani, M., & Kibler, D. (1992). The utility of knowledge in inductive learning. Machine Learning.
Quinlan, J. R. (1990). Learning logical definitions from relations. Machine Learning.
Quinlan, J. R. (1991). Determinate literals in inductive logic programming. Morgan Kaufmann.
Quinlan, J. R., & Cameron-Jones, R. M. (1993). FOIL: A midterm report. Springer-Verlag.
Rouveirol, C. (1994). Flattening and saturation: two representation changes for generalization. Machine Learning.
Shapiro, E. (1982). Algorithmic Program Debugging. MIT Press.
Srinivasan, A., Muggleton, S. H., King, R. D., & Sternberg, M. J. E. (1994). Mutagenesis: ILP experiments in a non-determinate biological domain.
Stahl, I., Tausend, B., & Wirth, R. (1993). Two methods for improving inductive logic programming.
Summers, P. D. (1977). A methodology for LISP program construction from examples. Journal of the Association for Computing Machinery.
Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM.
Zelle, J. M., & Mooney, R. J. (1994). Inducing deterministic Prolog parsers from treebanks: a machine learning approach. MIT Press.
Displayed formulas recovered from extraction:

    DEEPEN_Dec(A ← B₁ ∧ ... ∧ B_r) ≡ A ← B₁ ∧ ... ∧ B_r ∧ (∧_{L_i ∈ L^D} L_i)

    DEEPEN^i_Dec(C) ≡ C                                     if i = 0
    DEEPEN^i_Dec(C) ≡ DEEPEN_Dec(DEEPEN^(i−1)_Dec(C))       otherwise

    CONSTRAIN_Dec(A ← B₁ ∧ ... ∧ B_r) ≡ A ← B₁ ∧ ... ∧ B_r ∧ (∧_{L_i ∈ L^C} L_i)

    CONSTRAIN_D0(DEEPEN_D0(p(X,Y) ←)) ≡
        p(X,Y) ← mother(X,XM) ∧ father(X,XF) ∧ mother(Y,YM) ∧ father(Y,YF) ∧
            male(X) ∧ female(X) ∧ male(Y) ∧ female(Y) ∧
            male(XM) ∧ female(XM) ∧ male(XF) ∧ female(XF) ∧
            male(YM) ∧ female(YM) ∧ male(YF) ∧ female(YF) ∧
            equal(X,X) ∧ equal(X,XM) ∧ equal(X,XF) ∧ equal(X,Y) ∧ equal(X,YM) ∧ equal(X,YF) ∧
            equal(XM,X) ∧ equal(XM,XM) ∧ equal(XM,XF) ∧ equal(XM,Y) ∧ equal(XM,YM) ∧ equal(XM,YF) ∧
            equal(XF,X) ∧ equal(XF,XM) ∧ equal(XF,XF) ∧ equal(XF,Y) ∧ equal(XF,YM) ∧ equal(XF,YF) ∧
            equal(Y,X) ∧ equal(Y,XM) ∧ equal(Y,XF) ∧ equal(Y,Y) ∧ equal(Y,YM) ∧ equal(Y,YF) ∧
            equal(YM,X) ∧ equal(YM,XM) ∧ equal(YM,XF) ∧ equal(YM,Y) ∧ equal(YM,YM) ∧ equal(YM,YF) ∧
            equal(YF,X) ∧ equal(YF,XM) ∧ equal(YF,XF) ∧ equal(YF,Y) ∧ equal(YF,YM) ∧ equal(YF,YF)

    f = append(l12, l3, l123)
    D = { cons(l123,1,l23), cons(l23,2,l3), cons(l3,3,nil),
          cons(l12,1,l2), cons(l2,2,nil), append(nil,l3,l3) }
Pac-Learning Recursive Logic Programs: Efficient Algorithms
We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "basecase" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs.
William W Cohen
Figure captions and contents recovered from extraction:

Figure 1: A learning algorithm for nonrecursive depth-d determinate clauses.

    algorithm Force1_NR(d, Dec, DB):
        % below, BOTTOM_d is the most specific possible clause
        let H = BOTTOM_d(Dec)
        repeat
            Ans = answer to the query "Is H correct?"
            if Ans = "yes" then return H
            elseif Ans is a negative example then
                return "no consistent hypothesis"
            elseif Ans is a positive example e+ then
                % generalize H minimally to cover e+
                let (f, D) be the components of the extended instance e+
                H = ForceSim_NR(H, f, Dec, DB ∪ D)
                if H = FAILURE then return "no consistent hypothesis"
            endif
        endrepeat

Figure 4: Forced simulation for linear closed recursive clauses.

Figure 5: A learning algorithm for nonrecursive depth-d determinate clauses.

Figure 6: A learning algorithm for two-clause recursive programs.

Example clauses recovered from figure fragments:

    DEEPEN_D0(p(X,Y) ←) ≡
        p(X,Y) ← mother(X,XM) ∧ father(X,XF) ∧ mother(Y,YM) ∧ father(Y,YF)

    DEEPEN²_D0(p(X,Y) ←) ≡ DEEPEN_D0(DEEPEN_D0(p(X,Y) ←)) ≡
        p(X,Y) ← mother(X,XM) ∧ father(X,XF) ∧ mother(Y,YM) ∧ father(Y,YF) ∧
            mother(XM,XMM) ∧ father(XM,XMF) ∧ mother(XF,XFM) ∧ father(XF,XFF) ∧
            mother(YM,YMM) ∧ father(YM,YMF) ∧ mother(YF,YFM) ∧ father(YF,YFF)

    C1: p(A,B) ← mother(A,C) ∧ father(A,D) ∧ mother(B,C) ∧ father(B,D) ∧ male(A)
    D1: p(X,Y) ← mother(X,XM) ∧ father(X,XF) ∧ mother(Y,YM) ∧ father(Y,YF) ∧
            male(X) ∧ equal(XM,YM) ∧ equal(XF,YF)
    C2: p(A,B) ← father(A,B) ∧ female(A)
    D2: p(X,Y) ← father(X,XF) ∧ female(X) ∧ equal(XF,Y)

Footnote recovered from a figure fragment: The reader may object that useful recursive programs always have at least two clauses, a recursive clause and a nonrecursive base case. In posing the problem of learning a single recursive clause, we are thus assuming the non-recursive "base case" of the target program is provided as background knowledge, either in the background database DB, or in the description atoms D of extended instances.
Pac-Learning Recursive Logic Programs: Negative Results

Introduction

Inductive logic programming (ILP) (Muggleton, 1992; Muggleton & De Raedt, 1994) is an active area of machine learning research in which the hypotheses of a learning system are expressed in a logic programming language. While many different learning problems have been considered in ILP, including some of great practical interest (Muggleton, King, & Sternberg, 1992; King, Muggleton, Lewis, & Sternberg, 1992; Zelle & Mooney, 1994; Cohen, 1994b), a class of problems that is frequently considered is to reconstruct simple list-processing or arithmetic functions from examples. A prototypical problem of this sort might be learning to append two lists. Often, this sort of task is attempted using only randomly-selected positive and negative examples of the target concept.

Based on its similarity to the problems studied in the field of automatic programming from examples (Summers, 1977; Biermann, 1978), we will (informally) call this class of learning tasks automatic logic programming problems. While a number of experimental systems have been built (Quinlan & Cameron-Jones, 1993; Aha, Lapointe, Ling, & Matwin, 1994), the experimental success of automatic logic programming systems has been limited.

One common property of automatic logic programming problems is the presence of recursion. The goal of this paper is to explore by analytic methods the computational limitations on learning recursive programs in Valiant's model of pac-learnability (1984). (In brief, this model requires that an accurate approximation of the target concept be found in polynomial time using a polynomial-sized set of labeled examples, which are chosen stochastically.) While it will surprise nobody that such limitations exist, it is far from obvious from previous research where these limits lie: there are few provably fast methods for learning recursive logic programs, and even fewer meaningful negative results.

The starting point for this investigation is a series of positive learnability results appearing in a companion paper (Cohen, 1995). These results show that a single constant-depth determinate clause with a constant number of "closed" recursive calls is pac-learnable. They also show that a two-clause constant-depth determinate program consisting of one nonrecursive clause and one recursive clause of the type described above is pac-learnable, if some additional "hints" about the target concept are provided.

In this paper, we analyze a number of generalizations of these learnable languages. We show that relaxing any of the restrictions leads to difficult learning problems: in particular, learning problems that are either as hard as learning DNF (an open problem in computational learning theory), or as hard as cracking certain presumably secure cryptographic schemes. The main contribution of this paper, therefore, is a delineation of the boundaries of learnability for recursive logic programs.

The paper is organized as follows. In Section 2 we define the classes of logic programs and the learnability models that are used in this paper. In Section 3 we present cryptographic hardness results for two classes of constant-depth determinate recursive programs: programs with n linear recursive clauses, and programs with one n-ary recursive clause.
We also analyze the learnability of clauses of constant locality, another class of clauses that is pac-learnable in the nonrecursive case, and show that even a single linearly recursive local clause is cryptographically hard to learn. We then turn, in Section 4, to the analysis of even more restricted classes of recursive programs. We show that two different classes of constant-depth determinate programs are prediction-equivalent to boolean DNF: the class of programs containing a single linear recursive clause and a single nonrecursive clause, and the class of programs containing two linearly recursive clauses. Finally, we summarize the results of this paper and its companion, discuss related work, and conclude.

Although this paper can be read independently of its companion paper, we suggest that readers planning to read both papers begin with the companion paper (Cohen, 1995).

Background

For completeness, we will now present the technical background needed to state our results; however, aside from Sections 2.2 and 2.3, which introduce polynomial predictability and prediction-preserving reducibilities, respectively, this background closely follows that presented in the companion paper (Cohen, 1995). Readers are encouraged to skip this section if they are already familiar with the material.

Logic Programs

We will assume that the reader has some familiarity with logic programming (such as can be obtained by reading one of the standard texts (Lloyd, 1987)). Our treatment of logic programs differs only in that we will usually consider the body of a clause to be an ordered set of literals. We will also consider only logic programs without function symbols, i.e., programs written in Datalog.

The semantics of a Datalog program P will be defined relative to a database, DB, which is a set of ground atomic facts. (When convenient, we will also think of DB as a conjunction of ground unit clauses.) In particular, we will interpret P and DB as a subset of the set of all extended instances. An extended instance is a pair (f, D) in which the instance fact f is a ground fact, and the description D is a set of ground unit clauses. An extended instance (f, D) is covered by (P, DB) iff

    DB ∧ D ∧ P ⊢ f
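Operationally, this coverage test can be read as: saturate DB ∪ D under the clauses of P and check whether f was derived. The following Python sketch (our own illustration, reusing match() and subst() from the sketches earlier in this document) makes that concrete; the forward chaining terminates here precisely because a Datalog program over a fixed set of constants can generate only finitely many ground facts.

    def all_solutions(body, known, theta=None):
        """Enumerate substitutions grounding every body literal in `known`."""
        theta = theta or {}
        if not body:
            yield theta
            return
        first, rest = body[0], body[1:]
        for fact in known:
            if fact[0] == first[0]:
                t = match(first, fact, theta)
                if t is not None:
                    yield from all_solutions(rest, known, t)

    def covers(program, db, instance):
        """Does (program, db) cover the extended instance (f, D)?"""
        f, description = instance
        known = set(db) | set(description)
        changed = True
        while changed:                         # saturate by forward chaining
            changed = False
            for head, body in program:
                for theta in list(all_solutions(body, known)):
                    h = subst(head, theta)
                    if h not in known:
                        known.add(h)
                        changed = True
        return f in known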
Under the (sometimes quite severe) syntactic restrictions considered in this paper, there are often only a polynomial number of possible ground facts|i.e., the Herbrand base is polynomial. Hence if programs were interpreted in the usual model-theoretic way it would be possible to learn a program equivalent to any given target by simply memorizing the appropriate subset of the Herbrand base. However, if programs are interpreted as sets of extended instances, such trivial learning algorithms become impossible; even for extremely restricted program classes there are still an exponential number of extended instances of size n. Further discussion can be found in the companion paper (Cohen, 1995).\nBelow we will de ne some of the terminology for logic programs that will be used in this paper." }, { "figure_ref": [], "heading": "Input/Output Variables", "publication_ref": [], "table_ref": [], "text": "If A B 1 ^: : :^B r is an (ordered) de nite clause, then the input variables of the literal B i are those variables which also appear in the clause A B 1 ^: : : ^Bi 1 ; all other variables appearing in B i are called output variables." }, { "figure_ref": [], "heading": "Types of Recursion", "publication_ref": [], "table_ref": [], "text": "A literal in the body of a clause is a recursive literal if it has the same predicate symbol and arity as the head of the clause. If every clause in a program has at most one recursive literal, the program is linear recursive. If every clause in a program has at most k recursive literals, the program is k-ary recursive. If every recursive literal in a program contains no output variables, the program is closed recursive." }, { "figure_ref": [], "heading": "Depth", "publication_ref": [], "table_ref": [], "text": "The depth of a variable appearing in a (ordered) clause A B 1 ^: : :^B r is de ned as follows.\nVariables appearing in the head of a clause have depth zero. Otherwise, let B i be the rst literal containing the variable V , and let d be the maximal depth of the input variables of B i ; then the depth of V is d+1. The depth of a clause is the maximal depth of any variable in the clause." }, { "figure_ref": [], "heading": "Determinacy", "publication_ref": [ "b21", "b21", "b31", "b18", "b4", "b11" ], "table_ref": [], "text": "The literal B i in the clause A B 1 ^: : :^B r is determinate i for every possible substitution that uni es A with some fact e such that DB `B1 ^: : : ^Bi 1 there is at most one maximal substitution so that DB `Bi . A clause is determinate if all of its literals are determinate. Informally, determinate clauses are those that can be evaluated without backtracking by a Prolog interpreter.\nThe term ij-determinate (Muggleton & Feng, 1992) is sometimes used for programs that are depth i, determinate, and contain literals of arity j or less. A number of experimental systems exploit restrictions associated with limited depth and determinacy (Muggleton & Feng, 1992;Quinlan, 1991;Lavra c & D zeroski, 1992;Cohen, 1993c). The learnability of constant-depth determinate clauses has also received some formal study (D zeroski, Muggleton, & Russell, 1992;Cohen, 1993a)." 
}, { "figure_ref": [], "heading": "Mode Constraints and Declarations", "publication_ref": [], "table_ref": [], "text": "Mode declarations are commonly used in analyzing or describing Prolog code; for instance, the mode declaration \"components(+,-,-)\" indicates that the predicate components can be used when its first argument is an input and its second and third arguments are outputs. Formally, we define the mode of a literal L appearing in a clause C to be a string s such that the initial character of s is the predicate symbol of L, and for j > 1 the j-th character of s is a \"+\" if the (j-1)-th argument of L is an input variable and a \"-\" if the (j-1)-th argument of L is an output variable. (This definition assumes that all arguments to the head of a clause are inputs; this is justified since we are considering only how clauses behave in classifying extended instances, which are ground.) A mode constraint is a set of mode strings R = {s_1, ..., s_k}, and a clause C is said to satisfy a mode constraint R if for every literal L in the body of C, the mode of L is in R.
We define a declaration to be a tuple (p, a_0, R) where p is a predicate symbol, a_0 is an integer, and R is a mode constraint. We will say that a clause C satisfies a declaration if the head of C has arity a_0 and predicate symbol p, and if for every literal L in the body of C the mode of L appears in R." }, { "figure_ref": [], "heading": "Determinate Modes", "publication_ref": [], "table_ref": [], "text": "In a typical setting, the facts in the database DB and extended instances are not arbitrary: instead, they are representative of some \"real\" predicate, which may obey certain restrictions. Let us assume that all database and extended-instance facts will be drawn from some (possibly infinite) set F. Informally, a mode is determinate if the input positions of the facts in F functionally determine the output positions. Formally, if f = p(t_1, ..., t_k) is a fact with predicate symbol p and μ_p is a mode for p, then define inputs(f, μ_p) to be ⟨t_{i_1}, ..., t_{i_k}⟩, where i_1, ..., i_k are the indices of μ_p containing a \"+\", and define outputs(f, μ_p) to be ⟨t_{j_1}, ..., t_{j_l}⟩, where j_1, ..., j_l are the indices of μ_p containing a \"-\". We define a mode string μ_p for a predicate p to be determinate for F iff {⟨inputs(f, μ_p), outputs(f, μ_p)⟩ : f ∈ F} is a function.
The set of all declarations containing only modes determinate for F will be denoted DetDEC_F; any clause that satisfies a declaration Dec ∈ DetDEC_F must be determinate. Since in this paper the set F will be assumed to be fixed, we will generally omit the subscript." }, { "figure_ref": [], "heading": "Bounds on Predicate Arity", "publication_ref": [], "table_ref": [], "text": "We will use the notation a-DB for the set of all databases that contain only facts of arity a or less, and a-DEC for the set of all declarations (p, a_0, R) such that every string s ∈ R is of length a + 1 or less." }, { "figure_ref": [], "heading": "Size Measures", "publication_ref": [], "table_ref": [], "text": "The learning models presented in the following section will require the learner to use resources polynomial in the size of its inputs. Assuming that all predicates are of arity a or less for some constant a allows very simple size measures to be used.
In this paper, we will measure the size of a database DB by its cardinality; the size of an extended instance (f, D) by the cardinality of D; the size of a declaration (p, a_0, R) by the cardinality of R; and the size of a clause A ← B_1 ∧ ... ∧ B_r by the number of literals in its body." }, { "figure_ref": [], "heading": "A Model of Learnability", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "Let X be a set. We will call X the domain, and call the elements of X instances. Define a concept C over X to be a representation of some subset of X, and define a language Lang to be a set of concepts. In this paper, we will be rather casual about the distinction between a concept and the set it represents; when there is a risk of confusion we will refer to the set represented by a concept C as the extension of C. Two concepts C_1 and C_2 with the same extension are said to be equivalent. Define an example of C to be a pair (e, b) where b = 1 if e ∈ C and b = 0 otherwise. If D is a probability distribution function, a sample of C from X drawn according to D is a pair of multisets S^+, S^- drawn from the domain X according to D, S^+ containing only positive examples of C, and S^- containing only negative ones.
Associated with X and Lang are two size complexity measures, for which we will use the following notation: the size complexity of a concept C ∈ Lang is written ||C||, and the size complexity of an instance e ∈ X is written ||e||. If S is a set, S_n stands for the set of all elements of S of size complexity no greater than n. For instance, X_n = {e ∈ X : ||e|| ≤ n} and Lang_n = {C ∈ Lang : ||C|| ≤ n}.
We will assume that all size measures are polynomially related to the number of bits needed to represent C or e; this holds, for example, for the size measures for logic programs and databases defined above." }, { "figure_ref": [], "heading": "Polynomial Predictability", "publication_ref": [], "table_ref": [], "text": "We now define polynomial predictability as follows. A language Lang is polynomially predictable iff there is an algorithm PacPredict and a polynomial function m(1/ε, 1/δ, n_e, n_t) so that for every n_t > 0, every n_e > 0, every C ∈ Lang_{n_t}, every ε: 0 < ε < 1, every δ: 0 < δ < 1, and every probability distribution function D, PacPredict has the following behavior:
1. given a sample S^+, S^- of C from X_{n_e} drawn according to D and containing at least m(1/ε, 1/δ, n_e, n_t) examples, PacPredict outputs a hypothesis H such that, with probability at least 1 - δ, the probability that H and C disagree on an instance drawn according to D is at most ε;
2. PacPredict runs in time polynomial in the size of its sample; and
3. the hypothesis H can be used to make a prediction about any instance in polynomial time.
The algorithm PacPredict is called a prediction algorithm for Lang, and the function m(1/ε, 1/δ, n_e, n_t) is called the sample complexity of PacPredict. We will sometimes abbreviate \"polynomial predictability\" as \"predictability\". The first condition in the definition merely states that the error rate of the hypothesis must (usually) be low, as measured against the probability distribution D from which the training examples were drawn. The second condition, together with the stipulation that the sample size is polynomial, ensures that the total running time of the learner is polynomial.
The final condition simply requires that the hypothesis be usable in the very weak sense that it can be used to make predictions in polynomial time. Notice that this is a worst-case learning model, as the definition allows an adversarial choice of all the inputs of the learner.
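To make the quantifiers concrete, the following small harness (ours; pac_predict, the target concept, and the instance encoding are all hypothetical stand-ins) exercises a prediction algorithm exactly as the definition demands: it draws a sample according to a fixed distribution D, obtains a hypothesis, and then uses the hypothesis only to make predictions while estimating its error against the same D:

import random

def exercise_predictor(pac_predict, target, instances, m):
    # Draw a sample S+, S- of m examples according to D (here: uniform over
    # `instances`); m plays the role of m(1/eps, 1/delta, n_e, n_t).
    sample = [(x, target(x)) for x in random.choices(instances, k=m)]
    s_pos = [x for x, b in sample if b]
    s_neg = [x for x, b in sample if not b]
    h = pac_predict(s_pos, s_neg)      # condition 2: must run in poly time
    # Conditions 1 and 3: h is only asked for predictions, and its error,
    # measured against the same distribution D, should (usually) be small.
    test = random.choices(instances, k=4000)
    return sum(h(x) != target(x) for x in test) / len(test)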
}, { "figure_ref": [], "heading": "Relation to Other Models", "publication_ref": [ "b28", "b35", "b7" ], "table_ref": [], "text": "The model of polynomial predictability has been well-studied (Pitt & Warmuth, 1990), and is a weaker version of Valiant's (1984) criterion of pac-learnability. A language Lang is pac-learnable iff there is an algorithm PacLearn so that 1. PacLearn satisfies all the requirements in the definition of polynomial predictability, and 2. on inputs S^+ and S^-, PacLearn always outputs a hypothesis H ∈ Lang.
Thus if a language is pac-learnable it is predictable.
In the companion paper (Cohen, 1995), our positive results are all expressed in the model of identifiability from equivalence queries, which is strictly stronger than pac-learnability; that is, anything that is learnable from equivalence queries is also necessarily pac-learnable.1 Since this paper contains only negative results, we will use the relatively weak model of predictability. Negative results in this model immediately translate to negative results in the stronger models; if a language is not predictable, it cannot be pac-learnable, nor identifiable from equivalence queries." }, { "figure_ref": [], "heading": "Background Knowledge in Learning", "publication_ref": [ "b7" ], "table_ref": [], "text": "In a typical ILP system, the setting is slightly different, as the user usually provides clues about the target concept in addition to the examples, in the form of a database DB of \"background knowledge\" and a set of declarations. To account for these additional inputs it is necessary to extend the framework described above to a setting where the learner accepts inputs other than training examples. Following the formalization used in the companion paper (Cohen, 1995), we will adopt the notion of a \"language family\".
If Lang is a set of clauses, DB is a database and Dec is a declaration, we will define Lang[DB, Dec] to be the set of all pairs (C, DB) such that C ∈ Lang and C satisfies Dec. Semantically, such a pair will denote the set of all extended instances (f, D) covered by (C, DB). Next, if DB is a set of databases and DEC is a set of declarations, then define Lang[DB, DEC] = {Lang[DB, Dec] : DB ∈ DB and Dec ∈ DEC}. This set of languages is called a language family. We will now extend the definition of predictability to language families as follows.
A language family Lang[DB, DEC] is polynomially predictable iff every language in the set is predictable. A language family Lang[DB, DEC] is uniformly polynomially predictable iff there is a single algorithm Identify(DB, Dec) that predicts every Lang[DB, Dec] in the family, given DB and Dec.
The usual model of polynomial predictability is worst-case over all choices of the target concept and the distribution of examples. The notion of uniform polynomial predictability of a language family extends this model in the natural way; the extended model is also worst-case over all possible choices of the database DB ∈ DB and the declaration Dec ∈ DEC. This worst-case model may seem unintuitive, since one typically assumes that the database DB is provided by a helpful user, rather than an adversary.
However, the worst-case model is reasonable because learning is allowed to take time polynomial in the size of the smallest target concept in the set Lang[DB, Dec]; this means that if the database given by the user is such that the target concept cannot be encoded succinctly (or at all), learning is allowed to take more time.
Notice that for a language family Lang[DB, DEC] to be polynomially predictable, every language in the family must be polynomially predictable. Thus to show that a family is not polynomially predictable it is sufficient to construct one language in the family for which prediction is hard. The proofs of this paper will all have this form." }, { "figure_ref": [], "heading": "Prediction-Preserving Reducibilities", "publication_ref": [ "b28", "b26" ], "table_ref": [], "text": "The principal technical tool used in our negative results is the notion of prediction-preserving reducibility, as introduced by Pitt and Warmuth (1990). Prediction-preserving reducibilities are a method of showing that one language is no harder to predict than another. Formally, let Lang_1 be a language over domain X_1 and Lang_2 be a language over domain X_2.
We say that predicting Lang_1 reduces to predicting Lang_2, denoted Lang_1 ⊴ Lang_2, if there is a function f_i : X_1 → X_2, henceforth called the instance mapping, and a function f_c : Lang_1 → Lang_2, henceforth called the concept mapping, so that the following all hold:
1. x ∈ C if and only if f_i(x) ∈ f_c(C), i.e., concept membership is preserved by the mappings;
2. the size complexity of f_c(C) is polynomial in the size complexity of C, i.e., the size of concept representations is preserved within a polynomial factor;
3. f_i(x) can be computed in polynomial time. Note that f_c need not be computable; also, since f_i can be computed in polynomial time, f_i must also preserve size within a polynomial factor.
Intuitively, f_c(C_1) returns a concept C_2 ∈ Lang_2 that will \"emulate\" C_1, i.e., make the same decisions about concept membership, on examples that have been \"preprocessed\" with the function f_i. If predicting Lang_1 reduces to predicting Lang_2 and a learning algorithm for Lang_2 exists, then one possible scheme for learning concepts from Lang_1 would be the following. First, convert any examples of the unknown concept C_1 from the domain X_1 to examples over the domain X_2 using the instance mapping f_i. If the conditions of the definition hold, then since C_1 is consistent with the original examples, the concept f_c(C_1) will be consistent with their image under f_i; thus running the learning algorithm for Lang_2 should produce some hypothesis H that is a good approximation of f_c(C_1). Of course, it may not be possible to map H back into the original language Lang_1, as computing f_c^{-1} may be difficult or impossible. However, H can still be used to predict membership in C_1: given an example x from the original domain X_1, one can simply predict x ∈ C_1 to be true whenever f_i(x) ∈ H. Pitt and Warmuth (1988) give a more rigorous argument that this approach leads to a prediction algorithm for Lang_1, leading to the following theorem.
Theorem 1 (Pitt and Warmuth) Assume Lang_1 ⊴ Lang_2. Then if Lang_1 is not polynomially predictable, Lang_2 is not polynomially predictable." }, { "figure_ref": [], "heading": "Cryptographic Limitations on Learning Recursive Programs", "publication_ref": [], "table_ref": [], "text": "Theorem 1 allows one to transfer hardness results from one language to another.
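In the positive direction, the prediction scheme sketched above is entirely mechanical. A minimal sketch (ours; f_i, the learner for Lang_2, and the example encodings are hypothetical):

def predict_via_reduction(f_i, learn2, s_pos, s_neg):
    # Map the training examples into the domain X2 with the instance mapping;
    # since f_c(C1) is consistent with the mapped sample, learn2 can fit it.
    h2 = learn2([f_i(x) for x in s_pos], [f_i(x) for x in s_neg])
    # Predict x in C1 exactly when f_i(x) is predicted to be in f_c(C1).
    return lambda x: h2(f_i(x))

Read contrapositively, this composition is what makes the transfer possible: if no polynomial-time predictor for Lang_1 can exist, the scheme above rules out any polynomial-time predictor for Lang_2.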
This is useful because for a number of languages, it is known that prediction is as hard as breaking cryptographic schemes that are widely assumed to be secure. For example, it is known that predicting the class of languages accepted by deterministic finite state automata is \"cryptographically hard\", as is the class of languages accepted by log-space bounded Turing machines.
In this section we will make use of Theorem 1 and previous cryptographic hardness results to show that certain restricted classes of recursive logic programs are hard to learn." }, { "figure_ref": [], "heading": "Programs With n Linear Recursive Clauses", "publication_ref": [ "b7", "b2" ], "table_ref": [], "text": "In a companion paper (Cohen, 1995) we showed that a single linear closed recursive clause is identifiable from equivalence queries. In this section we will show that a program with a polynomial number of such clauses is not identifiable from equivalence queries, nor even polynomially predictable.
Specifically, let us extend our notion of a \"family of languages\" slightly, and let DLog[n, s] represent the language of log-space bounded deterministic Turing machines with up to s states accepting inputs of size n or less, with the usual semantics and complexity measure.2 Also let d-DepthLinRecProg denote the family of logic programs containing only depth-d linear closed recursive clauses, but containing any number of such clauses. We have the following result:
Theorem 2 For every n and s, there exists a database DB_{n,s} ∈ 1-DB and a declaration Dec_{n,s} ∈ 1-DetDEC of sizes polynomial in n and s such that DLog[n, s] ⊴ 1-DepthLinRecProg[DB_{n,s}, Dec_{n,s}]. Hence for d ≥ 1 and a ≥ 1, d-DepthLinRecProg[DB, a-DetDEC] is not uniformly polynomially predictable under cryptographic assumptions.3
Proof: Recall that a log-space bounded Turing machine (TM) has an input tape of length n, a work tape of length log_2 n which initially contains all zeros, and a finite state control with state set Q. To simplify the proof, we assume without loss of generality that the tape and input alphabets are binary, that there is a single accepting state q_f ∈ Q, and that the machine will always erase its work tape and position the work tape head at the far left after it decides to accept its input.
At each time step, the machine will read the tape squares under its input tape head and work tape head, and based on these values and its current state q, it will write either a 1 or a 0 on the work tape, shift the input tape head left or right, shift the work tape head left or right, and transition to a new internal state q′. A deterministic machine can thus be specified by a transition function δ: {0,1} × {0,1} × Q → {0,1} × {L,R} × {L,R} × Q.
Let us define the internal configuration of a TM to consist of the string of symbols written on the worktape, the positions of the tape heads, and the internal state q of the machine: thus a configuration is an element of the set CON ≡ {0,1}^{log_2 n} × {1, ..., log_2 n} × {1, ..., n} × Q. A simplified specification for the machine is the transition function δ′: {0,1} × CON → CON, where the component {0,1} represents the contents of the input tape at the square below the input tape head.
Notice that for a machine whose worktape size is bounded by log n, the cardinality of CON is only p = |Q| n^2 log_2 n, a polynomial in n and s = |Q|. We will use this fact in our constructions.
The background database DB_{n,s} is as follows. First, for i = 0, ..., p, an atom of the form con_i(c_i) is present.
Each constant c_i will represent a different internal configuration of the Turing machine. We will also arbitrarily select c_1 to represent the (unique) accepting configuration, and add to DB_{n,s} the atom accepting(c_1). Thus DB_{n,s} ≡ {con_i(c_i)}_{i=0}^{p} ∪ {accepting(c_1)}.
Next, we define the instance mapping. An instance in the Turing machine's domain is a binary string X = b_1 ... b_n; this is mapped by f_i to the extended instance (f, D) where f ≡ accepting(c_0) and D ≡ {true_i : b_i = 1} ∪ {false_i : b_i = 0}.
The description atoms have the effect of defining the predicate true_i to be true iff the i-th bit of X is a \"1\", and defining the predicate false_i to be true iff the i-th bit of X is a \"0\". The constant c_0 will represent the start configuration of the Turing machine, and the predicate accepting(C) will be defined so that it is true iff the Turing machine accepts input X starting from configuration C.
We will let Dec_{n,s} = (accepting, 1, R) where R contains the modes con_i(+) and con_i(-), for i = 1, ..., p, and true_j and false_j for j = 1, ..., n.
Finally, for the concept mapping f_c, let us assume some arbitrary one-to-one mapping between the internal configurations of a Turing machine M and the predicate names con_0, ..., con_{p-1}, such that the start configuration maps to con_0 and the accepting configuration maps to con_1. We will construct the program f_c(M) as follows. For each transition δ′(1, c) = c′ in δ′, where c and c′ are in CON, construct a clause of the form
accepting(C) ← con_j(C) ∧ true_i ∧ con_{j′}(C1) ∧ accepting(C1).
where i is the position of the input tape head which is encoded in c, con_j is the predicate name associated with c, and con_{j′} is the predicate name associated with c′. For each transition δ′(0, c) = c′ in δ′, construct an analogous clause, in which true_i is replaced with false_i.
Now, we claim that for this program P, the machine M will accept when started in configuration c_i iff DB_{n,s} ∧ D ∧ P ⊢ accepting(c_i), and hence that this construction preserves concept membership. This is perhaps easiest to see by considering the action of a top-down theorem prover when given the goal accepting(c_i): the sequence of subgoals accepting(c_i), accepting(c_{i+1}), ... generated by the theorem prover precisely parallels the sequence of configurations entered by the Turing machine.
It is easily verified that the size of this program is polynomial in n and s, and that the clauses are linear recursive, determinate, and of depth one, completing the proof.
There are a number of ways in which this result can be strengthened. Precisely the same construction used above can be used to reduce the class of nondeterministic log-space bounded Turing machines to the constant-depth determinate linear recursive programs. Further, a slight modification to the construction can be used to reduce the class of log-space bounded alternating Turing machines (Chandra, Kozen, & Stockmeyer, 1981) to constant-depth determinate 2-ary recursive programs. The modification is to emulate configurations corresponding to universal states of the Turing machine with clauses of the form
accepting(C) ← con_j(C) ∧ true_i ∧ con_{j1′}(C1) ∧ accepting(C1) ∧ con_{j2′}(C2) ∧ accepting(C2).
where con_{j1′} and con_{j2′} are the two successors of the universal configuration con_j. This is a very strong result, since log-space bounded alternating Turing machines are known to be able to perform every polynomial-time computation.
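A minimal sketch of the concept mapping f_c used in the proof of Theorem 2 (ours, for illustration; the encoding of the simplified transition function δ′ as a dictionary from (bit, configuration index) to configuration index, and the table head_pos of input-head positions, are assumptions):

def f_c(delta_prime, head_pos):
    clauses = []
    for (bit, j), j2 in delta_prime.items():
        i = head_pos[j]            # input-head position encoded in c_j
        test = f"true{i}" if bit == 1 else f"false{i}"
        clauses.append(f"accepting(C) <- con{j}(C) ^ {test} ^ "
                       f"con{j2}(C1) ^ accepting(C1).")
    return clauses

# Toy machine: in configuration c0 (reading input square 1), a 1 leads to the
# accepting configuration c1 and a 0 leads to configuration c2.
for clause in f_c({(1, 0): 1, (0, 0): 2}, {0: 1}):
    print(clause)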
}, { "figure_ref": [], "heading": "Programs With One n-ary Recursive Clause", "publication_ref": [], "table_ref": [], "text": "We will now consider learning a single recursive clause with arbitrary closed recursion. Again, the key result of this section is an observation about expressive power: there is a background database that allows every log-space deterministic Turing machine M to be emulated by a single recursive constant-depth determinate clause. This leads to the following negative predictability result.
Theorem 3 For every n and s, there exists a database DB_{n,s} ∈ 3-DB and a declaration Dec_{n,s} ∈ 3-DetDEC of sizes polynomial in n and s such that DLog[n, s] ⊴ 3-DepthRec[DB_{n,s}, Dec_{n,s}]. Hence for d ≥ 3 and a ≥ 3, d-DepthRec[DB, a-DetDEC] is not uniformly polynomially predictable under cryptographic assumptions.
Proof: Consider a DLOG machine M. As in the proof of Theorem 2, we assume without loss of generality that the tape alphabet is {0,1}, that there is a unique starting configuration c_0, and that there is a unique accepting configuration c_1. We will also assume without loss of generality that there is a unique \"failing\" configuration c_fail, and that there is exactly one transition of the form δ′(b, c_j) = c_{j′} for every combination of b ∈ {0,1} and c_j ∈ CON − {c_1, c_fail}. Thus on input X = x_1 ... x_n the machine M starts with CONFIG = c_0, then executes transitions until it reaches CONFIG = c_1 or CONFIG = c_fail, at which point X is accepted or rejected (respectively). We will use p for the number of configurations. (Recall that p is polynomial in n and s.)
To emulate M, we will convert an example X = b_1 ... b_n into the extended instance f_i(X) = (f, D) where f ≡ accepting(c_0) and D ≡ {bit_i(b_i)}_{i=1}^{n}. Thus the predicate bit_i(X) binds X to the i-th bit of the TM's input tape. We also will define the following predicates in the background database DB_{n,s}.
For every possible b ∈ {0,1} and j: 1 ≤ j ≤ p, the predicate status_{b,j}(B,C,Y) will be defined so that given bindings for the variables B and C, status_{b,j}(B,C,Y) will fail if C = c_fail; otherwise it will succeed, binding Y to active if B = b and C = c_j, and binding Y to inactive otherwise.
For j: 1 ≤ j ≤ p, the predicate next_j(Y,C) will succeed iff Y is bound to either active or inactive. If Y = active, then C will be bound to c_j; otherwise, C will be bound to the accepting configuration c_1.
The database also contains the fact accepting(c_1).
It is easy to show that the size of this database is polynomial in n and s. The declaration Dec_{n,s} is defined to be (accepting, 1, R) where R includes the modes status_{b,j}(+,+,-), next_j(+,-), and bit_i(-) for b ∈ {0,1}, j = 1, ..., p, and i = 1, ..., n.
Now, consider the transition rule δ′(b, c_j) = c_{j′}, and let i be the input-head position encoded in c_j. The corresponding conjunction is
TRANS_{ibj} ≡ bit_i(B_{ibj}) ∧ status_{b,j}(B_{ibj},C,Y_{ibj}) ∧ next_{j′}(Y_{ibj},C1_{ibj}) ∧ accepting(C1_{ibj})
Given DB_{n,s} and D, and assuming that C is bound to some configuration c, this conjunction will fail if c = c_fail. It will succeed if x_i ≠ b or c ≠ c_j; in this case Y_{ibj} will be bound to inactive, C1_{ibj} will be bound to c_1, and the recursive call succeeds because accepting(c_1) is in DB_{n,s}. Finally, if x_i = b and c = c_j, TRANS_{ibj} will succeed only if the atom accepting(c_{j′}) is provable; in this case, Y_{ibj} will be bound to active and C1_{ibj} will be bound to c_{j′}.
From this it is clear that the clause f_c(M) below
accepting(C) ← ∧_{b ∈ {0,1}, j ∈ {1,...,p}} TRANS_{ibj}
(where in each conjunct i is the input-head position encoded in c_j) will correctly emulate the machine M on examples that have been preprocessed with the function f_i described above. Hence this construction preserves concept membership. It is also easily verified that the size of this program is polynomial in n and s, and that the clause is determinate and of depth three." }, { "figure_ref": [], "heading": "One k-Local Linear Closed Recursive Clause", "publication_ref": [ "b7", "b3", "b3", "b7" ], "table_ref": [], "text": "So far we have considered only one class of extensions to the positive result given in the companion paper (Cohen, 1995), namely, relaxing the restrictions imposed on the recursive structure of the target program. Another reasonable question to ask is if linear closed recursive programs can be learned without the restriction of constant-depth determinacy.
In earlier papers (Cohen, 1993a, 1993b, 1994a) we have studied the conditions under which the constant-depth determinacy restriction can be relaxed while still allowing learnability for nonrecursive clauses. It turns out that most generalizations of constant-depth determinate clauses are not predictable, even without recursion. However, the language of nonrecursive clauses of constant locality is a pac-learnable generalization of constant-depth determinate clauses. Below, we will define this language, summarize the relevant previous results, and then address the question of the learnability of recursive local clauses.
Define a variable V appearing in a clause C to be free if it appears in the body of C but not the head of C. Let V_1 and V_2 be two free variables appearing in a clause. V_1 touches V_2 if they appear in the same literal, and V_1 influences V_2 if it either touches V_2, or if it touches some variable V_3 that influences V_2. The locale of a free variable V is the set of literals that either contain V, or that contain some free variable influenced by V. Informally, variable V_1 influences variable V_2 if the choice of a binding for V_1 can affect the possible choices of bindings for V_2.
The locality of a clause is the size of its largest locale. Let k-LocalNonRec denote the language of nonrecursive clauses with locality k or less. (That is, k-LocalNonRec is the set of logic programs containing a single nonrecursive k-local clause.) The following fact is known (Cohen, 1993b): for fixed k and a, the language family k-LocalNonRec[a-DB, a-DEC] is uniformly pac-learnable.
The hardness results of the preceding sections already carry over to local clauses, since the clauses constructed there are of constant locality: for example, an immediate consequence of the construction of Theorem 2 is that programs with a polynomial number of linear recursive k-local clauses are not predictable for k ≥ 2. Similarly, Theorem 3 shows that a single recursive k-local clause is not predictable for k ≥ 4.
It is still reasonable to ask, however, if the positive result for bounded-depth determinate recursive clauses (Cohen, 1995) can be extended to k-ary closed recursive k-local clauses. Unfortunately, we have the following negative result, which shows that even linear closed recursive local clauses are not learnable.
Theorem 4 Let Dfa[s] denote the language of deterministic finite automata with s states, and let k-LocalLinRec be the set of linear closed recursive k-local clauses.
For any constant s there exists a database DB_s ∈ 3-DB and a declaration Dec_s ∈ 3-DEC, both of size polynomial in s, such that Dfa[s] ⊴ 3-LocalLinRec[DB_s, Dec_s]. Hence for k ≥ 3 and a ≥ 3, k-LocalLinRec[a-DB, a-DEC] is not uniformly polynomially predictable under cryptographic assumptions.
Proof: Following Hopcroft and Ullman (1979) we will represent a DFA M over the alphabet Σ as a tuple (q_0, Q, F, δ) where q_0 is the initial state, Q is the set of states, F is the set of accepting states, and δ: Q × Σ → Q is the transition function (which we will sometimes think of as a subset of Q × Σ × Q). To prove the theorem, we need to construct a database DB_s of size polynomial in s such that every s-state DFA can be emulated by a linear recursive k-local clause over DB_s.
Rather than directly emulating M, it will be convenient to emulate instead a modification of M. Let M̂ be a DFA with state set Q̂ ≡ Q ∪ {q_(-1), q_e, q_f}, where q_(-1), q_e and q_f are new states not found in Q. The initial state of M̂ is q_(-1). The only final state of M̂ is q_f. The transition function of M̂ is δ̂ ≡ δ ∪ {(q_(-1), a, q_0), (q_e, c, q_f)} ∪ ⋃_{q_i ∈ F} {(q_i, b, q_e)}, where a, b, and c are new letters not in Σ. Note that M̂ is now a DFA over the alphabet Σ ∪ {a, b, c}, and, as described, need not be a complete DFA over this alphabet. (That is, there may be pairs (q_i, x) such that δ̂(q_i, x) is undefined.) However, M̂ can be easily made complete by introducing an additional rejecting state q_r, and making every undefined transition lead to q_r. More precisely, let δ′ be defined as δ′ ≡ δ̂ ∪ {(q_i, x, q_r) : q_i ∈ Q̂ and x ∈ Σ ∪ {a, b, c} and there is no q_j with (q_i, x, q_j) ∈ δ̂}. Thus M′ = (q_(-1), Q̂ ∪ {q_r}, {q_f}, δ′) is a \"completed\" version of M̂. We will use M′ in the construction below; we will also let Q′ = Q̂ ∪ {q_r} and Σ′ = Σ ∪ {a, b, c}.
[Figure 1: How a DFA M is modified, first into M̂ and then into the completed machine M′, before emulation with a local clause.]
Examples of M, M̂ and M′ are shown in Figure 1. Notice that aside from the arcs into and out of the rejecting state q_r, the state diagram of M′ is nearly identical to that of M. The differences are that in M′ there is a new initial state q_(-1) with a single outgoing arc labeled a to the old initial state q_0; also every final state of M has in M′ an outgoing arc labeled b to a new state q_e, which in turn has a single outgoing arc labeled c to the final state q_f. It is easy to show that x ∈ L(M) iff axbc ∈ L(M′).
Now, given a set of states Q′ we define a database DB_s that contains the following predicates: arc_{q_i,σ,q_j}(S,X,T) is true for any S ∈ Q′, any T ∈ Q′, and any X ∈ Σ′, unless S = q_i, X = σ, and T ≠ q_j; state(S) is true for any S ∈ Q′; accept(c,nil,q_e,q_f) is true.
As motivation for the arc predicates, observe that in emulating M′ it is clearly useful to be able to represent the transition function δ′. The usefulness of the arc predicates is that any transition function δ′ can be represented using a conjunction of arc literals. In particular, the conjunction ∧_{(q_i,σ,q_j) ∈ δ′} arc_{q_i,σ,q_j}(S,X,T) succeeds when δ′(S,X) = T, and fails otherwise.
Let us now define the instance mapping f_i as f_i(x) = (f, D) where f = accept(a, xbc, q_(-1), q_0) and D is a set of facts that defines the components relation on the list that corresponds to the string xbc. In other words, if x = σ_1 ... σ_n, then D is the set of facts
components(σ_1σ_2...σ_nbc, σ_1, σ_2...σ_nbc)
components(σ_2...σ_nbc, σ_2, σ_3...σ_nbc)
...
components(c,c,nil)
The declaration Dec_s will be Dec_s = (accept, 4, R) where R contains the modes components(+,-,-), state(-), and arc_{q_i,σ,q_j}(+,+,+) for q_i, q_j ∈ Q′ and σ ∈ Σ′.
Finally, define the concept mapping f_c(M) for a machine M to be the clause
accept(X,Ys,S,T) ← ∧_{(q_i,σ,q_j) ∈ δ′} arc_{q_i,σ,q_j}(S,X,T) ∧ components(Ys,X1,Ys1) ∧ state(U) ∧ accept(X1,Ys1,T,U).
where δ′ is the transition function for the corresponding machine M′ defined above. It is easy to show this construction is polynomial.
In the clause, X is a letter in Σ′, Ys is a list of such letters, and S and T are both states in Q′. The intent of the construction is that the predicate accept will succeed exactly when (a) the string XYs is accepted by M′ when M′ is started in state S, and (b) the first action taken by M′ on the string XYs is to go from state S to state T.
Since all of the initial transitions in M′ are from q_(-1) to q_0 on input a, then if the predicate accept has the claimed behavior, clearly the proposed mapping satisfies the requirements of Theorem 1. To complete the proof, therefore, we must now verify that the predicate accept succeeds iff XYs is accepted by M′ in state S with an initial transition to T.
From the definition of DFAs, the string XYs is accepted by M′ in state S with an initial transition to T iff one of the following two conditions holds: δ′(S,X) = T, Ys is the empty string, and T is a final state of M′; or δ′(S,X) = T, Ys is a nonempty string (and hence has some head X1 and some tail Ys1), and X1Ys1 is accepted by M′ in state T, with any initial transition.
The base fact accept(c,nil,q_e,q_f) succeeds precisely when the first case holds, since in M′ this transition is the only one into a final state. In the second case, the conjunction of the arc conditions in the f_c(M) clause succeeds exactly when δ′(S,X) = T (as noted above). Further, the remainder of the clause succeeds when Ys is a nonempty string with head X1 and tail Ys1 and X1Ys1 is accepted by M′ in state T with an initial transition to any state U, which corresponds exactly to the second case above.
Thus concept membership is preserved by the mapping. This completes the proof." }, { "figure_ref": [], "heading": "DNF-Hardness Results for Recursive Programs", "publication_ref": [ "b7" ], "table_ref": [], "text": "To summarize previous results for determinate clauses, it was shown that while a single k-ary closed recursive depth-d clause is pac-learnable (Cohen, 1995), a set of n linear closed recursive depth-d clauses is not; further, even a single n-ary closed recursive depth-d clause is not pac-learnable. There is still a large gap between the positive and negative results, however: in particular, the learnability of recursive programs containing a constant number of k-ary recursive clauses has not yet been established.
In this section we will investigate the learnability of these classes of programs. We will show that programs with either two linear closed recursive clauses or one linear closed recursive clause and one base case are as hard to learn as boolean functions in disjunctive normal form (DNF). The pac-learnability of DNF is a long-standing open problem in computational learning theory; the import of these results, therefore, is that establishing the learnability of these classes will require some substantial advance in computational learning theory.
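Since every construction in this section emulates the evaluation of an r-term DNF formula, it is worth fixing precisely what is being emulated. A minimal sketch (ours; the signed-index encoding of literals is an assumption): a formula is a list of terms, each term a list of integers in which +k stands for the variable v_k and -k for its negation, and an assignment is a bit vector:

def eval_dnf(terms, bits):
    def literal_true(l):
        return bits[abs(l) - 1] == (1 if l > 0 else 0)
    return any(all(literal_true(l) for l in term) for term in terms)

# For example, with phi = (v1 ^ ~v3 ^ v4) v (~v2 ^ ~v3) v (v1 ^ ~v4):
phi = [[1, -3, 4], [-2, -3], [1, -4]]
assert eval_dnf(phi, [1, 0, 0, 1]) is True     # the first term is satisfied
assert eval_dnf(phi, [1, 0, 1, 1]) is False    # every term is falsified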
}, { "figure_ref": [ "fig_2" ], "heading": "A Linear Recursive Clause Plus a Base Clause", "publication_ref": [ "b7" ], "table_ref": [], "text": "Previous work has established that two-clause constant-depth determinate programs consisting of one linear recursive clause and one nonrecursive clause can be identified, given two types of oracles: the standard equivalence-query oracle, and a \"basecase oracle\" (Cohen, 1995). (The basecase oracle determines if an example is covered by the nonrecursive clause alone.) In this section we will show that in the absence of the basecase oracle, the learning problem is as hard as learning boolean DNF.
In the discussion below, Dnf[n, r] denotes the language of r-term boolean functions in disjunctive normal form over n variables.
Theorem 5 Let d-Depth-2-Clause be the set of 2-clause programs consisting of one clause in d-DepthLinRec and one clause in d-DepthNonRec. Then for any n and any r there exists a database DB_{n,r} ∈ 2-DB and a declaration Dec_{n,r} ∈ 2-DEC, both of sizes polynomial in n and r, such that Dnf[n, r] ⊴ 1-Depth-2-Clause[DB_{n,r}, Dec_{n,r}]. Hence for a ≥ 2 and d ≥ 1 the language family d-Depth-2-Clause[DB, a-DetDEC] is uniformly polynomially predictable only if DNF is polynomially predictable.
Proof: We will produce a DB_{n,r} ∈ 2-DB and Dec_{n,r} ∈ 2-DetDEC such that predicting DNF can be reduced to predicting 1-Depth-2-Clause[DB_{n,r}, Dec_{n,r}]. The construction makes use of a trick first used in Theorem 3 of (Cohen, 1993a), in which a DNF formula is emulated by a conjunction containing a single variable Y which is existentially quantified over a restricted range.
We begin with the instance mapping f_i. An assignment σ = b_1 ... b_n will be converted to the extended instance (f, D) where f ≡ p(1) and D ≡ {bit_i(b_i)}_{i=1}^{n}.
Next, we define the database DB_{n,r} to contain the binary predicates true_1, false_1, ..., true_r, false_r, which behave as follows:
true_i(X,Y) succeeds if X = 1, or if Y ∈ {1, ..., r} − {i}.
false_i(X,Y) succeeds if X = 0, or if Y ∈ {1, ..., r} − {i}.
Further, DB_{n,r} contains facts that define the predicate succ(Y,Z) to be true whenever Z = Y + 1 and both Y and Z are numbers between 1 and r. Clearly the size of DB_{n,r} is polynomial in r. Let Dec_{n,r} = (p, 1, R) where R contains the modes bit_i(-), for i = 1, ..., n; true_j(+,+) and false_j(+,+), for j = 1, ..., r; and succ(+,-).
Now let φ be an r-term DNF formula φ = ∨_{i=1}^{r} ∧_{j=1}^{s_i} l_{ij} over the variables v_1, ..., v_n. We may assume without loss of generality that φ contains exactly r terms, since any DNF formula with fewer than r terms can be padded to exactly r terms by adding vacuous terms. Define the concept mapping f_c(φ) to be the program containing the recursive clause C_R
p(Y) ← succ(Y,Z) ∧ p(Z).
and the base clause C_B
p(Y) ← ∧_{k=1}^{n} bit_k(X_k) ∧ ∧_{i=1}^{r} ∧_{j=1}^{s_i} B_{ij}
where B_{ij} is defined as follows: B_{ij} ≡ true_i(X_k,Y) if l_{ij} = v_k, and B_{ij} ≡ false_i(X_k,Y) if l_{ij} = ¬v_k.
[Figure 2: An example of the construction of Theorem 5.]
Background database, for i = 1, ..., r: true_i(b,y) for all b, y with b = 1 or y ∈ {1, ..., r} − {i}; false_i(b,y) for all b, y with b = 0 or y ∈ {1, ..., r} − {i}; succ(y,z) iff z = y + 1 and y, z ∈ {1, ..., r}.
DNF formula: (v_1 ∧ ¬v_3 ∧ v_4) ∨ (¬v_2 ∧ ¬v_3) ∨ (v_1 ∧ ¬v_4)
Equivalent program:
p(Y) ← succ(Y,Z) ∧ p(Z).
p(Y) ← bit_1(X_1) ∧ bit_2(X_2) ∧ bit_3(X_3) ∧ bit_4(X_4) ∧ true_1(X_1,Y) ∧ false_1(X_3,Y) ∧ true_1(X_4,Y) ∧ false_2(X_2,Y) ∧ false_2(X_3,Y) ∧ true_3(X_1,Y) ∧ false_3(X_4,Y).
Instance mapping: f_i(1011) = (p(1), {bit_1(1), bit_2(0), bit_3(1), bit_4(1)})
An example of the construction is shown in Figure 2; we suggest that the reader refer to this figure at this point.
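The construction itself is also easy to mechanize. A small sketch of the concept mapping (ours, reusing the signed-index encoding of terms from the earlier sketch; the emitted clause syntax, with ^ for conjunction and <- for the clause arrow, is only illustrative):

def f_c(terms, n):
    c_r = "p(Y) <- succ(Y,Z) ^ p(Z)."
    body = [f"bit{k}(X{k})" for k in range(1, n + 1)]
    for i, term in enumerate(terms, start=1):
        for l in term:             # B_ij: true_i/false_i tie Y to the value i
            pred = "true" if l > 0 else "false"
            body.append(f"{pred}{i}(X{abs(l)},Y)")
    c_b = "p(Y) <- " + " ^ ".join(body) + "."
    return [c_r, c_b]

# Applied to the formula of Figure 2, this emits the two clauses shown there.
for clause in f_c([[1, -3, 4], [-2, -3], [1, -4]], 4):
    print(clause)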
The basic idea behind the construction is that, first, the clause C_B will succeed only if the variable Y is bound to some i such that the i-th term of φ succeeds (the definitions of true_i and false_i are designed to ensure that this property holds); second, the recursive clause C_R is constructed so that the program f_c(φ) succeeds iff C_B succeeds with Y bound to one of the values 1, ..., r.
We will now argue more rigorously for the correctness of the construction. Clearly, f_i(σ) and f_c(φ) are of the same size as σ and φ, respectively. Since DB_{n,r} is also of polynomial size, this reduction is polynomial.
[Figure 3: The space of proofs possible with the program f_c(φ): a chain of subgoals p(1), p(2), ..., p(r) linked by succ literals, where at each p(i) the base-clause body B(i) ≡ ∧_k bit_k(X_k) ∧ ∧_i ∧_j B_{ij} may be tried.]
Figure 3 shows the possible proofs that can be constructed with the program f_c(φ); notice that the program f_c(φ) succeeds exactly when the clause C_B succeeds for some value of Y between 1 and r. Now, if φ is true then some term T_i = ∧_{j=1}^{s_i} l_{ij} must be true; in this case ∧_{j=1}^{s_i} B_{ij} succeeds with Y bound to the value i, and ∧_{j=1}^{s_{i′}} B_{i′j} for every i′ ≠ i also succeeds with Y bound to i. On the other hand, if φ is false for an assignment, then each T_i fails, and hence for every possible binding of Y generated by repeated use of the recursive clause C_R the base clause C_B will also fail. Thus concept membership is preserved by the mapping.
This concludes the proof." }, { "figure_ref": [ "fig_3" ], "heading": "Two Linear Recursive Clauses", "publication_ref": [ "b7" ], "table_ref": [], "text": "Recall again that a single linear closed recursive clause is identifiable from equivalence queries (Cohen, 1995). A construction similar to that used in Theorem 5 can be used to show that this result cannot be extended to programs with two linear recursive clauses.
Theorem 6 Let d-Depth-2-Clause′ be the set of 2-clause programs consisting of two clauses in d-DepthLinRec. (Thus we assume that the base case of the recursion is given as background knowledge.) Then for any constants n and r there exists a database DB_{n,r} ∈ 2-DB and a declaration Dec_{n,r} ∈ 2-DEC, both of sizes polynomial in n, such that Dnf[n, r] ⊴ 1-Depth-2-Clause′[DB_{n,r}, Dec_{n,r}]. Hence for any constants a ≥ 2 and d ≥ 1 the language family d-Depth-2-Clause′[DB, a-DetDEC] is uniformly polynomially predictable only if DNF is polynomially predictable.
Proof: As before, the proof makes use of a prediction-preserving reducibility from DNF to d-Depth-2-Clause′[DB, Dec] for a specific DB and Dec. Let us assume that φ is a DNF with r terms, and further assume that r = 2^k. (Again, this assumption is made without loss of generality, since the number of terms in φ can be increased by padding with vacuous terms.) Now consider a complete binary tree of depth k + 1. The k-th level of this tree has exactly r nodes; let us label these nodes 1, ..., r, and give the other nodes arbitrary labels. Now construct a database DB_{n,r} as in Theorem 5, except for the following changes:
The predicates true_i(b,y) and false_i(b,y) also succeed when y is the label of a node at some level of the tree other than the k-th.
Rather than the predicate succ, the database contains two predicates leftson and rightson that encode the relationship between nodes in the binary tree.
The database includes the facts p(ω_1), ..., p(ω_{2r}), where ω_1, ..., ω_{2r} are the leaves of the binary tree.
These will be used as the base cases of the recursive program that is to be learned. Let ρ be the label of the root of the binary tree. We define the instance mapping to be f_i(b_1 ... b_n) ≡ (p(ρ), {bit_1(b_1), ..., bit_n(b_n)}). Note that except for the use of ρ rather than 1, this is identical to the instance mapping used in Theorem 5. Also let Dec_{n,r} = (p, 1, R) where R contains the modes bit_i(-), for i = 1, ..., n; true_j(+,+) and false_j(+,+), for j = 1, ..., r; leftson(+,-); and rightson(+,-).
The concept mapping f_c(φ) is the pair of clauses R_1, R_2, where R_1 is the clause
p(Y) ← ∧_{k=1}^{n} bit_k(X_k) ∧ ∧_{i=1}^{r} ∧_{j=1}^{s_i} B_{ij} ∧ leftson(Y,Z) ∧ p(Z)
and R_2 is the clause
p(Y) ← ∧_{k=1}^{n} bit_k(X_k) ∧ ∧_{i=1}^{r} ∧_{j=1}^{s_i} B_{ij} ∧ rightson(Y,Z) ∧ p(Z)
Note that both of these clauses are linear recursive, determinate, and have depth 1. Also, the construction is clearly polynomial. It remains to show that membership is preserved.
Figure 4 shows the space of proofs that can be constructed with the program f_c(φ); as in Figure 3, B(i) abbreviates the conjunction ∧_k bit_k(X_k) ∧ ∧_i ∧_j B_{ij}. Notice that the program will succeed only if the recursive calls manage to finally recurse to one of the base cases p(ω_1), ..., p(ω_{2r}), which correspond to the leaves of the binary tree. Both clauses will succeed on the first k − 1 levels of the tree. However, to reach the base cases of the recursion at the leaves of the tree, the recursion must pass through the k-th level of the tree; that is, one of the clauses above must succeed on some node y of the binary tree, where y is on the k-th level of the tree, and hence the label of y is a number between 1 and r. The program thus succeeds on f_i(σ) precisely when there is some number y between 1 and r such that B(y) succeeds, that is, such that the y-th term of φ is satisfied; this can happen if and only if φ is satisfied by the assignment σ. Thus, the mappings preserve concept membership. This completes the proof.
[Figure 4: The space of proofs possible with the program f_c(φ): the recursive calls p(·) descend the complete binary tree from the root ρ, testing B(·) at each node, and terminate at the leaf facts p(ω_1), ..., p(ω_{2r}).]
Notice that the programs f_c(φ) used in this proof all have the property that the depth of every proof is logarithmic in the size of the instances. This means that the hardness result holds even if one additionally restricts the class of programs to have a logarithmic depth bound." }, { "figure_ref": [], "heading": "Upper Bounds on the Difficulty of Learning", "publication_ref": [ "b7", "b11" ], "table_ref": [], "text": "The previous sections showed that several highly restricted classes of recursive programs are at least as hard to predict as DNF. In this section we will show that these restricted classes are also no harder to predict than DNF. We will wish to restrict the depth of a proof constructed by a target program. Thus, let h(n) be any function; we will use Lang^{h(n)} for the set of programs in the class Lang such that all proofs of an extended instance (f, D) have depth bounded by h(||D||).
Theorem 7 Let Dnf[n, *] be the language of DNF boolean functions over n variables (with any number of terms), and recall that d-Depth-2-Clause is the language of 2-clause programs consisting of one clause in d-DepthLinRec and one clause in d-DepthNonRec, and that d-Depth-2-Clause′ is the language of 2-clause programs consisting of two clauses in d-DepthLinRec.
For all constants d and a, and all databases DB ∈ a-DB and declarations Dec ∈ a-DetDEC, there is a polynomial function poly(n) such that
d-Depth-2-Clause[DB, Dec] ⊴ Dnf[poly(||DB||), *]
d-Depth-2-Clause′^{h(n)}[DB, Dec] ⊴ Dnf[poly(||DB||), *] if h(n) is bounded by c log n for some constant c.
Hence if Dnf[n, *] is polynomially predictable, then both of these language families are uniformly polynomially predictable.
Proof: The proof relies on several facts established in the companion paper (Cohen, 1995).
For every declaration Dec, there is a clause BOTTOM_d(Dec) such that every nonrecursive depth-d determinate clause C satisfying Dec is equivalent to some subclause of BOTTOM_d. Further, the size of BOTTOM_d is polynomial in the size of Dec. This means that the language of subclauses of BOTTOM_d is a normal form for nonrecursive constant-depth determinate clauses.
Every linear closed recursive clause C_R that is constant-depth determinate is equivalent to some subclause of BOTTOM_d plus a recursive literal L_r; further, there are only a polynomial number of possible recursive literals L_r.
For any constants a, a_0, and d, any database DB ∈ a-DB, any declaration Dec = (p, a_0, R), and any program P in d-Depth-2-Clause[DB, Dec], the depth of a terminating proof constructed using P is no more than h_max, where h_max is a polynomial in the size of DB and Dec.
It can be assumed without loss of generality that the database DB and all descriptions D contain an equality predicate, where an equality predicate is simply a predicate equal(X,Y) which is true exactly when X = Y.
The idea of the proof is to construct a prediction-preserving reduction from each of the two classes of recursive programs listed above to DNF. We will begin with two lemmas.
Lemma 8 Let Dec ∈ a-DetDEC, and let C be a nonrecursive depth-d determinate clause consistent with Dec. Let Subclause_C denote the language of subclauses of C, and let Monomial[u] denote the language of monomials over u variables. Then there is a polynomial poly_1 so that for any database DB ∈ DB, Subclause_C[DB, Dec] ⊴ Monomial[poly_1(||DB||)].
Proof of lemma: Follows immediately from the construction used in Theorem 1 of Džeroski, Muggleton, and Russell (Džeroski et al., 1992). (The basic idea of the construction is to introduce a propositional variable representing the \"success\" of each connected chain of literals in C. Any subclause of C can then be represented as a conjunction of these propositions.)
This lemma can be extended as follows.
Lemma 9 Let Dec ∈ a-DetDEC, and let S = {C_1, ..., C_r} be a set of r nonrecursive depth-d determinate clauses consistent with Dec, each of length n or less. Let Subclause_S denote the set of all programs of the form P = (D_1, ..., D_s) such that each D_i is a subclause of some C_j ∈ S.
Then there is a polynomial poly_2 so that for any database DB ∈ DB, Subclause_S[DB, Dec] ⊴ Dnf[poly_2(||DB||, r), *].
Proof of lemma: By Lemma 8, for each C_i ∈ S, there is a set of variables V_i of size polynomial in ||DB|| such that every clause in Subclause_{C_i} can be emulated by a monomial over V_i. Let V = ⋃_{i=1}^{r} V_i. Clearly, |V| is polynomial in n and r, and every clause in ⋃_i Subclause_{C_i} can also be emulated by a monomial over V.
Further, every disjunction of r such clauses can be represented by a disjunction of such monomials.
Since the C_i's all satisfy a single declaration Dec = (p, a, R), they have heads with the same predicate symbol and arity; further, we may assume (without loss of generality, since an equality predicate is assumed) that the variables appearing in the heads of these clauses are all distinct. Since the C_i's are also nonrecursive, every program P ∈ Subclause_S can be represented as a disjunction D_1 ∨ ... ∨ D_s where for all i, D_i ∈ ⋃_j Subclause_{C_j}. Hence every P ∈ Subclause_S can be represented by a DNF over the set of variables V.
Let us now introduce some additional notation. If C and D are clauses, then we will use C ⊓ D to denote the result of resolving C and D together, and C^i to denote the result of resolving C with itself i times. Note that C ⊓ D is unique if C is linear recursive and C and D have the same predicate in their heads (since there will be only one pair of complementary literals to resolve on).
Now consider a program P = {C_R, C_B} in d-Depth-2-Clause. Since the depth of any proof for this class of programs is bounded by a number h_max that is polynomial in ||DB|| and n_e, the nonrecursive program P′ = {C_R^h ⊓ C_B : 0 ≤ h ≤ h_max} is equivalent to P on extended instances of size n_e or less.
Finally, recall that we can assume that C_B is a subclause of BOTTOM_d; also, there is a polynomial-sized set L_R = {L_{r_1}, ..., L_{r_p}} of closed recursive literals such that for some L_{r_i} ∈ L_R, the clause C_R is a subclause of BOTTOM_d ∧ L_{r_i}. This means that if we let S_1 be the polynomial-sized set
S_1 = {(BOTTOM_d ∧ L_{r_i})^h ⊓ BOTTOM_d : 0 ≤ h ≤ h_max and L_{r_i} ∈ L_R}
then P′ ∈ Subclause_{S_1}. Thus by Lemma 9, d-Depth-2-Clause[DB, Dec] ⊴ Dnf[poly(||DB||), *]. This concludes the proof of the first statement in the theorem.
To show that d-Depth-2-Clause′^{h(n)}[DB, Dec] ⊴ Dnf[poly(||DB||), *], a similar argument applies. Let us again introduce some notation, and define MESH_{h,n}(C_{R_1}, C_{R_2}) as the set of all clauses of the form C_{R_{i,1}} ⊓ C_{R_{i,2}} ⊓ ... ⊓ C_{R_{i,h′}} where for all j, C_{R_{i,j}} = C_{R_1} or C_{R_{i,j}} = C_{R_2}, and h′ ≤ h(n). Notice that for functions h(n) ≤ c log n the number of such clauses is polynomial in n.
Now let p be the predicate appearing in the heads of C_{R_1} and C_{R_2}, and let Ĉ (respectively D̂B) be a version of C (respectively DB) in which every occurrence of the predicate p in the body of a clause (respectively, in a fact) has been replaced with a new predicate p̂. If P is a recursive program P = {C_{R_1}, C_{R_2}} in d-Depth-2-Clause′ over the database DB, then P ∧ DB is equivalent⁴ to the nonrecursive program P′ ∧ D̂B, where
P′ = {Ĉ : C ∈ MESH_{h,n_e}(C_{R_1}, C_{R_2})}
Now recall that there are a polynomial number of recursive literals L_{r_i}, and hence a polynomial number of pairs of recursive literals (L_{r_i}, L_{r_j}). This means that the set of clauses
S_2 = ⋃_{(L_{r_i},L_{r_j}) ∈ L_R × L_R} {Ĉ : C ∈ MESH_{h,n_e}(BOTTOM_d ∧ L_{r_i}, BOTTOM_d ∧ L_{r_j})}
is also polynomial-sized; furthermore, for any program P in the language d-Depth-2-Clause′, P′ ∈ Subclause_{S_2}.
The second part of the theorem now follows by application of Lemma 9.
An immediate corollary of this result is that Theorems 5 and 6 can be strengthened as follows.
Corollary 10 For all constants d ≥ 1 and a ≥ 2, the language family d-Depth-2-Clause[DB, a-DetDEC] is uniformly polynomially predictable if and only if DNF is polynomially predictable.
For all constants d ≥ 1 and a ≥ 2, the language family d-Depth-2-Clause′[DB, a-DetDEC] is uniformly polynomially predictable if and only if DNF is polynomially predictable.
Thus in an important sense these learning problems are equivalent to learning boolean DNF. This does not resolve the question of the learnability of these languages, but does show that their learnability is a difficult formal problem: the predictability of boolean DNF is a long-standing open problem in computational learning theory." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b3", "b12", "b27", "b13", "b16", "b8", "b24", "b7" ], "table_ref": [], "text": "The work described in this paper differs from previous formal work on learning logic programs in simultaneously allowing background knowledge, function-free programs, and recursion. We have also focused exclusively on computational limitations on efficient learnability that are associated with recursion, as we have considered only languages known to be pac-learnable in the nonrecursive case. Since the results of this paper are all negative, we have concentrated on the model of polynomial predictability; negative results in this model immediately imply a negative result in the stronger model of pac-learnability, and also imply negative results for all strictly more expressive languages.
Among the most closely related prior results are the negative results we have previously obtained for certain classes of nonrecursive function-free logic programs (Cohen, 1993b). These results are similar in character to the results described here, but apply to nonrecursive languages. Similar cryptographic results have been obtained by Frazier and Page (1993) for certain classes of programs (both recursive and nonrecursive) that contain function symbols but disallow background knowledge.
Some prior negative results have also been obtained on the learnability of other first-order languages using the proof technique of consistency hardness (Pitt & Valiant, 1988). Haussler (1989) showed that the language of \"existential conjunctive concepts\" is not pac-learnable by showing that it can be hard to find a concept in the language consistent with a given set of examples. Similar results have also been obtained for two restricted languages of Horn clauses (Kietz, 1993); a simple description logic (Cohen & Hirsh, 1994); and for the language of sorted first-order terms (Page & Frisch, 1992). All of these results, however, are specific to the model of pac-learnability, and none can be easily extended to the polynomial predictability model considered here. The results also do not extend to languages more expressive than these specific constrained languages. Finally, none of these languages allow recursion.
To our knowledge, there are no other negative learnability results for first-order languages. A discussion of prior positive learnability results for first-order languages can be found in the companion paper (Cohen, 1995)."
}, { "figure_ref": [], "heading": "Summary", "publication_ref": [ "b7", "b7", "b7" ], "table_ref": [ "tab_1", "tab_2", "tab_1", "tab_2", "tab_1", "tab_1", "tab_1", "tab_1" ], "text": "This paper and its companion (Cohen, 1995) have considered a large number of different subsets of Datalog. Our aim has been to be not comprehensive, but systematic: in particular, we wished to find precisely where the boundaries of learnability lie as various syntactic restrictions are imposed and relaxed. Since it is all too easy for a reader to \"miss the forest for the trees\", we will now briefly summarize the results contained in this paper, together with the positive results of the companion paper (Cohen, 1995).
Table 1: Learnability of recursive function-free programs (the notation is explained in the text below).
Clauses        Local      Constant-depth determinate
one n-ary      nC_R(-)    nC_R(-)   nC_R|C_B(-)   nC_R,C_B(-)      k·nC_R(-)      n·nC_R(-)
one k-ary      kC_R(-)    kC_R(+)   kC_R|C_B(+)   kC_R,C_B(DNF)    k·k′C_R(DNF)   n·kC_R(-)
one linear     1C_R(-)    1C_R(+)   1C_R|C_B(+)   1C_R,C_B(=DNF)   2·1C_R(=DNF)   n·1C_R(-)
In the companion paper (Cohen, 1995) we showed that a single nonrecursive constant-depth determinate clause is learnable in the strong model of identification from equivalence queries. In this learning model, one is given access to an oracle for counterexamples, that is, an oracle that will find, in unit time, an example on which the current hypothesis is incorrect, and must reconstruct the target program exactly from a polynomial number of these counterexamples. This result implies that a single nonrecursive constant-depth determinate clause is pac-learnable (as the counterexample oracle can be emulated by drawing random examples in the pac setting). The result is not novel (Džeroski et al., 1992); however the proof given is independent, and is also of independent interest. Notably, it is somewhat more rigorous than earlier proofs, and also proves the result directly, rather than via reduction to a propositional learning problem. The proof also introduces a simple version of the forced simulation technique, variants of which are used in all of the positive results.
We then showed that the learning algorithm for nonrecursive clauses can be extended to the case of a single linear recursive constant-depth determinate clause, leading to the result that this restricted class of recursive programs is also identifiable from equivalence queries. With a bit more effort, this algorithm can be further extended to learn a single k-ary recursive constant-depth determinate clause.
We also considered extending the learning algorithm to learn recursive programs consisting of more than one constant-depth determinate clause. The most interesting extension was to simultaneously learn a recursive clause C_R and a base clause C_B, using equivalence queries and also a \"basecase oracle\" that indicates which counterexamples should be covered by the base clause C_B. In this model, it is possible to simultaneously learn a recursive clause and a nonrecursive base case in all of the situations for which a recursive clause is learnable alone; for instance, one can learn a k-ary recursive clause together with its nonrecursive base case.
This was our strongest positive result.
[Table 2 about here: the same results organized by language (d-Depth-2-Clause′, d-DepthLinRecProg, d-DepthRec, and k-LocalLinRec); its caption appears with the table captions below.]
These results are summarized in Tables 1 and 2. In Table 1, a program with one r-ary recursive clause is denoted rC_R; a program with one r-ary recursive clause and one nonrecursive basecase is denoted rC_R; C_B, or rC_R | C_B if there is a \"basecase\" oracle; and a program with s different r-ary recursive clauses is denoted s rC_R. The boxed results are associated with one or more theorems from this paper or its companion paper, and the unmarked results are corollaries of other results. A \"+\" after a program class indicates that it is identifiable from equivalence queries; thus the positive results described above are summarized by the four \"+\" entries in the lower left-hand corner of the section of the table concerned with constant-depth determinate clauses.
Table 2 presents the same information in a slightly different format, and also relates the notation of Table 1 to the terminology used elsewhere in the paper.
This paper has considered the learnability of the various natural generalizations of the languages shown to be learnable in the companion paper. Consider for the moment single clauses. The companion paper showed that for any fixed k, a single k-ary recursive constant-depth determinate clause is learnable. Here we showed that all of these restrictions are necessary. In particular, a program of n constant-depth linear recursive clauses is not polynomially predictable; hence the restriction to a single clause is necessary. Also, a single clause with n recursive calls is hard to learn; hence the restriction to k-ary recursion is necessary. We also showed that the restriction to constant-depth determinate clauses is necessary, by considering the learnability of constant-locality clauses. Constant-locality clauses are the only known generalization of constant-depth determinate clauses that are pac-learnable in the nonrecursive case. However, we showed that if recursion is allowed, then this language is not learnable: even a single linear recursive clause is not polynomially predictable.
Again, these results are summarized in Table 1; a \"×\" after a program class means that it is not polynomially predictable, under cryptographic assumptions, and hence neither pac-learnable nor identifiable from equivalence queries.
The negative results based on cryptographic hardness give an upper bound on the expressiveness of learnable recursive languages, but still leave open the learnability of programs with a constant number of k-ary recursive clauses in the absence of a basecase oracle. In the final section of this paper, we showed that the following problems are, in the model of polynomial predictability, equivalent to predicting boolean DNF:
predicting two-clause constant-depth determinate recursive programs containing one linear recursive clause and one base case; predicting two-clause constant-depth determinate recursive programs containing two linear recursive clauses, even if the base case is known.
We note that these program classes are very nearly the simplest classes of multi-clause recursive programs that one can imagine, and that the pac-learnability of DNF is a long-standing open problem in computational learning theory.
These results suggest, therefore, that pac-learning multi-clause recursive logic programs is difficult; at the very least, they show that finding a provably correct pac-learning algorithm will require substantial advances in computational learning theory. In Table 1, an \"=DNF\" (respectively \"≥DNF\") marker means that the corresponding language is prediction-equivalent to DNF (respectively at least as hard as DNF).
To further summarize Table 1: with any sort of recursion, only programs containing constant-depth determinate clauses are learnable. The only constant-depth determinate recursive programs that are learnable are those that contain a single k-ary recursive clause (in the standard equivalence query model) or a single k-ary recursive clause plus a base case (if a \"basecase oracle\" is allowed). All other classes of recursive programs are either cryptographically hard, or as hard as boolean DNF." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [ "b3", "b29", "b9" ], "table_ref": [], "text": "Inductive logic programming is an active area of research, and one broad class of learning problems considered in this area is the class of \"automatic logic programming\" problems. Prototypical examples of this genre of problems are learning to append two lists, or to multiply two numbers. Most target concepts in automatic logic programming are recursive programs, and often, the training data for the learning system are simply examples of the target concept, together with suitable background knowledge.
The topic of this paper is the pac-learnability of recursive logic programs from random examples and background knowledge; specifically, we wished to establish the computational limitations inherent in performing this task. We began with some positive results established in a companion paper. These results show that one constant-depth determinate closed k-ary recursive clause is pac-learnable, and that further, a program consisting of one such recursive clause and one constant-depth determinate nonrecursive clause is also pac-learnable given an additional \"basecase oracle.\"
In this paper we showed that these positive results are not likely to be improved. In particular, we showed that either eliminating the basecase oracle or learning two recursive clauses simultaneously is prediction-equivalent to learning DNF, even in the case of linear recursion. We also showed that the following problems are as hard as breaking (presumably) secure cryptographic codes: pac-learning n linear recursive determinate clauses, pac-learning one n-ary recursive determinate clause, and pac-learning one linear recursive k-local clause.
These results contribute to machine learning in several ways. From the point of view of computational learning theory, several results are technically interesting. One is the prediction-equivalence of several classes of restricted logic programs and boolean DNF; this result, together with others like it (Cohen, 1993b), reinforces the importance of the learnability problem for DNF.
This paper also gives a dramatic example of how adding recursion can have widely differing effects on learnability: while constant-depth determinate clauses remain pac-learnable when linear recursion is added, constant-locality clauses become cryptographically hard.
Our negative results show that systems which apparently learn a larger class of recursive programs must be taking advantage either of some special properties of the target concepts they learn, or of the distribution of examples that they are provided with. We believe that the most likely opportunity for obtaining further positive formal results in this area is to identify and analyze these special properties. For example, in many of the examples in which FOIL has learned recursive logic programs, it has made use of \"complete example sets\": datasets containing all examples of or below a certain size, rather than sets of randomly selected examples (Quinlan & Cameron-Jones, 1993). It is possible that complete datasets allow a more expressive class of programs to be learned than random datasets; in fact, some progress has recently been made toward formalizing this conjecture (De Raedt & Džeroski, 1994).
Finally, and most importantly, this paper has established the boundaries of learnability for determinate recursive programs in the pac-learnability model. In many plausible automatic programming contexts it would be highly desirable to have a system that offered some formal guarantees of correctness. The results of this paper provide upper bounds on what one can hope to achieve with an efficient, formally justified system that learns recursive programs from random examples alone." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The author wishes to thank three anonymous JAIR reviewers for a number of useful suggestions on the presentation and technical content." } ]
[ { "authors": "D Aha; S Lapointe; C X Ling; S Matwin", "journal": "Springer-Verlag", "ref_id": "b0", "title": "Inverting implication with small training sets", "year": "1994" }, { "authors": "A Biermann", "journal": "IEEE Transactions on Systems, Man and Cybernetics", "ref_id": "b1", "title": "The inference of regular lisp programs from examples", "year": "1978" }, { "authors": "A K Chandra; D C Kozen; L J Stockmeyer", "journal": "Journal of the ACM", "ref_id": "b2", "title": "Alternation", "year": "1981" }, { "authors": "W W Cohen", "journal": "", "ref_id": "b3", "title": "Cryptographic limitations on learning one-clause logic programs", "year": "1993" }, { "authors": "W W Cohen", "journal": "", "ref_id": "b4", "title": "Rapid prototyping of ILP systems using explicit bias", "year": "1993" }, { "authors": "W W Cohen", "journal": "", "ref_id": "b5", "title": "Pac-learning nondeterminate clauses", "year": "1994" }, { "authors": "W W Cohen", "journal": "", "ref_id": "b6", "title": "Recovering software speci cations with inductive logic programming", "year": "1994" }, { "authors": "W W Cohen", "journal": "Journal of AI Research", "ref_id": "b7", "title": "Pac-learning recursive logic programs: e cient algorithms", "year": "1995" }, { "authors": "W W Cohen; H Hirsh", "journal": "Machine Learning", "ref_id": "b8", "title": "The learnability of description logics with equality constraints", "year": "1994" }, { "authors": "L De Raedt; S ", "journal": "", "ref_id": "b9", "title": "First-order jk-clausal theories are PAC-learnable", "year": "1994" }, { "authors": "", "journal": "", "ref_id": "b10", "title": "Proceedings of the Fourth International Workshop on Inductive Logic Programming", "year": "" }, { "authors": "S Muggleton; S Russell; S ", "journal": "", "ref_id": "b11", "title": "Pac-learnability of determinate logic programs", "year": "1992" }, { "authors": "M Frazier; C D Page", "journal": "", "ref_id": "b12", "title": "Learnability of recursive, non-determinate theories: Some basic results and techniques", "year": "1993" }, { "authors": "D Haussler", "journal": "Machine Learning", "ref_id": "b13", "title": "Learning conjunctive concepts in structural domains", "year": "1989" }, { "authors": "J E Hopcroft; J D Ullman", "journal": "Addison-Wesley", "ref_id": "b14", "title": "Introduction to Automata Theory, Languages, and Computation", "year": "1979" }, { "authors": "M Kearns; L Valiant", "journal": "ACM Press", "ref_id": "b15", "title": "Cryptographic limitations on learning Boolean formulae and nite automata", "year": "1989" }, { "authors": "J.-U Kietz", "journal": "", "ref_id": "b16", "title": "Some computational lower bounds for the computational complexity of inductive logic programming", "year": "1993" }, { "authors": "R D King; S Muggleton; R A Lewis; M J E Sternberg", "journal": "Proceedings of the National Academy of Science", "ref_id": "b17", "title": "Drug design by machine learning: the use of inductive logic programming to model the structureactivity relationships of trimethoprim analogues binding to dihydrofolate reductase", "year": "1992" }, { "authors": "N Lavra C; S ", "journal": "Springer Verlag", "ref_id": "b18", "title": "Background knowledge and declarative bias in inductive concept learning", "year": "1992" }, { "authors": "J W Lloyd", "journal": "Springer-Verlag", "ref_id": "b19", "title": "Foundations of Logic Programming: Second Edition", "year": "1987" }, { "authors": "S Muggleton; L De Raedt", "journal": "Journal of Logic Programming", "ref_id": "b20", "title": 
"Inductive logic programming: Theory and methods", "year": "1994" }, { "authors": "S Muggleton; C Feng", "journal": "Academic Press", "ref_id": "b21", "title": "E cient induction of logic programs", "year": "1992" }, { "authors": "S Muggleton; R D King; M J E Sternberg", "journal": "Protein Engineering", "ref_id": "b22", "title": "Protein secondary structure prediction using logic-based machine learning", "year": "1992" }, { "authors": "", "journal": "Academic Press", "ref_id": "b23", "title": "Inductive Logic Programming", "year": "1992" }, { "authors": "C D Page; A M Frisch", "journal": "Academic Press", "ref_id": "b24", "title": "Generalization and learnability: A study of constrained atoms", "year": "1992" }, { "authors": "M Pazzani; D Kibler", "journal": "Machine Learning", "ref_id": "b25", "title": "The utility of knowledge in inductive learning", "year": "1992" }, { "authors": "L Pitt; M K Warmuth", "journal": "Computer Society Press of the IEEE", "ref_id": "b26", "title": "Reductions among prediction problems: On the difculty of predicting automata", "year": "1988" }, { "authors": "L Pitt; L Valiant", "journal": "Journal of the ACM", "ref_id": "b27", "title": "Computational limitations on learning from examples", "year": "1988" }, { "authors": "L Pitt; M Warmuth", "journal": "Journal of Computer and System Sciences", "ref_id": "b28", "title": "Prediction-preserving reducibility", "year": "1990" }, { "authors": "J R Quinlan; R M Cameron-Jones", "journal": "Springer-Verlag", "ref_id": "b29", "title": "FOIL: A midterm report", "year": "1993" }, { "authors": "J R Quinlan", "journal": "Machine Learning", "ref_id": "b30", "title": "Learning logical de nitions from relations", "year": "1990" }, { "authors": "J R Quinlan", "journal": "", "ref_id": "b31", "title": "Determinate literals in inductive logic programming", "year": "1991" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b32", "title": "", "year": "" }, { "authors": "C Rouveirol", "journal": "Machine Learning", "ref_id": "b33", "title": "Flattening and saturation: two representation changes for generalization", "year": "1994" }, { "authors": "P D Summers", "journal": "Journal of the Association for Computing Machinery", "ref_id": "b34", "title": "A methodology for LISP program construction from examples", "year": "1977" }, { "authors": "L G Valiant", "journal": "Communications of the ACM", "ref_id": "b35", "title": "A theory of the learnable", "year": "1984" }, { "authors": "J M Zelle; R J Mooney", "journal": "MIT Press", "ref_id": "b36", "title": "Inducing deterministic Prolog parsers from treebanks: a machine learning approach", "year": "1994" } ]
[ { "formula_coordinates": [ 10, 198.48, 542.52, 214.56, 32.88 ], "formula_id": "formula_0", "formula_text": "accepting(c 0 ) D ftrue i g b i 2X:b i =1 ffalse i g b i 2X:b i =0" }, { "formula_coordinates": [ 12, 255.36, 413.88, 95.28, 17.16 ], "formula_id": "formula_1", "formula_text": "D fbit i (b i )g n i=1" }, { "formula_coordinates": [ 15, 162.96, 184.02, 268.32, 393.54 ], "formula_id": "formula_2", "formula_text": "1 1 q 1 q 1 1 1 0 0 q 0 q 1 q e q f a b c 1 1 0 0 q 0 q 1 q r" }, { "formula_coordinates": [ 18, 258, 469.2, 95.52, 33.12 ], "formula_id": "formula_3", "formula_text": "f p(1) D fbit i (b i )g n i=1" }, { "formula_coordinates": [ 19, 90, 181.08, 465.12, 112.08 ], "formula_id": "formula_4", "formula_text": "(v 1 ^v3 ^v4 ) _ (v 2 ^v3 ) _ (v 1 ^v4 ) Equivalent program: p(Y) succ(Y,Z)^p(Z). p(Y) bit 1 (X 1 ) ^bit 2 (X 2 ) ^bit 3 (X 3 ) ^bit 4 (X 4 ) true 1 (X 1 ,Y) ^false 1 (X 3 ,Y) ^true 1 (X 4 ,Y) false 2 (X 2 ,Y) ^false 2 (X 3 ,Y)t rue 3 (X 1 ,Y) ^false 3 (X 4 ,Y)." }, { "formula_coordinates": [ 19, 224.64, 507.72, 155.76, 39.88 ], "formula_id": "formula_5", "formula_text": "B ij ( true i (X k ,Y) if l ij = v k false i (X k ,Y) if l ij = v k" }, { "formula_coordinates": [ 20, 173.52, 93.78, 239.52, 250.4 ], "formula_id": "formula_6", "formula_text": "A A A A @ @ @ @ @ @ @ @ A A A A A A A A @ @ @ @ A A A A @ @ @ @ B(i) V bit i (X i ) ^V V B ij : : : p(3) succ(2,3) B(3) B(2) p(1) B(1) succ(1,2) p(2) B(n-1) B(n) succ(n-1,n) p(n-1) p(n)" }, { "formula_coordinates": [ 21, 90, 448.2, 357.12, 100.8 ], "formula_id": "formula_7", "formula_text": "p(Y ) n k=1 bit k (X k ) ^r î=1 s i ĵ=1 B ij ^leftson(Y; Z) ^p(Z) and R 2 is the clause p(Y ) n k=1 bit k (X k ) ^r î =1 s i ĵ=1 B ij ^rightson(Y; Z) ^p(Z)" }, { "formula_coordinates": [ 22, 121.2, 91.2, 369.84, 268.92 ], "formula_id": "formula_8", "formula_text": "J J J J J X X X X E E E E E B B B B B B B B B B X X Z Z Z Z Z \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ Z Z Z Z Z ` @ @ @ @ @ @ @ b b b b b b b b b b b H H \" \" \" \" \" \" \" \" \" \" \" p( ) B( ) p(L) B( ) p(" }, { "formula_coordinates": [ 26, 391.56, 691.56, 4.92, 15.2 ], "formula_id": "formula_9", "formula_text": ". Local Constant-Depth Determinate Clauses Clauses nC R nC R nC R jC B nC R ; C B k nC R n nC R kC R kC + R kC R jC + B kC R ; C DNF B k k 0 C DNF R n kC R 1C R 1C + R 1C R jC + B 1C R ; C =DNF B 2 1C =DNF R n 1C R" }, { "formula_coordinates": [ 28, 101.52, 172.8, 395.52, 63.24 ], "formula_id": "formula_10", "formula_text": "EQ 1C R ; C B =DNF d-Depth-2-Clause 0 a-DB; a-DetDEC] 0 2 1 EQ 2 1C R =DNF d-DepthLinRecProg a-DB; a-DetDEC] 0 n 1 EQ n 1C R no d-DepthRec a-DB; a-DetDEC] 0 1 n EQ nC R no k-LocalLinRec a-DB; a-DEC] 0 1 1 EQ 1C R no" } ]
Pac-learning Recursive Logic Programs: Negative Results
In a companion paper it was shown that the class of constant-depth determinate k-ary recursive clauses is efficiently learnable. In this paper we present negative results showing that any natural generalization of this class is hard to learn in Valiant's model of pac-learnability. In particular, we show that the following program classes are cryptographically hard to learn: programs with an unbounded number of constant-depth linear recursive clauses; programs with one constant-depth determinate clause containing an unbounded number of recursive calls; and programs with one linear recursive clause of constant locality. These results immediately imply the non-learnability of any more general class of programs. We also show that learning a constant-depth determinate program with either two linear recursive clauses or one linear recursive clause and one non-recursive clause is as hard as learning boolean DNF. Together with positive results from the companion paper, these negative results establish a boundary of efficient learnability for recursive function-free clauses.
William W Cohen
[ { "figure_caption": "Forevery constant d, every constant a, every database DB 2 a-DB, every declaration Dec 2 a-DetDEC, and every clause C 2 d-DepthNonRec DB; Dec], there is an equivalent clause C 0 in k-LocalNonRec DB; Dec] of size bounded by kj jCj j, where k is a function only of a and d (and hence is a constant if d and a are also constants.) Hence k-LocalNonRec DB; a-DEC] is a pac-learnable generalization of d-DepthNonRec DB; a-DetDEC] It is thus plausible to ask if recursive programs of k-local clauses are pac-learnable. Some facts about the learnability of k-local programs follow immediately from previous results.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Reducing DNF to a recursive program", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Proofs possible with the program f c ( ) r such that the conjunction B(i) succeeds, which (by the argument given in Theorem 5)", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "C R ; C B ) 2 d-Depth-2-Clause DB; Dec] where C R is the recursive clause and C B is the base. The proof of any extended instance (f; D) must use clause C R repeatedly h times and then use clause C B to resolve away the nal subgoal. Hence the nonrecursive clause C h R u C B could also be used to cover the instance (f; D).", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "A summary of the learnability results", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Summary by language of the learnability results. Column B indicates the number of base (nonrecursive) clauses allowed in a program; column R indicates the number of recursive clauses; L/R indicates the number of recursive literals allowed in a single recursive clause; EQ indicates an oracle for equivalence queries and BASE indicates a basecase oracle. For all languages except k-LocalLinRec, all clauses must be determinate and of depth d.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b12", "b5", "b1", "b9", "b15", "b19", "b14", "b5", "b12", "b13", "b18", "b20", "b26", "b4", "b7", "b17", "b16", "b22", "b3", "b8" ], "table_ref": [], "text": "General-purpose planning has a long history of research in Arti cial Intelligence. Several di erent planning algorithms have been developed ranging from the pioneering GPS (Ernst & Newell, 1969) to a variety of recent algorithms in the SNLP (McAllester & Rosenblitt, 1991) family. At the most basic level, the purpose of planning is to nd a sequence of actions that change an initial state into a state that satis es a goal statement. Planners use the actions provided in their domain representations to try to achieve the goal. However di erent planners use di erent means to this end.\nFaced with a variety of di erent planning algorithms, some planning researchers, including these authors, have been increasingly curious to compare di erent planning methodologies. Although general-purpose planning is known to be undecidable (Chapman, 1987), it has been a common belief that least-commitment planning is the \\best,\" i.e., the most efcient planning strategy for most planning problems. This belief is based on evidence that least-commitment planners can e ciently handle planning problems that involve di cult plan step interactions (Barrett & Weld, 1994;Kambhampati, 1994;Minton, Bresina, & Drummond, 1991). Delayed commitments, in particular to step orderings, allow the plan c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved. steps to remain unordered until the interactions are visible. 1 In similar situations, eagercommitment planners may encounter severe e ciency problems with early commitments to incorrect orderings.\nRecently we engaged in an investigation of other sorts of planning problems which would be handled e ciently by other planning strategies. Since all planning is driven by heuristics, we identi ed di erent sets of heuristics that correspond to di erent planning methods. We designed sets of planning domains and problems to test di erent planning strategies. While studying the impact of these di erent strategies in di erent kinds of planning problems, we came across evidence that eager-commitment planners can e ciently handle a variety of planning problems, in particular those with di cult operator choices (Stone, Veloso, & Blythe, 1994). The up-to-date state allows them to make informed planning choices, particularly in terms of the operator alternatives available. In similar situations, delayedcommitment planners may need to backtrack over incorrect operator choices (Veloso & Blythe, 1994). We came to believe that no planner was consistently better than all others across di erent domains and problems.\nResigned to the futility of trying to nd a universally successful planning strategy, we felt the need to study which domains and problems were best suited to which planning methods. 2 In order to do so, we devised and implemented a planner that can use any operator-ordering commitment strategy along the continuum between, on the one extreme delayed commitment, and on the other, eager commitment. This planner is completely exible along one dimension of planning heuristics: operator-ordering commitments. Our main contribution in this paper is to completely describe this planning algorithm and to put it forth as a tool for studying the mapping between heuristics and domains or problems. 
Rather than risking the possibility that the planner itself might get overlooked if it were relegated to an \"architecture\" section of a future paper, we present flecs and its underlying philosophy as a contribution in its own right.
The continuum of heuristics that can be explored by our planning algorithm lies between the operator-ordering commitment strategies of delayed-commitment and eager-commitment backward-chaining planners, which we now situate within a broad range of planning and problem solving methods. One possible planning strategy is to search all the possible states that can be reached from the initial state to find one that satisfies the goal. This method, called progression or forward-chaining, can be very impractical. There are often too many accessible states in the world to efficiently search the complete state space. As an alternative, several planners constrain their search by using regression, or backward-chaining. Rather than considering all possible actions that could be executed in the initial state and searching recursively forward through the state space, they search backwards from the goal. Their search is driven by the set of actions that can directly achieve the goal.
There are two main ways of performing backward-chaining. Several planners do regression by searching the space of possible plans. Planners such as noah, tweak, snlp, and their descendants (Chapman, 1987; McAllester & Rosenblitt, 1991; McDermott, 1978; Sacerdoti, 1977; Tate, 1977; Wilkins, 1984) are plan-space planners that use a delayed-commitment strategy. In particular, they delay the decision of ordering operators as long as possible. Consequently, the planner reasons from the initial state and from a set of constraints that are regressed from the goal. On the other hand, planners such as gps, strips, and the prodigy family (Carbonell, Knoblock, & Minton, 1990; Fikes & Nilsson, 1971; Rosenbloom, Newell, & Laird, 1990) use an eager-commitment strategy. 3 They use backward-chaining to select plan steps relevant to the goals. These eager-commitment planners make explicit use of an internal representation of the state of the world (their internal state) and order operators when possible so that they can reason from an updated version of this state. They trade the risk of eager commitment for the benefits of using an explicit updated planning state.
In this article we introduce a planning algorithm, flecs, that uses a FLExible Commitment Strategy with respect to operator orderings. flecs is designed to provide us and other planning researchers with a framework to investigate the mapping from domains and problems to efficient planning strategies. This algorithm represents a novel contribution to planning in that it introduces explicitly the choice of the commitment strategy. This ability to change its commitment strategy makes it useful for studying the tradeoffs between delayed and eager commitments.
1. Least-commitment planners really delay commitments to plan step orderings and to variable bindings. Throughout this article we use the term delayed commitment to contrast with eager commitment in the context of step orderings. 2. Similar concerns regarding different constraint satisfaction algorithms have recently led to the design of the Multi-Tac architecture (Minton, 1993). This system investigates a given problem to find a combination of heuristics from a collection of available ones to solve the problem in an efficient way.
flecs is a descendant of prodigy4.0 and its current implementation is directly on top of prodigy4.0. It extends prodigy4.0 by reasoning explicitly about ordering alternatives and by having the ability to change its commitment strategy across different problems and domains, and also during the course of a single planning problem. 4 This article gradually introduces flecs. Section 2 gives a top-level view of the algorithm and describes the different ways in which flecs makes use of a uniquely specified state of the world. Section 3 introduces the concepts used by the flecs algorithm. We provide an annotated example to illustrate the details of the planning concepts defined. Section 4 presents flecs's planning algorithm in full detail and explains the algorithm step by step. We discuss different heuristics to guide flecs's choices, in particular the flexible choice of commitment strategy. We analyze the advantages and disadvantages of delayed and eager plan step ordering commitments. Section 5 shows specific examples of planning domains and problems that we devised, which support the need for the use of flecs's flexible commitment strategy. We performed an empirical analysis of planning performance in these domains. The corresponding empirical results demonstrate the tradeoffs discussed and show evidence that flexible commitment is necessary. Finally, Section 6 draws conclusions from this work.
3. Planners in the prodigy family include prodigy2.0 (Minton, Knoblock, Kuokka, Gil, Joseph, & Carbonell, 1989), NoLimit (Veloso, 1989), and prodigy4.0 (Carbonell, Blythe, Etzioni, Gil, Joseph, Kahn, Knoblock, Minton, Pérez, Reilly, Veloso, & Wang, 1992). NoLimit and prodigy4.0, as opposed to prodigy2.0, do not require the linearity assumption of goal independence and their search spaces are complete (Fink & Veloso, 1994). They also have some control over their commitment choices, as opposed to the other earlier total-order planners. 4. We found that we needed a new name for our algorithm, as flecs represents a significant change in philosophy and implementation from prodigy4.0." }, { "figure_ref": [], "heading": "A Top-Level View of flecs", "publication_ref": [ "b8", "b0" ], "table_ref": [ "tab_0" ], "text": "prodigy4.0 and flecs differ most significantly from other state-of-the-art planning systems in that they search for a solution to a planning problem by combining backward-chaining (or regression) and simulation of plan execution (Fink & Veloso, 1994). While back-chaining, they can commit to a total ordering of plan steps so as to make use of a uniquely specified world state. These planners maintain an internal representation of the state and update it by simulating the execution of operators found relevant to the goal by backward-chaining. Note that simulating execution while planning differs from interleaving planning and execution, since the option of \"un-simulating,\" or rolling back, must remain open. Interleaved planning and execution is generally done by separate modules for planning, monitoring, executing, and replanning (Ambros-Ingerson & Steel, 1988). flecs can either delay or eagerly carry out the plan simulation. In this way, our planning algorithm has the flexibility of both being able to delay operator-ordering commitments and being able to use the effects of previously selected operators to help determine which goals to plan for next and which operators to use to achieve these goals.
In short, it can emulate both delayed-commitment planners and eager-commitment planners.
Table 1 shows the top-level view of the flecs algorithm.
1. Initialize. 2. Terminate if the goal statement has been satisfied. 3. Compute the pending goals and applicable operators. Pending goals are the yet-to-be-achieved preconditions of operators that have been selected to be in the plan. Applicable operators are those that have all their preconditions satisfied in the current state." }, { "figure_ref": [], "heading": "5. Choose to subgoal or apply: (backtrack point)", "publication_ref": [], "table_ref": [], "text": "To subgoal, go to step 6. To apply, go to step 7. 6. Select a pending goal (no backtrack point) and an operator that can achieve it (backtrack point); go to step 3. 7. Change the state as specified by an applicable operator (backtrack point); go to step 2.
All the terms used in this table are fully described along with the detailed version of the algorithm in Section 4. In this section we focus on two main characteristics of this algorithm, namely its use of an internal state and its flexibility with respect to commitment strategies." }, { "figure_ref": [], "heading": "The Use of a Simulated Planning State", "publication_ref": [ "b5" ], "table_ref": [ "tab_0" ], "text": "flecs uses its internal state for at least four purposes. First, it terminates when every goal from the given problem is satisfied in the current version of the state (the current state): at this point, a complete plan (the sequence of operators that transformed the initial state into the current state) has been created and the planning process can stop. Second, in every cycle, the algorithm uses the internal state to determine which goals need to be planned for and which have already been achieved, following a means-ends analysis strategy. Unlike some other planners, which analyze all of the possible effects of the operators that may have changed the initial state, flecs simply checks if a particular goal is true in the current state. 5 Third, the planner uses the state to determine which operators may now be applied: i.e., those whose preconditions are all true in the state. Fourth, flecs can use its state to choose an operator and bindings that are most likely to achieve a particular goal with a minimum of planning effort (Blythe & Veloso, 1992). In summary, and with reference to the algorithm in Table 1, flecs uses the state to determine: if the goal statement has been satisfied (step 2); which goals still need to be achieved (step 3); which operators are applicable (step 3); which operators to try first while planning (step 6).
In planners that do not keep an internal state, all four of these steps require considerable planning effort, when they are even attempted at all. In contrast, flecs can perform these steps in sub-quadratic time. Furthermore, other planners do not have any particular methods for choosing among possible operators to achieve a goal. This particular use of state has been shown to provide significant efficiency gains in prodigy4.0 (Veloso & Blythe, 1994).
Since flecs does use the state, it makes a big difference whether or not it chooses to change its state (apply an operator) at a given time. The advantage of applying an operator is that more informed planning results during each of the above four steps. However, the choice to apply an operator involves a commitment to order this operator before all other operators that have not yet been applied; a schematic rendering of the Table 1 loop, with this choice made explicit, is sketched below.
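The following minimal Python sketch renders the control flow of Table 1 under a fixed toggle. All names (Operator, flecs_loop, and so on) are illustrative stand-ins rather than the actual Lisp interface of prodigy4.0/flecs, and backtracking, the activity filtering of goals and operators, and failure handling are elided.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconds: frozenset
    adds: frozenset
    deletes: frozenset = frozenset()

def flecs_loop(init, goals, operators, toggle="sub"):
    """Schematic rendering of Table 1 (no backtracking, no activity filtering)."""
    state, fringe, selected, plan = set(init), set(goals), [], []
    while not set(goals) <= state:                                  # step 2
        pending = [g for g in fringe if g not in state]             # step 3
        applicable = [o for o in selected if o.preconds <= state]   # step 3
        if pending and (toggle == "sub" or not applicable):         # step 5
            g = pending[0]                                          # step 6
            op = next(o for o in operators if g in o.adds)          # backtrack point
            selected.append(op)
            fringe = (fringe - {g}) | set(op.preconds)
        else:                                                       # step 7
            op = applicable[0]                                      # backtrack point
            state = (state - set(op.deletes)) | set(op.adds)        # apply
            selected.remove(op)
            plan.append(op.name)
    return plan
```

Under toggle = "sub" the sketch subgoals while any pending goal remains (delayed commitment); under toggle = "app" it applies an operator as soon as one is applicable (eager commitment).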
This commitment is only temporary, since if no plan can be found with the operator in this position, the operator can be \"un-applied\" by simply changing the internal state back to its previous status. One may argue that the requirement that operators be applied in an explicit order opens up the possibility of exponential backtracking. However, this argument is vacuous, as planning is undecidable (Chapman, 1987). Due to its use of state, flecs can reduce the likelihood of requiring backtracking at the operator choice point. In so doing, it may increase the likelihood of backtracking at the operator-ordering choice point. However, it has the flexibility of being able to come down on either side of this tradeoff." }, { "figure_ref": [ "fig_0" ], "heading": "The Choice of Commitment Strategies", "publication_ref": [], "table_ref": [], "text": "In order to control the tradeoff between eager and delayed state changes, flecs has a toggle which determines whether the algorithm prefers subgoaling or applying an operator in step 5. Which option flecs considers first may affect its path through the search space and consequently its planning efficiency. This ability to accommodate different types of search is the most novel part of our algorithm. Its significance lies in the difference between subgoaling and applying.
The difference between subgoaling and applying is illustrated in Figure 1. Subgoaling can be best understood as regressing one goal, or backward chaining, using means-ends analysis. It includes the choices of a goal to plan for and an operator to achieve this goal. As seen in Section 2.1, both of these choices are affected by flecs's internal state. Thus, subgoaling without ever updating the internal state (applying an operator) can lead to uninformed planning decisions. On the other hand, by subgoaling extensively, flecs can select a large set of operators that will appear in the plan before deciding in which order to apply them. Then flecs takes into account the conflicts, or \"threats,\" among operators and orders them appropriately when applying them." }, { "figure_ref": [], "heading": "Subgoaling Applying", "publication_ref": [ "b8", "b19" ], "table_ref": [], "text": "Operator t achieves a precondition of operator y that is not true in state C. All preconditions of operator x are true in state C; applying x changes the state to C'. Figure 1 (after Fink & Veloso, 1994) illustrates the difference between subgoaling and applying: a search node consists of a \"head-plan\" and a \"tail-plan.\" The head-plan contains operators that have already been applied and have changed the initial state \"I\" to the current state \"C.\" The tail-plan consists of operators that have been selected to achieve goals in the goal statement \"G\" and operators that have been selected to achieve preconditions of these operators, etc. The figure shows how the planner could either subgoal or apply at a given search node.
Applying an operator is flecs's way of changing the current internal state so that future subgoaling decisions can be more informed. However, applying an operator is a commitment (temporary, since backtracking is possible) that this operator should be executed before any other.
This is the essential tradeoff between eagerly subgoaling and eagerly applying: eagerly subgoaling delays ordering commitments (delayed commitment), while eagerly applying facilitates more informed subgoaling (eager commitment).
flecs has a switch (toggle) that can change its behavior from eager subgoaling to eager applying and vice versa at any time. This feature is the most significant improvement of flecs over prodigy4.0 and its predecessors. Since we saw evidence that neither delayed-commitment nor eager-commitment search strategies were consistently effective (Stone et al., 1994), we felt the need to provide flecs with the toggle. Thus, flecs can combine the advantages of delayed commitments and eager commitments. 6" }, { "figure_ref": [ "fig_1", "fig_6", "fig_1", "fig_3", "fig_5", "fig_6", "fig_5", "fig_6" ], "heading": "An Illustrative Example", "publication_ref": [ "b5", "b12", "b24" ], "table_ref": [], "text": "In this section we present an example that illustrates in detail most of the planning situations that can arise in a general planning problem. Although planning may be well understood in general, past descriptions of planning algorithms have not directly addressed most of these situations in full detail. The flecs algorithm is designed to handle all of these situations.
In order to describe flecs completely, we need to define several variables that are maintained as the algorithm proceeds. Since it is much easier to understand the algorithm once one is familiar with the concepts that these variables denote, we present an annotated example in Figures 2 through 9 before formally presenting flecs. We further recommend following how each of the variables and functions C, G, P, O, A, a, and c changes throughout the annotated example, according to their definitions:
C represents the current internal state of the planner. Its uses are summarized in Section 2.1.
G is the set of goals and subgoals that the planner is aiming to achieve. These are the goals that are on the fringe of the subgoal tree. Goals in G may be goals that have not yet been planned for, or goals that have been achieved (perhaps trivially) but not yet used by the operator that needs them as one of its preconditions (i.e., this operator has not been applied yet).
P is the set of pending goals: goals in G that may need to be planned for in the current state.
O stands for the set of instantiated operators that have been selected to achieve goals and subgoals.
A is the set of applicable operators: operators in O whose preconditions are all satisfied in the current state and which are needed in the current state to achieve some goal.
For a goal G, a(G) is the set of its ancestor goal sets: the sequences of goals that caused G to become a member of G. Trivially, a goal is an ancestor of each of the preconditions of the operator selected to achieve this goal. a(G) is a set of sets because G can have different sets of ancestors. This concept will become clearer through the example.
For an operator O, c(O) is the set of goals which O was selected to achieve: its causes. Applying O establishes each member of c(O). As illustrated below, the functions a and c are needed to determine which goals are pending and which operators are applicable. They are analogous to the causal links used to determine threats in other planners (Chapman, 1987; McAllester & Rosenblitt, 1991). The sequence of planning decisions in this example (Figure 2 through Figure 9) is designed to illustrate the uses of all of flecs's variables and functions.
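As a reading aid for the example that follows, this bookkeeping can be pictured as a small record attached to each search node. The Python rendering below is illustrative and simplified (the activity filtering of P and A via a and c, described in Section 4, is omitted); it is not flecs's actual Lisp data structure.

```python
from dataclasses import dataclass, field

@dataclass
class SearchNode:
    C: set                 # current internal state (ground literals)
    G: set                 # fringe goals of the subgoal tree
    O: list                # instantiated operators selected so far
    a: dict = field(default_factory=dict)  # goal -> set of ancestor goal sets
    c: dict = field(default_factory=dict)  # operator -> set of causes (goals)

    def pending(self):
        """P (simplified): fringe goals not yet true in the current state."""
        return {g for g in self.G if g not in self.C}

    def applicable(self):
        """A (simplified): selected operators whose preconditions hold in C."""
        return [op for op in self.O if set(op.preconds) <= self.C]
```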
We recommend becoming familiar with them by spending some time carefully tracing their values and returning to the above definitions throughout this example. Note that the figures show only the tail-plan, while we mention applied operators and state changes in the text. Goals are in circles: solid circles if they are not true and dashed circles if they are true in the current state. Operators are in boxes with arrows pointing to the goals which they \"produce,\" i.e., the goals which the operators have been selected to achieve (their causes). In turn, the preconditions of these operators are goals with arrows pointing to the operators which \"consume\" them. Operators that are applicable in the current state appear in bold boxes. Changes to the functions c and a are underlined in the captions. We present now the example.
Figure 2 shows the initial planning situation, in which we consider a planning problem with three literals in the goal statement, G1, G2, and G3, i.e., G = {G1, G2, G3}. There is one literal in the initial state, G7, i.e., C = {G7}. As none of the goals is true in the initial state, P = G. There are no operators selected, i.e., O = ∅, and therefore also no operators applicable, i.e., A = ∅. At this point, since they are all top-level goals, none of the goals has any ancestors: a(G1) = a(G2) = a(G3) = ∅. As there are no applicable operators, the next step must be to subgoal on one of the pending goals. Figure 3 shows the planning situation after flecs subgoals on G1 and G2. Suppose that operator O1, with preconditions G6 and G7, is selected to achieve G1, while O2 is chosen to achieve G2 as indicated below. Note that the operators' preconditions replace their causes in the set of fringe goals G; since G7 is true in the current state, it is NOT included in the set of pending goals P. Here G1 is the cause of O1, so c(O1) = {G1}; similarly, c(O2) = {G2}. The new goals all have nonempty ancestor sets: a(G6) = a(G7) = {{G1}}, and a(G4) = {{G2}}. There are still no applicable operators: O1 cannot be applied because G6 ∉ C, and O2 cannot be applied because G4 ∉ C. Therefore, flecs subgoals again.
(Figure 2 state: C = {G7}, G = {G1, G2, G3}, O = ∅, P = {G1, G2, G3}, A = ∅.)
(Figure 3 state: C = {G7}, G = {G3, G6, G7, G4}, O = {O1, O2}, P = {G3, G6, G4}, A = ∅.)
Figure 3: Resulting planning situation after subgoaling on G1 and G2.
Figure 4 shows the planning situation after flecs subgoals on G3. Suppose that the operator selected to achieve G3 has preconditions G4 and G5. We now have c(O3) = {G3} and a(G5) = {{G3}}. The causes of operators O1 and O2 do not change, so c(O1) = {G1} and c(O2) = {G2} as in the previous step. Similarly, a(G6) and a(G7) remain unchanged. However, G4 now has two sets of ancestor goals: a(G4) = {{G2}, {G3}}. To understand the need to keep both ancestor sets, consider the possibility that G2 could be achieved unexpectedly as a side-effect of some unrelated operator instead of being achieved by O2 as planned for. In this case, G4 would remain a pending goal, since it would be needed to achieve G3.
Again, since there are no applicable operators, flecs must subgoal on one of the pending goals, i.e., G6, G4, or G5.
(Figure 4 state: C = {G7}, G = {G6, G7, G4, G5}, O = {O1, O2, O3}, P = {G6, G4, G5}, A = ∅.)
Figure 4: Resulting planning situation after subgoaling on G3.
Figure 5 shows the planning situation after flecs subgoals on G4. Suppose that O4, an operator with precondition G7, is selected to achieve G4. Since G7 is true in the current state, O4 is our first applicable operator. Note that it is necessarily ordered before O2 and O3, since its cause is a precondition of these operators. As usual, the cause of the new operator is stored: c(O4) = {G4}. In addition, the ancestors of G7 must be augmented to include two new ancestor sets: a(G7) = {{G1}, {G4, G2}, {G4, G3}}. Although there is now an applicable operator, let us assume that flecs chooses to delay its commitment to order O4 as the first step in the plan and subgoals again on a pending goal. Figure 6 shows the planning situation after flecs subgoals on G5. Suppose that operator O4 can also achieve G5 and that it is selected to do so. We now need to update both the causes of this operator and the ancestors of its precondition: c(O4) = {G4, G5} and a(G7) = {{G1}, {G4, G2}, {G4, G3}, {G5, G3}}. Now, rather than subgoaling on the last remaining pending goal (G6), let us apply O4. Note that this decision corresponds to an early commitment in terms of ordering the operators O1, O4, and any operators later selected to achieve G6, which are unordered by the current planning constraints. flecs changes here from its delayed-commitment strategy to an eager-commitment strategy. (Figure 7, not reproduced here, shows the resulting situation: after O4 is applied, G4 and G5 hold in C, so that both O2 and O3 become applicable.) Figure 8 shows the planning situation after flecs applied O2. Suppose that, although it was not selected to do so, operator O2 achieves G1 as a side-effect. Perhaps O2 has a conditional effect that was not visible to the planner, or perhaps O1 simply looked more promising than O2 as an operator to achieve G1 at the time when it was selected. In any case, G1 is now in C and the planning done for it is no longer needed: G6 is no longer a pending goal, since its sole ancestor is already in C. This fortuitous achievement of a goal is the reason that we need to use the functions c and a to adjust the sets of pending goals P and applicable operators A: it would be wasted effort for flecs to plan to achieve G6.
(Figure 8 state: C = {G7, G4, G5, G1, G2}, G = {G6, G7, G4, G5, G2}, O = {O1, O3}, P = ∅, A = {O3}.)
Note that were G6 a precondition of O3 as well as O1, it would be a pending goal, since it would still be relevant to achieving G3. At this point, only the ancestors of G4 must be reset: a(G4) = {{G3}}. Since there are no more pending goals, flecs must now apply the last remaining applicable operator, O3.
Figure 9 shows the final planning situation after flecs applied O3. At this point all of the top-level goals are true in the current state. Despite the fact that some of the planning tree remains, flecs recognizes that there is no more work to be done and terminates. The final plan is O4, O2, O3, which is the sequence of operators applied in the head-plan (not shown) corresponding to the steps in Figures 7, 8, and 9. An a posteriori algorithm (Veloso, Pérez, & Carbonell, 1990) can convert the sequence into a partially ordered plan capturing the dependencies: O4; {O2, O3}.
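The cited a posteriori algorithm is not reproduced in this article; the Python sketch below shows one generic way such a conversion can work, ordering each step after the most recent earlier step that established one of its preconditions. It ignores threats via delete-lists for brevity, and all names are illustrative (reusing the Operator sketch above).

```python
def to_partial_order(plan, initial_state):
    """Derive ordering constraints from a totally ordered plan (sketch).
    Returns pairs (earlier, later) meaning `earlier` must precede `later`."""
    constraints = set()
    for j, later in enumerate(plan):
        for p in later.preconds:
            if p in initial_state:
                continue                      # supplied by the initial state
            for i in range(j - 1, -1, -1):    # most recent establisher of p
                if p in plan[i].adds:
                    constraints.add((plan[i].name, later.name))
                    break
    return constraints
```

On the example above this yields only (O4, O2) and (O4, O3), i.e., the partial order O4; {O2, O3}.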
" }, { "figure_ref": [], "heading": "FLECS: The Detailed Description", "publication_ref": [ "b3" ], "table_ref": [], "text": "Aside from the variables and functions introduced in the preceding section, we need to de ne only four more things before presenting the complete algorithm. First, Initial State and Goal Statement are the corresponding ground literals from the problem de nition. Second, for a given operator O, pre(O), add(O), and del(O) are its instantiated preconditions, add-list, and delete-list respectively. flecs takes these values straight from the domain representation, which may include disjunctions, negations, existentially and universally quanti ed preconditions and e ects, and conditional e ects (Carbonell et al., 1992). When O has conditional e ects, add(O) and del(O) are determined dynamically, using the state at the time O is applied. Third, the \\relevant instantiated operators that could achieve G\" (step 6) are all the instantiated operators O (operators with fully-speci ed bindings) which have G 2 add(O) if G is a positive goal or G 2 del(O) if G is a negative goal. Fourth, toggle is a variable that determines the avor of search, as described later." }, { "figure_ref": [ "fig_5" ], "heading": "The Planning Algorithm", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We present the flecs planning algorithm in full detail in Table 2. 7 While examining the algorithm, notice that the fringe goals G, the selected operators O, the ancestor function a(G), the cause function c(O), and the current state C are maintained incrementally. On the other hand, the pending goals P, the applicable operators A, and toggle are recomputed on every pass through the algorithm.\nStep 1 initializes most of these variables. At the beginning of the planning process, the only goals in G are those in the goal statement, the current state C is the same as the initial state, and since no operators have yet been selected, O is empty. Both the ancestor function a and the cause function c are initialized to the constant function that maps everything to ;.\nIn practice, the domain of a is the set of goals and the domain of c is the set of operators that appear in the problem. However, since most of these goals and all of these operators have not been determined when the algorithm is rst called, we must initialize the functions with unrestricted domains.\nStep 2 is the termination condition. It is called after each time a new operator is applied. The algorithm terminates successfully if every goal G in the goal statement is true, or satis ed, in the current state C, i.e., G 2 C.\nIn step 3, the sets of pending goals and applicable operators are computed based on the current state. Pending goals are the goals that the planner may need to plan for. Initially, the pending goals are the fringe goals that are not currently true or that were true in the initial state. 8 The applicable operators are the selected operators whose preconditions are true in the state.\nThen, step 4 computes the pending goals P and applicable operators A that are active in the current state. A pending goal is active as long as it is on the fringe of the subgoal tree and it still needs to be planned for. A goal is no longer active if every one of its ancestor sets has at least one goal that has already been achieved: then all purposes for which the goal was selected no longer exist (as was the case for G 6 in Figure 8). 
An applicable operator is active in the current state as long as it would achieve a goal that is still useful to the plan. An applicable operator is no longer active if each of its causes is either true in the current state or no longer active.
Step 5 is the most novel part of our algorithm. It allows for a flexible search strategy within a single planning algorithm. Since at this step flecs has not yet terminated, there must be either some active pending goals or some active applicable operators, i.e., P or A must be non-empty. However, if there is only one or the other, then there is no choice to be made. If, on the other hand, both P and A are non-empty, then we can either proceed to step 6 or to step 7. For the sake of completeness, we must keep both options open; but which option flecs considers first may affect the amount of search required. By changing the value of toggle, which can be done on any pass through the loop, flecs can change the type of search as it works on a problem. The corresponding steps of Table 2 read:
5. If toggle = sub and P ⊈ C, subgoal first: go to step 6. If toggle = app, apply first: go to step 7.
6. Choose a goal P from P (not a backtrack point). Choose a goal not true in the Current State using means-ends analysis.
a. Get the set R of relevant instantiated operators that could achieve P.
b. If R = ∅ then: i. P = P − {P}. ii. If P = ∅ then fail (i.e., backtrack). iii. Go to step 6.
c. Choose an operator O from R (backtrack point). Choose the operator with minimum conspiracy number, i.e., the operator which appears to be achievable with the least amount of planning.
d. O = O ∪ {O}.
e. G = (G − {P}) ∪ pre(O).
f. c(O) = c(O) ∪ {P}.
g. ∀G ∈ pre(O): a(G) = a(G) ∪ {{P} ∪ S | S ∈ a(P)}.
h. Go to step 3." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "7. Choose an operator A from A (backtrack point for interactions).", "publication_ref": [ "b19" ], "table_ref": [], "text": "Use a heuristic to find operators with fewer interactions, similar to the one used by the SABA heuristic.
Each pass through the body of the algorithm visits either step 6 or step 7. When subgoaling (step 6), an active pending goal P is chosen from P. Note that, unlike the corresponding choice in step 7, this choice of subgoals is not a backtrack point. However, if there are no operators that could achieve this goal, then another goal is chosen (step 6b). Means-ends analysis is used as a heuristic to prefer subgoaling on goals that are not currently true. Next, an operator O is chosen that could achieve the chosen goal (step 6c). It can either be a new operator or an existing one, as in Figure 6 (O4, which had already been selected to achieve G4, is also selected to achieve G5). The choice of operator is a backtrack point. Unless some other heuristic is provided, the minimum conspiracy number heuristic is used to determine which operator should be tried first (Blythe & Veloso, 1992). In short, this heuristic selects the instantiated operator that appears to be achievable with the least amount of planning. Before returning to the top of the loop, all of the affected variables are updated. First, O is added to O using set union, so that the same operator never appears twice (step 6d). Second, O's preconditions are added to G, while P is removed (step 6e): once P has an operator selected to achieve it, it is no longer on the fringe of the subgoal tree. Third, the cause of O is augmented to include P (step 6f). Fourth, the ancestor sets of O's preconditions are augmented to include all sets of goals comprised of P and its ancestors (step 6g).
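Steps 6d-6g amount to a few set updates. The Python sketch below continues the SearchNode rendering above; the treatment of a(P) = ∅ as {∅}, so that {P} alone becomes an ancestor set for a top-level goal, is our reading of the worked example (e.g., a(G6) = {{G1}}), not something the table states explicitly.

```python
def record_subgoal(node, P, op):
    """Steps 6d-6g (sketch): make op the planned achiever of pending goal P."""
    if op not in node.O:
        node.O.append(op)                             # 6d: O = O ∪ {O}
    node.G = (node.G - {P}) | set(op.preconds)        # 6e
    node.c.setdefault(op, set()).add(P)               # 6f: c(O) = c(O) ∪ {P}
    ancestors_of_P = node.a.get(P) or {frozenset()}   # {∅} for top-level goals
    for g in op.preconds:                             # 6g
        node.a.setdefault(g, set()).update(
            frozenset({P}) | S for S in ancestors_of_P)
```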
As explained in Figure 4, all ancestor sets must be included. Finally, since the state is not changed at all, the termination condition cannot be met, and the algorithm returns to step 3.
When applying an operator (step 7), an applicable operator A is chosen from A. A heuristic that analyzes the applicable operators can be used to choose the best possible operator. One such heuristic analyzes interactions between operators by identifying negative threats, similarly to the saba heuristic in (Stone et al., 1994). In short, this heuristic prefers operators that do not delete any preconditions of, and whose effects are not deleted by, other operators. This choice of an applicable operator is a backtrack point at which all orderings of interacting applicable operators are considered. Different orderings of completely independent operators need not be considered. Completely independent operators are those with interactions neither between themselves nor among their ancestor sets. Since the application of one such operator can make no difference to the application of another, we only need to consider one ordering of these operators.
Once A is chosen, it is promptly applied (step 7a). This application involves changing the current state as prescribed by A. Note that if A has conditional effects, they are expanded at this point. Next, the relevant variables are updated. First, updating involves removing A from the set of selected operators (step 7b). Second, the ancestors of A's preconditions are reduced to those ancestor sets which did not come via A, i.e., which do not contain one of A's causes (step 7c): A does not need further planning. Figure 7 shows an example in which a precondition (G7) does still have an ancestor remaining. Third, since A has been applied, its preconditions that are not goals for any other reason are no longer on the fringe, but its causes are (step 7d): if they are unachieved, they must be re-achieved. Fourth, in case A is ever selected again as an operator to achieve some goal, c(A) is reset to ∅ (step 7e). Finally, since the current state has been altered, the algorithm returns to step 2, where the termination condition is checked." }, { "figure_ref": [ "fig_9" ], "heading": "Discussion: Backtracking, Heuristics, and Properties", "publication_ref": [ "b3", "b10", "b11" ], "table_ref": [], "text": "One should pay close attention to the placement of backtrack points in the algorithm. In particular, there are only three: the subgoal/apply choice in step 5, the choice of operator to achieve a goal in step 6, and the choice of applicable operator in step 7. However, the choice of goal on which to subgoal in step 6, which is a backtrack point in the prodigy algorithm, is not a backtrack point here. flecs does not need this backtrack point because the choice to apply or not to apply an operator at a given time is left open in step 5, and all significantly different orders of applying applicable operators are considered in step 7. As explained in the previous subsection, different orderings of completely independent operators are not considered. Nevertheless, all orderings that could lead to a solution are considered. Therefore, backtracking on the choice of subgoal would only cause redundant search. This elimination of a backtrack point is a significant improvement in flecs over previous implementations, namely NoLimit and prodigy4.0.
Note that no new backtrack points are added to offset the eliminated backtrack point. flecs's only explicit failure point is in step 6 and occurs when the algorithm has chosen to subgoal, but none of the pending goals have any relevant operators. All other failures are implicit. That is, at a backtrack point, if all choices have been unsuccessfully tried then the algorithm backtracks. As presented, the algorithm only terminates unsuccessfully if the entire search space has been exhausted. Other causes for failure, such as goal loops, state loops, depth bounds, and time limits, are incorporated in the same manner as in prodigy4.0 (Carbonell et al., 1992).

At each choice point, there is some heuristic to determine which branch to try (first). In step 6, the goal is chosen using means-ends analysis, and the operator with the minimum conspiracy number is chosen to achieve that goal. In step 7, the choice mechanism from the saba heuristic is used to determine which applicable operator to try first. In step 5, toggle, which can be changed at any time, determines whether the default commitment strategy should be eager subgoaling or eager applying. Note that if all of the pending goals are true in the Current State (or if there are no pending goals), the planner may apply an applicable operator regardless of the value of toggle. Similarly, if there are no applicable operators, the planner must subgoal even if toggle indicates to prefer applying. toggle is a new variable to guide heuristic search at an existing choice point with a branching factor of two: it does not represent the addition of a new backtrack point. As discussed throughout, it provides flecs with the ability to change its commitment strategy. As suggested by its name, toggle can be one of two values: sub and app, indicating eager subgoaling and eager applying respectively.

Here we describe a domain-independent heuristic that could be used to guide changes to the value of toggle. Such a heuristic should allow eager commitments when there is reason to believe that there will not be a need to backtrack over the resulting operator linearization. In this case, setting toggle to app will increase the planning efficiency by converting a partially-ordered set of operators into a sequence that leads to a single possible state, which can then be used to guide subsequent planning. This process is equivalent to starting a new and smaller planning problem as all the previous choices will be embedded in the state. The situation described above is similar to that which arises in the alpine system, which constructs efficient abstraction hierarchies (Knoblock, 1994). alpine can guarantee that planning hierarchically using its generated abstraction hierarchies will not lead to backtracking across refinement spaces. Figure 10 illustrates how flecs can use this abstraction planning information to control the value of toggle. If toggle changes to app when a particular abstract planning step is completely refined and the abstraction hierarchies preserve alpine's ordered monotonicity property, then there should be no need to backtrack over the resulting operator ordering. Then toggle can change back to sub, and flecs can continue planning with updated state information. The abstraction-driven heuristic is one method for exploiting this choice point. Similarly, the minimum conspiracy number heuristic and the saba heuristic are not the only ways to guide the choices of instantiated operator and applicable operator respectively.
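To make the role of toggle concrete, the step-5 gate can be sketched in the same illustrative style as the fragments above; note that toggle only biases which branch is tried first and never closes off the other branch.

    SUB, APP = "sub", "app"

    def step5_branch(toggle, pending, applicable, state):
        """Return which branch to try first; the other stays open on backtracking."""
        if not applicable:                  # 5b: nothing to apply, so subgoal
            return "subgoal"
        if not pending:                     # 5c: nothing to subgoal on, so apply
            return "apply"
        if toggle == SUB and not pending <= state:
            return "subgoal"                # 5d: some pending goal is untrue in C
        return "apply"                      # 5e: toggle = app, or all pending goals hold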
The heuristics used can always be changed, and we do not claim that the ones we provide as defaults are the best possible: no heuristic will work all the time.

The planning algorithm we present is both sound and complete if it searches the entire search space, using a technique such as iterative deepening (Korf, 1985). flecs is sound because it only terminates when it has reached the goal statement as a result of applying operators. That is, the application of the operator sequence returned as the final plan has been entirely simulated by the time the planner terminates. Thus the preconditions of each operator will all be true at the time the operator is executed, and after all operators have been executed, the goal statement will be satisfied. Consequently, flecs is sound.

Since no step in the algorithm prunes any of the search space, flecs with an iteratively increasing depth bound is also complete: if there is a solution to a planning problem, flecs will find one. To ensure this property, we need only show that flecs can consider all possible operators that may achieve a goal as well as all orderings of interacting applicable operators. flecs does so by maintaining backtracking points at the choice of operator (step 6c) and at both points at which the operator ordering could be affected: the choice of applicable operator itself (step 7) and the choice of whether to subgoal or apply (step 5d). Selecting "apply" commits to ordering all operators that are not currently applicable after at least one of the currently applicable operators. Note that completeness is achieved even without maintaining the choice of goals to subgoal on as a backtrack point (step 6), since regardless of the order in which the operators are chosen, they are applied according to their possible interactions (i.e., similarly to resolving negative threats). Thus flecs's search space is significantly reduced from that of prodigy4.0, while still preserving completeness. (See Appendix A for formal proofs of flecs's soundness and completeness.)

Empirical Analysis of Heuristics to Control the Commitment Strategy

As we have seen, flecs introduces the notion of a flexible choice point between delayed and eager operator-ordering commitments. To appreciate the need for this flexibility, consider the two extreme heuristics: always eagerly subgoaling (delaying commitment) and always eagerly applying (eager commitment). The former heuristic chooses to subgoal as long as there is at least one active pending goal (Subgoal Always Before Applying, or saba); the latter chooses to apply as long as there are any active applicable operators (Subgoal After eVery Try to Apply, or savta). In this section we show empirical results that demonstrate that both of these extremes can lead to highly sub-optimal search in particular domains. Indeed, we believe that no single domain-independent search heuristic can perform well in all domains (Stone et al., 1994). It is for this reason that we have equipped flecs with the ability to use either extreme domain-independent heuristic or any more moderate heuristic "in between" the two: during every iteration through our algorithm, there is an opportunity to change from eagerly subgoaling to eagerly applying or vice versa.
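Phrased against the step-5 gate sketched earlier, the two extremes are one-line toggle policies; the interface here is again illustrative rather than the authors' code.

    SUB, APP = "sub", "app"

    def saba(node):   # Subgoal Always Before Applying: fully delayed commitment
        return SUB

    def savta(node):  # Subgoal After eVery Try to Apply: fully eager commitment
        return APP

Any function from search nodes to SUB or APP is a legal policy, which is what lets intermediate, interactive, or learned strategies drop in without modifying the planner itself.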
One could define different heuristics to guide this choice, or one could leave the choice up to the user interactively.

This flexibility in search method provides our algorithm with the ability to search sensibly in a wide variety of domains. Any algorithm that is not so flexible is susceptible to coming across domains which it cannot handle efficiently (Barrett & Weld, 1994; Veloso & Blythe, 1994; Kambhampati, 1994). flecs's flexibility makes it possible to study which heuristics work best in which situations. In addition, this flexible choice is a perfect learning opportunity. Since no single search method will solve all planning problems, we will use learning techniques to help us determine from experience which search strategies to try.

To illustrate the need for different search strategies, we provide one real-world situation in which eagerly subgoaling leads directly to the optimal solution, one in which eagerly applying does so, and one in which an intermediate policy is best. These examples are not intended to be an exhaustive demonstration of flecs's capabilities. Rather, our examples are intended to illustrate the need to consider problems other than traditional goal ordering problems and to motivate the potential impact of flecs.

Eagerly Subgoaling Can Be Better

First, consider the class of tasks in which the following is true: all operators are initially executable, but they must be performed in a specific order because each operator deletes the preconditions of the operators that were supposed to be executed earlier. For instance, suppose that there is a single paint brush and several objects which need to be painted different colors. The paint brush can be washed fairly well, but it never comes completely clean. For this reason, if we ever use a lighter paint after a darker paint, some of the darker paint will show up on the painted object and our whole project will be ruined. Perhaps the shade of red is darker than the shade of green. Then to paint a chair with a red seat and green legs, we had better paint the legs first.

Consider a range of colors ordered from light to dark: white, yellow, green, ..., and black. Initially, we could paint an object any color. However, if we start by painting something black, then no other paint can be used. In order to represent this situation to a planner, we created a domain with the operators shown in Table 3. Assume that all the colors are usable in the initial state. Since painting an object a certain color deletes the precondition of painting an object a lighter color, and since this precondition cannot be re-achieved (no operator adds the predicate "usable"), the colors must be used in a specific order.

This painting domain is a real-world interpretation of the artificial domain D^m S^1 introduced in (Barrett & Weld, 1994). The operators in D^m S^1 look like:

  Operator: A_i
    preconds: {I_i}
    adds:     {G_i}
    deletes:  {I_j | j < i}

Since each operator deletes the preconditions of all operators numerically before it, these operators can only be applied in increasing numerical order. Thus, A_1 corresponds to the operator paint-white, A_2 corresponds to paint-yellow, etc. We used this domain for our experiments, all of which were run on a SPARC station. We generated random problems having one to fifteen goals: ten problems with each number of goals. We used these same 150 problems to test both of the extreme heuristics.
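To reproduce this setup, the D^m S^1 schema is straightforward to generate. The sketch below assumes a simple record type for operators and is illustrative rather than the experimental code actually used.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Operator:
        name: str
        pre: frozenset
        adds: frozenset
        dels: frozenset

    def dms1_domain(n):
        """Operator A_i: preconds {I_i}, adds {G_i}, deletes {I_j | j < i}."""
        return [Operator(f"A{i}",
                         frozenset({f"I{i}"}),
                         frozenset({f"G{i}"}),
                         frozenset(f"I{j}" for j in range(1, i)))
                for i in range(1, n + 1)]

With initial state {I_1, ..., I_15}, dms1_domain(15) reproduces the light-to-dark constraint: since A_i deletes every I_j with j < i, the operators can only be applied in increasing numerical order.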
To get our data points, we averaged the results for the ten problems with the same number of goals. All of the raw data is contained in the online appendix. We graph the average time that flecs took to solve the problems versus the number of goals.

As shown in (Stone et al., 1994), eagerly applying leads to exponential behavior (as a function of the number of goals) in this domain, while eagerly subgoaling, when using an operator choice heuristic from the same study, leads to approximately linear behavior and no backtracking. The problem with eagerly applying is that, for example, if goal G_7 is solved before G_4, then flecs will immediately apply A_7 and have to backtrack when it unsuccessfully tries to apply A_4. Eagerly subgoaling allows flecs to build up the set of operators that it will need to apply and then order them appropriately by selecting an application order that avoids conflicts or threats. Figure 11 shows a graphic comparison of the two different behaviors.

Eagerly Applying Can Be Better

Next, consider the class of tasks in which the following is true: several operators could be used to achieve any goal, but each operator can only be used once. To use a similar example, suppose we are trying to paint different parts of a single object different colors. However, now suppose that we are using multiple brushes that never come clean: once we use a brush for one color, we can never safely use it again. For instance, if we painted the green parts using brush1, we would need to use brush2 (or any brush besides brush1) to paint the red parts. Table 4 shows the operators in this new domain:

  Operator: paint-with-brush1 <parts> <color>
    preconds: (unused brush1)
    adds:     (painted <parts> <color>)
    deletes:  (unused brush1)
  ...
  Operator: paint-with-brush8 <parts> <color>
    preconds: (unused brush8)
    adds:     (painted <parts> <color>)
    deletes:  (unused brush8)

Note that each operator can be used for any color, but since it deletes its own precondition, it can only be used once. We capture the essential features of this domain in an artificial domain called D^1-use-once. The operators in D^1-use-once look like:

  Operator: A_i
    preconds: {I_i}
    adds:     {<g>}
    deletes:  {I_i}

Any operator can achieve any goal, but since each operator deletes its own precondition, it can only be used once. Each operator corresponds to painting with a different brush.

In this domain, it is better to eagerly apply than it is to eagerly subgoal. Eagerly subgoaling causes flecs to select the same operator to achieve all of its goals. With a deterministic method for selecting operators (such as minimum conspiracy number with order of appearance in the domain specification as a tie-breaker), it selects operator A_1 to achieve two different goals. However, since it could only apply A_1 once, it would need to backtrack to select a different operator for one of the goals. As shown in Figure 12, eagerly applying outperforms eagerly subgoaling in this case. We generated these results in the same way as the results in the previous subsection.

An Intermediate Heuristic

Were it always possible to find good solutions either by always eagerly subgoaling, as in the first example, or by always eagerly applying, as in the second, there would be no need to include the variable toggle in flecs: we could simply have an eager-subgoal mode and an eager-apply mode.
However, there are cases when neither of the above alternatives suffices. Instead, we need to eagerly subgoal during some portions of the search and eagerly apply during others. One heuristic for changing the commitment strategy is the abstraction-driven method described in Section 4.2. Here we present a domain which can use a form of this heuristic. This time consider the class of tasks in which the following is true: top-level goals take at least three operators to achieve, one of which is irreversible, can only be executed a limited number of times, and restricts the bindings of the other operators. One representative of this class is the one-way rocket domain introduced in (Veloso & Carbonell, 1993). For the sake of consistency, however, we will present a representative of this class of domains in the painting context. Suppose that we are painting walls with rollers. To paint a wall we need to first "ready" the wall, which for the purpose of this example means to decide that the wall needs to be painted and to designate a color and roller to paint the wall. Next we must fill the selected roller with the appropriately colored paint. Then we can paint the wall. Unfortunately, our limited supply of rollers can never become clean after they have been filled with paint, but they must be clean when they are selected to paint a wall. For this reason, we must ready all the walls that we want to paint with the same roller before we can fill the roller with paint. For the reader familiar with the one-way rocket domain, the "fill-roller" operator here is analogous to the "move-rocket" operator in that domain: it can only be executed once due to a limited supply of fuel, and it must be executed after it has been fully loaded. When given this domain representation, flecs has a difficult time with some apparently simple problems if it uses the same search strategy throughout its entire search. For example, consider the problem with five walls and two rollers (equivalent to a problem in the one-way rocket domain with five objects and two destinations): flecs does not directly find a solution when always eagerly subgoaling or when always eagerly applying. To search efficiently, it must subgoal until it has considered all the walls that need to be painted the same color; then it must apply all applicable operators before continuing. There is no explicit information in the domain telling it to use one roller for red and one roller for green. For this reason, when flecs eagerly subgoals, it initially selects the same roller to paint all the walls. It extensively backtracks before finding the correct bindings. flecs also does not realize that it should "ready" all the walls that are going to be painted the same color before filling the roller. Thus, when flecs eagerly applies operators, it tries filling a roller as soon as it has one wall "readied." Note that planning with variables would not solve this problem since the planner would still need to make binding selections before subgoaling beyond "paint-wall," hence facing the same problems.

When flecs tries to solve the above problem using either strategy described, it does not succeed in a reasonable amount of time. Since flecs is complete, it would certainly succeed eventually, but eventually can be a long time away when dealing with an NP-hard problem: neither of these commitment strategies leads to a solution to the above problem in under 500 seconds of search time. But all is not lost.
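As a preview of the fix described next, the escape can be phrased as one more policy for the step-5 gate sketched earlier. The predicate unreadied_walls is a hypothetical stand-in for "some wall that must share a roller has not yet been designated"; a real implementation would inspect the simulated state.

    SUB, APP = "sub", "app"

    def unreadied_walls(node):
        # Hypothetical: true while some (ready <wall> <roller> <color>) goal
        # for the color currently being planned is still pending.
        return any(g[0] == "ready" for g in node.pending)

    def roller_policy(node):
        if unreadied_walls(node):
            return SUB    # gather all walls for the same roller before committing
        return APP        # then fill the roller and paint before subgoaling again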
By changing the value of toggle at the appropriate times, flecs can easily find a solution to the above problem. In fact, it can do so in just 4 seconds when toggle is manually changed at the appropriate times:

                       time (sec)   solution?
    eager applying          500        no
    eager subgoaling        500        no
    variable strategy         4        yes

If flecs eagerly subgoals until it has decided to paint wallA, wallB, and wallC with roller1, then it can begin eagerly applying. Once the three walls are painted red, flecs can begin subgoaling again without any danger of preparing the other walls with the wrong roller: only roller2 is still clean. This is an example in which the change in state allows the minimum conspiracy number heuristic to select the correct instantiated operator.

The general heuristic here is that toggle should be set to sub until all walls that need to be painted the same color have been considered. Then toggle should be set to app until all the applicable operators have been applied. Then toggle should be set back to sub as the process continues. In this way, flecs will need to do very little backtracking and it can quickly reach a solution. This heuristic corresponds to using an abstraction hierarchy to deal separately with the interactions between the different colors and the different walls.

Conclusion

We have presented a planner that is intended for studying the correspondence between planning problems and the search heuristics that are most suited to those problems. flecs has the ability to eagerly subgoal, thus delaying operator-ordering commitments; eagerly apply, thus maximizing the advantages of maintaining an internal state; or to flexibly interleave these two strategies. Thus it can operate at any point in the continuum of operator-ordering heuristics, one important dimension of planning.

In this paper, we explained the advantages and disadvantages of delayed and eager commitments. We presented the flecs algorithm in full detail, carefully motivating the concepts and illustrating them with clear examples. We discussed different heuristics to guide flecs in its choice points and discussed its properties. We showed examples of specific planning tasks and corresponding empirical results which support our position that a general-purpose planner must be able to use a flexible commitment strategy. Although all planning problems are solvable by complete planners, flecs may solve some of the problems more efficiently than other planners that do not have the ability to change their commitment strategy and may fall into a worst case of their unique commitment strategy.

flecs provides a framework to study the characteristics of different planning strategies and their mapping to planning domains and problems. flecs represents our view that there is no domain-independent planning strategy that is uniformly efficient across different domains and problems. flecs addresses the particular operator-ordering choice as a flexible planning decision. It allows the combination of delayed and eager operator-ordering commitments to take advantage of the benefits of explicitly using a simulated execution state and reasoning about planning constraints.

We are currently continuing our work on understanding the tradeoffs among different planning strategies along different dimensions. We plan to study the effects of eager versus delayed commitments at the point of operator instantiations. We are also investigating the effects of combining real execution into flecs.
Finally, we plan to use machine learning techniques on flecs's choice points to gain a possibly automated understanding of the mapping between efficient planning methods and planning domains and problems.

Appendix A. Proofs

We prove that flecs is sound and that with iterative deepening it is complete. Consider the flecs algorithm as presented in Table 2. A planning problem is determined by the initial state, the goal statement, and the set of operators available in the domain. A plan is a (totally-ordered) sequence of instantiated operators. The returned plan generated by flecs for a planning problem is the sequence of applied operators upon termination. A solution to a planning problem is a plan whose operators can be applied to the problem's initial state so as to reach a state that satisfies the Goal Statement. A justified solution is a solution such that no subsequence of operators in the solution is also a solution. flecs terminates successfully when the termination condition is met (step 2).

Theorem 1. flecs is sound.

We show that the flecs algorithm is sound; that is, if the algorithm terminates successfully, then the returned plan is indeed a solution to the given planning problem.

Assume that flecs terminates successfully and that S = O_1, O_2, ..., O_n is the returned plan. flecs applies an operator only when the preconditions of the operator are satisfied in the Current State C (step 7). Hence, by construction, after operators O_1, O_2, ..., O_k for any k < n have been applied, the preconditions of operator O_{k+1} are satisfied in C. At the point of termination, the Current State C satisfies the Goal Statement (step 2). But C was reached from the initial state by applying the operators of S. Therefore S is a solution. QED.

Theorem 2. flecs with iterative deepening is complete.

Recall that completeness, informally, means that if there is a solution to a particular problem, then the algorithm will find it. We show that flecs's search space is complete and that flecs's search algorithm is complete as long as it explores all branches of the search space, for example using iterative deepening (Korf, 1985). Iterative deepening involves searching with a bound on the number of search steps that may be performed before a particular search path is suspended from further expansion; if no solution is found for a particular depth bound, the search is repeated with a larger depth bound.

For a planning problem, assume that S = O_1, O_2, ..., O_n is a justified solution. We will show that if flecs searches with iterative deepening, it will find a solution.

The flecs algorithm has four choice points. Three of these choice points are backtrack points: the choice between subgoaling and applying (step 5d), the choice of which operator to use to achieve a goal (step 6c), and the choice of which applicable operator to apply (step 7). One choice point is not a backtrack point: the choice of goal on which to subgoal (step 6).

To prove completeness, we must show that at each backtrack point, there is some possible choice that will lead flecs towards finding the plan S, no matter what choices flecs makes at the non-backtrack choice point.
Then if flecs explores all branches of the search space by searching with iterative deepening, it must eventually find S unless it finds some other solution (of length at most n) first.

The proof involves constructing oracles that tell flecs which choices to make at the backtrack points so as to find S. Then no matter what choices it makes at the other choice point, it finds solution plan S. Consider the point in the search at which operators O_1, O_2, O_3, ..., O_k for some k (and no others) have already been applied. Then let there be oracles at the backtrack points which operate as follows.

At the choice of subgoaling or applying (step 5d), the first oracle makes flecs choose to apply if and only if O_{k+1} is applicable (i.e., is in A); otherwise it makes flecs subgoal. If flecs chooses to apply (O_{k+1} ∈ A), then it reaches another choice point, namely the choice of operator to apply (step 7). Another oracle makes flecs select precisely the step O_{k+1}.

If flecs chooses to subgoal (O_{k+1} ∉ A), then let flecs choose any goal P from the set of pending goals P (step 6). Since step 6 is not a backtrack point, we cannot have an oracle determine the choice at this point. Instead we have to show that, independently from the choice made at this point, flecs will still find the solution S. It can find this solution as a consequence of the construction of the next oracle that controls the final choice point (below). That oracle guarantees that any P selected must either be a member of the goal statement or a precondition of some operator of S.

The final choice point is the selection of an operator to achieve P (step 6c). The third oracle makes flecs choose an operator of S to achieve P. Since S is a solution to the planning problem and since P is either a member of the Goal Statement or a precondition of some operator of S, there must be some operator of S that achieves P. If there is more than one such operator, any one can be chosen. Since only operators from S are selected, the condition that all pending goals are from the Goal Statement or are preconditions of operators of S is maintained.

These three oracles will lead flecs to the justified solution S. Since S is justified, every operator of S is necessary to achieve either some goal in the goal statement or some precondition of another operator. Consequently, since the third oracle only chooses operators of S, every such operator will eventually be chosen and then applied as prescribed by the first two oracles. Once every operator of S has been applied, the termination condition will be met (since S is a solution) and flecs will terminate successfully. QED.

Acknowledgements

We would like to recognize in particular the contributions of Jim Blythe and Eugene Fink to our research. Jim Blythe is largely responsible for the current implementation of prodigy4.0 upon which flecs is based. Eugene Fink helped with the formalization of our algorithms and proofs. We thank Eugene Fink, Karen Haigh, Gary Pelton, Alicia Pérez, Xuemei Wang, and the anonymous reviewers for their comments on this article.

This research is sponsored by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330.
The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Wright Laboratory or the U.S. Government.
[ { "authors": "J Ambros-Ingerson; S Steel", "journal": "", "ref_id": "b0", "title": "Integrating planning, execution, and monitoring", "year": "1988" }, { "authors": "A Barrett; D S Weld", "journal": "Arti cial Intelligence", "ref_id": "b1", "title": "Partial-order planning: Evaluating possible e ciency gains", "year": "1994" }, { "authors": "J Blythe; M M Veloso", "journal": "", "ref_id": "b2", "title": "An analysis of search techniques for a totally-ordered nonlinear planner", "year": "1992" }, { "authors": "J G Carbonell; J Blythe; O Etzioni; Y Gil; R Joseph; D Kahn; C Knoblock; S Minton; A Reilly; S Veloso; M Wang; X ", "journal": "", "ref_id": "b3", "title": "PRODIGY4.0: The manual and tutorial", "year": "1992" }, { "authors": "J G Carbonell; C A Knoblock; S Minton", "journal": "Erlbaum", "ref_id": "b4", "title": "Prodigy: An integrated architecture for planning and learning", "year": "1990" }, { "authors": "D Chapman", "journal": "Arti cial Intelligence", "ref_id": "b5", "title": "Planning for conjunctive goals", "year": "1987" }, { "authors": "G W Ernst; A Newell", "journal": "Academic Press", "ref_id": "b6", "title": "GPS: A Case Study in Generality and Problem Solving", "year": "1969" }, { "authors": "R E Fikes; N J Nilsson", "journal": "Arti cial Intelligence", "ref_id": "b7", "title": "Strips: A new approach to the application of theorem proving to problem solving", "year": "1971" }, { "authors": "E Fink; M Veloso", "journal": "", "ref_id": "b8", "title": "PRODIGY planning algorithm", "year": "1994" }, { "authors": "S Kambhampati", "journal": "", "ref_id": "b9", "title": "Desing tradeo s in partial order (plan space) planning", "year": "1994" }, { "authors": "C A Knoblock", "journal": "Arti cial Intelligence", "ref_id": "b10", "title": "Automatically generating abstractions for planning", "year": "1994" }, { "authors": "R E Korf", "journal": "Arti cial Intelligence", "ref_id": "b11", "title": "Depth-rst iterative-deepening: An optimal admissible tree search", "year": "1985" }, { "authors": "D Mcallester; D Rosenblitt", "journal": "", "ref_id": "b12", "title": "Systematic nonlinear planning", "year": "1991" }, { "authors": "D V Mcdermott", "journal": "Cognitive Science", "ref_id": "b13", "title": "Planning and acting", "year": "1978" }, { "authors": "S Minton", "journal": "", "ref_id": "b14", "title": "Integrating heuristics for constraint satisfaction problems: A case study", "year": "1993" }, { "authors": "S Minton; J Bresina; M Drummond", "journal": "", "ref_id": "b15", "title": "Commitment strategies in planning: A comparative analysis", "year": "1991" }, { "authors": "S Minton; C A Knoblock; D R Kuokka; Y Gil; R L Joseph; J G Carbonell", "journal": "", "ref_id": "b16", "title": "prodigy 2.0: The manual and tutorial", "year": "1989" }, { "authors": "P S Rosenbloom; A Newell; J E Laird", "journal": "Erlbaum", "ref_id": "b17", "title": "Towards the knowledge level in SOAR: The role of the architecture in the use of knowledge", "year": "1990" }, { "authors": "E D Sacerdoti", "journal": "American Elsevier", "ref_id": "b18", "title": "A Structure for Plans and Behavior", "year": "1977" }, { "authors": "P Stone; M Veloso; J Blythe", "journal": "", "ref_id": "b19", "title": "The need for di erent domain-independent heuristics", "year": "1994" }, { "authors": "A Tate", "journal": "", "ref_id": "b20", "title": "Generating project networks", "year": "1977" }, { "authors": "M Veloso; J Blythe", "journal": "", "ref_id": "b21", "title": "Linkability: Examining causal link commitments in 
partialorder planning", "year": "1994" }, { "authors": "M M Veloso", "journal": "", "ref_id": "b22", "title": "Nonlinear problem solving using intelligent casual-commitment", "year": "1989" }, { "authors": "M M Veloso; J G Carbonell", "journal": "Machine Learning", "ref_id": "b23", "title": "Derivational analogy in prodigy: Automating case acquisition, storage, and utilization", "year": "1993" }, { "authors": "M M Veloso; M A Carbonell; J G ", "journal": "", "ref_id": "b24", "title": "Nonlinear planning with parallel resource allocation", "year": "1990" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b25", "title": "", "year": "" }, { "authors": "D E Wilkins", "journal": "Arti cial Intelligence", "ref_id": "b26", "title": "Domain-independent planning: Representation and plan generation", "year": "1984" } ]
[ { "formula_coordinates": [ 8, 96, 448.32, 359.8, 141.92 ], "formula_id": "formula_0", "formula_text": "G G G 1 2 3 C = fG 7 g G = fG 1 ; G 2 ; G 3 g O = ; P = fG 1 ; G 2 ; G 3 g A = ;" }, { "formula_coordinates": [ 9, 96, 145.92, 377.99, 158.61 ], "formula_id": "formula_1", "formula_text": "G G G G 4 G 6 1 2 3 1 2 7 G O O C = fG 7 g G = fG 3 ; G 6 ; G 7 ; G 4 g O = fO 1 ; O 2 g P = fG 3 ; G 6 ; G 4 g A = ;" }, { "formula_coordinates": [ 9, 96, 509.04, 377.99, 158.61 ], "formula_id": "formula_2", "formula_text": "G G G G G 4 G 6 1 2 3 1 2 3 5 7 G O O O C = fG 7 g G = fG 6 ; G 7 ; G 4 ; G 5 g O = fO 1 ; O 2 ; O 3 g P = fG 6 ; G 4 ; G 5 g A = ;" }, { "formula_coordinates": [ 11, 96, 503.04, 377.99, 165.57 ], "formula_id": "formula_3", "formula_text": "G G 6 1 2 3 1 3 5 7 4 G G G G G O O C = fG 7 ; G 4 ; G 5 ; G 1 ; G 2 g G = fG 6 ; G 7 ; G 4 ; G 5 ; G 2 g O = fO 1 ; O 3 g P = ; A = fO 3 g" }, { "formula_coordinates": [ 14, 123.36, 497.64, 234.72, 51.84 ], "formula_id": "formula_4", "formula_text": "d. O = O fOg. e. G = (G fPg) pre(O). f. c(O) = c(O) fPg. g. 8G 2 pre(O):a(G) = a(G) ffPg S j S 2 a(P)g." } ]
FLECS: Planning with a Flexible Commitment Strategy
There has been evidence that least-commitment planners can e ciently handle planning problems that involve di cult goal interactions. This evidence has led to the common belief that delayed-commitment is the \best" possible planning strategy. However, we recently found evidence that eager-commitment planners can handle a variety of planning problems more e ciently, in particular those with di cult operator choices. Resigned to the futility of trying to nd a universally successful planning strategy, we devised a planner that can be used to study which domains and problems are best for which planning strategies. In this article we introduce this new planning algorithm, flecs, which uses a FLExible Commitment Strategy with respect to plan-step orderings. It is able to use any strategy from delayed-commitment to eager-commitment. The combination of delayed and eager operator-ordering commitments allows flecs to take advantage of the bene ts of explicitly using a simulated execution state and reasoning about planning constraints. flecs can vary its commitment strategy across di erent problems and domains, and also during the course of a single planning problem. flecs represents a novel contribution to planning in that it explicitly provides the choice of which commitment strategy to use while planning. flecs provides a framework to investigate the mapping from planning domains and problems to e cient planning strategies.
Manuela Veloso; Peter Stone
[ { "figure_caption": "Figure 1 :1Figure 1: This diagram from(Fink & Veloso, 1994) illustrates the di erence between subgoaling and applying. A search node consisting of a \\head-plan\" and a \\tailplan.\" The head-plan contains operators that have already been applied and have changed the initial state \\I\" to the current state \\C.\" The tail-plan consists of operators that have been selected to achieve goals in the goal statement \\G\" and operators that have been selected to achieve preconditions of these operators, etc. The gure shows how the planner could either subgoal or apply at a given search node.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An example: The initial speci cation of a planning situation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 5: Resulting planning situation after subgoaling on G 4 .", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Resulting planning situation after subgoaling on G 5 .", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 77Figure7shows the planning situation after flecs applied O 4 . Since operator O 4 was applied in order to achieve goals G 4 and G 5 , they are both true in the current state and back on the fringe of the goal tree, i.e., they are in C and G. Notice that they stay in G until eventually they have been \\consumed\" by O 2 and O 3 . However, since they are true in the current state, they are not pending goals. Since G 7 is once again the precondition of only one selected operator, a(G 7 ) = ffG 1 gg as before. O 2 and O 3 are now applicable as their preconditions are all true in the current state thanks to O 4 . Let us assume that flecs maintains the eager-commitment strategy and continues applying applicable operators. flecs orders O 2 before O 3 , since O 3 deletes a precondition of O 2 (e ects are not shown).", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Resulting planning situation after applying O 2 from Figure 7.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Final planning situation after applying O 3 from Figure 8.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "a. Apply A: C = (C add(A)) del(A) b. O = O fAg. c. 8G 2 pre(A):a(G) = a(G) fS 2 a(G) j S \\ c(A) 6 = ;g. d. G = (G c(A)) fG 2 pre(A) j a(G) = ;g. e. c(A) = ;.f. Go to step 2.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Using abstraction information to guide changes to toggle.", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: flecs's performance with di erent heuristics in domains D m S 1 . 
Eager subgoaling and applying correspond to delayed commitments and eager commitments respectively.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: flecs's performance with di erent heuristics in domains D 1 -use-once.", "figure_data": "", "figure_id": "fig_11", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "A top-level view of flecs. The step numbers here are made to correspond with the step numbers in the detailed version of the algorithm presented in Table 2 (Section 4), which re nes these steps and adds an additional necessary step 4.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Set or reset toggle to sub or app, i.e. Set default to delayed or eager commitment. b. If A = ;, go to step 6. c. If P = ;, go to step 7.", "figure_data": "1. Initialize: a. G = Goal Statement. b. C = Initial State. c. O = ;. d. 8G:a(G) = ;. e. 8O:c(O) = ;.2. Terminate if Goal Statement C.3. Compute applicable operators A and pending goals P: a. P = fG 2 G j G 6 2 C _ G 2 Initial Stateg. b. A = fA 2 O j pre(A) Cg.4. Adjust P and A to contain only active members: a. P = P fP 2 P j 8S 2 a(P):9G 2 S s.t. G 2 Cg. b. A = A fA 2 A j 8G 2 c(A): (G 2 C) _ (8S 2 a(G):9G 0 2 S s.t. G 0 2 C)]g.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The full description of flecs.", "figure_data": "C: current state G: fringe goals P: pending goals O: instantiated operators A: applicable operators a: ancestor goal sets c: causes", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Example domain for which delayed step-ordering commitment results in e cient planning.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Example domain for which eager step-ordering commitment and use of the state results in e cient planning.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Table 5 shows a possible set of operators in this painting domain.", "figure_data": "Operator: designate-roller <wall> <roller> <color> <roller> <color> <wall> <roller> <color> ll-roller paint-wall preconds: (clean <roller>) (clean <roller>) (ready (needs-painting <wall>) (chosen <wall> <roller> <color>) <roller> <color>) ( lled-with-paint <roller> <color>) adds: (ready ( lled-with-paint (painted <wall> <color>) <wall> <roller> <color>) <roller> <color>) (chosen <roller> <color>) deletes: (clean <roller>) (ready <wall> <roller> <color>) (needs-painting <wall>)", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Example domain for which the exibility of commitments results in e cient planning.", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b22", "b38", "b35", "b19", "b30", "b21", "b43", "b20", "b31", "b3", "b25", "b43", "b34", "b18", "b15" ], "table_ref": [], "text": "Inductive logic programming (ILP) is a growing subtopic of machine learning that studies the induction of Prolog programs from examples in the presence of background knowledge (Muggleton, 1992;Lavra c & D zeroski, 1994). Due to the expressiveness of rst-order logic, ILP methods can learn relational and recursive concepts that cannot be represented in the attribute/value representations assumed by most machine-learning algorithms. ILP methods have successfully induced small programs for sorting and list manipulation (Shapiro, 1983;Sammut & Banerji, 1986;Muggleton & Buntine, 1988;Quinlan & Cameron-Jones, 1993) as well as produced encouraging results on important applications such as predicting protein secondary structure (Muggleton, King, & Sternberg, 1992) and automating the construction of natural-language parsers (Zelle & Mooney, 1994b).\nHowever, current ILP techniques make important assumptions that restrict their application. Below are three common assumptions:\n1. Background knowledge is provided in extensional form as a set of ground literals. 2. Explicit negative examples of the target predicate are available.\n3. The target program is expressed in \\pure\" Prolog where clause-order is irrelevant and procedural operators such as cut (!) are disallowed. The currently most well-known and successful ILP systems, Golem (Muggleton & Feng, 1990) and Foil (Quinlan, 1990), both make all three of these assumptions. However, each of these assumptions brings signi cant limitations since:\n1. An adequate extensional representation of background knowledge is frequently in nite or intractably large.\n2. Explicit negative examples are frequently unavailable and an adequate set of negative examples computed using a closed-world assumption is in nite or intractably large.\n3. Concise representation of many concepts requires the use of clause-ordering and/or cuts (Bergadano, Gunetti, & Trinchero, 1993).\nThis paper presents a new ILP method called Foidl (First-Order Induction of Decision Lists) which helps overcome each of these limitations by incorporating the following properties:\n1. Background knowledge is represented intensionally as a logic program.\n2. No explicit negative examples need be supplied or constructed. An assumption of output completeness can be used instead to implicitly determine if a hypothesized clause is overly-general and, if so, to quantify the degree of over-generality by simply estimating the number of negative examples covered.\n3. A learned program can be represented as a rst-order decision list, an ordered set of clauses each ending with a cut. This representation is very useful for problems that are best represented as general rules with speci c exceptions.\nAs its name implies, Foidl is closely related to Foil and follows a similar top-down, greedy specialization guided by an information-gain heuristic. However, the algorithm is substantially modi ed to address the three advantages listed above. The use of intensional background knowledge is fairly straightforward and has been incorporated in previous Foil derivatives (Lavra c & D zeroski, 1994;Pazzani & Kibler, 1992;Zelle & Mooney, 1994b), The development of Foidl was motivated by a failure we observed when applying existing ILP methods to a particular problem, that of learning the past tense of English verbs. 
This problem has been studied fairly extensively using both connectionist and symbolic methods (Rumelhart & McClelland, 1986;MacWhinney & Leinbach, 1991;Ling, 1994); however, previous e orts used specially-designed feature-based encodings that impose a xed limit on the length of words and fail to capture the position-independence of the underlying transformation. We believed that representing the problem as constructing a logic program for the predicate past(X,Y) where X and Y are words represented as lists of letters (e.g past ( a,c,t] ,a,c,t,e,d]),past( a,c,h,e],a,c,h,e,d]), past( a,r,i,s,e], a,r,o,s,e])) would produce much better results. However, due to the limitations mentioned above, we were unable to get reasonable results from either Foil or Golem. However, by overcoming these limitations, Foidl is able to learn highly accurate programs for the past-tense problem from many fewer examples than required by previous methods.\nThe remainder of the paper is organized as follows. Section 2 provides important background material on Foil and on the past-tense learning problem. Section 3 presents the Foidl algorithm and details how it incorporates the three advantages discussed above. Section 4 presents our results on learning the past-tense of English verbs demonstrating that Foidl out-performs all previous methods on this problem. Section 5 reviews related work, Section 6 discusses limitations and future directions, and Section 7 summarizes and presents our conclusions." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b31", "b30", "b29" ], "table_ref": [], "text": "Since Foidl is based on Foil, this section presents a brief review of this important ILP system; Quinlan (1990), Quinlan andCameron-Jones (1993), andCameron-Jones andQuinlan (1994) provide a more complete description. The section also presents a brief review of previous work on the English past tense problem." }, { "figure_ref": [], "heading": "FOIL", "publication_ref": [ "b34", "b26", "b12", "b18", "b17", "b15", "b27", "b15", "b4", "b29", "b29", "b33" ], "table_ref": [], "text": "Foil learns a function-free, rst-order, Horn-clause de nition of a target predicate in terms of itself and other background predicates. The input consists of extensional de nitions of these predicates as tuples of constants of speci ed types. For example, input appropriate for learning a de nition of list membership is:\nmember(Elt,Lst): { <a, a]>, <a, a,b]>, <b, a,b]>, <a, a,b,c]>, ...} components(Lst,Elt,Lst): { < a],a, ]>, < a,b],a, b]>, < a,b,c],a, b,c]> ...}\nwhere Elt is a type denoting possible elements which includes a,b,c, and d; Lst is a type de ned as consisting of at lists containing up to three of these elements; and components(A,B,C) is a background predicate which is true i A is a list whose rst element is B and whose rest is the list C (this must be provided in place of a function for list construction). Foil also requires negative examples of the target concept, which can be supplied directly or computed using a closed-world assumption. For the example, the closed-world assumption would produce all pairs of the form <Elt,Lst> that are not explicitly provided as positive examples (e.g., <b, a]>).\nGiven this input, Foil learns a program one clause at a time using a greedy-covering algorithm that can be summarized as follows: Let positives-to-cover = positive examples. 
While positives-to-cover is not empty Find a clause, C , that covers a preferably large subset of positives-to-cover but covers no negative examples. Add C to the developing de nition. Remove examples covered by C from positives-to-cover. For example, a clause that might be learned for member during one iteration of this loop is:\nmember(A,B) :-components(B,A,C).\nsince it covers all positive examples where the element is the rst one in the list but does not cover any negatives. A clause that could be learned to cover the remaining examples is: member(A,B) :-components(B,C,D), member(A,D).\nTogether these two clauses constitute a correct program for member.\nThe \\ nd a clause\" step is implemented by a general-to-speci c hill-climbing search that adds antecedents to the developing clause one at a time. At each step, it evaluates possible literals that might be added and selects one that maximizes an information-gain heuristic. The algorithm maintains a set of tuples that satisfy the current clause and includes bindings for any new variables introduced in the body. The following pseudocode summarizes the procedure:\nInitialize C to R(V 1 ; V 2 ; :::; V k ) :-. where R is the target predicate with arity k. Initialize T to contain the positive tuples in positives-to-cover and all the negative tuples. While T contains negative tuples Find the best literal L to add to the clause. Form a new training set T 0 containing for each tuple t in T that satis es L, all tuples of the form t b (t and b concatenated) where b is a set of bindings for the new variables introduced by L such that the literal is satis ed (i.e., matches a tuple in the extensional de nition of its predicate). Replace T by T 0 . Foil considers adding literals for all possible variablizations of each predicate as long as type restrictions are satis ed and at least one of the arguments is an existing variable bound by the head or a previous literal in the body. Literals are evaluated based on the number of positive and negative tuples covered, preferring literals that cover many positives and few negatives. Let T + denote the number of positive tuples in the set T and de ne:\nI (T) = log 2 (T + =jT j):\n(1) The chosen literal is then the one that maximizes: gain(L) = s (I(T) I (T 0 ));\n(2) where s is the number of tuples in T that have extensions in T 0 (i.e., the number of current positive tuples covered by L).\nFoil also includes many additional features such as: heuristics for pruning the space of literals searched, methods for including equality, negation as failure, and useful literals that do not immediately provide gain (determinate literals), pre-pruning and post-pruning of clauses to prevent over-tting, and methods for ensuring that induced programs will terminate. The papers referenced above should be consulted for details on these and other features.\n2.2 Learning the Past Tense of English Verbs Rumelhart and McClelland (1986) were the rst to build a computational model of pasttense learning using the classic perceptron algorithm and a special phonemic encoding of words employing so-called Wickelphones and Wickelfeatures. Their general goal was to show that connectionist models could account for interesting language-learning behavior that was previously thought to require explicit rules. 
This model was heavily criticized by opponents of the connectionist approach to language acquisition for the relatively poor results achieved and the heavily-engineered representations and training techniques employed (Pinker & Prince, 1988;Lachter & Bever, 1988). MacWhinney and Leinbach (1991) attempted to address some of these criticisms by using a standard multi-layer backpropagation learning algorithm and a simpler UNIBET encoding of phonemes (in which each of 36 phonemes is encoded as a single ASCII character). Ling and Marinov (1993) and Ling (1994) criticize all of the current connectionist models of past-tense acquisition for heavily-engineered representations and poor experimental methodology. They present more systematic results on a system called SPA (Symbolic Pattern Associator) which uses a slightly modi ed version of C4.5 (Quinlan, 1993) to build a forest of decision trees that maps a xed-length input pattern to a xed-length output pattern. Ling's (1994) head-to-head results show that SPA generalizes signi cantly better than backpropagation on a number of variations of the problem employing di erent phonemic encodings (e.g., 76% vs. 56% given 500 training examples).\nHowever, all of this previous work encodes the problem as xed-length pattern association and fails to capture the generativity and position-independence of the true transformation. For example, they use 15-letter patterns like: a,c,t,_,_,_,_,_,_,_,_,_,_,_,_ => a,c,t,e,d,_,_,_,_,_,_,_,_,_,_ or in UNIBET phonemic encoding:\n&,k,t,_,_,_,_,_,_,_,_,_,_,_,_ => &,k,t,I,d,_,_,_,_,_,_,_,_,_,_\nwhere a separate decision tree or output unit is used to predict each character in the output pattern from all of the input characters. Therefore, learning general rules, such as \\add `ed',\" must be repeated at each position where a word can end, and words longer than 15 characters cannot be handled. Also, the best results with SPA exploit a highly-engineered feature template and a modi ed version of C4.5's default leaf-labeling strategy that tailor it to string transformation problems.\nAlthough ILP methods seem more appropriate for this problem, our initial attempts to apply Foil and Golem to past-tense learning gave very disappointing results (Cali , 1994). Below, we discuss how the three problems listed in the introduction contribute to the di culty of applying current ILP methods to this problem.\nIn principle, a background predicate for append is su cient for constructing accurate past-tense programs when incorporated with an ability to include constants as arguments or, equivalently, an ability to add literals that bind variables to speci c constants (called theory constants in Foil). However, a background predicate that does not allow appending with the empty list is more appropriate. We use a predicate called split(A, B, C) which splits a list A into two non-empty sublists B and C. An intensional de nition for split is:\nsplit( X, Y | Z], X] , Y | Z]). split( X | Y], X | W], Z) :-split(Y,W,Z).\nUsing split, an \\add `ed\"' rule can be represented as: Providing an extensional de nition of split that includes all possible strings of 15 or fewer characters (at least 10 21 strings) is clearly intractable. However, providing a partial de nition that includes all possible splits of strings that actually appear in the training corpus is possible and generally su cient. 
Therefore, providing adequate extensional background knowledge is cumbersome and requires careful engineering; however, it is not the major problem.\nSupplying an appropriate set of negative examples is more problematic. Using a closedworld assumption to produce all pairs of words in the training set where the second is not the past-tense of the rst is feasible but not very useful. In this case, the clause: past(A,B) :-split(B,A,C).\nis very likely to be learned since it covers most of the positives but very few (if any) negatives since it is unlikely that a word is a pre x of another word which is not its past tense. However, this clause is useless for producing the past tense of novel verbs, and, in this domain, accuracy must be measured by the ability to actually generate correct output for novel inputs, rather than the ability to classify pre-supplied tuples of arguments as positive or negative. The obvious solution of supplying all other strings of 15 characters or less as negative examples of the past tense of each word is clearly intractable. Providing specially constructed \\near-miss\" negative examples such as past( a,c,h,e], a,c,h,e,e,d]), is very helpful, but requires careful engineering that exploits detailed prior knowledge of the problem.\nIn order to address the problem of negative examples, when Quinlan (1994) applied Foil to this problem, he employed a di erent target predicate for representing the pasttense transformation. 1 He used a three-place predicate past(X,Y,Z) which is true i the input word X is transformed into past-tense form by removing its current ending Y and substituting the ending Z; for example: past( a,c,t], ], e,d]), past( a,r,i,s,e], i,s,e], o,s,e]). A simple preprocessor can map data for the two-place predicate into this form. Since a sample of 500 verb pairs contains about 30-40 di erent end fragments, this results in a more manageable number of closed-world negatives, approximately 1000 for every positive example in the training set. Using this approach on UNIBET phonemic encodings, Quinlan obtained slightly better results than Ling's best SPA results that exploited a highly-engineered feature template (83.3% vs. 82.8% with 500 training examples) and signi cantly better than SPA's normal results (76.3%). Although the three-place target predicate incorporates some knowledge about the desired transformation, it arguably requires less representation engineering than most previous methods.\nHowever, Quinlan (1994) notes that his results are still hampered by Foil's inability to exploit clause order. For example, when using normal alphabetic encoding, Foil quickly learns a clause su cient for regular verbs:\npast(A,B,C) :-B= ], C= e,d].\nHowever, since this clause still covers a fair number of negative examples due to many irregular verbs, it continues to add literals. As a result, Foil creates a number of specialized versions of this clause that together still fail to capture the generality of the underlying default rule. 
This problem is compounded by Foil's inability to add constraints such as "does not end in 'e'." Since Foil separates the addition of literals containing variables from the binding of variables to constants using literals of the form V = c, it cannot learn clauses like:

past(A,B,C) :- B = [], C = [e,d], not(split(A,D,[e])).

Since a word can be split in several ways, this is clearly not equivalent to the learnable clause:

past(A,B,C) :- B = [], C = [e,d], split(A,D,E), not(E = [e]).

Consequently, it must approximate the true rule by learning many clauses of the form:

past(A,B,C) :- B = [], C = [e,d], split(A,D,E), E = [b].
past(A,B,C) :- B = [], C = [e,d], split(A,D,E), E = [d].
...

As a result, Foil generated overly complex programs containing more than 40 clauses for both the phonemic and alphabetic versions of the problem.

However, an experienced Prolog programmer would exploit clause order and cuts to write a concise program that first handles the most specific exceptions and falls through to more general default rules if the exceptions fail to apply. For example, the program:

past(A,B) :- split(A,C,[e,e,p]), split(B,C,[e,p,t]), !.
past(A,B) :- split(A,C,[y]), split(B,C,[i,e,d]), !.
past(A,B) :- split(A,C,[e]), split(B,A,[d]), !.
past(A,B) :- split(B,A,[e,d]).

can be summarized as:

If the word ends in "eep," then replace "eep" with "ept" (e.g., sleep, slept);
else, if the word ends in "y," then replace "y" with "ied";
else, if the word ends in "e," add "d";
else, add "ed."

Foidl can directly learn programs of this form, i.e., ordered sets of clauses each ending in a cut. We call such programs first-order decision lists due to the similarity to the propositional decision lists introduced by Rivest (1987). Foidl uses the normal binary target predicate and requires no explicit negative examples. Therefore, we believe it requires significantly less representation engineering than all previous work in the area.

FOIDL Induction Algorithm

As stated in the introduction, Foidl adds three major features to Foil: 1) intensional specification of background knowledge, 2) output completeness as a substitute for explicit negative examples, and 3) support for learning first-order decision lists. The following subsections describe the modifications made to incorporate these features.

Intensional Background

As described above, Foil assumes background predicates are provided with extensional definitions; however, this is burdensome and frequently intractable. Providing an intensional definition in the form of general Prolog clauses is generally preferable. For example, instead of providing numerous tuples for the components predicate, it is easier to give the intensional definition:

components([A | B], A, B).

Intensional background definitions are not restricted to function-free pure Prolog and can exploit all features of the language.

Modifying Foil to use intensional background is straightforward. Instead of matching a literal against a set of tuples to determine whether or not it covers an example, the Prolog interpreter is used in an attempt to prove that the literal can be satisfied using the intensional definitions. Unlike Foil, expanded tuples are not maintained, and positive and negative examples of the target concept are reproved for each alternative specialization of the developing clause. Therefore, the pseudocode for learning a clause is simply:

Initialize C to R(V1, V2, ..., Vk) :- .
    where R is the target predicate with arity k.
Initialize T to contain the examples in positives-to-cover and all the negative examples.
While T contains negative tuples:
    Find the best literal L to add to the clause.
    Let T' be the subset of examples in T that can still be proved as instances of the target concept using the specialized clause.
    Replace T by T'.

Since expanded tuples are not produced, the information-gain heuristic for picking the best literal is simply:

gain(L) = |T'| * (I(T) - I(T'))     (3)

Output Completeness and Implicit Negatives

In order to overcome the need for explicit negative examples, a mode declaration for the target concept must be provided (i.e., a specification of whether each argument is an input (+) or an output (-)). An assumption of output completeness can then be made, indicating that for every unique input pattern in the training set, the training set includes all of the correct output patterns. Therefore, any other output which a program produces for a given input can be assumed to represent a negative example. This does not require that all positive examples be part of the training set, only that for each unique input pattern in the training set, all other positive examples with that input pattern (if any) must also be in the training set. This assumption is trivially met if the predicate represents a function with a single unique output for each input.

For example, an assumption of output completeness for the mode declaration past(+,-) indicates that all of the correct past-tense forms are included for each input word in the training set. For predicates representing functions, such as past, this implies that the output for each example is unique and that all other outputs implicitly represent negative examples. However, output completeness can also be applied to non-functional cases such as append(-,-,+), indicating that all possible pairs of lists that can be appended together to produce a list are included in the training set (e.g., append([], [a,b], [a,b]), append([a], [b], [a,b]), append([a,b], [], [a,b])).

Given an output completeness assumption, determining if a clause is overly general is straightforward. For each positive example, an output query is made to determine all outputs for the given input (e.g., past([a,c,t], X)). If any outputs are generated that are not positive examples, the clause still covers negative examples and requires further specialization. Note that intensional interpretation of learned clauses is required in order to answer output queries.

In addition, in order to compute the gain of alternative literals during specialization, the negative coverage of a clause needs to be quantified. Each incorrect answer to an output query which is ground (i.e., contains no variables) clearly counts as a single negative example (e.g., past([a,c,h,e], [a,c,h,e,e,d])). However, output queries will frequently produce answers with universally quantified variables. For example, given the overly general clause past(A,B) :- split(A,C,D), the query past([a,c,t], X) generates the answer past([a,c,t], Y). This implicitly represents coverage of an infinite number of negative examples. In order to quantify negative coverage, Foidl uses a parameter u to represent a bound on the number of possible terms.
Since the set of all possible terms (the Herbrand universe of the background knowledge together with the examples) is generally infinite, u is meant to represent a heuristic estimate of the finite number of these terms that will ever actually occur in practice (e.g., the number of distinct words in English). The negative coverage represented by a non-ground answer to an output query is then estimated as u^v - p, where v is the number of variable arguments in the answer and p is the number of positive examples with which the answer unifies. The u^v term stands for the number of unique ground outputs represented by the answer (e.g., the answer append(X,Y,[a,b]) stands for u^2 different ground outputs) and the p term stands for the number of these that represent positive examples. This allows Foidl to quantify coverage of large numbers of implicit negative examples without ever explicitly constructing them. It is generally sufficient to estimate u as a fairly large constant (e.g., 1000), and empirically the method is not very sensitive to its exact value as long as it is significantly greater than the number of ground outputs ever generated by a clause.

Unfortunately, this estimate is not sensitive enough. For example, both clauses

past(A,B) :- split(A,C,D).
past(A,B) :- split(B,A,C).

cover u implicit negative examples for the output query past([a,c,t], X), since the first produces the answer past([a,c,t], Y) and the second produces the answer past([a,c,t], [a,c,t | Y]). However, the second clause is clearly better, since it at least requires the output to be the input with some suffix added. Since there are presumably more words than there are words that start with "a-c-t" (assuming the total number of words is finite), the first clause should be considered to cover more negative examples. Therefore, arguments that are partially instantiated, such as [a,c,t | Y], are counted as only a fraction of a variable when calculating v. Specifically, a partially instantiated output argument is scored as the fraction of its subterms that are variables; e.g., [a,c,t | Y] counts as only 1/4 of a variable argument. Therefore, the first clause above is scored as covering u implicit negatives and the second as covering only u^(1/4). Given reasonable values for u and the number of positives covered by each clause, the literal split(B,A,C) will be preferred.

The revised specialization algorithm that incorporates implicit negatives is:

Initialize C to R(V1, V2, ..., Vk) :- .
    where R is the target predicate with arity k.
Initialize T to contain the examples in positives-to-cover and output queries for all positive examples.
While T contains output queries:
    Find the best literal L to add to the clause.
    Let T' be the subset of positive examples in T that can still be proved as instances of the target concept using the specialized clause, plus the output queries in T that still produce incorrect answers.
    Replace T by T'.

Literals are scored as described in the previous section, except that |T| is computed as the number of positive examples in T plus the sum of the number of implicit negatives covered by each output query in T.

First-Order Decision Lists

As described above, first-order decision lists are ordered sets of clauses each ending in a cut. When answering an output query, the cuts simply eliminate all but the first answer produced when trying the clauses in order.
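The effect of the cuts can be seen by running output queries against the hand-written program of Section 2.2 (the queries and annotations below are our illustration):

% Each query commits to the first clause that succeeds; later clauses are
% never tried once a cut is reached.
?- past([s,l,e,e,p], X).   % "eep" clause fires:   X = [s,l,e,p,t]
?- past([c,a,r,r,y], X).   % "y" clause fires:     X = [c,a,r,r,i,e,d]
?- past([b,a,k,e], X).     % "e" clause adds "d":  X = [b,a,k,e,d]
?- past([w,a,l,k], X).     % default adds "ed":    X = [w,a,l,k,e,d]

Without the cuts, past([b,a,k,e], X) would also return the incorrect second answer X = [b,a,k,e,e,d] through the final default clause; the cuts discard everything after the first answer, which is precisely the behavior first-order decision lists depend on.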
Therefore, this representation is similar to propositional decision lists (Rivest, 1987), which are ordered lists of pairs (rules) of the form (t_i, c_i), where the test t_i is a conjunction of features and c_i is a category label, and an example is assigned to the category of the first pair whose test it satisfies.

In the original algorithm of Rivest (1987) and in CN2 (Clark & Niblett, 1989), rules are learned in the order they appear in the final decision list (i.e., new rules are appended to the end of the list as they are learned). However, Webb and Brkič (1993) argue for learning decision lists in the reverse order, since most preference functions tend to learn more general rules first, and these are best positioned as default cases towards the end. They introduce an algorithm, prepend, that learns decision lists in reverse order and present results indicating that in most cases it learns simpler decision lists with superior predictive accuracy. Foidl can be seen as generalizing prepend to the first-order case for target predicates representing functions. It learns an ordered sequence of clauses in reverse order, resulting in a program which produces only the first output generated by the first satisfied clause.

The basic operation of the algorithm is best illustrated by a concrete example. For alphabetic past tense, the current algorithm easily learns the partial clause:

past(A,B) :- split(B,A,C), C = [e,d].

However, as discussed in Section 2.2, this clause still covers negative examples due to irregular verbs. However, it produces correct ground output for a subset of the examples (i.e., the regular verbs).[2] This is an indication that it is best to terminate this clause to handle these examples, and add earlier clauses in the decision list to handle the remaining examples. The fact that it produces incorrect answers for other output queries can be safely ignored in the decision-list framework, since these can be handled by earlier clauses. Therefore, the examples correctly covered by this clause are removed from positives-to-cover and a new clause is begun. The literals that now provide the best gain are:

past(A,B) :- split(B,A,C), C = [d].

since many of the irregulars are those that just add "d" (since they end in "e"). This clause also now produces correct ground output for a subset of the examples; however, it is not complete, since it produces incorrect output for examples correctly covered by a previously learned clause (e.g., past([a,c,t], [a,c,t,d])). Therefore, specialization continues until all of these cases are also eliminated. This results in the clause:

past(A,B) :- split(B,A,C), C = [d], split(A,D,E), E = [e].

which is added to the front of the decision list, and the examples it covers are removed from positives-to-cover. This approach ensures that every new clause produces correct outputs for some new subset of the examples but doesn't result in incorrect output for examples already correctly covered by previously learned clauses. This process continues adding clauses to the front of the decision list until all of the exceptions are handled and positives-to-cover is empty.

The resulting clause-specialization algorithm can now be summarized as follows:

Initialize C to R(V1, V2, ..., Vk) :- .
    where R is the target predicate with arity k.
Initialize T to contain the examples in positives-to-cover and output queries for all positive examples.
While T contains output queries:
    Find the best literal L to add to the clause.
    Let T' be the subset of positive examples in T whose output query still produces a first answer that unifies with the correct answer, plus the output queries in T that either 1) produce a non-ground first answer that unifies with the correct answer, or 2) produce an incorrect answer but produce a correct answer using a previously learned clause.
    Replace T by T'.

In many cases, this algorithm is able to learn accurate, compact first-order decision lists for past tense, like the "expert" program shown in Section 2.2. However, due to highly irregular verbs, the algorithm can encounter local minima in which it is unable to find any literals that provide positive gain while still covering the required minimum number of examples.[3] This was originally handled by terminating search and memorizing any remaining uncovered examples as specific exceptions at the top of the decision list (e.g., past([a,r,i,s,e], [a,r,o,s,e]) :- !.). However, this can result in premature termination that prevents the algorithm from finding low-frequency regularities. For example, in the alphabetic version, the system can get stuck trying to learn the complex rule for when to double a final consonant (e.g., grab → grabbed) and fail to learn the rule for changing "y" to "ied," since this is actually less frequent.

The current version, like Foil, tests whether the learned clause meets a minimum-accuracy threshold; however, unlike Foil, it counts as errors only incorrect outputs for queries correctly answered by previously learned clauses. If the clause does not meet the threshold, it is thrown out and the positive examples it covers are memorized at the top of the decision list. The algorithm then continues to learn clauses for any remaining positive examples. This allows Foidl to just memorize difficult irregularities, such as consonant doubling, and still continue on to learn other rules such as changing "y" to "ied."

If the minimum-accuracy threshold is met, the decision-list property is exploited in a final attempt to still learn a completely accurate program. If the negatives covered by the clause are all examples that were correctly covered by previously learned clauses, Foidl treats them as "exceptions to the exception to the rule" and returns them to positives-to-cover to be covered correctly again by subsequently learned clauses. For example, Foidl frequently learns the clause:

past(A,B) :- split(A,C,[y]), split(B,C,[i,e,d]).

for changing "y" to "ied." However, this clause incorrectly covers a few examples that are correctly covered by the previously learned "add 'ed'" rule (e.g., bay → bayed; delay → delayed). Since these exceptions to the "y" to "ied" rule are a small percentage of the words that end in "y," the system keeps the rule and returns the examples that just add "ed" to positives-to-cover. Subsequently, rules such as:

past(A,B) :- split(B,A,[e,d]), split(A,D,[a,y]).

are learned to recover these examples, resulting in a program that is completely consistent with the training data. By setting the minimum clause-accuracy threshold to 50%, Foidl only applies this uncovering technique when it results in covering more examples than it uncovers, thereby guaranteeing progress towards fitting all of the training examples.

Algorithmic and Implementation Details

This section briefly discusses a few additional details of the Foidl algorithm and its implementation. This includes a discussion of the use of modes, types, weak literals, and theory constants. The current version of Foil includes all of these features in basically the same form.

Foidl makes use of types and modes to limit the space of literals searched. The arguments of each predicate are typed, and only literals whose previously-bound arguments are of the correct type are tested when specializing a clause.
For example, split is given the types split(word, prefix, suffix), preventing the system from further splitting prefixes and suffixes and exploring arbitrary substrings of a word for regularities. Each predicate is also given a mode declaration, and only literals whose input arguments are all previously-bound variables are tested. For example, split is given the mode split(+,-,-), preventing a clause from creating new strings by appending together previously generated prefixes and suffixes.

In case no literal provides positive information gain, Foidl gives a small bonus to literals that introduce new variables. However, the number of such weak literals that can be added in a row is limited by a user parameter (normally set to 1). For example, this allows the system to split a word into possible prefixes and suffixes, even though this may not provide gain until these substrings are constrained by subsequent literals.

Theory constants are provided for each type, and literals are tested for binding each existing variable to each constant of the appropriate type. For example, the literal X = [e,d] is generated if X is of type suffix. For our runs on past tense, theory constants are included for every prefix and suffix that occurs in at least two words in the training data. This helps control training time by limiting the number of literals searched, but does not affect which literals are actually chosen, since the minimum-clause-coverage test prevents Foidl from choosing literals that don't cover at least two examples anyway.

Foidl is currently implemented in both Common Lisp and Quintus Prolog. Unlike the current Prolog version, the Common Lisp version supports learning recursive clauses[4] and output completeness for non-functional target predicates. However, the Common Lisp version is significantly slower, since it relies on an un-optimized Prolog interpreter and compiler written in Lisp (from Norvig, 1992). Consequently, all of the presented results are from the Prolog version running on a Sun SPARCstation 2.[5]

Experimental Results

To test Foidl's performance on the English past-tense task, we ran experiments using the data which Ling (1994) made available in an appendix.

Experimental Design

The data used consist of 6939 English verb forms in both normal alphabetic form and UNIBET phoneme representation, along with a label indicating the verb form (base, past tense, past participle, etc.), a label indicating whether the form is regular or irregular, and the Francis-Kucera frequency of the verb. The data include 1390 distinct pairs of base and past-tense verb forms. We ran three different experiments. In the first we used the phonetic forms of all verbs. In the second we used the phonetic forms of the regular verbs only, because this is the easiest form of the task and because this is the only problem for which Ling provides learning curves. Finally, we ran trials using the alphabetic forms of all verbs. The training and testing followed the standard paradigm of splitting the data into testing and training sets and training on progressively larger samples of the training set.
All results were averaged over 10 trials, and the testing set for each trial contained 500 verbs.

In order to better separate the contribution of using implicit negatives from the contribution of the decision-list representation, we also ran experiments with IFoil, a variant of the system which uses intensional background and the output completeness assumption, but does not build decision lists.

We ran our own experiments with Foil, Foidl, and IFoil and compared those with the results from Ling. The Foil experiments were run using Quinlan's representation described in Section 2.2. As in Quinlan (1994), negative examples were provided by using a randomly-selected 25% of those which could be generated using the closed-world assumption.[6] All experiments with Foidl and IFoil used the standard default values for the various numeric parameters (term universe size, 1000; minimum clause coverage, 2; weak literal limit, 1). The differences among Foil, IFoil, and Foidl were tested for significance using a two-tailed paired t-test.

[4] Handling intensional interpretation of recursive clauses for the target predicate requires some additional complexities that have not been discussed in this paper, since they are not relevant to decision lists, which are generally not recursive.
[5] Both versions are available by anonymous FTP from net.cs.utexas.edu in the directory pub/mooney/foidl.
[6] We replicated Quinlan's approach since memory limitations prevented us from using 100% of the generated negatives with larger training sets.

Results

The results for the phonetic task using both regular and irregular verbs are presented in Figure 1. The graph shows our results with Foil, IFoil, and Foidl along with the best results from Ling, who did not provide a learning curve for this task. As expected, Foidl outperformed the other systems on this task, surpassing Ling's best 500-example results with only 100 examples. IFoil performed quite poorly, barely beating the neural-network results despite effectively having 100% of the negatives as opposed to Foil's 25%. This poor performance is due at least in part to overfitting the training data, because IFoil lacks the noise-handling techniques of Foil6. Foil also has the advantage of the three-place predicate, which gives it a bias toward learning suffixes. IFoil's poor performance on this task shows that the implicit negatives by themselves are not sufficient, and that some other bias such as decision lists or the three-place predicate and noise-handling is needed. The differences between Foil and Foidl are significant at the 0.01 level. Those between Foidl and IFoil are significant at the 0.001 level. The differences between Foil and IFoil are not significant with 100 training examples or less, but are significant at the 0.001 level with 250 and 500 examples.

Figure 2 presents accuracy results on the phonetic task using regulars only. The curves for SPA and the neural net are the results reported by Ling. Here again, Foidl outperformed the other systems. This particular task demonstrated one of the problems with using closed-world negatives. In the regular past-tense task, the second argument of Quinlan's three-place predicate is always the same: an empty list.
Therefore, if the constants are generated from the positive examples, Foil will never produce rules which ground the second argument, since it cannot create negative examples with other constants in the second argument. This prevents the system from learning a rule to generate the past tense. In order to obtain the results reported here, we introduced extra constants for the second argument (specifically, the constants for the third argument), enabling the closed-world assumption to generate appropriate negatives. On this task, IFoil does seem to gain some advantage over Foil from being able to effectively use all of the negatives. The regularity of the data allows both IFoil and Foil to achieve over 90% accuracy at 500 examples. The differences between Foil and Foidl are significant at the 0.001 level, as are those between IFoil and Foidl. The differences between IFoil and Foil are not significant with 25 examples, and are significant at the 0.02 level with 500 examples, but are significant at the 0.001 level with 50-250 training examples.

Results for the alphabetic version appear in Figure 3. This is a task which has not typically been considered in the literature, but it is of interest to those concerned with incorporating morphology into natural-language understanding systems which deal with text. It is also the most difficult task, primarily because of consonant doubling. Here we have results only for Foidl, IFoil, and Foil. Because the alphabetic task is even more irregular than the full phonetic task, IFoil again overfits the data and performs quite poorly. The differences between Foil and Foidl are significant at the 0.001 level with 25, 50, 250, and 500 examples, but only at the 0.1 level with 100 examples. The differences between IFoil and Foidl are all significant at the 0.001 level. Those between Foil and IFoil are not significant with 25 training examples and are significant only at the 0.01 level with 50 training examples, but are significant at the 0.001 level with 100 or more examples.

For all three of these tasks, Foidl clearly outperforms the other systems, demonstrating that the first-order decision-list bias is a good one for this learning task. A sufficient set of negatives is necessary, and all five of these systems provide them in some way: the neural network and SPA both learn multiple-class classification tasks (which phoneme belongs in each position); Foil uses the three-place predicate with closed-world negatives; and IFoil and Foidl, of course, use the output completeness assumption. The primary importance of the implicit negatives is not that they provide an advantage over propositional and neural-network systems, but that they enable first-order systems to perform this task at all. Without them, some knowledge of the task is required. Foidl's decision lists give it a significant added advantage, though this advantage is less apparent in the regular phonetic task, where there are no exceptions.

Clearly, Foidl produces more accurate rules than the other systems, but another consideration is the complexity of the rule sets. For the ILP systems, two good measures of complexity are the number of rules and the number of literals generated. Figure 4 shows the number of rules generated by Foil, IFoil, and Foidl for the phonetic task using all verbs. The number of literals generated appears in Figure 5. Since we are interested in generalization and since Foil does not attempt to fit all of the training data, these results do not include the rules Foidl and IFoil add in order to memorize individual exceptions.[7] Although the numbers are comparable with only a few examples, with increasing numbers of examples the programs Foil and IFoil generate grow much faster than Foidl's programs. The large number of rules/literals learned by IFoil shows its tendency to overfit the data.

Foidl also generates very comprehensible programs.
The following is an example program generated for the alphabetic version of the task using 250 examples (again excluding the memorized examples).

The training times for the various systems considered in this research are difficult to compare. Ling does not provide timing results, though we can probably assume, based on research comparing symbolic and neural learning algorithms (Shavlik, Mooney, & Towell, 1991), that SPA runs fairly quickly since it is based on C4.5 and that backpropagation took considerably longer. Our tests of Foil and Foidl are not directly comparable because they were run on different architectures. The Foil runs were done on a Sparc 5. For 500 examples, Foil averaged 48 minutes on the phonetic task with all verbs. The Foidl experiments ran on a Sparc 2 and averaged 1071 minutes on the same task. Even allowing for the difference in speed of the two machines (about a factor of two), Foidl is quite a bit slower, probably due largely to the cost of using intensional background and in part to its implementation in Prolog as opposed to C.

Related Work

Related Work on ILP

Although each of the three features mentioned in the introduction distinguishes Foidl from most work in inductive logic programming, a number of related pieces of research should be mentioned. The use of intensional background knowledge is the least distinguishing feature, since a number of other ILP systems also incorporate this aspect. Focl (Pazzani & Kibler, 1992), mFoil (Lavrač & Džeroski, 1994), Grendel (Cohen, 1992), Forte (Richards & Mooney, 1995), and Chillin (Zelle & Mooney, 1994a) all use intensional background to some degree in the context of a Foil-like algorithm. Some other ILP systems which employ intensional background include early ones by Shapiro (1983) and Sammut and Banerji (1986) and more recent ones by Bergadano et al. (1993) and Stahl, Tausend, and Wirth (1993).

The use of implicit negatives is significantly more novel. As described in Section 3.2, this approach is considerably different from explicit construction using a closed-world assumption, and therefore can be employed when explicit construction of sufficient negative examples is intractable. Bergadano et al. (1993) allow the user to supply an intensional definition of negative examples that covers a large set of ground instances (e.g., (past([a,c,t],X), not(equal(X, [a,c,t,e,d])))); however, to be equivalent to output completeness, the user would have to explicitly provide a separate intensional negative definition for each positive example. The non-monotonic semantics used to eliminate the need for negative examples in Claudien (De Raedt & Bruynooghe, 1993) has the same effect as an output completeness assumption in the case where all arguments of the target relation are outputs. However, output completeness permits more flexibility by allowing some arguments to be specified as inputs and only counting as negative examples those extra outputs generated for specific inputs in the training set. Flip (Bergadano, 1993) provides a method for learning functional programs without negative examples by making an assumption equivalent to output completeness for the functional case. Output completeness is more general in that it permits learning non-functional programs as well.
Also, unlike Foidl, none of these previous methods provides a way of quantifying implicit negative coverage in the context of a heuristic top-down specialization algorithm.

The notion of a first-order decision list is unique to Foidl. The only other ILP system that attempts to learn programs that exploit clause order and cuts is that of Bergadano et al. (1993). Their paper discusses many problems with learning arbitrary programs with cuts, and the brute-force search used in their approach is intractable for most realistic problems. Instead of addressing the general problem of learning arbitrary programs with cuts, Foidl is tailored to the specific problem of learning first-order decision lists, which use cuts in a very stylized manner that is particularly useful for functional problems that involve rules with exceptions. Bain and Muggleton (1992) and Bain (1992) discuss a technique which uses negation as failure to handle exceptions. However, using negation as failure is significantly different from decision lists, since it simply prevents a clause from covering exceptions rather than learning an additional clause that both over-rides an existing clause and specifies the correct output for a set of exceptions.

Related Work on Past-Tense Learning

The shortcomings of most previous work on past-tense learning were reviewed in Section 2.2, and the results in Section 4 clearly demonstrate the generalization advantage Foidl exhibits on this problem. However, a couple of issues deserve some additional discussion.

Most of the previous work on this problem has concerned the modelling of various psychological phenomena, such as the U-shaped learning curve that children exhibit for irregular verbs when acquiring language. This paper has not addressed the issue of psychological validity; rather, it has focused on performance accuracy after exposure to a fixed number of training examples. Therefore, we make no specific psychological claims based on our current results.

However, humans can obviously produce the correct past tense of arbitrarily long novel words, which Foidl can easily model while fixed-length feature-based representations clearly cannot. Ling also developed a version of SPA that eliminates position dependence and fixed word length (Ling, 1995) by using a sliding window like that used in NETtalk (Sejnowski & Rosenberg, 1987). A large window is used which includes 15 letters on either side of the current position (padded with blanks if necessary) in order to always include the entire word for all the examples in the corpus. The results of this approach are significantly better than normal SPA but still inferior to Foidl's results. Also, this approach still requires a fixed-size input window, which prevents it from handling arbitrary-length irregular verbs. Recurrent neural networks could also be used to avoid word-length restrictions (Cotrell & Plunkett, 1991), although it appears that no one has yet applied them to the standard present-tense to past-tense mapping problem. However, we believe the difficulty of training recurrent networks and their relatively poor ability to maintain state information arbitrarily long would limit their performance on this task.

Another issue is that of the comprehensibility and transparency of the learned result.
Foidl's programs for past tense are short, concise, and very readable, unlike the complicated networks, decision forests, and pure logic programs generated by previous approaches. Ling and Marinov (1993) discuss the possibility of transforming SPA's decision forest into more comprehensible first-order rules; however, the approach of directly learning first-order rules from the data seems clearly preferable.

Future Work

One obvious topic for future research is Foidl's cognitive-modelling abilities in the context of the past-tense task. Incorporating overfitting-avoidance methods may allow the system to model the U-shaped learning curve in a manner analogous to that demonstrated by Ling and Marinov (1993). Its ability to model human results on generating the past tense of novel pseudo-verbs (e.g., spling → splang) could also be examined and compared to SPA (Ling & Marinov, 1993) and connectionist methods.

Although first-order decision lists represent a fairly general class of programs, currently our only convincing experimental results are on the past-tense problem. Many realistic problems consist of rules with exceptions, and experimental results on additional applications are needed to support the general utility of this representation.

Despite its advantages, the use of intensional background knowledge in ILP incurs a significant performance cost, since examples must be continually reproved when testing alternative literals during specialization. This computation accounts for most of the training time in Foidl. One approach to improving computational efficiency would be to maintain partial proofs of all examples and incrementally update these proofs as additional literals are added to the clause. This approach would be more like Foil's approach of maintaining tuples, but would require using a meta-interpreter in Prolog, which incurs its own significant overhead. Efficient use of intensional knowledge in ILP could greatly benefit from work on rapid incremental compilation of logic programs, i.e., incrementally updating compiled code to account for small changes in the definition of a predicate.

Foidl could potentially benefit from methods for handling noisy data and preventing overfitting. Pruning methods employed in Foil and related systems (Quinlan, 1990; Lavrač & Džeroski, 1994) could easily be incorporated. In the decision-list framework, an alternative to simply ignoring incorrectly covered examples as noise is to treat them as exceptions to be handled by subsequently learned clauses (as in the uncovering technique discussed in Section 3.3).

Theoretical results on the learnability of restricted classes of first-order decision lists are another interesting area for research. Given the results on the PAC-learnability of propositional decision lists (Rivest, 1987) and restricted classes of ILP problems (Džeroski, Muggleton, & Russell, 1992; Cohen, 1994), an appropriately restricted class of first-order decision lists should be PAC-learnable.

Conclusions

This paper has addressed two main issues: the appropriateness of a first-order learner for the popular past-tense problem, and the problems of previous ILP systems in handling functional tasks whose best representation is rules with exceptions.
Our results clearly demonstrate that an ILP system outperforms both the decision-tree and the neural-network systems previously applied to the past-tense task. This is important since there have been very few results showing that a first-order learner performs significantly better than applying propositional learners to the best feature-based encoding of a problem. This research also demonstrates that there is an efficient and effective algorithm for learning concise, comprehensible symbolic programs for a small but interesting subproblem in language acquisition. Finally, our work also shows that it is possible to efficiently learn logic programs which involve cuts and exploit clause order for a particular class of problems, and it demonstrates the usefulness of intensional background and implicit negatives. Solutions to many practical problems seem to require general default rules with characterizable exceptions, and therefore may be best learned using first-order decision lists.

Acknowledgements

Most of the basic research for this paper was conducted while the first author was on leave at the University of Sydney, supported by a grant to Prof. J. R. Quinlan from the Australian Research Council. Thanks to Ross Quinlan for providing this enjoyable and productive opportunity and to both Ross and Mike Cameron-Jones for very important discussions and pointers that greatly aided the development of Foidl. Thanks also to Ross for aiding us in running the Foil experiments. Discussions with John Zelle and Cindi Thompson at the University of Texas also influenced this work. Partial support was also provided by grant IRI-9310819 from the National Science Foundation and an MCD fellowship from the University of Texas awarded to the second author.
[ { "authors": "M Bain", "journal": "Academic Press", "ref_id": "b0", "title": "Experiments in non-monotonic rst-order induction", "year": "1992" }, { "authors": "M Bain; S Muggleton", "journal": "Academic Press", "ref_id": "b1", "title": "Non-monotonic learning", "year": "1992" }, { "authors": "F Bergadano", "journal": "", "ref_id": "b2", "title": "An interactive system to learn functional logic programs", "year": "1993" }, { "authors": "F Bergadano; D Gunetti; U Trinchero", "journal": "Journal of Arti cial Intelligence Research", "ref_id": "b3", "title": "The di culties of learning logic programs with cut", "year": "1993" }, { "authors": "M E Cali", "journal": "", "ref_id": "b4", "title": "Learning the past tense of English verbs: An inductive logic programming approach", "year": "1994" }, { "authors": "Cameron-Jones ; R M Quinlan; J R ", "journal": "SIGART Bulletin", "ref_id": "b5", "title": "E cient top-down induction of logic programs", "year": "1994" }, { "authors": "P Clark; T Niblett", "journal": "Machine Learning", "ref_id": "b6", "title": "The CN2 induction algorithm", "year": "1989" }, { "authors": "W W Cohen", "journal": "", "ref_id": "b7", "title": "Pac-learning nondeterminate clauses", "year": "1994" }, { "authors": "W Cohen", "journal": "", "ref_id": "b8", "title": "Compiling prior knowledge into an explicit bias", "year": "1992" }, { "authors": "G Cotrell; K Plunkett", "journal": "", "ref_id": "b9", "title": "Learning the past tense in a recurrent network: Acquiring the mapping from meaning to sounds", "year": "1991" }, { "authors": "L De Raedt; M Bruynooghe", "journal": "", "ref_id": "b10", "title": "A theory of clausal discovery", "year": "1993" }, { "authors": "S Muggleton; S Russell; S ", "journal": "", "ref_id": "b11", "title": "Pac-learnability of determinate logic programs", "year": "1992" }, { "authors": "J Lachter; T Bever", "journal": "MIT Press", "ref_id": "b12", "title": "The relation between linguistic structure and associative theories of language learning: A constructive critique of some connectionist learning models", "year": "1988" }, { "authors": "", "journal": "", "ref_id": "b13", "title": "Inductive Logic Programming: Techniques and Applications", "year": "1994" }, { "authors": "Ellis Horwood", "journal": "", "ref_id": "b14", "title": "", "year": "" }, { "authors": "C X Ling", "journal": "Journal of Arti cial Intelligence Research", "ref_id": "b15", "title": "Learning the past tense of English verbs: The symbolic pattern associator vs. 
connectionist models", "year": "1994" }, { "authors": "C X Ling", "journal": "", "ref_id": "b16", "title": "", "year": "1995" }, { "authors": "C X Ling; M Marinov", "journal": "Cognition", "ref_id": "b17", "title": "Answering the connectionist challenge: A symbolic model of learning the past tense of English verbs", "year": "1993" }, { "authors": "B Macwhinney; J Leinbach", "journal": "Cognition", "ref_id": "b18", "title": "Implementations are not conceptualizations: Revising the verb model", "year": "1991" }, { "authors": "S Muggleton; W Buntine", "journal": "", "ref_id": "b19", "title": "Machine invention of rst-order predicates by inverting resolution", "year": "1988" }, { "authors": "S Muggleton; C Feng", "journal": "", "ref_id": "b20", "title": "E cient induction of logic programs", "year": "1990" }, { "authors": "S Muggleton; R King; M Sternberg", "journal": "Protein Engineering", "ref_id": "b21", "title": "Protein secondary structure prediction using logic-based machine learning", "year": "1992" }, { "authors": "S H Muggleton", "journal": "Academic Press", "ref_id": "b22", "title": "Inductive Logic Programming", "year": "1992" }, { "authors": "P Norvig", "journal": "", "ref_id": "b23", "title": "Paradigms of Arti cial Intelligence Programming: Case Studies in Common Lisp", "year": "1992" }, { "authors": "Morgan Kaufmann; San Mateo; Ca", "journal": "", "ref_id": "b24", "title": "", "year": "" }, { "authors": "M Pazzani; D Kibler", "journal": "Machine Learning", "ref_id": "b25", "title": "The utility of background knowledge in inductive learning", "year": "1992" }, { "authors": "S Pinker; A Prince", "journal": "MIT Press", "ref_id": "b26", "title": "On language and connectionism: Analysis of a parallel distributed model of language acquisition", "year": "1988" }, { "authors": "J R Quinlan", "journal": "", "ref_id": "b27", "title": "C4.5: Programs for Machine Learning", "year": "1993" }, { "authors": "Morgan Kaufmann; San Mateo; Ca", "journal": "", "ref_id": "b28", "title": "", "year": "" }, { "authors": "J R Quinlan", "journal": "World Scienti c", "ref_id": "b29", "title": "Past tenses of verbs and rst-order learning", "year": "1994" }, { "authors": "J R Quinlan; R M Cameron-Jones", "journal": "", "ref_id": "b30", "title": "FOIL: A midterm report", "year": "1993" }, { "authors": "J Quinlan", "journal": "Machine Learning", "ref_id": "b31", "title": "Learning logical de nitions from relations", "year": "1990" }, { "authors": "B L Richards; R J Mooney", "journal": "", "ref_id": "b32", "title": "Automated re nement of rst-order Horn-clause domain theories", "year": "1995" }, { "authors": "R L Rivest", "journal": "Machine Learning", "ref_id": "b33", "title": "Learning decision lists", "year": "1987" }, { "authors": "D E Rumelhart; J Mcclelland", "journal": "MIT Press", "ref_id": "b34", "title": "On learning the past tense of English verbs", "year": "1986" }, { "authors": "C Sammut; R B Banerji", "journal": "", "ref_id": "b35", "title": "Learning concepts by asking questions", "year": "1986" }, { "authors": "Morgan Kaufman", "journal": "", "ref_id": "b36", "title": "", "year": "" }, { "authors": "T J Sejnowski; C Rosenberg", "journal": "Complex Systems", "ref_id": "b37", "title": "Parallel networks that learn to pronounce English text", "year": "1987" }, { "authors": "E Shapiro", "journal": "MIT Press", "ref_id": "b38", "title": "Algorithmic Program Debugging", "year": "1983" }, { "authors": "J W Shavlik; R J Mooney; G G Towell", "journal": "Machine Learning", "ref_id": "b39", "title": 
"Symbolic and neural learning algorithms: An experimental comparison", "year": "1991" }, { "authors": "I Stahl; B Tausend; R Wirth", "journal": "", "ref_id": "b40", "title": "Two methods for improving inductive logic programming systems", "year": "1993" }, { "authors": "G I Webb; N Brki C", "journal": "", "ref_id": "b41", "title": "Learning decision lists by prepending inferred rules", "year": "1993" }, { "authors": "J M Zelle; R J Mooney", "journal": "", "ref_id": "b42", "title": "Combining top-down and bottom-up methods in inductive logic programming", "year": "1994" }, { "authors": "J M Zelle; R J Mooney", "journal": "", "ref_id": "b43", "title": "Inducing deterministic Prolog parsers from treebanks: A machine learning approach", "year": "1994" } ]
[ { "formula_coordinates": [ 3, 90, 263.04, 435.36, 23.28 ], "formula_id": "formula_0", "formula_text": "member(Elt,Lst): { <a, a]>, <a, a,b]>, <b, a,b]>, <a, a,b,c]>, ...} components(Lst,Elt,Lst): { < a],a, ]>, < a,b],a, b]>, < a,b,c],a, b,c]> ...}" }, { "formula_coordinates": [ 3, 118.56, 543.12, 189.6, 9.6 ], "formula_id": "formula_1", "formula_text": "member(A,B) :-components(B,A,C)." }, { "formula_coordinates": [ 4, 251.28, 293.64, 109.92, 18.24 ], "formula_id": "formula_2", "formula_text": "I (T) = log 2 (T + =jT j):" }, { "formula_coordinates": [ 5, 90, 234.24, 355.92, 9.6 ], "formula_id": "formula_3", "formula_text": "&,k,t,_,_,_,_,_,_,_,_,_,_,_,_ => &,k,t,I,d,_,_,_,_,_,_,_,_,_,_" }, { "formula_coordinates": [ 5, 118.56, 476.4, 246.72, 23.28 ], "formula_id": "formula_4", "formula_text": "split( X, Y | Z], X] , Y | Z]). split( X | Y], X | W], Z) :-split(Y,W,Z)." }, { "formula_coordinates": [ 6, 118.56, 491.04, 166.32, 9.6 ], "formula_id": "formula_5", "formula_text": "past(A,B,C) :-B= ], C= e,d]." }, { "formula_coordinates": [ 6, 118.56, 616.56, 287.04, 9.6 ], "formula_id": "formula_6", "formula_text": "past(A,B,C) :-B= ], C= e,d], not(split(A,D, e]))." }, { "formula_coordinates": [ 7, 118.56, 116.64, 298.08, 36.72 ], "formula_id": "formula_7", "formula_text": "past(A,B,C) :-B= ], C= e,d], split(A,D,E), E = b]. past(A,B,C) :-B= ], C= e,d], split(A,D,E), E = d]. ..." }, { "formula_coordinates": [ 7, 118.56, 242.16, 315.12, 50.16 ], "formula_id": "formula_8", "formula_text": "past(A,B) :-split(A,C, e,e,p]), split(B,C, e,p,t]), !. past(A,B) :-split(A,C, y]), split(B,C, i,e,d]), !. past(A,B) :-split(A,C, e]), split(B,A, d]), !. past(A,B) :-split(B,A, e,d])." }, { "formula_coordinates": [ 7, 118.56, 660, 149.04, 9.6 ], "formula_id": "formula_9", "formula_text": "components( A | B], A, B)." }, { "formula_coordinates": [ 8, 90, 573.6, 432.72, 23.28 ], "formula_id": "formula_10", "formula_text": "append( ], a,b], a,b]), append( a], b], a,b]), append( a,b], ], a,b]))." }, { "formula_coordinates": [ 10, 118.56, 600.48, 200.64, 9.6 ], "formula_id": "formula_11", "formula_text": "past(A,B) :-split(B,A,C), C = d]." } ]
Induction of First-Order Decision Lists: Results on Learning the Past Tense of English Verbs
This paper presents a method for inducing logic programs from examples that learns a new class of concepts called first-order decision lists, defined as ordered lists of clauses each ending in a cut. The method, called Foidl, is based on Foil (Quinlan, 1990) but employs intensional background knowledge and avoids the need for explicit negative examples. It is particularly useful for problems that involve rules with specific exceptions, such as learning the past tense of English verbs, a task widely studied in the context of the symbolic/connectionist debate. Foidl is able to learn concise, accurate programs for this problem from significantly fewer examples than previous methods (both connectionist and symbolic).
Raymond J. Mooney; Mary Elaine Califf
[ { "figure_caption": "past(A,B) :-split(B,A, e,d]). which, in Foil, is learned in the form: past(A,B) :-split(B,A,C), C = e,d].", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "past(A,B) :-split(B,A,C), C = e,d].", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "past(A,B) :-split(B,A,C), C = d], split(A,D,E), E = e].", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "past(A, B) :-split(B, A, e, d]), split(A, D, a, y]).are learned to recover these examples, resulting in a program that is completely consistent with the training data. By setting the minimum clause-accuracy threshold to 50%, Foidl only applies this uncovering technique when it results in covering more examples than it uncovers, thereby guaranteeing progress towards tting all of the training examples.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Accuracy on phonetic past tense task using all verbs", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Accuracy on phonetic past tense task using regulars only", "figure_data": "", "figure_id": "fig_5", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "001 level with 25, 50, 250, and 500 examples, but only at the 0.1 level with 100 examples. The di erences between IFoil and Foidl are all signi cant at the 0.001 level. Those between Foil and IFoil are not signi cant with 25 training examples and are signi cant only at the 0.01 level with 50 training examples, but are signi cant at the 0.001 level with 100 or more examples.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Accuracy on alphabetic past tense task", "figure_data": "", "figure_id": "fig_7", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 4: Number of rules created for phonetic past tense task", "figure_data": "", "figure_id": "fig_8", "figure_label": "5", "figure_type": "figure" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b72", "b73", "b81", "b82", "b95", "b29", "b33", "b7", "b23", "b69", "b70", "b79", "b65", "b72", "b73", "b81", "b82", "b95", "b29", "b33", "b64", "b42", "b30", "b32", "b33", "b35", "b7", "b25", "b26", "b25", "b26", "b33", "b26", "b39", "b52", "b93", "b60", "b16", "b55", "b57", "b77", "b17", "b58", "b47", "b28", "b15", "b27", "b89", "b85" ], "table_ref": [], "text": "Abstraction is one of the most challenging and also promising approaches to improve complex problem solving and it is inspired by the way humans seem to solve problems. At rst, less relevant details of a given problem are ignored so that the abstracted problem can be solved more easily. Then, step by step, more details are added to the solution by taking an increasingly more detailed look at the problem. Thereby, the abstract solution constructed rst is re ned towards a concrete solution. One typical characteristic of most work on hierarchical problem solving is that abstraction is mostly performed by dropping sentences of a domain description (Sacerdoti, 1974(Sacerdoti, , 1977;;Tenenberg, 1988;Unruh & Rosenbloom, 1989;Yang & Tenenberg, 1990;Knoblock, 1989Knoblock, , 1994;;Bacchus & Yang, 1994). A second common characteristic is that a hierarchical problem solver usually derives an abstract solution from scratch, without using experience from previous problem solving episodes. Giunchiglia and Walsh (1992) have presented a comprehensive formal framework for abstraction and a comparison of the di erent abstraction approaches from theorem proving (Plaisted, 1981(Plaisted, , 1986;;Tenenberg, 1987), planning (Newell & Simon, 1972;Sacerdoti, 1974Sacerdoti, , 1977;;Tenenberg, 1988;Unruh & Rosenbloom, 1989;Yang & Tenenberg, 1990;Knoblock, 1989Knoblock, , 1994)), and model based diagnosis (Mozetic, 1990). For hierarchical planning, Korf's model of abstraction in problem solving (Korf, 1987) allows the analysis of reductions in search caused by single and multiple levels of abstraction. He has shown that in the optimal case, abstraction can reduce the expected search time from exponential to linear. Knoblock has developed an approach to construct a hierarchy of abstraction spaces automatically from a given concrete-level problem solving domain (Knoblock, 1990(Knoblock, , 1993(Knoblock, , 1994)). These so called ordered monotonic abstraction hierarchies (Knoblock, Tenenberg, & Yang, 1991b) have proven useful in many domains. Recently, Bacchus and Yang (1994) presented an improved method for automatically generating abstraction hierarchies based on a more detailed model of search costs.\nAll these abstraction methods, however, rely on abstraction by dropping sentences of the domain description which is a kind of homomorphic abstraction (Holte et al., 1994(Holte et al., , 1995)). It has been shown that these kinds of abstractions are highly representation dependent (Holte et al., 1994(Holte et al., , 1995)). For two classical planning domains, di erent \\natural\\ representations have been analyzed and it turns out that there are several representations for which the classical abstraction techniques do not lead to signi cantly improved problem solvers (Knoblock, 1994;Holte et al., 1995). However, it is well known that normally many di erent representations of the same domain exist as already pointed out by Korf (1980), but up to now no theory of representation has been developed. 
In particular, there is no theory of representation for hierarchical problem solving with dropping sentences.

From a knowledge-engineering perspective, many different aspects such as simplicity, understandability, and maintainability must be considered when developing a domain representation. Therefore, we assume that representations of domains are given by knowledge engineers and rely on representations which we consider most "natural" for certain kinds of problems. We will demonstrate two simple example problems and related representations in which the usual use of abstraction in problem solving does not lead to any improvement. In the first example, no improvement can be achieved because abstraction is restricted to dropping sentences of a domain. In the second example, the abstract solution computed from scratch does not decompose the original problem and consequently does not cut down the search space at the next detailed level. We do not want to argue that the examples can never be represented in a way that standard hierarchical problem solving works well. However, we think it would require a large effort from a knowledge engineer to develop an appropriate representation, and we believe that it is often impossible to develop a representation which is appropriate from a knowledge-engineering perspective and which also allows efficient hierarchical problem solving based on dropping sentences.

We take these observations as the motivation to develop a more general model of abstraction in problem solving. As already pointed out by Michalski (1994), abstraction, in general, can be seen as switching to a completely new representation language in which the level of detail is reduced. In problem solving, such a new abstract representation language must consist of completely new sentences and operators and not only of a subset of the sentences and operators of the concrete language. To our knowledge, Sipe (Wilkins, 1988) is the only planning system which currently allows the change of representation language across different levels of abstraction. However, a general abstraction methodology which allows efficient algorithms for abstraction and refinement has not yet been developed. We want to propose a method of abstraction which allows the complete change of representation language of a problem and a solution from concrete to abstract and vice versa, if the concrete and the abstract language are given. Additionally, we propose to use experience from previously solved problems, usually available as a set of cases, to come to abstract solutions. The use of experience has already proven useful in various approaches to speedup learning such as explanation-based learning (Mitchell, Keller, & Kedar-Cabelli, 1986; DeJong & Mooney, 1986; Rosenbloom & Laird, 1986; Minton, 1988; Minton, Carbonell, Knoblock, Kuokka, Etzioni, & Gil, 1989; Shavlik & O'Rorke, 1993; Etzioni, 1993; Minton & Zweben, 1993; Langley & Allen, 1993; Kambhampati & Kedar, 1994), and analogical or case-based reasoning (Carbonell, 1986; Kambhampati & Hendler, 1992; Veloso & Carbonell, 1993; Veloso, 1994).

As the main contribution of this paper, we present an abstraction methodology and a related learning method in which beneficial abstract planning cases are automatically derived from given concrete cases. Based on a given concrete and abstract language, this learning approach allows the complete change of the representation of a case from the concrete to the abstract level.
However, to achieve such an unconstrained kind of abstraction, the set of admissible abstractions must be implicitly predefined by a generic abstraction theory. Compared to approaches in which abstraction hierarchies are generated automatically, more effort is required to specify the abstract language, but we feel that this is a price we have to pay to make planning more tractable in certain situations. This approach is fully implemented in Paris (Plan Abstraction and Refinement in an Integrated System), a system in which abstract cases are learned and organized in a case base. During novel problem solving, this case base is searched for a suitable abstract case which is further refined to a concrete solution to the current problem.\nThe presentation of this approach is organized as follows. The next section presents an analysis of hierarchical problem solving in which the shortcomings of current approaches are illustrated by simple examples. Section three argues that a powerful case abstraction and refinement method can overcome the identified problems. Furthermore, we present the Paris approach informally, using a simple example. The next three sections of the paper formalize the general abstraction approach. After introducing the basic terminology, Section 5 defines a new formal model of case abstraction. Section 6 contains a very detailed description of a correct and complete learning algorithm for case abstraction. Section 7 explains the refinement of cases for solving new problems. Section 8 gives a detailed description of the domain of process planning in mechanical engineering for the production of rotary-symmetric workpieces on a lathe and demonstrates the proposed approach on examples from this domain. Section 9 reports on a detailed experimental evaluation of Paris in the described domain. Finally, we discuss the presented approach in relation to similar work in the field. The appendix of the article contains the formal proofs of the properties of the abstraction approach and the related learning algorithm. Additionally, the detailed representation of the mechanical engineering domain used for the experimental evaluation is given in Online Appendix 1." }, { "figure_ref": [], "heading": "Analysis of Hierarchical Problem Solving", "publication_ref": [ "b42", "b31", "b32", "b33", "b25", "b26" ], "table_ref": [], "text": "The basic intuition behind abstraction is as follows. By first ignoring less relevant features of the problem description, abstraction allows problems to be solved in a coarse fashion with less effort. Then, the derived abstract (skeletal) solution serves as a problem decomposition for the original, more detailed problem. Korf (1987) has shown that hierarchical problem solving can reduce the required search space significantly. Assume that a problem requires a solution of length n, and furthermore assume that the average branching factor is b, i.e., the average number of states that can be reached from a given state by applying a single operator. The worst-case time complexity for finding the required solution by search is O(b^n). Now, suppose that the problem is decomposed by an abstract solution into k subproblems, each of which requires a solution of length n_1, ..., n_k, respectively, with n_1 + n_2 + ... + n_k = n. In this situation, the worst-case time complexity for finding the complete solution is O(b^n_1 + b^n_2 + ... + b^n_k), which is O(b^max(n_1, n_2, ..., n_k)). Please note that this is a significant reduction in search time complexity, as the small calculation below illustrates. 
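To make the reduction concrete, here is a small back-of-the-envelope calculation; the branching factor b = 3 and solution length n = 8 are illustrative values chosen for this sketch, not figures taken from the text.

```python
# Worst-case node counts for undecomposed search vs. search guided by
# an abstract solution that splits the problem into k equal subproblems.
b, n = 3, 8                      # illustrative branching factor and plan length
undecomposed = b ** n            # O(b^n): 6561 nodes in the worst case

k = 2                            # the abstract solution yields 2 subproblems
decomposed = k * b ** (n // k)   # O(b^(n/k)) per subproblem: 162 nodes

print(undecomposed, decomposed)  # 6561 162
```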
In particular, we can easily see that the reduction is maximal if all subproblems are of similar size, i.e., n_1 ≈ n_2 ≈ ... ≈ n_k.\nHowever, to achieve a significant search reduction, the computed abstract solution must not only be a solution to the abstracted problem, it must additionally fulfill a certain requirement presupposed in the above analysis. The subproblems introduced by the abstract solution must be independent, i.e., each of them must be solvable without interaction with the other subproblems. This avoids backtracking between the solutions of the individual subproblems and consequently cuts down the necessary overall search space. Even if this restriction is not completely fulfilled, i.e., backtracking is still required in a few cases, several empirical studies (especially Knoblock, 1991, 1993, 1994) have shown that abstraction can nevertheless lead to performance improvements.\nUnfortunately, there are also domains and representations of domains (Holte et al., 1994, 1995) in which the way abstraction is used in hierarchical problem solving cannot improve problem solving because the derived abstract solutions don't fulfill the above mentioned requirement at all. In the following, we will show two examples of such domains which demonstrate two general drawbacks of hierarchical problem solving. Please note that in these examples, a particular representation is assumed. We feel that these representations are somehow \"natural\" and very likely to be used by a knowledge engineer developing a domain. However, there might be other representations of these domains for which traditional hierarchical planning works. We assume that such representations are very difficult to find, especially if the domain representation should also fulfill additional knowledge-engineering requirements." }, { "figure_ref": [], "heading": "Abstraction by Dropping Sentences", "publication_ref": [ "b72", "b73", "b81", "b82", "b95", "b29", "b33", "b30", "b32", "b33", "b7" ], "table_ref": [], "text": "In hierarchical problem solving, abstraction is mostly achieved by dropping sentences of the problem description from preconditions and/or effects of operators (Sacerdoti, 1974, 1977; Tenenberg, 1988; Unruh & Rosenbloom, 1989; Yang & Tenenberg, 1990; Knoblock, 1989, 1994). The assumption which justifies this kind of abstraction is that less relevant details of the problem description are expressed as isolated sentences in the representation, which can be addressed after the more relevant sentences have been established. Ignoring such sentences is assumed to lead to an abstract solution useful to reduce the search at the more concrete planning levels.\nHowever, this assumption does not hold in all domains. For example, in many real-world domains, certain events need to be counted, e.g., when transporting a certain number of containers from one location to another. Imagine a domain in which, in addition to several other operators, there is an increment operator described as follows:\nOperator: inc\nPrecondition: value(X)\nDelete: value(X)\nAdd: value(X + 1)\nIn this representation, the integer value which is increased is represented by a single sentence. Each state consists only of a single sentence, and also the operator contains only one single sentence. We think that this representation is very \"natural\" and very likely to be chosen by a knowledge engineer. 
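As a rough sketch of how this representation could be encoded (the set-of-ground-sentences encoding below is hypothetical and ours, not part of the original domain description), note that each state and the operator really do involve only the single sentence value(X):

```python
# Hypothetical STRIPS-style encoding of the counting domain: a state is
# a set of ground sentences, and inc is given by its precondition,
# delete-list, and add-list over the single sentence value(X).
def inc(state):
    for sentence in state:
        if sentence[0] == "value":          # precondition: value(X) holds
            x = sentence[1]
            # delete value(X), add value(X + 1)
            return (state - {("value", x)}) | {("value", x + 1)}
    return None                             # precondition not satisfied

state = frozenset({("value", 0)})
state = inc(state)                          # {("value", 1)}
# Dropping the sentence value(X) would empty the state, the
# precondition, and both effect lists at once -- nothing would remain
# to plan with at the abstract level.
```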
In this domain, incrementing value(0) to value(8) requires a sequential plan composed of 8 inc-operators, leading to the state sequence value(0), value(1), ..., value(8). In this example, however, abstraction by dropping sentences does not work because, if this single sentence were dropped, nothing would remain in the operator description and the whole counting problem would have been dropped completely. So there is only the empty problem at the abstract level, and the empty plan is going to solve it. Unfortunately, the empty plan cannot cause any complexity reduction for solving the problem at the concrete level. Consequently, abstraction by dropping sentences completely fails to improve problem solving in this situation. However, we can adequately cope with this counting problem by abstracting the quantitative value expressed in the sentence towards a qualitative representation (e.g., low = {0, 1, 2, 3}, medium = {4, 5, 6, 7}, high = {8, 9, 10, 11}). Such a qualitative representation would result in an abstract plan composed of two operators (subproblems) that increase the value from low to medium and further to high. This abstract plan defines two independently refinable subproblems. To solve the first subproblem at the concrete level, the problem solver has to search for a sequence of inc-operators which increments the value from 0 to any medium value (any value from the set {4, 5, 6, 7}). This subproblem can be solved by a sequence of 4 inc-operators leading to the concrete state with a value of 4. Similarly, the second subproblem at the concrete level is to find a sequence of operators which changes the value from 4 to the final value 8. This second subproblem can also be solved by a sequence of 4 inc-operators. So we can see that the complete problem, which requires a sequence of 8 concrete operators, is divided into 2 subproblems where each subproblem can be solved by a 4-step plan. Because of the exponential nature of the search space, the two 4-step problems together can be solved with much less search than the 8-step problem as a whole. Following Korf's analysis sketched before, the time complexity is reduced from O(b^8) to O(b^4). Please note that the particular abstraction which leads to two subproblems is not central for achieving the complexity reduction. The important point is that the problem is decomposed into more than one subproblem. This kind of abstraction can be achieved by introducing a new abstract representation language which consists of the qualitative values and a corresponding abstract increment operator.\nWe can even generalize from the specific example presented above. The problem with the dropping-condition approach is that it is not possible to abstract information (e.g., the value in our example) that is coded in a single sentence in the representation. This is particularly a problem when the required solution contains a long sequence of states which only differ in a single sentence. Dropping this particular sentence leads to dropping the whole problem, and not dropping the sentence does not lead to any abstraction. What is really required is to abstract the information encoded in this single sentence, which obviously requires more than just dropping the complete information.\nTo summarize, we have seen that abstraction by dropping sentences does not work for the particular kind of problems we have shown; a small sketch of the qualitative abstraction just described is given below. 
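The following sketch renders the qualitative abstraction described above in code; the function names and the brute-force refinement loop are our own illustrative choices, under the assumption that refinement may simply search forward with inc until the next abstract goal is reached.

```python
# Sketch of the qualitative abstraction of the counting domain: concrete
# values map to the abstract sentences low/medium/high, and each abstract
# step becomes an independently refinable subproblem.
def abstract(x):
    return "low" if x < 4 else "medium" if x < 8 else "high"

def refine(value, abstract_goal):
    """Apply inc until the current state abstracts to the goal."""
    plan = []
    while abstract(value) != abstract_goal:
        value += 1               # one application of the inc operator
        plan.append("inc")
    return value, plan

value, plan1 = refine(0, "medium")    # 4 inc steps: 0 -> 4
value, plan2 = refine(value, "high")  # 4 more inc steps: 4 -> 8
```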
In general, abstraction requires changing the complete representation language from concrete to abstract, which usually involves the introduction of completely new abstract terms (sentences or operators). Within this general view, dropping sentences is just a special case of abstraction. The reason why dropping sentences has been widely used in hierarchical planning is that, due to its simplicity, refinement is very easy because abstract states can directly be used as goals at the more detailed levels. Another very important property of abstraction by dropping sentences is that useful hierarchies of abstraction spaces can be constructed automatically from domain descriptions (Knoblock, 1990, 1993, 1994; Bacchus & Yang, 1994)." }, { "figure_ref": [ "fig_1" ], "heading": "Generating Abstract Solutions from Scratch", "publication_ref": [ "b32", "b95", "b81" ], "table_ref": [], "text": "Another limiting factor of classical hierarchical problem solving can be the way abstract solutions are computed. As pointed out by Korf, a good abstract solution must lead to mostly independent subproblems of equal size. In classical problem solving, an abstract solution is found by breadth-first or depth-first search using linear (e.g., Alpine, Knoblock, 1993) or non-linear (e.g., Abtweak, Yang & Tenenberg, 1990) problem solvers. For these problem solvers, the upward-solution property (Tenenberg, 1988) usually holds, which means that an abstract solution exists if a concrete-level solution exists. Usually, these problem solvers find an arbitrary abstract solution (e.g., the shortest possible solution). Unfortunately, there is no way to guarantee that the computed solutions are refinable and lead to mostly independent subproblems of sufficiently equal size, even if such a solution exists. In general, there are not even heuristics which try to guide problem solving towards the aspired kind of useful abstractions. This problem is illustrated by the following example, which additionally shows the limitation of abstraction by dropping sentences.\nImagine a large (or even infinite) state space which includes at least the 8 distinct states shown on the left of Figure 1. Each of these 8 states is described by the presence or absence of three sentences E1, E2, and E3 in the state description. In the 3-bit vector shown in Figure 1, \"0\" indicates the absence of the sentence and \"1\" represents the presence of the sentence. The 8 different states described by these three sentences are arranged in a 3-dimensional cube, using one dimension for each sentence. The arrows in this diagram show possible state transitions by the available operators of the domain. Each operator manipulates (adds or deletes) exactly one sentence of the state description, if certain conditions on the other sentences are fulfilled. The representation of two of these operators is shown on the right of Figure 1. Furthermore, assume that there are many more operators which connect some other states of the domain, not shown in the diagram, to the 8 depicted states. Consequently, we must assume a branching factor of b ≫ 1 at each state, which makes the search space for problem solving quite large. Besides the description of the domain, Figure 1 also shows three example problems: X → X', Y → Y' and Z → Z'. For example, the solution to the problem X → X' is the 5-step path 000 → 010 → 110 → 111 → 101 → 001. Now, let's consider the abstract solutions which correspond to the concrete solutions for each of the three problems. 
For each problem, we want to examine the three possible ways of abstraction by dropping one of the sentences. For this purpose, the geometric arrangement of the states turns out to be very useful because abstraction can simply be viewed as projecting the 3-dimensional state space onto the plane defined by the sentences which are not dropped by abstraction. The left part of Figure 2 shows the three possible abstract state spaces which result from dropping one of the sentences. Here it is very important to see that in each abstract state space, every sentence can be modified unconditionally and independently of the other sentences. However, only one sentence can be modified by each operator. Thereby, all the constraints that exist at the concrete level are relaxed.\nThe abstraction of the concrete solution to each of the three problems (X → X', Y → Y' and Z → Z') with respect to the three possible ways of dropping conditions is shown on the right of Figure 2. The sequence in which the operators have to be applied is indicated by the numbers which mark them. We can also see that whatever sentence we drop for any of the problems, an appropriate abstract solution exists which decomposes the original problem into independently refinable subproblems of sufficiently equal size. The main point about this example is that none of these abstract solutions will be found by a hierarchical problem solver! The reason for this is that for each of the abstracted problems there also exists a 0-step or a 1-step solution in addition to the nine 3-step or 4-step solutions indicated by the depicted paths. However, such a short solution is completely useless for reducing the search at the next more concrete level because the original problem is not decomposed at all. The central problem with this is that most problem solvers will find the shorter but useless solutions first, and try to refine them. Consequently, the search space on the concrete level is not reduced, so that no performance improvement is achieved at all. However, there might be other representations of this example domain in which a hierarchical problem solver comes to a useful abstract solution. We think, however, that the representation shown is quite natural because it represents the 8 different states with the minimal number of binary sentences.\nTo summarize, we presented an example in which a useful abstract solution is not found by hierarchical planning although it exists. The reason for this is that planners usually try to find shortest solutions, which is a good strategy for the ground level, but which may not be appropriate at the abstract level. Nor is it desirable to search for the longest solutions, because this might cause unnecessarily long concrete plans." }, { "figure_ref": [], "heading": "Case Abstraction and Refinement", "publication_ref": [], "table_ref": [], "text": "As a way out of this problem, we propose to use experience given in the form of concrete planning cases and to abstract this experience for its reuse in new situations. Therefore, we need a powerful abstraction methodology which allows the introduction of a completely new abstract terminology at the abstract level. This makes it possible that useful abstract solutions can be expressed for domains in which abstraction by dropping conditions is not sufficient. In particular, this methodology must not only serve as a means to analyze different abstraction approaches, but it must allow efficient algorithms for abstracting and refining problems and solutions."
}, { "figure_ref": [ "fig_2" ], "heading": "The Basic Idea", "publication_ref": [ "b76", "b75", "b57", "b84", "b89", "b85", "b92", "b9", "b11" ], "table_ref": [], "text": "We now introduce an approach which achieves case abstraction and re nement by changing the representation language. As a prerequisite, this approach requires that the abstract language itself (state description and operators) is given by a domain expert in addition to the concrete level description. We also require that a set of admissible ways of abstracting states is implicitly prede ned by a generic abstraction theory. This is of course an additional knowledge engineering requirement, but we feel that this is a price we have to pay to enhance the power of hierarchical problem solving. Recent research on knowledge acquisition already describes approaches and tools for the acquisition of concrete level and abstract level operators in real-world domains (Schmidt & Zickwol , 1992;Schmidt, 1994). An abstract language which is given by the user has the additional advantage that abstracted cases are expressed in a language with which the user is familiar. Consequently, understandability and explainability, which are always important issues when applying a system, can be achieved more easily.\nAs a source for learning, we assume a set of concrete planning cases, each of which consists of a problem statement together with a related solution. As is the case in Prodigy (Minton et al., 1989), we only consider sequential plans, i.e., plans with totally ordered operators. The planning cases we assume do not include a problem solving trace as for example the problem solving cases in Prodigy/Analogy (Veloso, 1992;Veloso & Carbonell, 1993;Veloso, 1994). In real-world applications, a domain expert's solutions to previous problems are usually recorded in a company's ling cabinet or database. These cases can be seen as a collection of the company's experience, from which we want to draw power.\nDuring a learning phase, a set of abstract planning cases is generated from each available concrete case. An abstract planning case consists of an abstracted problem description together with an abstracted solution. The case abstraction procedure guarantees that the abstract solution contained in an abstract case can always be re ned to become a solution of the concrete problem contained in the concrete case that became abstracted. Di erent abstract cases may be situated at di erent levels of abstraction or may be abstractions according to di erent abstraction aspects. Di erent abstract cases can be of di erent utility and can reduce the search space at the concrete level in di erent ways. It can also happen that several concrete cases share the same abstraction. The set of all abstract planning cases that are learned is organized in a case-base for e cient retrieval during problem solving.\nDuring the problem solving phase, this case base is searched until an abstract case is found which can be applied to the current problem in hand. An abstract case is applicable to the current problem if the abstracted problem contained in the abstract planning case is an abstraction of the current problem. However, we cannot guarantee that an abstract solution contained in a selected abstract case can really be re ned to become a solution to the current problem. 
It is at least known that each abstract solution from the case base was already useful for solving one or more previous problems, i.e., the problems contained in those concrete cases from which the abstract case was learned. Since the new problem is similar to these previous problems, because both can be abstracted in the same way, there is at least a high chance that the abstract solution is also useful for solving the new problem. When the new problem is solved by refinement, a new concrete case arises which can be used for further learning. Paris (Plan Abstraction and Refinement in an Integrated System) follows the basic approach just described. Figure 3 shows an overview of the whole system and its components.\nBesides case abstraction and refinement, Paris also includes an explanation-based approach for generalizing cases during learning and for specializing them during problem solving. Furthermore, the system also includes additional mechanisms for evaluating different abstract cases and generalizations derived by the explanation-based component. This evaluation component measures the reduction in search time caused by each abstract plan when solving those concrete problems from the case base for which the abstract plan is applicable.\nBased on this evaluation, several different indexing and retrieval mechanisms have been developed. In these retrieval procedures, those abstract cases are preferred which have caused the most reduction in search during previous problem solving episodes. In particular, abstract cases which turn out to be useless for many concrete problems may even be removed completely from the case base. The spectrum of developed retrieval approaches ranges from simple sequential search, via hierarchical clustering, up to a sophisticated approach for balancing a hierarchy of abstract cases according to the statistical distribution of the cases within the problem space and their evaluated utility. More details on the generalization procedure can be found in (Bergmann, 1992a), while the evaluation and retrieval mechanisms are reported in (Bergmann & Wilke, 1994; Wilke, 1994). The whole multistrategy system, including the various interactions of the described components, will be the topic of a forthcoming article, while first ideas can already be found in (Bergmann, 1992b, 1993). However, as the target of this paper we will concentrate on the core of Paris, namely the approach to abstraction and refinement. " }, { "figure_ref": [ "fig_3" ], "heading": "Informal Description of the Abstraction Approach", "publication_ref": [], "table_ref": [], "text": "We first give an informal description of the abstraction approach in Paris, based on our small example shown in Figure 1, to enhance the understanding of the subsequent formal sections. Suppose that the solution to the problem X → X' is available as a concrete problem solving experience. The task is now to learn an abstract case which can be beneficially used to solve future problems such as Y → Y' and Z → Z'. This learning task must be achieved within an abstraction approach which is stronger than dropping sentences. If we look at Figure 4, it becomes obvious that by changing the representation a single abstract case can be learned which is useful for all three concrete problems. The abstract plan shown indicates which concrete states have to be abstracted towards a single abstract state, such that a single abstract plan exists which is useful for all three problems."
}, { "figure_ref": [ "fig_3" ], "heading": "Abstract Language and Generic Abstraction Theory", "publication_ref": [], "table_ref": [], "text": "To achieve this kind of abstraction, our approach requires that the abstract language (states and operators), as well as a generic abstraction theory is provided by the user. For the example in Figure 4, the abstract language must contain the new abstract sentences A 1 ; : : : ; A 4 and the three abstract operators which allow the respective state transitions. These abstract operators, called Oa i (i 2 f1; : : : ; 3g), can be de ned as follows:\nOperator: Oa i Precondition: A i Delete: A i Add: A i+1\nFor each new abstract sentence, the user must provide a set of generic abstraction rules which describe how this sentence is de ned in terms of the available sentences of the con-crete language. The generic abstraction theory de ned by these rules speci es a set of admissible state abstractions. For our example, the generic abstraction theory must contain the following two rules to de ne the new abstract sentence A 1 : :E1 ^E2 ! A 1 and :E1 ^:E2 ^:E3 ! A 1 . In general, the de nition of the generic abstraction theory does not require that all state abstractions are noted explicitly. Abstract states can be derived implicitly by the application of a combination of several rules from the generic abstraction theory.\nBesides the kind of abstraction described above, the user may also want to specify a di erent type of abstraction which she/he also considers useful. For example, we can assume that abstraction by dropping the sentence E1 should also be realized. In this case, the abstract language must contain a copy of the two sentences which are not dropped, i.e., the sentences E2 and E3. Therefore, the user 5 may de ne two abstract sentences A 5 and A 6 by the following rules of the generic abstraction theory: E2 ! A 5 and E3 ! A 6 . Of course, the respective abstract operators must also be speci ed.\nSince the domain expert or knowledge engineer must provide the abstract language and the generic abstraction theory, she/he must already have one or more particular kinds of abstraction in mind. She/he must know what kind of details can be omitted when solving a problem in an abstract fashion. With our approach, the knowledge-engineer is given the power to express the kind of abstraction she/he considers useful." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Model of Case Abstraction", "publication_ref": [], "table_ref": [], "text": "Based on the given abstract language and the generic abstraction theory, the abstraction of a planning case can be formally described by two abstraction mappings: a state abstraction mapping and a sequence abstraction mapping. These two mappings describe two dimensions for reducing the level of detail in a case. The state abstraction mapping reduces the level of detail of a state description while changing the representation language. For the case abstraction indicated in Figure 4, the state abstraction mapping must map the concrete states 000, 011 and 010 onto an abstract state described by the new sentence A 1 , and simultaneously it must map all other concrete states occurring in the plan onto the respective abstract states described by the new sentences A 2 , A 3 , and A 4 . The sequence abstraction mapping reduces the level of detail in the number of states which are considered at the abstract level by relating some of the concrete states from the concrete case to abstract states of the abstract case. 
While some of the concrete states can be skipped, each abstract state must result from a particular concrete state. For example, in Figure 4, the abstraction of the plan 000 → 010 → 110 → 111 → 101 → 001 requires a sequence abstraction mapping which relates the first abstract state, described by A_1, to the first concrete state 000, the second abstract state, described by A_2, to the third concrete state 110, and so forth. In this example, the second and the fifth concrete states are skipped." }, { "figure_ref": [ "fig_3", "fig_3", "fig_1" ], "heading": "Learning Abstract Planning Cases", "publication_ref": [], "table_ref": [], "text": "The procedure for learning such abstract planning cases from a given concrete planning case is decomposed into four separate phases. For our simple example, these phases are shown in Figure 5 (captioned: The four phases of case abstraction for the solution to the problem X → X'). In phase-I, the states which result from the execution of the plan contained in the concrete case are determined. Therefore, each operator contained in the plan (starting from the first operator) is applied and the successor state is computed. This process starts at the initial state contained in the case and leads to a final state, which should be the goal state contained in the case. In phase-II, we derive all admissible abstractions for each concrete state computed in the first phase. For this purpose, the generic abstraction theory is used to determine all abstract sentences that can be derived from a respective concrete state by applying the rules of the generic abstraction theory. Figure 5 shows the abstract sentences that can be derived by the generic abstraction theory sketched above. For example, we can see that for the second concrete state an abstract description can be derived which contains two abstract sentences: the abstract sentence A_1 required to achieve the type of abstraction shown in Figure 4, and additionally the abstract sentence A_5 required for abstraction by dropping sentences. Please note that by this process, the representation language of states is changed from concrete to abstract. The next two phases deal with the abstract operators. As already stated, abstract operators are given in the abstract language provided by the user. However, we do not assume operator abstraction rules which associate an abstract operator with a single concrete operator or a sequence of concrete operators. The reason for this is that such operator abstraction rules are extremely hard to acquire and even harder to keep complete. In the next two phases of case abstraction, we search for transitions of abstract states based on the available abstract operators. In phase-III, an acyclic directed graph is constructed. An edge leads from an abstract state i to a successor abstract state j (not necessarily to the next abstract state) if an abstract operator is applicable in state i and its application leads to the state j. The definitions of the abstract operators are used in this process. The available abstract operators determine which transitions can be included in the graph. Figure 5 shows the resulting graph, provided that the abstract operators sketched in Section 3.3.1 are contained in the abstract language. 
In this graph, the transitions shown in plain line style result from the operators Oa_i, while the transitions shown in dashed line style result from the operators required for abstraction by dropping conditions.\nIn phase-IV, the graph is searched for consistent paths from the initial abstract state to the final abstract state. The paths must be consistent in the sense that in the resulting path (i.e., an abstract plan) every abstract operator is correctly applicable in the state that results from the previous operator. Moreover, the state abstraction which is required for this abstract plan must not change within the plan. In Figure 5, two paths of this kind are shown. The lower path represents the abstract planning case Ca_1 (abstract initial and final state together with the operator sequence) that results from the kind of abstraction shown in Figure 4. The upper path represents the abstract planning case Ca_2 that results from abstraction by dropping the sentence E1. This is the same abstract plan as shown in Figure 2 for the problem X → X'. Together with the two plans, the abstract state descriptions that result from the operator applications are shown. Please note that these state descriptions are always a subset of the descriptions which are derived by the generic abstraction theory. For example, the description of the fourth abstract state derived in phase-II contains the sentences A_3, A_5, A_6. This abstract state occurs in both abstract cases which are computed in phase-IV. In the case Ca_2, the respective state is described by the sentences A_5 and A_6 because these are the only sentences which result from the application of the operators starting at the abstract initial state. In the case Ca_1, the abstract state is described by the sentence A_3 because this sentence results from the application of the operator Oa_2.\nFrom this example we can see that the abstract operators have two functions. The first function is to select some of the concrete states that become abstracted. For example, in the abstract case Ca_1, the second concrete state is skipped, even though the first and the second concrete states can be abstracted to different abstract descriptions in phase-II. The reason for this is that there is no abstract operator that (a) leads from the first abstracted state to the second abstracted state and (b) is also consistent with the other operators in the rest of the path. The second function of the abstract operators is to select some of the abstract sentences that are considered in the abstract planning case. For example, in the abstract case Ca_1, the sentences A_1, ..., A_4 are considered while the sentences A_5 and A_6 are left out. The reason for this is that the abstract operators Oa_1, Oa_2, Oa_3 which occur in the plan don't use A_5 and A_6 in their preconditions and don't manipulate these sentences.\nAfter phase-IV is finished, a set of abstract planning cases is available. These planning cases can then be stored in the case base and used for further problem solving." }, { "figure_ref": [ "fig_4" ], "heading": "Selecting and Refining Abstract Cases", "publication_ref": [], "table_ref": [], "text": "During problem solving, an abstract case must be selected from a case base, and the abstract plan contained in this case must be refined to become a solution to the current problem. During case retrieval, we must search for an abstract case which is applicable, i.e., one that contains a problem description that is an abstraction of the current problem (a small sketch of this applicability test is given below). 
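As a rough illustration (the function names and the representation of cases and problems below are our own, and derivable_abstractions stands in for the SLD-based derivation of all abstract sentences that hold for a concrete state), the applicability test could look as follows:

```python
# Sketch of the applicability test during retrieval: an abstract case is
# applicable if its abstract initial and goal states can be derived from
# the current concrete problem by the generic abstraction theory.
def is_applicable(abstract_case, problem, derivable_abstractions):
    (abs_init, abs_goal), _abstract_plan = abstract_case
    init, goal = problem
    return (abs_init <= derivable_abstractions(init) and
            abs_goal <= derivable_abstractions(goal))

# For the running example: the case Ca_1 with abstract initial state
# {A_1} and abstract goal {A_4} applies to Y -> Y' because {A_1} and
# {A_4} are derivable from Y's initial state 000 and goal state 100.
```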
For example, assume that the problem Y → Y' should be solved after the case X → X' was presented for learning.\nIn this situation, the case base contains the two abstract cases Ca_1 and Ca_2 shown in phase-IV of Figure 5. The abstract case Ca_1 can be used for solving the new problem, because the initial state 000 of the new problem can be abstracted to A_1 by applying the generic abstraction theory. Similarly, the final state 100 can be abstracted to A_4. However, the abstract case Ca_2 is not applicable because the final abstract state cannot be abstracted to A_6. Consequently, the lower abstract case must be used. During plan refinement, we can refine the abstract operators sequentially from left to right as shown in Figure 6. Thereby, each abstract operator defines an abstract goal state, i.e., the state that results after the execution of the operator. For example, the abstract operator Oa_1 defines the abstract goal A_2. To refine an abstract operator, we search for a concrete operator sequence, starting from the current concrete state (i.e., the initial state for the first operator), until a concrete state is reached that can be abstracted to the desired goal state. If such a state is found, it can be used as a starting state for the refinement of the next abstract operator. For the solution of the problem Y → Y', the refinement of the abstract operator Oa_1 can be achieved by a sequence of two concrete operators leading to the concrete state 110. This concrete state is then used as a starting state to refine the next abstract operator Oa_2.\nThis refinement procedure finishes when the last abstract operator is refined in a way that the final concrete state is achieved. Please note that in this type of refinement the operators themselves are not used directly; instead, the sequences of states which result from their execution are used. Alternatively, we could also have stored an abstract case as a sequence of abstracted states. From our experience, storing a sequence of operators requires less space than storing a sequence of states. This will become obvious when looking at the domain that will be introduced in Section 8. Besides this, the abstract operators play an important role in the learning phase." }, { "figure_ref": [], "heading": "Relations to Skeletal Plans", "publication_ref": [ "b21" ], "table_ref": [], "text": "A similar experience-based or case-based variant for finding an abstract solution can be found in an early paper by Friedland and Iwasaki (1985) in which the concept of skeletal plans is introduced. A skeletal plan is \"[...] a sequence of generalized steps, which, when instantiated by specific operations in a specific problem context will solve a given problem\" (p. 161). \"[...] Skeletal plans exist at many levels of generality. At the most general level, there are only a few basic plans, but these are used as 'fall-backs', when more specific, easier to refine plans cannot be found\" (p. 164). Skeletal plans are solutions to planning problems at different levels of detail and are consequently abstract plans. During problem solving, they are recalled from a library and refined towards a concrete solution. So this approach can be seen as an early idea for integrating abstraction and case-based reasoning. However, there are several differences between the skeletal plan approach and the Paris approach. In the skeletal plan approach, no model of the operators (neither concrete nor abstract) is used to describe the preconditions and effects of operators as is done in Paris. 
There is no explicit notion of states and no abstraction or refinement of states. Instead, the plan refinement is achieved by stepping down a hierarchy of operators, guided by heuristic rules for operator selection. In particular, no approach which supports the automatic acquisition of skeletal plans was provided. Unfortunately, the skeletal plan approach has not yet been investigated in as much detail as current work in the field of speedup learning. There is neither a formal model of skeletal planning nor are there empirical evaluations.\nIn the rest of this paper we will introduce and investigate the Paris approach more formally." }, { "figure_ref": [], "heading": "Basic Terminology", "publication_ref": [ "b65", "b20", "b49", "b50", "b55", "b14" ], "table_ref": [], "text": "In this section we want to introduce the basic formal terminology used throughout the rest of this paper. Therefore, we will define a formal representation for problem solving domains. We want to assume that problem solving in general can be viewed as transforming an initial state into a final state by using a sequence of operators (Newell & Simon, 1972). Following a Strips-oriented representation (Fikes & Nilsson, 1971), the domain of problem solving D = ⟨L, E, O, R⟩ is described by a first-order language L (the basic language is first order, but with the deductive rules given in Horn logic, only a subset of the full first-order language is used), a set of essential atomic sentences E of L (Lifschitz, 1987), a set of operators O with related descriptions, and additionally, a set of rules (Horn clauses) R out of L. The essential sentences (which must be atomic) are the only sentences that are used to describe a state. A state s ∈ S describes the dynamic part of a situation in a domain and consists of a finite subset of ground instances of essential sentences of E. With the symbol S, we denote the set of all possible state descriptions in a domain, which is defined as S = 2^E*, with E* = {eθ | e ∈ E and θ is a substitution such that eθ is ground}. In addition, the Horn clauses R allow the representation of static properties which are true in all situations. These Horn clauses must not contain an essential sentence in the head of a clause.\nAn operator o(x_1, ..., x_n) ∈ O is described by a triple ⟨Pre_o, Add_o, Del_o⟩, where the precondition Pre_o is a conjunction of atoms of L, and the add-list Add_o and the delete-list Del_o are finite sets of (possibly instantiated) essential sentences of E. Furthermore, the variables occurring in the operator descriptions must obey the following restrictions: {x_1, ..., x_n} ⊇ Var(Pre_o) ∪ Var(Del_o) and {x_1, ..., x_n} ⊇ Var(Add_o). (These restrictions can however be relaxed such that {x_1, ..., x_n} ⊇ Var(Pre_o) is not required; the introduced restriction simplifies the subsequent presentation.) An instantiated operator is an expression of the form o(t_1, ..., t_n), with the t_i being ground terms of L. A term t_i describes the instantiation of the variable x_i in the operator description. For notational convenience, we define the instantiated precondition as well as the instantiated add-list and delete-list for an instantiated operator as follows: Pre_{o(t_1,...,t_n)} := Pre_o θ, Add_{o(t_1,...,t_n)} := {aθ | a ∈ Add_o}, Del_{o(t_1,...,t_n)} := {dθ | d ∈ Del_o}, where ⟨Pre_o, Add_o, Del_o⟩ is the description of the (uninstantiated) operator o(x_1, ..., x_n), and θ = {x_1/t_1, ..., x_n/t_n} is the corresponding instantiation. 
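A small sketch of this instantiation step follows; the move operator and the tuple encoding of sentences are hypothetical illustrations of ours, not part of the paper's domains.

```python
# Sketch of instantiating an operator description: apply the
# substitution theta = {x_1/t_1, ..., x_n/t_n} to a set of sentences.
def subst(sentences, theta):
    return {(pred,) + tuple(theta.get(arg, arg) for arg in args)
            for pred, *args in sentences}

pre    = {("at", "x")}      # Pre_o  of a hypothetical operator move(x, y)
delete = {("at", "x")}      # Del_o
add    = {("at", "y")}      # Add_o
theta  = {"x": "home", "y": "office"}

print(subst(pre, theta), subst(add, theta))
# {('at', 'home')} {('at', 'office')}
```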
The introduced Strips-oriented formalism for defining a problem solving domain is similar in form and expressiveness to the representations typically used in general problem solving or planning. A state can be described by a finite set of ground atoms in which functions can also be used. Full Horn logic is available to describe static rules. The restriction to Horn clauses has the advantage of being powerful while allowing efficient proof construction by using the well-known SLD-refutation procedures (Lloyd, 1984). Compared to the Prodigy Description Language (PDL) (Minton, 1988; Blythe et al., 1992), our language does not provide explicit quantification by a specific syntactic construct, but a similar expressiveness can be reached by the implicit quantification in Horn clauses. Moreover, our language does not provide any kind of type specification for constants or variables as in PDL, but we think that this is not a major disadvantage. Besides these points, our language is quite similar to PDL." }, { "figure_ref": [], "heading": "A Formal Model of Case Abstraction", "publication_ref": [], "table_ref": [], "text": "In this section we present a new formal model of case abstraction which provides a theory for changing the representation language of a case from concrete to abstract. As already stated, we assume that in addition to the concrete language the abstract language is supplied by a domain expert. Following the introduced formalism, we assume that the concrete level of problem solving is defined by a concrete problem solving domain D_c = ⟨L_c, E_c, O_c, R_c⟩ and the abstract level of (case-based) problem solving is represented by an abstract problem solving domain D_a = ⟨L_a, E_a, O_a, R_a⟩. For reasons of simplicity, we assume that both domains do not share the same symbols. (Otherwise, a symbol or a sentence could become ambiguous, which would be a problem when applying the generic abstraction theory: it would be unclear whether a generic abstraction rule refers to a concrete or an abstract sentence.) This condition can always be achieved by renaming symbols. In the remainder of this paper, states and operators from the concrete domain are denoted by s_c and o_c respectively, while states and operators from the abstract domain are denoted by s_a and o_a respectively. (In the following, we will simply omit the parameters of operators and instantiated operators in case they are unambiguous or not relevant.) The problem of case abstraction can now be described as transforming a case from the concrete domain D_c into a case in the abstract domain D_a (see Figure 7). This transformation will now be formally decomposed into two independent mappings: a state abstraction mapping α, and a sequence abstraction mapping β (Bergmann, 1992c). The state abstraction mapping transforms a selection of concrete state descriptions that occur in the solution to a problem into abstract state descriptions, while the sequence abstraction mapping specifies which of the concrete states are mapped and which are skipped.\nFigure 7: General idea of abstraction (a concrete operator sequence in D_c with states indexed 0, ..., n below, an abstract operator sequence in D_a with states indexed 0, ..., m above; α maps selected concrete states to abstract states, and β maps abstract state indices to concrete ones, e.g., β(0) = 0, β(1) = 3, β(j) = i, β(m) = n)." }, { "figure_ref": [], "heading": "State Abstraction", "publication_ref": [ "b22" ], "table_ref": [], "text": "A state abstraction mapping translates states of the concrete world into the abstract world.\nDefinition 1 (State Abstraction Mapping) A state abstraction mapping α : S_c → 
S_a is a mapping from S_c, the set of all states in the concrete domain, to S_a, the set of all states in the abstract domain. In particular, α must be an effective total function. This general definition of a state abstraction mapping does not impose any restrictions on the kind of abstraction besides the fact that the mapping must be a total many-to-one function. However, to restrict the set of all possible state abstractions to a set of abstractions which a user considers useful, we assume that additional domain knowledge about how an abstract state relates to a concrete state can be provided. This knowledge must be expressed in terms of a domain-specific generic abstraction theory A (Giordana, Roverso, & Saitta, 1991).\nDefinition 2 (Generic Abstraction Theory) A generic abstraction theory is a set of Horn clauses of the form e_a ← a_1, ..., a_k. In these rules, e_a is an abstract essential sentence, i.e., e_a = E_a θ for E_a ∈ E_a and a substitution θ. The body of a generic abstraction rule consists of a set of sentences from the concrete or abstract language, i.e., the a_i are atoms out of L_c ∪ L_a.\nBased on a generic abstraction theory, we can restrict the set of all possible state abstraction mappings to those which are deductively justified by the generic abstraction theory.\nDefinition 3 (Deductively Justified State Abstraction Mapping) A state abstraction mapping α is deductively justified by a generic abstraction theory A if the following conditions hold for all s_c ∈ S_c: if ε ∈ α(s_c) then s_c ∪ R_c ∪ A ⊢ ε, and if ε ∈ α(s_c) then for all s'_c such that s'_c ∪ R_c ∪ A ⊢ ε holds, ε ∈ α(s'_c) is also fulfilled.\nIn this definition, the first condition assures that every abstract sentence reached by the mapping is justified by the abstraction theory. Additionally, the second requirement guarantees that if an abstract sentence is used to describe an abstraction of one state, it must also be used to describe the abstraction of all other states from which the abstract sentence can be derived by the generic abstraction theory. Please note that a deductively justified state abstraction mapping can be completely induced by a set Σ of ground abstract essential sentences with respect to a generic abstraction theory as follows: α(s_c) := {ε ∈ Σ | s_c ∪ R_c ∪ A ⊢ ε}. Unless otherwise stated, we always assume deductively justified state abstraction mappings. To summarize, the state abstraction mapping transforms a concrete state description into an abstract state description and thereby changes the representation of a state from concrete to abstract. Please note that deductively justified state abstraction mappings need not be defined by the user. They will be determined automatically by the learning algorithm that will be presented in Section 6." }, { "figure_ref": [], "heading": "Sequence Abstraction", "publication_ref": [ "b54" ], "table_ref": [], "text": "The solution to a problem consists of a sequence of operators and a corresponding sequence of states. To relate an abstract solution to a concrete solution, the relationship between the abstract states (or operators) and the concrete states (or operators) must be captured. Each abstract state must have a corresponding concrete state, but not every concrete state must have an associated abstract state. This is due to the fact that abstraction is always a reduction in the level of detail (Michalski & Kodratoff, 1990), in this situation, a reduction in the number of states. 
For the selection of those concrete states that have a corresponding abstraction, the sequence abstraction mapping is defined as follows:\nDefinition 4 (Sequence Abstraction Mapping) A sequence abstraction mapping β : N → N relates an abstract state sequence (s^a_0, ..., s^a_m) to a concrete state sequence (s^c_0, ..., s^c_n) by mapping the indices j ∈ {1, ..., m} of the abstract states s^a_j to the indices i ∈ {1, ..., n} of the concrete states s^c_i, such that the following properties hold:\nβ(0) = 0 and β(m) = n: the initial state and the goal state of the abstract sequence must correspond to the initial and goal state of the respective concrete state sequence.\nβ(u) < β(v) if and only if u < v: the order of the states defined through the concrete state sequence must be maintained for the abstract state sequence.\nNote that the defined sequence abstraction mapping formally maps indices from the abstract domain into the concrete domain. As an abstraction mapping, it should rather map indices from the concrete domain to indices in the abstract domain, as the inverse mapping β^(-1) does. However, such a mapping is more inconvenient to handle formally, since the range of definition of β^(-1) must always be considered. Therefore we stick to the presented definition." }, { "figure_ref": [], "heading": "Case Abstraction", "publication_ref": [], "table_ref": [], "text": "Based on the two abstraction functions introduced, our intuition of case abstraction is captured in the following definition.\nDefinition 5 (Case Abstraction) An abstract case C_a = ⟨⟨s^a_0, s^a_m⟩, (o^a_1, ..., o^a_m)⟩ is an abstraction of a concrete case C_c = ⟨⟨s^c_0, s^c_n⟩, (o^c_1, ..., o^c_n)⟩ if s^c_{i-1} -o^c_i-> s^c_i for all i ∈ {1, ..., n} and s^a_{j-1} -o^a_j-> s^a_j for all j ∈ {1, ..., m}, and if there exists a state abstraction mapping α and a sequence abstraction mapping β such that s^a_j = α(s^c_{β(j)}) holds for all j ∈ {0, ..., m}.\nThis definition of case abstraction is demonstrated in Figure 7. The concrete space shows the sequence of n operations together with the resulting state sequence. Selected states are mapped by α into states of the abstract space. The mapping β maps the indices of the abstract states back to the corresponding concrete states." }, { "figure_ref": [], "heading": "Generality of the Case Abstraction Methodology", "publication_ref": [], "table_ref": [], "text": "In the following, we briefly discuss the generality of the presented case abstraction methodology. We will see that hierarchies of abstraction spaces as well as different kinds of abstractions can be represented simultaneously using the presented methodology." }, { "figure_ref": [ "fig_9" ], "heading": "Different kinds of Abstractions", "publication_ref": [], "table_ref": [], "text": "In general, there will be more than one possible abstraction of an object in the world. Abstraction can be performed in many different ways. An example of two different abstractions of the same case has already been shown in the example in Figure 5. In this example, two different abstractions (see the abstract cases Ca_1 and Ca_2) have been derived from the same concrete case. Our abstraction methodology is able to cope with different abstractions in case they are specified by the user. Assume we are given one concrete domain D_c and two different abstract domains D_a1 and D_a2, each of which represents a different kind of abstraction. Furthermore, assume that both abstract domains do not share the same symbols. We can always define a single abstract domain D_a by joining the individual abstract domains, which then includes both kinds of abstractions (see Figure 8 (a)). 
This property is formally captured in the following simple lemma.\nLemma 6 (Joining different abstractions) If a concrete domain D_c and two disjoint abstract domains D_a1 and D_a2 are given, then a joint abstract domain D_a = D_a1 ∪ D_a2 can be defined as follows: Let D_a1 = ⟨L_a1, E_a1, O_a1, R_a1⟩ and let D_a2 = ⟨L_a2, E_a2, O_a2, R_a2⟩. Then D_a = D_a1 ∪ D_a2 = ⟨L_a1 ∪ L_a2, E_a1 ∪ E_a2, O_a1 ∪ O_a2, R_a1 ∪ R_a2⟩. The joint abstract domain D_a fulfills the following property: if C_a is an abstraction of C_c with respect to (D_c, D_a1) or with respect to (D_c, D_a2), then C_a is also an abstraction of C_c with respect to (D_c, D_a)." }, { "figure_ref": [ "fig_9" ], "heading": "Hierarchy of Abstraction Spaces", "publication_ref": [ "b72", "b29" ], "table_ref": [], "text": "Most work on hierarchical problem solving assumes a multi-level hierarchy of abstraction spaces for problem solving (e.g., Sacerdoti, 1974; Knoblock, 1989). Even if the presented approach contains only two domain descriptions, a hierarchy of abstract domains can simply be mapped onto the presented two-level model as shown in Figure 8 (b). Assume that a hierarchy of disjoint domain descriptions (D_0, ..., D_l) is given. In particular, the domain D_{i+1} is assumed to be more abstract than the domain D_i. In such a multi-level hierarchy of abstraction spaces, a case C_l at the abstraction level D_l is an abstraction of a case C_0 if there exists a sequence of cases (C_1, ..., C_{l-1}) such that C_i is out of the domain D_i and C_{i+1} is an abstraction of C_i with respect to (D_i, D_{i+1}) for all i ∈ {0, ..., l-1}. Such a multi-level hierarchy of domain descriptions can always be reduced to a two-level description. The abstract domain of this two-level description contains the union of all the levels from the multi-level hierarchy. This property can be captured formally in a lemma analogous to Lemma 6. Since we have shown that different kinds of abstractions as well as hierarchies of abstraction spaces can be directly represented within our two-level case abstraction methodology, we can restrict ourselves to exactly these two levels in the following." }, { "figure_ref": [], "heading": "Computing Case Abstractions", "publication_ref": [ "b91", "b50" ], "table_ref": [], "text": "We now present the Pabs algorithm (Bergmann, 1992c; Wilke, 1993) for automatically learning a set of abstract cases from a given concrete case. Thereby, we assume that a concrete domain D_c and an abstract domain D_a are given together with a generic abstraction theory A. We use the functional notation C_a ∈ PABS(⟨D_c, D_a, A⟩, C_c) to denote that C_a is an element of the set of abstract cases returned by the Pabs algorithm.\nThe algorithm consists of the four separate phases introduced in Section 3. In the following, we will present these phases in more detail.\nIn the first three phases, we require a procedure for determining whether a conjunctive formula is a consequence of a set of Horn clauses. For this purpose, we use an SLD-refutation procedure (Lloyd, 1984) which is given a set of Horn clauses (a logic program) C together with a conjunctive formula G (a goal clause). The refutation procedure determines a set Θ of answer substitutions such that C ⊢ Gθ holds for all θ ∈ Θ. We write Θ = SLD(C, G). This SLD-refutation procedure performs a kind of backward chaining and works as follows. 
It selects a literal from the goal clause G (i.e., the leftmost literal) and searches for a Horn clause in the logic program C that contains a literal in its head that unifies with the selected goal literal. The selected literal is removed from G and the body (if not empty) of the applied clause is added at the beginning of the goal clause. Then the most general unifier of the goal literal and the head of the clause is applied to the whole new goal clause. The resulting goal clause is called the resolvent. This process continues until the goal clause becomes empty or until no more resolvents can be built. In the former case, the goal has been proven and an answer substitution is computed by composing the substitutions used during resolution. Backtracking is used to look for possible other selections of applicable Horn rules to determine alternative answer substitutions. The set of all answer substitutions is returned as the set Θ. If the whole space of possible applications of the available Horn rules has been searched unsuccessfully, the goal clause is not a consequence of the logic program C and the SLD-refutation procedure terminates without an answer substitution (Θ = ∅). This must not be confused with the situation in which an empty substitution is returned (Θ = {∅}), if no variables occur in G. In phase-III of the Pabs algorithm, we will also require the derivation trees in addition to the answer substitutions. Then we write Θ = SLD(C, G) and assume that Θ is a set of pairs (θ, π), where θ is an answer substitution and π is a derivation of C ⊢ Gθ.\nIn order to assure the termination of the SLD-refutation procedure, we have to require that the abstract domain and the generic abstraction theory are designed according to the following principles: For each abstract state s_a ∈ S_a and each abstract operator o_a ∈ O_a, where o_a is described by ⟨Pre_o_a, Add_o_a, Del_o_a⟩, SLD(s_a ∪ R_a, Pre_o_a) must lead to a finite set of ground substitutions of all variables which occur in Pre_o_a. For each state s_c ∈ S_c and each abstract essential sentence E ∈ E_a, SLD(s_c ∪ R_c ∪ A, E) must lead to a finite set of ground substitutions of all variables which occur in E.\nIn the following, the four phases of the Pabs algorithm are explained in detail." }, { "figure_ref": [], "heading": "Phase-I: Computing the Concrete State Sequence", "publication_ref": [ "b55", "b57", "b89" ], "table_ref": [], "text": "As input to the case abstraction algorithm, we assume a concrete case C_c = ⟨⟨s^c_I, s^c_G⟩, (o^c_1, ..., o^c_n)⟩. Note that (o^c_1, ..., o^c_n) is a totally ordered sequence of instantiated operators similar to the plans in Prodigy (Minton, 1988; Minton et al., 1989; Veloso & Carbonell, 1993). In the first phase, the state sequence which results from the simulation of the problem solution is computed as follows:\nAlgorithm 1 (Phase-I: Computing the concrete state sequence)\ns^c_0 := s^c_I\nfor i := 1 to n do\nif SLD(s^c_{i-1} ∪ R_c, Pre_{o^c_i}) = ∅ then STOP \"Failure: Operator not applicable\"\ns^c_i := (s^c_{i-1} \\ Del_{o^c_i}) ∪ Add_{o^c_i}\nend\nif s^c_G ⊈ s^c_n then STOP \"Failure: Goal state not reached\"\nBy this algorithm, the states s^c_i are computed such that s^c_{i-1} -o^c_i-> s^c_i holds for all i ∈ {1, ..., n}. If a failure occurs, the given plan is not valid, i.e., it does not solve the given problem."
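A compact rendering of this phase in code might look as follows; it is a simplified sketch in which operators are assumed to be ground already, so the SLD precondition proof reduces to a subset test (the names and the data layout are our own):

```python
# Simplified sketch of phase-I: simulate a totally ordered plan and
# return the state sequence s_0, ..., s_n, failing exactly as above.
def simulate(initial, goal, plan):
    states = [frozenset(initial)]
    for pre, add, delete in plan:          # each operator as three sets
        if not frozenset(pre) <= states[-1]:
            raise ValueError("Failure: Operator not applicable")
        states.append((states[-1] - frozenset(delete)) | frozenset(add))
    if not frozenset(goal) <= states[-1]:
        raise ValueError("Failure: Goal state not reached")
    return states
```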
}, { "figure_ref": [], "heading": "Phase-II: Deriving Abstract Essential Sentences", "publication_ref": [], "table_ref": [], "text": "Using the derived concrete state sequence as input, the following algorithm computes a sequence of abstract state descriptions $(s^a_i)$ by applying the generic abstraction theory separately to each concrete state.\nAlgorithm 2 (Phase-II: State abstraction)\nfor $i := 0$ to $n$ do\n  $s^a_i := \emptyset$\n  for each $E \in E^a$ do\n    $\Theta := \mathrm{SLD}(s^c_i \cup R^c \cup A, E)$\n    for each $\theta \in \Theta$ do $s^a_i := s^a_i \cup \{E\theta\}$ end\n  end\nend\nPlease note that we have required the domain theories to be designed in a way that $\Theta$ is finite and only contains ground substitutions of all variables in $E$. Therefore, every description $s^a_i$ consists only of ground atoms and is consequently a valid abstract state description. Within the introduced model of case abstraction we have now computed a superset for the outcome of the possible state abstraction mappings. Each deductively justified state abstraction mapping $\alpha$ is restricted by $\alpha(s^c_i) \subseteq s^a_i = \{e \in S^a \mid s^c_i \cup R^c \cup A \vdash e\}$ for all $i \in \{1, \ldots, n\}$. Consequently, we have determined all abstract sentences that an abstract case might require."
}, { "figure_ref": [], "heading": "Phase-III: Computing Possible Abstract State Transitions", "publication_ref": [], "table_ref": [], "text": "In the next phase of the algorithm, we search for instantiated abstract operators which can transform an abstract state $\tilde{s}^a_i \subseteq s^a_i$ into a subsequent abstract state $\tilde{s}^a_j \subseteq s^a_j$ $(i < j)$. Therefore, the preconditions of the instantiated operator must at least be fulfilled in the state $\tilde{s}^a_i$ and consequently also in $s^a_i$. Furthermore, all added effects of the operator must be true in $\tilde{s}^a_j$ and consequently also in $s^a_j$.\nAlgorithm 3 (Phase-III: Computing possible abstract state transitions)\n$G := \emptyset$\nfor each pair $(i, j)$ with $0 \le i < j \le n$ do\n  for each operator $o(x_1, \ldots, x_u) \in O^a$ do\n    $\Phi := \mathrm{SLD}(s^a_i \cup R^a, Pre_o)$\n    for each $(\theta, \delta) \in \Phi$ do\n      $Add'_o := Add_o\theta$; $M := \{\emptyset\}$\n      for each $a \in Add'_o$ do\n        $M' := \emptyset$\n        for each $\sigma \in M$ do\n          for each $e \in s^a_j$ do\n            if there is a substitution $\tau$ such that $a\sigma\tau = e$ then $M' := M' \cup \{\sigma\tau\}$\n          end\n        end\n        $M := M'$\n      end\n      (* Now, M contains the set of all possible substitutions *)\n      (* such that all added sentences are contained in $s^a_j$ *)\n      for each $\sigma \in M$ do $G := G \cup \{\langle i, j, o(x_1, \ldots, x_u)\theta\sigma, \delta \rangle\}$ end\n    end\n  end\nend\nThe set of all possible operator transitions is collected as directed edges of a graph whose vertices represent the abstract states. In the algorithm, the set $G$ of edges of the acyclic directed graph is constructed. For each pair of states $(s^a_i, s^a_j)$ with $i < j$ it is checked whether there exists an operator $o(x_1, \ldots, x_u)$ which is applicable in $s^a_i$. For this purpose, the SLD-refutation procedure computes the set of all possible answer substitutions such that the precondition of the operator is fulfilled in $s^a_i$. The derivation $\delta$ which belongs to each answer substitution is stored together with the operator in the graph since it is required for the next phase of case abstraction. This derivation is an \"and-tree\" where each inner node reflects the resolution of a goal literal with the head of a clause and each leaf node represents the resolution with a fact. Note that for proving the precondition of an abstract operator the inner nodes of the tree always refer to clauses of the Horn rule set $R^a$, while the leaf nodes represent facts stated in $R^a$ or essential sentences contained in $s^a_i$. Then each answer substitution is applied to the add-list of the operator, leading to a partially instantiated add-list $Add'_o$. Note that there can still be variables in $Add'_o$ because the operator may contain variables which are not contained in its precondition but may occur in the add-list.
Therefore, the set $M$ of all possible substitutions is incrementally constructed such that $a\sigma \in s^a_j$ holds for all $a \in Add'_o$. The completely instantiated operator derived thereby is finally included as a directed edge (from $i$ to $j$) in the graph $G$.\nBy this algorithm it is guaranteed that each (instantiated) operator which leads from $s^a_i$ to $s^a_j$ is applicable in $s^a_i$ and that all essential sentences added by this operator are contained in $s^a_j$. Furthermore, if the applied SLD-refutation procedure is complete (it always finds all answer substitutions), then every instantiated operator which is applicable in $s^a_i$ such that all essential sentences added by this operator are contained in $s^a_j$ is also contained in the graph. From this it follows immediately that if $\alpha(s^c_{\beta(i-1)}) \xrightarrow{o^a_i} \alpha(s^c_{\beta(i)})$ holds for an arbitrary deductively justified state abstraction mapping $\alpha$ and a sequence abstraction mapping $\beta$, then $\langle \beta(i-1), \beta(i), o^a_i, \delta \rangle \in G$ also holds."
}, { "figure_ref": [], "heading": "Phase-IV: Determining Sound Paths", "publication_ref": [], "table_ref": [], "text": "Based on the state abstractions $s^a_i$ derived in phase-II and on the graph $G$ computed in the previous phase, phase-IV selects a set of sound paths from the initial abstract state to the final abstract state. A set $\Lambda$ of significant abstract sentences and a sequence abstraction mapping $\beta$ are also determined during the construction of each path.\nAlgorithm 4 (Phase-IV: Searching sound paths)¹²\n$Paths := \{\langle (), \emptyset, (\beta(0) = 0) \rangle\}$\nwhile there exists $\langle (o^a_1, \ldots, o^a_k), \Lambda, \beta \rangle \in Paths$ with $\beta(k) < n$ do\n  $Paths := Paths \setminus \langle (o^a_1, \ldots, o^a_k), \Lambda, \beta \rangle$\n  for each $\langle i, j, o^a, \delta \rangle \in G$ with $i = \beta(k)$ do\n    let $E$ be the set of essential sentences contained in the derivation $\delta$\n    let $\Lambda' = \Lambda \cup E \cup Add_{o^a}$\n    if for all $\nu \in \{1, \ldots, k\}$ holds: $(s^a_{\beta(\nu-1)} \cap \Lambda') \xrightarrow{o^a_\nu} (s^a_{\beta(\nu)} \cap \Lambda')$ and $(s^a_{\beta(k)} \cap \Lambda') \xrightarrow{o^a} (s^a_j \cap \Lambda')$\n    then $Paths := Paths \cup \{\langle (o^a_1, \ldots, o^a_k, o^a), \Lambda', \beta \cup \{\beta(k+1) = j\} \rangle\}$\n  end\nend\n$Cases_{Abs} := \emptyset$\nfor each $\langle (o^a_1, \ldots, o^a_k), \Lambda, \beta \rangle \in Paths$ with $\beta(k) = n$ do\n  $Cases_{Abs} := Cases_{Abs} \cup \{\langle \langle s^a_0 \cap \Lambda, s^a_n \cap \Lambda \rangle, (o^a_1, \ldots, o^a_k) \rangle\}$\nend\nreturn $Cases_{Abs}$\n¹² Please note that $\langle (o^a_1, \ldots, o^a_k), \Lambda, \beta \rangle$ matches $\{\langle (), \emptyset, (\beta(0) = 0) \rangle\}$ with $k = 0$. The operator $\setminus$ denotes set difference.\nWhile the construction of the sequence abstraction mapping $\beta$ is obvious, the set $\Lambda$ represents the image of a state abstraction mapping and thereby determines the set of sentences that have to be reached in order to assure the applicability of the constructed operator sequence. Note that from $\Lambda$ the state abstraction mapping can be directly determined as follows: $\alpha(s^c_i) = \{e \in \Lambda \mid s^c_i \cup R^c \cup A \vdash e\}$. The idea of the algorithm is to start with an empty path. A path is extended by an operator from $G$ in each iteration of the algorithm until the path leads to the final state with the index $n$. New essential sentences may occur in the proof of the precondition or as added effects of this new operator, extending $\Lambda$ to $\Lambda'$. The path constructed so far must still be consistent with respect to the extended state description and, in addition, the new operator must transform the sentences of $\Lambda'$ correctly.\nAs a result, phase-IV returns all cases that are abstractions of the given concrete input case with respect to the concrete and abstract domain definitions and the generic abstraction theory. Depending on the domain theory, more than a single abstract case can be learned from a single concrete case, as already shown in Figure 5."
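The interplay of Phases III and IV can be summarized in a short Python sketch. It keeps the ground-atom simplification and reuses the `Operator` type from the Phase-I sketch, and it omits answer substitutions, derivations, and the Λ-based soundness re-check, so it illustrates the control structure rather than the full algorithm.

```python
from itertools import combinations
from typing import List, Set, Tuple

def abstract_transitions(abs_states: List[Set],
                         abs_ops: List[Operator]) -> List[Tuple[int, int, Operator]]:
    """Phase-III (simplified): collect every edge (i, j, op) such that op's
    precondition holds in abs_states[i] and all added atoms lie in abs_states[j]."""
    edges = []
    for i, j in combinations(range(len(abs_states)), 2):   # all pairs i < j
        for op in abs_ops:
            if op.pre <= abs_states[i] and op.add <= abs_states[j]:
                edges.append((i, j, op))
    return edges

def sound_paths(n: int, edges) -> List[List[Operator]]:
    """Phase-IV (simplified): enumerate all operator paths through the DAG
    from state index 0 to the final state index n."""
    paths = []
    def extend(index: int, path: List[Operator]) -> None:
        if index == n:
            paths.append(path)
            return
        for i, j, op in edges:
            if i == index:
                extend(j, path + [op])
    extend(0, [])
    return paths
```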
}, { "figure_ref": [], "heading": "Correctness and Completeness of the PABS Algorithm", "publication_ref": [], "table_ref": [], "text": "Finally, we want to state again the strong connection between the formal model of case abstraction and the presented algorithm. The algorithm terminates if the domain descriptions and the generic abstraction theory are formulated as required at the beginning of this section, so that the SLD-resolution procedure always terminates. The algorithm is correct, that is, every abstract case computed by the Pabs algorithm is a case abstraction according to the introduced model. If the SLD-refutation procedure applied in Pabs is complete, every case which is an abstraction according to Definition 5 is returned by Pabs. This property is captured in the following theorem."
}, { "figure_ref": [], "heading": "Complexity of the Algorithm", "publication_ref": [], "table_ref": [], "text": "The complexity of the algorithm is mainly determined by phases III and IV. The worst-case complexity of phase-III is $O(n^2 \cdot C_1 \cdot C_2)$, where $n$ is the length of the concrete plan and $C_1$ and $C_2$ depend on the domain theories as follows: $C_1 = |O^a| \cdot |\Theta|$ and $C_2 = |Add_{O^a}| \cdot (|E^a| \cdot |\Theta|)^{|Add_{O^a}|}$. Thereby, $|O^a|$ represents the number of abstract operators, $|\Theta|$ is the maximum number of substitutions found by the SLD-refutation procedure, $|Add_{O^a}|$ is the maximum number of added sentences in an abstract operator, and $|E^a|$ is the number of abstract essential sentences. The complexity of phase-IV can be determined as $O(n \cdot 2^{(n-1)} \cdot C_1)$. If we assume constant domain theories, the overall complexity of the Pabs algorithm can be summarized as $O(n \cdot 2^{(n-1)})$. The exponential factor comes from the possibly exponential number of paths in a directed acyclic graph with $n$ nodes in which every state is connected to every successor state. Whether a graph of this kind appears depends very much on the abstract domain theory, because it determines which transitions between abstract states are possible. This exponential nature did not lead to a time complexity problem in the domains we have used. Additionally, we want to make clear that this computational effort must be spent during learning and not during problem solving. If the time required for learning is very long, the learning phase can be executed off-line.\nThe space complexity of the algorithm is mainly determined by phase-III because all derivations of the proofs of the abstract operators' preconditions must be stored. This can sum up to $n^2 \cdot C_1 \cdot C_2$ derivations in the worst case. This did not turn out to be a problem in the domains we used because each derivation was very short (in most cases no more than 3 inferences with static Horn rules). The reason for this is that the derivations relate to abstract operators, which are likely to have fewer preconditions than the concrete operators."
}, { "figure_ref": [], "heading": "Refinement of Abstract Cases", "publication_ref": [], "table_ref": [], "text": "In the previous section we have described how abstract cases can be automatically learned from concrete cases. Now we assume a case base which contains a set of abstract cases. We want to show how these abstract cases can be used to solve problems at the concrete level. Furthermore, we discuss the impact of the specific form of the abstract problem solving domain on the improvement in problem solving that can be achieved."
}, { "figure_ref": [], "heading": "Applicability and Refinability of Abstract Cases", "publication_ref": [ "b81" ], "table_ref": [], "text": "For a given abstract case and a concrete problem description, the question arises in which situations the abstract case can be refined to solve the concrete problem. For this kind of refinability an a-posteriori definition can easily be given as follows.\nDefinition 9 (Refinability of an abstract case) An abstract case $C^a$ can be refined to solve a concrete problem $p$ if there exists a solution $o^c$ to $p$ such that $C^a$ is an abstraction of $\langle p, o^c \rangle$.\nObviously, the refinability property is undecidable in general, since otherwise planning itself would be decidable. However, we can define the applicability of an abstract case as a decidable necessary condition for refinability as follows.\nDefinition 10 (Applicability of an abstract case) An abstract case $C^a = \langle \langle s^a_0, s^a_m \rangle, (o^a_1, \ldots, o^a_m) \rangle$ can be applied to solve a concrete problem $p = \langle s^c_I, s^c_G \rangle$ if there exists a state abstraction mapping $\alpha$ such that $s^a_i \in Im(\alpha)$ for all $i \in \{0, \ldots, m\}$ and $\alpha(s^c_I) = s^a_0$ and $\alpha(s^c_G) = s^a_m$. Thereby, $Im(\alpha)$ denotes the image of the state abstraction mapping $\alpha$, i.e., all abstract states that can be reached.\nFor an applicable abstract case, it is at least guaranteed that the concrete initial and goal states map to the abstract ones and that concrete intermediate states exist that can be abstracted as required by the abstract case.\nEven if applicability is a necessary precondition for refinability, it does not formally guarantee refinability, since the downward solution property (Tenenberg, 1988), which states that every abstract solution can be refined, is too strong a requirement to hold in general for our abstraction methodology. However, it is indeed guaranteed that each abstract case contained in the case base is already an abstraction of one or more previous concrete cases, due to the correctness of the Pabs algorithm used for learning. If one of the problems contained in these concrete cases has to be solved again, it is guaranteed that the learned abstract case can be refined to solve the problem. Consequently, each abstract case in the case base can at least be refined to solve one problem that has occurred in the past.\nAbstract solutions which are useless because they can never be refined to solve any concrete problem will never be in the case base and are consequently never tried in solving a problem. Therefore, we expect that each abstract case from the case base has a high chance of also being refinable for new similar problems to which it is applicable."
}, { "figure_ref": [], "heading": "Selecting an Applicable Abstract Case", "publication_ref": [], "table_ref": [], "text": "To decide whether an abstract case can be applied to solve a concrete problem $p$, we have to determine a suitable state abstraction mapping. Because we assume deductively justified state abstraction mappings, the required state abstraction mapping can always be induced by the set $\Lambda = \bigcup_{i=0}^{m} s^a_i$ as shown in Section 5.1. Consequently, $C^a$ is applicable to the problem $p = \langle s^c_I, s^c_G \rangle$ if and only if $s^a_0 = \{\lambda \in \Lambda \mid s^c_I \cup R^c \cup A \vdash \lambda\}$ and $s^a_m = \{\lambda \in \Lambda \mid s^c_G \cup R^c \cup A \vdash \lambda\}$. Since every abstract case we use for solving a new problem has been learned from another concrete case, it is known that for each abstract state $s^a_i$ there must be at least one concrete state (from that previous concrete case) that can be abstracted via $\alpha$ to $s^a_i$. Consequently, $s^a_i \in Im(\alpha)$ holds.
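This characterization translates directly into code. The sketch below stays with the ground-atom simplification used earlier: the user-supplied `abstract_state` function stands in for deriving all abstract sentences entailed by a concrete state, and restricting its result to Λ plays the role of the deductively justified mapping.

```python
from typing import Callable, List, Set

def is_applicable(abs_states: List[Set],          # s^a_0 ... s^a_m of the case
                  init_c: Set, goal_c: Set,
                  abstract_state: Callable[[Set], Set]) -> bool:
    """Decidable applicability test: the abstracted initial and goal states,
    restricted to the inducing set Λ, must equal the case's end states."""
    lam = set().union(*abs_states)                # Λ = union of all s^a_i
    return (abstract_state(init_c) & lam == abs_states[0] and
            abstract_state(goal_c) & lam == abs_states[-1])
```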
Together with the introduced restrictions on the definition of $A$ and $R^c$ with respect to a complete SLD-refutation procedure (see Section 6), the applicability of an abstract case is decidable. Algorithm 5 describes the selection of an applicable abstract case for a problem $p = \langle s^c_I, s^c_G \rangle$ in more detail.\nAlgorithm 5 (Selection of an applicable abstract case)\n$s^a_I := s^a_G := \emptyset$\nfor each $E \in E^a$ do $\Theta := \mathrm{SLD}(s^c_I \cup R^c \cup A, E)$; $s^a_I := s^a_I \cup \bigcup_{\theta \in \Theta} \{E\theta\}$ end\nfor each $E \in E^a$ do $\Theta := \mathrm{SLD}(s^c_G \cup R^c \cup A, E)$; $s^a_G := s^a_G \cup \bigcup_{\theta \in \Theta} \{E\theta\}$ end\nrepeat\n  repeat\n    Select a new case $C^a = \langle \langle s^a_0, s^a_m \rangle, (o^a_1, \ldots, o^a_m) \rangle$ from the case base with $s^a_0 \subseteq s^a_I$ and $s^a_m \subseteq s^a_G$\n    if no more cases available then refine_DFID($s^c_I$, $()$, $\emptyset$, $s^c_G$); return the result of refine_DFID\n    for $i := 1$ to $m-1$ do $s^a_i := (s^a_{i-1} \setminus Del_{o^a_i}) \cup Add_{o^a_i}$ end\n    $\Lambda := \bigcup_{i=0}^{m} s^a_i$\n  until $(s^a_I \cap \Lambda) = s^a_0$ and $(s^a_G \cap \Lambda) = s^a_m$\n  refine_DFID($s^c_I$, $(s^a_1, \ldots, s^a_{m-1})$, $\Lambda$, $s^c_G$)\nuntil refine_DFID returns success($p$)\nreturn success($p$)\nAt first, the initial and final concrete states of the problem are abstracted using the generic abstraction theory. Thereby, an abstract problem description $\langle s^a_I, s^a_G \rangle$ is determined. Then, in a pre-selection step, an abstract case is chosen from the case base. All of the abstract sentences contained in the initial and final abstract states of this case must be contained in the abstracted problem description $\langle s^a_I, s^a_G \rangle$. This condition, however, does not guarantee that the selected case is applicable with respect to Definition 10. Therefore, the set $\Lambda$ of abstract sentences inducing the respective state abstraction mapping is computed and the applicability condition is checked to test whether the selected case is applicable. If the selected case is not applicable, a new case must be retrieved. If an applicable abstract case has been determined, the refinement algorithm refine_DFID (see the following section) is executed. This algorithm uses the sequence of intermediate abstract states $(s^a_1, \ldots, s^a_{m-1})$, previously determined from the abstract plan of the case, to guide the search at the concrete level. The operators contained in the abstract plan are not used anymore. The refinement procedure returns success($p$) if the refinement succeeds with the solution plan $p$. If the refinement fails (the procedure returns failure), another case is selected. If no more cases are available, the problem is solved by pure search without any guidance by an abstract plan."
}, { "figure_ref": [], "heading": "Refining an Abstract Plan", "publication_ref": [ "b41", "b43", "b51" ], "table_ref": [], "text": "The refinement of a selected abstract case starts with the concrete initial state from the problem statement. The search proceeds until a sequence of concrete operations is found which leads to a concrete state $s^c$ such that $s^a_1 = \{\lambda \in \Lambda \mid s^c \cup R^c \cup A \vdash \lambda\}$ holds. The applicability condition of the abstract case guarantees that such a state exists ($s^a_i \in Im(\alpha)$), but it is not guaranteed that the required concrete operator sequence exists too. Therefore, this search task may fail, which causes the whole refinement process to fail as well. If the first abstract operator can be refined successfully, a new concrete state is found. This state can then be taken as a starting state to refine the next abstract operator in the same manner. If this refinement fails, we can backtrack to the refinement of the previous operator and try to find an alternative refinement.
If the whole refinement process reaches the final abstract operator, it must directly search for an operator sequence which leads to the concrete goal state $s^c_G$. If this concrete goal state has been reached, the concatenation of the concrete partial solutions yields a complete solution to the original problem. This refinement demands a search procedure which allows an abstract goal specification. All kinds of forward-directed search such as depth-first iterative-deepening (Korf, 1985b) or best-first search (Korf, 1993) procedures can be used for this purpose because states are explicitly constructed during search. These states can then be tested to see whether they can be abstracted towards the desired goal. In Paris we use the depth-first iterative-deepening search described by Algorithm 6. This algorithm consists of two recursive procedures. The top-level procedure refine_DFID receives the concrete initial state $s^c_I$, the concrete final state $s^c_G$, the sequence of intermediate abstract states $S^a = (s^a_1, \ldots, s^a_k)$ derived from the abstract case, as well as the set $\Lambda$ which induces the state abstraction mapping. This procedure increments the maximum depth for the depth-first search procedure search_bounded up to the maximum $Deep_{Max}$. The procedure search_bounded performs the actual search. The goal for this search is either an abstract state, i.e., the first abstract state in $S^a$, or the concrete goal state $s^c_G$ if all abstract states have already been visited. The procedure performs a depth-first search by applying the available concrete operators and recursively calling the search procedure with the concrete state $s^c_{new}$ which results from the operator application.\nBackward-directed search as used in partial-order planning (McAllester & Rosenblitt, 1991) can also be applied for refinement under certain circumstances. Therefore, we would either require a state concretion function or we have to turn the rules of the generic abstraction theory $A$ into virtual concrete operators.\nA state concretion function must be able to determine a single state or a finite set of concrete states from a given abstract state together with the concrete problem description. Thereby, the concrete problem description may help to reduce the number of possible concrete states. The derived state concretions can then be used as concrete goal states from which a backward-directed search may start.\nAlternatively, we can turn the process of state concretion directly into the search procedure by representing each rule of the generic abstraction theory as a virtual abstract operator. The precondition of a rule in the generic abstraction theory becomes the precondition of the virtual operator, and the conclusion of the rule becomes a positive effect of this operator. When using the virtual concrete operators together with the operators of the concrete domain, a backward-directed planner can use the abstract state directly as a goal for search. The part of the resulting solution plan which consists only of concrete operators (and not of virtual operators) can be taken as a refinement of the abstract operator."
}, { "figure_ref": [], "heading": "Criteria for Developing an Abstract Problem Solving Domain", "publication_ref": [ "b42" ], "table_ref": [], "text": "The abstract problem solving domain and the generic abstraction theory used have an important impact on the improvement in problem solving that can be achieved. Therefore, it is desirable to have a set of criteria which state how a \"good\" abstract domain definition should look.
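Before turning to these criteria, here is a compact Python sketch of the refinement loop described above. It keeps the ground-atom simplification of the earlier sketches and reuses the `Operator` type from the Phase-I sketch; `applicable_ops` and `apply_op` are assumed helper functions, `abstract_state` again stands in for the generic abstraction theory, and a plain depth bound replaces full iterative deepening, so this illustrates the control structure of refine_DFID rather than Algorithm 6 itself.

```python
from typing import Callable, Iterator, List, Optional, Set, Tuple

def refine(state: Set, abs_states: List[Set], lam: Set, goal_c: Set,
           abstract_state: Callable[[Set], Set],
           applicable_ops: Callable[[Set], List[Operator]],
           apply_op: Callable[[Set, Operator], Set],
           bound: int = 8) -> Optional[List[Operator]]:
    """Refine the abstract steps one after another; backtracking over
    alternative refinements of earlier steps falls out of the recursion."""
    if abs_states:   # next target: a state that abstracts to abs_states[0]
        target = lambda s: abstract_state(s) & lam == abs_states[0]
    else:            # final step: reach the concrete goal state directly
        target = lambda s: goal_c <= s

    def bounded(s: Set, depth: int) -> Iterator[Tuple[Set, List[Operator]]]:
        """Depth-bounded DFS enumerating all ways to reach the target."""
        if target(s):
            yield s, []
        if depth == 0:
            return
        for op in applicable_ops(s):
            for s_end, seq in bounded(apply_op(s, op), depth - 1):
                yield s_end, [op] + seq

    for s_end, prefix in bounded(state, bound):
        if not abs_states:
            return prefix
        suffix = refine(s_end, abs_states[1:], lam, goal_c,
                        abstract_state, applicable_ops, apply_op, bound)
        if suffix is not None:
            return prefix + suffix
    return None
```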
Strong criteria allowing quantitative predictions of the resulting speedups can hardly be developed; for other hierarchical planners such criteria do not exist either. However, we can give a set of factors which determine the success of our approach. The overall problem solving time is influenced mainly by the following four factors: the independent refinability of abstract operators, the goal distance of abstract operators, the concrete scope of applicability of abstract operators, and the complexity of the generic abstraction theory.\n7.5.1 Independent Refinability of Abstract Operators\nFollowing Korf's analysis of hierarchical problem solving (Korf, 1987) introduced in Section 2, our plan refinement approach reduces the overall search space from $b^n$ to $\sum_{i=1}^{m} b^{(\beta(i) - \beta(i-1))}$. Thereby, $b$ is the average branching factor, $n$ is the length of the concrete solution, and $\beta$ is the sequence abstraction mapping used in the abstraction of the concrete case to the abstract case. As already mentioned, we cannot guarantee that an abstract plan which is applicable to a problem can really be refined. Furthermore, Korf's analysis assumes that no backtracking between the refinements of the individual abstract operators is required, which cannot be guaranteed. Some of the computational advantage of abstraction is lost in either of these two cases.\nHowever, if the abstract operators occurring in the abstract problem solving domain fulfill the strong requirement of independent refinability, then it is guaranteed that every applicable abstract case can be refined without any backtracking. An abstract operator $o^a$ is independently refinable if for each $s^c, \tilde{s}^c \in S^c$ and every state abstraction mapping $\alpha$ the following holds: if $\alpha(s^c) \xrightarrow{o^a} \alpha(\tilde{s}^c)$, then there exists a sequence of concrete operators which transforms $s^c$ into a concrete state that $\alpha$ abstracts to $\alpha(\tilde{s}^c)$. The problem with this requirement is that it seems much too hard to develop an abstract problem solving domain in which all operators fulfill it. Although we cannot expect that all operators in the abstract problem solving domain are independently refinable, a knowledge engineer developing an abstract domain should still try to define abstract operators which can be independently refined in most situations, i.e., for most $s^c, \tilde{s}^c \in S^c$ and most state abstraction mappings $\alpha$ an applicable abstract operator can be refined to a concrete operator sequence. Although this notion of mostly independent refinability is not formal, we feel that it is practically useful when developing an abstract domain definition. The more abstract operators can be refined independently in many situations, the higher the chance that an abstract plan composed of these operators is also refinable."
}, { "figure_ref": [], "heading": "Goal Distance of Abstract Operators", "publication_ref": [ "b42" ], "table_ref": [], "text": "The goal distance (cf. subgoal distance, Korf, 1987) is the maximum length of the sequence of concrete operators required to refine a particular abstract operator. The longer the goal distance, the larger is the search space required to refine the abstract operator. In particular, the complexity of the search required to refine a complete abstract plan is determined by the largest goal distance of the abstract operators that occur in the abstract plan. Hence there is good reason to keep the goal distance short. However, the goal distance negatively interacts with the next factor, namely the concrete scope of applicability of abstract operators."
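A small, purely illustrative computation shows how strongly these two factors enter Korf's bound. The numbers below are hypothetical: a branching factor of 3 and a 12-step concrete solution abstracted into 4 abstract operators with goal distance 3 each, assuming independent refinability (no backtracking between refinements).

```python
b, n = 3, 12                       # hypothetical branching factor, plan length
beta = [0, 3, 6, 9, 12]            # hypothetical sequence abstraction mapping
flat = b ** n                      # pure search: 3**12 = 531441 states
hier = sum(b ** (beta[i] - beta[i - 1]) for i in range(1, len(beta)))
print(flat, hier)                  # 531441 versus 4 * 3**3 = 108
```

Doubling the largest goal distance in this example (beta = [0, 6, 12]) already raises the guided search space from 108 to 2 * 3**6 = 1458, which is why the goal distance dominates the refinement cost.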
}, { "figure_ref": [], "heading": "Concrete Scope of Applicability of Abstract Operators", "publication_ref": [], "table_ref": [], "text": "The concrete scope of applicability of an abstract operator speci es how many concrete states can be abstracted to an abstract state in which the abstract operator is applicable, and how many concrete states can be abstracted to an abstract state that can be reached by an abstract operator. This scope is determined by the de nition of the abstract operator and by the generic abstraction theory which is responsible for specifying admissible state abstractions. The concrete scope of applicability of the abstract operators determines the applicability of the abstract plans that can be learned. An abstract plan which is only applicable to a few concrete problems is only of limited use in domains in which the problems to be solved vary very much. Hence, the concrete scope of applicability of abstract operators should be as large as possible. Unfortunately, according to our experience, abstract operators which have a large scope usually also have a larger goal distance and operators with a short goal distance don't have a large scope of applicability. Therefore, a compromise between these two contradicting issues must be found." }, { "figure_ref": [], "heading": "Complexity of the Generic Abstraction Theory", "publication_ref": [], "table_ref": [], "text": "The fourth factor which in uences the problem solving time is the complexity of the generic abstraction theory. This theory must be applied each time a new concrete state is created during concrete level search. The more complex the generic abstraction theory, the more time is required to compute state abstractions. Hence, the generic abstraction theory should not require complicated inferences and should avoid backtracking within the SLD-refutation procedure.\nAlthough these four factors don't allow a precise prediction of the expected problem solving behavior of the resulting system, they provide a focus on what to consider when designing an abstract problem solving domain and related generic abstraction theory." }, { "figure_ref": [], "heading": "An Example Domain: Process Planning in Mechanical Engineering", "publication_ref": [ "b78", "b33" ], "table_ref": [], "text": "The Paris approach has been successfully tested with toy-domains such as the familiar towers of Hanoi (Simon, 1975). For these domains, hierarchical problem solvers which use a dropping sentence approach have also proven very useful (Knoblock, 1994).\nThis section presents a new example domain we have selected from the eld of process planning in mechanical engineering and which really requires a stronger abstraction approach. 13 We have selected the goal of generating a process plan for the production of a rotary-symmetric workpiece on a lathe. The problem description, which may be derived from a CAD-drawing, contains the complete speci cation (especially the geometry) of the desired workpiece (goal state) together with a speci cation of the piece of raw material (called mold) it has to be produced from (initial state).\nThe left side of Figure 9 shows an example of a rotary-symmetric workpiece which has to be manufactured out of a cylindrical mold. 14 Rotary parts are manufactured by putting the mold into the xture (chuck) of a lathe. The chucking xture, together with the attached mold, is then rotated with the longitudinal axis of the mold as rotation center. 
As the mold is rotated a cutting tool moves along some contour and thereby removes certain parts of the mold until the desired goal workpiece is produced. Within this process it is very hard to determine the sequence in which the speci c parts of the workpiece have to be removed and the cutting tools to be used. When a workpiece is chucked a certain area of the workpiece is covered by the chucking tool and cannot be processed by a cutting tool. Moreover, a workpiece can only be chucked if the area which is used for chucking is plain. Otherwise the xation would not be su ciently stable. Hence, many workpieces are usually processed by rst chucking the workpiece on one side and processing the accessible area. Then the workpiece is chucked at the opposite side and the area that was previously covered can be processed. Processing the example workpiece shown in Figure 9 requires that the workpiece is rst chucked at the left side while the right side is processed. Then the processed right side can be used to chuck the workpiece because the area is plain and allows stable xing. Hence, the left side of the workpiece including the small groove can be processed. Now we explain the representation of this domain in more detail. The complete de nition of the domain can be found in Online Appendix 1. Several simpli cations of the real domain were required in order to obtain a domain de nition that could be e ciently handled in a large set of experiments. One restriction is that we can only represent workpieces with right-angled contour elements. For example, a conical contour cannot be represented. Many di erent cutting and chucking tools are available in real-life process planning. We (1,1) Figure 9: An example workpieces with grid representation have restricted ourselves to a single chucking tool and three di erent cutting tools. The speci cation of the themselves have also been simpli ed. For example, the rotation speed of workpiece and the feed of the cutting tool are also parameters that can play a role when processing a workpiece. The impact of these parameters has also been neglected. Despite these simpli cations the remaining part of this real-world domain is not trivial and represents a substantial subset of the most critical problems in this domain." }, { "figure_ref": [], "heading": "An Example Workpiece", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Concrete Domain", "publication_ref": [], "table_ref": [], "text": "We now explain the concrete problem solving domain by giving a detailed description of the states and the operators." }, { "figure_ref": [], "heading": "State Description", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "For the representation of this domain at the concrete level, the exact geometry of the workpiece must be represented as a state, including the speci c measures of each detail of the contour. However, the complete workpiece can always be divided into atomic areas which are always processed as a whole. Therefore the state representation is organized by using a grid which divides the entire workpiece into several disjoint rectangular areas of di erent sizes (see the right side of Figure 9). Together with a grid coordinate the speci c position and size of the corresponding rectangular area are represented. This grid is used as a static part of the state description which does not change during planning. However di erent problems require di erent grids. 
The speci c shape of a workpiece during planning is represented by specifying the status for each grid rectangle. In Table 1 the predicates used to represent the workpiece are described in more detail.\nBesides the description of the workpiece, the state representation also contains information about how the workpiece is chucked and which kind of cutting tool is currently used. Table 2 describes the predicates which are used for this purpose." }, { "figure_ref": [], "heading": "Predicate Description xpos max ypos max", "publication_ref": [], "table_ref": [], "text": "The predicates xpos max(x grid ) and ypos max(y grid ) specify the size of the grid in the direction of the x-coordinate and the y-coordinate respectively. A state consists of exactly one instance of each of these predicates, e.g., xpos max(4) and ypos max(5) in the example shown in Figure 9.\ngrid xpos grid ypos\nThe predicates grid xpos(x grid ; x start ; x size ) and grid ypos(y grid ; y start ; y size ) specify the geometrical position and size of grid areas in the direction of the x-coordinate and y-coordinate respectively. The rst argument of these predicates speci es the coordinate of the grid areas, the second argument declares the geometrical starting position, and the third argument speci es the size of the grid areas. A state consists of exactly one instance of each of these predicates for each di erent x-coordinate and y-coordinate. For the example above, grid xpos(1,0,18), grid xpos(2,18,2), grid xpos (3,20,165), grid xpos(4,185,40) specify the grid in x-direction and grid ypos(1,0,8), : : :, grid ypos (5,26,8) specify the grid in y-direction.\nmat\nThe predicate mat(x grid ; y grid ; status) describes the status of a particular grid area speci ed by the coordinates (x grid ; y grid ). The argument status can be instantiated with one of the three constants raw, workpiece, or none.\nThe constant raw indicates that the speci ed area still consists of raw material which must be removed by further cutting operators. The constant workpiece speci es that the area consists of material that belongs to the goal workpiece. The constant none speci es that the area does not contain any material, i.e., there was no material present in the mold or the material has already been removed by previous cutting operations. One instance of a mat predicate is required for each grid area to specify its current state.\nWhile the previously mentioned predicates does not change during the execution of a plan, the mat predicate is changed by each cutting operator. In particular, the initial state and the goal state of a problem di ers in the status assigned to those grid areas that must become removed. For example, in the initial state of the example shown above, the sentence mat(4,2,raw) will be present while the nal state contains the sentence mat(4,2,none). A process plan to manufacture a certain workpiece consists of a sequence of operators. The total order of the operators is not a problem for this domain because the manufacturing steps are also executed sequentially on a lathe. 15 We have chosen four di erent operators" }, { "figure_ref": [], "heading": "Predicate Description chuck pos", "publication_ref": [], "table_ref": [], "text": "The predicate chuck pos(side) describes whether the workpiece is currently chucked on either side. The parameter side can be instantiated with one of the three constants none, right, or left. 
The constant none speci es that the workpiece is not chucked at all and the constants right and left specify that the workpiece is chucked at the respective side. Each state contains exactly one instance of this predicate.\ncovered\nThe predicate covered(x min ; x max ) speci es the areas of the workpiece which are currently covered by the chucking tool. This predicate declares those areas with an x-coordinate lying within the interval x min ; x max ] as being covered. Covered areas cannot be processed by a cutting tool. A state consist of exactly one instance of this predicate if the workpiece is chucked." }, { "figure_ref": [], "heading": "cut tool cut direction", "publication_ref": [], "table_ref": [], "text": "The predicates cut tool(id) and cut direction(dir) specify a unique identication (id) of the cutting tool which is currently used when an area is processed and the direction (dir) in which the cutting tool moves. The parameter id can be any symbol that speci es a legal cutting tool described by predicates included in the static rules R c of the concrete domain description. The parameter dir can be instantiated by one of the three constants left, right and center. The value left speci es that the cutting tool moves from left to right, right speci es that the cutting tool moves from right to left, and center speci es that the cutting tool move from outside towards the center of the workpiece.\nTable 2: Essential sentences for the representation of the chucking and cutting tools to represent the chucking of a workpiece, the selection of a cutting tool, and the cutting process itself. These operators are described in Table 3.\nManufacturing the workpiece shown in Figure 9 requires a 15-step plan as shown in Figure 10. At rst, the workpiece is chucked on the left side. Then a cutting tool is selected which allows cutting from right to left. With this tool the indicated grid areas are removed. Please note that the left side of the workpiece cannot be processed since it is covered by the chucking tool. Then (see the right side of Figure 10), the workpiece is unchucked and chucked on its right side. With a tool that allows processing from left to right, the upper part of the mold is removed. Finally, a speci c tool is used to manufacture the small groove." }, { "figure_ref": [], "heading": "Abstract Domain", "publication_ref": [], "table_ref": [], "text": "In this example we can see that the small groove can be considered a detail which can be processed after the basic contour of the workpiece has been established. The most important characteristic of this example is that the right part of the workpiece is processed before the left side of the workpiece. This sequence is crucial to the success of the plan. If the groove" }, { "figure_ref": [], "heading": "Operator Description chuck", "publication_ref": [], "table_ref": [], "text": "The operator chuck(side) speci es that the workpiece is chucked at the speci ed side. The side parameter can be instantiated with the constants left and right. Chucking is only allowed if the workpiece is not chucked already and if the surface used for chucking is plain. As e ect of the chucking operation, respective instances of the predicate chuck pos and covered are included in the state description.\nunchuck\nThe operator unchuck speci es that the chucking of the workpiece is removed. This operation can only be executed if the workpiece is chucked already. 
As e ect of this operation, the parameter of the predicate chuck pos is changed to none and the predicate covered is deleted.\nuse tool\nThe operator use tool(dir; id) speci es which tool is selected for the subsequent cutting operators and in which direction the cutting tool moves. The workpiece must be chucked before a tool can be chosen. The e ect of the operator is that respective instantiations of the predicates cut tool and cut direction are added to the state. The parameters of the use tool operator have the same de nition as in the respective predicates.\ncut\nThe operator cut(x grid ; y grid ) speci es that the raw material in the grid area indicated by the coordinates (x grid ; y grid ) is removed. The e ect of this operator is that the predicate mat which speci es the status of this particular area is changed from status raw to the status none. However, to apply this operator several preconditions must be ful lled. The workpiece must be chucked and the chucking tool must not cover the speci ed area and the area must be accessible by the cutting tool. Moreover, a cutting tool which allows the processing of the selected area must already have been selected. Each cutting tool imposes certain constraints on the geometrical size of the area that can be processed with it. For details, see the full description of the domain in Online Appendix 1.\nTable 3: Concrete operators would have been processed rst the workpiece could never be chucked on the left side and the processing of the right side would consequently be impossible. Domain experts told us that this situation is not speci c for the example shown. It is of general importance for many cases. This fact allows us to select parts of the problem description and the solution which can be considered as details from which we can abstract. Parts which are \\essential\" must be maintained in an abstract case. We found out that we can abstract from the detailed shape of the workpiece as long as we distinguish between the processing of the left and right side of the workpiece. Furthermore, it is important to distinguish between the rough contour of the workpiece and the small details such as grooves. We have developed Figure 10: A plan for manufacturing the workpiece an abstract domain de nition containing a new language for describing states and operators based on this abstraction idea." }, { "figure_ref": [], "heading": "State Description", "publication_ref": [], "table_ref": [], "text": "We introduce a new abstract grid which divides the workpiece into a left, a middle, and a right area to abstract from the speci c location of a concrete grid area. These areas are called complex processing areas. Each area is assigned a particular status. Furthermore, an abstract state contains the information whether a complex processing area contains small contour elements (such as grooves), but not how these grooves exactly look like. To abstract from the very detailed conditions for chucking a workpiece, an abstract state only contains an approximation of these conditions, stating that a workpiece cannot be chucked at a particular side, if this side contains small contour elements that have been already processed. The predicates used to represent an abstract state are described in more detail in Table 4." 
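To make the two representation levels tangible before the abstract operators are introduced, the following hypothetical fragment shows how a chucked, partially processed workpiece might look as ground atoms on both levels. All coordinates, sizes, and statuses are illustrative only; the complete domain definition is given in Online Appendix 1.

```python
# Concrete state fragment (Tables 1 and 2): grid geometry, material status
# per grid area, and the current chucking situation.
concrete_state = {
    ("xpos_max", 4), ("ypos_max", 5),
    ("grid_xpos", 1, 0, 18), ("grid_xpos", 2, 18, 2),   # coordinate, start, size
    ("mat", 4, 2, "raw"),                               # still to be removed
    ("mat", 2, 2, "workpiece"),                         # belongs to the goal part
    ("chuck_pos", "left"), ("covered", 0, 20),          # chucked, covered interval
}

# Abstract state (Table 4): only the status of the three complex processing
# areas plus the approximated chucking conditions survive the abstraction.
abstract_state = {
    ("abs_area_state", "left", "todo"),
    ("abs_area_state", "middle", "ready"),
    ("abs_area_state", "right", "todo"),
    ("abs_chuck_pos", "left"),
    ("abs_chuckable_wp", "right"),
}
```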
}, { "figure_ref": [], "heading": "Operators", "publication_ref": [], "table_ref": [], "text": "We consider an abstract operator which completely processes one complex area of the workpiece, an operator which only processes a complex area roughly, and an operator which processes all the small grooves of a complex area. We also consider an abstract chucking operator because the chucking has a strong impact on the overall plan. Table 5 shows the available abstract operators." }, { "figure_ref": [], "heading": "Generic Abstraction Theory", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "The generic abstraction theory de nes the sentences used to describe an abstract state (see Table 4) in terms of the sentences of the concrete state (see Tables 1 and2) by a set of Horn rules. The de nition of abstract sentence is explained in more detail in Table 6." }, { "figure_ref": [], "heading": "Predicate Description abs area state", "publication_ref": [], "table_ref": [], "text": "The predicate abs area state(area; status) describes the status of each of the three complex processing areas. The argument area speci es one of the complex processing areas left, middle, and right.\nThe argument status describes the status of the respective area.\nThe status can be either todo, rough, and ready. The status todo speci es that the area needs some processing of large contour elements, while in a rough area only some small contour elements such as grooves need to be processed. The status ready speci es that the area is completed. An abstract initial state usually contains one or more complex processing areas of the status todo, while in the abstract goal state all complex processing areas have the status ready." }, { "figure_ref": [], "heading": "abs small parts", "publication_ref": [], "table_ref": [], "text": "The predicate abs small parts(area) speci es that the complex processing area (area) contains small contour elements that need to be manufactured." }, { "figure_ref": [], "heading": "abs chuck pos", "publication_ref": [], "table_ref": [], "text": "The predicate abs chuck pos(side) describes whether the workpiece is currently chucked on either side. The parameter side can be instantiated with one of the three constants none, right, or left. This predicate has exactly the same meaning as the chuck pos predicate at the concrete level. This predicate is not abstracted at all but only renamed.\nabs chuckable wp The predicate abs chuckable wp(side) describes whether the workpiece can be be chucked at the left or right side if this side has been completely processed.\nTable 4: Essential sentences for describing an abstract state\nWe have strongly considered the factors that in uence the quality of a domain (see Section 7.5) during the development of the abstract problem solving domain and the generic abstraction theory. Although none of the de ned abstract operators is independently renable, all of them are mostly independently re nable. The preconditions of each abstract operator still contains approximations of the conditions that must be ful lled in order to assure that a concrete operator sequence exist that re nes the abstract operator. For example, the predicate abs chuckable wp(side) is an approximation of the detailed condition (a plain surface) required for chucking. The goal distance of each operator is quite di erent and strongly depends on the problem to be solved. 
While the goal distance of the set xation operators is no more than two (possibly one unchuck operator followed by a chuck operator) the goal distances of the other abstract operators are di erent. For example, the goal distance of the process ready operator depends on the number of concrete grid areas belonging" }, { "figure_ref": [], "heading": "Operator Description set xation", "publication_ref": [], "table_ref": [], "text": "The operator set xation(side) speci es that the workpiece is chucked at the speci ed side. The side parameter can be instantiated with the constants left, right and none. The constant none speci es that the chucking is removed. Compared to the concrete operator chuck the preconditions for chucking at a side have been simpli ed. The e ect of this operator is that the predicate abs chuck pos is modi ed.\nprocess rough The operator process rough(area) speci es that the complex processing area (area) is being processed completely up to the small contour elements. The parameter area can be either left, middle, or right. The precondition of this operator only requires that the workpiece is chucked at a di erent side than area. The e ect of this operator is that the predicate abs area state is modi ed." }, { "figure_ref": [], "heading": "process ne", "publication_ref": [], "table_ref": [], "text": "The operator process ne(area) speci es that all small contour elements of the complex processing area (area) are being processed. The parameter area can be either left, middle, or right. The precondition of this operator only requires that the large contour elements of this side of the workpiece are already processed and that the workpiece is chucked at a di erent side. The e ect of this operator is that the predicate abs area state is modi ed.\nprocess ready The operator process ready(area) speci es that the indicated complex area of the workpiece is being completely processed, including large and small contour elements. The e ect of this operator is that the predicate abs area state is modi ed.\nTable 5: Abstract operators to the respective abstract area and containing material that needs to be removed. The goal distance is the number of these gird areas, say c, plus the number of required use tool operations (less than or equal to c). Hence, the goal distance is between c and 2c. Because this goal distance can become very long for the more complex problems, the two operators process rough and process ne are introduced. They only cover the processing of the small and the large grid areas respectively and consequently have a smaller goal distance than the process ready operator. While the goal distance of these two operators is smaller they have a smaller concrete scope of applicability than the process ready operator. For example the process ready operator can be applied in any state in which some arbitrary areas need to be processed, but process ne can only be applied in states in which all large grid areas are already processed.\nAlthough we have only developed a simpli ed version of the whole domain of production planning in mechanical engineering for rotary symmetrical workpieces we feel that Abstract Predicate Description in terms of the predicates of the concrete domain abs area state\nThe predicate abs area state(area; status) describes the status of each of the three complex processing areas. The left processing area consists of the areas of the concrete grid which are covered, if the workpiece is chucked at the left side. 
Similarly, the right processing area consists of those concrete grid areas which are covered if the workpiece is chucked at the right side. The middle processing area consists of those areas which are never covered by any chucking tool. The status of a complex processing area is todo, if there exists a concrete large grid area which belongs to the complex processing area and which needs to be processed. A grid area is considered as large if its size in direction of the x-coordinate is larger than 3 mm. The status of a complex processing area is rough, if all large grid areas of the complex processing area are already processed and if there exists a concrete small grid area which belongs to the complex processing area and which needs to be processed. A gird area is considered as small if its size in direction of the x-coordinate is smaller or equal than 3 mm. The status of a complex processing area is ready if all concrete grid areas which belong to the complex processing area have been processed." }, { "figure_ref": [], "heading": "abs small parts", "publication_ref": [], "table_ref": [], "text": "The sentence abs small parts(area) holds if there exists a small concrete grid area (size smaller or equal than 3 mm) which belongs to the complex processing area and which needs to be processed." }, { "figure_ref": [], "heading": "abs chuck pos", "publication_ref": [ "b75", "b76" ], "table_ref": [], "text": "The sentence abs chuck pos(side) holds if and only if the concrete sentence chuck pos(side) holds.\nabs chuckable wp The predicate abs chuckable wp(side) describes whether the workpiece can still be chucked at the left or right side if this side is completely processed. This sentence holds if the part of the desired workpiece which belongs to respective side is completely plain. That is, all concrete grid areas with the status workpiece range up to the same y-coordinate.\nTable 6: Generic abstraction theory a domain expert together with a knowledge engineer will be able to de ne an abstract domain representation and a generic abstraction theory for a complete domain. In particular, model-based interactive knowledge acquisition tools like MIKADO (Schmidt, 1994;Schmidt & Zickwol , 1992) can make such a complete modeling task much more feasible. " }, { "figure_ref": [ "fig_12", "fig_1", "fig_12", "fig_1", "fig_12" ], "heading": "Abstracting and Re ning a Process Planning Case", "publication_ref": [], "table_ref": [], "text": "We now explain how the example case shown in Figure 9 can be abstracted and how this abstract case can be reused to solve a di erent planning problem. This process is demonstrated in Figure 11. The top of this gure shows the concrete planning case C 1 already presented in Figure 9. This case is abstracted by the Pabs algorithm presented in Section 6. The algorithm returns 6 di erent abstract cases 16 . One of these abstract cases is shown in the center of the gure. The abstract solution plan consists of a sequence of 6 abstract operators. The sequence of the operators in the plan is indicated by the Roman numerals. The particular abstraction is indicated between the concrete and the abstract case and denotes which sequence of concrete operators is turned into which abstract operator.\nThe learned abstract case can now be used to solve the new problem C 2 whose initial and nal concrete states are shown in the bottom of the gure. Even if the concrete workpiece looks quite di erent from the workpiece in case C 1 the abstract case can be used to solve the problem. 
The reason for this is that the new workpiece also requires that the left and right side must be processed. In particular the right side must also be processed before the left side is processed because the left side contains two small grooves which prevent the workpiece from being be chucked at that side after it is processed. However, we can see that most abstract operators (in particular the operators II, VI, and V) are re ned to completely di erent sequences of concrete operators than those from which they were abstracted.\nAs already mentioned, the abstract operators used are not independently re nable but only mostly independently re nable. Consequently, it can happen that an applicable abstract case cannot be re ned. Figure 12 shows an example of a concrete planning problem for which the abstract case shown in Figure 11 is applicable but not re nable. The reason for this is the location of the small abstract part at the left side of the workpiece. This small part consists of the concrete grid area (1,3) in which raw material must be removed. However, in this speci c situation, this small part must be removed before the large parts, the left side of the workpiece contains (the grid areas (2,3), (3,3), and (2,2)), can be removed. The reason for this is that without removing this small part, the larger parts located right of the small part cannot be accessed by any cutting tool that is able to cut the areas (2,3) and (3,3). Consequently this problem can only be solved with the plan shown on the right side of Figure 12. Unfortunately, this plan is not a re nement of the abstract plan shown in Figure 11, because this abstract plans requires that the large parts must be removed before the small parts are removed. Hence, the re nement of the operator process rough(left) fails. In this situation the problem solver must select a di erent abstract plan." }, { "figure_ref": [], "heading": "Empirical Evaluation and Results", "publication_ref": [], "table_ref": [], "text": "This section presents the results of an empirical study of Paris in the mechanical engineering domain already introduced. This evaluation was performed with the fully implemented Paris system using only the abstraction abilities of the system. The generalization component was switched-o for this purpose. We have designed experiments which allow us to judge the performance improvements caused by various abstract cases derived by Pabs. Furthermore, we have analyzed the average speed-up behavior of the system with respect to a large set of randomly selected training and test cases." }, { "figure_ref": [], "heading": "Planning Cases", "publication_ref": [ "b25", "b26" ], "table_ref": [], "text": "For this empirical evaluation 100 concrete cases have been randomly generated. Each case requires about 100-300 sentences to describe the initial or nal state, most of which are instances of the mat predicate. The length of the solution plans ranges from 6 to 18 operators. Even if the generated cases only represent simple problems compared to the problems a real domain expert needs to solve, the search space required to solve our sample problems is already quite large. This is due to the fact that the branching factor b is between 1:7 and 6:6, depending on the complexity of the problem. 
Hence, for a 18-step solution the complete search space consists of 3:7 10 15 states.\nProdigy and Alpine are highly dependent on the representation used, in particular if their strategy is restricted to dropping sentences (Holte et al., 1994(Holte et al., , 1995)). However, there might be another representation of our domain for which those hierarchical planners can improve performance but we think that our representation is quite \"natural\" for our domain.\nFrom this rst trial we can conclude that the application domain and representation we have chosen for the following experiments with Paris really require more than dropping sentences to achieve an improvement by abstraction." }, { "figure_ref": [], "heading": "Evaluating the PARIS Approach", "publication_ref": [], "table_ref": [], "text": "The rst experiment with Paris was designed to evaluate the hypotheses that in our domain there is a need (I) for changing the representation language during abstraction, and (II) for reusing abstract cases instead of generating abstract solutions from scratch. To test these hypotheses we rely on the time for solving the randomly generated problems using di erent modes of the Paris system." }, { "figure_ref": [], "heading": "Experimental Setting", "publication_ref": [], "table_ref": [], "text": "In this experiment we used the Paris system to solve the 100 problems from the randomly generated cases. Thereby the goal of abstraction is to improve the concrete-level problem solver, which performs a brute-force search with a depth-rst iterative-deepening search strategy (Korf, 1985a) as introduced in Section 7.3. The improvement is determined in terms of problem solving time required to solve a single problem. Paris is used to solve the 100 problems in three di erent modes: Pure search: The problem solver is used to solve each problem by pure search without use of any abstraction.\nHierarchical planning: In this mode Paris uses the introduced abstract domain. However, abstract cases are not recalled from a case library but they are computed automatically by search as in standard hierarchical planning, but using the new abstraction language. So, the problem solver rst tries to search for a solution to the original problem at the abstract domain and then tries to re ne this solution. During this hierarchical problem solving, backtracking between the two levels of abstraction and between each subproblem can occur. Thereby, we used hierarchical planning with the new abstraction methodology instead of dropping sentences.\nReasoning from abstract cases: In this mode we rst used Paris to learn all abstract cases which come out of the 100 concrete cases. For each problem, all abstract cases that exists according to our abstraction methodology are available when one of the problems is to be solved. During problem solving we measured the time required for solving each problem using every applicable abstract cases. Then, for each problem, three abstract cases are determined: a) the best abstract case, i.e., the case which leads to the shortest solution time, b) the worst abstract case (longest solution time) which is an abstraction of the aspired solution case, and c) the worst applicable abstract case is determined. The di erence between b) and c) relates to the di erence between applicable and re nable abstract cases introduced in Section 7.1. An abstract case selected in c) is applicable to the current problem, but might not be an abstraction of the case from which the problem is taken. 
, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Although every problem can theoretically be solved by our brute-force search procedure, the exponential nature of the search space prevents the solution of complex problems within reasonable time. Therefore, a time bound of 200 CPU seconds on a Sun Sparc-ELC computer was introduced in each of the three modes described above. If this time bound is exceeded the problem remains unsolved. Increasing this time bound would increase the number of solvable problems in each of the three modes." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b18", "b18" ], "table_ref": [ "tab_7" ], "text": "We have determined the solution time for each of the 100 problems in each of the described modes. The average solution time as well as the number of problems that could be solved within the time limit is shown in Table 7. We have determined these values for reasoning from abstract cases separately for each of the three types of abstract cases. The significance of the speedup results has been investigated using a maximally conservative sign test (Etzioni & Etzioni, 1994). Unfortunately, it turned out that the speedup of hierarchical planning over pure search was not significant. We also could not find a significant speedup of reasoning from abstract cases over pure search when always using the worst applicable abstract case (c). This was due to the large number of doubly censored data points (both problem solvers cannot solve the problem within the time limit), which were counted against the speedup hypothesis. However, the improvements over pure search by reasoning from refinable abstract cases were significant (p < 0.000001), both when using the best refinable case (a) and when using the worst refinable case (b). Furthermore, it turned out that the speedup of reasoning from refinable cases over hierarchical planning was also significant for an upper bound of the p-value of 0.001. The mentioned p-value is a standard value used in statistical hypothesis tests. It is the probability, assuming that the hypothesis does not hold, of encountering data that favors the hypothesis as much as or more than the data observed in the experiment (Etzioni & Etzioni, 1994). Therefore a result is more significant if the p-value is smaller. From this analysis we can clearly see that our two basic hypotheses are supported by the experimental data. Even if not significant, we can see a moderate improvement in the problem solving time and in the number of solved problems when using hierarchical planning with a changed representation language. Please remember that hierarchical planning by dropping conditions did not lead to any improvement at all (see Section 9.2). Obviously, changing the representation language during abstraction is required to improve problem solving in our domain, as stated in the first hypothesis (I).
Very strong support for the second hypothesis (II) can also be found in the presented data. We can see significant speedups by reasoning from abstract cases over pure search and even over hierarchical planning. Only if the worst abstract case is used for each problem to be solved is the speedup not significant, and the problem solving behavior is then slightly worse than in hierarchical planning. Please note that this situation is extremely unlikely to happen at all. With a sophisticated indexing and retrieval of abstract cases this situation can be avoided for the most part." }
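To make the statistical procedure above concrete, the following is a hedged sketch of a one-sided sign test in the spirit of the maximally conservative test of Etzioni and Etzioni (1994); the exact handling of censored runs in their paper may differ in detail, so this is an illustration rather than a reimplementation.

```python
# Sign test for "solver B is faster than solver A" over paired runs.
# Doubly censored pairs (both runs hit the time bound) are counted
# against the speedup hypothesis -- the maximally conservative choice.
from math import comb

def sign_test_p(times_a, times_b, bound):
    wins = losses = 0
    for a, b in zip(times_a, times_b):
        if a >= bound and b >= bound:
            losses += 1          # doubly censored: counts against speedup
        elif b < a:
            wins += 1
        elif a < b:
            losses += 1          # exact ties are dropped
    n = wins + losses
    # One-sided binomial tail: P(at least `wins` successes out of n | p = 1/2)
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n
```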
, { "figure_ref": [], "heading": "Evaluating the Impact of Different Training Sets", "publication_ref": [], "table_ref": [], "text": "In one respect the previous experiment is based on a very optimistic assumption. We always assume that all abstract cases required for solving a problem have been learned in advance. This situation is not a realistic scenario for an application. Usually, one set of cases is available for training the system while a different set of problems needs to be solved. So we cannot assume that good applicable abstract cases are always available to solve a new problem. Furthermore, the presented example also shows that the problem solving time can vary a lot if different abstract cases are selected during problem solving. Therefore, we have designed a new experiment to evaluate the improvements caused by the Paris approach in a more realistic scenario." }, { "figure_ref": [], "heading": "Experimental Setting", "publication_ref": [ "b18" ], "table_ref": [ "tab_8", "tab_9", "tab_7", "tab_7", "tab_9", "tab_10", "tab_11", "tab_7" ], "text": "We have randomly chosen 10 training sets of 5 cases and 10 training sets of 10 cases from the 100 available cases. These training sets are selected independently from each other. Then, each of the 20 training sets is used for a separate experiment. In each of the 20 experiments, those of the 100 cases which are not used in the particular training set are used to evaluate the performance of the resulting system. Training set and test set are completely independent by this procedure. During this problem solving task, we did not determine the problem solving behavior for all applicable abstract cases, but we used a simple automatic mechanism to retrieve one (hopefully good) applicable abstract case for a problem. Therefore, the cases are organized linearly in the case base, sorted by the length of the abstract plan contained in the case. The case base is sequentially searched from longer to shorter plans until an applicable case is found. This heuristic is based on the assumption that a longer abstract plan is more specific than a shorter abstract plan and therefore promises a stronger reduction of the remaining search." }
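The following is a minimal sketch of this linear retrieval mechanism; the predicate `is_applicable` stands for the applicability test of Section 7.1 and is assumed here rather than defined.

```python
# Sequential retrieval from longer to shorter abstract plans: return the
# most specific applicable abstract case, or None if nothing applies.

def retrieve(case_base, problem, is_applicable):
    for case in sorted(case_base, key=lambda c: len(c.plan), reverse=True):
        if is_applicable(case, problem):
            return case
    return None   # fall back to pure search at the concrete level
```

Note that with this ordering a maximally abstract case, if it is kept in the case base, is only returned when no more specific case applies; this matches the completeness requirement discussed in the comparison with Giunchiglia and Walsh's theory of abstraction below.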
, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b18" ], "table_ref": [], "text": "We have statistically evaluated the second experiment. Table 8 shows the number of abstract cases which could be learned from the different training sets. The minimum, the maximum and the average number of abstract cases that could be learned from the 10 training sets of the same size is indicated. Note that altogether 42 abstract cases can be learned if all 100 cases are used for training, as in the previous experiment. From the 10 training sets which contained 5 cases each, between 7 and 15 abstract cases could be learned. As expected, if the size of the training set is increased, more abstract cases can be learned.
Table 9 shows the average problem solving time after learning from the different sets. This table also shows the minimum, the maximum and the average problem solving time for the 10 different training sets of the two sizes. We can see that the best training sets lead to a problem solving time which is similar to or only slightly worse than the optimum shown in Table 7. Even in the average case, considerable improvements over pure search and hierarchical problem solving (compare Table 7 and Table 9) can be discovered. Similarly positive results can also be identified when looking at the percentage of solved problems, shown in Table 10. Here we can also see that for the best training sets the number of solved problems is close to the maximum that can be achieved by this approach. Even with the worst training set, considerably more problems could be solved than by pure search or hierarchical planning. Additionally, all of the above mentioned speedup results were analyzed with the maximally conservative sign test as described in (Etzioni & Etzioni, 1994). Table 11 summarizes the significance results for speeding up pure search and a hierarchical problem solver. It turned out that 19 of the 20 training sets lead to highly significant speedups (p < 0.0005) over pure search. For this hard upper bound on p-values, only about half of the training sets lead to significant differences between reasoning from abstract cases and hierarchical planning. At a slightly higher upper bound of p < 0.05, about 3/4 of the training sets caused a significantly better performance than hierarchical planning.
Altogether, the reported experiment showed that even a small number of training cases (i.e., 5% and 10% of the case base) can already lead to strong improvements in problem solving. We can see that not all abstract cases must be present, as in the first experiment, to be successful. Furthermore, this experiment has shown that even a simple retrieval mechanism (sequential search) can select beneficial abstract cases from the library. None of the training situations in the second experiment led to results as bad as the worst case shown in Table 7." }, { "figure_ref": [], "heading": "Quality of the Produced Solutions", "publication_ref": [ "b67", "b84" ], "table_ref": [], "text": "Although the main purpose of this approach is to improve the performance of a problem solver, the quality of the produced solutions is also very important for a practical system. The solution length can be used as a very simple criterion to determine the quality of a solution. However, in general the quality of a solution should reflect the execution costs of a plan, the plan's robustness, or certain user preferences (Perez & Carbonell, 1993). Because such quality measures are very difficult to assess, in particular in our manufacturing domain, we rely on this simple criterion also used for evaluating the quality of solutions in Prodigy/Analogy (Veloso, 1992)." }, { "figure_ref": [], "heading": "Experimental Setting", "publication_ref": [], "table_ref": [], "text": "We have analyzed the solutions computed in the previous set of experiments to assess the quality of the solutions produced by Paris. Therefore, the length of the solutions derived during problem solving, after learning from each of the 20 training sets, is compared to the length of the nearly optimal solutions contained in the concrete cases." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_12", "tab_12" ], "text": "For each training set the length of each solution derived in the corresponding testing phase is compared to the length of the solution noted in the concrete case. The percentage of solutions with shorter, equal, or longer solution length is determined for each training set separately, and the average over the 10 training sets of equal size is determined. Table 12 shows the result of this evaluation. It turned out that there was no big difference in the quality results between the 20 training sets. In particular, the size of the training sets did not have a strong influence on the results. In Table 12 we can see that between 72% (22% + 50%) and 74% (20% + 54%) of the solutions produced are of equal or better quality than the solutions contained in the concrete cases. Please note that the concrete cases used for testing are always different from the cases used for training. Additionally, the solutions to which we compare the results produced by Paris are already nearly optimal due to the case generation procedure: all solutions requiring fewer than 10 steps are known to be shortest solutions, and longer solutions have been manually checked and cleared of obviously redundant steps, so that they are at least acceptably short. Taking this into account, these results are already fairly good." }
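As a small editorial illustration of this bookkeeping (the function below is an assumption for exposition, not code from Paris), the per-training-set percentages can be computed as follows:

```python
# Compare produced solution lengths against the (nearly optimal) lengths
# stored in the concrete test cases of one training/testing split.

def quality_stats(produced, reference):
    n = len(produced)
    shorter = sum(p < r for p, r in zip(produced, reference))
    equal = sum(p == r for p, r in zip(produced, reference))
    return {"shorter": 100.0 * shorter / n,
            "equal": 100.0 * equal / n,
            "longer": 100.0 * (n - shorter - equal) / n}
```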
, { "figure_ref": [], "heading": "Impact of the Abstract Problem Solving Domain", "publication_ref": [], "table_ref": [], "text": "The experiments reported before were conducted with the concrete and abstract domain representation presented in Section 8 and in Online Appendix 1. In this final experiment the impact of the specific choice of an abstract problem solving domain is investigated." }, { "figure_ref": [], "heading": "Experimental Setting", "publication_ref": [], "table_ref": [], "text": "We created a new abstract problem solving domain which is less constrained than the one used before. For this purpose one operator was completely removed and certain conditions of the remaining operators were removed as well. In particular, the set_fixation operator was removed and the conditions abs_chuck_pos, abs_chuckable_wp, and chuck_comp were removed from the preconditions of the three remaining operators. Hence, the fact that the chucking of a workpiece has an impact on the production plan is now neglected at the abstract level. However, the concrete problem solving domain and the generic abstraction theory were not modified at all. Consequently, chucking still plays an important role at the concrete level. The set of experiments described in Section 9.4 was repeated with the less constrained abstract problem solving domain but using the same training and testing sets as before." }, { "figure_ref": [ "fig_3" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_13", "tab_13", "tab_7", "tab_9" ], "text": "Tables 13 and 14 summarize the results of these experiments. Table 13 shows the average problem solving time after learning from the different training sets. It turns out that for all training sets, learning improves the concrete-level problem solver, but that the speedup is much smaller than when using the original abstract problem solving domain (cf. Table 7 and Table 9). In particular, none of the resulting speedups over concrete-level problem solving were significant. A similar result can be observed when comparing the percentage of solved problems (see Table 14). There is still a slight improvement in the number of problems that could be solved after learning, but the improvement is much smaller than when using the original abstract problem solving domain (cf. Table 10). This experiment supported the general intuition that the abstract problem solving domain has a significant impact on the improvement in problem solving that can be achieved through reasoning from abstract cases. The reason why the less constrained domain leads to worse results than the original abstract domain can be explained with respect to the criteria explained in Section 7.5. Since important preconditions of the abstract operators were removed, there are many situations in which the new operators cannot be refined.
This holds particularly for those situations in which a workpiece cannot be chucked to perform the required cutting operations. The new abstract operators are not mostly independently refinable. Moreover, since the abstract operator set_fixation is removed, the concrete chuck and unchuck operators must be introduced during the refinement of the remaining abstract operators. Consequently, the goal distance of these abstract operators is increased. These two factors are the reason for the worse results when using the less constrained abstract domain theory.
Paris can be viewed as a system which learns useful axioms of the abstract system by composing several smaller elementary axioms (the operators). However, to prove a formula (the existence of a solution) in the abstract system, exactly one axiom (case) is selected. So the deductive machinery of the abstract system is restricted with respect to the ground space. Depending on the learned abstract cases, the abstractions of Paris are either theory decreasing (TD) or theory increasing (TI). If the case base of abstract cases is completely empty, then no domain axiom is available and the resulting abstractions are consequently TD. If the case base contains the maximally abstract case $\langle\langle true, true\rangle, (nop)\rangle$ (and the generic abstraction theory contains the clause $\rightarrow true$), then this case can be applied to every concrete problem and the resulting abstraction is consequently TI. Even if this maximally abstract case does not improve ground-level problem solving, it should always be included in the case base to ensure the TI property, that is, to avoid losing completeness. The case retrieval mechanism must, however, guarantee that this maximally abstract case is only chosen for refinement if no other applicable case is available. Note that this is fulfilled for the retrieval mechanism (sequential search from longer to shorter plans) we used in our experiments." }, { "figure_ref": [], "heading": "Skeletal Plans", "publication_ref": [ "b21", "b44", "b29", "b30", "b32", "b33", "b60", "b16", "b36", "b74", "b4", "b37", "b15", "b87", "b19", "b61", "b28", "b46", "b55", "b57", "b55", "b63", "b27", "b89", "b85", "b27", "b89", "b85", "b54", "b52" ], "table_ref": [], "text": "As already mentioned in Section 3.4, the Paris approach is inspired by the idea of skeletal plans (Friedland & Iwasaki, 1985). An abstract case can be seen as a skeletal plan, and our learning algorithm is a means to learn skeletal plans automatically from concrete plans. Even if the idea of skeletal plans is intuitively very appealing, to our knowledge this paper contains the first comprehensive experimental support of the usefulness of planning with skeletal plans. Since we have shown that skeletal plans can be acquired automatically, this planning method can now be applied more easily.
For the same purpose, Anderson and Farley (1988) and Kramer and Unger (1992) proposed approaches to plan abstraction which go in the same direction as the Paris algorithm. However, these approaches automatically form abstract operators by generalization, mostly based on dropping sentences. Moreover, in the abstracted plan every concrete operator is abstracted, so that the number of operators is not reduced during abstraction. Thereby this abstraction approach is less powerful than Paris-style abstractions.
10.1.3 Alpine's Ordered Monotonic Abstraction Hierarchies
Alpine (Knoblock, 1989, 1990, 1993, 1994) automatically learns hierarchies of abstraction spaces from a given domain description, or from a domain description together with a planning problem. As mentioned several times before, Alpine relies on abstraction by dropping sentences. However, this is exactly what enables Alpine to generate abstraction hierarchies automatically. For a stronger abstraction framework such as the one we follow in Paris, the automatic generation of abstraction hierarchies (or abstract domain descriptions) does not seem to be realistic due to the large (infinite) space of possible abstract spaces. To use our more powerful abstraction methodology, we feel that we have to pay the price of losing the ability to automatically construct an abstraction hierarchy.
Another point is that the specific property of the ordered monotonic abstraction hierarchies generated by Alpine allows an efficient plan refinement. During this refinement, an abstract plan can be expanded at successively lower levels by inserting operators. Furthermore, already established conditions of the plan are guaranteed not to be violated during refinement. Unfortunately, this kind of refinement cannot be performed for Paris-style abstractions. In particular, there is no direct correspondence between the abstract operators and the concrete operators. Consequently, an abstract plan cannot simply be extended to become a concrete plan. However, the main function of the abstract plan is maintained, namely that the original problem is decomposed into several smaller subproblems, which causes the main reduction in search.
10.1.4 Explanation-Based Learning, Case-Based Reasoning and Analogy
The presented Paris approach uses experience to improve problem solving, similar to several approaches from machine learning, mostly from explanation-based learning (Mitchell et al., 1986; DeJong & Mooney, 1986), case-based reasoning (Kolodner, 1980; Schank, 1982; Althoff & Wess, 1992; Kolodner, 1993), or analogical problem solving (Carbonell, 1986; Veloso & Carbonell, 1988). The basic ideas behind explanation-based learning and case-based or analogical reasoning are very much related. The common goal of these approaches is to avoid problem solving from scratch in situations which have already occurred in the past. Explanations (i.e., proofs or justifications) are constructed for successful solutions already known by the system. In explanation-based approaches, these explanations mostly cover the whole problem solving process (Fikes, Hart, & Nilsson, 1972; Mooney, 1988; Kambhampati & Kedar, 1994), but they can also relate to problem solving chunks (Rosenbloom & Laird, 1986; Laird, Rosenbloom, & Newell, 1986) of some smaller size, or even to single decisions within the problem solving process (Minton, 1988; Minton et al., 1989). Explanation-based approaches generalize the constructed explanations during learning by extensive use of the available domain knowledge and store the result in a control rule (Minton, 1988) or schema (Mooney & DeJong, 1985). In case-based reasoning systems like Priar (Kambhampati & Hendler, 1992) or Prodigy/Analogy (Veloso & Carbonell, 1993; Veloso, 1994), cases are usually not explicitly generalized in advance. They are kept fully instantiated in a case library, annotated with the created explanations. Unlike cases in Paris, which are problem-solution pairs, such cases are complete problem solving episodes containing detailed information about each decision that was taken during problem solving.
During problem solving, those cases are retrieved which contain explanations applicable to the current problem (Kambhampati & Hendler, 1992; Veloso & Carbonell, 1993; Veloso, 1994). The detailed decisions recorded in these cases are then replayed or modified to become a solution to the current problem. All these approaches use some kind of generalization of experience, but none of them uses the idea of abstraction to speed up problem solving based on experience. As already noted in (Michalski & Kodratoff, 1990; Michalski, 1994), abstraction and generalization must not be confused. While generalization transforms a description along a set-superset dimension, abstraction transforms a description along a level-of-detail dimension.
The only exception is given in (Knoblock, Minton, & Etzioni, 1991a), where Alpine's abstractions are combined with the EBL component of Prodigy. Thereby, control rules are learned which refer not only to the ground space of problem solving but also to the abstract spaces. These control rules speed up problem solving at the abstract level. However, the control rules guide the problem solver at the abstract level so that it finds solutions faster, not so that it finds refinable abstract solutions. Although we do not have any experience with this kind of integration of abstraction and explanation-based learning, we assume that the control rules generated by the EBL component will also guide the problem solver towards short abstract solutions, which in several circumstances do not cause much reduction in search." }, { "figure_ref": [], "heading": "Requirements and Limitations of PARIS", "publication_ref": [], "table_ref": [], "text": "In the following, we summarize the requirements and limitations of the Paris approach. The main requirements are the availability of a good abstract domain description and the availability of concrete cases." }, { "figure_ref": [], "heading": "Abstract Domain", "publication_ref": [ "b76", "b75" ], "table_ref": [], "text": "The most important prerequisite of this method is the availability of the required background knowledge, namely the concrete world description, the abstract world description, and the generic abstraction theory. For the construction of a planning system, the concrete world descriptions must be acquired anyway, since they specify the \"language\" of the problem description (essential sentences) and the problem solution (operators). The abstract world and the generic abstraction theory must also be acquired. We feel that this is indeed the price we have to pay to make planning more tractable in certain practical situations.
Nevertheless, the formulation of an adequate abstract domain theory is crucial to the success of the approach. If those abstract operators are missing which are required to express a useful abstract plan, no speedup can be achieved. What we need are mostly independently refinable abstract operators. If such operators exist, they can simply be represented in the abstract domain using the whole representational power. For hierarchical planning with dropping conditions, such an abstract domain must also be implicitly contained in a concrete domain in a way that the abstract domain remains if certain literals of the concrete domain are removed (see Section 2.1). We feel that this kind of modeling is much harder to achieve than modeling the abstract view of a domain explicitly in a distinct planning space as in Paris.
Additionally, the requirement that the abstract domain is given by the user has the advantage that the learned abstract cases are expressed in terms the user is familiar with. Thereby, the user can understand an abstract case very easily. This can open up the additional opportunity to involve the user in the planning process, for example in the selection of an abstract case she/he favors.
Research on knowledge acquisition has shown that human experts employ a lot of abstract knowledge to cope with the complexity of real-world planning problems. Specific knowledge acquisition tools have been developed to comfortably acquire such abstract knowledge from different sources. In particular, the acquisition of planning operators is addressed in much detail in (Schmidt & Zickwolff, 1992; Schmidt, 1994)." }, { "figure_ref": [], "heading": "Availability of Cases", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "As a second prerequisite, the Paris approach needs concrete planning cases (problem-solution pairs). In a real-world scenario such cases are usually available in a company's filing cabinet or database. According to this requirement, we share the general view from machine learning that the use of this kind of experience is the most promising way to cope with highly intractable problems. For the Paris approach the available cases must be somehow representative of future problem solving tasks. The known cases must be similar enough to the new problems that abstract cases can really be reused. Our experiments give strong indications that even a small set of concrete cases for training leads to high improvements in problem solving (see Tables 9 to 11)." }, { "figure_ref": [], "heading": "Generality of the Achieved Results", "publication_ref": [ "b86", "b68", "b12" ], "table_ref": [], "text": "The reported experiments were performed with a specific base-level problem solver which performs a depth-first iterative-deepening search strategy (Korf, 1985a). However, we strongly believe that the Paris abstractions are also beneficial for other problem solvers using backward chaining, means-end analysis, or nonlinear partial-order planning. As shown in (Veloso & Blythe, 1994), there is no single optimal planning strategy. Different planning strategies usually rely on different commitments during search. Each strategy can be useful in one domain but may be worse in others. However, for most search strategies, the length of the shortest possible solution usually determines the amount of search which is required. In Paris, the whole search problem is decomposed into several subproblems which allow short solutions. Consequently, this kind of problem decomposition should be of use for most search strategies. Moreover, we think that the idea of reasoning from abstract cases, formulated in a terminology completely different from that of the ground space, will also be useful for other kinds of problem solving such as design or model-based diagnosis. For model-based diagnosis, we have developed an approach (Pews & Wess, 1993; Bergmann, Pews, & Wilke, 1994) similar to Paris. The domain descriptions consist of a model of a technical system for which a diagnosis has to be found. It describes the behavior of each elementary and composed component of the system at different levels of abstraction. During model-based diagnosis, the behavior of the technical system is simulated and a possibly faulty component is sought which can cause the observed symptoms.
Using abstract cases, this search can be reduced and focused onto components which have already been defective (in other similar machines) and which are consequently more likely to be defective in new situations." }, { "figure_ref": [ "fig_2" ], "heading": "Future Work", "publication_ref": [ "b56" ], "table_ref": [ "tab_7" ], "text": "Future research will investigate goal-directed procedures for refinement such as backward-directed search or nonlinear partial-order planners (see Section 7.4). Additionally, more experience must be gained with additional domains and different representations of them. Furthermore, we will address the development of highly efficient retrieval algorithms for abstract cases. As Table 7 shows, the retrieval mechanism has a strong influence on the achieved speedup. Even if the linear retrieval we have presented turned out to be quite good, we expect a utility problem (Minton, 1990) to occur when the size of the case base grows. Furthermore, a good selection procedure for abstract cases should also use some feedback from the problem solver to evaluate the learned abstract cases based on the speedup they cause. This would eliminate unbeneficial cases or abstract operators from the case base or the abstract problem solving domain. Experiments with different indexing and retrieval mechanisms have recently indicated that this is possible.
Furthermore, the speedup caused by a combination of different approaches such as abstraction and explanation-based learning should be addressed. Within the Paris system an explanation-based component for case generalization is still present (see Figure 3), but it was not used for the experiments because the plain abstraction itself had to be evaluated. In further experiments, abstraction, explanation-based learning, and the integration of both have to be addressed comprehensively. This will hopefully lead to a better understanding of the different strengths these methods have.
As a more long-term research goal, Paris-like approaches should be developed and evaluated for other kinds of problem solving tasks such as configuration and design or, as already started, for model-based diagnosis." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Because $C_\lambda$ is an abstraction of $C_{\lambda-1}$ with respect to $(D_{\lambda-1}, D_\lambda)$, there also exist a state abstraction mapping $\alpha'$ and a sequence abstraction mapping $\beta'$ such that $\alpha'(s''_{\beta'(j)}) = s_j$ for all $j \in \{0, \ldots, m\}$. Now we can define a state abstraction mapping $\alpha''(s) = \alpha'(\alpha(s))$ and a sequence abstraction mapping $\beta''(j) = \beta(\beta'(j))$. It is easy to see that $\alpha''$ is a well-defined state abstraction mapping ($s \subseteq s' \Rightarrow \alpha(s) \subseteq \alpha(s') \Rightarrow \alpha'(\alpha(s)) \subseteq \alpha'(\alpha(s'))$) and that $\beta''$ is a well-defined sequence abstraction mapping ($\beta(\beta'(0)) = 0$; $\beta(\beta'(m)) = \beta(k) = n$; $u < v \Leftrightarrow \beta'(u) < \beta'(v) \Leftrightarrow \beta(\beta'(u)) < \beta(\beta'(v))$). Furthermore, it follows that $\alpha''(s'_{\beta''(j)}) = \alpha'(\alpha(s'_{\beta(\beta'(j))})) = \alpha'(s''_{\beta'(j)}) = s_j$, leading to the conclusion that $C_\lambda$ is an abstraction of $C_0$ with respect to $(D^c, D^a)$. $\Box$
Theorem 8 (Correctness and completeness of the Pabs algorithm) If a complete SLD-refutation procedure is used in the Pabs algorithm, then a case $C^a$ is an abstraction of a case $C^c$ with respect to $(D^c, D^a)$ and the generic theory $A$ if and only if $C^a \in \mathrm{PABS}(\langle D^c, D^a, A\rangle, C^c)$.
Proof: Correctness ($\Leftarrow$): If $C^a$ is returned by Pabs, then $\langle(o^a_1, \ldots, o^a_k), \beta, \sigma\rangle \in Paths$ holds in phase IV. We can define a state abstraction mapping $\alpha(s) := \{e \in \Sigma \mid R^c \cup A \cup s \vdash e\}$ which, together with the sequence abstraction mapping $\beta$, will lead to the desired conclusion.
For every operator $o^a_i$ we know, by construction of phase IV, that $\langle\beta(i-1), \beta(i), o^a_i, E\rangle \in G$ holds. By construction of phase III, we can conclude that $s^a_{\beta(i-1)} \cup R^a \vdash Pre_{o^a_i}$ holds and that consequently $E \cup R^a \vdash Pre_{o^a_i}$ also holds for the respective execution of the body of the while-loop in phase IV. Since $E \subseteq \sigma$ holds and $\vdash$ is a monotonic derivation operator, it is obvious that $\alpha(s^c_{\beta(i-1)}) \cup R^a \vdash Pre_{o^a_i}$.
Furthermore, the 'if for all' test, which is executed before the extension of the path, ensures that $(s^a_{\beta(i-1)} \cap \sigma) \stackrel{o^a_i}{\rightarrow} (s^a_{\beta(i)} \cap \sigma)$ holds. Together with the fulfillment of the precondition of the operator we have $\alpha(s^c_{\beta(i-1)}) \stackrel{o^a_i}{\rightarrow} \alpha(s^c_{\beta(i)})$.
Thus, we have shown that $C^a$ is a correct abstraction with respect to Definition 5.
Completeness ($\Rightarrow$): Assume the case $C^a = \langle\langle s^a_0, s^a_m\rangle, (o^a_1, \ldots, o^a_m)\rangle$ is an abstraction of $C^c$ based on a deductively justified state abstraction mapping. Then there exist a state abstraction mapping $\alpha$ and a sequence abstraction mapping $\beta$ such that $\alpha(s^c_{\beta(i-1)}) \stackrel{o^a_i}{\rightarrow} \alpha(s^c_{\beta(i)})$ holds for all $i \in \{1, \ldots, m\}$. Since $\alpha$ is deductively justified by $A$, it follows by construction of phase II that $\alpha(s^c_{i-1}) \subseteq s^a_{i-1}$. Since $\vdash$ is a monotonic derivation operator, the precondition of $o^a_i$ is also fulfilled in $s^a_{\beta(i-1)}$. Furthermore, the add list of the operator is fulfilled in $\alpha(s^c_{\beta(i)})$ and is consequently also fulfilled in $s^a_{\beta(i)}$. By the construction of phase III, it is now guaranteed that $\langle\beta(i-1), \beta(i), o^a_i, E\rangle \in G$. Now we would like to show that in phase IV there exists a sequence of assignments to the variable $Paths$ such that $\langle(), \beta_0, \sigma_0\rangle \in Paths$, $\langle(o^a_1), \beta_1, \sigma_1\rangle \in Paths$, ..., $\langle(o^a_1, \ldots, o^a_m), \beta_m, \sigma_m\rangle \in Paths$, where $\beta_k(\mu) = \beta(\mu)$ for $\mu \in \{0, \ldots, k\}$, $(\sigma_k \cap s^a_l) \subseteq \alpha(s^c_l)$ for $l \in \{1, \ldots, n\}$, and $\sigma_k \supseteq \bigcup_{l=1}^{k} Add_{o^a_l}$. The proof is by induction on $k$. The induction basis is obvious due to the initialization of the $Paths$ variable. Now assume that $\langle(o^a_1, \ldots, o^a_k), \beta_k, \sigma_k\rangle \in Paths$ (with $k < m$) at some state of the execution of phase IV. Since $\langle\beta(k), \beta(k+1), o^a_{k+1}, E\rangle \in G$ holds, as argued before, and $\beta(k) = \beta_k(k)$ by the induction hypothesis, an attempt is made in the body of the while-loop to extend the selected operator sequence by $o^a = o^a_{k+1}$. Additionally, we know that $E$ contains exactly those sentences which are required to prove the precondition of $o^a_{k+1}$. Note that, since the SLD-resolution procedure is assumed to be complete and $o^a_{k+1}$ is applicable in $\alpha(s^c_{\beta(k)})$, $E$ is required to prove the precondition of $o^a$ only if $E \subseteq \alpha(s^c_{\beta(k)})$. Since $\alpha$ is deductively justified, for all $e \in E$ and all $l \in \{1, \ldots, m\}$ it holds that $e \in \alpha(s^c_{\beta(l)})$ if $s^c_{\beta(l)} \cup R^c \cup A \vdash e$. By construction of the $s^a_l$, for all $e \in E$ and all $l \in \{1, \ldots, m\}$ it holds that $e \in \alpha(s^c_{\beta(l)})$ if $e \in s^a_l$. Consequently, $E \cap s^a_l \subseteq \alpha(s^c_l)$ for all $l \in \{1, \ldots, m\}$. On the other hand, we also know that $o^a_{k+1}$ leads to $\alpha(s^c_{\beta(k+1)})$. Consequently, $Add_{o^a_{k+1}} \subseteq \alpha(s^c_{\beta(k+1)})$. Following the same argumentation as above, we can conclude that $(Add_{o^a_{k+1}} \cap s^a_l) \subseteq \alpha(s^c_l)$ for all $l \in \{1, \ldots, m\}$. Consequently, for $\sigma' = \sigma_k \cup E \cup Add_{o^a_{k+1}}$ it holds that $\sigma' \cap s^a_l \subseteq \alpha(s^c_l)$. Now we can conclude that $Paths$ is extended by $o^a_{k+1}$ as follows. Since $\alpha(s^c_{\beta(\kappa-1)}) \stackrel{o^a_\kappa}{\rightarrow} \alpha(s^c_{\beta(\kappa)})$ holds, and since $Add_{o^a_\kappa} \subseteq \sigma'$ and $(\sigma' \cap s^a_{\beta(\kappa)}) \subseteq \alpha(s^c_{\beta(\kappa)})$, we can immediately conclude that $(\sigma' \cap s^a_{\beta(\kappa-1)}) \stackrel{o^a_\kappa}{\rightarrow} (\sigma' \cap s^a_{\beta(\kappa)})$. Consequently, $\langle(o^a_1, \ldots, o^a_k, o^a_{k+1}), \beta_{k+1}, \sigma_{k+1}\rangle \in Paths$ with $\sigma_{k+1} = \sigma'$, $\beta_{k+1}(\mu) = \beta_k(\mu) = \beta(\mu)$ for $\mu \in \{1, \ldots, k\}$, and $\beta_{k+1}(k+1) = \beta(k+1)$. So the induction hypothesis is fulfilled for $k+1$. Thereby, it is shown that $C^a$ is returned by Pabs. $\Box$" }
, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors want to thank Agnar Aamodt, Jaime Carbonell, Padraig Cunningham, Subbarao Kambhampati, Michael M. Richter, Manuela Veloso, as well as all members of our research group for many helpful discussions and for remarks on earlier versions of this paper. Particularly, we want to thank Padraig Cunningham for carefully proof-reading the recent version of the paper. We are also greatly indebted to the anonymous JAIR reviewers who helped to significantly improve the paper. This research was partially supported by the German \"Sonderforschungsbereich\" SFB-314 and the Commission of the European Communities (ESPRIT contract P6322, the Inreca project). The partners of Inreca are AcknoSoft (prime contractor, France), tecInno (Germany), Irish Medical Systems (Ireland), and the University of Kaiserslautern (Germany)." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b35" ], "table_ref": [], "text": "If an abstract goal state has been reached, it is removed from the list $S^a$ and the refinement continues with the next abstract state, which is then again the first one in the list. Moreover, for all $e \in s^a_1$ it holds that $SLD(s^c_I \cup R^c \cup A, e) \neq \emptyset$, and for all $e \in \Sigma \setminus s^a_1$ it holds that $SLD(s^c_I \cup R^c \cup A, e) = \emptyset$. Please note that this kind of refinement is different from the standard notion of refinement in hierarchical problem solving (Knoblock et al., 1991b). This is because there is no strong correspondence between an abstract operator and a possible concrete operator. Moreover, the justification structure of a refined abstract plan is completely different from the justification structure of the abstract plan itself because of the completely independent definition of abstract and concrete operators. Even if this is a disadvantage compared to the usual refinement procedure used in hierarchical problem solving, the main computational advantage of abstraction, caused by the decomposition of the original problem into smaller subproblems, is maintained." }, { "figure_ref": [], "heading": "Alternative Search Procedures for Refinement", "publication_ref": [ "b20" ], "table_ref": [], "text": "Besides the forward-directed search procedure currently used in Paris, backward-directed search as used in means-end analysis (Fikes & Nilsson, 1971) could be employed for the refinement of abstract cases as well." }
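The refinement loop sketched above can be summarized, purely as an editorial illustration, by the following sketch; `solve` (the base-level searcher) and `realizes` (the test whether a concrete state satisfies an abstract state) are assumed stand-ins for the machinery of Sections 5 to 7.

```python
# Refine an abstract case: the abstract plan decomposes the concrete
# problem into one search subproblem per pending abstract state in S^a.

def refine(abstract_states, state, solve, realizes):
    plan = []
    for sa in abstract_states:                 # the pending list S^a
        result = solve(state, lambda s, sa=sa: realizes(s, sa))
        if result is None:
            return None     # this abstract case is not refinable here
        subplan, state = result                # plan piece and reached state
        plan += subplan
    return plan
```

If a subproblem fails, the problem solver has to backtrack and select a different abstract case, as described in the main text.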
, { "figure_ref": [], "heading": "Evaluating Abstraction by Dropping Sentences", "publication_ref": [ "b32", "b14" ], "table_ref": [], "text": "At first we used the recent version of Alpine (Knoblock, 1993) together with Prodigy-4 (Blythe et al., 1992) to check whether abstraction by dropping sentences can improve problem solving in our domain, represented as described in Section 8. Therefore, we used only the concrete problem solving domain as domain theory for Prodigy. Unfortunately, for this representation, Alpine was not able to generate an ordered monotonic abstraction hierarchy. The reason for this is that Alpine can only distinguish a few different groups of literals, because only a few different literal names (and argument types) can be used in the problem space. For example, Alpine cannot distinguish between the different sentences which are described by the mat or the grid_xpos predicate. But this is very important for abstraction. We would like to drop those parts of the grid which represent small rectangles such as grooves. However, this would require the examination of the measures associated with a grid area (as argument) and also the relation to other surrounding grid areas. Therefore, which sentence to drop (or which criticalities to assign) cannot be decided statically by the name of the predicate or the type of the arguments. All hierarchical planners including Prodigy and Alpine share this limitation." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b72", "b81", "b82", "b95", "b30", "b26", "b54", "b23", "b52", "b81" ], "table_ref": [], "text": "In this paper we have shown in detail that in hierarchical problem solving (Sacerdoti, 1974; Tenenberg, 1988; Unruh & Rosenbloom, 1989; Yang & Tenenberg, 1990; Knoblock, 1990) the limited view of abstraction by dropping sentences, as well as the strategy by which abstract solutions are computed, leads to poor behavior in various relevant situations. This observation is supported by comprehensive artificial examples (see Sections 2.1 and 2.2) and a real-world example from the domain of mechanical engineering (see Section 8), further supported by an experiment (see Section 9.2). The recent results reported in (Holte et al., 1995) support these observations very well.
In general, abstraction is the task of transforming a problem or a solution from a concrete representation into a different abstract representation, while reducing the level of detail (Michalski & Kodratoff, 1990; Giunchiglia & Walsh, 1992; Michalski, 1994). However, in most hierarchical problem solvers, the much more limited view of abstraction by dropping sentences is shown to be the reason why efficient ways of abstracting a problem and a solution are impossible (e.g., see Section 2.1 and Figure 4). The second weakness of most hierarchical problem solvers is that they usually compute arbitrary abstract solutions and not solutions which have a high chance of being refinable at the next concrete level. Although the upward solution property (Tenenberg, 1988) guarantees that a refinable abstract solution exists, it is not guaranteed that the problem solver finds this abstract solution (e.g., see Section 2.2). Problem solvers are not even heuristically guided towards refinable abstract solutions.
With the Paris approach we present a new formal abstraction methodology for problem solving (see Section 5) which allows abstraction by changing the whole representation language from concrete to abstract. Together with this formal model, a correct and complete learning algorithm for abstracting concrete problem solving cases (see Section 6) is given. The abstract solutions determined by this procedure are useful for solving new concrete problems, because they have a high chance of being refinable.
The detailed experimental evaluation with the fully implemented Paris system in the domain of mechanical engineering strongly demonstrates that Paris can significantly improve problem solving in situations in which a hierarchical problem solver using dropping sentences fails to show an advantage (see Tables 7 to 11)." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "We now discuss the Paris approach in relation to other relevant work in the field."
}, { "figure_ref": [], "heading": "Theory of Abstraction", "publication_ref": [ "b23", "b24" ], "table_ref": [], "text": "Within Giunchiglia and Walsh's (1992) theory of abstraction, the Paris approach can be classified as follows: the formal system of the ground space $\Sigma_1$ is given by the concrete problem solving domain $D^c$, using the situation calculus (Green, 1969) for representation. The language of the abstract formal system $\Sigma_2$ is given by the language of the abstract problem solving domain $D^a$. However, the operators of $D^a$ are not turned into axioms of $\Sigma_2$. Instead, the abstract cases build the axioms of $\Sigma_2$. Moreover, the generic abstraction theory $A$ defines the abstraction mapping $f : \Sigma_1 \Rightarrow \Sigma_2$. Within this framework, we can view Paris as a system which learns useful axioms of the abstract system; this view and its consequences for completeness (theory-decreasing versus theory-increasing abstractions) were discussed above." }, { "figure_ref": [], "heading": "Appendix A. Proofs", "publication_ref": [], "table_ref": [], "text": "This section contains the proofs of the various lemmas and theorems.
Lemma 6 (Joining different abstractions) If a concrete domain $D^c$ and two disjoint abstract domains $D^{a1}$ and $D^{a2}$ are given, then a joint abstract domain $D^a = D^{a1} \cup D^{a2}$ can be defined as follows: let $D^{a1} = (L^{a1}, E^{a1}, O^{a1}, R^{a1})$ and let $D^{a2} = (L^{a2}, E^{a2}, O^{a2}, R^{a2})$. Then $D^a = D^{a1} \cup D^{a2} = (L^{a1} \cup L^{a2}, E^{a1} \cup E^{a2}, O^{a1} \cup O^{a2}, R^{a1} \cup R^{a2})$. The joint abstract domain $D^a$ fulfills the following property: if $C^a$ is an abstraction of $C^c$ with respect to $(D^c, D^{a1})$ or with respect to $(D^c, D^{a2})$, then $C^a$ is also an abstraction of $C^c$ with respect to $(D^c, D^a)$.
Proof: The proof of this lemma is quite simple. If $C^a$ is an abstraction of $C^c$ with respect to $(D^c, D^{ai})$, then there exist a state abstraction mapping and a sequence abstraction mapping as required in Definition 5. As is easy to see, the same abstraction mappings will also lead to the respective case abstraction in $(D^c, D^a)$. $\Box$
Lemma 7 (Multi-Level Hierarchy) Let $(D_0, \ldots, D_l)$ be an arbitrary multi-level hierarchy of domain descriptions. For the two-level description $(D^c, D^a)$ with $D^a = \bigcup_{\lambda=1}^{l} D_\lambda$ and $D^c = D_0$ it holds that: if $C^a$ is an abstraction of $C^c$ with respect to $(D_0, \ldots, D_l)$, then $C^a$ is also an abstraction of $C^c$ with respect to $(D^c, D^a)$.
Proof: Let $C_\lambda = \langle\langle s_0, s_m\rangle, o\rangle$ be a case in domain $D_\lambda$ (intermediate states are denoted by $s_j$), let $C_0 = \langle\langle s'_0, s'_n\rangle, o'\rangle$ be a case in domain $D_0$ (intermediate states are denoted by $s'_i$), and let $C_\lambda$ be an abstraction of the case $C_0$ with respect to $(D_0, \ldots, D_\lambda)$. Then a sequence of cases $(C_1, \ldots, C_{\lambda-1})$ exists such that $C_i$ is from the domain $D_i$ and $C_{i+1}$ is an abstraction of the case $C_i$ with respect to $(D_i, D_{i+1})$ for all $i \in \{0, \ldots, \lambda-1\}$. Now we prove by induction over $\lambda$ that $C_\lambda$ is also an abstraction of $C_0$ with respect to $(D^c, D^a)$ (see Figure 13). The basis ($\lambda = 1$) is obvious: $C_1$ is an abstraction of $C_0$ with respect to $(D_0, D_1)$ and is consequently also an abstraction with respect to $(D^c, D^a)$. Now, assume that the lemma holds for all cases up to the domain $D_{\lambda-1}$. It follows immediately that $C_{\lambda-1}$ is an abstraction of $C_0$ with respect to $(D^c, D^a)$. Let $C_{\lambda-1} = \langle\langle s''_0, s''_k\rangle, o''\rangle$ and let the intermediate states be denoted by $s''_r$. From Definition 5 it follows that a state abstraction mapping $\alpha$ and a sequence abstraction mapping $\beta$ exist such that $\alpha(s'_{\beta(r)}) = s''_r$ for all $r \in \{0, \ldots, k\}$." }
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "-8. unchuck, chuck(right) 1. chuck(left) 2.-6", "year": "" }, { "authors": "", "journal": "use_tool(center, t3)", "ref_id": "b1", "title": "", "year": null }, { "authors": "", "journal": "unchuck Solution 1. chuck", "ref_id": "b2", "title": "", "year": "" }, { "authors": "", "journal": "use_tool(right,t2)", "ref_id": "b3", "title": "", "year": "" }, { "authors": "K D Altho; S Wess", "journal": "Springer", "ref_id": "b4", "title": "Case-based reasoning and expert system development", "year": "1992" }, { "authors": "J S Anderson; A M Farly", "journal": "", "ref_id": "b5", "title": "Plan abstraction based on operator generalization", "year": "1988" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b6", "title": "", "year": "" }, { "authors": "F Bacchus; Q Yang", "journal": "Arti cial Intelligence", "ref_id": "b7", "title": "Downward re nement and e ciency of hierarchical problem solving", "year": "1994" }, { "authors": "R Bergmann", "journal": "Springer", "ref_id": "b8", "title": "Knowledge acquisition by generating skeletal plans", "year": "1992" }, { "authors": "R Bergmann", "journal": "", "ref_id": "b9", "title": "Learning abstract plans to speed up hierarchical planning", "year": "1992" }, { "authors": "R Bergmann", "journal": "", "ref_id": "b10", "title": "Learning plan abstractions", "year": "1992" }, { "authors": "R Bergmann", "journal": "", "ref_id": "b11", "title": "Integrating abstraction, explanation-based learning from multiple examples and hierarchical clustering with a performance component for planning", "year": "1993" }, { "authors": "R Bergmann; G Pews; W Wilke", "journal": "Springer", "ref_id": "b12", "title": "Explanation-based similarity: A unifying approach for integrating domain knowledge into case-based reasoning", "year": "1994" }, { "authors": "R Bergmann; W Wilke", "journal": "", "ref_id": "b13", "title": "Inkrementelles Lernen von Abstraktionshierarchien aus maschinell abstrahierten Pl anen", "year": "1994" }, { "authors": "J Blythe; O Etzioni", "journal": "", "ref_id": "b14", "title": "Prodigy4.0: The manual and tutorial", "year": "1992" }, { "authors": "J G Carbonell", "journal": "", "ref_id": "b15", "title": "Derivational analogy: A theory of reconstructive problem solving and expertise aquisition", "year": "1986" }, { "authors": "G Dejong; R Mooney", "journal": "Machine Learning", "ref_id": "b16", "title": "Explanation-based learning: An alternative view", "year": "1986" }, { "authors": "O Etzioni", "journal": "Arti cial Intelligence", "ref_id": "b17", "title": "A structural theory of explanation-based learning", "year": "1993" }, { "authors": "O Etzioni; R Etzioni", "journal": "Machine Learning", "ref_id": "b18", "title": "Statistical methods for analyzing speedup learning", "year": "1994" }, { "authors": "R E Fikes; P E Hart; N J Nilsson", "journal": "Arti cial Intelligence", "ref_id": "b19", "title": "Learning and executing generalized robot plans", "year": "1972" }, { "authors": "R E Fikes; N J Nilsson", "journal": "Arti cial Intelligence", "ref_id": "b20", "title": "Strips: A new approach to the application of theorem proving to problem solving", "year": "1971" }, { "authors": "P E Friedland; Y Iwasaki", "journal": "Journal of Automated Reasoning", "ref_id": "b21", "title": "The concept and implementation of skeletal plans", "year": "1985" }, { "authors": "A Giordana; D Roverso; L Saitta", "journal": "Springer", "ref_id": "b22", "title": "Abstracting background knowledge for concept 
learning", "year": "1991" }, { "authors": "F Giunchiglia; T Walsh", "journal": "Arti cial Intelligence", "ref_id": "b23", "title": "A theory of abstraction", "year": "1992" }, { "authors": "C Green", "journal": "", "ref_id": "b24", "title": "Application of theorem proving to problem solving", "year": "1969" }, { "authors": "R Holte; C Drummond; M Perez; R Zimmer; A Macdonald", "journal": "", "ref_id": "b25", "title": "Searching with abstractions: A unifying framework and new high-performance algorithm", "year": "1994" }, { "authors": "R Holte; T Mkadmi; R Zimmer; A Macdonald", "journal": "", "ref_id": "b26", "title": "Speeding up problem solving by abstraction: A graph-oriented approach", "year": "1995" }, { "authors": "S Kambhampati; J A Hendler", "journal": "Arti cial Intelligence", "ref_id": "b27", "title": "A validation-structure-based theory of plan modi cation and reuse", "year": "1992" }, { "authors": "S Kambhampati; S Kedar", "journal": "Arti cial Intelligence", "ref_id": "b28", "title": "A uni ed framework for explanation-based generalization of partially ordered partially instantiated plans", "year": "1994" }, { "authors": "C A Knoblock", "journal": "Kluwer", "ref_id": "b29", "title": "A theory of abstraction for hierachical planning", "year": "1989" }, { "authors": "C A Knoblock", "journal": "MIT Press", "ref_id": "b30", "title": "Learning abstraction hierarchies for problem solving", "year": "1990" }, { "authors": "C A Knoblock", "journal": "", "ref_id": "b31", "title": "Search reduction in hierarchical problem solving", "year": "1991" }, { "authors": "C A Knoblock", "journal": "Kluwer Academic Publishers", "ref_id": "b32", "title": "Generating abstraction hierarchies: An automated approach to reducing search in planning", "year": "1993" }, { "authors": "C A Knoblock", "journal": "Arti cial Intelligence", "ref_id": "b33", "title": "Automatically generating abstractions for planning", "year": "1994" }, { "authors": "C A Knoblock; S Minton; O Etzioni", "journal": "", "ref_id": "b34", "title": "Integrating abstraction and explanation-based learning in PRODIGY", "year": "1991" }, { "authors": "C A Knoblock; J D Tenenberg; Q Yang", "journal": "", "ref_id": "b35", "title": "Characterizing abstraction hierarchies for planning", "year": "1991" }, { "authors": "J L Kolodner", "journal": "", "ref_id": "b36", "title": "Retrieval and Organizational Strategies in Conceptual Memory", "year": "1980" }, { "authors": "J L Kolodner", "journal": "", "ref_id": "b37", "title": "Case-based reasoning", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b38", "title": "", "year": "" }, { "authors": "R E Korf", "journal": "Arti cal Intelligence", "ref_id": "b39", "title": "Toward a model of representation changes", "year": "1980" }, { "authors": "R E Korf", "journal": "Arti cal Intelligence", "ref_id": "b40", "title": "Depth-rst iterative-deepening: An optimal admissible tree search", "year": "1985" }, { "authors": "R E Korf", "journal": "Arti cal Intelligence", "ref_id": "b41", "title": "Macro-operators: A weak method for learning", "year": "1985" }, { "authors": "R E Korf", "journal": "Arti cal Intelligence", "ref_id": "b42", "title": "Planning as search: A quantitative approach", "year": "1987" }, { "authors": "R E Korf", "journal": "Arti cal Intelligence", "ref_id": "b43", "title": "Linear-space best-rst search", "year": "1993" }, { "authors": "M Kramer; C Unger", "journal": "", "ref_id": "b44", "title": "Abstracting operators for hierarchical planning", "year": "1992" }, { 
"authors": "Morgan Kaufmann", "journal": "", "ref_id": "b45", "title": "", "year": "" }, { "authors": "J Laird; P Rosenbloom; A Newell", "journal": "Kluwer Academic Publishers", "ref_id": "b46", "title": "Universal Subgoaling and Chunking: The Automatic Generation and Learning of Goal Hierarchies", "year": "1986" }, { "authors": "P Langley; J Allen", "journal": "", "ref_id": "b47", "title": "A uni ed framework for planning and learning", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b48", "title": "", "year": "" }, { "authors": "V Lifschitz", "journal": "", "ref_id": "b49", "title": "On the semantics of STRIPS", "year": "1987" }, { "authors": "J Lloyd", "journal": "Springer", "ref_id": "b50", "title": "Foundations of Logic Programming", "year": "1984" }, { "authors": "D Mcallester; D Rosenblitt", "journal": "", "ref_id": "b51", "title": "Systematic nonlinear planning", "year": "1991" }, { "authors": "R S Michalski", "journal": "", "ref_id": "b52", "title": "Inferential theory of learning as a conceptual basis for multistrategy learning", "year": "1994" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b53", "title": "", "year": "" }, { "authors": "R S Michalski; Y Kodrato", "journal": "", "ref_id": "b54", "title": "Research in machine learning: Recent progress, classi cation of methods, and future directions", "year": "1990" }, { "authors": "S Minton", "journal": "Kluwer", "ref_id": "b55", "title": "Learning Search Control Knowledge: An Explanation-Based Approach", "year": "1988" }, { "authors": "S Minton", "journal": "Arti cal Intelligence", "ref_id": "b56", "title": "Quantitativ results concerning the utility of explanation-based learning", "year": "1990" }, { "authors": "S Minton; J G Carbonell; C Knoblock; D R Kuokka; O Etzioni; Y Gil", "journal": "Arti cial Intelligence", "ref_id": "b57", "title": "Explanation-based learning: A problem solving perspective", "year": "1989" }, { "authors": "S Minton; M Zweben", "journal": "", "ref_id": "b58", "title": "Learning, planning and scheduling: An overview", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b59", "title": "", "year": "" }, { "authors": "T M Mitchell; R M Keller; S T Kedar-Cabelli", "journal": "Machine Learning", "ref_id": "b60", "title": "Explanation-based generalization: A unifying view", "year": "1986" }, { "authors": "R J Mooney", "journal": "", "ref_id": "b61", "title": "Generalizing the order of operators in macro-operators", "year": "1988" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b62", "title": "", "year": "" }, { "authors": "R J Mooney; G F Dejong", "journal": "", "ref_id": "b63", "title": "Learning schemata for natural language processing", "year": "1985" }, { "authors": "I Mozetic", "journal": "", "ref_id": "b64", "title": "Abstraction in model-based diagnosis", "year": "1990" }, { "authors": "A Newell; H Simon", "journal": "Prentice-Hall Englewood Cli s", "ref_id": "b65", "title": "Human Problem Solving", "year": "1972" }, { "authors": "J Paulokat; S Wess", "journal": "", "ref_id": "b66", "title": "Planning for machining workpieces with a partial-order, nonlinear planner", "year": "1994" }, { "authors": "M Perez; J Carbonell", "journal": "", "ref_id": "b67", "title": "Automated acquisition of control knowledge to improve the quality of plans", "year": "1993" }, { "authors": "G Pews; S Wess", "journal": "", "ref_id": "b68", "title": "Combining model-based approaches and case-based reasoning for similarity assessment and case 
adaptation in diagnositc applications", "year": "1993" }, { "authors": "D Plaisted", "journal": "Arti cal Intelligence", "ref_id": "b69", "title": "Theorem proving with abstraction", "year": "1981" }, { "authors": "D Plaisted", "journal": "", "ref_id": "b70", "title": "Abstraction using generalization functions", "year": "1986" }, { "authors": "P Rosenbloom; J Laird", "journal": "", "ref_id": "b71", "title": "Mapping explanation-based learning onto SOAR", "year": "1986" }, { "authors": "E Sacerdoti", "journal": "Arti cial Intelligence", "ref_id": "b72", "title": "Planning in a hierarchy of abstraction spaces", "year": "1974" }, { "authors": "E Sacerdoti", "journal": "American-Elsevier", "ref_id": "b73", "title": "A Structure for Plans and Behavior", "year": "1977" }, { "authors": "R C Schank", "journal": "Cambridge University Press", "ref_id": "b74", "title": "Dynamic Memory: A Theory of Learning in Computers and People", "year": "1982" }, { "authors": "G Schmidt", "journal": "", "ref_id": "b75", "title": "Modellbasierte, interaktive Wissensakquisition und Dokumentation von Domaenenwissen", "year": "1994" }, { "authors": "G Schmidt; M Zickwol", "journal": "", "ref_id": "b76", "title": "Cases, models and integrated knowledge acquisition to formalize operators in manufacturing", "year": "1992" }, { "authors": "J Shavlik; P O'rorke", "journal": "Kluwer Academic Publishers", "ref_id": "b77", "title": "Empirically evluation EBL", "year": "1993" }, { "authors": "H Simon", "journal": "Cognitive Psychology", "ref_id": "b78", "title": "The functional equivalence of problem solving skills", "year": "1975" }, { "authors": "J Tenenberg", "journal": "", "ref_id": "b79", "title": "Preserving consistency across abstraction mappings", "year": "1987" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b80", "title": "", "year": "" }, { "authors": "J Tenenberg", "journal": "", "ref_id": "b81", "title": "Abstraction in Planning", "year": "1988" }, { "authors": "A Unruh; P Rosenbloom", "journal": "", "ref_id": "b82", "title": "Abstraction in problem solving and learning", "year": "1989" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b83", "title": "", "year": "" }, { "authors": "M M Veloso", "journal": "", "ref_id": "b84", "title": "Learning by analogical reasoning in general problem solving", "year": "1992" }, { "authors": "M M Veloso", "journal": "Springer", "ref_id": "b85", "title": "PRODIGY/ANALOGY: Analogical reasoning in general problem solving", "year": "1994" }, { "authors": "M M Veloso; J Blythe", "journal": "", "ref_id": "b86", "title": "Linkability: Examining causal link commitments in partial-order planning", "year": "1994" }, { "authors": "M M Veloso; J G Carbonell", "journal": "", "ref_id": "b87", "title": "Integrating derivational analogy into a general problem solving architecture", "year": "1988" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b88", "title": "", "year": "" }, { "authors": "M M Veloso; J G Carbonell", "journal": "", "ref_id": "b89", "title": "Towards scaling up machine learning: A case study with derivational analogy in PRODIGY", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b90", "title": "", "year": "" }, { "authors": "W Wilke", "journal": "", "ref_id": "b91", "title": "Entwurf und Implementierung eines Algorithmus zum wissensintensiven Lernen von Planabstraktionen nach der PABS-Methode", "year": "1993" }, { "authors": "W Wilke", "journal": "", "ref_id": "b92", "title": "Entwurf, Implementierung und 
experimentelle Bewertung von Auswahlverfahren f ur abstrakte Pl ane im fallbasierten Planungssystem PARIS", "year": "1994" }, { "authors": "D Wilkins", "journal": "", "ref_id": "b93", "title": "Practical Planning: Extending the classical AI planning paradigm", "year": "1988" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b94", "title": "", "year": "" }, { "authors": "Q Yang; J Tenenberg", "journal": "", "ref_id": "b95", "title": "Abtweak: Abstracting a nonlinear, least commitment planner", "year": "1990" } ]
[ { "formula_coordinates": [ 18, 152.26, 101.69, 348.86, 93.81 ], "formula_id": "formula_0", "formula_text": "α n i+1 i 4 3 2 1 1 2 m D a D c α α α j j+1 β(0) = 0 β(1) = 3 β (j) = i β(m) = n O c O c O c O c O c O c O c O a O a O a O a O a" }, { "formula_coordinates": [ 28, 118.08, 392.16, 86.4, 31.8 ], "formula_id": "formula_1", "formula_text": "s a I := s a I S 2 E" }, { "formula_coordinates": [ 28, 118.08, 432.72, 90.72, 32.04 ], "formula_id": "formula_2", "formula_text": "s a G := s a G S 2 E" } ]
Building and Refining Abstract Planning Cases by Change of Representation Language
Abstraction is one of the most promising approaches to improve the performance of problem solvers. In several domains abstraction by dropping sentences of a domain description, as used in most hierarchical planners, has proven useful. In this paper we present examples which illustrate significant drawbacks of abstraction by dropping sentences. To overcome these drawbacks, we propose a more general view of abstraction involving the change of representation language. We have developed a new abstraction methodology and a related sound and complete learning algorithm that allows the complete change of representation language of planning cases from concrete to abstract. However, to achieve a powerful change of the representation language, the abstract language itself as well as rules which describe admissible ways of abstracting states must be provided in the domain model. This new abstraction approach is the core of Paris (Plan Abstraction and Refinement in an Integrated System), a system in which abstract planning cases are automatically learned from given concrete cases. An empirical study in the domain of process planning in mechanical engineering shows significant advantages of the proposed reasoning from abstract cases over classical hierarchical planning.
Ralph Bergmann; Wolfgang Wilke
[ { "figure_caption": "Figure 2 :2Figure 2: Abstract state spaces by dropping conditions", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The components of the Paris System", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: An example of case abstraction", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Re nement of an abstract case for the solution of the problem Y ! Y 0", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Di erent kinds of abstractions (a) and abstraction hierarchies (b)", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Lemma 7 (Multi-Level Hierarchy) Let (D 0 ; : : : ; D l ) be an arbitrary multi-level hierarchy of domain descriptions. For the two-level description (D c , D a ) with D a = S l =1 D and D c = D 0 holds that: if C a is an abstraction of C c with respect to (D 0 ; : : : ; D l ) then C a is also an abstraction of C c with respect to (D c , D a ).", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "For each concrete state s c 2 S c and each concrete operator o c 2 O c where o c is described by hPre o c ; Add o c ; Del o ci, SLD(s c R c ; Pre o c ) must lead to a nite set of ground substitutions of all variables which occur in Pre o c .", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Theorem 8 (8Correctness and completeness of the PABS algorithm) If a complete SLDrefutation procedure is used in the Pabs algorithm, then Case C a is an abstraction of case C c with respect to (D c ; D a ) and the generic theory A, if and only if C a 2 PABS(hD c ; D a ; Ai; C c ).", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "(c ) holds, then there exists a sequence of concrete operators (o c 1 ; : : : ; o c k", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Abstracting and Re ning an Example Case", "figure_data": "", "figure_id": "fig_12", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "10.1.4 Explanation-based Learning, Case-based Reasoning and Analogy", "figure_data": "", "figure_id": "fig_13", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Abstraction mappings for hierarchies of abstraction spaces", "figure_data": "", "figure_id": "fig_14", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "C 0 with respect to (D c , D a ). 2 Theorem 8 (Correctness and completeness of the Pabs algorithm) If a complete SLDrefutation procedure is used in the Pabs algorithm, then Case C a is an abstraction of case C c with respect to (D c ; D a ) and the generic theory A, if and only if C a 2 PABS(hD c ; D a ; Ai; C c ).", "figure_data": "", "figure_id": "fig_15", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "An instantiated operator o is applicable in a state s, if and only if s R `Pre o holds. 8 An instantiated operator o transforms a state s 1 into a state s 2 (we write: s 1 o ! 
s 2 ) if and only if o is applicable in s 1 and s 2 = (s 1 n Del o ) Add o . A problem description p = hs I ; s G i consists of an initial state s I together with a nal state s G . The problem solving task is to nd a sequence of instantiated operators (a plan) o = (o 1 ; : : : ; o l ) which transforms the initial state into the nal state (s I s G ). A case C = hp; oi is a problem description p together with a plan o that solves p.", "figure_data": "o 1 !o l", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Essential sentences for the representation of the workpiece", "figure_data": "8.1.2 Operators", "figure_id": "tab_5", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of the average solution time per problem and the number of solved problems within a time-bound of 200 seconds. The table compares pure search (depth-rst iterative deepening), hierarchical planning using the abstract problem solving domain, and reasoning from abstract cases with di erently selected abstract cases.", "figure_data": "Problem solving mode Pure search Hierarchical planning Reasoning from abstract cases (a) Best re nable case (b) Worst re nable case (c) Worst applicable caseAverage solution time (sec.) Solved problems 156 29 107 50 35 94 63 79 117 45", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Comparison of the number of learned abstract cases for a) the 10 training sets each of which consists of 5 concrete cases and b) the 10 training sets each of which consists of 10 concrete cases. The table shows the minimum, the maximum, and the average number of abstract cases learned from the 10 training sets of the respective size.", "figure_data": "Size of training sets (cases) 5 10Number of abstract cases minimum maximum 7 15 8 25average 9.1 14.2Size of training sets (cases) 5 10Average problem solving time (sec.) best set worst set average 43 89 59 35 76 56", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Comparison of the problem solving time required for reasoning from abstract cases after separate training with a) the 10 training sets each of which consists of 5 concrete cases and b) the 10 training sets each of which consists of 10 concrete cases. The table shows the average problem solving time per problem for the best, the worst and the average training set out of the 10 training sets of each size.", "figure_data": "divides the actual problem into more, but smaller subproblems. Consequently the longest applicable plan should lead to the best improvement.9.4.2 Results", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Comparison of the percentage of solved problems after separate training with a) the 10 training sets each of which consists of 5 concrete cases and b) the 10 training sets each of which consists of 10 concrete cases. The table shows the percentage of solved problems for the best, the worst and the average training set out of the 10 training sets of each size.", "figure_data": "The same", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Comparison of the signi cance (p-value) of the speedup results over pure search and hierarchical planning after separate training with a) the 10 training sets each of which consists of 5 concrete cases and b) the 10 training sets each of which consists of 10 concrete cases. 
The table shows the number of training sets which cause signi cant speedups for di erent p-values.", "figure_data": "", "figure_id": "tab_11", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Comparison of the length of the solutions created through reasoning from learned abstract cases and the solutions available in the concrete cases. The table shows the average percentage of solutions with shorter/equal/longer solution length after separate training with a) the 10 training sets each of which consists of 5 concrete cases and b) the 10 training sets each of which consists of 10 concrete cases.", "figure_data": "Size of training sets (cases) 5 10Average percentage of solutions with shorter/equal/longer solution length shorter equal longer 20 54 26 22 50 28", "figure_id": "tab_12", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Table 7 and 10). Using a less constrained abstract problem solving domain: Comparison of the problem solving time required for reasoning from abstract cases after separate training with a) the 10 training sets each of which consists of 5 concrete cases and b) the 10 training sets each of which consists of 10 concrete cases. The table shows the average problem solving time per problem for the best, the worst and the average training set out of the 10 training sets of each size.", "figure_data": "Size of training sets (cases) 5 10Average problem solving time (sec.) best set worst set average 114 118 117 107 112 110Size of training sets (cases) 5 10Percentage of Solved Problems best set worst set average 55 52 53 58 54 56", "figure_id": "tab_13", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Using a less constrained abstract problem solving domain: Comparison of the percentage of solved problems after separate training with a) the 10 training sets each of which consists of 5 concrete cases and b) the 10 training sets each of which consists of 10 concrete cases. The table shows the percentage of solved problems for the best, the worst and the average training set out of the 10 training sets of each size.", "figure_data": "", "figure_id": "tab_14", "figure_label": "14", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b15", "b17", "b4", "b5", "b14", "b15", "b19" ], "table_ref": [], "text": "In many problems of articial intelligence, inferences are drawn on the basis of interpretation or analysis of measured data. However, when measured data are inaccurate, interpreting or analyzing them is very dicult. In diagnosis or signal analysis, for example, the general reasoning method is to compare measured data with reference values (Reiter, 1987;Shortlie & Buchanan, 1975). When measured data are not accurate due to noise or other unforeseen reasons, the comparison between measured data and reference values can not lead to any useful conclusion. A rule like \\if there is a strong peak in 3000 cm 01 -3100 cm 01 on the infrared spectrum of an unknown compound, then the unknown compound may contain at least one benzene-ring\" may work in ideal cases. However, the rule can not work in general cases. For example, when the spectral data are inaccurate, e.g., the measured peak in 3000 cm 01 -3100 cm 01 is not a strong peak but a medium one, or a measured strong peak is not exactly located in 3000 cm 01 -3100 cm 01 but is slightly shifted, the rule may not be applied.\nIn practical problems, especially in data rich problems such as diagnosis and interpretation, measured data are often inaccurate. One reason is that the measuring methods are error-prone. For example, a patient's temperature or blood-pressure may be inaccurately measured or entered, and a witness may inaccurately describe the features of a criminal. The other reason is that the real data are not noise-free. For example, among the received signals, there may be some noise mixed up, and what is worse, infrared spectral data (peaks) themselves may be noisy, i.e., some peaks may be aected by noise or other factors.\nIdentifying inaccurate data has long been regarded as a signicant and dicult problem in AI. Many methods have been proposed to deal with the problem. Fuzzy logic provides a mathematical framework for representation and calculation of inaccurate data (Zadeh, 1978). By fuzzy logic, reference value x 0 is associated with a fuzzy interval 4x. If a measured data item falls into [x 0 0 4x; x 0 + 4x], then it can be identied as the reference value with a corresponding membership degree. Probability theory and possibility theory are also widely used for handling inaccuracy and uncertainty (Dempster, 1968;Duda, Hart, & Nilsson, 1976;Pearl, 1987;Shafer, 1976;Shortlie & Buchanan, 1975). The above methods are commonly used in AI systems. The way of applying them, however, depends on the nature of domain problems, and there is not yet a standard and generally accepted method thus far.\nWe present a method for identifying inaccurate data on the basis of qualitative correlations among related data. The method is based on the essential consideration that some data items within a dataset are qualitatively dependent: a set of data may describe the same phenomenon, or refer to the same behavior. For example, a patient's temperature, blood pressure and other symptomatic data reect the patient's disease, and a couple of peaks on an infrared spectrum indicate the presence of a partial component. We call the dependency among data within a dataset qualitative correlations among related data1 . By considering qualitative correlations among related data, we can obtain conrmatory or disconrmatory evidence to identify inaccurate data. 
In general, related data should be simultaneously present or absent, so if most of the related data have been completely identified, these data will enhance the identification of the rest. For example, a benzene-ring can create many other peaks besides the strong peak in 3000 cm^-1 - 3100 cm^-1. All the peaks created by the benzene-ring are related data which have qualitative correlations. If all the peaks except that in 3000 cm^-1 - 3100 cm^-1 have been completely identified, the benzene-ring is quite likely to be contained by the unknown compound. Therefore, the inaccurate peak around 3000 cm^-1 - 3100 cm^-1 may still be identified. In fact, spectroscopists frequently use the following knowledge in addition to the rules given at the beginning of this section:
If there is a strong peak around 3000 cm^-1 - 3100 cm^-1, then the spectrum may be partially created by benzene-rings; check peaks around 1650 cm^-1, 1550 cm^-1 and 700 cm^-1 - 900 cm^-1 to make sure, because a benzene-ring may have other peaks there at the same time.
The central idea of our method is to find evidence for identifying inaccurate data by considering qualitative correlations among related data. The idea is very common in human thinking. When all the data except blood pressure of a patient show that the patient has a certain disease, we would naturally suspect that the blood pressure of the patient was inaccurately entered. Similarly, when all the peaks except one indicate that a partial component is present, we would naturally suspect that the unmatched peak was inaccurately measured or the peak was affected by noise or something else. If acceptable solutions can be made by assuming an inaccurate data item to be a reference value based on qualitative correlations between the data item and its related data, the inaccurate data item may be compensated and hence identified.
Our contributions include: (1) a method which assumes an inaccurate data item to be a certain reference value based on the qualitative correlations between the inaccurate data item and all of its related data, (2) an algorithm which crystallizes the method, and (3) a practical system which uses the algorithm to interpret infrared spectra.
The key point is a new concept called support coefficient function (SCF) for extracting, representing, and calculating qualitative correlations among related data. When measured data are inaccurate, the qualitative correlations among related data can provide evidence for confirming or disconfirming the hypothesis that the measured data are the same as the reference values. An approach to determining dynamic shift intervals of inaccurate data, an approach to calculating possibility of identifying inaccurate data, and an algorithm for identifying inaccurate data are proposed on the basis of SCF, respectively.
The method requires few assumptions in advance, so it can avoid inconsistency in knowledge and data bases. The method identifies inaccurate data by considering qualitative correlations among related data, so it is quite effective and efficient, especially in the case of problems where dependencies among data apparently exist. In general, qualitative correlations among data can always, more or less, be extracted. In the worst case where qualitative correlations are not known a priori, the method degenerates to a conventional fuzzy method.
We have developed a practical system for interpreting infrared spectra by using the method (Zhao & Nishida, 1994).
The primary task of the system is to identify unknown compounds by interpreting their infrared spectra. We have fully tested the system against several hundred real spectra. The experimental results show that the method is significantly better than the traditional methods used in many similar systems. The rate of correctness (RC) and the rate of identification (RI), which are two important standards for evaluating the solutions of infrared spectrum interpretation, are near 74% and 90% respectively, and the former is the highest among known systems.
In the following sections, we first describe the problem of identifying inaccurate data in Section 2. In Section 3 we give some definitions including the concept of support coefficient function (SCF) and other concepts based on SCF. In Section 4 we introduce our method for identifying inaccurate data by considering qualitative correlations among related data. Section 5 demonstrates the application of the method to a knowledge-based system for infrared spectrum identification, and shows the experimental results of the system. Related work is discussed in Section 6. Conclusions are addressed in Section 7." }, { "figure_ref": [], "heading": "Problem Description", "publication_ref": [], "table_ref": [], "text": "In practical problems, measured data can be represented as a finite set: MD = {d_1, d_2, ..., d_n}, and reference values can also be represented as a finite set: RV = {r_1, r_2, ..., r_N}. Suppose interpreting or analyzing measured data is carried out on the basis of so-called \"if-then\" rules in which the premises are comparisons between MD and RV like \"if d_i = r_j then ...\", or \"if (r_i ∈ MD) ∧ (r_j ∈ MD) then ...\". When MD is accurate, the main operation implied by these premises is usually to find a corresponding reference value from RV for each data item in MD. However, when MD is inaccurate, the operation becomes complicated. In this case, it is difficult to determine which reference value an inaccurate data item corresponds to, e.g., for some measured data no reference value may be simply identified, while for others more than one may be available.
For example, if received signals are known to be accurate, and an expected signal (reference value) can not be found from the signal series (measured data), then we can conclude that the expected signal does not appear. However, if received signals are inaccurate, and an expected signal can not be identified from the signal series, it is hard to decide whether the expected signal does not appear or appears but looks different due to the inaccuracy.
Most currently known approaches for dealing with inaccurate data such as fuzzy logic and probabilistic reasoning are mainly based on quantitative similarity or closeness between measured data and reference values. In some cases, however, the identity of qualitative features is more effective and reliable than quantitative similarity or closeness.
Consider signal analysis again.
If an inaccurate signal has the same qualitative features as the expected one, such as the interval of frequency, the signal may still be identified even though its quantitative features, such as strength, are slightly different from those of the expected one; conversely, an inaccurate signal may not be identified if it is quantitatively similar to an expected signal but does not have the same qualitative features as the expected one.
We discussed the following points in Section 1: (1) some data items within a dataset are qualitatively dependent (i.e., they are related data), (2) there are qualitative correlations among related data, and (3) qualitative correlations among related data enable us to confirm or disconfirm the identity of qualitative features.
Therefore, RV and MD can be, explicitly or implicitly, divided into finite groups on the basis of qualitative dependencies among data, and the data in each group are related to each other. For example, RV can be divided into R_1, R_2, ... and R_k:
RV = R_1 ∪ R_2 ∪ ... ∪ R_k,
where R_j = {r_jl | r_jl ∈ RV, 1 ≤ l ≤ m}.
The qualitative correlations among related data in R_j include: (1) data in R_j should be simultaneously present or absent, which means that all reference values in R_j should have corresponding data in MD, (2) the presence of r_jp may enhance the presence of r_jq, and the absence of r_jp may depress the presence of r_jq. Considering the qualitative correlations among related data will lead to evidence for the identification of inaccurate data.
The problem of interpreting/analyzing inaccurate data is to make qualitative hypotheses for MD, or in other words, to find a subset of RV which corresponds to MD:
IN(MD), where IN(MD) ⊆ RV.
The problem can be briefly represented as the following predicate calculus:
∀d_i ∀R_j ((d_i@R_j) ∧ (R_j@MD) → R_j ⊆ IN(MD)),
where \"d_i@R_j\" and \"R_j@MD\" are two essential qualitative predicates in our method which represent that d_i possibly (qualitatively) belongs to R_j (i.e., possibly d_i ∈ R_j), and R_j possibly (qualitatively) belongs to MD (i.e., possibly R_j ⊆ MD), respectively. Determining \"A@B\" is based on qualitative correlations among related data. The work presented in this paper is mainly concentrated on determining \"d_i@R_j\" and \"R_j@MD\", and realizing the above predicate calculus." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "Before introducing our method, we first put forward and explain several new concepts in this section." }, { "figure_ref": [], "heading": "Qualitative Correlations among Related Data", "publication_ref": [], "table_ref": [], "text": "Definition 3.1 Related data: If data d_1, d_2, ..., and d_m describe a common phenomenon, or they refer to the same behavior simultaneously, then they can be treated as related data.
For example, a patient's temperature, blood pressure and other symptomatic data are related data, and all the features for describing a criminal are also related data. The phenomenon that some data within a dataset are related data is more apparent in engineering. For instance, there are two types of related data in infrared spectrum interpretation, as shown in Figure 1. First, as far as a single peak is concerned, the frequency (position) f_i, strength (height) s_i, and width (shape) w_i of the peak are related data. Second, a partial component may create numerous peaks at the same time.
If we consider all the peaks that a partial component may create, all of these peaks are related data. Definition 3.2 Qualitative correlations among related data: If d_i and d_j are two related data items, then the presence of d_i enhances the presence of d_j, and the absence of d_i depresses the presence of d_j. This kind of effect is called qualitative correlations among related data.
Figure 1: Example of related data in spectrum interpretation (the frequency f_i, strength s_i, and width w_i of a single peak)
Consider the above example of spectrum interpretation again. If spectral data are inaccurate (i.e., some measured peaks look like but are not exactly the same as reference peaks), considering qualitative correlations among related data may lead to qualitative evidence for the identification of inaccurate data. For example, suppose the frequency of a peak is slightly different from the reference value, and both the strength and width of the peak are the same as the reference values. Then the frequency of the peak may still be identified since both of its related data support it. Similarly, if peaks at low frequency sections are inaccurate, considering related peaks at high frequency sections may help identify these peaks, and vice versa." }, { "figure_ref": [], "heading": "Support Coefficient Function", "publication_ref": [], "table_ref": [], "text": "Definition 3.3 Support coefficient function (SCF): If there are m − 1 data related to d_i, then the support coefficient function of d_i calculates the total effects from the related data by considering the qualitative correlations between d_i and each of its related data. Suppose γ(d_i, d_j) represents the qualitative correlation between d_i and d_j; then the support coefficient function of d_i can be defined as:
SCF_i = Ψ(Σ_{j=1, j≠i}^{m} γ(d_i, d_j), m),
i.e., some function Ψ of the summed pairwise correlations and of m. SCF_i should directly depend on how many and how much related data support d_i. When SCF_i is greater than a certain value given by domain experts, the related data tend to support d_i; otherwise, the related data tend to depress d_i." }, { "figure_ref": [], "heading": "Evidence Based on SCF", "publication_ref": [], "table_ref": [], "text": "In Section 2, we used \"d_i@R_j\" to express that d_i can be qualitatively identified from R_j.
Realizing \"d_i@R_j\" requires a definition of a shift interval Δ for R_j such as: R_j ± Δ = {(r_jl ± Δ) | l = 1, 2, ..., m}, and a definition of the possibility of \"d_i ∈ R_j ± Δ\".
The above formula is similar to that in fuzzy logic, but carries completely different meanings. The primary difference is that the shift intervals are dynamically determined by SCF_i, while in fuzzy logic, the fuzzy intervals are usually provided by domain experts in advance or calculated with quantitative criteria. Definition 3.4 Shift interval: Shift interval is a dynamic region for inaccurate data. Given a standard fuzzy interval for inaccurate data, the shift interval of d_i varies around the standard fuzzy interval on the basis of SCF_i. When SCF_i shows that the related data support d_i, the shift interval of d_i becomes wider than the standard fuzzy interval. On the other hand, when SCF_i shows that the related data do not support d_i, the shift interval of d_i becomes narrower than the standard fuzzy interval. Definition 3.5 Evidence based on SCF_i: SCF_i determines the shift interval of d_i, that is, SCF_i determines how widely d_i is allowed to shift. The wider the shift interval, the more easily d_i is identified. Therefore, SCF_i provides confirmatory or disconfirmatory evidence for identifying d_i."
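To make Definitions 3.1-3.3 concrete, here is a minimal sketch of how the two kinds of related data from Figure 1 could be represented. The Peak type and all example values are illustrative assumptions of ours, not the authors' implementation:

```python
from typing import NamedTuple

class Peak(NamedTuple):
    f: float  # frequency (position) in cm^-1
    s: float  # strength (height)
    w: float  # width (shape)

# First kind of related data: f_i, s_i, w_i of one peak (Figure 1).
p1 = Peak(f=3055.0, s=0.51, w=1.0)

# Second kind of related data: all reference peaks that one partial
# component may create simultaneously (values made up for illustration).
benzene_ring = [Peak(3055.0, 1.0, 1.5), Peak(1645.0, 0.5, 0.5),
                Peak(1550.0, 0.5, 1.0), Peak(1450.0, 0.5, 0.0)]
```

Any encoding works, as long as the qualitative correlations between the fields of one peak, and between the peaks of one component, can be looked up.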
}, { "figure_ref": [], "heading": "Making Qualitative Hypotheses for Inaccurate Data", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce and analyze our method for identifying inaccurate data. We first discuss the processes of realizing the two essential predicates in our method, \"d_i@R_j\" and \"R_j@MD\", respectively. Then, we present an algorithm for making qualitative hypotheses for inaccurate data (i.e., for realizing the predicate calculus described in Section 2)." }, { "figure_ref": [], "heading": "Predicate \"d_i@R_j\"", "publication_ref": [], "table_ref": [], "text": "When d_i is accurate, \"d_i@R_j\" is equal to \"d_i ∈ R_j\". If there is a reference value in R_j which corresponds to d_i (i.e., r_jp ∈ R_j and r_jp = d_i), then d_i@R_j = T. If there is no reference value corresponding to d_i, then d_i@R_j = F. When d_i is inaccurate, however, it is not sure whether r_jp corresponds to d_i. In this case, \"d_i@R_j\" means that d_i possibly (qualitatively) belongs to R_j, or in other words, r_jp possibly (qualitatively) corresponds to d_i. The value of \"d_i@R_j\" is not T or F, but the possibility of \"r_jp = d_i\" or \"d_i ∈ R_j\".
We discussed in Section 2 that in some cases the identity of qualitative features is more robust and reliable than quantitative similarity or closeness. We have also discussed that qualitative correlations among related data can lead to evidence for the identity of qualitative features in diagnosis or interpretation. So if r_jp (r_jp ∈ R_j) is assumed to correspond to d_i, and there are m − 1 reference values (r_j1, r_j2, ..., r_j(p−1), r_j(p+1), ..., r_jm) related to r_jp, then each of the m − 1 reference values should correspond to a certain data item in MD, and the m − 1 data items in MD are also related to each other. Therefore, qualitative correlations between d_i and its m − 1 related data items in MD should be considered.
Our method first determines the possibility of \"r_jp = d_i\" by calculating the similarity or closeness between r_jp and d_i like conventional fuzzy methods, then considers qualitative correlations among related data to obtain evidence for updating the possibility. When the qualitative correlations show that the related data support \"r_jp = d_i\", the possibility of \"r_jp = d_i\" will increase. When the qualitative correlations show that the related data do not support \"r_jp = d_i\", the possibility will decrease." }, { "figure_ref": [], "heading": "Defining Support Coefficient Function", "publication_ref": [], "table_ref": [], "text": "Suppose r_jq (r_jq ∈ R_j) corresponds to d_t. Because r_jq is related to r_jp, d_t is related to d_i.
As we have discussed, the qualitative correlation between d_i and d_t means that if d_t exists, then d_i is enhanced; otherwise, d_i is depressed.
We first define the qualitative correlation between two related data items, d_i and d_t, as:
c_i(d_t) = μ_t if d_t can be identified from MD (where μ_t is the possibility with which d_t is identified), and c_i(d_t) = 0 otherwise.
As there are m reference values in R_j, we can define the support coefficient function SCF_i for d_i based on c_i(d_t) (t = 1, 2, ..., m; t ≠ i):
SCF_i = (1 + Σ_{t=1, t≠i}^{m} c_i(d_t)) / m,
where 0 < SCF_i ≤ 1, and SCF_i expresses the total qualitative correlations between d_i and all of its related data. In other words, SCF_i reflects the support coefficient of r_jp corresponding to d_i.
If m = 1, then SCF_i = 1. When m > 1, SCF_i is in the direct ratio to the number of the related data which may be identified from MD."
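Read literally, the definitions above are only a few lines of code. The sketch below is our minimal rendering, assuming (as the worked example in Section 5.4 suggests) that c_i(d_t) is the possibility with which the related item d_t is itself identified:

```python
def scf(c_values, m):
    """SCF_i = (1 + sum of c_i(d_t) over the m-1 related data) / m."""
    assert len(c_values) == m - 1
    return (1 + sum(c_values)) / m

# m = 1 (no related data): SCF_i = 1, as stated in the text.
print(scf([], 1))            # 1.0
# All related data identified with possibility 1: SCF_i = 1.
print(scf([1.0, 1.0], 3))    # 1.0
# Fewer related data identified: SCF_i drops (cf. Property 1 below).
print(scf([1.0, 0.0], 3))    # 0.666...
```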
}, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Determining Dynamic Shift Interval", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Suppose d_o is a standard fuzzy interval of inaccurate data; we define the dynamic shift interval of d_i based on SCF_i as:
Δd_i = ((2m − 1) / m) × d_o × SCF_i,
where 0 < Δd_i < 2d_o, and Δd_i is in the direct ratio to SCF_i.
If m = 1, then SCF_i = 1, and Δd_i = d_o. In other words, when qualitative correlations among data are not known a priori, SCF_i = 1 and Δd_i = d_o. In this case, the method degenerates to a conventional fuzzy method.
When m is fixed, the more the related data are identified, the greater SCF_i is, therefore the greater Δd_i is. When SCF_i is fixed, Δd_i depends on the number of related data (Table 1).
We can draw the following properties from the above formulas.
Property 1: With the same m, the more the related data are identified, the greater SCF_i is; otherwise, the smaller SCF_i is.
Property 2: With the same m, the greater the SCF_i, the greater is Δd_i. In other words, the more the related data support d_i, the more widely d_i is allowed to shift.
Property 3: With the same SCF_i, the greater the m, the less Δd_i varies along with m. In other words, the greater the number of related data, the less a single related data item can affect d_i.
Property 2 and Property 3 are illustrated in Figure 2.
Figure 2: Δd_i versus m with different SCF_i (curves for SCF_i = 1, 0.5, 0.3, 0.1; Δd_i ranges between 0 and 2d_o)
Property 4: Δd_i is in linear relation to SCF_i. The slope is equal to, or greater than, 1.5, which means that Δd_i heavily depends on SCF_i.
Property 5: Along with the increase of m, the slope increases very slightly. In other words, Δd_i depends on the number of the related data which support d_i, rather than the total number of related data.
Property 4 and Property 5 are illustrated in Figure 3. The value of \"d_i@R_j\" is equal to the possibility of \"r_jp = d_i\", which can be calculated by using the following formula:
μ_i = 1 − |d_i − r_jp| / Δd_i,
where μ_i ≤ 1.
At a glance, the representation of μ_i looks like the membership degree of \"r_jp − Δd_i ≤ d_i ≤ r_jp + Δd_i\" in fuzzy logic. However, the meaning is completely different, for Δd_i is neither provided by domain experts nor determined by quantitative similarity or closeness.
Here Δd_i is determined on the basis of qualitative correlations among related data. When qualitative correlations among related data are not considered, Δd_i is d_o, and the possibility is 1 − |d_i − r_jp| / d_o. With the consideration of qualitative correlations, the possibility is updated. Two new properties can be drawn from the above formula for calculating μ_i.
Property 6: With the same d_i, the greater the Δd_i, the greater is μ_i. In other words, the wider the dynamic shift interval, the greater is the value of \"d_i@R_j\". Formally, if Δd″_i ≥ Δd′_i ≥ Δd_i, then μ″_i ≥ μ′_i ≥ μ_i.
Property 7: SCF_i provides qualitative evidence for accepting or rejecting d_i as r_jp, since μ_i is in the direct ratio to Δd_i, and Δd_i is in the direct ratio to SCF_i.
Property 6 and Property 7 are illustrated in Figure 4.
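The shift-interval and possibility formulas compose directly with SCF_i; a small sketch (function and variable names are ours) makes Properties 2 and 6 easy to check numerically:

```python
def shift_interval(d_o, m, scf_i):
    # Delta d_i = ((2m - 1) / m) * d_o * SCF_i ; 0 < Delta d_i < 2 d_o
    return (2 * m - 1) / m * d_o * scf_i

def possibility(d_i, r_jp, delta_d_i):
    # mu_i = 1 - |d_i - r_jp| / Delta d_i ; mu_i <= 1, and a value <= 0
    # means d_i cannot be identified as r_jp.
    return 1 - abs(d_i - r_jp) / delta_d_i

# m = 1: the interval degenerates to the standard fuzzy interval d_o,
# i.e. the method reduces to a conventional fuzzy method.
print(shift_interval(0.300, 1, 1.0))      # 0.3
# Supporting related data (large SCF_i) widen the interval (Property 2):
print(shift_interval(0.300, 3, 1.0))      # 0.5
# The benzene-ring example of Section 5.4:
print(possibility(0.510, 1.000, 0.500))   # ~0.02
```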
The process of realizing \"d_i@R_j\" and calculating its value can be expressed by the following procedure.
Figure 4: μ_i versus d_i around r_jp for different shift intervals (μ_i between 0 and 1)
Procedure d_i@R_j
select r_jp from R_j; SCF_i = 0;
if d_i = r_jp { SCF_i = 1; μ_i = 1; }
else {
for each r_jl ∈ R_j (l = 1, ..., m; l ≠ p) { calculate c_i(d_t); SCF_i = SCF_i + c_i(d_t); }
SCF_i = (1 + SCF_i) / m;
Δd_i = d_o × SCF_i × (2m − 1) / m; μ_i = 1 − |d_i − r_jp| / Δd_i;
}
When R_j can be identified as a subset of MD with a certain possibility (i.e., P/S), the procedure returns T (i.e., the value of P/S); otherwise, the procedure returns F." }, { "figure_ref": [], "heading": "Algorithm for Making Qualitative Hypotheses for Inaccurate Data", "publication_ref": [], "table_ref": [], "text": "We give the following algorithm for interpreting/analyzing measured data based on procedure d_i@R_j and procedure R_j@MD. When measured data are not accurate, the algorithm can identify inaccurate data items by considering qualitative correlations among related data.
Algorithm Making-Qualitative-Hypotheses
IN(MD) = ∅;
for i = 1 to n {
for j = 1 to k {
P(R_j) = 0;
if d_i@R_j (i.e., Procedure d_i@R_j)
if R_j@MD (i.e., Procedure R_j@MD) { R_j → IN(MD); P(R_j) = R_j@MD; }
end if
end if
}
}
end algorithm
In the algorithm, P(R_j) represents the value of \"R_j@MD\". The algorithm is actually the realization of the predicate calculus: ∀d_i ∀R_j ((d_i@R_j) ∧ (R_j@MD) → R_j ⊆ IN(MD)).
For each measured data item in {d_1, d_2, ..., d_n}, the algorithm searches {R_1, R_2, ..., R_k} once. For each R_j (R_j = {r_j1, r_j2, ..., r_jm}), the algorithm checks the other n − 1 measured data items m times, and the other m − 1 reference values n times. Therefore, with blind search, the number of operations is about (at worst): n × k × [m × (n − 1) + n × (m − 1)] = 2 × k × m × n² − k × n² − k × m × n. Since k and m are two constants, the complexity of the algorithm is O(n²)." }, { "figure_ref": [], "heading": "Application to Infrared Spectrum Interpretation", "publication_ref": [], "table_ref": [], "text": "We have developed a knowledge-based system for interpreting infrared spectra by applying the proposed method, and have fully tested the system against several hundred real spectra. The experimental results show that the proposed method is significantly better than the conventional methods used in many similar systems." }, { "figure_ref": [], "heading": "Infrared Spectrum Interpretation", "publication_ref": [ "b3", "b10" ], "table_ref": [], "text": "The primary task of infrared spectrum interpretation is to identify unknown objects by interpreting their infrared spectra. In this paper, we will limit the problem to interpretation of infrared spectra of compounds to determine the composition of unknown compounds, without loss of generality.
We selected infrared spectrum interpretation as the domain of application for the following reasons:
1. Interpreting infrared spectra is a very significant problem in both academic research and industrial application. For example, in chemical science and engineering, interpreting infrared spectra of compounds is the most effective way to identify unknown compounds, and to analyze the composition and purity of compounds (Colthup, Daly, & Wiberley, 1990).
2. Interpreting infrared spectra is a very difficult problem. First, spectral data are huge in quantity, and complex in representation.
Second, both symbolic reasoning and numerical analysis are needed to interpret infrared spectral data (Puskar, Levine, & Lowry, 1986; Sadtler, 1988).
3. Interpreting infrared spectra is a typical problem dealing with inaccurate data, since spectral data are often inaccurate. They often shift from their theoretical values due to various reasons. For example, the following is an assertion for spectrum interpretation:
The high frequency peak of partial component PC is located at F_i.
In practice, however, the peak of PC may irregularly shift around F_i due to noise or other unforeseen reasons. When the above assertion is used to identify real spectra, uncertainty arises." }, { "figure_ref": [], "heading": "Applying the Proposed Method to Infrared Spectrum Interpretation", "publication_ref": [], "table_ref": [], "text": "Interpreting infrared spectra is a special problem of diagnosis. Suppose the infrared spectrum of an unknown compound can be thresholded and represented as a finite set of peaks (i.e., the measured dataset MD): Sp = {p_1, p_2, ..., p_n},
where every peak consists of the frequency (position) f, strength (height) s, and width (shape) w, respectively: p_i = (f_i, s_i, w_i), i = 1, 2, ..., n. Because f_i, s_i and w_i refer to the same peak p_i, they are related data. This is the first kind of related data in infrared spectrum interpretation.
Suppose there are finite partial components (i.e., reference values RV): PC = {PC_1, PC_2, ..., PC_k} = {{p_j1, p_j2, ..., p_jm} | j = 1, 2, ..., k} = {{(f_jp, s_jp, w_jp) | p = 1, 2, ..., m} | j = 1, 2, ..., k}.
Because f_jp, s_jp and w_jp also refer to the same reference peak p_jp, they are the first kind of related data as well.
The spectroscopic knowledge for interpreting infrared spectra is usually expressed as \"if p_i is equal to p_jp, then p_i may be created by partial component PC_j\". Here \"p_i is equal to p_jp\" represents that f_i, s_i, and w_i are equal to f_jp, s_jp, and w_jp respectively.
The first kind of related data has the following qualitative correlations:
1. f_i, s_i and w_i should be identified simultaneously, that is, if f_i is f_jp, then s_i is s_jp and w_i is w_jp; if s_i is s_jp, then f_i is f_jp and w_i is w_jp; and if w_i is w_jp, then f_i is f_jp and s_i is s_jp.
2. related data support each other. For example, if both f_i and s_i have been identified, then they will enhance the identification of w_i. Conversely, if f_i and s_i have not been identified, then they will weaken the identification of w_i.
Our method for identifying f_i, s_i and w_i based on the qualitative correlations among them can be formalized as the following predicate calculi, respectively: ∀f_i ∀p_jp ((f_i@p_jp) ∧ (p_jp@p_i) → p_i is created by PC_j), and ∀s_i ∀p_jp ((s_i@p_jp) ∧ (p_jp@p_i) → p_i is created by PC_j), and ∀w_i ∀p_jp ((w_i@p_jp) ∧ (p_jp@p_i) → p_i is created by PC_j), where \"p_i is created by PC_j\" means that f_i, s_i and w_i can be qualitatively identified to be f_jp, s_jp and w_jp.
In general, each partial component may create a finite number of peaks at the same time. So if p_i is created by PC_j, then Sp is partially created by PC_j; if Sp is partially created by PC_j, then all the peaks that PC_j may create should be contained by Sp simultaneously. Therefore, all the peaks created by a partial component are also related data.
This is the second kind of related data in infrared spectrum interpretation.
The second kind of related data has the following qualitative correlations:
1. all the peaks of a partial component should be identified simultaneously, that is, if p_i is p_jp, then p_jl ∈ Sp (l = 1, 2, ..., m; l ≠ p).
2. the peaks created by the same partial component support each other. For example, if most of the peaks of a partial component have been identified, these peaks will enhance the identification of the remaining peaks. Conversely, if most of the peaks of a partial component can not be identified, then the identification of the remaining peaks will be depressed.
Our method for identifying related peaks based on the qualitative correlations can be formalized as the following predicate calculus: ∀p_i ∀PC_j ((p_i@PC_j) ∧ (PC_j@Sp) → PC_j ⊆ IN(Sp))." }, { "figure_ref": [ "fig_2" ], "heading": "System for Interpreting Infrared Spectra", "publication_ref": [], "table_ref": [], "text": "Our system is implemented with C and MS-WINDOWS. Figure 5 shows the data flow diagram of the system. The input data of the system are infrared spectra of unknown compounds, and the solutions are partial components that the input spectra may contain. Because inferences are based on qualitative features of spectral data and qualitative correlations among related data, the system can attain high correct interpretation performance with noisy spectral data.
As we mentioned before, there are two types of related data in infrared spectrum interpretation: all the features of a single peak (i.e., f_i, s_i and w_i of p_i), and all the peaks of a single partial component (i.e., p_1, p_2, ... and p_m). The inference engine of the system applies the proposed method to both types of related data when inaccuracy arises." }, { "figure_ref": [ "fig_3" ], "heading": "An Example", "publication_ref": [ "b2", "b6", "b16", "b3" ], "table_ref": [ "tab_2" ], "text": "We discuss the performance of the system through the following example. Figure 6 shows an infrared spectrum of an unknown compound. The spectrum is very hard to interpret since the peak with an arrow (named p_1) shifts substantially. Our system correctly identifies that p_1 is created by the partial component benzene-ring.
In contrast, many similar systems can not correctly identify the peak (Clerc, Pretsch, & Zurcher, 1986; Hasenoehrl, Perkins, & Griffiths, 1992; Wythoff, Buck, & Tomellini, 1989), since the peak of a benzene-ring at this frequency position (named p_b1) should be a strong peak (i.e., s_b1 > 1.000) according to spectroscopic knowledge, not a medium one (s_1 = 0.510) as in this example. Systems based on conventional fuzzy methods usually assume a fuzzy interval for each inaccurate peak, then determine the membership degree that the inaccurate peak is in the fuzzy interval. Suppose the reference value for a strong peak is 1.000, and the fuzzy interval for a strong peak is 0.300 (Colthup, Daly, & Wiberley, 1990); then only peaks with strength of 1 ± 0.300 can be regarded as strong peaks. Obviously, by conventional fuzzy methods, the possibility of p_1 being a strong peak is zero, i.e., μ_benzene-ring(s_1) = 0.
Inferring on the basis of qualitative correlations among related data, our system makes a correct interpretation of the spectrum. Through the following two cases, we introduce the inference process of the system, and at the same time demonstrate the use of our method for identifying inaccurate data.
Because the frequency (position) and width (shape) of p_1 are both the same as those of benzene-ring, the possibility of f_1 being identified as f_b1 is 100% (i.e., μ_benzene-ring(f_1) = 1), and the possibility of w_1 being identified as w_b1 is also 100% (i.e., μ_benzene-ring(w_1) = 1). As we have discussed before, f_1, s_1 and w_1 are related data, so we can obtain confirming evidence for identifying s_1 by considering qualitative correlations among s_1, f_1 and w_1: μ_benzene-ring(f_1) = 1, so c_s1(f_1) = 1 (c_s1(f_1) represents the qualitative correlation between s_1 and f_1); μ_benzene-ring(w_1) = 1, so c_s1(w_1) = 1 (c_s1(w_1) represents the qualitative correlation between s_1 and w_1); so SCF_s1 = (1 + 2)/3 = 1, and Δs_1 = ((2×3 − 1)/3) × 0.300 × 1 = 0.500, and s_1@p_b1 = 1 − |1 − 0.510|/0.500 = 0.02.
5. μ(d) here means the possibility of d being identified by conventional fuzzy methods, i.e., SCF is not considered.
Roughly, when SCF_i > 0.5, related peaks tend to support p_i. When related peaks support p_i, Δd_i > 1. When Δd_i > 1, p_i@benzene-ring > μ_i.
Table 2 shows the relation among p_i@benzene-ring, μ_i and Δd_i.
In the above example, SCF_1 = 0.850 and Δd_1 = 1.658, so p_1@benzene-ring = 1 − (1 − 0.755)/1.658 = 0.852.
Therefore, the possibility of p_1 being identified as p_b1 increases from 0.755 to 0.852 due to qualitative correlations among related peaks. The process is similar to the probability propagation in probabilistic reasoning. Here identifying p_1 is a hypothesis, and qualitative correlations among related data of p_1 are pieces of evidence.
After all the peaks of the benzene-ring are identified, the possibility that the benzene-ring is contained by Sp can be finally calculated by employing the same method as described in Section 5.4.1." }, { "figure_ref": [], "heading": "Analysis of Experimental Results", "publication_ref": [ "b2", "b16", "b3" ], "table_ref": [], "text": "We compare two methods in the experiments. The first method (called \"AF\") is a conventional fuzzy method which is used by most similar systems (Clerc, Pretsch, & Zurcher, 1986; Wythoff, Buck, & Tomellini, 1989). To use AF, each reference value must be associated with a fuzzy interval for dealing with inaccuracy. Both reference values and fuzzy intervals are empirically determined (Colthup, Daly, & Wiberley, 1990).
Table 3 lists some reference values and their fuzzy intervals used by AF. CH3: 2960 ± 15 cm^-1, strong ± 0.3, sharp ± 1; 2870 ± 15 cm^-1, strong ± 0.3, sharp ± 1; 1450 ± 10 cm^-1, medium ± 0.3, sharp ± 0.5; ... benzene-ring: 3055 ± 25 cm^-1, strong ± 0.3, sharp ± 1.5; 1645 ± 10 cm^-1, medium ± 0.3, sharp ± 0.5; 1550 ± 30 cm^-1, medium ± 0.3, sharp ± 1; 1450 ± 3 cm^-1, medium ± 0.3, sharp ± 0; ... -CH2-OH: 3635 ± 5 cm^-1, strong ± 0.3, broad ± 1; 3550 ± 25 cm^-1, strong ± 0.3, sharp ± 1; ..." }, { "figure_ref": [], "heading": "Table 3: Some reference values and their fuzzy intervals", "publication_ref": [], "table_ref": [], "text": "The membership function of AF is: μ_r(d) = max{0, 1 − |d − r|/Δd},
where d is a measured data item, r is a reference value, Δd is the fuzzy interval of r, and 0 ≤ μ_r(d) ≤ 1.
The second method (called \"AF*\") is the proposed method.
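Before the AF/AF* comparison continues, the arithmetic quoted in the example above can be replayed in a few lines of Python; the numbers are taken verbatim from the text:

```python
# Case 1: identifying the strength s_1 from its related data f_1 and w_1.
scf_s1 = (1 + 1 + 1) / 3                      # c_s1(f_1) = c_s1(w_1) = 1
delta_s1 = 0.300 * scf_s1 * (2 * 3 - 1) / 3   # = 0.500
mu_s1 = 1 - abs(0.510 - 1.000) / delta_s1     # = 0.02
print(scf_s1, delta_s1, round(mu_s1, 2))

# Case 2: updating p_1 from its related peaks, with SCF_1 = 0.850 and
# Delta d_1 = 1.658 as quoted: the possibility rises from 0.755 to 0.852.
print(1 - (1 - 0.755) / 1.658)                # ~0.852
```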
AF* uses the same reference values and fuzzy intervals as AF, but the fuzzy intervals in AF* are only used as standard fuzzy intervals based on which dynamic shift intervals are determined by considering qualitative correlations among related data.
AF and AF* use the same reference values and empirical fuzzy intervals. The formula for calculating membership degrees in AF (i.e., μ_r(d) = max{0, 1 − |d − r|/Δd}) is also similar to the formula for calculating possibility in AF* (i.e., μ_i = 1 − |d_i − r_jp|/Δd_i). However, in AF, Δd is simply an empirical fuzzy interval, while in AF*, Δd_i is a dynamic shift interval based on qualitative correlations among related data.
We have tested the system against several hundred real infrared spectra of organic compounds. The experimental results show that AF* is significantly better than AF.
Table 4 lists part of the experimental results, in which the first column indicates the solutions obtained by AF, the second column indicates the solutions obtained by AF*, and the third column shows the correct solutions.
Table 4: Experimental results with AF and AF* (columns: AF (without SCF), AF* (with SCF), correct solutions). Each row lists the partial components identified for one test compound, e.g. CH3-, -CH2-, -[CH2]n-, >C=CH-, -C=CH, -CH[CH3]2, NH2-, CCl2 and benzene groups. Marked entries: the identified PC set is the same as the PC set in the correct solution (in this case, RI = 1); otherwise a fraction gives the RI (values such as 2/3, 1/3, 4/5, 1/2, 3/4, 2/2 appear).
There are two important standard metrics for evaluating solutions of infrared spectrum interpretation: Definition 5.1 Rate of correctness (RC): the rate that the identified partial component set is exactly the same as the partial component set in the correct solutions. Definition 5.2 Rate of identification (RI): the rate that the partial components in the correct solutions are identified.
Table 5 shows the comparison between AF and AF* with the two standard metrics. Table 5 demonstrates that both the RC and RI increase by integrating SCF, but the RC increases more significantly. The reason is that although AF can identify most partial components of unknown compounds, the rate that it can identify all partial components of unknown compounds is low, because there are always some partial components whose measured peaks seriously shift from the reference values." }, { "figure_ref": [], "heading": "Comparison with Related Systems", "publication_ref": [], "table_ref": [], "text": "Related systems mainly fall into the following four categories: (1) systems based on Y/N classification, (2) systems based on fuzzy logic, (3) systems based on pattern recognition, and (4) systems based on neural networks." }, { "figure_ref": [], "heading": "Systems Based on Yes/No Classification", "publication_ref": [ "b3", "b6", "b10", "b16" ], "table_ref": [], "text": "The method commonly used by spectroscopists in practice is numerical analysis (Colthup, Daly, & Wiberley, 1990).
Numerical analysis is primarily based on comparison between spectral data and reference values. Reference values are usually regions like frequency: 3615 ± 5 cm^-1 or strength: 1.000 ± 0.300. If spectral data are in certain regions, the answer of the classification is yes; otherwise, the answer is no.
Most systems for interpreting infrared spectra use this method (Hasenoehrl, Perkins, & Griffiths, 1992; Puskar, Levine, & Lowry, 1986; Wythoff, Buck, & Tomellini, 1989). For example, in Wythoff's system, rules for comparing spectral data are in the following forms." }, { "figure_ref": [], "heading": "ANY PEAK(S)", "publication_ref": [], "table_ref": [], "text": "FREQUENCY:1700-1707 STRENGTH:0.7-1.0 WIDTH:SHARP TO BROAD ANSWER -YES- ACTION -*** The advantage of these systems is that they are very easy to develop, because they can directly use spectroscopic knowledge and do not need further computation. However, the problem is that each of these systems is only applicable to a class of compounds, or to pure compounds, because in the case of seriously inaccurate spectral data the reference values (regions) can not reflect the inaccuracy. For example, Hasenoehrl's system is only for distinguishing compounds containing at least one carbonyl functionality from other compounds, although the RI of the system is about 98% (naturally, the RC is not available), and Puskar's system is only for identifying hazardous substances.
In fact, spectroscopists also use qualitative analysis in some specific cases in addition to the formal spectroscopic knowledge, such as \"if the peaks in 600 cm^-1 - 900 cm^-1 look like the peaks of benzene-rings, then the peaks in 3000 cm^-1 - 3100 cm^-1 are quite likely to be created by a benzene-ring\". Unfortunately, this qualitative analysis was hardly applied in these systems, since it can not be used in the usual ways. In contrast, our system can successfully use qualitative analysis like spectroscopists do. The way of using it is the method proposed in this paper. As a result, our system is applicable to all compounds and exhibits high performance with respect to correctness." }, { "figure_ref": [ "fig_4" ], "heading": "Systems Based on Fuzzy Logic", "publication_ref": [ "b2" ], "table_ref": [], "text": "Since spectral data are always inaccurate, and the representation of spectroscopic knowledge is quite like that in fuzzy logic, some systems naturally use fuzzy logic or techniques similar to fuzzy logic (Clerc, Pretsch, & Zurcher, 1986). In these systems, fuzzy intervals which are similar to the regions described in Section 5.6.1 are given for reference values, and memberships of inaccurate data are calculated on the basis of the degrees to which the inaccurate data are in the fuzzy intervals. These systems are better than those described in Section 5.6.1 in some cases, but the degrees to which inaccurate data are in fuzzy intervals do not necessarily reflect the possibility of the inaccurate data being the reference values. For example, in Figure 7, it is difficult to determine which peak is closer to the reference value only by considering the degrees to which peak a and peak b are in the fuzzy interval. However, by applying the method proposed in this paper, the above problem can be easily solved. As we discussed in Section 5.6.1, in practice spectroscopists also frequently use knowledge about correlations among peaks in addition to the formalizable spectroscopic knowledge.
This kind of knowledge is essential to our method, which enables us to use qualitative correlations among related data as evidence for the identification of inaccurate data.
We have compared the fuzzy method used by these systems with our method in Section 5.5. So far as we know, the RC of our system is the highest among the similar systems, and the RI of our system is higher than that of most of the systems." }, { "figure_ref": [ "fig_5" ], "heading": "Systems Based on Pattern Recognition", "publication_ref": [ "b7" ], "table_ref": [], "text": "Some systems use pattern recognition techniques to interpret infrared spectra (Jalsovszky & Holly, 1988; Sadtler, 1988), of which Sadtler is the most popular commercial system. The system compares known patterns with unknown ones, and determines the possibility of an unknown pattern being a known one by calculating the quantitative similarity or closeness between the two patterns.
Unlike fuzzy techniques, pattern recognition considers a group of data (i.e., a pattern) at the same time. However, pattern recognition is primarily based on quantitative analysis. We have discussed that in many cases, especially when the inaccuracy of spectral data is not slight, qualitative features of spectral data are much more important than quantitative ones. For example, Figure 8 shows two simple cases. The difference between the two patterns in (a) is smaller than that in (b). From the viewpoint of Sadtler, the two patterns in (a) are closer than those in (b). However, the two patterns in (b) may be the same in some cases, while the two patterns in (a) may not be the same in any case. The reason is that the qualitative features (frequency positions of peaks) of the two patterns in (a) are different. Because quantitative similarity and closeness are not always sound, most systems based on pattern recognition, including Sadtler, can not give concrete solutions. In general, the solutions of these systems are only a series of candidates, from which users have to finally decide the likely one by themselves. It is difficult to compare these systems with ours because the solutions of these systems are quite loose, and neither the RC nor the RI is available. Sadtler, for example, usually gives the list of all known patterns associated with the values of quantitative differences between the unknown patterns and these known ones." }, { "figure_ref": [], "heading": "Systems Based on Neural Networks", "publication_ref": [ "b0", "b12" ], "table_ref": [], "text": "Recently, neural networks have been applied to infrared spectrum interpreting systems (Anand, Mehrotra, Mohan, & Ranka, 1991; Robb & Munk, 1990). In Anand's system, a neural network approach is used to analyze the presence of amino acids in protein molecules. For this specific classification, the RI of Anand's system is about 87%, and the RC is not available. In Robb's system, a linear neural network model is developed for interpreting infrared spectra. The system is for general purpose like our system. Without prior input of spectrum-structure correlations, the RC of Robb's system is equal to 53.3%.
Although the RC and RI of our system are both higher than those of the two systems, we still think that using neural networks is very promising, especially when model training or system learning is a must. The research concerning applying neural networks to our system is left for the future."
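To make the contrast with the fuzzy systems of Section 5.6.2 explicit, the following sketch juxtaposes the AF membership computation and the AF* possibility computation from Section 5.5; d_o = 0.300 follows Table 3, and the function names are our own:

```python
def mu_af(d, r, d_o):
    # AF: fixed empirical fuzzy interval.
    return max(0.0, 1 - abs(d - r) / d_o)

def mu_af_star(d, r, d_o, scf_i, m):
    # AF*: the same formula over a dynamic shift interval driven by SCF_i.
    delta = d_o * scf_i * (2 * m - 1) / m
    return 1 - abs(d - r) / delta

# The medium peak of the example (s_1 = 0.510 against reference 1.000):
print(mu_af(0.510, 1.000, 0.300))               # 0.0  -> AF rejects it
print(mu_af_star(0.510, 1.000, 0.300, 1.0, 3))  # 0.02 -> AF* identifies it
```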
}, { "figure_ref": [], "heading": "Related Work and Discussion", "publication_ref": [ "b1", "b8", "b17", "b7", "b5", "b15" ], "table_ref": [], "text": "Identifying inaccurate data has long been regarded as a significant and difficult problem in AI. Many methods and techniques have been proposed.\nFuzzy logic provides the mathematical fundamentals of the representation and calculation of inaccurate data (Bowen, Lai, & Bahler, 1992; Negoita & Ralescu, 1987; Zadeh, 1978). Our method is primarily based on fuzzy theory. Compared with conventional fuzzy techniques, however, the advantages of our method include: (1) fuzzy intervals of inaccurate data are dynamically determined, so that dynamic information can be used; (2) fuzzy intervals are based on qualitative features of data and qualitative correlations among related data, so that the solutions are more robust. The limitation of our method is that when qualitative correlations among related data are not known in advance, the method degenerates to a conventional fuzzy method. For instance, if SCF is unavailable, the two methods described in Section 5.5 become the same.\nPattern recognition provides techniques for interpreting measured data in groups (Jalsovszky & Holly, 1988). With pattern recognition methods, related data and connections among data can be considered. However, two preconditions must be satisfied for complex data analysis by pattern recognition to be successful. The first precondition is that adequate databases must be available from which the patterns to be recognized can be derived, and the second is that suitable metrics of similarity between patterns must be shown to exist. When patterns explicitly exist, and measured patterns are not seriously noisy (e.g., fingerprint recognition), pattern recognition methods are effective. However, if patterns are not explicit, or patterns change irregularly, which implies that there is no stable metric for determining the similarity between patterns (e.g., spectrum interpretation), our method is more practical and robust.\nIn identifying inaccurate data, the roles of \"d_i@R_j\" and \"R_j@MD\" are quite similar to the role of subjective statements or prior probabilities in other systems (Duda, Hart, & Nilsson, 1976; Shortliffe & Buchanan, 1975). However, the essential difference is that our method dynamically calculates the values of \"d_i@R_j\" and \"R_j@MD\" from qualitative correlations among related data, so that it does not need many assumptions beforehand, and can avoid inconsistency in knowledge and data bases. Our method can also handle possibility propagation within inference networks. Readers may have noticed this in the process of considering the second kind of related data in spectrum interpretation (see Section 5.4.2).\nWhen statistical samples are sufficient, or subjective statements can be consistently obtained, probabilistic reasoning methods can be applied to inaccurate data identification. When statistical samples of inaccurate data are not sufficient and consistent subjective statements are not available, our method is very effective.\nOur ongoing research related to probabilistic reasoning is to consider the interaction among identified partial components. As we discussed before, spectroscopists frequently use knowledge such as \"if C6H6 coexists with CH3, then the peaks of CH3 around 2900 cm⁻¹ may shift\", or \"if -C-O-C- has been identified, then the strength of the peaks of CH3 may change\".
Therefore, it is possible to update the possibilities of identified partial components by considering the interaction among them. Using probabilistic reasoning to analyze the effects among identified partial components would not only help us identify inaccurate data, but would also provide us with the reason why the data are inaccurate. The research and experiments will be the subject of a sequel paper." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we have presented a new method for identifying inaccurate data on the basis of qualitative correlations among related data. We first introduced a new concept called the support coefficient function (SCF). Then, we proposed an approach to determining dynamic shift intervals of inaccurate data based on SCF, and an approach to calculating the possibility of identifying inaccurate data, respectively. We also presented an algorithm for using qualitative correlations among related data as confirmatory or disconfirmatory evidence for the identification of inaccurate data. We have developed a practical system for interpreting infrared spectra by applying the proposed method, and have fully tested the system against several hundred real spectra. The experimental results show that the proposed method is significantly better than the conventional methods used in many similar systems. In this paper we have also described the system and the experimental results.\nBriefly, our novel work includes:\n1. A method which assumes an inaccurate data item to be a certain reference value on the basis of qualitative correlations between the inaccurate data item and all of its related data.\n2. An algorithm which crystallizes the method.\n3. A practical system which uses the algorithm to interpret infrared spectra." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "Thanks to the editors and anonymous reviewers of JAIR for their helpful comments and suggestions, and to Chunling Sui and Mitchell Bradt for proofreading the manuscript. This research was partially supported by Horiba Ltd., Kyoto, Japan, and the first author wishes to thank ASTEM Research Institute, Kyoto, Japan, where he worked as a researcher in 1991-1994." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "if λ_i > 0 return λ_i; else return NIL end procedure\nWhen d_i can be identified with a certain possibility (i.e., λ_i > 0), the procedure returns T (i.e., the value of λ_i); otherwise, the procedure returns F." }, { "figure_ref": [], "heading": "Predicate \"R_j@MD\"", "publication_ref": [ "b2" ], "table_ref": [], "text": "When MD is accurate, \"R_j@MD\" is equal to \"R_j ⊆ MD\". If all the m reference values in R_j can be identified from MD, then R_j@MD = T; otherwise R_j@MD = F. When MD is inaccurate, however, \"R_j@MD\" means that R_j is possibly (qualitatively) a subset of MD. The value of \"R_j@MD\" is then not T or F, but the possibility that all the reference values in R_j can be identified from MD.\nIf λ_l > 0 (l = 1, 2, ..., m), then R_j can be regarded as a subset of MD with a certain possibility.
Let s_1, s_2, ..., s_m be the priorities of the reference values in R_j; then the value of \"R_j@MD\" can be calculated from λ_1, λ_2, ..., λ_m by the following formula: R_j@MD = (Σ_{l=1}^{m} s_l × λ_l) / (Σ_{l=1}^{m} s_l), with s_l > 0 and λ_l > 0.\nSuppose λ_p has been calculated by using procedure d_i@R_j; then the process of realizing \"R_j@MD\" and calculating its value can be expressed by a simple procedure.\nProcedure R_j@MD\nP = s_p × λ_p; S = s_p;\nfor l = 1 to m (l ≠ p) {\nλ_l = d_t@R_j;\nif λ_l > 0 { P = P + s_l × λ_l; S = S + s_l; }\nelse { P = 0; exit; }\n}\nif P > 0 return P/S; else return NIL\nend procedure\nBy considering SCF_{s_1}, the possibility of p_1 being regarded as a strong peak of a benzene-ring increases from 0 to 0.02. As a possibility, 0.02 may not differ much from 0.04 or 0.06, but 0.02 is significantly different from 0. Many near-misses can be handled by such negligible possibilities. For example, in most systems based on fuzzy and other methods (Clerc, Pretsch, & Zurcher, 1986), it is impossible to identify p_1 as \"strong\" (i.e., benzene-ring(s_1) = 0), but considering qualitative correlations among related data makes it possible, although the possibility is only 0.02.\nAs mentioned before, f_1 and w_1 are both the same as the reference values, so f_1@p_{b1} = 1 and w_1@p_{b1} = 1.\nSuppose the priorities of f_1, s_1 and w_1 are 2, 1 and 1, respectively; then the possibility of p_1 being identified as p_{b1} is:\nλ_1 = (2 × 1 + 1 × 0.02 + 1 × 1) / (2 + 1 + 1) = 0.755." }, { "figure_ref": [], "heading": "Case II: Considering the Second Kind of Related Data", "publication_ref": [], "table_ref": [], "text": "The process of considering the second kind of related data is quite similar.\nWe have obtained that the possibility of p_1 being created by a benzene-ring is λ_1 (λ_1 = 0.755).\nSuppose the benzene-ring can create m peaks {p_{b1}, p_{b2}, ..., p_{bm}}; then the m peaks are related to each other. If p_1 is created by the benzene-ring, then Sp is partially created by the benzene-ring, i.e., the benzene-ring is contained in the unknown spectrum; and if Sp is partially created by the benzene-ring, then the other m − 1 peaks of the benzene-ring should also be identified. By using the same procedure as for obtaining λ_1, we can get λ_2, λ_3, ..., λ_m as well. According to our method, the qualitative correlation between two related peaks, p_i and p_j, is defined as follows. Let d_o = 1; then Δd_i = ((2m − 1)/m) × SCF_i, with 0 < Δd_i < 2, and p_i@benzene-ring = 1 − (1 − λ_i)/Δd_i, with p_i@benzene-ring ≤ 1." } ]
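The computations of Section 5 can be checked with a short sketch (our own code with assumed function names; the inputs are chosen to reproduce the worked lecture example and the Δd_i = 1.3 column of Table 2):

```python
def shift_interval(m, scf, d_o=1.0):
    """Dynamic shift interval: Delta_d_i = ((2m - 1)/m) * d_o * SCF_i."""
    return (2 * m - 1) / m * d_o * scf

def possibility(mu, delta):
    """lambda_i = 1 - (1 - mu_i)/Delta_d_i, clipped to [0, 1]."""
    return max(0.0, min(1.0, 1.0 - (1.0 - mu) / delta))

def rj_at_md(priorities, lambdas):
    """R_j@MD: priority-weighted average, or NIL (None) if any lambda_l <= 0."""
    if any(lam <= 0 for lam in lambdas):
        return None
    return sum(s * lam for s, lam in zip(priorities, lambdas)) / sum(priorities)

delta = shift_interval(m=3, scf=0.78)             # 1.3 (illustrative inputs)
print(round(possibility(0.8, delta), 3))          # 0.846, matching Table 2
print(rj_at_md([2, 1, 1], [1.0, 0.02, 1.0]))      # 0.755, the lecture example above
```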
[ { "authors": "R Anand; K Mehrotra; C K Mohan; S Ranka", "journal": "", "ref_id": "b0", "title": "Analyzing Images Containing Multiple Sparse Patterns with Neural Networks", "year": "1991" }, { "authors": "J Bowen; R Lai; D Bahler", "journal": "", "ref_id": "b1", "title": "Lexical Imprecision in Fuzzy Constraint Networks", "year": "1992" }, { "authors": "J T Clerc; E Pretsch; M Zurcher", "journal": "Mikrochim. Acta", "ref_id": "b2", "title": "Performance Analysis of Infrared Library Search Systems", "year": "1986" }, { "authors": "L Colthup; H Daly; S E Wiberley", "journal": "Academic Press", "ref_id": "b3", "title": "Introduction to Infrared and Raman Spectroscopy", "year": "1990" }, { "authors": "A P Dempster", "journal": "Journal of the Royal Statistical Society", "ref_id": "b4", "title": "A Generalization of Bayesian Inference", "year": "1968" }, { "authors": "R O Duda; P E Hart; N J Nilsson", "journal": "", "ref_id": "b5", "title": "Subjective Bayesian Methods for Rule-Based Inference Systems", "year": "1976" }, { "authors": "E J Hasenoehrl; J H Perkins; P R Griths", "journal": "Journal of Anal. Chem", "ref_id": "b6", "title": "Expert System Based on Principal Components Analysis for the Identication of Molecular Structures from Vapor-Phase Infrared Spectra", "year": "1992" }, { "authors": "G Jalsovszky; G Holly", "journal": "Journal of Molecular Structure", "ref_id": "b7", "title": "Pattern Recognition Applied to Vapour-Phase Infrared Spectra: Characteristics of vOH Bands", "year": "1988" }, { "authors": "C V Negoita; D Ralescu", "journal": "", "ref_id": "b8", "title": "Simulation, Knowledge-Based Computing, and Fuzzy Statistics", "year": "1987" }, { "authors": "J Pearl", "journal": "", "ref_id": "b9", "title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference", "year": "1988" }, { "authors": "M A Puskar; S P Levine; S R Lowry", "journal": "Journal of Anal. Chem", "ref_id": "b10", "title": "Computerized Infrared Spectral Identication of Compounds Frequently Found at Hazardous Waste Sites", "year": "1986" }, { "authors": "R Reiter", "journal": "Articial Intelligence", "ref_id": "b11", "title": "A Theory of Diagnosis From First Principles", "year": "1987" }, { "authors": "E W Robb; M E Munk", "journal": "Mikrochim. Acta", "ref_id": "b12", "title": "A Neural Network Approach to Infrared Spectrum Interpretation", "year": "1990" }, { "authors": "", "journal": "", "ref_id": "b13", "title": "Sadtler PC Spectral Search Libraries, Product Introduction & User's Manual", "year": "1988" }, { "authors": "G Shafer", "journal": "Princeton Uni. 
Press", "ref_id": "b14", "title": "A Mathematical Theory of Evidence", "year": "1976" }, { "authors": "E H Shortlie; B G Buchanan", "journal": "Mathematical Biosciences", "ref_id": "b15", "title": "A Model of Inexact Reasoning in Medicine", "year": "1975" }, { "authors": "B J Wytho; C F Buck; S A Tomellini", "journal": "Analytica Chimica Acta", "ref_id": "b16", "title": "Descriptive Interactive Computer-Assisted Interpretation of Infrared Spectra", "year": "1989" }, { "authors": "L A Zadeh", "journal": "Fuzzy Sets Syst", "ref_id": "b17", "title": "Fuzzy Set as a Basis for a Theory of Possibility", "year": "1978" }, { "authors": "Q Zhao", "journal": "", "ref_id": "b18", "title": "An Ecient Method of Solving Constraint Satisfaction Problems in IR Spectrum Interpretation", "year": "1994" }, { "authors": "Q Zhao; T Nishida", "journal": "", "ref_id": "b19", "title": "A Knowledge Model for Infrared Spectrum Processing", "year": "1994" } ]
[ { "formula_coordinates": [ 4, 246.24, 605.22, 119.48, 17.24 ], "formula_id": "formula_0", "formula_text": "RV = R 1 [ R 2 [ ::: [ R k ;" }, { "formula_coordinates": [ 5, 229.68, 178.72, 152.67, 12.74 ], "formula_id": "formula_1", "formula_text": "IN(M D); (IN(MD) RV ):" }, { "formula_coordinates": [ 5, 185.4, 235.86, 241.17, 17.05 ], "formula_id": "formula_2", "formula_text": "8d i 8R j ((d i @R j ) ^(R j @MD) ! R j IN (MD)) 3 ;" }, { "formula_coordinates": [ 6, 159.48, 177.73, 293.22, 137.58 ], "formula_id": "formula_3", "formula_text": "f i s i w i Figure 1: Example of related data in spectrum interpretation" }, { "formula_coordinates": [ 6, 233.64, 561.8, 144.72, 36.9 ], "formula_id": "formula_4", "formula_text": "SCF i = ( m X j=1;j6 =i (d i ; d j ); m):" }, { "formula_coordinates": [ 8, 96.84, 261.16, 37.38, 14.08 ], "formula_id": "formula_5", "formula_text": "c i (d t ) =" }, { "formula_coordinates": [ 8, 150.3, 254.5, 5.45, 12.74 ], "formula_id": "formula_6", "formula_text": "1" }, { "formula_coordinates": [ 8, 241.2, 414.2, 128.27, 38.71 ], "formula_id": "formula_7", "formula_text": "SCF i = 1 + P m t=1;t6 =i c i (d t ) m" }, { "formula_coordinates": [ 8, 242.1, 607.92, 127.26, 30.56 ], "formula_id": "formula_8", "formula_text": "4d i = (2m 0 1)d o m 2 SCF i" }, { "formula_coordinates": [ 9, 161.9, 452.43, 278.3, 211.78 ], "formula_id": "formula_9", "formula_text": "d o d o 2 0 m SCF i = 1 SCF i = 0.5 SCF i = 0.3 SCF i = 0.1 d i Figure 2: 4d i versus m with dierent SCF i" }, { "formula_coordinates": [ 11, 132.62, 115.41, 187.22, 275.69 ], "formula_id": "formula_10", "formula_text": "1 0 u i u i u i d i d i d i d i r j p" }, { "formula_coordinates": [ 11, 167.22, 499.56, 208.24, 143.23 ], "formula_id": "formula_11", "formula_text": "if d i = r jp f SCF i = 1; i = 1; g elsef for each r j l 2 R j (l = 1; :::; m; l 6 = p)f calculate c i (d t ) 4 ; SCF i = SCF i + c i (d t ); g SCF i = (1 + SCF i )=m;" }, { "formula_coordinates": [ 13, 167.22, 236.4, 286.38, 239.54 ], "formula_id": "formula_12", "formula_text": "IN (MD) = ;; f or i = 1 to n f for j = 1 to k f P (R j ) = 0; if d i @R j (i:e:; Procedure d i @R j ) if R j @MD (i:e:; Procedure R j @MD) f R j ! IN (MD); P (R j ) = R j @MD; g end if end if g end for g end f or" }, { "formula_coordinates": [ 21, 120.3, 132.95, 372.31, 506.43 ], "formula_id": "formula_13", "formula_text": "-CH[CH3]2 NH2- -CH2- -CH2- -CH2- CH3- CH3- CH3- -[CH2]n- -[CH2]n- >C=CH- >C=CH- >C=CH- -C=CH -C- -C- CH3 CH3 -CH CH3 CH3 -CH -CH2- CH3- -CH2- CH3- -CH2- -CH2- -CH2- CH3- CH3- CH3- -CH[CH3]2 C Cl Cl CH3- NH2- -CH2- -CH2- -CH2- CH3- CH3- CH3- -[CH2]n- -[CH2]n- >C=CH- >C=CH- >C=CH- -C- CH3 CH3 -CH -CH2- CH3- CH3- CH3- -C- CH3 CH3 -CH -CH2-CH3- -CH2- -CH2- -CH2- -CH2- CH3- CH3- -C=CH CH3- C Cl Cl -C=C- CH3- CH3- -C- -CH2- -CH2- -CH2- CH3- CH3- CH3- -[CH2]n- -[CH2]n- >C=CH- >C=CH- >C=CH- -C- CH3 CH3 -CH -CH2- CH3- CH3- -C- CH3 CH3 -CH -CH2- CH3- -CH2- -CH2- -CH2- CH3- CH3- CH3- CH3- -C=CH -CH[CH3]2 CH3- C Cl Cl -C=C- CH3- NH2- -C- 2/3 1/3 4/5 2/3 2/3 1/2 3/4 3/4 2/3 2/3 2/3 2/2" }, { "formula_coordinates": [ 21, 129.35, 101.6, 358.32, 11.42 ], "formula_id": "formula_14", "formula_text": "AF (Without SCF) AF* (With SCF) Correct Solutions" } ]
Using Qualitative Hypotheses to Identify Inaccurate Data
Identifying inaccurate data has long been regarded as a significant and difficult problem in AI. In this paper, we present a new method for identifying inaccurate data on the basis of qualitative correlations among related data. First, we introduce the definitions of related data and of qualitative correlations among related data. Then we put forward a new concept called the support coefficient function (SCF). SCF can be used to extract, represent, and calculate qualitative correlations among related data within a dataset. We propose an approach to determining dynamic shift intervals of inaccurate data, and an approach to calculating the possibility of identifying inaccurate data, respectively. Both approaches are based on SCF. Finally, we present an algorithm for identifying inaccurate data by using qualitative correlations among related data as confirmatory or disconfirmatory evidence. We have developed a practical system for interpreting infrared spectra by applying the method, and have fully tested the system against several hundred real spectra. The experimental results show that the method is significantly better than the conventional methods used in many similar systems.
Qi Zhao; Toyoaki Nishida
[ { "figure_caption": "Figure 3 :3Figure 3: 4d i versus SCF i with dierent m", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Value of \\d i @R j \" versus various 4d i", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Data ow diagram of the system", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: An example of infrared spectrum", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Two peaks in a fuzzy interval", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Quantitative dierences between patterns", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "if d t can be found from MD which satises: r jq 0 d o d t r jq + d o 0 if d t can not be found from M D which satises: r j q 0 d o d t r j q + d o where d o is a standard fuzzy interval of inaccurate data, and c i (d t ) expresses the qualitative correlation between d i and d t . c i (d t )=1 means that d i is enhanced since its related data item d t can be found from the measured dataset, and c i (d t )=0 means that d i is depressed since its related data item d t can not be found from the measured dataset. The denition of c", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "shows the relation among 4d i , m and SCF i . Relation among 4d i , m and SCF i", "figure_data": "4d i11050m10050010001 0.8 SCF i 0.5 0.3 0.1d o / / / /1.9000d o 1.9800d o 1.9900d o 1.9980d o 1.9990d o 1.5200d o 1.5840d o 1.5920d o 1.5984d o 1.5992d o 0.9500d o 0.9900d o 0.9950d o 0.9990d o 0.9995d o 0.5700d o 0.5940d o 0.5970d o 0.5994d o 0.5997d o 0.1900d o 0.1980d o 0.1990d o 0.1998d o 0.1999d o", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Relation among p i @benzene 0 ring, i and 4d i", "figure_data": "10.8i 0.50.301.310.8460.6150.4620.2314d i1.1 1 0.91 1 10.818 0.8 0.7780.545 0.5 0.4440.364 0.3 0.2220.091 0 -0.1110.710.7140.2860-0.429", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Evaluation of AF & AF 3 with RC and RI Table", "figure_data": "RC (error-rate) RI (error-rate)AF AF 30.455 0.736(0.545) (0.264)0.812 0.894(0.188) (0.106)", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b4", "b25", "b33", "b37", "b30", "b7", "b45", "b27", "b12", "b31", "b16", "b36" ], "table_ref": [], "text": "Induction and deduction are both underlying processes in intelligent agents. Induction \\involves intellectual leaps from the particular to the general\" (D'Ignazio & Wold, 1984). It plays an important part in knowledge acquisition or learning. D' Ignazio and Wold (1984) claim that indeed, \\All the laws of nature were discovered by inductive reasoning.\" Deduction is a form of reasoning with and about acquired knowledge. It typically does not result in the generation of new facts, rather it establishes cause-e ect relationships between existing facts. Deduction may be applied forward by seeking the consequences of certain existing hypotheses or backward to discover the necessary conditions for the achievement of certain goals. Despite their di erences, induction and deduction are strongly interrelated. The ability to reason about a domain of knowledge is often based on rules about that domain, that must be acquired somehow; and the ability to reason can often guide the acquisition of new knowledge or learning.\nInductive learning has been the subject of much research leading to the design of a variety of algorithms (e.g., Clark & Niblett, 1989;Michalski, 1983;Quinlan, 1986;Salzberg, 1991). In general, inductive learning systems generate classi cation rules from examples. Typically, the system is rst presented a set of examples (objects, situations, etc.), also known as a training set. Examples are usually expressed in the attribute-value language and represent recorded instances of attribute-value pairs together with their corresponding classi cation. The system's goal is then to discover sets of su cient critical features or rules that properly classify the examples of the training set (convergence) and adequately extend to previously unseen examples (generalization).\nThough machines are still a far cry from matching human qualitative inductive leaps, inductive learning systems have proven useful over a wide range of applications in medicine c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.\n(breast cancer, hepatitis detection), banking (credit screening), defense (mine-rock discrimination), botany (iris variety identi cation, venomous mushroom detection) and others (Murphy & Aha, 1992).\nThe study of deductive reasoning goes at least as far back as the early Greek philosophers, such as Socrates and Aristotle. Its formalization has given rise to a variety of logics, from propositional to rst-order predicate logic to default logic to several non-monotonic extensions. Many of these logics have been successfully implemented in arti cial systems (e.g., PROLOG, expert systems). They typically consist of a pre-encoded knowledge or rule base, a given set of facts (identi ed as either causes or consequences) and some inference engine. The inference engine carries out the deductive process using the rules in the rule base and the facts it is provided. Several of these systems have been successfully used in various domains, such as medical diagnosis (Clancey & Shortli e, 1984) and geology (Duda & Reboh, 1984).\nOne of the greatest challenges of current deductive systems is knowledge acquisition, that is, the construction of the rule base. Typically, the rule base is generated as domain knowledge is extracted from human experts and carefully engineered into rules. 
Knowledge acquisition is a tedious task that presents many difficulties, both practical and theoretical. If a sufficiently rich training set can be obtained, then inductive learning may be used effectively to complement the traditional approach to knowledge acquisition. Indeed, a system's knowledge base can be constructed from both rules encoded a priori and rules generated inductively from examples. In other words, rules and examples need not be mutually exclusive. The strong knowledge principle (Waterman, 1986) and early work on bias (Mitchell, 1980) suggest the need for prior knowledge. Rules supplied a priori are one simple form of prior knowledge that has been used successfully in several inductive systems (e.g., Giraud-Carrier & Martinez, 1993; Ourston & Mooney, 1990). Similarly, proposals have been made to enhance deductive systems with learning capabilities (e.g., Haas & Hendrix, 1983; Rychener, 1983).\nIt is these authors' contention that the study of the interdependencies between learning and reasoning, and the subsequent integration of induction and deduction into unified frameworks, may lead to the development of more powerful models. This paper describes a system, called FLARE (Framework for Learning And REasoning), that attempts to combine inductive learning using prior knowledge together with reasoning. Induction and deduction in FLARE are carried out within the confines of non-recursive, propositional logic. Learning is effected incrementally as the system continually adapts to new information. Prior knowledge is given by a teacher in the form of rules. Within the context of a particular inductive task, these rules may serve to produce useful learning biases. Simple defaults combined with learning capabilities enable FLARE to exhibit reasoning that is normally considered non-monotonic.\nThe paper is organized as follows. Section 2 presents FLARE and argues the validity of the unified framework. FLARE's representation language is described and the algorithms employed in learning and reasoning are detailed. Section 3 reports experimental results on classical datasets, a number of \"well-designed\" reasoning protocols and several other applications, including two simple expert systems. Some of the limitations of the system are also described. Section 4 discusses related work in induction and deduction. Finally, Section 5 concludes the paper by summarizing the results and discussing further research." }, { "figure_ref": [], "heading": "FLARE -A Framework for Learning and Reasoning", "publication_ref": [], "table_ref": [], "text": "In this section, FLARE's learning and reasoning mechanisms are detailed. A description and discussion of FLARE's representation language are given first in Section 2.1, along with some useful definitions and a simple, practical example that will serve as a running example throughout the paper. Sections 2.2 to 2.5 then follow a top-down approach to the description of FLARE.\n2.1 FLARE's Representation Language FLARE's representation language is an instance of the attribute-value language (AVL). In FLARE, attributes may range over nominal domains and bounded linear domains, including closed intervals of continuous numeric values. The basic elements of knowledge in AVL are vectors defined over the cross-product of the domains of the attributes. The components of a vector specify a value for each attribute. The following simple extension is made to AVL.\nIf A is an attribute and D is the domain of A, then A takes on values from D ∪ {*, ?}. The special symbols * and ?
stand for don't-care and don't-know, respectively.\nThe semantics associated with * and ? are different. An attribute whose value is * is one that is known (or assumed) to be irrelevant in the current context, while an attribute whose value is ? may be relevant but its actual value is currently unknown. The * symbol allows the encoding of rules, while the ? symbol accounts for missing attribute values in real-world observations." }, { "figure_ref": [], "heading": "First-Order to Attribute-Value Translation", "publication_ref": [ "b4", "b25", "b33", "b8", "b25" ], "table_ref": [ "tab_0", "tab_1", "tab_2", "tab_4" ], "text": "Since learning and reasoning tasks are often expressed in English with simple, direct counterparts in the classical first-order logic language (FOL), it is necessary for FLARE to translate FOL clauses into their AVL equivalent. AVL is clearly not as expressive as FOL, so FLARE has some inherent limitations. For the purposes of this discussion, let predicates of the form p(x) and p(x, C), where C is a constant, be called avl-predicates. Then, the FOL clauses that can be translated into AVL are of two kinds:\n1. ground facts: p(C) or ¬p(C), where C is a constant (e.g., block(A)).\n2. simple implications: (∀x) P(x) ⇒ q(x), where P(x) is a conjunction of avl-predicates and q(x) is, without loss of generality, a single, possibly negated avl-predicate (e.g., block(x) ∧ weight(x, heavy) ⇒ ¬on_table(x)).\nAll clauses involve at most one universally quantified variable and are thus essentially non-recursive, propositional clauses. Despite its restricted language, FLARE effectively handles a significant range of applications. Moreover, AVL accounts for simple, efficient matching mechanisms and lends itself naturally to many inductive learning problems, as witnessed by its use in many successful learning systems (Clark & Niblett, 1989; Michalski, 1983; Quinlan, 1986). FOL statements of the aforementioned forms are translated in a straightforward way into an equivalent symbolic-valued AVL representation, as shown in Figure 1. A similar transformation has been proposed in the context of ILP (Džeroski, Muggleton, & Russell, 1993). Like FLARE, some ILP systems, such as LINUS (Lavrač, Džeroski, & Grobelnik, 1991), rely on such a transformation.\n1. Attribute definition: For each avl-predicate, create a matching Boolean (for p(x)) or multi-valued (for p(x, C)) attribute. If there are ground facts, create a multi-valued attribute, called label, whose values are those of the constants.\n2. Vector definition: For each implication, create a matching vector where attributes corresponding to premise and conclusion have their appropriate value and all other attributes are set to *. For each ground fact, create a matching vector where the value of label is that of the constant and the attribute corresponding to the predicate has its appropriate value. Tag the attribute corresponding to the conclusion.\nThe creation of attribute label in step 2 stems from the fact that ground facts of the form p(C) can be rewritten as simple implications of the form label(x, C) ⇒ p(x). Notice how the attributes whose values are * in a vector correspond exactly to those predicates that do not appear in the premise of the corresponding FOL clause. The attribute corresponding to q(x) has different usages. It functions as a conclusion during forward chaining and as a target classification during inductive learning.
In some cases, it can also be used as a goal.\nTo avoid unnecessary confusion, the attribute corresponding to q(x) is simply referred to as the target-attribute. The values of the target-attribute are subsequently tagged with the subscript T. The translation from FOL to AVL is currently performed manually.\nIt is clear that as the number of predicates increases, so does the size of the vectors. Since all vectors are of the same size and many of them may only have values set for a relatively small number of their attributes, this may result in large memory requirements, as well as in an increase in the execution time of operations on vectors. When there are predicates that qualify different values of the same concept (e.g., red(x), yellow(x), for color), it is possible to limit the size of the vectors by translating such predicates into a single multi-valued attribute (e.g., color(x, V), where V is a constant: red, yellow, etc.). This is particularly useful for the conclusion part q(x) when it corresponds to a classification for x.\nTables 1 through 4 contain four simple examples that demonstrate the transformation. Each derived attribute in the AVL column is followed by its type (b for Boolean, m for multi-valued). Table 1 shows the Nixon Diamond, a classical example of conflicting defaults. Informally, the Nixon Diamond states that Republicans are typically not pacifist but Quakers are typically pacifist. The conflict then arises as one asserts that Nixon is both a Republican and a Quaker. Table 2 contains assertions about animals and their ability to fly. It states that animals normally do not fly, birds are typically flying animals and penguins are birds that do not fly. Table 3 shows statements regarding eyes and their fitness for lenses. Finally, Table 4 contains some facts about a simple blocks world. Informally, the problem of supervised learning may be described as follows. Given (1) a set of categories, (2) for each category, a set of instances of \"objects\" in that category and (3) optional prior knowledge, produce a set of rules sufficient to place objects in their correct category. In AVL, instances consist of sets of attribute-value pairs, or vectors, describing characteristics of the objects they represent, together with the object's category. In this context, the category is a target-attribute. An example is a vector in which all attributes are set to either ? or one of their possible values. A rule is a vector in which some of the attributes have become * as a result of generalization during inductive learning. A precept is similar to a rule but, unlike a rule, it is not induced from examples. Precepts are either given by a teacher or deduced from general knowledge relevant to the domain under study. In the context of a given rule or precept, the * attributes have no effect on the value of the category. Precepts and rules thus represent several examples. For instance, let p = (*, 1, 0, 0_T) be a precept, where all attributes range over the set {0,1,2}. Then p represents the three examples: (0, 1, 0, 0_T), (1, 1, 0, 0_T) and (2, 1, 0, 0_T).\nTable 2 translates into AVL as follows (attributes Ani (b), Bir (b), Pen (b), Fly (b)): Animal(x) ⇒ ¬Fly(x): (1, *, *, 0_T); Bird(x) ⇒ Animal(x): (1_T, 1, *, *); Bird(x) ⇒ Fly(x): (*, 1, *, 1_T); Penguin(x) ⇒ Bird(x): (*, 1_T, 1, *); Penguin(x) ⇒ ¬Fly(x): (*, *, 1, 0_T).\nThe distinction between rules and precepts is limited to learning. In reasoning, all vectors (including examples that do not generalize) are rules.
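The expansion of a precept into the examples it represents can be written down directly; the sketch below (our own helper, not FLARE code) reproduces the three examples listed above:

```python
from itertools import product

# Expand a precept into the ground examples it represents: every '*' (don't-care)
# attribute ranges over the whole domain, here {0, 1, 2} for every attribute as
# in the example above.

def expand(precept, domain=(0, 1, 2)):
    slots = [domain if v == "*" else (v,) for v in precept]
    return [tuple(combo) for combo in product(*slots)]

p = ("*", 1, 0, "0_T")           # the precept p = (*, 1, 0, 0_T)
for example in expand(p):
    print(example)                # (0, 1, 0, '0_T'), (1, 1, 0, '0_T'), (2, 1, 0, '0_T')
```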
In FLARE, rules are formed by dropping conditions (Michalski, 1983); that is, under certain circumstances (see Section 2.4.2), one attribute is set to *. Precepts, on the other hand, are rules encoded a priori.\nThey reflect some high-level knowledge (or common sense) about the real world. A precept \"suggests something advisory and not obligatory communicated typically through teaching\" (Webster's Dictionary)." }, { "figure_ref": [], "heading": "Running Example", "publication_ref": [ "b18", "b22" ], "table_ref": [ "tab_5" ], "text": "To illustrate the above definitions and the algorithms of the following sections, a final example of the transformation is constructed, based on the mediadv knowledge base (Harmon & King, 1985). This purposely simple example will serve as a running example throughout the paper. A discussion of the complete mediadv knowledge base is in Section 3.5. Here, two conditions (i.e., instructional feedback and presentation modification) are left out and only a few of the original rules are used. Table 5 contains the informal English version of the knowledge used (with reference to the rules of mediadv it was generated from, when applicable) and its corresponding translation into AVL vectors.\nLet KB be the resulting set of vectors. The attributes are given in the order: situation, stimulus-situation, response, appropriate-response, stimulus-duration, training-budget and media. Note that all the attributes are nominal. The symbolic values used in the English statements are transformed into equivalent nominal values in the vectors. Hence, for example, the first statement gives rise to a vector in which the attribute situation is set to 0 (the corresponding nominal value of schematics for this attribute), and the target-attribute stimulus-situation is set to 0 (the corresponding nominal value of symbolic for this attribute).\nThe top goal is for the system to suggest the most effective media for training, based on four conditions: situation, response, stimulus-duration, and training-budget. Note that the attributes stimulus-situation and appropriate-response can be used as subgoals in reaching the final conclusion. Vectors v_13 to v_17 are examples, since all of their conditions have set values. They are not part of the original mediadv knowledge base but are added to exercise important features of the algorithms. As KB is given, all vectors of KB with condition attributes set to * are precepts rather than rules. The term rule then applies to new generalizations induced by FLARE from KB. For instance, v_7 and v_12 are precepts, while v'_8 (see Section 2.4.3) is a rule." }, { "figure_ref": [ "fig_2" ], "heading": "Algorithmic Overview", "publication_ref": [], "table_ref": [], "text": "FLARE is a self-adaptive, incremental system. It uses domain knowledge and empirical evidence to construct and maintain its knowledge base. FLARE's knowledge base is interpreted as a \"best so far\" set of rules for coping with the current application. In that sense, FLARE follows the scientific approach to theory formation/revision: available prior knowledge and experience produce a \"theory\" that is updated or refined continually by new evidence.\nFLARE involves three main functions whose definitions and high-level algorithmic interactions are given in Figure 2. The details of each function's implementation are given in the following sections. An intuitive overview is presented here."
}, { "figure_ref": [], "heading": "English Statements", "publication_ref": [], "table_ref": [], "text": "Equivalent AVL Vectors:\nIf situation = schematics Then stimulus-situation = symbolic (Rule3): v_1 = (0, 0_T, *, *, *, *, *)\nIf situation = conversation Then stimulus-situation = verbal (Rule4): v_2 = (1, 1_T, *, *, *, *, *)\nIf situation = photograph Then stimulus-situation = pictorial (Rule2): v_3 = (2, 2_T, *, *, *, *, *)\nIf response = observing or response = thinking Then appropriate-response = covert (Rule5): v_4 = (*, *, 0, 0_T, *, *, *), v_5 = (*, *, 1, 0_T, *, *, *)\nIf response = emoting Then appropriate-response = affective (Rule10): v_6 = (*, *, 2, 1_T, *, *, *)\nIf stimulus-situation = verbal and appropriate-response = covert and stimulus-duration = brief Then media = lecture (Rule13): v_7 = (*, 1, *, 0, 0, *, 2_T)\nIf stimulus-situation = verbal or stimulus-situation = symbolic or stimulus-situation = pictorial, and appropriate-response = covert and stimulus-duration = (Rule14): v_8 = (*, 1, *, 0, 0, 1, 3_T), v_9 = (*, 0, *, 0, 0, 1, 3_T), v_10 = (*, 2, *, 0, 0, 1, 3_T)\nConceptually, FLARE's execution consists of two phases. In the preprocessing phase, FLARE uses prior knowledge in the form of general rules that may be viewed as encoding \"commonsense\" knowledge. Using deduction from given facts, domain-specific precepts are generated as an instantiation of the general knowledge to the domain at hand. Section 2.5 details the Generate-Precepts function. The need for generating and explicitly encoding precepts as individual vectors in such a preprocessing phase arises because FLARE's inductive mechanisms take place at the vector level. Thus, even though it is always possible to deduce them from the general knowledge, precepts are most useful in induction when they are made explicit.\nIn normal processing, FLARE executes an, at least conceptually, infinite loop. Steps (a) and (b) are executed every time new information (in the form of AVL vectors) is presented to the system. In step (a), FLARE reasons from the \"facts\" provided by the input vector and the rules found in the current knowledge base. Rule-based reasoning and similarity-based reasoning are combined, as discussed in Section 2.3, to derive a value for the target-attribute, as well as for other attributes along the forward chain to the conclusion. In step (b), FLARE adapts its current knowledge base. Because FLARE is a supervised learner, it can only adapt when a target value for the target-attribute is explicitly given as part of the information presented. The combination of steps (a) and (b) is referred to as learning. Section 2.4 details the Adapting function.\nNote that reasoning based upon available knowledge prior to adapting is plausible. Even when available information is insufficient and/or incomplete, humans often attempt to make a tentative decision and get corrected if necessary. At any one time, the decision made represents a kind of \"best guess\" given currently available information. The more (correct) information becomes available, the more accurate decisions become." }, { "figure_ref": [], "heading": "FLARE's Reasoning", "publication_ref": [ "b40", "b40", "b26", "b47" ], "table_ref": [], "text": "FLARE implements a simple form of rule-based reasoning combined with similarity-based reasoning, similar to CONSYDERR (Sun, 1992). Sun has argued that such a combination effectively decreases the system's susceptibility to brittleness (Sun, 1992).
In particular, in the absence of applicable rules or when information is incomplete, FLARE relies on similarity with previously encountered situations to make useful predictions. Others have also argued that analogy is a necessary condition for commonsense reasoning and the subsequent overcoming of brittleness (Minsky & Riecken, 1994; Wollowski, 1994). Section 2.3.1 shows how the notion of Clark's completion (1978) can be applied to inductively learned rules and exploited by similarity-based reasoning to generate new rules. Sections 2.3.2 to 2.3.7 describe and illustrate FLARE's reasoning mechanisms." }, { "figure_ref": [], "heading": "Completion", "publication_ref": [ "b3", "b5" ], "table_ref": [], "text": "Inductively learned rules of the form (∀x) P(x) ⇒ q(x), where P is a conjunction of avl-predicates, are essentially classification rules or definitions that establish relationships between features, captured by P(x), and concepts, expressed by q(x). In keeping with the classical assumption that what is not known by a learning system is false by default, inductively generated rules lend themselves naturally to the completion principle proposed by Clark (1978). That is, classification rules become \"if and only if\" statements, i.e., P(x) ⇔ q(x). Hence, under completion, if q(x) is known to be true, then it is possible to conclude that P(x) is true.\nClearly, completion does not apply to all rules. Inductively learned rules are inherently definitional, as they essentially encode a concept's description in terms of a set of features. Other rules, such as those relating concepts at the same relative cognitive level, are not definitional. For example, given that birds are animals and that some x is an animal, it does not follow that x is a bird. Note that, in addition to inductively learned rules, definitions may be given to FLARE as prior knowledge.\nThe completion principle is particularly useful when it interacts with similarity-based reasoning to generate new rules, as shown in the following derivation.\nHypotheses:\n1. (∀x) P(x) ⇒ q(x), which may be completed.\n2. (∀x) P'(x) ⇒ q'(x).\n3. P ∩ P' ≠ ∅ (i.e., P and P' have some attributes in common).\n4. q(x) is true.\nDerivation:\n1. q(x), from hypothesis 4.\n2. P(x), from completion applied to hypothesis 1.\n3. q'(x), from similarity-based reasoning using hypotheses 2 and 3.\nA new implication between concepts, namely q(x) ⇒ q'(x), is thus generated. Though FLARE is capable of deriving q'(x) from q(x), it does not actually store the new implication in its knowledge base.\nThe following example, adapted from (Collins & Michalski, 1989), illustrates the use of the above derivation. Assume that the system has learned a description of the Chaco area in terms of a set G of geographical conditions (i.e., G(x) ⇒ area(x, theChaco)). Furthermore, assume that the system knows a rule that encodes a set of conditions C sufficient for the raising of cattle (i.e., C(x) ⇒ raise(x, cattle)), and C is such that C and G share a number of conditions. If the system is now told that the area of interest is the Chaco, it first deduces by completion that the conditions in G are met and then, by taking advantage of the similarity between G and C, the system concludes that cattle may be raised in the Chaco. Note that the level of confidence in the conclusion depends upon the amount of similarity.\nIn FLARE, the representation is extended and a definition indicator is tagged to those statements that may be completed (i.e., prior definitions, inductively learned classifications).
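The Chaco derivation can be sketched in a few lines (our own code; the attribute names and values are invented for illustration and are not from the paper):

```python
# Completion-then-similarity: the definition G(x) <=> area(x, Chaco) is completed
# backwards once the concept is asserted, and the recovered features then
# partially match the cattle rule C.

chaco_def = {"flat": 1, "grassy": 1, "warm": 1}        # G: features defining the Chaco
cattle_rule = {"grassy": 1, "warm": 1, "watered": 1}   # C: conditions for raising cattle

state = {"area": "Chaco"}          # q(x) is given
state.update(chaco_def)            # completion: assert the defining features of G

matched = sum(state.get(a) == v for a, v in cattle_rule.items())
print(f"confidence that cattle may be raised: {matched / len(cattle_rule):.2f}")  # 0.67
```

The printed value mirrors the observation above: confidence in q'(x) grows with the amount of overlap between the completed features and the second rule's premises.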
Note that, though somewhat cumbersome, this extension is needed since FLARE does not physically separate concepts and the features used to describe them. CONSYDERR, on the other hand, provides natural support for the dichotomy. FLARE's representation makes learning more readily applicable and preserves consistency with previously developed models. At this point, the issue of achieving both the dichotomy and easy learning remains open." }, { "figure_ref": [], "heading": "FLARE's Reasoning Function", "publication_ref": [], "table_ref": [], "text": "Deduction in FLARE is applied forward. Hence, facts must be provided so as to initiate reasoning. These facts are coded into a vector in which attributes whose values are known are accordingly set, while all other attributes are ? (i.e., don't-know). One attribute is designated as the target-attribute and, if known, its value is also provided. FLARE then uses the rules of its knowledge base and the facts to derive a value for the target-attribute. The Reasoning function is shown in Figure 3. Note that, in this discussion, the current knowledge base is assumed to be non-empty. If the knowledge base is empty, the system cannot deduce anything other than ?.\nStep (1) applies completion first. FLARE finds all asserted (i.e., neither * nor ?) attributes of v that are target-attributes of definitions in the current knowledge base. If any such attribute is found, and for all of them, completion is applied by \"copying\" into v all asserted attributes of the corresponding definitions that are ? in v. The following two issues must be addressed by FLARE in implementing completion.\n1. Since some attributes may be involved in the definitions of more than one target-attribute or concept, it follows that there may be more than one value to be copied into a given attribute when completing these definitions.\n2. Since FLARE's concepts and rules consist of sets of vectors, where each vector is a conjunction and all the vectors sharing the same target-attribute form a disjunction, it follows that some definitions may be disjunctive as well." }, { "figure_ref": [], "heading": "DEFINITION", "publication_ref": [], "table_ref": [], "text": "Input: the current knowledge base, a set of facts encoded by a vector v, one designated target-attribute and, optionally, the target value of the target-attribute.\nOutput: a vector v+ equal to v together with further facts deduced from v, including a value for the target-attribute." }, { "figure_ref": [], "heading": "IMPLEMENTATION", "publication_ref": [], "table_ref": [], "text": "1. Completion: For each asserted attribute a of v other than the target-attribute, if a is the target-attribute of a definition d and their values are equal, then copy all asserted attributes of d that are ? in v, into v.\n2. Forward chaining: If v's target-attribute has not been asserted\n(a) Repeat until no new attribute of v has been asserted\ni. Let w = v. (* create a temporary copy of v *)\nii. For each non-asserted attribute a of v other than the target-attribute, if a rule can be applied to v to assert a, then apply it by asserting a in w. (* based on v, assert all possible attributes (other than the target-attribute) in w *)\niii. Let v = w. (* copy result back into v for next level of inference *)\n(b) If a rule can be applied to assert v's target-attribute, then apply it. Otherwise, perform similarity-based assertion."
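Step (2)(a) can be paraphrased in a compact sketch (not FLARE's actual code; the rule and vector encodings are our own assumptions):

```python
# Repeatedly apply covering rules to assert subgoal attributes until no new
# assertion is made. Rules are (premises, (attribute, value)) pairs; '?' marks
# don't-know.

def forward_chain(v, rules, target):
    changed = True
    while changed:                       # step (2)(a): loop until a fixpoint
        changed = False
        w = dict(v)                      # step (2)(a)(i): temporary copy of v
        for premises, (attr, val) in rules:
            applicable = all(v.get(a) == x for a, x in premises.items())
            if applicable and attr != target and v.get(attr) == "?":
                w[attr] = val            # step (2)(a)(ii): assert subgoal in w
                changed = True
        v = w                            # step (2)(a)(iii): copy back into v
    return v

rules = [({"situation": 1}, ("stimulus", 1)),
         ({"stimulus": 1, "duration": 0}, ("media", 2))]
v = {"situation": 1, "stimulus": "?", "duration": 0, "media": "?"}
print(forward_chain(v, rules, target="media"))  # stimulus asserted; media awaits step (2)(b)
```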
}, { "figure_ref": [], "heading": "Figure 3: Function Reasoning", "publication_ref": [], "table_ref": [], "text": "The current implementation resolves these two issues as follows. In the first case, potential conflicts are resolved simply by giving precedence to the first copy made (which depends upon the order in which asserted attributes are processed). In the second case, FLARE simply chooses one of the defining conjunctions at random and applies completion to it. Other mechanisms (e.g., apply to all, select a winner based on some criteria, etc.) are the topic of further research.\nCompletion causes further information (in the form of asserted attributes) to be gained, thus improving the chance of reaching a goal. Indeed, the purpose of step (1) is two-fold. First, completion allows the system to reach goals that are not otherwise achievable by existing rules. Second, even if the top goal is not achieved directly by completion, further reasoning to achieve it is enhanced, as described in Section 2.3.1.\nWhen the target-attribute has not been asserted by completion, step (2) pursues the reasoning process using forward chaining. As mentioned above, v has a single target-attribute, corresponding to the final goal to achieve. However, at any given time, any one of the (yet) non-asserted¹ attributes of v may be designated as a subgoal that may be useful (or necessary) in reaching the final conclusion. (¹ These are either * or ?. They are * when precepts and rules with differing premises and conclusions are used. In such cases, it is not clear until reasoning whether they are true don't-cares or only don't-knows.)\nStep (2)(a) is the heart of the reasoning process. Each execution of step (2)(a)(ii) corresponds to the achievement of all possible subgoals at a given depth in the inference process. Each iteration uses knowledge acquired in the previous iteration to attempt to derive more new conclusions using existing rules. Step (2)(b) concludes the reasoning phase by asserting the target-attribute.\nNotice that the target-attribute is always asserted, either by rule application or by similarity-based assertion. Hence, FLARE always reaches a conclusion. In the worst case, when there is no information about the target-attribute in the current knowledge base, the value derived for the conclusion must clearly be ?. In all other cases, the validity and accuracy of the derived conclusion depend upon available information. The accuracy or confidence level may be computed in a variety of ways from information about static priorities, dynamic priorities, covers and counters (see Section 2.4).\nThe two complementary mechanisms used in asserting the target-attribute (i.e., rule application and similarity-based assertion) are described in the next two sections. They apply sequentially. If a rule exists that can be applied, then it is applied. Otherwise, similarity-based reasoning takes effect.\nFinally, note that information regarding the way the goal is achieved could be displayed by FLARE for the purpose of human examination and inspection. Currently, FLARE is non-interactive; that is, it cannot query a user for the values of missing attributes that might help improve the accuracy of its result." }, { "figure_ref": [], "heading": "Rule Application", "publication_ref": [ "b11" ], "table_ref": [], "text": "Let val(a, x) denote the value of attribute a in vector x. In the state of knowledge represented by a vector v, a rule may be applied if it covers v. A vector x is said to cover a vector y if and only if:\n1.
x and y have the same target-attribute, and\n2. for all remaining attributes a of x, either val(a, x) = * or val(a, x) = val(a, y).\nFor example, in KB, v_11 covers v_7 and v_8, but v_11 does not cover v_9 or v_12. Ignoring attributes whose value is *, the second condition states that the set of remaining attribute-value pairs of x is a subset of the set of remaining attribute-value pairs of y. Intuitively, x covers y if y satisfies all of the premises of x.\nTo accommodate real-valued attributes, the notion of equality is slightly extended. Given that the probability of two real values being equal is extremely small, the cover relation, because of condition 2, would essentially never hold. The following extension, borrowed from ILA (Giraud-Carrier & Martinez, 1995), is suggested. Two linear values x_1 and x_2 are equal if and only if |x_1 − x_2| ≤ ε, for some ε > 0. Hence, the vector (*, 1.2, 3.52, *, 0_T) covers the vector (2, 1.3, 3.48, ?, 0_T) if ε = 0.5. In the current implementation, ε is some fraction of the range of possible values of each attribute." }, { "figure_ref": [], "heading": "Similarity-Based Assertion", "publication_ref": [ "b0", "b37", "b39", "b46" ], "table_ref": [], "text": "The notion of similarity in FLARE is captured by a non-symmetric distance function defined over (n-dimensional) vectors. If vector x is stored in the knowledge base and vector y is presented to the system to reason about, then the distance from x to y is given by:\nD(x, y) = (Σ_{i=1}^{n} d(x_i, y_i)) / num_asserted(x)\nwhere, if x_i+, y_i+ denote values of attribute i other than * and ?:\nd(*, y_i) = 0\nd(?, y_i) = 0.5\nd(x_i+, *) = 0.5\nd(x_i+, ?) = 0.5\nd(x_i+, y_i+) = (x_i+ ≠ y_i+) if attribute i is nominal\nd(x_i+, y_i+) = |x_i+ − y_i+| / range(i) if attribute i is linear\nsuch that range(i) is the range of values of attribute i and num_asserted(x) is the number of attributes that are not * in x. The above equations for d are consistent with the semantics of * and ? defined in Section 2.1. D(x, y) is meaningful only if x and y have the same target-attribute, and the target-attribute is left out of the computation. For example, D(v_11, v_8) = 0, D(v_13, v_14) = 1/4, D(v_7, v_16) = 2/3, D(v_16, v_7) = 5/8 and D(v_1, v_4) is undefined. A detailed discussion of and justification for the definition of D are found elsewhere (Giraud-Carrier & Martinez, 1994a). Since every ordered set is in one-to-one correspondence with a subset of the natural numbers, D is well defined. To eliminate the effects of statistical outliers on range(i), the dataset must be rid of vectors whose attributes have such irregular values. D is an extension of the similarity function defined for IBL (Aha, Kibler, & Albert, 1991) to inductive learning algorithms that use and/or create general rules. D applies to both nominal and linear domains, and relies on the corresponding notion of distance between values. In particular, D handles continuous values directly, without need for discretization. Currently, D treats each attribute equally. Existing methods assigning weights to each attribute-wise distance (Salzberg, 1991; Stanfill & Waltz, 1986; Wettschereck & Dietterich, 1994) may be incorporated in D.\nSimilarity-based assertion consists of asserting the target-attribute of a vector v to the value of that attribute in v's closest match given by D. Note that (since D is not symmetric) x covers y if and only if D(x, y) = 0.
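Since x covers y exactly when D(x, y) = 0, the distance function doubles as a cover test; the sketch below (names are ours, and the target-attribute is assumed to have been removed already) makes the definition executable:

```python
# x is the stored vector, y the query; '*' is don't-care, '?' is don't-know.

def d(xi, yi, rng=None):
    if xi == "*":
        return 0.0
    if "?" in (xi, yi) or yi == "*":
        return 0.5
    if rng is not None:                         # linear attribute
        return abs(xi - yi) / rng
    return 0.0 if xi == yi else 1.0             # nominal attribute

def D(x, y, ranges):
    """Sum of attribute-wise distances over the non-'*' attributes of x."""
    asserted = sum(1 for xi in x if xi != "*")
    return sum(d(xi, yi, ranges[i]) for i, (xi, yi) in enumerate(zip(x, y))) / asserted

ranges = [None, None, None]                     # three nominal attributes
print(D(("*", 1, 0), (1, 1, 0), ranges))        # 0.0: the stored rule covers the query
print(round(D((2, 1, 0), (2, 1, "?"), ranges), 3))  # 0.167: one don't-know penalized
```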
Hence, since 0 is the minimum of the distance function, D can be used to apply both reasoning mechanisms in the correct order (i.e., rules first, similarity next), by computing the distance from all the rules in the current knowledge base to v and simply selecting the rule that minimizes D. As it is possible that more than one rule minimizes D, a priority scheme is devised to choose a winner. This conflict resolution procedure, which relies partially on FLARE's ability to learn, is outlined in Section 2.3.5." }, { "figure_ref": [], "heading": "Conflict Resolution", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Along with each vector, FLARE also stores the following information: a static priority value (static_priority), a dynamic priority value (dynamic_priority) and the number of vectors covered (num_covers).\nThe value of static_priority is set to 0 by default but may be changed by a teacher to any other value a priori (e.g., v_11 and v_12 in KB). Static priorities provide a means whereby rules may be prioritized according to some externally provided information or meta-knowledge. The value of dynamic_priority is initialized to 0. Its value is not changed by a teacher, however, but evolves over time and is intended to resolve conflicting defaults extensionally. Conflicting defaults, such as the Nixon Diamond of Table 1, may be encoded a priori as precepts or induced from examples. In either case, they are identified in the reasoning phase as FLARE discovers two rules that apply equally well to the input vector. Formally, two rules R and S are in conflict over a vector v if all of the following conditions hold.\n1. D(R, v) = D(S, v) = 0\n2. R and S have the same specificity\n3. R and S have the same static priority\n4. R and S have different target-attribute values\n5. R and S overlap (i.e., the sets of all possible vectors each of them covers intersect)\nTwo vectors are said to be concordant if they have the same target-attribute and the target-attribute's value is the same in both vectors. When reasoning about vector v and coming upon conflicting defaults, FLARE simply increments by 1 the value of dynamic_priority of the default that is concordant with v. If none of the defaults are concordant with v, then no change is made. The value of dynamic_priority reflects the number of times a particular default has been supported by evidence drawn from the environment. It is thus evidence, rather than meta-knowledge, that is responsible for the emerging ordering of defaults under dynamic_priority. Note that a target value must be given for the target-attribute for the above update to take place, so that dynamic_priority is a result of the combination of learning and reasoning. Notice also that dynamic_priority evolves over time, so that the system's response changes based on accumulated evidence. The value of num_covers is also a result of the combination of learning and reasoning. It records the number of other vectors seen by the system so far that are concordant with and covered by the vector. It is a kind of confidence level for that vector, as it essentially counts the number of times the rule represented by the vector is confirmed by empirical evidence.\nWhen more than one rule minimizes D (i.e., can be selected for application), a winner is chosen according to the following priority scheme, where each subsequent condition is invoked if a tie exists at the previous level.\n1. Most specific\n2. Highest static priority\n3. Highest dynamic priority\n4.
Greatest cover
Specificity is defined as the number of attributes, other than the target-attribute, whose value is not ⋆. A vector x is more specific than a vector y if specificity(x) > specificity(y). For example, in KB, v7 has specificity 3, v8 has specificity 4 and v13 has specificity 4, so that v8 and v13 are more specific than v7.
Giving priority first to the most specific vector allows FLARE to handle exceptions and cancellation of inheritance. Using static priorities next makes it possible to handle conflicting defaults as defined by a teacher, while dynamic priorities account for epistemological inconsistencies that may be resolved over time as more information becomes available in support of one belief or the other. Finally, selecting the vector with greatest cover allows evidence gathered from experience to guide a final selection. Note that the current scheme gives precedence to teacher-provided information. Other ordering schemes can easily be defined. For example, static priorities could be given as a simple form of initial bias, and evidence gathered through learning (e.g., dynamic priority and cover) could be used to confirm or modify these priorities." }, { "figure_ref": [], "heading": "Illustration", "publication_ref": [], "table_ref": [], "text": "Consider KB and assume the vector v = (1, ?, 1, ?, 0, 0, ?^T) is input to the reasoning function. Execution proceeds as follows.
Step (1) is essentially skipped as none of the attributes of v meet the looping condition. Then, there are only two loops through the forward chain before the target-attribute is set.
Execution Trace (not reproduced).
Notice that, in forward chaining, the assertion of attributes that are subgoals does not involve similarity-based assertion but results from rule application only. As a result, the accuracy of the final goal is increased but the ability to perform approximate reasoning is reduced. It is possible to relax this restriction, thus potentially achieving more subgoals but reducing the confidence in the final result. For example, the condition in step (2)(a)(ii) could be modified to allow not only rules (which are perfect matches, i.e., D = 0) but also matches deemed to be "close enough." The measure of closeness can be implemented via a threshold value T_D placed on D. That is, the current condition is replaced with:
Let D0 = distance to closest match
If D0 = 0, perform rule application
Else if D0 ≤ T_D, perform similarity-based assertion
The value of T_D then offers a simple mechanism to increase the level of approximate reasoning. This is particularly useful for cases such as the Chaco example (Section 2.3.1), where, after completion, most of the reasoning is based on the amount of similarity between concepts. Notice that the above condition is functionally equivalent to the current one when T_D = 0." }, { "figure_ref": [], "heading": "FLARE's Learning", "publication_ref": [], "table_ref": [], "text": "This section addresses the construction of FLARE's knowledge base through incremental, supervised learning. FLARE learns by continually adapting to the information it receives. Indeed, training vectors are assumed to become available one at a time, over time, and, as is inherent in nature, some vectors may be noisy while others may be encountered more than once. Moreover, FLARE extends inductive learning from examples with prior knowledge in the form of precepts.
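The learn-by-adapting cycle just described can be pictured with a short sketch (the names reason, adapt and present are hypothetical placeholders for the mechanisms of Sections 2.3 and 2.4, not FLARE's actual code):

    # Minimal sketch of FLARE's incremental learning cycle: reason first, adapt next.
    def reason(kb, v):
        # Placeholder for Section 2.3: assert missing attributes via rules,
        # then predict the target from the closest match; returns v+.
        return v

    def adapt(kb, v_plus):
        # Placeholder for Section 2.4 (Figure 4): localized update around the
        # closest match; here we simply store the vector.
        kb.append(v_plus)

    knowledge_base = []      # starts empty; filled one training vector at a time

    def present(vector):
        # Process one incoming training vector (example or precept).
        if not knowledge_base:
            knowledge_base.append(vector)   # the first vector is stored as-is
        else:
            adapt(knowledge_base, reason(knowledge_base, vector))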
Sections 2.4.1 to 2.4.3 describe and illustrate FLARE's learning mechanisms and Section 2.4.4 highlights some of the advantages of combining extension and intension in learning." }, { "figure_ref": [], "heading": "FLARE's Adapting Function", "publication_ref": [], "table_ref": [], "text": "Over time, FLARE is presented with a sequence of examples and precepts that are used to update its current knowledge base. The set of all examples, rules and precepts that share the same target-attribute can be viewed as a partial function mapping instances into the goal-space. In this context, an example maps a single instance to a value in the goal-space, while precepts and rules are hyperplanes that map all of their points or corresponding instances to the same value in the goal-space.
Learning then follows a form of nearest-hyperplane learning. As mentioned in Section 2.2, it consists of first applying the reasoning scheme, and then making further adjustments to the current knowledge base to reflect the newly acquired information. The reason the algorithm is said to be nearest-hyperplane is that the reasoning phase essentially identifies a closest match for the input vector. This closest match is either a rule (i.e., a true hyperplane) or a stored example (i.e., a point or degenerate hyperplane).
The prior application of reasoning allows the system to predict the value of the target-attribute based on information in the current knowledge base. Also, if there are missing attributes in the input vector and the knowledge base contains rules that can be applied to assert these attributes, the rules are applied so that as many of the missing attributes as possible are asserted before the final goal is predicted. Hence, the accuracy of the prediction is increased and generalization is potentially enhanced, thus enabling FLARE to more effectively adapt its knowledge base.
The system starts with an empty knowledge base. It then adapts to each new vector v, where v is either a precept or an example. If v is the first vector, then there is no closest match and v is automatically stored in the current knowledge base. In that sense, the first learned vector represents yet another bias for the learning system. If v is not the first vector, then reasoning takes place, producing v+. A closest match, say m, is also found and the knowledge base adapts itself based on the relationship between v+ and m, as shown in Figure 4. Note that m is closest given the currently available information and can, indeed, be "far" from v+. Hence, the order in which training takes place impacts the outcome.
Each stored vector p also maintains an array of counters (see Figure 4); one of p's counters is incremented each time a new vector is presented whose attributes' values are all equal to those of p. The value incremented corresponds to the new vector's target-attribute value. The counter value that is highest represents the statistically "most probable" target-attribute value. In effect, the target-attribute value of a vector is always the one with highest count. Note that this value may change over time, as new information becomes available.
Because a best match is first identified, changes to the knowledge base are localized and are guided by the kinds of possible relationships between v+ and m. These relationships, each discussed in turn below, are summarized here (a sketch of the corresponding dispatch follows the list):
- v+ is equal to m (i.e., noise or duplicates), or
- v+ is subsumed by m (i.e., v+ is a special case of m), or
- v+ subsumes m (i.e., v+ is a general case of m), or
- v+ and m can produce a generalization, or
- all other cases (e.g., v+ is an exception to m, v+ and m are too far apart, etc.)
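A compact way to read Figure 4's logic is as a dispatch over these relationships. The sketch below refines the adapt placeholder from the earlier loop; the helper predicates (equal_vectors, covers, concordant, can_generalize, kb_store_generalization) are assumed names corresponding to the notions defined in Sections 2.3 and 2.4, not the authors' actual code:

    # Sketch of the adaptation dispatch of Figure 4 (illustrative; helpers assumed).
    def adapt(kb, v_plus, m):
        if equal_vectors(v_plus, m):                       # noise or duplicate
            m.counters[v_plus.target_value] += 1
        elif covers(m, v_plus) and concordant(m, v_plus):  # v+ is a special case of m
            m.num_covers += 1
        elif covers(v_plus, m) and concordant(m, v_plus):  # v+ is a general case of m
            v_plus.num_covers += 1
            kb.remove(m)
            kb.append(v_plus)
        elif can_generalize(v_plus, m):                    # dropping-condition rule
            kb_store_generalization(kb, v_plus, m)         # see Section 2.4.2
        else:                                              # exception, too far apart, ...
            kb.append(v_plus)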
In the first case, v+ (or its prototype m) is already in the knowledge base and only the counters need to be updated. Note that the extension of the notion of equality discussed in Section 2.3.3 enables this part of the algorithm, in conjunction with the counters, to produce some generalization for linear attributes. In effect, the vector retained in the knowledge base acts as a "prototype," and its target-attribute's value is the one most probable among its ε-close neighbors. In the second case, there is no need to store v+ as the current knowledge base has sufficient information to correctly predict v+'s target value. In the third case, v+ is stored and m removed, as v+ is more general than m and thus accounts for it. The fourth case captures the possibility of generalization by dropping conditions (see Section 2.4.2 for details). If generalization takes place, only one of v+ or m is generalized and stored. The values of static_priority and num_covers are also reset so that the generalization inherits the maximum static priority value and the current value of num_covers. Finally, in the fifth case, v+ must be added as the current knowledge base either does not produce the correct target value for v+ (e.g., exceptions) or is not deemed reliable enough to properly account for v+.
Notice that the adaptation phase takes place regardless of the target value predicted by the reasoning phase. A possible alternative would be to adapt only if the predicted target value differs from the actual target value. It has been found empirically, however, that too much useful information is lost with this approach due to the incrementality of the system and its sensitivity to ordering. A possibly viable alternative would make use of memory. Vectors currently accounted for could be saved in memory and presented later to the system. This may be done a few times over some period of learning time until either the vectors must be stored in the knowledge base (due to changes in the knowledge base) or they are discarded as the system has gained enough confidence in its ability to account for them." }, { "figure_ref": [], "heading": "Generalization", "publication_ref": [ "b25" ], "table_ref": [], "text": "Two vectors that have the same target-attribute's value can produce a generalization when the following five conditions hold.
1. They differ in the value of exactly one of their attributes.
2. The attribute on which they differ is nominal.
3. They are concordant.
4. The numbers of their attributes not equal to ⋆ differ by at most 1.
5. At least one of them has more than one non-⋆ attribute.
Generalization then consists of setting to ⋆ the attribute on which the two vectors differ, in the vector that is most general, as long as that vector has more than one non-⋆ attribute. For example, vectors v8 and v9 of KB satisfy the above conditions and would generalize to produce a vector, say v8+9 = (⋆, ⋆, ⋆, 0, 0, 1, 3^T). The value of 1 in the fourth condition is based upon empirical evidence.
Choosing the most general vector maximizes generalization, and the condition on the number of non-⋆ attributes guarantees that no rule is generated that would cover every other vector. This version of the dropping-the-condition rule (Michalski, 1983) is only applied to nominal attributes as it makes little sense for linear (especially real-valued) domains.
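The five conditions and the dropping-the-condition step translate fairly directly into code. The following is an illustrative sketch (the vector representation with .attrs and .target_value fields and the nominal typing are assumptions; non-⋆ counting and concordance follow the definitions above):

    # Sketch of FLARE's dropping-the-condition generalization (not the authors' code).
    DONT_CARE = "*"

    def non_dont_care(v):
        return sum(1 for a in v.attrs if a != DONT_CARE)

    def can_generalize(v, w, nominal):
        diffs = [i for i, (a, b) in enumerate(zip(v.attrs, w.attrs)) if a != b]
        return (len(diffs) == 1                                     # condition 1
                and nominal[diffs[0]]                               # condition 2
                and v.target_value == w.target_value                # condition 3
                and abs(non_dont_care(v) - non_dont_care(w)) <= 1   # condition 4
                and max(non_dont_care(v), non_dont_care(w)) > 1)    # condition 5

    def generalize(v, w, nominal):
        # Set the differing attribute to don't-care in the most general vector,
        # provided that vector keeps more than one non-don't-care attribute.
        i = next(j for j, (a, b) in enumerate(zip(v.attrs, w.attrs)) if a != b)
        cands = sorted([v, w], key=non_dont_care)      # most general (fewest premises) first
        g = cands[0] if non_dont_care(cands[0]) > 1 else cands[1]
        g.attrs[i] = DONT_CARE
        return g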
For linear attributes, generalization is achieved through the artifact due to the extended notion of equality discussed above.
Let v and w be two vectors representing n and m ≤ n examples, respectively. Furthermore, let v′ be the generalization obtained from v and w by dropping a p-valued attribute in v. Then v′ represents pn examples. Since v and w represent at most 2n examples, generalization causes at least (p − 2)n new examples to be represented. As p increases, this value also increases and, for large values of p, could lead to over-generalization, as only two values of a given attribute suffice to predict the outcome of all values in the current context. However, such potential over-generalizations are partially offset by the system's ability to identify, retain and give precedence to exceptions.
There are still drawbacks to FLARE's generalization scheme. Given a set of vectors of the form v_i = S x_i k^T, where S is a fixed sub-vector of attribute values, k is the target-attribute's value and x_i ≠ x_j for all i ≠ j, any pair of concordant (i.e., same k) vectors satisfies the generalization conditions, yet only the first such pair will generalize. All other vectors then become either subsumed by this generalization or exceptions to it. If most are exceptions, this leads to the storage of more vectors than needed, especially for large domains where various subsets of values give rise to different target-attribute's values. Moreover, the outcome depends upon the ordering of the vectors. Also, if there exist conflicts involving one (or more) value of x, then the system will end up giving unfounded precedence to the exceptions (being more specific) and, again, these depend on the ordering. Support for internal disjunction or a more complex generalization scheme may help alleviate some of these problems. They are the topic of future research." }, { "figure_ref": [], "heading": "Illustration", "publication_ref": [], "table_ref": [], "text": "This section shows the evolution of FLARE's knowledge base as the vectors of KB (see Section 2.1.3) are presented to it as inputs. It highlights several interesting features of both reasoning and adaptation. Let KB′ denote the current knowledge base of FLARE. As discussed above, FLARE starts with KB′ = ∅. Each vector is presented to FLARE in the order in which it appears in KB.
1. Presentation of v1. KB′ = ∅. v1 is simply added to KB′.
2. Presentation of v2. KB′ = {v1}. v1 is the closest match. v1 and v2 are not concordant, so v2 is added to KB′.
3. Presentation of v3. KB′ = {v1, v2}. Same as above with either v1 or v2. So v3 is added to KB′.
4. Presentation of v4. KB′ = {v1, v2, v3}. No winner can be found since none of the vectors in KB′ have the same target-attribute as v4. So v4 is added to KB′. While the earlier KB′ had information about a single concept (i.e., stimulus-situation), the new KB′ now provides FLARE with knowledge about a new concept, namely appropriate-response. By "partitioning" vectors along their target-attribute, FLARE naturally supports multiple concept learning.
5. Presentation of v5. KB′ = {v1, v2, v3, v4}. v4 is the only (and hence closest) match since none of the other vectors in KB′ have the same target-attribute as v5. v4 and v5 satisfy conditions 1-4 for generalization but violate condition 5, so v5 is added to KB′.
6. Presentation of v6. KB′ = {v1, v2, v3, v4, v5}. Similar to step 3 with v4 and v5. So v6 is added to KB′.
7. Presentation of v7.
KB′ = {v1, v2, v3, v4, v5, v6}. Note that v7 is a precept. No winner can be found since none of the vectors in KB′ have the same target-attribute as v7. So v7 is added to KB′. A third concept, namely media, is now available.
8. Presentation of v8. KB′ = {v1, v2, v3, v4, v5, v6, v7}. v7 is the only (and hence closest) match since none of the other vectors in KB′ have the same target-attribute as v8. v8 is an exception to v7 since v7 covers v8 but they are not concordant. Hence, v8 is added to KB′. Though v7 suggests using lecture as a media, the added condition on training-budget found in v8 causes that suggestion to change to lecture with slides.
9. Presentation of v9. KB′ = {v1, v2, v3, v4, v5, v6, v7, v8}. Only v7 and v8 may compete. D(v7, v9) = 1/3 and D(v8, v9) = 1/4. Hence, v8 wins. v8 and v9 satisfy all five conditions for generalization. The second attribute is dropped (i.e., replaced by ⋆) in either one, say v8, to produce v′8 = (⋆, ⋆, ⋆, 0, 0, 1, 3^T). v′8 is added to KB′. All of the attributes in v8 and v9 have the same value, except for stimulus-situation. This is sufficient for FLARE to hypothesize that the value of stimulus-situation is not critical and the attribute may thus be ignored. In other words, FLARE decides that the value of stimulus-situation is not needed when predicting lecture with slides.
10. Presentation of v10. KB′ = {v1, v2, v3, v4, v5, v6, v7, v′8}. Only v7 and v′8 may compete. D(v7, v10) = 1/3 and D(v′8, v10) = 0. Hence, v′8 wins. v′8 covers v10 and they are concordant, so FLARE adds 1 to num_covers(v′8). v10 need not be added to KB′. v10 is one of the many special cases now handled by the new generalization v′8.
11. Presentation of v11. KB′ = {v1, v2, v3, v4, v5, v6, v7, v′8}. Notice that v11 is also a precept, so that precepts may be given at any time during learning. Only v7 and v′8 may compete. D(v7, v11) = 1/6 and D(v′8, v11) = 1/3. Hence, v7 wins. Neither one covers the other; they are not equal; they cannot produce a generalization (they violate condition 3). Thus, v11 is added to KB′. Note that v11 has a static priority of 1.
12. Presentation of v12. KB′ = {v1, v2, v3, v4, v5, v6, v7, v′8, v11}. v7, v′8 and v11 compete. D(v7, v12) = 1/2, D(v′8, v12) = 2/3 and D(v11, v12) = 1/4. Hence, v11 wins. Neither one covers the other; they are not equal; they cannot produce a generalization (they violate condition 3). Thus, v12 is added to KB′. Note that v12 has a static priority of 3. Since v11 and v12 overlap, precedence would be given to v12 in case of a conflict.
13. Presentation of v13. KB′ = {v1, v2, v3, v4, v5, v6, v7, v′8, v11, v12}. In this case, some non-asserted attributes of v13 may be asserted through reasoning, before the target-attribute. Rules v2 and v4 are applied to assert the second and fourth attributes respectively. The result is v′13 = (1, 1, 0, 0, 0, 1, 2^T). v7, v′8, v11 and v12 compete to assert the target-attribute. D(v7, v′13) = 0, D(v′8, v′13) = 0, D(v11, v′13) = 0, and D(v12, v′13) = 1/2. Both v7 and v′8 win over v11 since they are more specific. However, v7 and v′8 have the same specificity. In fact, they satisfy all five conditions that identify them as conflicting defaults in the current context.
Hence, FLARE adds 1 to dynamic_priority(v7) since v7 and v′13 are concordant. Reasoning then proceeds, giving precedence to v7. Since v7 covers v′13 and they are concordant, v′13 need not be added to KB′.
14. Presentation of v14. KB′ = {v1, v2, v3, v4, v5, v6, v7, v′8, v11, v12}. Some non-asserted attributes of v14 may be asserted through reasoning, before the target-attribute. Rules v2 and v5 are applied to assert the second and fourth attributes respectively. The result is v′14 = (1, 1, 1, 0, 0, 1, 2^T). The rest is identical to step 13. Now, dynamic_priority(v7) = 2 and v′14 is not added to KB′.
15. Presentation of v15. KB′ = {v1, v2, v3, v4, v5, v6, v7, v′8, v11, v12}. Some non-asserted attributes of v15 may be asserted through reasoning, before the target-attribute. Rules v3 and v6 are applied to assert the second and fourth attributes respectively. The result is v′15 = (2, 2, 2, 1, 1, 0, 0^T). v7, v′8, v11 and v12 compete to assert the target-attribute. D(v7, v′15) = 1, D(v′8, v′15) = 1, D(v11, v′15) = 1, and D(v12, v′15) = 1/2. Hence, v12 wins. Neither one covers the other; they are not equal; they cannot produce a generalization (they violate condition 3). Thus, v′15 is added to KB′.
16. Presentation of v16 and v17. KB′ = {v1, v2, v3, v4, v5, v6, v7, v′8, v11, v12, v′15}. Both are equal to v′15. Neither v16 nor v17 need be added to KB′ but the appropriate counter values are incremented in v′15. The result is counters[0] = 2, counters[1] = 0, counters[2] = 1 and counters[3] = 0. Thus, the target-attribute's value of v′15 is currently 0.
The resulting KB′, after processing KB, is shown in Figure 5. The variables p, c and dp stand for static priority, cover number and dynamic priority, respectively.
v1 = (0, 0^T, ⋆, ⋆, ⋆, ⋆, ⋆) (p = c = dp = 0)
v2 = (1, 1^T, ⋆, ⋆, ⋆, ⋆, ⋆) (p = c = dp = 0)
v3 = (2, 2^T, ⋆, ⋆, ⋆, ⋆, ⋆) (p = c = dp = 0)
v4 = (⋆, ⋆, 0, 0^T, ⋆, ⋆, ⋆) (p = c = dp = 0)
v5 = (⋆, ⋆, 1, 0^T, ⋆, ⋆, ⋆) (p = c = dp = 0)
v6 = (⋆, ⋆, 2, 1^T, ⋆, ⋆, ⋆) (p = c = dp = 0)
v7 = (⋆, 1, ⋆, 0, 0, ⋆, 2^T) (p = c = 0, dp = 2)
v′8 = (⋆, ⋆, ⋆, 0, 0, 1, 3^T) (p = 0, c = 1, dp = 0)
v11 = (⋆, 1, ⋆, ⋆, 0, ⋆, 0^T) (p = 1, c = dp = 0)
v12 = (⋆, 1, ⋆, 1, ⋆, ⋆, 1^T) (p = 3, c = dp = 0)
v′15 = (2, 2, 2, 1, 1, 0, 0^T) (p = c = dp = 0)
Figure 5: KB′
At an intuitive level, FLARE has used both learning and reasoning mechanisms to deal with KB. Induction (on vectors v8, v9, v10) has allowed the system to decide that the stimulus-situation was irrelevant in predicting the use of lecture-with-slides. Deduction from the empirical evidence provided by vectors v13 and v14 has caused FLARE to break the "tie" between rules v7 and v′8 in favor of v7. Prior knowledge relative to vectors v11 and v12 was encoded as static priorities, thus giving precedence to v12 in case of conflicts. Hence, if the vector v = (1, ?, 0, ?, 0, 1, ?^T) is presented to FLARE after KB′ is acquired, the second and fourth attributes are first asserted as previously discussed to produce v′ = (1, 1, 0, 0, 0, 1, ?^T). Then, v7, v′8 and v11 compete. v7 and v′8 win due to specificity. v7 and v′8 also have the same static priorities, but v7 wins due to dynamic priority and the result is (1, 1, 0, 0, 0, 1, 2^T). Now, if the vector (1, ?, 2, ?, 0, 0, ?^T) is presented, a similar situation arises between v11 and v12. The conflict is resolved with static priorities."
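These final queries can be replayed mechanically with the distance function sketched earlier and the priority scheme of Section 2.3.5. The following is again an illustrative sketch (reusing DONT_CARE from the earlier sketch; the rule fields are the assumed names used throughout these sketches):

    # Tie-breaking among rules at minimal distance (illustrative, not FLARE itself).
    def specificity(rule):
        # Number of non-don't-care attributes, excluding the target.
        return sum(1 for a in rule.attrs if a != DONT_CARE)

    def winner(candidates):
        # Priority scheme: most specific, then highest static priority,
        # then highest dynamic priority, then greatest cover.
        return max(candidates, key=lambda r: (specificity(r), r.static_priority,
                                              r.dynamic_priority, r.num_covers))

For v′ = (1, 1, 0, 0, 0, 1, ?^T), the candidates at distance 0 are v7, v′8 and v11; specificity eliminates v11, the static priorities of v7 and v′8 are equal, and dynamic_priority(v7) = 2 breaks the tie in favor of v7, exactly as in the walk-through above.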
}, { "figure_ref": [], "heading": "Extensionality and Intensionality", "publication_ref": [ "b34" ], "table_ref": [], "text": "As it is able to use prior knowledge in the form of precepts together with raw examples, FLARE effectively combines the intensional approach (based on features, expressed here by precepts) and the extensional approach (based on instances, expressed by examples) to learning and reasoning. With this combination, FLARE can resolve conflicting defaults, such as the Nixon Diamond (Reiter & Griscuolo, 1981), by either being told explicitly which default prevails (e.g., religious conviction is more important than political affiliation) or by computing relative dynamic priorities (see Section 2.3.5) from examples of Republican-Quakers.
Most inductive learning systems are purely extensional, while most reasoning systems are purely intensional. It is therefore these authors' contention that, if induction and deduction are to be integrated, then a combination of the two approaches is desirable. It is also clear that the combination increases flexibility. On the one hand, extensionality accounts for the system's ability to adapt to its current environment, i.e., to be more autonomous. On the other hand, intensionality provides a mechanism by which the system can be taught and thus does not have to unnecessarily suffer from poor or atypical learning environments.
In the context of reasoning, precepts provide a useful medium to encode certain first-order language statements (e.g., the rule base of an expert system) that can, in turn, be learned by FLARE (in the usual way) and later be used for reasoning purposes." }, { "figure_ref": [ "fig_5" ], "heading": "FLARE's Automatic Generation of Precepts", "publication_ref": [], "table_ref": [ "tab_2", "tab_2", "tab_2" ], "text": "Section 2.1.2 introduced the notion of precepts as generalized AVL vectors in which some of the attributes have the special value ⋆ (i.e., don't-care). Precepts may be encoded directly by a teacher or deduced automatically from general knowledge. FLARE provides a simple (off-line) mechanism for the automatic generation of precepts in the preprocessing phase described in Section 2.2.
FLARE uses prior knowledge in the form of general rules that may be viewed as encoding "commonsense" knowledge involving some of the attributes of the application domain. With the appropriate setting for deduction, FLARE can then generate domain-specific precepts that can be used as biases for inductive learning or for further reasoning about the specific domain to which they apply.
Consider the example in Table 3 from Section 2.1.1. Assume that the system is to inductively learn rules regarding the suitability of lenses for patients from a set of examples whose attributes include the patient's tear-production rate (tpr). The statements in Table 3 capture general knowledge about eyes. Informally, they state that:
1. Low tear-production rate causes dryness of the eyes.
2. Dry eyes are not fit for lenses.
When provided with the fact that the target-attribute of the system has to do with fitting lenses, the general knowledge may be used to produce a domain-dependent precept that states that, if a patient has a low tear-production rate, then he/she should not be fitted lenses. The precept, in turn, provides a useful bias to the system during further induction from examples.
The process of generating precepts described above is essentially one of acquiring the general knowledge (or rules) and reasoning from it, as described in Figure 6.
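The shape of the Figure 6 process can be suggested with a short sketch (all helper names here, present_to, make_vector, as_precept, and the reuse of reason from the earlier sketches, are hypothetical, not FLARE's actual interface):

    # Sketch of automatic precept generation (illustrative). General rules are
    # learned first; a fact vector then drives deduction toward the target.
    def generate_precepts(general_rules, facts, target_attribute):
        kb = []
        for rule in general_rules:
            present_to(kb, rule)            # step (1): learn the general knowledge
        query = make_vector(facts, target=target_attribute)  # e.g., Tpr=low, Fit as target
        derived = reason(kb, query)         # step (2): forward chain to the target
        return as_precept(derived)          # e.g., "if Tpr is low then Fit is false"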
When general rules are available, the function Generate-Precepts is always invoked prior to any other work by FLARE.
The function Generate-Precepts actually makes use of the other functions of FLARE. In step (1), it constructs a knowledge base from the general rules using learning as described in Section 2.4. In step (2), it reasons, as described in Section 2.3, using the acquired knowledge and facts enabling the general knowledge to be applied to the domain. The facts are encoded as a vector in which attributes found in the general knowledge are set to appropriate values and all others are set to ?. Since precepts are mostly used as learning biases, the designated target-attribute is typically the target concept of an inductive application. In the lenses example of Table 3, the appropriate setting is obtained by creating a vector such that attribute Tpr is set to low and attribute Fit is designated as the target-attribute.
Having incorporated the two rules in its knowledge base in step (1), FLARE would then easily deduce a precept of the form: if Tpr is low then Fit is false, independent of any other don't-care conditions.
Though the function Generate-Precepts is automated, the setting of the relevant attributes and the interpretation of the result rely on a teacher. More automatic mechanisms may be considered, where the system could try any combination of a learning problem's attribute values to instantiate general knowledge. Then, any such instantiation that causes the target-attribute to become asserted is a potential precept. However, that process would be exponential and most of it would probably not lead to any useful conclusion." }, { "figure_ref": [], "heading": "Experimental Results and Demonstrations", "publication_ref": [ "b22", "b30" ], "table_ref": [], "text": "A set of classical commonsense benchmark problems has been proposed by Lifschitz (1988) and the UCI repository (Murphy & Aha, 1992) contains many useful training sets for inductive learning. This section reports results obtained with FLARE on several of these datasets. Results on a number of other uses of the framework, including two expert systems, are also presented. Finally, some of the limitations of the system are described.
One artifact of the implementation is that, since variables cannot be added dynamically, all attributes must be defined a priori. All attributes that do not appear in rules, examples or precepts are set to don't-care. This is consistent with the semantics of don't-care and does not interfere with the algorithm since the distance D essentially treats learned don't-cares as neutral values." }, { "figure_ref": [], "heading": "Inductive Learning and Prior Knowledge", "publication_ref": [ "b30", "b33", "b4", "b35", "b48", "b0", "b46", "b48" ], "table_ref": [ "tab_2" ], "text": "In order to test the predictive accuracy of FLARE, the standard training set/test set approach is used. The value of v's target-attribute is provided but it is not used during reasoning. Rather, the system reasons based on its current knowledge base and all of the asserted attributes of v. When reasoning is completed, the "computed" target value is compared with the "actual" target value.
Several datasets from the UCI repository (Murphy & Aha, 1992) were chosen. They represent a wide variety of situations, as shown in Table 6. The column labelled "Size" indicates the total number of examples in the dataset.
The column labelled "Attributes" records the number and type (L for linear, N for nominal) of all the attributes, other than the target (or output) attribute. The column labelled "Output" shows the number of output classes.
FLARE's results were gathered for each of the above applications, using 10-way cross-validation. Each dataset is randomly broken into 10 sets of approximately equal size. Then, in each turn, one of the sets is used for testing, while the remaining 9 are used for learning. This process is repeated 10 times, once for each test set, so that every item of data is in the test set once and only once. Because FLARE's outcome is dependent upon the ordering of data during learning, each turn was repeated 10 times with a new random ordering of the training set. The predictive accuracy for a given turn is the average of the 10 corresponding trials and the predictive accuracy for the dataset is the average of the 10 turns.
Results are shown in Table 7 (FLARE: Induction). The first number (PA) represents predictive accuracy (in %) on the test set after training and the second number (IR) is the inductive ratio, defined as the ratio of the size (in number of rules) of the final knowledge base to the number of instances used in learning. IR is another measure of the generalization power of FLARE, as well as an indication of FLARE's memory requirements. Results of PA with ID3 (Quinlan, 1986), ordered CN2 (Clark & Niblett, 1989) and Backpropagation (Rumelhart & McClelland, 1986) are also included for comparison. They were also obtained using 10-way cross-validation and are as reported by Zarndt (1995).
For the set of selected applications, FLARE's performance in generalization compares favorably with that of ID3, CN2 and Backpropagation, as well as with that of other inductive learning systems (Aha et al., 1991; Wettschereck & Dietterich, 1994; Zarndt, 1995). In addition, the knowledge base maintained by FLARE is generally significantly smaller than the set of all training vectors. The first five applications were further selected to illustrate the effect of prior knowledge on predictive accuracy and inductive ratio. For each application, the above experimental procedure is repeated but the set of training examples is now augmented by precepts given a priori (i.e., before the training set is presented). Results are reported in Table 8. Each column shows both PA and IR.
Table 8 (excerpt; PA and IR, without and with precepts):
Application    no prec.      w/prec.
lenses         79.0 - .43    80.5 - .33
voting-84      92.9 - .63    94.5 - .25
tic-tac-toe    81.5 - 1.0    88.
Here, the precepts are obtained from domain knowledge provided with the application (voting-84) or generated from the authors' common sense (zoo, lenses, hepatitis, tic-tac-toe). They serve as learning biases. The results with precepts show an average increase of 2.6% in predictive accuracy and a decrease of 31.3% of the inductive ratio. The decrease in IR demonstrates that prior knowledge allows pruning of parts of the input space during learning. Indeed, starting with the same number of training vectors, FLARE ends up with a knowledge base containing about one-third fewer vectors than when precepts are not used. Hence, precepts not only increase generalization performance, they also reduce memory requirements.
The lenses application was also used to demonstrate how precepts may be generated automatically by deducing domain-dependent information from general knowledge, as discussed in Section 2.5.
The example of Table 3 from Section 2.1.1 was implemented (as described in Section 2.5) and a precept stating that, if the Tpr attribute is set to low, then lenses should not be prescribed, was generated. That precept was, in turn, used prior to performing inductive learning as described above.
The process of inductive learning with automatically generated prior knowledge is two-phase, where both phases perform the same operations on different pieces of information. In the first phase, general knowledge expressed as rules (and translated into AVL) is learned by FLARE. Then FLARE reasons based on some instantiation that links the general knowledge to the current domain. The result of this reasoning phase is one (or more) precept containing domain-dependent information. In the second phase, FLARE learns from the generated precepts and any other available examples. The result is a set of inductively generated rules." }, { "figure_ref": [], "heading": "Classical Reasoning Protocols", "publication_ref": [ "b22", "b22", "b22" ], "table_ref": [], "text": "Several problems from the set of Benchmark Problems for Formal Nonmonotonic Reasoning (Lifschitz, 1988) were presented to FLARE. The problems were first translated into their corresponding AVL representation. FLARE is able to properly incorporate the premises and correctly derive the expected conclusions for the following classes of problems from (Lifschitz, 1988):
A1 - basic default reasoning
A2 - default reasoning with irrelevant information
A3 - default reasoning with several defaults
A5 - default reasoning in an open domain
A9 - priority between defaults
B1 - linear inheritance (top-down)
B2 - tree-structured inheritance
B3 - one-step multiple inheritance
B4 - multiple inheritance
Problem A4 involves a disabled default and problems A6 through A8 deal with unknown exceptions. Such problems cannot be represented in FLARE. Problems A10 and A11 deal with instances of defaults and reasoning about priority. Though not directly representable in FLARE, they are effectively solved via the use of static (A10) or dynamic (A11) priorities. The other classes of problems defined by Lifschitz (1988) (i.e., reasoning about actions, uniqueness of names and autoepistemic reasoning) are beyond the current scope of FLARE.
Note that, in order to work properly, some of the above problems require added processing. In particular, problems A1, A2, A3 and A5 involve both classes of objects and particular instances of these classes. Problem A1, for example, is given as follows: blocks A and B are heavy, heavy blocks are normally located on the table, A is not on the table.
Translating to AVL gives: (A, 1^T, ⋆), (B, 1^T, ⋆), (⋆, 1, 1^T) and (A, ⋆, 0^T), where the first attribute is a multi-valued attribute representing the objects in the universe and the second and third attributes are Boolean, encoding the predicates heavy and on_table respectively. Now, if (A, ?, ?^T) is shown, (A, 1, ?^T) will be derived from the first vector and will then match both (⋆, 1, 1^T) and (A, ⋆, 0^T) exactly. It seems reasonable that priority should be given to the latter since it involves A (an instance) explicitly. To solve this problem, vectors involving explicit references to instances of objects have their static priority set to 1 while all other vectors have their static priority set to 0. This is, of course, an artifact of encoding. An alternative is to write all facts relative to a given instance as a definition whose target-attribute is the instance value.
Then completion would guarantee the correct outcome.
The above problems are characteristic of important forms of human patterns of reasoning. However, they are artificial, as they have been manufactured explicitly with the intent of isolating one salient feature of nonmonotonic reasoning, independent of all others. To further investigate the properties of FLARE and the combination of learning and reasoning, other more "real-world" applications must be designed and experimented with. Section 3.5 presents such preliminary applications. The next two sections present simple applications that further exercise FLARE's ability to learn incrementally and to combine learning with reasoning in useful ways." }, { "figure_ref": [], "heading": "The Nixon Diamond", "publication_ref": [ "b34" ], "table_ref": [ "tab_0" ], "text": "The Nixon Diamond (Reiter & Griscuolo, 1981), reproduced as Table 1 in Section 2.1.1, is important as a prototype of a class of interesting problems involving conflicting defaults. It is used here to demonstrate FLARE's mechanisms to handle such conflicts both intensionally and extensionally.
FLARE's static priorities offer a simple way of resolving the Nixon Diamond intensionally, based on some externally provided information (e.g., religious convictions supersede political affiliations). In that case, both defaults are given along with an appropriate static priority.
Another alternative consists of providing the defaults without any priority. This corresponds to a possibly more natural situation where the system really is in a don't-know state when it comes to deciding on Nixon's dispositions. Yet, such don't-know states are uncomfortable and it is the authors' contention that any kind of information that may allow a decision to be made should be used. Hence, a simple epistemological approach is adopted, where the conflict arises due to beliefs rather than facts. In this case, it is possible to attempt to resolve the conflict by observing instances of Republican-Quakers. The relative number of pacifists and non-pacifists can then serve as evidence to lean towards one decision or the other. In other words, it is the system's observation of what seems most common in its environment that creates its belief. This is not unlike the way humans deal with many similar situations.
A final approach, which combines inductive learning and reasoning, consists of not providing the system with any default. Rather, examples of Republicans, Quakers and Republican-Quakers are shown and the system automatically comes up with both the defaults (through induction) and their relative priorities.
All three of these experiments were run with FLARE and the results are as expected. In the third case, the actual knowledge base depends upon the ordering. It consists of one vector for Republicans that are not Quakers or one vector for Quakers that are not Republicans, one default vector for Quakers or Republicans and one vector for Republican-Quakers. The target value of the vector for Republican-Quakers is obtained not via dynamic priority but via the counters. Functionally, however, the result is identical." }, { "figure_ref": [], "heading": "Do Birds Typically Fly?", "publication_ref": [], "table_ref": [], "text": "Incremental learning is one of FLARE's important features. With incrementality, the system is self-adaptive in the sense that its current knowledge base is representative of its experience with its environment so far.
And the knowledge base can be continually updated as new information becomes available. To exercise incrementality, a simple example of bottom-up inheritance involving birds was designed.
The application has four attributes, two of which correspond to Ostrich and Bird. The other two are other (undetermined) attributes of birds (e.g., Feather). The target-attribute is Boolean and characterizes the ability to fly. At first, the system is exposed mostly to ostrich-birds (maybe the experiment is started in Australia). When asked whether birds typically fly (i.e., only the Bird attribute is asserted and all other inputs are don't-knows), FLARE concludes that birds do not fly, which is consistent with its current experience with the "world." However, as more new instances of flying birds (i.e., other than ostriches and penguins) are encountered, FLARE adapts its knowledge and, when asked again, concludes that birds fly. Correct knowledge about ostrich-birds is also preserved. That is, if the system is shown an ostrich, it will still conclude that the ostrich does not fly.
Of course, a precept may also be given to the system at any given time, stating that birds typically fly. The idea is that FLARE offers both options naturally. The system may be taught so as not to suffer from poor or atypical learning environments (e.g., Australia for predicting birds' flying ability), or it may be left to adapt to its environment. As research on autonomous agents continues, this latter ability becomes important. Note that the above example also illustrates one of FLARE's limitations. The system either concludes that birds do fly or that they do not. There is no mechanism for representing a middle ground in such a way that FLARE could reason about it at the meta-level. Decisions made in the presence of conflicts are also "crisp," as demonstrated by the simple, rigid conflict resolution mechanism discussed in Section 2.3.5. Even though the system may be able to produce more fuzzy-like results by associating each decision with a confidence level, it would still not be able to reason about these at the meta-level." }, { "figure_ref": [], "heading": "Learning Expert Systems", "publication_ref": [ "b18", "b38" ], "table_ref": [], "text": "In order to better assess FLARE's reasoning mechanisms, two expert system knowledge bases are used. One is called mediadv (Harmon & King, 1985) and is intended to help designers or committees choose the most appropriate media to deliver a training program. It consists of 20 rules with chains of inference of length 2 at most. The other is called health (Sawyer & Foster, 1986) and is intended to predict the longevity of patients based on a variety of factors (e.g., weight, personality, etc.). It is much larger, as it contains 77 rules, and more complex, as it involves longer chains of inference. Five rules were left out, as one is redundant (i.e., rule 17 is identical to rule 14) and four are only needed in the interactive setting in which the original system is described. Hence, only 72 rules are considered.
Both sets of rules were translated into AVL. The 20 rules of mediadv produce 99 vectors and the 72 rules of health produce 72 vectors. The number of vectors for mediadv is much larger than the original number of rules because many of the rules contain internal disjunctions. In AVL, a new vector must be constructed for each possible combination arising from the disjunctions.
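This expansion is a simple Cartesian product over the disjunctive conditions; an illustrative, runnable sketch (the dict-based rule representation is a hypothetical choice, not FLARE's format):

    # Sketch: expanding a rule with internal disjunctions into AVL vectors.
    # A rule is a dict of attribute -> list of allowed values, plus a conclusion.
    from itertools import product

    def expand(rule_conditions, conclusion):
        attrs = sorted(rule_conditions)
        return [(dict(zip(attrs, combo)), conclusion)
                for combo in product(*(rule_conditions[a] for a in attrs))]

    rule = {"A": [1, 2, 5], "B": [2.9, 7.8]}
    print(len(expand(rule, "C")))   # -> 6 vectors, as in the example that follows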
For example, the rule "If ((A=1 or A=2 or A=5) and (B=2.9 or B=7.8)) then C" gives rise to 6 vectors corresponding to the equivalent set of rules: "If ((A=1) and (B=2.9)) then C", "If ((A=1) and (B=7.8)) then C", "If ((A=2) and (B=2.9)) then C", etc.
The sets of vectors corresponding to the original knowledge bases are not encoded into FLARE. Rather, they are presented to the system to be learned. Hence, some generalization may take place. In fact, the final number of vectors (after learning) in mediadv is only 71, while in health it is 65.
The mediadv example is clearly very simple and presents little interest in terms of deduction. However, its purpose here is to show how the system's current knowledge base may be updated through learning. Of particular interest is the case of conflicts that arise because two or more rules may apply to a given situation, while implying different goal values. In mediadv, such a conflict exists between rules 13 and 14 and between rules 16 and 17. Rules 13 and 14 are used as illustration. Let X be some fixed conjunction of conditions not shown. Then:
rule 13: if (X) and (training budget = small or training budget = medium) then media to consider = lecture
rule 14: if (X) and (training budget = medium) then media to consider = lecture-with-slides
It is clear that, in some cases, these rules conflict. The important issue is that it is difficult to avoid such occurrences in large knowledge bases elicited from experts. As FLARE supports learning, it is possible, however, to look at various (historical) situations where the training budget was medium and check which media was used then. This information can, in turn, be used to give precedence to one rule over the other. Moreover, this precedence need not be fixed after so many examples have been considered. Indeed, it may evolve over time and even change radically depending on circumstances.
An example, using several additional instances of [(X) and (training budget = medium)] together with a target value for media to consider, was implemented. The instances used caused the value of dynamic_priority of rule 13 to be greater than that of rule 14, thus effectively giving (evidential) precedence to rule 13.
Our experiments with the health knowledge base demonstrate FLARE's ability to perform deduction. The experiments conducted involve chains of inference of reasonable lengths and are fairly intuitive. Results are summarized in Table 9. The first column contains the list of attributes used in the knowledge base. Then, each pair of columns (setting, result) represents an experiment in reasoning with the knowledge base. The setting column contains the data FLARE starts with. Unknown conditions (or attributes) are initialized as don't-knows (i.e., ?). The result column shows the state of knowledge after reasoning. Each derived piece of information is italicized and subscripted by the depth of inference at which it was derived.
Starting with the facts in the setting column, FLARE successively infers new conclusions until it reaches a value for the top goal, longevity. Details of the inference process are given for the first setting only. They are easily extended to the other settings. The first setting corresponds to an average adult female of Asian race, with little vices or excesses and a reasonable diet. FLARE first infers that:
- Her relative weight is normal (absolute weight < 110 lbs and small frame).
- Her personality type is A, as she is aggressive.
- Her blood pressure is average (normal fat and salt intake).
- Her base longevity is average, namely 67 (range is 48-84).
- Her chances of living longer (i.e., of adding years to base-longevity) are good.
And then, based on this added information, it infers that her risk is actually high and, though the chances of living longer are good, the actual value added to base-longevity is 0 (i.e., factor = none). Finally, as one would have expected, the woman's longevity is predicted to be average (i.e., 67). The other settings further illustrate FLARE's ability to perform forward chaining. The second setting corresponds to a very unhealthy older male whose longevity is accordingly predicted to be low, and the third setting describes a young healthy female whose life is expectedly predicted to be quite long. Note that though the results may seem impressive, the experiments are only "anecdotal."
(Table 9: Health Knowledge Base.)
Note that, as in classical expert systems, the identification of a closest match during reasoning could be used to extend FLARE so that it may query the user for missing information as well as justify both the queries and the decisions made." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b15", "b28" ], "table_ref": [], "text": "The above applications serve to demonstrate that FLARE holds promise. However, FLARE has many important limitations, several of which were mentioned throughout the paper. Some of them are summarized here.
FLARE's use of AVL as a representation language limits its applicability to relatively simple problems. Induction and deduction are carried out within the confines of non-recursive, propositional logic. Such a restriction makes the combination of learning and reasoning more accessible, since much research has taken place within this context. However, first-order predicate logic seems a minimum requirement for any system claiming reasoning abilities.
Although FLARE produces good results, the applications it was tested on are relatively simple. For example, many of the databases in the UCI repository have low complexity, and relatively unsophisticated learning methods perform well on them. This explains why FLARE's extremely coarse generalization scheme seems sufficient to attain reasonable predictive accuracy. Similarly, the reasoning problems presented are somewhat straightforward. It follows that simple mechanisms such as static priorities and other counting devices used by FLARE are sufficient.
FLARE does not have any meta-level abilities. The system is unable to reason about its own knowledge and is subsequently unable to produce meaningful middle-ground solutions. Yet, work on Cyc (Guha & Lenat, 1994) strongly suggests that meta-knowledge is indispensable in carrying out uncertain reasoning.
It is clear that FLARE only "scratches the surface" of the problem of effectively and efficiently combining induction and deduction. Work on ILP (Muggleton, 1992) may shed some light on the issue of bringing systems like FLARE to a first-order logic level." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b14", "b11", "b20", "b44", "b37", "b37", "b46", "b39", "b0", "b4", "b9", "b27", "b1", "b12", "b17", "b23", "b28", "b29", "b40", "b10", "b31", "b47", "b42", "b41", "b24", "b34", "b19", "b43" ], "table_ref": [], "text": "FLARE follows in the tradition of PDL2 (Giraud-Carrier & Martinez, 1994b) and ILA (Giraud-Carrier & Martinez, 1995), as it attempts to combine inductive learning using prior knowledge together with reasoning.
Unlike PDL2 and ILA, whose prior knowledge must be pre-encoded and whose reasoning power is limited to classification (i.e., 1-step forward inferences only), FLARE supports the automatic generation of precepts and forward chaining to any arbitrary depth. Whereas PDL2's actual operation tends to decouple learning and reasoning (i.e., the system essentially uses distinct mechanisms to perform either one), ILA implements an inherently more incremental approach by combining them into a 2-phase algorithm that always reasons first and then adapts accordingly. FLARE further extends ILA by providing a natural transformation from constrained first-order clauses to attribute-value vectors and a more accurate characterization of conflicting defaults.
In attempting to construct a unified framework for learning and reasoning, FLARE follows a synergistic approach, similar (at least in concept) to that taken in SOAR (Laird, Newell, & Rosenbloom, 1987) and NARS (Wang, 1993), for example. There are also a variety of inductive learning models and reasoning systems that bear similarity with the corresponding components of FLARE. Some of them are discussed here.
Induction in FLARE is carried out much the same way as in NGE (Salzberg, 1991). However, because generalization is effected only by setting some attribute(s) to don't-care, the produced generalizations, or generalized exemplars (Salzberg, 1991), are hyperplanes, rather than hyperrectangles, in the input space. Hence, FLARE implements a nearest-hyperplane learning algorithm. FLARE also uses static and dynamic priorities to break ties between equidistant generalizations. Moreover, where it was shown that overlapping hyperrectangles may hinder performance (Wettschereck & Dietterich, 1994), FLARE allows overlapping hyperplanes for purposes of dealing with conflicting defaults.
In the case that no generalizations are constructed from the training examples, FLARE degenerates into a restricted form of MBR (Stanfill & Waltz, 1986). The distance metric used is similar to IBL's metric (Aha et al., 1991) but it also handles don't-care attributes (which are non-existent in instance-based learners) and treats missing attributes somewhat differently. Where IBL considers missing attributes to be complete mismatches, FLARE chooses a more middle-ground approach that may better capture the inherent notion of missing or "don't-know" attributes.
Learning in FLARE contrasts with algorithms such as CN2 (Clark & Niblett, 1989), where all training examples must be available a priori. Rather, FLARE follows an incremental approach similar to that argued for by Elman (1991), except that it is the knowledge itself that is evolved, rather than the system's structure. Moreover, learning in FLARE can be effected continually. Any time an example or a precept is presented and its target output is known, FLARE can adapt.
Prior knowledge may take a variety of forms, some of which are discussed by Mitchell (1980) and Buntine (1990). The form most relevant to FLARE consists of domain-specific inference rules, either pre-encoded or deduced from more general rules. Systems that explicitly combine inductive learning with this kind of prior knowledge include PDLA (Giraud-Carrier & Martinez, 1993), ScNets (Hall & Romaniuk, 1990), ASOCS (Martinez, 1986) and ILP (Muggleton, 1992; Muggleton & De Raedt, 1994). ScNets are hybrid symbolic, connectionist models that aim at providing an alternative to knowledge acquisition from experts.
Known rules may be pre-encoded and new rules can be learned inductively from examples. The representation lends itself to rule generation but the constructed networks are complex and generalization does not appear trivial. ASOCS and PDLA are dynamic, self-organizing networks that learn, incrementally, from both examples and rules. In ASOCS, order matters and conflicts are simply solved by giving priority to the most recent rules. PDLA is less order-dependent and provides evidence-driven mechanisms for the handling of conflicts. As in ScNets, prior knowledge in ASOCS and PDLA takes the form of explicitly encoded, domain-specific rules. FLARE's approach is more flexible. Because the system can reason, domain-specific rules (or precepts) can be deduced automatically from more general rules. ILP models offer the same flexibility. At the intersection of logic programming and inductive learning, ILP takes advantage of the full expressiveness of first-order predicate logic to learn first-order theories from background theories and examples. FLARE's representation language, though capable of handling both nominal and linear (including continuous and numerical) data, is only as expressive as non-recursive, propositional clauses. However, in this simpler setting, FLARE supports evidential reasoning and the prioritization of rules.
FLARE's use of rules and similarity in reasoning is similar to CONSYDERR's (Sun, 1992). However, CONSYDERR is strictly concerned with a connectionist approach to concept representation and commonsense reasoning. The resulting model is elegant. It consists of a two-level architecture that naturally captures the dichotomy between concepts and the features used to describe them. However, it does not address the problem of learning (how such a skill could be incorporated is also unclear) and is currently limited to reasoning from concepts. FLARE's representation is not as elegant but the model can effectively reason from concepts or from features. CONSYDERR deals only with Boolean features and a concept's representation is limited to a single conjunction of features. FLARE's concepts generally consist of several conjunctions of features, each representing partial and complementary definitions of the concept. Also, since the domain of features is not restricted, FLARE uses a more general distance metric than CONSYDERR's similarity measure based on feature overlap. However, FLARE currently has no mechanisms for individual weighting of features, which may cause performance degradation and increased memory requirements in the presence of a large number of irrelevant features. FLARE's ability to evolve its knowledge base over time is similar to that found in theory-refinement systems such as RTLS (Ginsberg, 1990), EITHER (Ourston & Mooney, 1990, 1994) and KBANN (Towell, Shavlik, & Noordewier, 1990; Towell & Shavlik, 1994). RTLS implements a 3-phase algorithm for refinement. It first reduces the current theory to a form suitable for inductive learning, then performs learning and, finally, retranslates the result into a new theory. This process is potentially costly. In FLARE, the language of the theory is the same as the language of induction; that is, the theory is always in reduced form. Though the language is not as rich, it allows revision to take place efficiently for each new example, incrementally. EITHER is similar to FLARE as it assumes an approximate theory and allows correction of both overly-general and overly-specific rules. The mechanisms for revision are different.
EITHER may add/remove antecedents and rules, while FLARE may remove antecedents and add rules and exceptions. EITHER currently only handles Boolean attributes, while FLARE has no such restriction. However, EITHER uses both explanation-based learning and inductive learning in revision, while FLARE is strictly inductive. KBANN, like EITHER, only deals with propositional, non-recursive Horn clauses. Prior knowledge is expressed identically to FLARE's pre-encoded precepts (i.e., domain-specific inference rules in the form of Prolog-like clauses). KBANN translates the given knowledge base into an equivalent artificial neural network (ANN) and may then perturb it and learn using the backpropagation algorithm. In FLARE, there is no ANN; the knowledge base is simply stored as individual rules. Overall, FLARE provides a slightly more general and synergistic approach. New evidence is constantly used to revise the current state of knowledge. There are currently no mechanisms in FLARE to deal explicitly with fuzzy rules. However, several mechanisms exist to handle inconsistencies and conflicts. FLARE always makes a decision based on available evidence. A confidence level can also be produced to characterize the "goodness" of the decision.
FLARE's limited handling of non-monotonicity differs from the approach taken in logic. Non-monotonic logics typically extend first-order predicate logic through added "machinery," such as circumscription (McCarthy, 1980), semi-normal defaults (Reiter & Griscuolo, 1981) or hierarchical theories (Konolige, 1988), while essentially preserving consistency. FLARE's approach consists of tolerating inconsistencies in the knowledge base but providing reasoning mechanisms that ensure that no inconsistent conclusions are ever reached. It essentially consists of using normal defaults for inheritance and an external criterion for cancellation (Vilain, Koton, & Chase, 1990). The current criterion relies mostly on a simple counting argument (for dynamic priorities and covers). Though this approach has proven sufficient for the simple propositional examples described here, it is likely to break down on more sophisticated examples and domains." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper highlights some of the interdependencies between learning and reasoning and details a system, called FLARE, that combines inductive learning using prior knowledge together with reasoning within the confines of non-recursive, propositional logic." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by grants from Novell Inc. and WordPerfect Corp. Many thanks also to our reviewers for helpful and constructive comments." }, { "figure_ref": [], "heading": "DEFINITION", "publication_ref": [], "table_ref": [], "text": "Input: the current knowledge base, the vector v+ output by function Reasoning and the target value of the target-attribute.
Output: updated knowledge base.
IMPLEMENTATION
1. Let m be the vector of the current knowledge base such that D(m, v+) is smallest (i.e., m is v+'s closest match in the current knowledge base).
2. If all the attributes have equal values in v+ and m, then add 1 to m.counters[v+.target-attribute's value] (* if m is identical to or the prototype of v+, then: do not store v+, update m's counters *)
3. Else if m covers v+ and m and v+ are concordant, then add 1 to m.num_covers (* if m subsumes or is a generalization of v+, then: do not store v+, increase m's confidence *)
4.
Else if v+ covers m and m and v+ are concordant, then add 1 to v+.num_covers, delete m from the knowledge base and add v+ to the knowledge base (* if v+ subsumes or is a generalization of m, then: replace m by v+, increase v+'s confidence *)
5. Else if v+ and m can produce a generalization, then (* if there is a possibility of generalization *)
If v+ is more specific than m and m has more than one non-'?' attribute, then drop the condition in m and set m.static_priority to max{m.static_priority, v+.static_priority} (* if m is more general than v+ and dropping condition is possible, then: do not store v+, drop condition in m, update static priority *)
Else if v+ has more than one non-'?' attribute, then drop the condition in v+, set v+.static_priority to max{m.static_priority, v+.static_priority}, set v+.num_covers to m.num_covers, delete m from the knowledge base and add v+ to the knowledge base (* if v+ is more general than m and dropping condition is possible, then: drop condition in v+, replace m by v+, update parameters *)
Else add v+ to the knowledge base (* if dropping condition is impossible, then: store v+ in knowledge base *)
6. Else add v+ to the knowledge base (* default case: store v+ in the knowledge base *)
Figure 4: Function Adapting
The array counters contains an entry for each possible value of the target-attribute and is also stored with each vector. All the counters are initialized to 0, except the one corresponding to the vector's target-attribute value, which is initialized to 1. The counters evolve over time and are used to handle noise. For any vector p in the current knowledge base, exactly one counter value is incremented (by 1) each time a new vector is presented." } ]
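To make the Adapting procedure of Figure 4 concrete, here is a minimal Python sketch of its main branches. This is a simplified, hypothetical reading of the pseudocode, not the authors' implementation: the distance, cover and concordance tests are reduced to their simplest propositional forms, step 5 (condition dropping) and the counter bookkeeping are elided, and vectors are plain dictionaries over a fixed attribute list with '?' as the don't-care value.

```python
# Hypothetical sketch of FLARE's Adapting step (Figure 4); simplified, not the original code.
WILD = '?'  # don't-care value

def distance(x, y, attrs):
    # Crude stand-in for FLARE's heterogeneous metric D: fraction of
    # disagreements on the attributes asserted (non-'?') in x.
    asserted = [a for a in attrs if x[a] != WILD]
    if not asserted:
        return 0.0
    return sum(x[a] != y[a] for a in asserted) / len(asserted)

def covers(g, s, attrs):
    # g covers s if g is at least as general: every asserted attribute of g agrees with s.
    return all(g[a] == WILD or g[a] == s[a] for a in attrs)

def concordant(x, y):
    return x['target'] == y['target']

def adapt(kb, v, attrs):
    """One incremental update of a non-empty knowledge base kb with the new vector v."""
    m = min(kb, key=lambda p: distance(p, v, attrs))        # step 1: closest match
    if all(m[a] == v[a] for a in attrs):                    # step 2: identical or prototype
        m['counters'][v['target']] = m['counters'].get(v['target'], 0) + 1
    elif covers(m, v, attrs) and concordant(m, v):          # step 3: m subsumes v
        m['num_covers'] += 1
    elif covers(v, m, attrs) and concordant(m, v):          # step 4: v subsumes m
        v['num_covers'] = v.get('num_covers', 0) + 1
        kb.remove(m)
        kb.append(v)
    else:                                                   # steps 5-6 collapsed here
        kb.append(v)                                        # default: store v
    return kb
```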
[ { "authors": "D Aha; D Kibler; M Albert", "journal": "Machine Learning", "ref_id": "b0", "title": "Instance-based learning algorithms", "year": "1991" }, { "authors": "W Buntine", "journal": "", "ref_id": "b1", "title": "A Theory of Learning Classification Rules", "year": "1990" }, { "authors": "", "journal": "Addison-Wesley Publishing Company", "ref_id": "b2", "title": "Readings in Medical Artificial Intelligence: The First Decade", "year": "1984" }, { "authors": "K Clark", "journal": "Plenum Press", "ref_id": "b3", "title": "Negation as failure", "year": "1978" }, { "authors": "P Clark; T Niblett", "journal": "Machine Learning", "ref_id": "b4", "title": "The CN2 induction algorithm", "year": "1989" }, { "authors": "A Collins; R Michalski", "journal": "Cognitive Science", "ref_id": "b5", "title": "The logic of plausible reasoning: A core theory", "year": "1989" }, { "authors": "F D'Ignazio; A Wold", "journal": "Franklin Watts Library Edition", "ref_id": "b6", "title": "The Science of Artificial Intelligence", "year": "1984" }, { "authors": "R Duda; R Reboh", "journal": "Ablex Publishing Corp", "ref_id": "b7", "title": "AI and decision making: The PROSPECTOR experience", "year": "1984" }, { "authors": "S Džeroski; S Muggleton; S Russell", "journal": "", "ref_id": "b8", "title": "Learnability of constrained logic programs", "year": "1993" }, { "authors": "J Elman", "journal": "", "ref_id": "b9", "title": "Incremental learning, or the importance of starting small", "year": "1991" }, { "authors": "A Ginsberg", "journal": "", "ref_id": "b10", "title": "Theory reduction, theory revision, and retranslation", "year": "1990" }, { "authors": "C Giraud-Carrier; T Martinez", "journal": "", "ref_id": "b11", "title": "ILA: Combining inductive learning with prior knowledge and reasoning", "year": "1995" }, { "authors": "C Giraud-Carrier; T Martinez", "journal": "", "ref_id": "b12", "title": "Using precepts to augment training set learning", "year": "1993" }, { "authors": "C Giraud-Carrier; T Martinez", "journal": "Kluwer Academic Publishers", "ref_id": "b13", "title": "An efficient metric for heterogeneous inductive learning applications in the attribute-value language", "year": "1994" }, { "authors": "C Giraud-Carrier; T Martinez", "journal": "", "ref_id": "b14", "title": "An incremental learning model for commonsense reasoning", "year": "1994" }, { "authors": "R Guha; D Lenat", "journal": "Communications of the ACM", "ref_id": "b15", "title": "Enabling agents to work together", "year": "1994" }, { "authors": "N Haas; G Hendrix", "journal": "Morgan Kaufmann Publishers, Inc", "ref_id": "b16", "title": "Learning by being told: Acquiring knowledge for information management", "year": "1983" }, { "authors": "L Hall; S Romaniuk", "journal": "", "ref_id": "b17", "title": "A hybrid connectionist, symbolic learning system", "year": "1990" }, { "authors": "P Harmon; D King", "journal": "John Wiley & Sons, Inc", "ref_id": "b18", "title": "Expert Systems", "year": "1985" }, { "authors": "K Konolige", "journal": "", "ref_id": "b19", "title": "Hierarchic autoepistemic theories for nonmonotonic reasoning", "year": "1988" }, { "authors": "J Laird; A Newell; P Rosenbloom", "journal": "Artificial Intelligence", "ref_id": "b20", "title": "SOAR: An architecture for general intelligence", "year": "1987" }, { "authors": "N Lavrač; S Džeroski; M Grobelnik", "journal": "", "ref_id": "b21", "title": "Learning nonrecursive definitions of relations in LINUS", "year": "1991" }, { "authors": "V Lifschitz", "journal": "", "ref_id": "b22", "title": "Benchmark problems for formal nonmonotonic reasoning", "year": "1988" }, { "authors": "T Martinez", "journal": "", "ref_id": "b23", "title": "Adaptive Self-Organizing Networks", "year": "1986" }, { "authors": "J McCarthy", "journal": "Artificial Intelligence", "ref_id": "b24", "title": "Circumscription: A form of nonmonotonic reasoning", "year": "1980" }, { "authors": "R Michalski", "journal": "Artificial Intelligence", "ref_id": "b25", "title": "A theory and methodology of inductive learning", "year": "1983" }, { "authors": "M Minsky; D Riecken", "journal": "Communications of the ACM", "ref_id": "b26", "title": "A conversation with Marvin Minsky about agents", "year": "1994" }, { "authors": "T Mitchell", "journal": "", "ref_id": "b27", "title": "The need for biases in learning generalizations", "year": "1980" }, { "authors": "S Muggleton", "journal": "Academic Press", "ref_id": "b28", "title": "Inductive Logic Programming", "year": "1992" }, { "authors": "S Muggleton; L De Raedt", "journal": "Journal of Logic Programming", "ref_id": "b29", "title": "Inductive logic programming: Theory and methods", "year": "1994" }, { "authors": "P Murphy; D Aha", "journal": "", "ref_id": "b30", "title": "UCI repository of machine learning databases", "year": "1992" }, { "authors": "D Ourston; R Mooney", "journal": "", "ref_id": "b31", "title": "Changing the rules: A comprehensive approach to theory refinement", "year": "1990" }, { "authors": "D Ourston; R Mooney", "journal": "Artificial Intelligence", "ref_id": "b32", "title": "Theory refinement combining analytical and empirical methods", "year": "1994" }, { "authors": "J Quinlan", "journal": "Machine Learning", "ref_id": "b33", "title": "Inductive learning of decision trees", "year": "1986" }, { "authors": "R Reiter; G Criscuolo", "journal": "", "ref_id": "b34", "title": "On interacting defaults", "year": "1981" }, { "authors": "D Rumelhart; J McClelland", "journal": "MIT Press", "ref_id": "b35", "title": "Parallel and Distributed Processing: Explorations in the Microstructure of Cognition", "year": "1986" }, { "authors": "M Rychener", "journal": "Morgan Kaufmann Publishers, Inc", "ref_id": "b36", "title": "The instructible production system: A retrospective analysis", "year": "1983" }, { "authors": "S Salzberg", "journal": "Machine Learning", "ref_id": "b37", "title": "A nearest hyperrectangle learning method", "year": "1991" }, { "authors": "B Sawyer; D Foster", "journal": "John Wiley & Sons, Inc", "ref_id": "b38", "title": "Programming Expert Systems in Pascal", "year": "1986" }, { "authors": "C Stanfill; D Waltz", "journal": "Communications of the ACM", "ref_id": "b39", "title": "Toward memory-based reasoning", "year": "1986" }, { "authors": "R Sun", "journal": "Knowledge Acquisition", "ref_id": "b40", "title": "A connectionist model for commonsense reasoning incorporating rules and similarities", "year": "1992" }, { "authors": "G Towell; J Shavlik", "journal": "Artificial Intelligence", "ref_id": "b41", "title": "Knowledge-based artificial neural networks", "year": "1994" }, { "authors": "G Towell; J Shavlik; M Noordewier", "journal": "", "ref_id": "b42", "title": "Refinement of approximate domain theories by knowledge-based neural networks", "year": "1990" }, { "authors": "M Vilain; P Koton; M Chase", "journal": "", "ref_id": "b43", "title": "On analytical and similarity-based classification", "year": "1990" }, { "authors": "P Wang", "journal": "", "ref_id": "b44", "title": "Non-axiomatic reasoning system (version 2.2)", "year": "1993" }, { "authors": "D Waterman", "journal": "Addison Wesley", "ref_id": "b45", "title": "A Guide to Expert Systems", "year": "1986" }, { "authors": "D Wettschereck; T Dietterich", "journal": "Machine Learning", "ref_id": "b46", "title": "An experimental comparison of the nearest-neighbor and nearest-hyperrectangle algorithms", "year": "1994" }, { "authors": "M Wollowski", "journal": "", "ref_id": "b47", "title": "Case-based reasoning as a means to overcome the frame problem", "year": "1994" }, { "authors": "F Zarndt", "journal": "", "ref_id": "b48", "title": "A comprehensive case study: An examination of connectionist and machine learning algorithms", "year": "1995" } ]
[ { "formula_coordinates": [ 5, 168.96, 88.8, 270.72, 88.8 ], "formula_id": "formula_0", "formula_text": "FOL | AVL: Ani(b) Bir(b) Pen(b) Fly(b); Animal(x) => ¬Fly(x): 1 ? ? 0 T; Bird(x) => Animal(x): 1 T 1 ? ?; Bird(x) => Fly(x): ? 1 ? 1 T; Penguin(x) => Bird(x): ? 1 T 1 ?; Penguin(x) => ¬Fly(x): ? ? 1 0 T" }, { "formula_coordinates": [ 13, 239.28, 101.28, 132.24, 59.92 ], "formula_id": "formula_1", "formula_text": "D(x, y) = (\\sum_{i=1}^{n} d(x_i, y_i)) / num_asserted(x)" } ]
An Integrated Framework for Learning and Reasoning
Learning and reasoning are both aspects of what is considered to be intelligence. Their studies within AI have been separated historically, learning being the topic of machine learning and neural networks, and reasoning falling under classical (or symbolic) AI. However, learning and reasoning are in many ways interdependent. This paper discusses the nature of some of these interdependencies and proposes a general framework, called FLARE, that combines inductive learning using prior knowledge together with reasoning in a propositional setting. Several examples that test the framework are presented, including classical induction, many important reasoning protocols and two simple expert systems.
Christophe G Giraud-Carrier; Tony R Martinez
[ { "figure_caption": "Figure 1: FOL to AVL Transformation. FOL | AVL: Rep(b) Qua(b) Pac(b); Republican(x) => ¬Pacifist(x): 1 ? 0 T; Quaker(x) => Pacifist(x): ? 1 1 T", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "and stimulus-duration = persistent and training-budget = small Then media = role-play-w/verbal-feedback", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2: FLARE - Algorithmic Overview", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "(1) (a) Let w = v (b) (i) Designate the second attribute as a first subgoal (ii) Apply rule v_2: result is w = (1, 1, 1, ?, 0, 0, ? T) (i) Designate the fourth attribute as second subgoal (ii) Apply rule v_5: result is w = (1, 1, 1, 0, 0, 0, ? T) (c) Let v = w (2) Two conflicting rules exist: D(v_7, v) = D(v_11, v) = 0; Apply v_7 (more specific): result is v = (1, 1, 1, 0, 0, 0, 2 T) 2.3.7 Approximate Reasoning", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "DEFINITION Input: a set of general rules, a set of facts and one designated target-attribute Output: one or more precepts IMPLEMENTATION 1. Learning general knowledge: Perform Learning on the set of general rules. 2. Reason from facts: Perform Reasoning with a vector encoding the given facts and the designated target-attribute.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6: Function Generate-Precepts", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Flying or Not Flying (Lifschitz, 1988)", "figure_data": "FOL: Tear-prod-rate(x,low) => Eyes(x,dry); Eyes(x,dry) => ¬Fit(x). AVL: Tpr(m) Eye(m) Fit(b); low dry T ?; ? dry 0 T", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Fitting Lenses", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Simple Blocks World, adapted from", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Simple KB Running Example", "figure_data": "DEFINITION Function: Generate-Precepts - Input: a set of general rules, a set of facts and one designated target-attribute. - Output: one or more precepts. Function: Reasoning - Input: the current knowledge base, a set of facts encoded in a vector v, one designated target-attribute and optionally, the target value of the target-attribute. - Output: a vector v+ equal to v together with further facts deduced from v, including a derived value for the target-attribute. Function: Adapting - Input: the current knowledge base, the vector v+ output by function Reasoning and the target value of the target-attribute. - Output: updated knowledge base. IMPLEMENTATION 1. Preprocessing: Perform Generate-Precepts 2. Main loop: For each vector presented to the system (a) Perform Reasoning (b) If there is a target value for the target-attribute, perform Adapting", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b22", "b28", "b1", "b15", "b22", "b22", "b28", "b7", "b16", "b19", "b1", "b15", "b24", "b25", "b10", "b24", "b25", "b10", "b11" ], "table_ref": [], "text": "Problems of learning on temporal domains can be significantly hindered by the presence of long-term dependencies in the training data. A sequence of random variables (e.g., a sequence of observations {y_1, y_2, ..., y_t, ..., y_T}, denoted y_1^T) is said to exhibit long-term dependencies if the variables y_t at a given time t are significantly dependent on the variables y_t' at much earlier times t' << t. In these cases, a system trained on this data (e.g., to model its distribution, or make classifications or predictions) has to be able to store for arbitrarily long durations bits of information in its state variable, called x_t here. In general, the difficulty is not only to represent these long-term dependencies, but also to learn a representation of past context which takes them into account. Recurrent neural networks (Rumelhart, Hinton, & Williams, 1986; Williams & Zipser, 1989), for example, have an internal state and a rich expressive power that provide them with the necessary long-term memory capabilities.
Algorithms that could efficiently learn to represent long-term context would be useful in many areas of Artificial Intelligence. For example, they could be applied to many problems in natural language processing, both at the symbolic level (e.g., learning grammars and language models), and at the subsymbolic level (e.g., modeling prosody for speech recognition or synthesis).
In order to train the learning system, however, an effective mechanism of credit assignment through time is needed. To change the parameters of the system in order to change the internal state of the system at time t, so as to "improve" the internal state of the system later in the sequence, one can recursively propagate credit or error information backwards in time. For example, the Baum-Welch algorithm for HMMs (Baum, Petrie, Soules, & Weiss, 1970; Levinson, Rabiner, & Sondhi, 1983) and the back-propagation through time algorithm for recurrent neural networks (Rumelhart et al., 1986) rely on this kind of recursion. Numerous gradient-descent based algorithms have been proposed for solving the credit assignment problems in recurrent networks (e.g., Rumelhart et al., 1986; Williams & Zipser, 1989). Yet, many researchers have found practical difficulties in training recurrent networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals (Bengio, Simard, & Frasconi, 1994; Mozer, 1992; Rohwer, 1994). Bengio et al. (1994) have also found theoretical reasons for this difficulty and proved a negative result for parametric dynamical systems with a non-linear state to next-state recurrence x_t = f_t(x_{t-1}): it will be increasingly difficult to train such a system with gradient descent as the duration of the dependencies to be captured increases. Let J be the matrix of partial derivatives of the state to next-state function, J_ij = ∂x_{t,i}/∂x_{t-1,j}. A mathematical analysis of the problem shows that, depending on the norm |J| of the Jacobian matrix J, one of two conditions arises in such systems.
When |J| < 1, the dynamics of the network allow it to reliably store bits of information for arbitrary durations, even with bounded input noise; however, gradients with respect to an error at a given time step vanish exponentially fast as one propagates them backward in time. On the other hand, when |J| > 1, gradients can flow backward, but the system is locally unstable and cannot reliably store bits of information for a long time. Bengio et al. (1994) showed how this hurts the learning of long-term dependencies by putting exponentially more weight on the influence of short-term dependencies (in comparison to long-term dependencies) over the gradient of a cost function with respect to trainable parameters. The above negative result applies to non-linear parameterized dynamical systems such as most recurrent networks, but not to linear probabilistic models such as hidden Markov models (HMMs). These models are a special case of our previous result in which the 1-norm |J| = 1, because this matrix is a stochastic matrix, i.e., a matrix A of transition probabilities A_ij = P(x_t = j | x_{t-1} = i), where the state variable x_t can take a finite number of values.
The main contribution of this paper is therefore an extension of the negative results found by Bengio et al. (1994) to the case of Markovian models, which include standard HMMs (Baum et al., 1970; Levinson et al., 1983) as well as variations of HMMs such as Input/Output HMMs (IOHMMs) (Bengio & Frasconi, 1995b), and Partially Observable Markov Decision Processes (POMDPs) (Sondik, 1973, 1978; Chrisman, 1992). We find that in general, a phenomenon of diffusion of context and credit assignment, due to the ergodicity of the transition probability matrices, hampers both the representation and the learning of long-term context in the hidden state variable.
Both homogeneous and non-homogeneous Markovian models are considered. Homogeneous here means that the transition probabilities of the Markov model are constant over time t. Non-homogeneous means that these transition probabilities are allowed to be different for each time step, e.g., as a function of an external input that may be different at each time step. In the homogeneous case (e.g., standard HMMs), such models can learn the distribution P(y_1^T) of output sequences y_1^T = y_1, y_2, ..., y_T by associating an output distribution P(y_t | x_t = i) to each value i of the discrete state variable x_t. In the non-homogeneous case, transition and output distributions are conditional on an input sequence, making it possible to model relationships between input and output sequences. In the case of IOHMMs (Bengio & Frasconi, 1995b), one thus learns a model P(y_1^T | u_1^T) of the conditional distribution of an output sequence y_1^T when an input sequence u_1^T is given. This can be used to perform sequence regression or classification, as with recurrent networks. In the case of POMDPs (Sondik, 1973, 1978; Chrisman, 1992), used to control a process with a hidden state, one wants not only to build such a model, but also to select a proper sequence a_1^T of (discrete) actions in order to maximize a discounted sum of future rewards that depends on the action taken, the observed output sequence y_1^T and the estimated distribution of the state trajectory. Note that the sequence of actions a_1^T in POMDPs and the sequence of inputs u_1^T in IOHMMs play a similar role in this paper, inasmuch as both are responsible for the non-homogeneity of the Markov chain.
In the following, we shall use the same symbol u_1^T to denote the sequence that controls transition probabilities, i.e., inputs for IOHMMs and actions for POMDPs.
The negative results presented in this paper are directly applicable to learning algorithms such as the EM algorithm (Dempster, Laird, & Rubin, 1977) or other gradient-based optimization algorithms, which rely on gradually and iteratively modifying continuous-valued parameters (such as transition probabilities, or parameters of a function computing these probabilities) in order to optimize a learning criterion." }, { "figure_ref": [], "heading": "Mathematical Preliminaries", "publication_ref": [ "b18", "b23" ], "table_ref": [], "text": "A first-order Markovian model is defined by a discrete set of states {1, ..., n}, a probabilistic transition function (state to next-state), and a probabilistic output function (state to output). The discrete state variable x_t can take values in {1, ..., n} at each time step. We will write A_ij for the element (i, j) of a matrix A, A^n = A A ... A for the n-th power of A, and (A^n)_ij for the element (i, j) of A^n. See (Rabiner, 1989) for an introduction to HMMs, and (Seneta, 1981) for a basic reference on positive matrices.
The Markovian independence assumption implies that the state variable x_t summarizes the past of the sequence: P(x_t | x_1, x_2, ..., x_{t-1}) = P(x_t | x_{t-1}). Another independence assumption, when the state x_t is hidden but an output y_t is observed, is that the distribution of y_t at time t does not depend on the other past variables when x_t is given. State transitions at time t may depend on u_t (the current input for IOHMMs or the current action for POMDPs) and can be collected into an n by n transition matrix A_t defined by A_ij(u_t) = P(x_t = j | x_{t-1} = i, u_t; θ), where θ is a vector of adjustable parameters. In the homogeneous case, the transition matrix is constant, i.e., A_t = A. The parameters are then usually directly identified with the elements of the transition matrix A.
Output emissions y_t depend on u_t and the present state, as specified by the output (also called emission) distribution P(y_t | x_t, u_t; ϑ), with parameters ϑ. For example, if the Markov chain is homogeneous and the output values belong to a finite alphabet of cardinality k, then the parameters ϑ can be collected in a k by n matrix B, B_li = P(y_t = l | x_t = i).
An output sequence y_1^T can be generated according to the distribution P(y_1^T | u_1^T) (non-homogeneous case) or P(y_1^T) (homogeneous case) represented by the model, as follows. First an initial state x_0 is selected according to a distribution P(x_0) on initial states (usually multinomial, sometimes requiring n-1 extra parameters, or a fixed choice of a single initial state). Then the state x_t can be recursively picked as a function of the previous state x_{t-1}, by choosing an x_t in {1, ..., n} according to the multinomial distribution P(x_t | x_{t-1}, u_t; θ). At each time step, an output can then be generated according to the distribution P(y_t | x_t, u_t; ϑ).
State transitions can be constrained by a directed graph G, whose nodes are associated to the states of the Markov chain. In particular, the probability P(x_t = i | x_{t-1} = j) will be constrained to be zero if there is no edge from node j to node i."
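As a small illustration of the generative process just described, the following sketch samples a state trajectory and an output sequence from a homogeneous model with a discrete output alphabet (a toy example assuming NumPy; all matrix values are made up):

```python
# Minimal sketch of the generative process: sample x_0 from P(x_0),
# then x_t from A[x_{t-1}, :], then y_t from B[:, x_t].
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 3, 4, 10                      # states, output symbols, sequence length
p0 = np.array([1.0, 0.0, 0.0])          # P(x_0): toy initial distribution
A = np.array([[0.8, 0.2, 0.0],          # A[i, j] = P(x_t = j | x_{t-1} = i)
              [0.0, 0.7, 0.3],
              [0.1, 0.0, 0.9]])
B = rng.dirichlet(np.ones(k), size=n).T  # B[l, i] = P(y_t = l | x_t = i)

x = rng.choice(n, p=p0)                  # initial state
states, outputs = [], []
for t in range(T):
    x = rng.choice(n, p=A[x])            # multinomial state transition
    states.append(x)
    outputs.append(rng.choice(k, p=B[:, x]))  # multinomial emission
print(states, outputs)
```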
}, { "figure_ref": [], "heading": "Learning in Markovian Models", "publication_ref": [ "b1", "b15", "b18", "b0" ], "table_ref": [], "text": "The learning objective is often to maximize the output likelihood P(y_1^T; θ), or the output likelihood given the input P(y_1^T | u_1^T; θ), where θ comprises all the parameters of the model. This can be accomplished with an EM algorithm when the form of the output and transition probability models are simple enough, e.g., in the case of HMMs (Baum et al., 1970; Levinson et al., 1983; Rabiner, 1989) or IOHMMs (Bengio & Frasconi, 1995b). Alternatives, for maximizing the output likelihood or other criteria (such as the more discriminant mutual information between the output sequence and the correct model, Bahl et al. 1986), are usually based on some gradient-based optimization algorithm, requiring the computation of the gradient of the learning criterion with respect to the model parameters. In all of these cases, the learning algorithms perform products involving the transition probability matrices (Bengio & Frasconi, 1995a, 1995b), such as

α_{i,t} = P(y_1^t, x_t = i | u_1^t) = P(y_t | x_t = i, u_t) Σ_ℓ A_{ℓi}(u_t) α_{ℓ,t-1}
β_{i,t} = P(y_{t+1}^T | x_t = i, u_{t+1}^T) = Σ_ℓ A_{iℓ}(u_{t+1}) P(y_{t+1} | x_{t+1} = ℓ, u_{t+1}) β_{ℓ,t+1}   (1)

where the overall output likelihood is obtained from the final time step:

P(y_1^T | u_1^T) = Σ_i α_{i,T}.

Note that if L is the learning criterion and β_{i,T} = ∂L/∂α_{i,T}, then β_{i,t} = ∂L/∂α_{i,t}. In terms of matrices, we can write

α_t = Λ_t A'_t α_{t-1} = Λ_t A'_t Λ_{t-1} A'_{t-1} ... Λ_1 A'_1 α_0
β_t = A_{t+1} Λ_{t+1} β_{t+1} = A_{t+1} Λ_{t+1} ... A_T Λ_T β_T   (2)

where α_t = [α_{1,t} ... α_{n,t}]', β_t = [β_{1,t} ... β_{n,t}]' and Λ_t is the diagonal matrix of emission probabilities, with i-th diagonal element P(y_t | x_t = i, u_t). The matrix A_t contains the transition probabilities at time t, i.e., (A_t)_ij = P(x_t = j | x_{t-1} = i, u_t; θ). It can be easily verified that the compact notation

A^(t_0,t) = A_{t_0} A_{t_0+1} ... A_{t-1} A_t   (3)

for products of matrices can be used to describe the effect of the distribution of the state x_{t_0} at time t_0 on the distribution of the state x_t at time t > t_0: A^(t_0,t)_ij = P(x_t = j | x_{t_0} = i, u_{t_0}^t; θ). Therefore, we will study how this product evolves under various conditions, when t - t_0 increases (for long-term dependencies). We will find in what (rather general) conditions A^(t_0,t) tends to become ill-conditioned, more precisely, when x_t becomes more and more independent of x_{t_0} as t - t_0 increases. In Section 4.2, we also discuss equations (2) as T - t increases. In the following subsection we first introduce some standard mathematical tools for studying such products of non-negative matrices." }, { "figure_ref": [], "heading": "Definitions", "publication_ref": [], "table_ref": [], "text": "Definition 1 (Non-negative matrices) A matrix A is said to be non-negative, written A ≥ 0, if A_ij ≥ 0 for all i, j.
Positive matrices are defined similarly.
By extension, we will also write A ≤ B when A_ij ≤ B_ij for all i, j.
Definition 2 (Stochastic matrices) A non-negative square matrix A in R^{n×n} is called row stochastic (or simply stochastic in this paper) if Σ_{j=1}^{n} A_ij = 1 for all i = 1, ..., n.
Definition 3 (Allowable matrices) A non-negative matrix is said to be row [column] allowable if every row [column] sum is positive. An allowable matrix is both row and column allowable.
A non-negative matrix can be associated to the directed transition graph G that constrains the Markov chain. The incidence matrix Ã corresponding to a given non-negative matrix A is the 0-1 matrix obtained by replacing all positive entries of A by a 1.
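For concreteness, the product of equation (3) and the incidence matrix just defined can each be computed in a line or two (a toy NumPy sketch, with arbitrary values):

```python
# Sketch: the product A^(t0,t) of eq. (3) and the 0-1 incidence matrix A~.
import numpy as np
from functools import reduce

As = [np.array([[0.9, 0.1], [0.2, 0.8]]) for _ in range(5)]  # A_{t0}, ..., A_t (toy, homogeneous)
A_prod = reduce(np.matmul, As)       # (A_prod)_ij = P(x_t = j | x_{t0} = i, ...)
A_tilde = (As[0] > 0).astype(int)    # incidence matrix: 1 where a transition is allowed
print(A_prod, A_tilde, sep="\n")
```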
The incidence matrix of A is a connectivity matrix corresponding to the graph G (assumed to be connected here). Some algebraic properties of A are described in terms of the topology of G. Indices of the matrix A correspond to nodes of G (we will also use "states of the model", talking about a Markovian model).
Definition 4 (Irreducible Matrices) A non-negative n × n matrix A is said to be irreducible if for every pair i, j of indices, there exists a positive integer m = m(i, j) s.t. (A^m)_ij > 0.
A matrix A is irreducible if and only if the associated graph is strongly connected (i.e., there exists a path between any pair of states i, j). A reducible matrix is one that is not irreducible. If there exists k s.t. (A^k)_ii > 0 (i.e., there is a path of length k from node i to itself), d(i) is called the period of index i if d(i) is the greatest common divisor (g.c.d.) of those k for which (A^k)_ii > 0 (i.e., there are also paths of length k, 2k, 3k, etc., with k = d(i)). In an irreducible matrix all the indices have the same period d, which is called the period of the matrix. The period of a matrix is the g.c.d. of the lengths of all cycles in the associated transition graph G.
An example of a periodic matrix of period 3 is illustrated by the graph G_1 of Figure 2.
All the paths starting from one of the states and returning to it are of length 3k for some positive integer k.
Definition 5 (Primitive matrix) A non-negative matrix A is said to be primitive if there exists a positive integer k s.t. A^k > 0.
Therefore, in a graph with a corresponding primitive matrix, one can always find a path of length greater than some k between any two nodes, and if there exists a path of length k between nodes i and j, there are also paths of length k + 1, k + 2, etc. In the analysis below, we will consider submatrices (and corresponding subgraphs) which are primitive.
Note that an irreducible matrix is either periodic or primitive (i.e., of period 1), and that a primitive stochastic matrix is necessarily allowable." }, { "figure_ref": [], "heading": "The Perron-Frobenius Theorem", "publication_ref": [ "b2", "b23" ], "table_ref": [], "text": "Right eigenvectors v of a matrix A and their corresponding eigenvalues λ have the following properties (see Bellman, 1974, for more on eigenvalues and eigenvectors):

determinant(A − λI) = 0,

where I is the identity matrix, and

Av = λv, i.e., Σ_j A_ij v_j = λ v_i.

Note that for a stochastic matrix A the largest eigenvalue has norm 1, which can be shown as follows. Letting i = argmax_j |v_j|, we obtain

|λ| = |Σ_j A_ij v_j| / |v_i| ≤ Σ_j |A_ij| |v_j| / |v_i| ≤ Σ_j A_ij ≤ 1.

Hence all the eigenvalues have norm less or equal to 1. Let us define the vector of ones 1 = [1, 1, ..., 1]', where v' denotes the transpose of v. Since A1 = 1 by definition of stochastic matrices, 1 is an eigenvalue and 1 is its corresponding right eigenvector.
The following theorem will be useful in characterizing homogeneous products of stochastic matrices (as in HMMs).
Theorem 1 (Perron-Frobenius Theorem) Suppose A is an n × n non-negative primitive matrix. Then there exists an eigenvalue r such that:
1. r is real and positive;
2. r can be associated with strictly positive left and right eigenvectors;
3. r > |λ| for any eigenvalue λ ≠ r;
4. the eigenvectors associated with r are unique to constant multiples;
5. if 0 ≤ B ≤ A and β is an eigenvalue of B, then |β| ≤ r; moreover, |β| = r implies B = A;
6. r is a simple root of the characteristic equation determinant(A − rI) = 0.
[See proof in the book by Seneta, 1981, Theorem 1.1.]
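The theorem, and the corollary that follows, are easy to check numerically on a random stochastic matrix (a toy verification assuming NumPy, not part of the original development):

```python
# Numerical check: a random positive (hence primitive) stochastic matrix has
# largest eigenvalue 1, with right eigenvector proportional to [1, ..., 1].
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4))
A /= A.sum(axis=1, keepdims=True)      # normalize rows: A is stochastic

vals, vecs = np.linalg.eig(A)
i = np.argmax(np.abs(vals))
print(np.round(vals[i], 6))            # -> 1.0
print(np.round(vecs[:, i] / vecs[0, i], 6))  # -> [1, 1, 1, 1]
print(np.sort(np.abs(vals))[::-1])     # all other moduli strictly below 1
```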
A direct consequence of the Perron-Frobenius theorem for stochastic matrices is therefore the following:
Corollary 1 Suppose A is a primitive stochastic matrix. Then its largest eigenvalue is 1 and there is only one corresponding right eigenvector 1 = [1, 1, ..., 1]'. Furthermore, all other eigenvalues are less than 1 in modulus.
Proof. A1 = 1 by definition of stochastic matrices. As shown above, all the eigenvalues have a modulus less or equal to 1. Thus, we deduce from the Perron-Frobenius Theorem that 1 is the largest eigenvalue, 1 is the unique associated eigenvector, and all other eigenvalues λ are less than 1 in modulus. □
In the next section we will discuss the consequences of this corollary for HMMs. As shown by Seneta (1981), we should also note that if A is stochastic but periodic with period d, then A has d eigenvalues of modulus 1 which are the d complex roots of 1." }, { "figure_ref": [], "heading": "Ergodicity", "publication_ref": [], "table_ref": [], "text": "In this section we analyze the case of a primitive transition matrix as well as the general case with a so-called canonical re-ordering of the matrix indices (defined below). We introduce ergodicity coefficients in order to measure the difficulty in learning long-term dependencies." }, { "figure_ref": [], "heading": "Simplest Case: Homogeneous and Primitive", "publication_ref": [ "b23" ], "table_ref": [], "text": "A straightforward application of the Perron-Frobenius theorem and the associated Corollary 1 is given in the following theorem.
Theorem 2 If A is a primitive stochastic matrix, then as t → ∞, A^t → 1v', where v is called the unique stationary distribution of the Markov chain. The rate of approach is geometric. [See proof in the book by Seneta, 1981, Theorem 4.2.]
The intuition behind the proof simply relies on the fact that raising a matrix A to a power A^n is equivalent to raising its eigenvalues to the same power. As we have seen earlier, all the eigenvalues are less or equal to one in modulus. Therefore, the eigenvalues of A which are less than 1 in modulus are associated to near-zero eigenvalues of A^n, as n → ∞. The only eigenvalues which do not converge to zero are those whose modulus is 1.
There is only one such eigenvalue in the case of a primitive stochastic matrix (associated to the eigenvector 1). In the case of periodic matrices of period d, discussed below, there are complex eigenvalues whose modulus is 1 and which are among the d-th roots of unity.
We recall that the rank of a matrix A is the dimension of the linear subspace spanned by the eigenvectors of A and corresponds to the number of linearly independent rows (or columns). Since the matrix obtained by the product 1v' of two vectors has rank 1, we obtain the following from Theorem 2. If A is primitive, then lim_{t→∞} A^t converges to a matrix whose eigenvalues are all 0 except for one eigenvalue = 1 (with corresponding eigenvector 1), i.e., the rank of this product converges to 1, which means that its rows are proportional. For a stochastic matrix, row proportionality is equivalent to row equality.
Since (A^{t−t_0})_ij = P(x_t = j | x_{t_0} = i), it follows that the distribution over the states at time t > t_0 becomes gradually independent of the distribution P(x_{t_0}) over the states at time t_0 as t − t_0 increases. This is illustrated in Figure 6, which shows products of 1, 2, 3 and 4 random primitive stochastic matrices, and rapid convergence to row equality, i.e., P(x_t = j | x_{t_0} = i) does not depend any more on i as t − t_0 becomes large.
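This rapid convergence to row equality is easy to reproduce numerically, in the spirit of Figure 6 (a toy NumPy sketch with arbitrary matrix values):

```python
# Illustration of Theorem 2: powers of a primitive stochastic matrix converge,
# geometrically, to the rank-1 matrix 1 v' with equal rows (v = stationary distribution).
import numpy as np

A = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.3, 0.4],
              [0.1, 0.6, 0.3]])
for t in [1, 2, 4, 8, 16]:
    P = np.linalg.matrix_power(A, t)
    print(t, np.round(P, 4))   # rows become identical: P(x_t = j | x_0 = i) forgets i
```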
It means that, as one moves forward in time, context information is diffused, and gradually lost. A consequence of Theorem 2 is therefore that it is very difficult to model long-term dependencies in sequential data using a homogeneous HMM with a primitive transition matrix. After having introduced ergodicity coefficients in the next sections, we will be able to discuss the more general case of non-homogeneous models (such as IOHMMs and POMDPs), as well as comment on the diffusion of context information in the forward and backward HMM equations (2)." }, { "figure_ref": [], "heading": "Coefficients of ergodicity", "publication_ref": [ "b23", "b23" ], "table_ref": [], "text": "To study products of non-negative matrices and the loss of information about the initial state in Markov chains (particularly in the non-homogeneous case), we will define two coefficients of ergodicity. First, we introduce the projective distance between vectors v and w:

d(v', w') = max_{i,j} ln( (v_i w_j) / (v_j w_i) ).

Note that some form of contraction takes place when d(v'A, w'A) ≤ d(v', w') (Seneta, 1981), i.e., applying the linear operator A to the vectors v' and w' brings them "closer" (according to the above projective distance).
Definition 6 Birkhoff's contraction coefficient τ_B(A), for a non-negative column-allowable matrix A, is defined in terms of the projective distance:

τ_B(A) = sup_{v,w>0; v≠w} d(v'A, w'A) / d(v', w').

Dobrushin's coefficient τ_1(A), for a stochastic matrix A, is defined as follows:

τ_1(A) = (1/2) max_{i,j} Σ_k |a_ik − a_jk|.   (4)

Both τ_B and τ_1 are called proper ergodicity coefficients, i.e., they have the properties that, firstly, 0 ≤ τ(A) ≤ 1, and secondly, that τ(A) = 0 if and only if A has identical rows (and therefore rank 1). The coefficients of ergodicity quantify the ergodicity of a matrix, i.e., at what rate a power of the matrix converges to rank 1. Furthermore, τ(A_1 A_2) ≤ τ(A_1) τ(A_2) (Seneta, 1981). Therefore, as discussed in the next section, these coefficients can also be applied to quantify how fast a product of matrices converges to rank 1." }, { "figure_ref": [], "heading": "Products of Stochastic Matrices", "publication_ref": [ "b23" ], "table_ref": [], "text": "Let A^(1,t) denote a forward product of stochastic matrices A_1, A_2, ..., A_t. From the properties of τ_B and τ_1, if τ(A_t) < 1 for all t > 0, then lim_{t→∞} τ(A^(1,t)) = 0, i.e., lim_{t→∞} A^(1,t) has rank 1 and identical rows. Weak ergodicity of a product of matrices is then defined in terms of a proper ergodicity coefficient (such as τ_B or τ_1) converging to 0:
Definition 7 (Weak Ergodicity) The products of stochastic matrices A^(t_0,t) are weakly ergodic if and only if, for all t_0 ≥ 0, τ(A^(t_0,t)) → 0 as t → ∞.
The following theorem relates weak ergodicity to rank lossage in products of stochastic matrices and, therefore, to the problem of learning and representing long-term context.
Theorem 3 Let A^(1,t) be forward products of non-negative and allowable matrices; then A^(1,t) is weakly ergodic if and only if the following conditions both hold:
1. there exists t_0 s.t. A^(t_0,t) > 0 for all t ≥ t_0;
2. A^(t_0,t)_ik / A^(t_0,t)_jk → W_ij(t) > 0 as t → ∞, i.e., rows of A^(t_0,t) tend to proportionality.
[See the proof in the book by Seneta (1981), Lemma 3.3 and 3.4.]
For stochastic matrices, row proportionality (2nd condition above) is equivalent to row equality since rows sum to 1. Note that the limit lim_{t→∞} A^(t_0,t) itself does not need to exist in order to have weak ergodicity.
If such a limit exists and it is a matrix with all rows equal, then the product is said to be strongly ergodic." }, { "figure_ref": [], "heading": "Canonical Decomposition and Periodic Graphs", "publication_ref": [ "b23" ], "table_ref": [], "text": "Any non-negative matrix A can be rewritten by relabeling its indices in the following canonical decomposition (Seneta, 1981), with diagonal blocks B_i, C_i and Q, where the B_i and C_i blocks are irreducible, the B_i blocks are primitive and the C_i blocks are periodic. Define the corresponding sets of states as S_{B_i}, S_{C_i}, S_Q. Q might be reducible, but the groups of states in S_Q leak into the B or C blocks, i.e., S_Q represents the transient part of the state space. This decomposition is illustrated in Figure 1. We will consider three cases: paths starting from a state in Q, B_i or C_i. In the first case, for homogeneous and non-homogeneous Markov models (with constant incidence matrix Ã_t = Ã_0), because P(x_t ∈ S_Q | x_{t-1} ∈ S_Q) < 1, lim_{t→∞} P(x_t ∈ S_Q | x_0 ∈ S_Q) = 0. In the second case, because the B_i are primitive, we can apply Theorem 1 to these sub-matrices, and starting from a state in S_{B_i}, all information about an initial state at t_0 is gradually lost." }, { "figure_ref": [], "heading": "Periodic Graphs", "publication_ref": [ "b23" ], "table_ref": [], "text": "A more difficult case to analyze is the third case, i.e., that of paths from state j at time t_0 to state k at time t, with initial state j ∈ S_{C_i} associated to a periodic block. Let d_i be the period of the i-th periodic block C_i. It can be shown (Seneta, 1981) that taking d products of periodic matrices with the same incidence matrix and period d yields a block-diagonal matrix whose d blocks are primitive. Thus a product C^(t_0,t) retains information about the initial block in which x_{t_0} was. However, for every such block of size > 1, information will be gradually lost about the exact identity of the state within that block. This is best demonstrated through a simple example. Consider the incidence matrix represented by the graph G_1 of Figure 2. It has period 3 and the only non-deterministic transition is from state 1, which can lead into either one of two loops. When many stochastic matrices with this graph are multiplied together, information about the loop in which the initial state was is gradually lost (i.e., if the initial state was 2 or 3, this information is gradually lost). What is retained is the phase information, i.e., in which block ({0}, {1}, or {2,3}) of a cyclic chain the initial state was. This suggests that it will be easy to learn about the type of outputs associated to each block of a cyclic chain, but it will be hard to learn anything else. Suppose now that the sequences to be modeled are slightly more complicated, requiring an extra loop of period 4 instead of 3, as in Figure 2. In that case A is primitive: all information about the initial state will be gradually lost." }, { "figure_ref": [], "heading": "Representing and Learning Long-Term Context", "publication_ref": [], "table_ref": [], "text": "Based on the analysis of the previous section, which applies to both the homogeneous and non-homogeneous cases, we find in this section that in order to absolutely avoid all diffusion of context and credit information (both learning and representing context), the transitions should be deterministic (0 or 1 probability).
For HMMs, this unfortunately corresponds to a system that can only model cycles (and is therefore not very useful for most applications). Both learning and representing context are hurt by the same ergodicity phenomenon because the state to next-state transformation is linear, i.e., forward and backward propagation are symmetrical.
We discuss the practical impact of this ergodicity problem for incremental learning algorithms (such as EM and gradient ascent in likelihood)." }, { "figure_ref": [], "heading": "Learning Long-Term Dependencies: a Discrete Problem?", "publication_ref": [ "b20", "b13", "b26", "b20" ], "table_ref": [], "text": "To better understand the problem, it is interesting to look at a particular instance of the EM algorithm for HMMs, more specifically, at a form of the update rule for transition probabilities:

A_ij ← A_ij (∂L/∂A_ij) / Σ_j A_ij (∂L/∂A_ij),   (6)

where L is the likelihood of the training sequences. We might wonder if, starting from a positive stochastic matrix, the learning algorithm could learn the topology, i.e., replace some transition probabilities by zeroes. Starting from A_ij > 0 we could obtain a new A_ij = 0 only if ∂L/∂A_ij = 0, i.e., on a local maximum of the likelihood L. Thus the EM training algorithm will not exactly obtain zero probabilities. Transition probabilities might however approach 0. Furthermore, once A_ij has taken a near-zero value, it will tend to remain small. This suggests that prior knowledge (or initial values of the parameters), rather than learning, should be used, if possible, to determine the important elements of the topology, and for establishing the long-term relations between elements of the observed sequences.
It is also interesting to ask in which conditions we are guaranteed that there will not be any diffusion (of influence in the forward phase, and of credit in the backward phase of training). This requires that all of the eigenvalues have a norm that is 1. This can be achieved with periodic matrices C (of period d), which have d eigenvalues that are the d roots of 1 on the complex unit circle. To avoid any loss of information also requires that C^d = I be the identity, since any diagonal block of C^d with size more than 1 will bring a loss of information (because of the ergodicity of primitive matrices). This can be generalized to reducible matrices whose canonical form is composed of periodic blocks C_i with C_i^{d_i} = I.
The condition we are describing actually corresponds to a matrix with only 1's and 0's. For this type of matrix, the incidence matrix Ã_t of A_t is equal to the matrix A_t itself. Therefore, when Ã_t is fixed, the Markov chain is also homogeneous. It appears that many interesting computations cannot be achieved with such constraints (i.e., only allowing one or more cycles of the same period and a purely deterministic and homogeneous Markov chain). Furthermore, if the parameters of the system are the transition probabilities themselves (as in ordinary HMMs), such solutions correspond to a subset of the corners of the 0-1 hypercube in parameter space. Away from those solutions, learning is mostly influenced by short-term dependencies, because of the diffusion of credit. Furthermore, as seen in equation (6), algorithms like EM will tend to stay near a corner once it is approached (a small numerical sketch of this update follows below). This suggests that discrete optimization algorithms, rather than continuous local algorithms, may be more appropriate to explore the (legal) corners of this hypercube.
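Here is the small numerical sketch announced above (hypothetical gradient values, assuming NumPy); it shows that structural zeros of A are fixed points of update (6):

```python
# Sketch of the EM-style update (6): A_ij <- A_ij dL/dA_ij / sum_j A_ij dL/dA_ij.
# Zero transition probabilities are preserved: A_ij = 0 stays 0 under this update.
import numpy as np

def em_update(A, grad):
    """One multiplicative update of the transition matrix (eq. 6)."""
    unnorm = A * grad                            # elementwise A_ij * dL/dA_ij
    return unnorm / unnorm.sum(axis=1, keepdims=True)

A = np.array([[0.7, 0.3, 0.0],                   # note the structural zero
              [0.2, 0.5, 0.3],
              [0.4, 0.0, 0.6]])
grad = np.random.default_rng(2).random((3, 3)) + 0.1  # stand-in for dL/dA_ij > 0
print(em_update(A, grad))                        # zeros of A are preserved
```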
Examples of this approach are found in the area of grammar inference for natural language modeling (e.g., variable memory length Markov models, Ron et al., 1994, or constructive algorithms for learning context-free grammars, Lari & Young, 1990, Stolcke & Omohundro, 1993). The problem of diffusion studied here applies only to algorithms that use gradient information (such as the Baum-Welch and gradient-based algorithms) and a gradual modification of transition probabilities. It would be interesting to evaluate how such constructive and discrete search algorithms perform when properly solving the task requires learning to represent long-term context. On the basis of the results of this paper, however, we believe that in order to successfully learn long-term dependencies, such algorithms should look for very sparse topologies (or very deterministic models). Note that some of the already proposed approaches (Ron et al., 1994) are limited in the type of context that can be represented (e.g., no loops in the graph, and the constraint that all intermediate observations between times t_0 and t must be represented by the state variable in order to model the influence of y_{t_0} on y_t)." }, { "figure_ref": [ "fig_3" ], "heading": "Diffusion of Credit", "publication_ref": [ "b1", "b15", "b8", "b3" ], "table_ref": [], "text": "We have already found above that, except in the special case of 0 or 1 transition probabilities, the state variable becomes more and more independent of remote past states (and therefore of remote past inputs and outputs). Since this prevents robustly representing long-term context, learning such long-term context is also made more and more difficult for longer-term dependencies.
However, it is interesting to consider how the ergodicity of the transition probability matrix directly affects the forward-backward equations (2) (used to propagate context information forward and backward) in learning algorithms such as EM and (implicitly) gradient descent. In particular, let us consider the Dobrushin ergodicity coefficient of these matrix products. First, let V_t = A_{t+1} Λ_{t+1} ... A_T Λ_T; then

τ_1(V_t) ≤ τ_1(A_{t+1}) τ_1(Λ_{t+1}) ... τ_1(A_T) τ_1(Λ_T) = Π_{τ=t+1}^{T} τ_1(Λ_τ) τ_1(A_τ).   (7)

We have already seen that τ_1(A_t) < 1 unless the transition probabilities are all 0 or 1. Remember that the emission probability matrix Λ_t is diagonal. Applying the definition of τ_1 (equation 4) to a diagonal matrix D, we obtain

τ_1(D) = (1/2) max_{i≠j} (|D_ii − D_ij| + |D_ij − D_jj|) = (1/2) max_{i≠j} (|D_ii| + |D_jj|).

Therefore,

τ_1(Λ_t) = (1/2) max_{i≠j} (P(y_t | x_t = i, u_t) + P(y_t | x_t = j, u_t)),

which is the average of the two largest emission probabilities at this time step. Therefore, when the transition probabilities are not all 0 or 1, in the case of discrete outputs, τ_1(Λ_t) ≤ 1, and the ergodicity coefficient of the matrix product V_t in equation (7) converges to 0 as T − t increases. Note that this product gives the gradient of α_{i,T} with respect to α_{j,t} (from equation 1) and is used in the EM algorithm (Baum et al., 1970; Levinson et al., 1983) as well as in gradient-based algorithms (Bridle, 1990; Bengio, De Mori, Flammia, & Kompe, 1992; Bengio & Frasconi, 1995b).
For example, in the case of a learning criterion L, ∂L/∂α_t = V_t ∂L/∂α_T, where ∂L/∂α_t is the vector [∂L/∂α_{1,t} ... ∂L/∂α_{n,t}].
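The shrinkage bounded by equation (7) can be observed directly. The following toy NumPy sketch (random stochastic A_t and random diagonal Λ_t standing in for a trained model) row-normalizes V_t before applying τ_1, since V_t itself is not stochastic:

```python
# Sketch of credit diffusion: the rows of V_t = A_{t+1} L_{t+1} ... A_T L_T align
# as T - t grows, so Dobrushin's coefficient of the row-normalized product -> 0.
import numpy as np

def tau1(M):
    # Dobrushin's coefficient (eq. 4) for a (row-normalized) matrix M.
    n = len(M)
    return 0.5 * max(np.abs(M[i] - M[j]).sum() for i in range(n) for j in range(n))

rng = np.random.default_rng(3)
n, T = 4, 30
V = np.eye(n)
for t in range(T):
    A = rng.random((n, n)); A /= A.sum(axis=1, keepdims=True)  # stochastic A_t
    L = np.diag(rng.random(n))                                  # diagonal emission matrix
    V = V @ A @ L
    if t % 10 == 9:
        print(t + 1, tau1(V / V.sum(axis=1, keepdims=True)))    # tends to 0
```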
Since V_t is used to propagate credit backwards, its convergence to rank 1 means that long-term credit is gradually lost as it is propagated backwards: the gradient of the learning criterion with respect to all the past states becomes the same, i.e., ∂L/∂α_t converges to a multiple of [1, 1, ..., 1].
The continuous emissions case is more difficult because the density P(y_t | x_t = i, u_t) can locally be greater than one. The above result can still be obtained if we restrict our attention to the cases in which the product of the largest emission probabilities at each time step is bounded, which is the most likely in practice. In the case where it is not bounded, we conjecture that the same result can be obtained by considering scaled emission probability matrices, with a scaling factor s_t that is 1 when the emission probability is less than 1, and that is 1/max_i P(y_t | x_t = i, u_t) otherwise. Letting U_t = A_{t+1} s_{t+1} Λ_{t+1} ... A_T s_T Λ_T, although the overall gradient with respect to all the past states can grow very large (as T − t increases), the rank of U_t still converges to 1, and the vector β_t = ∂L/∂α_t also converges to a (possibly very large) multiple of [1, 1, ..., 1]. In practice we train HMMs with finite sequences. However, training will become more and more numerically ill-conditioned as one considers longer-term dependencies. Consider as in Figure 3 two events e_t (occurring at time t) and e_τ (occurring at a time τ much earlier than t), and suppose there are also "interesting" events occurring in between (i.e., events which should influence the state variable at time t in order to better model outputs at time t or later). Let us consider the overall influence of states at times s < t upon the likelihood of the outputs at time t. Because of the phenomenon of diffusion of credit, and because gradients are added together, the influence of intervening events (especially those occurring shortly before t) will be much stronger than the influence of e_τ. Furthermore, this problem gets geometrically worse as t increases." }, { "figure_ref": [ "fig_4" ], "heading": "Sparse Matrices and Prior Knowledge", "publication_ref": [ "b14", "b18", "b9" ], "table_ref": [], "text": "Clearly a positive matrix (corresponding to a fully-connected graph) is primitive. Thus in order to learn long-term dependencies, we would like to have many zeros in the matrix of transition probabilities (which reduces the problem of diffusion, as confirmed by the experiments described in Section 5 and illustrated in Figure 5). Unfortunately, this generally supposes prior knowledge of an appropriate connectivity graph. In practical applications of HMMs, for example to speech recognition (Lee, 1989; Rabiner, 1989) or protein secondary structure modeling (Chauvin & Baldi, 1995), prior knowledge is heavily used in setting up the connectivity graph. As illustrated in Figure 4, in speech recognition systems the meaning of individual states is usually fixed a priori except within phoneme models. The representation of long-term context is therefore not learned by the HMM. Transition probabilities between groups of states representing a phoneme in a certain context are "learned" from text or labeled speech data. However, in that case the "model" is a Markov model, not a hidden Markov model: learning consists in counting co-occurrences of events such as phonemes or words.
The hard problem of learning a representation of context is therefore avoided by choosing it on the basis of prior knowledge.
Another direction of research should be ways to incorporate some prior knowledge with learning from examples, preferably in a way that simplifies the problem of learning (new) long-term dependencies. Our current research in this direction is based on the old AI idea of using a multi-scale representation. The state variable is decomposed into several "sub-state" variables (whose Cartesian product is equal to the "full" state variable), each operating at a different time scale. The a priori assumption is that long-term context will be represented by "slow" state variables, which must be insensitive to the precise timing of events. This allows the propagation of context (and credit, for learning) over long durations through those higher-level state variables. To impose these multiple time scales, one can introduce constraints on the transition probabilities, such that the "slow" variables always have a small probability of changing at any time step. Another useful assumption is that the transition probabilities can be factored in terms of the conditional sub-state probabilities at each time scale, given the full state. We conjecture that the hypothesis behind this multi-scale structure is appropriate for most "natural" sequence learning tasks (such as those humans perform)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section we report some experimental results. Firstly, we study, from a numerical point of view, the convergence of products of stochastic matrices. Then we report an example of training in a problem in which the span of temporal dependencies is artificially controlled." }, { "figure_ref": [], "heading": "Diffusion: Numerical Simulations", "publication_ref": [], "table_ref": [], "text": "In this experiment we measure how (and if) different kinds of products of stochastic matrices converge, for example to a matrix with equal rows. We ran 4 simulations, each with an 8-state non-homogeneous Markov chain but with different constraints on the transition graph: 1) G is fully connected; 2) G is a left-to-right model (i.e., the incidence matrix Ã is upper triangular); 3) G is left-to-right but only one-state skips are allowed (i.e., Ã is upper bidiagonal); 4) the A_t are periodic with period 4. Results shown in Figures 5 and 6 confirm the convergence towards zero of the ergodicity coefficient at a rate that depends on the graph topology. The exception is, as expected, the case of periodic matrices. Note how the sparser graphs have a larger ergodicity coefficient, which should ease the learning of long-term dependencies. In Figure 6, we represent visually the convergence of products of fully connected matrices towards equal rows, in only 4 time steps. Each of the transition probability matrices A_t (t = 1, 2, 3, 4) was chosen randomly from a uniform distribution." }, { "figure_ref": [], "heading": "Training Experiments", "publication_ref": [], "table_ref": [], "text": "To evaluate how diffusion impairs training, a set of controlled experiments was performed, in which the training sequences were generated by a simple homogeneous HMM with long-term dependencies, depicted in Figure 7.
Two branches generate similar sequences except for the first and last symbol.
The extent of the long-term context is controlled by the self-transition probabilities of states 2 and 5, λ = P(x_t = 2 | x_{t-1} = 2) = P(x_t = 5 | x_{t-1} = 5). The span, or "half-life", is log(0.5)/log(λ), i.e., the span is the number of steps after which λ^span = 0.5. Following Bengio et al. (1994), data was generated for various spans of long-term dependencies (0.1 to 1000).
For each series of experiments, varying the span, 20 different training trials were run per span value, with 100 training sequences. Training was stopped either after a maximum number of epochs (200), or after the likelihood did not improve significantly, i.e., (l(t) − l(t−1))/|l(t)| < 10^-5, where l(t) is the logarithm of the likelihood of the training set at epoch t. A trial is considered successful (converged) when it yields a likelihood almost as good as or better than the likelihood of the generating HMM on the same data.
If the HMM is fully connected (except for the final absorbing state) and has just the right number of states, trials almost never converge to a good solution (1 in 160 did). Increasing the number of states and randomly putting zeroes in the transition matrix helps convergence. This confirms common intuition, although using more states than strictly necessary may result in worse generalization to new examples and, hence, may not be an advisable solution to convergence problems. The randomly connected HMMs had 3 times more states than the generating HMM and random connections were created with 20% probability. Figure 8 shows the average number of converged trials for these different types of HMM topologies. In all cases the number of successful trials rapidly drops to zero beyond some value of the span. In failed trials, the equivalent of states 3 and 6 of the generating HMM are usually confused, i.e., these solutions don't take the beginning of the sequence into account to represent the distribution of the symbols near the end of the sequence.
It is interesting to note that HMMs with many more states than necessary but sparse connectivity performed much better. Typically, a sparser graph corresponds to a larger coefficient of ergodicity (as exemplified in Figure 5), which allows long-term dependencies to be represented and learned more easily.
Another interesting observation is that in many cases, the training curve goes through one or more very flat plateaus. Such plateaus could be explained by the diffusion problem: the relative gradient with respect to some parameters is very small (thus the algorithm appears to be stuck). These plateaus can become a very serious problem when their slope approaches numerical precision or their length becomes unacceptable." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [ "b26", "b20", "b24", "b25", "b10" ], "table_ref": [], "text": "In previous work on recurrent networks (Bengio et al., 1994) we had found that, for these nonlinear dynamical parameterized systems, propagating credit over the long term was incompatible with storing information for the long term. Basically, with enough non-linearity (larger weights) to store long-term context robustly, gradients back-propagated through time vanish rapidly. In this paper, we have also found negative results concerning the representation and learning of long-term context, but they apply to Markovian models such as HMMs, IOHMMs or POMDPs. For these models, we found that both the representation and the learning of long-term context information are tied together.
Conclusion and Future Work

In previous work on recurrent networks (Bengio et al., 1994) we had found that, for these non-linear dynamical parameterized systems, propagating credit over the long term was incompatible with storing information for the long term. Basically, with enough non-linearity (larger weights) to store long-term context robustly, gradients back-propagated through time vanish rapidly. In this paper, we have also found negative results concerning the representation and learning of long-term context, but they apply to Markovian models such as HMMs, IOHMMs or POMDPs. For these models, we found that both the representation and the learning of long-term context information are tied together. In general, they are both hurt by the ergodicity of the transition probability matrix (or submatrices of it). However, when the transition probabilities are close to 1 and 0, information can be stored for the long term and credit can be propagated over the long term. Like our findings for recurrent networks, this suggests that the problem of learning long-term dependencies looks more like a discrete optimization problem. It appears difficult for a local learning algorithm such as EM or gradient descent to learn optimal transition probabilities near 1 or 0, i.e., to learn the topology, while taking into account long-term dependencies. This should encourage research on alternative (discrete) algorithms for discovering HMM topology (especially for representing long-term context), such as those proposed by Stolcke and Omohundro (1993) and Ron et al. (1994). Our results suggest that such algorithms should strive to discover sparse topologies, or almost deterministic models. The arguments presented here are essentially an application of established mathematical results on Markov chains to the problem of learning long-term dependencies in homogeneous and non-homogeneous HMMs. These arguments were also supported by experiments on artificial data, studying the phenomenon of diffusion of credit and the corresponding difficulty in training HMMs to learn long-term dependencies.

IOHMMs (Bengio & Frasconi, 1994, 1995b) and POMDPs (Sondik, 1973, 1978; Chrisman, 1992) are non-homogeneous variants of HMMs, i.e., the transition probabilities are a function of the input (for IOHMMs) or the action (for POMDPs) at each time t. The results of this paper suggest that such non-homogeneous Markovian models could be better suited (in some situations) to representing and learning long-term context. For such models, forcing transition probabilities to be near 0 or 1 still allows the system to model some interesting phenomena and perform useful computations. In practice, this means that the underlying dynamics of state evolution to be modeled should be deterministic. For example, a deterministic IOHMM can recognize strings from a deterministic grammar, taking into account long-term dependencies (Bengio & Frasconi, 1995b). For HMMs this constraint restricts the model to simple cycles, which are not very interesting.

Our analysis and numerical experiments also suggest that using many more hidden states than necessary, with a sparse connectivity, reduces the diffusion problem. Another related issue to be investigated is whether techniques of symbolic prior knowledge injection (see, e.g., Frasconi, Gori, Maggini, & Soda, 1995) can be exploited to choose good topologies, or to combine specific a-priori knowledge with learning from examples.

Based on the analysis presented here, we are also exploring another approach to learning long-term dependencies that consists in building a hierarchical representation of the state. This can be achieved by introducing several sub-state variables whose Cartesian product corresponds to the system state. Each of these sub-state variables can operate at a different time scale, thus allowing credit to propagate over long temporal spans for some of these variables.

Acknowledgments

Yoshua Bengio is also with the adaptive systems department at AT&T Bell Labs (Holmdel, NJ).
We would like to thank Léon Bottou for his many useful comments and suggestions, and the NSERC, FCAR, and IRIS Canadian funding agencies for support.
[ { "authors": "L R Bahl; P V De Souza; R L Mercer", "journal": "", "ref_id": "b0", "title": "Maximun mutual information estimation of hidden Markov model parameters for speech recognition", "year": "1986" }, { "authors": "L E Baum; T Petrie; G Soules; N Weiss", "journal": "Ann. Math. Statistic", "ref_id": "b1", "title": "A maximization technique occuring in the statistical analysis of probabilistic functions of Markov chains", "year": "1970" }, { "authors": "R Bellman", "journal": "McGraw-Hill", "ref_id": "b2", "title": "Introduction to Matrix Analysis", "year": "1974" }, { "authors": "Y Bengio; R De Mori; G Flammia; R Kompe", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b3", "title": "Global optimization of a neural network-hidden Markov model hybrid", "year": "1992" }, { "authors": "Y Bengio; P Frasconi", "journal": "Morgan Kaufmann", "ref_id": "b4", "title": "Credit assignment through time: Alternatives to backpropagation", "year": "1994" }, { "authors": "Y Bengio; P Frasconi", "journal": "MIT Press", "ref_id": "b5", "title": "Di usion of credit in Markovian models", "year": "1995" }, { "authors": "Y Bengio; P Frasconi", "journal": "MIT Press", "ref_id": "b6", "title": "An input/output HMM architecture", "year": "1995" }, { "authors": "Y Bengio; P Simard; P Frasconi", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b7", "title": "Learning long-term dependencies with gradient descent is di cult", "year": "1994" }, { "authors": "J Bridle", "journal": "Speech Communication", "ref_id": "b8", "title": "Alphanets: a recurrent `neural' network architecture with a hidden Markov model interpretation", "year": "1990" }, { "authors": "Y Chauvin; P Baldi", "journal": "Journal of Computational Biology", "ref_id": "b9", "title": "Hidden Markov models of the g-protein-coupled receptor family", "year": "1995" }, { "authors": "L Chrisman", "journal": "", "ref_id": "b10", "title": "Reinforcement learning with perceptual aliasing: The perceptual distinctions approach", "year": "1992" }, { "authors": "A P Dempster; N M Laird; D B Rubin", "journal": "Journal of Royal Statistical Society B", "ref_id": "b11", "title": "Maximum-likelihood from incomplete data via the EM algorithm", "year": "1977" }, { "authors": "P Frasconi; M Gori; M Maggini; G Soda", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b12", "title": "Uni ed integration of explicit rules and learning by example in recurrent networks", "year": "1994" }, { "authors": "K Lari; S Young", "journal": "Computer Speech and Language", "ref_id": "b13", "title": "The estimation of stochastic context-free grammars using the inside-outside algorithm", "year": "1990" }, { "authors": "K.-F Lee", "journal": "Kluwer Academic Publ", "ref_id": "b14", "title": "Automatic Speech Recognition: the development of the SPHINX system", "year": "1989" }, { "authors": "S E Levinson; L R Rabiner; M M Sondhi", "journal": "Bell System Technical Journal", "ref_id": "b15", "title": "An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition", "year": "1983" }, { "authors": "M C Mozer", "journal": "", "ref_id": "b16", "title": "The induction of multiscale temporal structure", "year": "1992" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b17", "title": "", "year": "" }, { "authors": "L R Rabiner", "journal": "", "ref_id": "b18", "title": "A tutorial on hidden Markov models and selected applications in speech recognition", "year": "1989" }, 
{ "authors": "R Rohwer", "journal": "ACM Sigart Bulleting", "ref_id": "b19", "title": "The time dimension of neural network models", "year": "1994" }, { "authors": "D Ron; Y Singer; N Tishby", "journal": "", "ref_id": "b20", "title": "The power of amnesia", "year": "1994" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b21", "title": "", "year": "" }, { "authors": "D Rumelhart; G Hinton; R Williams", "journal": "MIT Press", "ref_id": "b22", "title": "Learning internal representations by error propagation", "year": "1986" }, { "authors": "E Seneta", "journal": "Springer", "ref_id": "b23", "title": "Nonnegative Matrices and Markov Chains", "year": "1981" }, { "authors": "E Sondik", "journal": "Operations Research", "ref_id": "b24", "title": "The optimal control of partially observable Markov processes over the nite horizon", "year": "1973" }, { "authors": "E Sondik", "journal": "Operations Research", "ref_id": "b25", "title": "The optimal control of partially observable Markov processes over the in nite horizon: discounted case", "year": "1978" }, { "authors": "A Stolcke; S Omohundro", "journal": "", "ref_id": "b26", "title": "Hidden Markov model induction by Bayesian model merging", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b27", "title": "", "year": "" }, { "authors": "R Williams; D Zipser", "journal": "Neural Computation", "ref_id": "b28", "title": "A learning algorithm for continually running fully recurrent neural networks", "year": "1989" } ]
[ { "formula_coordinates": [ 4, 238.08, 623.58, 284.16, 18.64 ], "formula_id": "formula_0", "formula_text": "A (t 0 ;t) = A t 0 A t 0 +1 A t 1 A t (3)" }, { "formula_coordinates": [ 6, 244.56, 244.32, 123.12, 16 ], "formula_id": "formula_1", "formula_text": "determinant(A I) = 0:" }, { "formula_coordinates": [ 6, 197.52, 390.96, 217.2, 37.9 ], "formula_id": "formula_2", "formula_text": "j j = X j A ij v j v i X j jA ij j jv j j jv i j X j A ij 1:" }, { "formula_coordinates": [ 11, 260.88, 562.86, 256.64, 39.28 ], "formula_id": "formula_3", "formula_text": "A ij A ij @L @A ij P j A ij @L @A ij : (6" }, { "formula_coordinates": [ 11, 517.52, 575.4, 4.72, 15.2 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 13, 173.04, 167.28, 349.2, 37.56 ], "formula_id": "formula_5", "formula_text": "1 (V t ) 1 (A t ) 1 ( t ) 1 (A T ) 1 ( T ) = T Y =t 1 ( ) 1 (A )(7)" }, { "formula_coordinates": [ 13, 343.68, 555.12, 17.28, 16 ], "formula_id": "formula_6", "formula_text": "; 1]." } ]
Diffusion of Context and Credit Information in Markovian Models

Yoshua Bengio and Paolo Frasconi

Abstract. This paper studies the problem of ergodicity of transition probability matrices in Markovian models, such as hidden Markov models (HMMs), and how it makes very difficult the task of learning to represent long-term context for sequential data. This phenomenon hurts the forward propagation of long-term context information, as well as learning a hidden state representation to represent long-term context, which depends on propagating credit information backwards in time. Using results from Markov chain theory, we show that this problem of diffusion of context and credit is reduced when the transition probabilities approach 0 or 1, i.e., the transition probability matrices are sparse and the model essentially deterministic. The results found in this paper apply to learning approaches based on continuous optimization, such as gradient descent and the Baum-Welch algorithm.

1. For example, in the case of a recurrent neural network with recurrent weight matrix W and input vector u_t at time t, the next-state recurrence is f_t(x_{t−1}) = tanh(W x_{t−1} + u_t).
[ { "figure_caption": "2. To verify equation (3), just apply recursively the simple decomposition rule of probabilities P(a) = P b P(a j b)P(b).", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :Figure 2 :12Figure 1: Transition graph corresponding to the canonical decomposition. Large dotted circles represent subgroups of states associated to submatrices B i , C i , and Q in equation (5). The large arrows on the upper right area generically represent transitions from some states in Q to some states in B i and C i . Transitions among states in each subgroup are depicted inside the large circles.", "figure_data": "", "figure_id": "fig_2", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Typical problem with short-term dependencies hiding the long-term dependencies.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Learning of a representation of context in speech recognition HMMs is typically limited to what happens within a phoneme. Higher-level representations are chosen from prior knowledge and those parameters are often estimated from simple co-occurrence statistics.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :Figure 6 :Figure 7 :Figure 8 :5678Figure 5: Convergence of Dobrushin's coe cient (see De nition 6) in product of stochastic matrices associated to non-homogeneous Markov chains constrained by di erent transition graphs. The attening of the bottom curve is due to the limits of numerical precision in the computer experiments.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5678", "figure_type": "figure" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b10", "b11", "b9", "b19", "b3", "b13", "b17", "b12", "b0", "b20", "b2", "b7", "b8", "b16", "b23" ], "table_ref": [], "text": "Symmetric networks such as Hop eld networks, Boltzmann machines, mean-eld and Harmony networks are frequently investigated for use in optimization, constraint satisfaction and approximation of NP-hard problems (Hop eld, 1982(Hop eld, , 1984;;Hinton & Sejnowski, 1986;Peterson & Hartman, 1989;Smolensky, 1986;Brandt, Wang, Laub, & Mitra, 1988). These models are characterized by a symmetric matrix of weights and a quadratic energy function that should be minimized. Usually, each unit computes the gradient of the energy function and updates its own activation value so that the free energy decreases gradually. Convergence to a local minimum is guaranteed although in the worst case it is exponential in the number of units (Kasif, Banerjee, Delcher, & Sullivan, 1989;Papadimitriou, Sha er, & Yannakakis, 1990).\nIn many cases the problem at hand is formulated as a minimization problem and the best solutions (sometimes the only solutions) are the global minima (Hop eld & Tank, 1985;Ballard, Gardner, & Srinivas, 1986;Pinkas, 1991). The desired algorithm is therefore one c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved. that manages to reduce the impact of shallow local minima, thus improving the chances of nding a global minimum. Some models such as Boltzmann machines and Harmony nets use simulated annealing to escape from local minima. These models asymptotically converge to a global minimum, meaning that if the annealing schedule is slow enough, a global minimum is found. Nevertheless, such a schedule is hard to nd and therefore, in practice, these networks are not guaranteed to nd a global minimum even in exponential time.\nIn this paper we look at the topology of symmetric neural networks. We present an algorithm that nds a global minimum for acyclic networks and otherwise optimizes treelike subnetworks in linear time. We then extend it to general topologies by dividing the network into ctitious tree-like subnetworks using the cycle-cutset scheme.\nThe algorithm is based on the method of nonserial dynamic programming methods (Bertel e & Brioschi, 1972), which was also used for constraint optimization (Dechter, Dechter, & Pearl, 1990). There the task was divided between a precompilation into a tree structure via a tree-clustering algorithm and a run-time optimization over the tree.\nOur adaptation is connectionist in style; i.e., the algorithm can be stated as a simple, uniform activation function (Rumelhart, Hinton, & McClelland, 1986;Feldman & Ballard, 1982) and it can be executed in parallel architectures using synchronous or asynchronous scheduling policies. It does not assume the desired topology (acyclic) and performs no worse than the standard local algorithms for all topologies. In fact, it may be integrated with many of the standard algorithms in such a way that the new algorithm out-performs the standard algorithms by avoiding a certain class of local minima (along tree-like subnetworks).\nOur algorithm is also applicable to an emerging class of greedy algorithms called local repair algorithms. In local repair techniques, the problem at hand is usually formulated as a minimization of a function that measures the distance between the current state and the goal state (the solution). 
The algorithm picks a setting for the variables and then repeatedly changes those variables that cause the maximal decrease in the distance function. For example, a commonly used distance function for constraint satisfaction problems is the number of violated constraints. A local repair algorithm may be viewed as an energy minimization network where the distance function plays the role of the energy. Local repair algorithms are sequential though, and they use a greedy scheduling policy: the next node to be activated is the one leading to the largest change in the distance (i.e., energy). Recently, such local repair algorithms were successfully used on various large-scale hard problems such as 3-SAT, n-queens, scheduling and constraint satisfaction (Minton, Johnson, & Phillips, 1990; Selman, Levesque, & Mitchell, 1992). Since local repair algorithms may be viewed as sequential variations on the energy minimization paradigm, it is reasonable to assume that improvements in energy minimization will also be applicable to local-repair algorithms.

On the negative side, we show that in the presence of cycles, no uniform algorithm exists that guarantees optimality even under a sequential asynchronous scheduler. An asynchronous scheduler can activate only one unit at a time while a synchronous scheduler can activate any number of units in a single time step. In addition, no uniform algorithm exists to optimize even acyclic networks when the scheduler is synchronous. Those negative results involve conditions on the parallel model of execution and therefore are applicable only to the parallel versions of local repair.

The paper is organized as follows: Section 2 discusses connectionist energy minimization. Section 3 presents the new algorithm activate and gives an example where it out-performs the standard local algorithms. Section 4 discusses negative results, convergence under various schedulers and self-stabilization. Section 5 extends the approach to general topologies through algorithm activate-with-cutset and suggests future research. Section 6 summarizes and discusses applications.

Connectionist Energy Minimization

We are given a quadratic energy function of the form

E(X_1, ..., X_n) = − Σ_{i<j} w_{i,j} X_i X_j − Σ_i θ_i X_i.

Each of the variables X_i may have a value of zero or one, called the activation value, and the task is to find a zero/one assignment to the variables X_1, ..., X_n that minimizes the energy function. To avoid confusion with signs, we will consider the equivalent problem of maximizing the goodness function:

G(X_1, ..., X_n) = −E(X_1, ..., X_n) = Σ_{i<j} w_{i,j} X_i X_j + Σ_i θ_i X_i    (1)

In connectionist approaches, we look at the network that is generated by assigning a node (i) for every variable (X_i) in the function, and by creating a weighted arc (with weight w_{i,j}) between node i and node j for every term w_{i,j} X_i X_j. Similarly, a bias θ_i is given to unit i if the term θ_i X_i is in the function. For example, Figure 1 shows the network that corresponds to the goodness function G(X_1, ..., X_5) = 3X_2X_3 − X_1X_3 + 2X_3X_4 − 2X_4X_5 − 3X_3 − X_2 + 2X_1 + X_5. Each of the nodes is assigned a processing unit and the network collectively searches for an assignment that maximizes the goodness. The algorithm that is repeatedly executed in each unit/node is called the activation function.
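The construction can be made concrete with a few lines of code. The sketch below (ours, not the paper's) stores the network of Figure 1 as a weight dictionary plus biases, as read from the example above, and evaluates the goodness of 0/1 assignments; the exhaustive search stands in for the search the network performs collectively.

```python
from itertools import product

# Arcs w_ij and biases theta_i of the Figure 1 example, as read above.
w = {(2, 3): 3, (1, 3): -1, (3, 4): 2, (4, 5): -2}
theta = {1: 2, 2: -1, 3: -3, 4: 0, 5: 1}

def goodness(x):
    """x maps node index -> 0/1 activation value."""
    return (sum(wij * x[i] * x[j] for (i, j), wij in w.items())
            + sum(t * x[i] for i, t in theta.items()))

# Exhaustive search over the 2**5 assignments; the network performs this
# search collectively and locally.
best = max((dict(zip(theta, bits)) for bits in product((0, 1), repeat=5)),
           key=goodness)
print(best, goodness(best))   # {1: 1, 2: 0, 3: 0, 4: 0, 5: 1} with goodness 3
```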
An algorithm is uniform if it is executed by all the units.

Figure 1: An example network.

We give examples of two of the most popular activation functions for connectionist energy minimization: the discrete Hopfield network (Hopfield, 1982) and the Boltzmann machine (Hinton & Sejnowski, 1986). In the discrete Hopfield model, each unit computes its activation value using the formula:

X_i = 1 if Σ_j w_{i,j} X_j ≥ −θ_i, and X_i = 0 otherwise.

In Boltzmann machines the determination of the activation value is stochastic, and the probability of setting the activation value of a unit to one is:

P(X_i = 1) = 1 / (1 + e^{−(Σ_j w_{i,j} X_j + θ_i)/T}),

where T is the annealing temperature. Both approaches may be integrated with our topology-based algorithm; i.e., nodes that cannot be identified as parts of a tree-like topology use one of the standard local algorithms.
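Both standard activation functions are straightforward to state in code. The following sketch (ours, reusing the dictionary representation of the previous fragment) implements the discrete Hopfield rule and the Boltzmann rule; running sequential Hopfield sweeps from the all-zero state on the Figure 1 network illustrates the local-minimum problem that motivates this paper.

```python
import math
import random

w = {(2, 3): 3, (1, 3): -1, (3, 4): 2, (4, 5): -2}
theta = {1: 2, 2: -1, 3: -3, 4: 0, 5: 1}

def net_input(i, x):
    """theta_i plus the weighted sum of i's neighbors' activations."""
    s = theta[i]
    for (a, b), wij in w.items():
        if a == i:
            s += wij * x[b]
        elif b == i:
            s += wij * x[a]
    return s

def hopfield_update(i, x):
    # X_i = 1  iff  sum_j w_ij X_j >= -theta_i
    x[i] = 1 if net_input(i, x) >= 0 else 0

def boltzmann_update(i, x, T):
    # P(X_i = 1) = 1 / (1 + exp(-(sum_j w_ij X_j + theta_i) / T))
    p = 1.0 / (1.0 + math.exp(-net_input(i, x) / T))
    x[i] = 1 if random.random() < p else 0

x = {i: 0 for i in theta}
for _ in range(5):                 # sequential (central-scheduler) sweeps
    for i in x:
        hopfield_update(i, x)
print(x)   # settles at {1: 1, 2: 0, 3: 0, 4: 1, 5: 0}: goodness 2 < 3
```

Starting from zeros, the sweeps settle at goodness 2, below the global optimum of 3; this is the kind of local maximum on a tree-shaped network that the algorithm introduced next avoids.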
The Algorithm

We assume that the model of communication between neighboring nodes is a shared-memory, multi-reader, single-writer model. We also assume (for now) that scheduling is done with a central scheduler (asynchronous) and that execution is fair. In a shared-memory, multi-reader, single-writer model, each unit has a shared register called the activation register. A unit may read the content of the registers of all its neighbors but write only its own. Central scheduler means that the units are activated one at a time in an arbitrary order. (Standard algorithms need to assume the same condition in order to guarantee convergence to a local minimum (Hopfield, 1982); the condition can be relaxed by requiring only that adjacent nodes are not activated at the same time, i.e., mutual exclusion.) An execution is said to be fair if every unit is activated infinitely often. We do not require self-stabilization initially; namely, algorithms may have an initialization step and can rely on initial values. Later we will relax some of the assumptions above and examine the conditions under which the algorithm is also self-stabilizing.

The algorithm identifies parts of the network that have no cycles (tree-like subnetworks) and optimizes the free energy on these subnetworks. Once a tree is identified, it is optimized using a dynamic programming method that propagates values from leaves to a root and back.

Let us assume first that the network is acyclic; any such network may be directed into a rooted tree. The algorithm is based on the observation that given an activation value (0/1) for a node in a tree, the optimal assignments for all its adjacent nodes are independent of each other. In particular, the optimal assignments to the node's descendants are independent of the assignments to its ancestors. Therefore, each node i in the tree may compute two values: G¹_i is the maximal goodness contribution of the subtree rooted at i, including the connection to i's parent, when the parent's activation is one. Similarly, G⁰_i is the maximal goodness of the subtree, including the connection to i's parent, when the parent's activation value is zero. The acyclicity property will allow us to compute each node's G¹_i and G⁰_i as a simple function of its children's values, implemented as a propagation algorithm initiated by the leaves.

Knowing the activation value of its parent and the values G⁰_j, G¹_j of all its children, a node can compute the maximal goodness of its subtree. When the information reaches the root, it can assign a value (0/1) that maximizes the goodness of the whole network. The assignment information now propagates toward the leaves. Knowing the activation value of its parent, a node can compute the preferred activation value for itself. At termination (at stable state), the tree is optimized. The algorithm has 3 basic steps:

1. Directing a tree: knowledge is propagated from leaves toward the center so that after a linear number of steps, every unit in the tree knows its parent and children.

2. Propagation of goodness values: the values G¹_i and G⁰_i are propagated from leaves to the root. At termination, every node knows the maximal goodness of its subtree and the appropriate activation value it should assign given that of its parent. In particular, the root can now decide its own activation value so as to maximize the whole tree.

3. Propagation of activation values: starting with the root, each node in turn determines its activation value. After O(depth of tree) steps, the units are in a stable state which globally maximizes the goodness.

Each unit's activation register consists of the following fields: X_i, the activation value; G⁰_i and G¹_i, the maximal goodness values; and (P¹_i, ..., P^j_i), a bit for each of the j neighbors of i that indicates i's parent.

Directing a tree

The goal of this algorithm is to inform every node of its role in the network and its child-parent relationships. Nodes with a single neighbor identify themselves as leaves first and then identify their neighbor as a parent (point to it). A node identifies itself as a root when all neighbors point toward it. When all of a node's neighbors but one point toward it, the node selects the remaining one as a parent. Finally, a node that has at least two neighbors not pointing toward it identifies itself as being outside the tree.

The problem of directing a tree is related to the problem of selecting a leader in a distributed network, and of selecting a center in a tree (Korach, Rotem, & Santoro, 1984). Our problem differs (from general leader selection problems) in that the network is a tree. In addition, we require our algorithms to be self-stabilizing. A related self-stabilizing algorithm appeared earlier (Collin, Dechter, & Katz, 1991). That algorithm is based on finding a center of the tree as the root node and therefore creates more balanced trees. The advantage of the algorithm presented here is that it is space efficient, requiring only O(log d) space, where d is the maximum number of neighbors a node has. In contrast, the algorithm in Collin et al. requires O(log n), n being the network size.

In the algorithm we present, each unit uses one bit per neighbor to keep the pointing information: P^j_i = 1 indicates that node i sees its jth neighbor as its parent. By looking at P^i_j, node i knows whether j is pointing to it. Identifying tree-like subnetworks in a general network may be done by the algorithm in Figure 2:

Tree Directing (for unit i):
1. Initialization: If first time, then for all neighbors j: P^j_i = 0. /* Start with clear pointers (step is not needed in acyclic nets or with almost uniform versions) */
2. If there is only a single neighbor (j) and P^i_j = 0, then P^j_i = 1. /* A leaf selects its neighbor as parent if that neighbor doesn't point to it */
3. Else, if one and only one neighbor (k) does not point to i (P^i_k = 0), then P^k_i = 1, and for the rest of the neighbors: P^j_i = 0. /* k is the parent */
4. Else, for all neighbors j: P^j_i = 0. /* Node is either a root or outside the tree */

Figure 2: Tree directing algorithm.

In Figure 3a, we see an acyclic network after the tree-directing phase. The numbers on the edges represent the values of the P^j_i bits. In Figure 3b, a tree-like subnetwork is identified inside a cyclic network. Note that node 5 is not a root since not all its neighbors are pointing toward it.
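A sequential rendering of the tree-directing rule may help. The sketch is ours, and the adjacency below is a hypothetical five-node acyclic network, not the actual Figure 3a topology.

```python
def direct_step(i, neighbors, P):
    """One application of the tree-directing rule of Figure 2 at unit i."""
    ns = neighbors[i]
    not_pointing = [j for j in ns if P[j].get(i, 0) == 0]
    if len(ns) == 1 and not_pointing:     # leaf: adopt the single neighbor
        P[i] = {ns[0]: 1}
    elif len(not_pointing) == 1:          # exactly one candidate parent
        P[i] = {not_pointing[0]: 1}
    else:                                 # root, or outside any tree
        P[i] = {}

# A hypothetical acyclic adjacency (not the actual Figure 3a network):
neighbors = {1: [3], 2: [3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
P = {i: {} for i in neighbors}
for _ in range(len(neighbors)):           # enough sweeps to stabilize
    for i in neighbors:
        direct_step(i, neighbors, P)
print(P)   # every neighbor of node 5 points at it, so node 5 is the root
```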
Propagation of goodness values

In this phase every node i computes its goodness values G¹_i and G⁰_i by propagating these two values from the leaves to the root (see Figure 4). Given a node X_i, its parent X_k, and its children children(i) in the tree, it can be shown, based on the goodness function (1), that the goodness values obey the following recurrence:

G^{X_k}_i = max_{X_i ∈ {0,1}} { Σ_{j ∈ children(i)} G^{X_i}_j + w_{i,k} X_i X_k + θ_i X_i }

Consequently a nonleaf node i computes its goodness values using the goodness values of its children as follows. If X_k = 0, then i must decide between setting X_i = 0, obtaining a goodness of Σ_j G⁰_j, or setting X_i = 1, obtaining a goodness of Σ_j G¹_j + θ_i. This yields:

G⁰_i = max{ Σ_{j ∈ children(i)} G⁰_j , Σ_{j ∈ children(i)} G¹_j + θ_i }

Similarly, when X_k = 1, the choice between X_i = 0 and X_i = 1 yields:

G¹_i = max{ Σ_{j ∈ children(i)} G⁰_j , Σ_{j ∈ children(i)} G¹_j + w_{i,k} + θ_i }

The initial goodness values for leaf nodes can be obtained from the above (no children). Thus, G⁰_i = max{0, θ_i} and G¹_i = max{0, w_{i,k} + θ_i}.

[Figure 4: An example tree network; part (a) shows the goodness values G⁰_i and G¹_i computed for each node, and part (b) shows the resulting activation values.]

For example, if unit 3 in Figure 4 is zero, then the maximal goodness contributed by node 1 is G⁰₁ = max_{X₁∈{0,1}}{2X₁} = 2 and is obtained at X₁ = 1. Unit 2 (when X₃ = 0) contributes G⁰₂ = max_{X₂∈{0,1}}{−X₂} = 0, obtained at X₂ = 0, while G¹₂ = max_{X₂∈{0,1}}{3X₂ − X₂} = 2 is obtained at X₂ = 1. As for nonleaf nodes, if X₄ = 0, then when X₃ = 0 the goodness contribution will be Σ_k G⁰_k = 2 + 0 = 2, while if X₃ = 1 the contribution will be −3 + Σ_k G¹_k = −3 + 1 + 2 = 0. The maximal contribution G⁰₃ = 2 is achieved at X₃ = 0.

Goodness values may be computed once for every node, when its children's goodness values are ready; however, for self-stabilization (to be discussed later) and for simplicity, nodes may compute their goodness values repeatedly and without synchronization with their children.
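The recurrence is compact enough to execute directly. The sketch below (ours) computes G⁰_i and G¹_i bottom-up on the Figure 4 tree as reconstructed above, and reproduces the values derived in the text.

```python
# Weights, biases and tree structure of the Figure 4 example (root: node 4).
w = {(1, 3): -1, (2, 3): 3, (3, 4): 2, (4, 5): -2}
theta = {1: 2, 2: -1, 3: -3, 4: 0, 5: 1}
children = {1: [], 2: [], 3: [1, 2], 5: []}
parent = {1: 3, 2: 3, 3: 4, 5: 4}

def arc(i, k):
    return w.get((i, k), w.get((k, i), 0))

def goodness_values(i):
    """G[v]: best goodness of i's subtree, incl. the arc to a parent of value v."""
    G = {}
    for pv in (0, 1):                     # the parent's activation value
        options = []
        for xi in (0, 1):                 # our own candidate value
            total = theta[i] * xi + arc(i, parent[i]) * xi * pv
            total += sum(goodness_values(j)[xi] for j in children[i])
            options.append(total)
        G[pv] = max(options)
    return G

for i in (1, 2, 3, 5):
    print(i, goodness_values(i))   # node 3 -> {0: 2, 1: 2}, matching the text
```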
Propagation of activation values

Once a node is assigned an activation value, all its children can activate themselves so as to maximize the goodness of the subtrees they control. When such a value is chosen for a node, its children can evaluate their activation values, and the process continues until the whole tree is assigned.

There are two kinds of nodes that may start the process: a root, which will choose an activation value to optimize the entire tree, and a non-tree node, which uses a standard activation function.

When a root X_i is identified, if the maximal goodness is Σ_j G⁰_j, it chooses the value "0"; if the maximal goodness is Σ_j G¹_j + θ_i, it chooses "1." In summary, the root chooses its value according to:

X_i = 1 if Σ_j G¹_j + θ_i ≥ Σ_j G⁰_j, and X_i = 0 otherwise.

In Figure 4 for example, G¹₅ + G¹₃ + θ₄ = 2 < G⁰₅ + G⁰₃ = 3, and therefore X₄ = 0. An internal node whose parent is k chooses an activation value that maximizes Σ_j G^{X_i}_j + w_{i,k} X_i X_k + θ_i X_i. The choice, therefore, is between Σ_j G⁰_j (when X_i = 0) and Σ_j G¹_j + w_{i,k} X_k + θ_i (when X_i = 1), yielding:

X_i = 1 if Σ_j G¹_j + w_{i,k} X_k + θ_i ≥ Σ_j G⁰_j, and X_i = 0 otherwise.

As a special case, a leaf i chooses X_i = 1 if w_{i,k} X_k ≥ −θ_i, which is exactly the discrete Hopfield activation function for a node with a single neighbor. For example, in Figure 4, X₅ = 1 since w₄,₅X₄ = 0 ≥ −θ₅ = −1, and X₃ = 0 since G¹₁ + G¹₂ + 2X₄ + θ₃ = 1 + 2 + 0 − 3 = 0 < G⁰₂ + G⁰₁ = 2. Figure 4b shows the activation values obtained by propagating them from the root to the leaves.

A complete activation function

Interleaving the three algorithms described earlier achieves the goal of identifying tree-like subnetworks and maximizing their goodness. In this subsection we present the complete algorithm, combining the three phases while simplifying the computation. The algorithm is integrated with the discrete Hopfield activation function, demonstrating how similar the formulas are.

The steps of the algorithm can be interleaved freely; i.e., a scheduler might execute each step for all the nodes or all steps for any given node (or combinations). These steps are computed repeatedly, with no synchronization with the node's neighbors. Algorithm activate, executed by unit i (where j denotes a non-parent neighbor of i and k denotes the parent of i), is given in Figure 5:

Algorithm activate: Optimizing on Tree-like Subnetworks (unit i):
1. Initialization: If first time, then (∀j) P^j_i = 0. /* Clear pointers (cyclic nets) */
2. Tree directing: If there exists a single neighbor k such that P^i_k = 0, then P^k_i = 1 and for all other neighbors j, P^j_i = 0; else, for all neighbors, P^j_i = 0.
3. Computing goodness values:
G⁰_i = max{ Σ_{j∈neighbors(i)} G⁰_j P^i_j , Σ_{j∈neighbors(i)} G¹_j P^i_j + θ_i };
G¹_i = max{ Σ_{j∈neighbors(i)} G⁰_j P^i_j , Σ_{j∈neighbors(i)} (G¹_j P^i_j + w_{i,j} P^j_i) + θ_i }.
4. Assigning activation values:
If at least two neighbors are not pointing to i, then /* not in tree: use Hopfield */ X_i = 1 if Σ_j w_{i,j} X_j ≥ −θ_i, and X_i = 0 otherwise;
else /* node in a tree (including root and leaves) */ X_i = 1 if Σ_j ((G¹_j − G⁰_j) P^i_j + w_{i,j} X_j P^j_i) ≥ −θ_i, and X_i = 0 otherwise.

Figure 5: Algorithm activate for unit i.

Algorithm activate improves on an arbitrary local-search connectionist algorithm in the following sense:

Theorem 3.1 If a₁ is a local minimum generated by activate and a₂ is a local minimum generated by a local-search method (e.g., Hopfield), and if a₁ and a₂ have the same activation values on non-tree nodes, then G(a₁) ≥ G(a₂).

Proof: Follows immediately from the fact that activate generates a global minimum on tree-subnetworks. □

Additional properties of the algorithm will be discussed in Section 4.
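Step 4 of activate condenses the root, internal-node, leaf and Hopfield cases into one formula; the following sketch (ours, using the dictionary conventions of the earlier fragments) spells that step out.

```python
def assign_value(i, neighbors, P, w, theta, x, G0, G1):
    """Step 4 of algorithm activate (Figure 5) for unit i."""
    def arc(a, b):
        return w.get((a, b), w.get((b, a), 0))
    pointing_to_me = [j for j in neighbors[i] if P[j].get(i, 0) == 1]
    if len(neighbors[i]) - len(pointing_to_me) >= 2:
        # not in a tree: the plain discrete Hopfield rule
        s = sum(arc(i, j) * x[j] for j in neighbors[i])
    else:
        # in a tree: each child j contributes G1_j - G0_j,
        # the parent contributes w_ij * X_j
        s = 0.0
        for j in neighbors[i]:
            if P[j].get(i, 0) == 1:       # j is a child of i
                s += G1[j] - G0[j]
            elif P[i].get(j, 0) == 1:     # j is i's parent
                s += arc(i, j) * x[j]
    x[i] = 1 if s >= -theta[i] else 0
```

A root is covered by the tree branch with no parent term, and a leaf by the tree branch with no child terms, recovering the formulas derived above.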
An example

The example illustrated in Figure 6 demonstrates a case where a local minimum of the standard algorithms is avoided. Standard algorithms may enter such a local minimum and stay in a stable state that is clearly wrong.

The example is a variation on a Harmony network example (Smolensky, 1986, page 259; McClelland, Rumelhart, & Hinton, 1986, page 22). The task of the network is to identify words from low-level line segments. Certain patterns of line segments excite units that represent characters, and certain patterns of characters excite units that represent words. The line strokes used to draw the characters are the input units: L1, ..., L5. The units "N," "S," "A" and "T" represent characters. The units "able," "nose," "time" and "cart" represent words, and Hn, Hs, Ha, Ht, H1, ..., H4 are hidden units required by the Harmony model. For example, given the line segments of the character S, unit L4 is activated (input), and this causes units Hs and "S" to be activated. Since "NOSE" is the only word that contains the character "S," both H2 and the unit "nose" are also activated and the word "NOSE" is identified. The network has feedback cycles (symmetric weights) so that ambiguity among characters or line-segments may be resolved as a result of identifying a word. For example, assume that the line segments required to recognize the word "NOSE" appear, but the character "N" in the input is blurred and therefore the setting of unit L2 is ambiguous. Given the rest of the line segments (e.g., those of the character "S"), the network identifies the word "NOSE" and activates units "nose" and H2. This will cause unit "N" and all of its line segments to be activated. Thus, the ambiguity of L2 is resolved.

The network is designed to have a global minimum when L2, Hn, "N," H2 and "nose" are all activated. However, standard connectionist algorithms may fall into a local minimum when all these units are zero, generating a goodness of 5 − 4 = 1. The correct setting (global minimum) is found by our tree-optimization algorithm (with goodness 3−1+3−1+3−1+5−1−4+3−1+5 = 13). The thick arcs in the upper network of Figure 6 mark the arcs of a tree-like subnetwork. This tree-like subnetwork is drawn with pointers and weights in the lower part of the figure. Node "S" is not part of the tree and its activation value is set to one because the line-segments of "S" are activated. Once "S" is set, the units along the tree are optimized (by setting them all to one) and the local minimum is avoided.

Feasibility, Convergence, and Self-Stabilization

So far we have shown how to enhance the performance of connectionist energy minimization networks without losing much of the simplicity of the standard approaches. The simple algorithm presented is limited in three ways, however. First, it assumes unrealistically that a central scheduler is used, i.e., a scheduler that activates the units one after the other asynchronously. (The same results are obtained if the steps of the algorithm execute as one atomic operation or if neighbors are mutually excluded.)
We would like the network to work correctly under a distributed (synchronous) scheduler, where any subset of units may be activated for execution at the same time, synchronously. Second, the algorithm guarantees convergence to global optima only for tree-like subnetworks. We would like to find an algorithm that converges to correct solutions even if cycles are introduced. Finally, we would like the algorithm to be self-stabilizing: it should converge to a legal, stable state given enough time, even after noisy fluctuations that cause the units to execute arbitrary program states and the registers to have arbitrary content. Formally, an algorithm is self-stabilizing if in any fair execution, starting from any input configuration and any program state (of the units), the system reaches a valid stable configuration.

In this section, we illustrate two negative results regarding the first two problems; i.e., that it is not feasible to build uniform algorithms for trees under a distributed scheduler, and that such an algorithm is not feasible for cyclic networks even under a central scheduler. We then show how to weaken the conditions so that convergence is guaranteed (for tree-like subnetworks) in realistic environments and self-stabilization is obtained.

A scheduler can generate any specific schedule consistent with its definition. Thus, the central scheduler can be viewed as a specific case of the distributed scheduler. We say that a problem is impossible for a scheduler if for every possible algorithm there exists a fair execution generated by such a scheduler that does not find a solution to the problem. Since all the specific schedules generated by a central scheduler can also be generated by a distributed scheduler, what is impossible for a central scheduler is also impossible for a distributed scheduler.

Negative results for uniform algorithms

Following Dijkstra (1974), negative results were presented regarding the feasibility of distributed constraint satisfaction (Collin et al., 1991). Since constraint satisfaction problems can be formulated as energy minimization problems, these feasibility results apply also to computing the global minimum of energy functions. For completeness we now adapt those results for a connectionist computation of energy minimization.

Theorem 4.1 No deterministic uniform algorithm exists that guarantees a global minimum under a distributed scheduler, even for simple chain-like trees, assuming that the algorithm needs to be insensitive to initial conditions.

Proof (by counterexample): Consider the network of Figure 7. There are two possible global minima: (11...1101...1) and (11...1011...1) (where the four centered digits are assigned to units i−1, i, i+1, i+2). If the network is initialized such that all units have the same register values, and all units start with the same program state, then there exists a fair execution under a distributed scheduler such that in every step all units are activated. The units left of the center (1, 2, 3, ..., i) "see" the same input as the units right of the center (2i, 2i−1, 2i−2, ..., i+1), respectively. Because of the uniformity and the determinism, the units in each pair (i, i+1), (i−1, i+2), ..., (1, 2i) must transfer to the same program state and produce the same output on the activation register.
Thus, after every step of that execution, units i and i+1 will always have the same activation value, and a global minimum (where the two units have different values) will never be obtained. □

This negative result should not discourage us in practice, since it relies on an obscure infinite sequence of executions which is unlikely to occur under a random scheduler. Despite this negative result, one can show that algorithm activate will optimize the energy of tree-like subnetworks under a distributed scheduler if at least one of the following cases holds (see the next section for details):

1. if step 2 of algorithm activate in Section 3.4 is atomic, i.e., no other neighbor may execute step 2 at the same time;
2. if for every node i and every neighbor j, node i is executed without j infinitely often (fair exclusion);
3. if one node is unique and acts as a root, that is, does not execute step 2 (an almost uniform algorithm);
4. if the network is cyclic (one node will be acting as a root).

Another negative result, similar to (Collin et al., 1991), is given in the following theorem.

Theorem 4.2 If the network is cyclic, no deterministic uniform algorithm exists that guarantees a global minimum, even under a central scheduler, assuming that the algorithm needs to be insensitive to initial conditions.

Proof (by counterexample): This may be proved even for cyclic networks as simple as rings. In Figure 8 we see a ring-like network whose global minima are (010101) and (101010). Consider a fair execution under a central scheduler that activates the units 1, 4, 2, 5, 3, 6 in order and repeats this order indefinitely. Starting with the same program state and same inputs, the two units in every pair (1,4), (2,5), (3,6) "see" the same input; therefore they have the same output and transfer to the same program state. As a result, these units never output different values and a global minimum is not obtained. □

Note that any tree-like subnetwork of a cyclic network will be optimized even under a distributed scheduler (since nodes that are part of a cycle are identified as roots and the algorithm acts as an almost uniform algorithm).

Convergence and self-stabilization

In the previous subsection we proved that under a pure distributed scheduler there is no hope for a uniform network algorithm. In addition, we can easily show that the algorithm is not self-stabilizing when cycles are introduced. For example, consider the configuration of the pointers in the ring of Figure 9: it is in a stable state although clearly not a valid tree.

In this subsection we weaken the requirements, allowing our algorithm to converge to correct solutions and to be self-stabilizing under realistically weaker distributed schedulers. We will not use the notion of a pure distributed scheduler; instead, we will ask our distributed scheduler to have the fair exclusion property. Intuitively, a distributed scheduler with fair exclusion will no longer generate infinite sequences of the pathological execution schedules used in the previous subsection to prove the negative results. Instead, it is guaranteed that from time to time, every two neighboring units will not execute together.

As an alternative, we might weaken the requirement on the uniformity of the algorithm (that all nodes execute the same procedure). An almost uniform algorithm is one in which all the nodes perform the same procedure except one node that is marked unique.
In the almost uniform version of algorithm activate, the root of the tree is marked and executes the procedure of Section 3.4 as if all its neighbors are pointing to it; i.e., it constantly sets P^j_i to zero.

Theorem 4.3 Algorithm activate of Section 3.4 has the following properties: 1. It converges to a global minimum and is self-stabilizing in networks with tree-like topologies under a distributed scheduler with fair exclusion. 2. The algorithm also converges in tree-like subnetworks (but is not self-stabilizing) when the network has cycles. 3. It is self-stabilizing for any topology if an almost uniform algorithm is applied, even under a pure distributed scheduler.

For the proof see the appendix.

Extensions to Arbitrary Networks

The algorithm we presented in Section 3 is limited in that it is restricted to nodes of tree-like subnetworks only. Nodes that are part of a cycle execute the traditional activation function, which may lead to the known drawbacks of local energy minima and slow convergence. In this section we discuss generalizations of our algorithm to nodes that are part of cycles, which will work well for near-tree networks. A full account of this extension is deferred to future work.

A well-known scheme for extending tree algorithms to non-tree networks is cycle-cutset decomposition (Dechter, 1990), used in Bayes networks and constraint networks. Cycle-cutset decomposition is based on the fact that an instantiated variable cuts the flow of information on any path on which it lies, and therefore it changes the effective connectivity of the network. Consequently, when the group of instantiated variables cuts all cycles in the graph (e.g., a cycle-cutset), the remaining network can be viewed as cycle-free and can be solved by a tree algorithm. The complexity of the cycle-cutset method can be bounded exponentially in the size of the cutset in each connected component of the graph (Dechter, 1992). We next show how to improve our energy minimization algorithm activate using the cycle-cutset idea.

Recall that the energy minimization task is to find a zero/one assignment to the variables X = {X₁, ..., Xₙ} that maximizes the goodness function. Define Gmax(X₁, ..., Xₙ) = max_{X₁,...,Xₙ} G(X₁, ..., Xₙ). The task is to find an activation level X₁, ..., Xₙ satisfying

Gmax(X₁, ..., Xₙ) = max_{X₁,...,Xₙ} ( Σ_{i<j} w_{i,j} X_i X_j + Σ_i θ_i X_i ).    (2)

Let Y = {Y₁, ..., Y_k} be a subset of the variables X = {X₁, ..., Xₙ}. The maximum can be computed in two steps: first compute the maximum goodness conditioned on a fixed assignment Y = y, then maximize the resulting function over all possible assignments to Y. Let Gmax(X | Y = y) be the maximum goodness value of G conditioned on Y = y. Clearly,

Gmax(X) = max_{Y=y} Gmax(X | Y = y) = max_{Y=y} max_{{X=x | x_Y = y}} { G(X) },

where x_Y is the zero/one value assignment in the instantiation x restricted to the variable subset Y. If the variables in Y form a cycle-cutset, then the conditional maxima Gmax(X | Y = y) can be computed efficiently using a tree algorithm. The overall maximum may be achieved subsequently by enumerating over all possible assignments to Y.

Obviously, this scheme is effective only when the cycle-cutset is small.
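A brute-force rendering of this two-step decomposition is short. In the sketch below (ours), the inner maximization is exhaustive for clarity, where the scheme above would instead run the linear-time tree algorithm; the weights encode, in goodness form, our reading of the Example 5.1 network used later, with node A as the cutset.

```python
from itertools import product

def goodness(x, w, theta):
    return (sum(wij * x[a] * x[b] for (a, b), wij in w.items())
            + sum(t * x[i] for i, t in theta.items()))

def conditioned_max(w, theta, cutset):
    rest = [i for i in theta if i not in cutset]
    best, best_x = float("-inf"), None
    for y in product((0, 1), repeat=len(cutset)):      # outer loop: Y = y
        for r in product((0, 1), repeat=len(rest)):    # stand-in for the tree pass
            x = {**dict(zip(cutset, y)), **dict(zip(rest, r))}
            g = goodness(x, w, theta)
            if g > best:
                best, best_x = g, x
    return best_x, best

# Example 5.1's network in goodness form, with node A as the cycle-cutset:
w = {("A", "B"): 50, ("B", "C"): 200, ("A", "C"): 100,
     ("A", "D"): 3, ("D", "E"): 3, ("A", "E"): 3}
theta = {"A": -0.1, "B": -0.1, "C": -0.1, "D": -4, "E": -4}
print(conditioned_max(w, theta, ["A"]))
```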
We next discuss some steps towards implementing this idea in a distributed environment.

Given a network with a set of nodes X = {X₁, ..., Xₙ} and a subset of cutset variables Y = {Y₁, ..., Y_k}, presumably a cycle-cutset, and assuming a fixed, unchangeable assignment Y = y, the cutset variables behave like leaf nodes; namely, they select each of their neighbors as a parent if that neighbor does not point to them. Thus, a cutset variable may have several parents and zero or more child nodes.

Considering again the example network in Figure 3b and assuming node (7) is a cutset variable, the tree-directing may now change so that node (7) points both to (5) and to (6), (6) points to (5), and (5) remains the root. Note that with this modification all arcs are directed and the resulting graph is an acyclic directed graph. Once the graph is directed, each regular non-cutset node has exactly the same view as before: it has one parent (or no parent) and perhaps a set of child nodes, some of which may be cutset nodes. It then computes goodness values and activation values almost as before.

An algorithm along these lines will compute the maximum goodness conditioned on Y = y, if Y is a cycle-cutset. Note, however, that such an assignment is not guaranteed to converge to a local maximum of the original (Hopfield) activation function: some of the cutset nodes may be unstable relative to this function.

Enumerating all the conditional maxima to get a global maximum cannot be done distributedly unless the cutset size is small. When the cutset is small, the computation can be done in parallel, yielding a practical distributed solution for networks, as follows. Once the tree-directing part is accomplished, a node computes a collection of goodness values, each indexed by a conditioning assignment Y = y. The goodness values of a node that are associated with the cutset assignment Y = y will be computed using the goodness values of child nodes that are also associated with the same assignment Y = y. The maximum number of goodness values each node may need to carry is exponential in the cutset size. Upon convergence, the roots of the trees will select an assignment Y = y that maximizes the overall goodness value and propagate this information down the tree so that nodes will switch values accordingly. The above algorithm is certainly not in the connectionist spirit and is practically limited to small cutsets. Its advantage is that it finds a true global optimum.

In the following subsection, we will move the cutset approach more towards the connectionist spirit by integrating the cutset scheme with a standard energy-minimizing activation function. This yields a connectionist-style algorithm with a simple activation function and limited memory requirements once the identity of the cycle-cutset nodes is known. We can determine the cutset variables initially using a centralized algorithm for computing a small cutset (Becker & Geiger, 1994). Although not guaranteed to find a global solution, the new activation function is more powerful than standard approaches on cyclic topologies.

Local search with cycle cutset

Algorithm activate-with-cutset in Figure 10 assumes that the cutset nodes are known a priori; this time, however, their values change using standard local techniques (e.g., Hopfield). The algorithm is well-defined also when the cutset nodes do not cut all cycles or when the cutset is not minimal.
However, it is likely to work best when the cutset is small and when it cuts all cycles.

Note that the goodness-value computation of cutset nodes (step 4) does not perform the maximization operation over the two possible activation values of the cutset variables, since the activation value of cutset nodes is fixed as far as the tree algorithm is concerned. Intuitively, if performed sequentially the algorithm would iterate between the following two steps: 1. finding a local maximum using the Hopfield activation function for the cutset variables; 2. finding a global maximum conditioned on the cutset values determined in the previous step, via the tree algorithm. In the connectionist framework these two steps are not synchronized. Nevertheless, the algorithm will converge to a local maximum relative to the Hopfield algorithm as well as a conditional global maximum relative to the cutset variables. Convergence follows from the fact that the tree-directing algorithm is guaranteed to converge given fixed cutset variables. Once it does, a node flips its value either as a result of a Hopfield step or in order to optimize a tree. In both steps the energy does not increase.

Example 5.1 The following example demonstrates how the algorithm finds a better minimum than what is found by the standard Hopfield algorithm when there are cycles. Consider the energy function

energy = −50AB − 200BC − 100AC − 3AD − 3DE − 3AE + 0.1A + 0.1B + 0.1C + 4E + 4D.

The associated network consists of two cycles: A, B, C and A, D, E. If we select node A as a cutset node, the network is cut into two acyclic (tree-like) subnetworks. Assume that the network starts with a setting of zeros (A, B, C, D, E = 0). This is a local minimum (energy = 0) of the Hopfield algorithm. Our activate-with-cutset algorithm breaks out of this local minimum by optimizing the acyclic subnetwork A, B, C conditioned on A = 0. The result of the optimization is the assignment A = 0, B = 1, C = 1, D = 0, E = 0 with energy = −199.7. It is not a stable state because A obtains an excitatory sum of inputs (150) and therefore flips its value to A = 1 using its Hopfield activation algorithm. The new state A, B, C = 1; D, E = 0 is also a local minimum of the Hopfield paradigm (energy = −249.7). However, since nodes A, D, E form a tree, the activate-with-cutset algorithm also manages to break out of this local minimum. It finds a global solution conditioned on A = 1, which happens to be the global minimum A, B, C, D, E = 1 with energy = −250.97. The new algorithm was capable of finding the only global minimum of the energy function and managed to escape two of the local minima that trapped the Hopfield algorithm.

It is easy to see that algorithm activate-with-cutset improves on activate in the following sense:

Theorem 5.1 If a₁ is a local minimum generated by activate and a₂ is a local minimum generated by activate-with-cutset, then if a₁ and a₂ have the same activation value on all non-tree nodes, G(a₂) ≥ G(a₁).
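Sequentially, the interplay of the two steps on Example 5.1 looks as follows. The sketch is ours; the weights are the goodness-form reading of the (partly damaged) constants above, so the printed goodness values are illustrative rather than the paper's.

```python
from itertools import product

w = {("A", "B"): 50, ("B", "C"): 200, ("A", "C"): 100,
     ("A", "D"): 3, ("D", "E"): 3, ("A", "E"): 3}
theta = {"A": -0.1, "B": -0.1, "C": -0.1, "D": -4, "E": -4}
x = {v: 0 for v in theta}            # start at the all-zero local minimum

def arc_sum(i):
    return sum(wij * x[b if a == i else a]
               for (a, b), wij in w.items() if i in (a, b))

def optimize_rest(cut):
    """Exact conditional optimum of all non-cutset variables (tree step)."""
    rest = [v for v in theta if v != cut]
    def g(r):
        z = {**x, **dict(zip(rest, r))}
        return (sum(wij * z[a] * z[b] for (a, b), wij in w.items())
                + sum(t * z[v] for v, t in theta.items()))
    x.update(dict(zip(rest, max(product((0, 1), repeat=len(rest)), key=g))))

for step in range(3):
    optimize_rest("A")                                   # tree step
    x["A"] = 1 if arc_sum("A") >= -theta["A"] else 0     # Hopfield step on A
    print(step, dict(x))       # reaches all-ones, the global optimum
```

The trace mirrors the narrative: conditioning on A = 0 turns B and C on, the Hopfield step then flips A, and a second tree step turns D and E on, reaching the global optimum.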
Local search with changing cutset variables

We can imagine a further extension of the cutset scheme idea that improves the resulting energy level further by conditioning and optimizing relative to many cutsets. In a sequential implementation the algorithm would move from one cutset to the next, until there is no improvement. This process is guaranteed to monotonically reduce the energy. It is unclear, however, how this tour among cutsets can be implemented in a connectionist environment. It is not clear even how to identify one cutset distributedly. Since finding a minimal cycle-cutset is NP-complete, a distributed algorithm for the problem is unlikely to exist. Nevertheless, there could be many brute-force distributed algorithms that find a good cutset in practice. Alternatively, cutset nodes may be selected by a process that randomly designates a node to be a cutset node.

In the following paragraphs we outline some ideas for a uniform connectionist algorithm that allows exploration of the cutset space. We propose the use of a random function to control the identity of the cutset nodes. The random process by which a node becomes a cutset node, or switches from a cutset node to a regular node, may be governed by a random heuristic function f(). A non-tree node may turn into a cutset node with probability P = f(). A cutset node may turn into a non-cutset node if it becomes part of a tree, or by the random process with probability P = g(). The function f() should be designed in a way that assigns high probabilities to nodes with the potential to become "good" cutset nodes. The probability of de-selecting a cutset node may be defined as g() = 1 − f().

Algorithm activate-with-cutset can be augmented with a cutset selection function that runs in parallel with the three procedures (tree-directing, assigning activation values and computing goodness values). Thus, we may add a fourth procedure that selects (or de-selects) the node as a cutset node with probability P = f(). Note that the randomly selected cutset is not perfect and there might be too many or too few cutset nodes. As long as there are cycles, cutset nodes should be selected. At the same time, nodes functioning too long as cutset nodes should be de-selected, thus reducing the chances of redundant cutset nodes while continuously exploring the space of possible cutsets.

One way to implement a heuristic function f is to base it on the following ideas (see the sketch at the end of this subsection): 1. increase the probability of non-tree nodes that have not been cutset nodes for a long time; 2. increase the probability of nodes that have not flipped their value for a long time; 3. increase the probability of nodes with high connectivity.

Note that a de-selected cutset node may cause a chain reaction of undirecting nodes. Nodes that lost their tree-pointers become not-part-of-tree and thus have a potential to become cutset nodes. The network may continue the tour in the cutset space indefinitely and may never become static: the selection/de-selection process may never converge. Nevertheless, if the function f is designed to allow enough time for convergence in between cutset changes, then during the whole process the energy tends to decrease. Temporary fluctuations may sometimes cause an energy increase when a node relies on its not-yet-stable neighbors (for example, temporarily relying on old goodness values of a de-selected node). We conjecture that such a heuristic function f can be constructed so as to allow trees to be stabilized before they are destroyed by de-selection. Formalizing the algorithm's properties and further investigation and experimentation are left for future research.
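As a toy illustration of such an f, combining the three ideas above (the functional form and the constants are purely our assumptions):

```python
import random

def f(node):
    """Selection probability: illustrative constants only."""
    return min(1.0, 0.01
               + 0.002 * node["steps_since_cutset"]   # idea 1
               + 0.002 * node["steps_since_flip"]     # idea 2
               + 0.010 * node["degree"])              # idea 3

node = {"steps_since_cutset": 40, "steps_since_flip": 25, "degree": 4}
random.seed(0)
becomes_cutset = random.random() < f(node)   # applied only to non-tree nodes
print(f(node), becomes_cutset)               # de-selection uses g() = 1 - f()
```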
On general networks the algorithm will generate a global minimum on all tree subnetworks, and on the rest of the network it will coincide with regular local gradient activation functions (e.g., Hopfield). The algorithm dominates an arbitrary local search connectionist algorithm in the following sense: If a1 is a local minimum generated by activate and a2 is a local minimum generated by a corresponding local-search method, then if a1 and a2 have the same activation values on all non-tree nodes (if it is a tree then the set is empty), the energy of a1 is smaller than or equal to the energy of a2.
8. For example, temporarily relying on old goodness values of a de-selected node.
2. We showed that activate can be further extended using the cycle-cutset idea. The extended algorithm, called activate-with-cutset (Figure 10), is guaranteed to converge and generate solutions that are at least as good and normally better than algorithm activate. The algorithm converges to conditional global minima relative to the values of the cutset variables. If a1 is a local minimum generated by activate and a2 is the local minimum generated by activate-with-cutset, then if a1 and a2 have the same activation values on all the cutset variables (if it is a tree, the cutset is empty), the energy of a2 is smaller than or equal to the energy of a1. Therefore activate-with-cutset is better than activate, which in turn is better than a regular energy-minimization connectionist algorithm in the above sense. A third variation of the algorithm is sketched for future investigation. The idea is that the cutset nodes are randomly and continuously selected, thus allowing exploration of the cutset space.
3. We stated two negative results: 1) Under a pure distributed scheduler no uniform algorithm exists to globally optimize even simple chain-like networks. 2) No uniform algorithm exists to globally optimize simple cyclic networks (rings) even under a central scheduler. We conjecture that these negative results are not of significant practical importance, since in realistic schedulers the probability of having infinite pathological scheduling scenarios approaches zero. We showed that our algorithm converges correctly (on tree-like subnetworks) when the demand for pure distributed schedulers is somewhat relaxed, i.e., adding either fair exclusion, almost uniformity or cycles. Similarly, self-stabilization is obtained in acyclic networks or when the requirement for a uniform algorithm is relaxed (almost uniformity). The negative results apply to connectionist algorithms as well as to parallel versions of local repair search techniques. The positive results suggest improvements both to connectionist activation functions and to local repair techniques.
We conclude with a discussion of two domains that are likely to produce sparse, near-tree networks and thus benefit from the algorithms we presented: inheritance networks and diagnosis.
Inheritance is a straightforward example of an application where translations of symbolic rules into energy terms form networks that are mostly cycle free. Each arc of an inheritance network, such as A ISA B or A HAS B, is modeled by the energy term A - AB. The connectionist network that represents the complete inheritance graph is obtained by summing the energy terms that correspond to all the ISA and HAS relationships in the graph, as sketched below. Nonmonotonicity can be expressed if we add penalties to arcs and use the semantics discussed by Pinkas (1991b, 1995).
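As a small illustration of this translation, the sketch below (our code, with an assumed list-of-arcs representation) sums the term A - AB over a small set of arcs, using the Dolphin example that also appears below:

```python
# Our illustration of the arc-to-energy translation: each arc a -> b
# contributes the term a - a*b, which is 1 exactly when a = 1 and b = 0
# (a violated arc) and 0 otherwise. The dict/list layout is assumed.

def inheritance_energy(arcs, x):
    """arcs: (a, b) pairs for 'a ISA/HAS b'; x: node name -> 0/1."""
    return sum(x[a] - x[a] * x[b] for a, b in arcs)

arcs = [("Dolphin", "Fish"), ("Dolphin", "Mammal"),
        ("Fish", "Animal"), ("Mammal", "Animal")]
x = {"Dolphin": 1, "Fish": 1, "Mammal": 1, "Animal": 1}
print(inheritance_energy(arcs, x))                    # 0: no arc violated
print(inheritance_energy(arcs, {**x, "Animal": 0}))   # 2: two arcs violated
```

With every node set to 1 no arc is violated and the energy is 0; setting Animal to 0 violates the two arcs entering it and raises the energy to 2.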
Nonmonotonic relationships may cause cycles both in the inheritance graph and the connectionist network (e.g., Penguin ISA Bird; Bird ISA FlyingAnimal; Penguin ISA not(FlyingAnimal)). Multiple inheritance may cause cycles as well, even when the rules are monotonic (e.g., Dolphin ISA Fish; Dolphin ISA Mammal; Fish ISA Animal; Mammal ISA Animal). Arbitrary constraints on the nodes of the graph may be introduced in this model. Constraints may be represented as propositional logic formulas and then translated into energy terms (Pinkas, 1991), potentially causing cycles. In a "pure" inheritance network that has no multiple inherited nodes and no nonmonotonic relationships, the network is cycle-free and can be processed efficiently by various algorithms. If we allow multiple inheritance, nonmonotonicity, or arbitrary propositional constraints, we may introduce cycles into the networks that are generated. Nevertheless, it is reasonable to assume that in large practical inheritance domains cycles (multiple inheritance, nonmonotonicity and arbitrary constraints) are only scarcely introduced, and the few that exist may be handled by our extension using the cycle-cutset idea.
Another potential application that will generate mostly cycle-free subnetworks is diagnosis. Here is a possible formulation of a diagnosis framework. Let X1, X2, ..., Xn be true (1) / false (0) propositions that represent symptoms and hypotheses. In a diagnosis application we may have diagnosis rules of the form (δ1X1, δ2X2, ..., δmXm →_θ X). These rules state that the symptoms X1, ..., Xm, with importance factors δ1, ..., δm, suggest the hypothesis X with sensitivity θ. A subset of the symptoms may be enough to suggest the hypothesis if the sum of the importance factors of the active symptoms is larger than the sensitivity θ. Intuitively, the larger the sum of the factors, the larger the support for the hypothesis. The corresponding energy function for a diagnosis rule is -Σ^m_{i=1} δ_i X_i X + Σ^m_{i=1} δ_i X_i + θX. In addition, arbitrary propositional constraints may also be added, like (X → X_i), i.e., if the hypothesis X holds, so does the symptom X_i, or (X1 → (¬X2 ∧ ¬X3)) ∧ (X2 → (¬X1 ∧ ¬X3)) ∧ (X3 → (¬X1 ∧ ¬X2)), i.e., only one of the propositions X1, X2, X3 can be true (mutual exclusion). Any propositional logic formula is allowed, and nonmonotonicity may be expressed using conflicting constraints (augmented with importance factors). Quadratic energy functions may be generated from arbitrary propositional constraints by introducing hidden variables (Pinkas, 1991). Sparseness of such networks emerges as a result of assuming conditional independence of symptoms relative to their hypothesis. Independence assumptions of this kind (which make computation tractable) are quite common in actual implementations of Bayes networks, influence diagrams (Pearl, 1988), and certainty propagation of rule-based expert systems (Shortliffe, 1976). When our knowledge base consists only of diagnosis rules (and maybe the corresponding X → X_i rules) and the symptoms are all independent of each other, there are no cycles in the network, and the tree algorithm converges to a global maximum in linear time. When we add dependent symptoms which affect a hypothesis through more than one path, e.g., X1 → X and X1 → X2 → ... → X, or when we start adding arbitrary constraints, cycles are added.
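A quick numeric check of this rule-to-energy translation is sketched below (our code, assuming the energy term written above; the importance factors and sensitivity are arbitrary illustrative values):

```python
# Our numeric check of the diagnosis-rule energy term
#   E = -sum_i d_i x_i X + sum_i d_i x_i + theta * X.

def rule_energy(deltas, xs, theta, X):
    support = sum(d * x for d, x in zip(deltas, xs))
    return -support * X + support + theta * X

deltas, theta = [0.6, 0.5, 0.3], 1.0
xs = [1, 1, 0]                              # two active symptoms
print(rule_energy(deltas, xs, theta, X=1))  # 1.0 (= theta)
print(rule_energy(deltas, xs, theta, X=0))  # 1.1 (= support)
```

Under this term, the energy is θ when X = 1 and the support Σδ_i x_i when X = 0, so accepting the hypothesis is the lower-energy choice exactly when its support exceeds the sensitivity θ (here 1.1 > 1.0).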
When dependent symptoms and arbitrary constraints are only scarcely added, the network generated will most likely lend itself efficiently to the activate-with-cutset algorithm.
An abundance of efficient algorithms exists for both inheritance and diagnosis in their tractable forms. Our algorithm offers both to solve efficiently the tractable versions of the problem and to approximate intractable versions of it in massively parallel, simple-to-implement methods. The efficiency of the suggested process depends on the "closeness" of the problem to an ideal, tractable form." }, { "figure_ref": [], "heading": "A. Appendix", "publication_ref": [ "b2" ], "table_ref": [], "text": "Proof sketch of theorem 4.3: The second and third phases of the algorithm are adaptations of an existing dynamic programming algorithm (Bertelè & Brioschi, 1972), and their correctness is therefore not proved here. The self-stabilization of these steps is obvious because no variables are initialized. The proof is therefore dependent on the convergence of the tree-directing phase.
Let us first assume that the scheduler is distributed with fair exclusion and that the network is a tree. The first part of the theorem is proved by points 1-4. We want to show that the tree-directing algorithm converges, that it is self-stabilizing, and that the final stable result is that the pointers P^j_i represent a tree. Points 5 and 6 prove parts 2 and 3 of the theorem. A node is called legal if it is either a root (i.e., all its neighbors are legal, point to it and it doesn't point to any of them), or an intermediate node (i.e., it points to one of the neighbors and the rest of its neighbors are all legal and point back). A node is called a candidate if it is an illegal node and has all its neighbors but one pointing to it. We would like to show that:
1. The property of being legal is stable; i.e., once a node becomes legal it will stay legal.
2. A state where the number of illegal nodes is k > 0 leads to a state where the number of illegal nodes is less than k; i.e., the number of illegal nodes decreases and eventually all nodes turn legal.
3. If all the nodes are legal then the graph is marked as a tree.
4. The algorithm is self-stabilizing for trees.
5. The algorithm converges even if the graph has cycles (part 2 of the theorem).
6. The algorithm is self-stabilizing in arbitrary networks if an almost uniform version is used, even under a distributed scheduler (part 3 of the theorem).
We will now prove each of the above points.
1. Show that a legal state is stable. Assume a legal node i becomes illegal. It is either a root node one of whose children became illegal, or an intermediate node one of whose children became illegal (it cannot be that its parent suddenly points to i, or that one of the children stopped pointing and is still legal). Therefore, there must be a chain i1, i2, ..., ik of nodes that became illegal. Since there are no cycles, there must be a leaf that was legal and turned illegal. This cannot occur since a leaf does not have children, leading to a contradiction.
2. Show that if there are illegal nodes, their number is reduced. To prove this claim we need three steps:
(a) Show that eventually, if there are illegals, then there are also candidates. Because of the fair exclusion, eventually a state is reached where each node has been executed at least once. Assume that at least one node is illegal, and all the illegal nodes are not candidates.
If a node is illegal and not a candidate, then either it is a root-type node (all point to it) but at least one of its children is illegal, or at least two of its neighbors are illegal. Suppose there are no root-type illegal nodes. Then all illegal nodes have at least two illegal neighbors. Therefore there must be a cycle that connects illegal nodes (contradiction). Therefore, one of the illegal nodes must be root-type. Suppose i is a root-type illegal node. It must have a neighbor j which is illegal. Consider the subtree of j that does not include i: it must contain illegal nodes. If there are no root-type illegal nodes we get a contradiction again. However, if there is a root-type node, we eliminate it and look at the subtree of some illegal j′ that does not include j. Eventually, since the network is finite, we obtain a subtree with no root-like illegal nodes but which includes other illegal nodes. This leads to a contradiction. The conclusion is that there must be candidates if there are illegal nodes.
(b) Show that a candidate is stable unless it becomes legal. If a node i is a candidate, all its legal children remain legal. There are three types of candidate nodes (node j is an illegal neighbor of i):
i. node j points to i;
ii. the pointer goes in both directions;
iii. there is no pointer from i to j or vice-versa.
All possible changes in the pointers P^j_i or P^i_j will cause i to remain a candidate or to turn legal (the rest of the pointers will not be changed).
(c) Show that every candidate node will eventually turn legal: Assume j is the illegal neighbor of the candidate i. In the next execution of i without j (fair exclusion), if P^i_j = 0 then i becomes legal by pointing to j; otherwise, i becomes a root-type candidate (all its neighbors point to it) but j is illegal. We will prove now that if an illegal node j points to i then eventually a state is reached where either j is legal or P^i_j = 0, and that this proposition is stable once it holds. If this statement is true, then when i is eventually executed, if j is legal then all of i's neighbors are legal and therefore i turns legal. If j is illegal then P^i_j = 0, and i will point to it (P^j_i = 1), making itself legal.
We next prove that if j is an illegal node pointing to i then there will be a state where either j is legal or P^i_j = 0, and this state is stable. We prove it by induction on the size of the subtree of j that does not include i.
Base step: If j is a leaf and j points to i, then if at the time j is executed (without i) P^j_i = 0, node j points to i and becomes legal; otherwise, j updates P^i_j = 0. This status is stable because the legal state is stable and since a leaf will point to a node only if it turns legal.
Induction step: Assume the hypothesis is true for trees of size less than n. Suppose j is the illegal neighbor of i. Node j points to i and it has other neighbors j1, ..., jk. Because we assume that all nodes were executed at least once, since j points to i we assume that at the last execution of j all the other neighbors j1, ..., jk pointed to j. The subtrees rooted by jl (not including j) are of size less than n and therefore by the hypothesis there will be a state where all the nodes j1, ..., jk are either legal or P^j_{jl} = 0.
This state is stable, so when eventually j is executed, it will either point to i, turning legal (if all j1, ..., jk are pointing to it), or it will make P^i_j = 0. Since the state of j1, ..., jk is stable at that point, whenever j is executed it will either become legal or its pointer becomes zero.
3. Show that if all the nodes are legal then the graph is marked as a tree: If a node is legal, then all its children are legal and point to it. Therefore each node represents a subtree (if not a leaf) and has one parent at the most. To show that there is only one root we make the following argument. If several roots exist, then because of connectivity, there is one node that is shared between at least two subtrees and therefore has two parents (contradiction).
4. The algorithm is self-stabilizing for cycle-free networks since no initialization is needed (in the proof we haven't used the first initialization step, i.e., P^j_i = 0). In the case where no cycles exist we do not need this step. The pointers can get any initial values and the algorithm still converges.
5. The algorithm (with P^j_i = 0 initialization) converges even if the graph has cycles. Since all the nodes start with zero pointers, a (pseudo) root of a tree-like subnetwork will never point toward any of its neighbors (since it is part of a cycle and all of its neighbors but one must be legal).
6. Show that the algorithm is self-stabilizing in arbitrary networks if an almost uniform version is used, even under a distributed scheduler. We need to show that a candidate will eventually turn legal even if its neighbors are executed at the same time. Suppose node i is a candidate and node j is its illegal neighbor:
(a) If j is a root, then it will never point to i, and therefore i will eventually turn legal by pointing to j.
(b) If i is the root, then P^j_i = 0, and if j becomes legal it will point to i, making i legal. Node j will eventually turn legal using the following induction (on the size of the subtree of j):
Hypothesis: In a subtree without a node that acts as a root, all illegal nodes will eventually turn legal.
Base step: If j is a leaf, it will eventually point to its neighbor i, which in its turn will make j legal by P^j_i = 0.
Induction step: If j1, ..., jk are other neighbors of j, then they will eventually turn legal (induction hypothesis) while pointing to j. Eventually j is executed and also turns legal.
(c) Suppose neither i nor j are roots, but one of them is not part of a cycle (and therefore is part of a subtree that does not include a node marked as a root). Using the above induction, all the nodes in the subtree will eventually turn legal. As a result either i or j eventually turns legal, and therefore i will eventually turn legal as well." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [ "b21" ], "table_ref": [], "text": "This work was supported in part by NSF grant IRI-9157636, by the Air Force Office of Scientific Research, AFOSR 900136, by Toshiba of America, and by a Xerox grant. We would also like to thank Kalev Kask for commenting on the latest version of this manuscript, Kaoru Mulvihill for drawing the figures, Lynn Haris for editing, and the anonymous reviewers who helped improve the final version of this paper. A shorter version of this paper appeared earlier (Pinkas & Dechter, 1992)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Algorithm activate-with-cutset (unit i) Assumption: The cutset nodes are given a priori.
1.
Initialization: If first time, then (∀j) P^j_i = 0;
2. Tree directing:
If i is a cutset node, then for every neighbor j, if P^i_j = 0, then P^j_i = 1 (neighbors become parents unless they already point to it);
else (not a cutset node), if there exists a single neighbor k such that P^i_k = 0, then (part of a tree but not a root) P^k_i = 1 and for all other neighbors j, P^j_i = 0;
else (root or non-tree node), for all neighbors P^j_i = 0;
3. Assigning activation values: If all neighbors of i point to it except maybe one (i.e., it is part of a tree), then
X_i = 1 if Σ_j ((G^1_j - G^0_j) P^i_j + w_{i,j} X_j P^j_i) ≥ θ_i, and X_i = 0 otherwise;
else (a cutset node or a node that is not yet part of any tree), compute the Hopfield update:
X_i = 1 if Σ_j w_{i,j} X_j ≥ θ_i, and X_i = 0 otherwise." }, { "figure_ref": [], "heading": "Computing goodness values: (only nodes in trees need goodness values)", "publication_ref": [], "table_ref": [], "text": "If i is a cutset node, then for each neighbor j: G^0_i = X_i θ_i, G^1_i = X_i (θ_i + w_{i,j}) (these G^0_i, G^1_i are the goodness values reported to neighbor j);
else (a regular tree node):
G^0_i = max{ Σ_{j∈neighbors(i)} G^0_j P^i_j , Σ_{j∈neighbors(i)} G^1_j P^i_j + θ_i };
G^1_i = max{ Σ_{j∈neighbors(i)} G^0_j P^i_j , Σ_{j∈neighbors(i)} (G^1_j P^i_j + w_{i,j} P^j_i) + θ_i };
Figure 10: Algorithm activate-with-cutset" } ]
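For concreteness, the activation step (step 3 of Figure 10) for a single unit can be sketched as follows. This is our rendering, with an assumed data layout: points_to[a][b] = 1 iff a's tree pointer is directed at b (P^b_a above), w is the symmetric weight matrix, theta holds the thresholds, and G0[j], G1[j] are neighbor j's goodness values for X_j = 0 and X_j = 1.

```python
# Our sketch of the activation update for one unit; names are illustrative.

def activation_step(i, X, points_to, w, theta, G0, G1, neighbors):
    not_pointing = [j for j in neighbors[i] if not points_to[j][i]]
    if len(not_pointing) <= 1:
        # part of a tree: children's goodness plus the parent's value
        s = sum((G1[j] - G0[j]) * points_to[j][i]
                + w[i][j] * X[j] * points_to[i][j]
                for j in neighbors[i])
    else:
        # cutset node or not yet part of a tree: plain Hopfield input
        s = sum(w[i][j] * X[j] for j in neighbors[i])
    return 1 if s >= theta[i] else 0
```

A unit with at most one neighbor not pointing at it is part of a tree and uses the goodness-based rule; any other unit falls back to the plain Hopfield input.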
[ { "authors": "D H Ballard; P C Gardner; M A Srinivas", "journal": "", "ref_id": "b0", "title": "Graph problems and connectionist architectures", "year": "1986" }, { "authors": "A Becker; D Geiger", "journal": "", "ref_id": "b1", "title": "Approximation algorithms for loop cutset problems", "year": "1994" }, { "authors": "U Bertel E; F Brioschi", "journal": "Academic Press", "ref_id": "b2", "title": "Nonserial Dynamic Programming", "year": "1972" }, { "authors": "R D Brandt; Y Wang; A J Laub; S K Mitra", "journal": "IEEE International Conference on Neural Networks", "ref_id": "b3", "title": "Alternative networks for solving the traveling salesman problem and the list-matching problem", "year": "1988" }, { "authors": "Z Collin; R Dechter; S Katz", "journal": "", "ref_id": "b4", "title": "On the feasibility of distributed constraint satisfaction", "year": "1991" }, { "authors": "R Dechter", "journal": "Arti cial Intelligence", "ref_id": "b5", "title": "Enhancement schemes for constraint processing: Backjumping, learning and cutset decomposition", "year": "1990" }, { "authors": "R Dechter", "journal": "John Wiley & Sons, Inc", "ref_id": "b6", "title": "Constraint networks", "year": "1992" }, { "authors": "R Dechter; A Dechter; J Pearl", "journal": "John Wiley and Sons", "ref_id": "b7", "title": "Optimization in constraint networks", "year": "1990" }, { "authors": "J A Feldman; D H Ballard", "journal": "Cognitive Science", "ref_id": "b8", "title": "Connectionist models and their properties", "year": "1982" }, { "authors": "G Hinton; T Sejnowski", "journal": "MIT Press", "ref_id": "b9", "title": "Learning and re-learning in boltzmann machines", "year": "1986" }, { "authors": "J J Hop Eld", "journal": "", "ref_id": "b10", "title": "Neural networks and physical systems with emergent collective computational abilities", "year": "1982" }, { "authors": "J J Hop Eld", "journal": "", "ref_id": "b11", "title": "Neurons with graded response have collective computational properties like those of two-state neurons", "year": "1984" }, { "authors": "J J Hop Eld; D W Tank", "journal": "Biological Cybernetics", "ref_id": "b12", "title": "Neural computation of decisions in optimization problems", "year": "1985" }, { "authors": "S Kasif; S Banerjee; A Delcher; G Sullivan", "journal": "", "ref_id": "b13", "title": "Some results on the computational complexity of symmetric connectionist networks", "year": "1989" }, { "authors": "K Korach; D Rotem; N Santoro", "journal": "ACM Transactions on Programming Languages and Systems", "ref_id": "b14", "title": "Distributed algorithms for nding centers and medians in networks", "year": "1984" }, { "authors": "J L Mcclelland; D E Rumelhart; G Hinton", "journal": "MIT Press", "ref_id": "b15", "title": "The appeal of pdp", "year": "1986" }, { "authors": "S Minton; M D Johnson; A B Phillips", "journal": "", "ref_id": "b16", "title": "Solving large scale constraint satisfaction and scheduling problems using a heuristic repair method", "year": "1990" }, { "authors": "C Papadimitriou; A Sha Er; M Yannakakis", "journal": "", "ref_id": "b17", "title": "On the complexity of local search", "year": "1990" }, { "authors": "J Pearl", "journal": "Morgan Kaufmann Publishers", "ref_id": "b18", "title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference", "year": "1988" }, { "authors": "C Peterson; E Hartman", "journal": "Neural Networks", "ref_id": "b19", "title": "Explorations of mean eld theory learning algorithm", "year": "1989" }, { "authors": "G Pinkas", 
"journal": "Neural Computation", "ref_id": "b20", "title": "Energy minimization and the satis ability of propositional calculus", "year": "1991" }, { "authors": "G Pinkas; R Dechter", "journal": "", "ref_id": "b21", "title": "A new improved activation function for energy minimization", "year": "1992" }, { "authors": "D E Rumelhart; G E Hinton; J L Mcclelland", "journal": "MIT Press", "ref_id": "b22", "title": "A general framework for parallel distributed processing", "year": "1986" }, { "authors": "B Selman; H Levesque; D Mitchell", "journal": "", "ref_id": "b23", "title": "A new method for solving hard satis ability problems", "year": "1992" }, { "authors": "E H Shortli E", "journal": "Mycin. Elsevier", "ref_id": "b24", "title": "Computer-Based Medical Consultation", "year": "1976" } ]
[ { "formula_coordinates": [ 3, 246.72, 227.34, 164.64, 37.12 ], "formula_id": "formula_0", "formula_text": "X n ) = n X i<j w i;j X i X j n X i + i X i :" }, { "formula_coordinates": [ 3, 301.44, 322.2, 220.8, 36.82 ], "formula_id": "formula_1", "formula_text": "X n ) = X i<j w i;j X i X j + X i i X i(1)" }, { "formula_coordinates": [ 3, 247.92, 417.6, 274.08, 16.68 ], "formula_id": "formula_2", "formula_text": "; X 5 ) = 3X 2 X 3 X 1 X 3 +2X 3 X 4 2X 4 X 5 3X 3 X 2 +2X 1 ." }, { "formula_coordinates": [ 3, 105.96, 504.64, 286.12, 163.72 ], "formula_id": "formula_3", "formula_text": "2 3 4 5 1 -1 -3 2 -2 1 2 -1 3 Figure 1: An example network" }, { "formula_coordinates": [ 4, 228.96, 134.16, 23.76, 16.78 ], "formula_id": "formula_4", "formula_text": "X i =" }, { "formula_coordinates": [ 6, 172.56, 581.64, 267.12, 37.54 ], "formula_id": "formula_5", "formula_text": "G X k i = max X i 2f0;1g f X j2children(i) G X i j + w i;k X i X k + i X i g" }, { "formula_coordinates": [ 6, 205.44, 669.48, 209.76, 37.54 ], "formula_id": "formula_6", "formula_text": "0 i = maxf X j2children(i) G 0 j ; X j2children(i) G 1 j + i g 4 3 1 2 G 0 4 = 2 G 1 4 = 3 2 -2 -3 2 1 -1 -1 3 1 = 1 G 1 = 2 G 0 1 = 2 G 1 2 = 0 G 0 2 G 5 = 0 1 G 0 5 = 1 G 1 G 0 3 3 = 2 = 2 5 4 3 1 2 X 2 = 0 X 5 = 1 X 3 = 0 X 4 = 0 1 = 1 X (a) (b)" }, { "formula_coordinates": [ 7, 180.96, 288.12, 250.08, 37.54 ], "formula_id": "formula_7", "formula_text": "G 1 i = maxf X j2children(i) G 0 j ; X j2children(i) G 1 j + w i;k + i g" }, { "formula_coordinates": [ 7, 220.08, 665.4, 165.12, 39.92 ], "formula_id": "formula_8", "formula_text": "X i = ( 1 if P j G 1 j + i P j G 0 j 0 otherwise" }, { "formula_coordinates": [ 8, 197.28, 146.52, 210.72, 39.92 ], "formula_id": "formula_9", "formula_text": "X i = ( 1 if P j G 1 j + w i;k X k + i P j G 0 j 0 otherwise" } ]
Improving Connectionist Energy Minimization
Symmetric networks designed for energy minimization such as Boltzmann machines and Hopfield nets are frequently investigated for use in optimization, constraint satisfaction and approximation of NP-hard problems. Nevertheless, finding a global solution (i.e., a global minimum for the energy function) is not guaranteed, and even a local solution may take an exponential number of steps. We propose an improvement to the standard local activation function used for such networks. The improved algorithm guarantees that a global minimum is found in linear time for tree-like subnetworks. The algorithm, called activate, is uniform and does not assume that the network is tree-like. It can identify tree-like subnetworks even in cyclic topologies (arbitrary networks) and avoid local minima along these trees. For acyclic networks, the algorithm is guaranteed to converge to a global minimum from any initial state of the system (self-stabilization) and remains correct under various types of schedulers. On the negative side, we show that in the presence of cycles, no uniform algorithm exists that guarantees optimality even under a sequential asynchronous scheduler. An asynchronous scheduler can activate only one unit at a time while a synchronous scheduler can activate any number of units in a single time step. In addition, no uniform algorithm exists to optimize even acyclic networks when the scheduler is synchronous. Finally, we show how the algorithm can be improved using the cycle-cutset scheme. The general algorithm, called activate-with-cutset, improves over activate and has some performance guarantees that are related to the size of the network's cycle-cutset.
Gadi Pinkas; Rina Dechter
[ { "figure_caption": "Figure 3 :3Figure3: Directing a tree: a) A tree b) A cyclic network with a tree-like subnetwork.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: a) Propagating goodness values. b) Propagating activation values.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 66Figure 6: A Harmony network for recognizing words: local minima along the subtrees are avoided.", "figure_data": "", "figure_id": "fig_2", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: No uniform algorithm exists to optimize chains under distributed schedulers.", "figure_data": "", "figure_id": "fig_3", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :Figure 9 :89Figure 8: No uniform algorithm exists that guarantees to optimize rings even under a central scheduler.", "figure_data": "", "figure_id": "fig_5", "figure_label": "89", "figure_type": "figure" }, { "figure_caption": "2Smolensky, P. (1986). Information processing in dynamical systems: Foundations of harmony theory. In J. L. McClelland and D. E. Rumelhart, Parallel Distributed Processing: Explorations in The Microstructure of Cognition I. MIT Press, Cambridge, MA.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b11", "b1", "b2", "b3", "b8", "b18", "b20", "b21", "b24", "b25", "b20" ], "table_ref": [], "text": "In any computer vision (CV) application involving the recognition or the detection of \\objects\", descriptions of the types of objects to be recognized are required. Object descriptions can be explicitly supplied by a human \\expert\". Alternatively, machine learning techniques can be used to derive descriptions from example objects.\nThere are some advantages to learning object descriptions from examples rather than from direct speci cation by an expert. Speci cally, it may be di cult for a person to c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved. provide a CV system with an accurate description of an object that is general enough to cover the possible variations in the visual appearance of di erent instances of the object. For example, no two tumors in medical images will look exactly the same. Similarly, it would be cumbersome for a human to provide a CV system with the ranges of possible values for all the di erent physical aspects of chairs (i.e., What are the possible surface areas of the seating surface of a chair? How is the seating surface supported?). Considerable \\tweaking\" of the object description parameters may be required by a human expert in order to achieve satisfactory system performance. Machine learning techniques can be used to generate concepts that are consistent with observed examples. Some examples of such learning systems include C4.5 (Quinlan, 1992), and AQ (Michalski, 1983). System performance is a ected by the ratio of the number of training examples to the number of features used to describe the examples, and the accuracy with which the examples represent the \\real-world\" objects the CV system may encounter.\nA function-based object recognition system is an example of a CV system for which machine learning techniques can be useful in the development of object descriptions. A function-based object recognition system recognizes an object by classifying it into one or more generic object categories which describe the function that the object might serve (Bogoni & Bajcsy, 1993;Brand, 1993;Di Manzo, Trucco, Giunchiglia, & Ricci, 1989;Kise, Hattori, Kitahashi, & Fukunaga, 1993;Rivlin, Rosenfeld, & Perlis, 1993;Stark & Bowyer, 1991, 1994;Sutton, Stark, & Bowyer, 1993;Vaina & Jaulent, 1991). Each object category is de ned in terms of the functionality required of an object that belongs to the category. For example, an object category might be de ned as: straight back chair ::= provides sittable surface & provides stability & provides back support indicating that an object can be classi ed as a straight back chair to the degree that it satis es the conjunction of the three functional properties.\nThe functional properties are themselves de ned in terms of primitive evaluations of di erent aspects of an object's shape. For example, candidate surfaces may be checked for provides sittable surface by evaluating whether they have appropriate width, depth and height above the support plane. In many cases, there is not a unique ideal value for some given aspect of an object's shape, but instead there is a range of values that can be considered equivalent in terms of \\goodness\". For example, anything between 0.45 to 0.55 meters might be an equally acceptable height for a seating surface. 
However, as a particular shape measurement becomes too small or too large, the evaluation measure should be reduced. Fuzzy set theory provides a mathematical framework for handling this "goodness of fit" concept. In our case, a fuzzy membership function transforms a physical measurement (i.e., the height of an object's surface above the ground) into a membership value in the interval [0,1]. This membership value, or evaluation measure, denotes the degree to which the object (or portion of the object) fits the primitive physical concept (i.e., how well the height of the surface matches the seating surface height of typical chairs). Thus, a separate measure of goodness is produced for each primitive evaluation. These measures are combined to produce a final aggregate measure of goodness for the object.
The Gruff system (Stark & Bowyer, 1991) is a function-based object recognition system which utilizes fuzzy logic, in the manner just described, to evaluate 3-D shapes. In previous versions of Gruff, the fuzzy membership functions embedded in the system have been collectively hand-crafted and refined to produce the best results over a large set of example shapes. These membership functions are ideal candidates to be learned from examples using a machine learning approach.
In this paper, we present a method of automatically learning the collection of fuzzy membership functions from a set of labeled example shapes. Due to the system constraints imposed by Gruff, general-purpose machine learning algorithms, such as neural networks, genetic algorithms, or decision trees, are not readily applicable. Thus, a new special-purpose learning component, called Omlet, has been developed. Omlet is tested with synthetic data for two different object categories (chairs and cups), and with data collected from human evaluations of physical chairs. Results are presented to show that (a) learning the membership functions in this way provides a level of recognition performance equivalent to that obtained from the "hand-tweaked" Gruff, and (b) the learning method is compatible with human interpretation of the shapes. The approach should be generally applicable to any system in which a set of primitive evaluation measures is combined to produce an overall measure of goodness for the final result.
This paper is organized as follows. Section 2 discusses some related work, and justifies our need to develop a special-purpose learning component. Section 3 introduces the Gruff object recognition system. Section 4 presents the new learning component, called Omlet. At this point, we should state that the material in Section 3 has previously been published, and is presented here to facilitate an understanding of the new learning component. Although Omlet has been specifically "tailored" as an add-on learning component for the Gruff system, it applies to a data structure that can be used in other systems. In general, Omlet can be described as a system for learning in the context of a fuzzy And/Or categorization tree. We point the reader with any questions concerning Gruff's object recognition paradigm to the references provided. Section 5 describes our experimental design and the data sets that are utilized. Section 6 documents the experimental results and gives our analysis of them. Finally, in Section 7 a summary of the paper is given and conclusions are drawn." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b16", "b14", "b19", "b13", "b26", "b10", "b27", "b5", "b0", "b6", "b7" ], "table_ref": [], "text": "There are two ways that learning might be used to ease the construction of systems such as Gruff. The first is that the rules (or proof tree) that make up Gruff could be built by an inductive learning system. C4.5, a decision tree learner (Quinlan, 1992), is a good example of this class of learning systems. However, these types of inductive classification systems cannot adequately replace the functionality of the Gruff/Omlet system. Omlet allows examples which have less than perfect membership in a class to be used for training. There is no direct way to accomplish this in a system such as C4.5. A decision-tree based system would probably require different trees to be trained for parent and child categories. The functional concepts (provides sittable surface, for example) would get lost in the training process if the individual features for a chair were directly used. We could train a series of trees to learn functional concepts individually, then train a decision tree to combine the results. In such an approach the parameters of the membership functions that are learned in this paper would be learned implicitly in the construction of a decision tree for a functional concept and any resulting rules. Replacing Gruff/Omlet with a decision tree or other general-purpose rule learner is possible, but would require extensive work to preserve the idea of functional object recognition.
Omlet is aimed at the second area in which a Gruff-like system could benefit from learning, which is in tuning the membership functions. A knowledge primitive might be a sittable surface. Given measurements for a specific surface of an object in a specific orientation, it is necessary to develop a representation of acceptable bounds on the measurements to determine whether the surface has the area to be sittable.
Techniques from other areas of machine learning have been used to represent and learn probabilistic and fuzzy membership functions. For example, belief networks provide a mechanism for representing probabilistic relationships between features of a domain. Individual feature probabilities can be combined to generate the probability of a complex concept by propagating belief values and constraints through the network. Adaptive probabilistic networks are a kind of belief net that can learn the individual probability values and distributions using gradient descent (Pearl, 1988; Cooper & Herskovits, 1992; Spiegelhalter, Dawid, Lauritzen, & Cowell, 1993). The structure of belief nets and their update algorithms are similar to the approaches found in Omlet. However, Omlet incorporates symbolic theorem proving, a feature that is fundamental to performing function-based object recognition, as well as value propagation.
Similar research has been performed to learn fuzzy membership functions using adaptive techniques such as genetic algorithms and classifier systems (Parido & Bonelli, 1993; Valenzuela-Rendon, 1991). Much of this work can only be used to learn individual membership functions and cannot handle combinations of input. Once again, little work has been directed at learning fuzzy memberships in the context of a rule-based system.
Additional refinement techniques such as reinforcement learning (Mahadevan & Connell, 1991; Watkins, 1989), neural networks, and statistical learning techniques can also be used to refine confidence values.
This project represents a new direction in computer vision and machine learning research; namely, the integration of machine learning and computer vision methods to learn fuzzy membership functions for a function-based object recognition system. Although learning such functions in a rule-based context is a novel effort, similar research has been performed in the area of refining certainty factors for intelligent rule bases. For example, Mahoney and Mooney (1993) and Lacher et al. (1992) use backpropagation algorithms to adjust certainty factors of existing rules in order to improve classification of a given set of training examples. In contrast to Omlet's approach, all of these systems refine values that represent a measure of belief in a given result and are adjusted according to the combination functions of certainty factors. Omlet's measures represent degrees of fuzzy membership in an object class, and the refinement method propagates error through an And/Or tree.
The work by Wilkins and Ma (1994) focuses on revising probabilistic rules in a classification expert system. Probabilistic weights are applied to each rule, indicating the strength of the evidence supplied by the rule. However, refinements to the rule occur in the form of modifying the applicability of the rule by generalizing, specializing, deleting or adding rules, instead of automatically refining the weight of the rule. The authors avoid automatic refinement of weights because the resulting rule base may not be interpretable by experts.
Towell and Shavlik (1993) convert a set of rules into a representation suitable for a neural net, then train the network and re-extract the refined rules. The initial network can be set up for a chain of rules. The extracted rules will not necessarily have the clear functional meaning that our approach aims at preserving.
There are several new approaches to learning and tuning fuzzy rules (Ishibuchi, Nozaki, & Yamamoto, 1993; Berenji & Khedkar, 1992; Jang, 1993; Jang & Sun, 1995) that use genetic algorithms or specialized kinds of neural networks, some making use of reinforcement learning. These approaches might provide an alternative way to learn the membership values provided the initial functional rules are given as fuzzy rules. However, some modifications to the learning approaches would be needed, as they normally work in domains without rule chaining or hierarchies of rules as there are in Gruff/Omlet." }, { "figure_ref": [], "heading": "The Gruff Object Recognition System", "publication_ref": [ "b20" ], "table_ref": [], "text": "The Gruff acronym stands for Generic Representation Using Form and Function (Stark & Bowyer, 1991). The Gruff recognition system takes a 3-D shape description as input, reasons about whether the shape could belong to any of the object categories known to Gruff, and outputs an interpretation for each category to which the object could belong. An \"interpretation\" is a specified orientation and a labeling of the parts of the shape which are identified as satisfying the functional properties. See Figure 1 for an example of an interpretation." }, { "figure_ref": [], "heading": "Provides Stable Support", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Provides Sittable Surface", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "GRUFF Input GRUFF Output", "publication_ref": [], "table_ref": [], "text": "Figure 1: Gruff interpretation of a 3-D shape for the category conventional chair. Elements of the shape are labeled with the functional property they provide." }, { "figure_ref": [], "heading": "The Knowledge Primitives", "publication_ref": [ "b20", "b21", "b24" ], "table_ref": [], "text": "All of Gruff's reasoning about shape is performed using \"low level\" procedural knowledge which is implemented as a set of knowledge primitives. Each knowledge primitive represents some primitive physical property concerning shape, physics, or causation. Each knowledge primitive takes some (specified portions of a) 3-D shape description as its input, along with values of the parameters for the primitive, and returns an evaluation measure between 0 and 1. The evaluation measure represents how well the shape element satisfies the particular invocation of the primitive. The knowledge primitives used by Gruff to recognize chairs are (Stark & Bowyer, 1991, 1994; Sutton et al., 1993):
1. relative orientation (normal one, normal two, range parameters)
This primitive determines if the angle between the normals for two surfaces (normal one and normal two) falls within a desired range." }, { "figure_ref": [], "heading": "dimensions ( shape element, dimension type, range parameters )", "publication_ref": [], "table_ref": [], "text": "This primitive can be used to determine if the dimension (e.g. width or depth) of a surface lies within a specified range.
3. proximity ( proximity type, shape element one, shape element two )
This primitive can be used to check qualitative relations between shape elements, such as above, below and close to." }, { "figure_ref": [], "heading": "clearance ( object description, clearance volume )", "publication_ref": [], "table_ref": [], "text": "This primitive can be used to check for a specified volume of unobstructed free space in a location relative to a particular part of the shape." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "stability ( shape, orientation, applied force )", "publication_ref": [], "table_ref": [], "text": "This primitive can be used to check that a given shape is stable when placed on a flat supporting plane in a given orientation and with a (possibly zero) force applied.
Each of the first two knowledge primitives includes four range parameters: z1 (stands for 1st zero point), n1 (1st normal point), n2 (2nd normal point), and z2 (2nd zero point). These parameters are used to define a trapezoidal fuzzy membership function, as in Figure 2, for calculating an evaluation measure for the invocation of the primitive. The last three of the knowledge primitives do not have range parameters. They return an evaluation measure of 1 or 0 depending on whether or not the primitive physical property has been satisfied.
Trapezoidal membership functions reflect a desire to name (categorize) objects in a manner compatible with human naming. There is typically a non-trivial range for the "ideal" value of many physical properties related to functionality. For example, while there is a unique value for the mean sittable surface area of a population of chairs, that value is not the only one that would rate a perfect "1.0" for sittability. 
Reasonable deviations result in no decrease in the sittability. When the sittable surface area falls outside the ideal range (i.e., between z1 and n1, or between n2 and z2 in Figure 2), the evaluation measure is reduced, indicating the surface provides a less than perfect (but still functional) sittable area. Finally, when the area falls outside the range of values (less than z1, or greater than z2 in Figure 2), the surface can no longer function as the sittable portion of a chair, and an evaluation measure of 0 is returned." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "The Category Definition Tree", "publication_ref": [ "b21" ], "table_ref": [], "text": "Gruff's knowledge about different object categories is implemented as a category definition tree, the leaves of which represent invocations of the knowledge primitives. The category definition tree for the chair category is illustrated in Figure 3.
A node in a category definition tree may have two subtrees. One subtree gives the definition of the category in terms of a list of functional properties. In our chair example, an object must satisfy the functional properties of stability and provides sittable surface in order to be considered a member of the category conventional chair. Each functional property may be defined in terms of multiple primitives. The evaluation measures of individual primitives are combined (in a manner to be discussed shortly) to determine how well the functional properties have been satisfied. These functional property measures are further combined to arrive at an overall evaluation measure for a category node.
The other subtree defines a subcategory. A subcategory is a specialization of its parent (or superordinate) category, and thus provides a more detailed elaboration of the definition of its parent. A subcategory node has a subtree of functional properties that are required in addition to those of the parent category. For example, in Figure 3, the subcategory straightback chair is a specialization of a conventional chair with the additional functional requirement provides back support. The overall evaluation measure for a subcategory node is a combination of its parent category evaluation measure and the evaluation measure associated with the additional functional properties. In Figure 3, the overall measure for the subcategory straightback chair is a combination of the measures from the conventional chair node and the provides back support subtree. Note that subcategory measurements do not contribute to the cumulative measure for a parent category. There may be multiple levels of subcategories, as with conventional chair, straightback chair, and armchair in Figure 3.
Category nodes which have no associated functional properties (such as the root node chair in Figure 3) do not have associated evaluation measures. These nodes are used to set up the control structure of the function-based definition. However, they do provide the category definition since an object that is a member of a subcategory is automatically a member of all its predecessor categories. For example, in Figure 3, an object that belongs to the subcategory straightback chair also belongs to the categories conventional chair and chair. A superordinate category furniture could be added above the chair category (Stark & Bowyer, 1994)." }, { "figure_ref": [], "heading": "Combination of Evidence", "publication_ref": [], "table_ref": [], "text": "The evaluation measures returned by the primitive invocations at a functional property node are combined using the T-norm: T(a, b) = a · b, where a and b are the measures being combined. This T-norm is commonly referred to as the probabilistic and (Pand) function (Bonissone & Decker, 1986). The immediate parent category node directly receives an associated measure by combining the measures of the functional property nodes using the same T-norm.
For example, the functional property provides sittable surface is defined by six primitives. For simplicity, we'll denote the evaluation measures returned by these six primitives as p1 through p6. The functional property stability is defined by a single primitive, which also returns an evaluation measure (p7). To determine the overall evaluation measure of a shape for the category conventional chair we compute E conventional chair = p1 · p2 · p3 · p4 · p5 · p6 · p7.
Since the definition of a (sub)category is a conjunction of required functional properties, the cumulative measure should be dominated by the "weakest link" in the individual primitive evaluation measures, a property of the Pand function. So, an evaluation measure of 0 for any one primitive physical property will result in a cumulative evaluation measure of 0. An evaluation measure of 1 indicates that the primitive physical property has been ideally satisfied, and the shape may belong to the object category. The final result depends on the evaluation of other primitive physical properties.
It would seem that each category could simply be defined by the knowledge primitives without using the notion of functional properties. The functional property level was introduced into the representation hierarchy for two reasons. First, the subgroupings of functional properties intuitively follow the levels of named categorization typical of human concepts of function. Secondly, most functional property evaluations result in the labeling of the functional elements of the object (i.e., the portions of the structure) that fulfill the functional requirement.
Since the subcategory definition represents an increasingly specialized definition, evidence for belonging to the subcategory should result in an increased measure for the object belonging to the subcategory as opposed to just the parent category. The combination of the functional property measurement of a subcategory node, a, with its parent node's evaluation measure, b, is computed using the T-conorm: S(a, b) = a + b - a · b.
This T-conorm is commonly referred to as the probabilistic or (Por) function (Bonissone & Decker, 1986). While the T-conorm is used to combine measures at a subcategory node, the final subcategory evaluation measure is actually computed as:
E subcategory = S(a, b) if a > T, and E subcategory = 0 otherwise,
where T is a user-defined threshold. Thus, the functional property measurement of a subcategory node, a, must be greater than some minimum in order for a shape to receive a non-zero evaluation measure for the subcategory. For the purposes of this work, a value of T = 0 is assumed, indicating that a shape can be assigned to a subcategory as long as there is some non-zero evidence that it meets the additional functional requirements associated with the subcategory.
In practice, a final classification decision might require much stronger evidence, say T = 0.7, before a shape is assigned to a subcategory.
For example, to determine the overall evaluation measure of a shape for the category straightback chair, we first compute the overall evaluation measure for the category conventional chair, as previously described. The functional property provides back support is defined by 8 primitives. Denoting the measurements returned by the 8 primitives as p8 through p15, the overall evaluation measure (assuming the measure for provides back support > T) for the category straightback chair is computed as: E straightback chair = S(E conventional chair, p8 · p9 · ... · p15) = E conventional chair + (p8 · ... · p15) - E conventional chair · (p8 · ... · p15).
An object that can function as a straightback chair can also by definition function as a conventional chair. The T-conorm will give the object a higher evaluation measure for the subcategory straightback chair since there is some evidence in addition to the "minimal" amount of evidence required for the shape to belong to the parent category conventional chair. Thus, Gruff performs recognition of a shape by selecting the (sub)category with the highest overall evaluation measure. This should correspond to the most specific applicable subcategory. One exception occurs when the parent category has an evaluation measure of 1 and there is non-zero evidence supporting the subcategory functional requirements. In this case, the T-conorm assigns an evaluation measure of 1 to both the category and subcategory.
The particular T-norm/T-conorm pair utilized in this paper was chosen from among representative T-norm/T-conorm possibilities (including non-probabilistic formulations) described by Bonissone and Decker (1986) after analyzing their performance in conjunction with Gruff across a set of example shapes (Stark, Hall, & Bowyer, 1993a)." }, { "figure_ref": [], "heading": "The OMLET Learning System", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the Omlet learning (sub)system. Omlet learns fuzzy membership functions, which are located at the leaves of an And/Or categorization tree, from sets of training examples. Omlet works together with Gruff to automatically learn object category definitions and use those definitions to recognize new objects.
In the training mode, Omlet uses examples to learn the fuzzy ranges for primitive measurements. Each training example consists of an object description coupled with a desired overall evaluation measure. In the testing mode, Omlet uses the previously learned ranges to act as a function-based object recognition system. Knowledge primitives form the building blocks of the Omlet system, and rules make up the representation language. The rules, which are fixed, are derived from Gruff's category definition tree. They indicate 1) how the knowledge primitives are combined to define functional properties, and 2) how the functional properties are combined to give the function-based definition of an object category.
Given a training example, Omlet uses the rules to construct a general proof tree for the example's given object category. The proof tree is simply a data structure that mimics the way Gruff combines primitive evaluation measures. The proof tree also maintains the primitive ranges that are modified by the learning algorithm. An example proof tree generated from the rules that define an object in the conventional chair category is shown in Figure 4. The proof trees contain only those knowledge primitives which are defined using range parameters.
This is because the other knowledge primitives return only 0/1 measures, and so there is no primitive membership function to learn. The training example must satisfy these "binary", or necessary, functional properties and return evaluation measures of 1 in order for the example to be a member of the given category. For example, in Figure 4, the left branch of the top Pand node represents the functional property provides stable support. This functional property is defined by a single knowledge primitive which has no range parameters. Therefore, this input to the Pand node is fixed to always return a 1.
For Omlet to obtain an overall evaluation measure for an example object, the physical measurements of the shape elements of the object are input to the primitive fuzzy membership functions in the leaves of the proof tree. The output at a leaf node represents the evaluation measure for the individual functional property. The evaluation measures are combined at the internal nodes of the tree using the probabilistic T-norm/T-conorm combiners described in Section 3.3. The overall evaluation measure of the input example is then output at the root node (see Figure 4).
Input to Omlet consists of a set of goals for specific examples from object (sub)categories. The goal includes the example's (sub)category, the elements of the 3-D shape that fulfill the functional properties, and an overall desired evaluation measure which is greater than 0 (otherwise the object is not an example of the object category). Figure 5 shows an example of a goal for a conventional chair object.
Using the training examples, Omlet attempts to learn the ranges used in the trapezoidal membership functions associated with the knowledge primitive definitions (see Figure 2). When a training example is presented, Omlet attempts to prove via the rule base that the object is a member of the specified category. Here, the check is to make sure the physical elements of the object listed in the goal satisfy the binary, or necessary, functional properties. So, for a conventional chair training example, Omlet checks that the given orientation is stable, and the given seating surface is accessible (clearance in front and above) and meets a minimum width-to-depth ratio. If the necessary functional properties have all been satisfied, a proof tree is constructed. The actual overall evaluation measure is then calculated in the manner described above.
Figure 4: The simplified proof tree constructed for a learning example from the category conventional chair. The ?a, ?b, and ?c symbols in the rules represent the physical aspects of a shape that are used by the rules. An orientation of the shape, the face of the sittable surface, and the front edge of the sittable surface are substituted for ?a, ?b, and ?c, respectively. This way Omlet knows which elements of a shape are to be "measured" and evaluated by the knowledge primitives.
If the actual evaluation measure is sufficiently different from the desired evaluation measure, then the primitive fuzzy membership functions that were included in the definition need to be adjusted.
Primitive membership functions are adjusted by propagating the overall error for each training sample down through the nodes of the proof tree in a way that attempts to give each leaf node (i.e., range) some portion of the error. The range parameters (z1, n1, n2, and z2) that define the fuzzy membership trapezoids are then adjusted in an attempt to reduce the total error of the examples in the training set.
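The bottom-up combination just described can be sketched in a few lines. The code below is our illustration (not Omlet's), using the Pand/Por combiners of Section 3.3 and uniform leaf measures of 0.9 for the fifteen chair primitives:

```python
from functools import reduce

# Our sketch of the bottom-up combination: Pand across a node's inputs,
# and the thresholded Por that merges a subcategory's own measure with
# its parent's score. The 0.9 leaf measures are illustrative.

def pand(measures):
    return reduce(lambda a, b: a * b, measures, 1.0)

def por(a, b):
    return a + b - a * b

def subcategory_score(own_measures, parent_score, T=0.0):
    a = pand(own_measures)
    return por(a, parent_score) if a > T else 0.0

p = [0.9] * 15                           # p1..p15 of the chair example
conventional = pand(p[0:7])              # p1..p7 -> about 0.478
straightback = subcategory_score(p[7:15], conventional)  # about 0.703
print(conventional, straightback)
```

With these values the conventional chair node scores about 0.478 and the straightback chair subcategory about 0.703, showing how the extra evidence raises the subcategory measure above its parent's.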
The next few subsections provide details of the Omlet learning algorithm. First, we discuss the method for calculating an error value and propagating it down through the proof tree. Next, we present a method for making initial estimates of the parameters for each membership function. We describe error propagation first because it is utilized in the initialization phase. We then describe how Omlet makes adjustments to the membership functions in an attempt to reduce the error over the entire training set. The last subsection describes the general learning paradigm and provides some theoretical justification for our implementation." }, { "figure_ref": [ "fig_6" ], "heading": "Error Propagation", "publication_ref": [], "table_ref": [], "text": "The error for a training example is defined as the difference between the desired evaluation measure and the actual evaluation measure computed by the current state of the Omlet system. A fraction of the error (defined by a "learning rate") is propagated down the proof tree through the Pand and Por nodes. Error propagation through Pand and Por nodes is handled differently. If the error at a three element Pand node is E, then each of the three elements will receive a portion of the error equal to the cube root of E (i.e., the inverse of the Pand function). For a Por node, the full amount of error, rather than an equal share, is propagated down each link. The rationale for this treatment of error should become clear in Section 4.4.

It should be noted that while the desired evaluation measure is fed to the root of the tree and propagated down to the leaves, the error is directly computable since the actual and the projected desired values are always known at each node. The actual values at each node are those computed when the physical measurements of the object shape are fed into the leaf nodes and combined to produce an overall evaluation measure at the root. The projected desired values in the proof tree are obtained by propagating the desired evaluation measure from the root node down to the leaves. For example, given a two input Pand node with actual inputs $a_1$ and $a_2$, the actual output $A$ will be $a_1 a_2$ (from the T-norm in Section 2.3). If the desired output of the node is $D$, then we can compute the desired inputs to the node as $d_1$ and $d_2$ by solving the following set of equations:

$$\frac{a_1 a_2}{d_1 d_2} = \frac{A}{D} \qquad (1)$$

and

$$\frac{a_1}{d_1} = \frac{a_2}{d_2} \qquad (2)$$

The first equation computes the error for the Pand node, while the second equation assures equal portions of the error are assigned to each input. Figure 6 shows an example of the desired values computed via Equations 1 and 2 for every node in a proof tree. In this figure, we have a known desired overall measure of $D = 0.6$ at the top Pand node, and an actual measure of $A = 0.35$ which was computed as the Pand of the actual node inputs, $a_1 = 0.612$ and $a_2 = 0.571$. Using Equations 1 and 2, we can easily compute the two unknown desired inputs $d_1$ and $d_2$ to the top Pand node (which are also the desired outputs of the bottom two Pand nodes) as 0.795 and 0.754, respectively. If there are three inputs to a Pand node, then we solve a set of three linear equations to derive the desired inputs.
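Since $A = a_1 a_2$, Equation 1 reduces to $d_1 d_2 = D$, and together with Equation 2 the two desired inputs have a closed form. A small sketch (ours, not the original implementation) makes the projection explicit:

```python
from math import sqrt

def pand_desired_inputs(a1, a2, D):
    # Solve d1*d2 = D (Equation 1 with A = a1*a2) subject to the
    # equal-share condition a1/d1 = a2/d2 (Equation 2).
    d1 = sqrt(D * a1 / a2)
    d2 = sqrt(D * a2 / a1)
    return d1, d2

# The Figure 6 example: D = 0.6, actual inputs 0.612 and 0.571.
print(pand_desired_inputs(0.612, 0.571, 0.6))
# approx (0.80, 0.75), close to the 0.795 and 0.754 shown in Figure 6
```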
When there are more than three inputs to a Pand node, we divide the set of inputs recursively into groups of two or three and solve a set of two or three linear equations, respectively.

Since the Por nodes are used to combine a single parent category measure with a single aggregate measure for a subcategory's functional properties, there will never be more than 2 inputs to this type of node. Therefore, the full amount of error can be propagated through a Por node by simply solving the independent equations:

$$a_2 + d_1 - a_2 d_1 = D \qquad (3)$$

and

$$a_1 + d_2 - a_1 d_2 = D \qquad (4)$$

Eventually, some portion of the overall error is propagated to the ranges defined by the trapezoid membership functions. When the error reaches the individual ranges for a training example, the input to the primitive membership function (i.e., the x axis value) and the desired primitive evaluation measure (the y axis value) define a point that should lie somewhere on the trapezoid. We also note which leg of the trapezoid the point belongs to, based on which side of the normal portion of the range [n1, n2] the x value lies.

The set of desired points for each leg can be used to make adjustments to the trapezoid in an attempt to reduce the error. Omlet collects these desired points for each leg of each membership function by propagating the error for all training examples down the proof trees. The trapezoid/range parameters (z1, n1, n2, z2) are adjusted at the end of each training epoch. Training continues for a fixed number of epochs or until some satisfactory level of performance, defined by minimal classification error rate averaged over the training set, is achieved." }, { "figure_ref": [ "fig_8" ], "heading": "Initial Estimate of Measurement Functions", "publication_ref": [], "table_ref": [], "text": "Omlet's learning algorithm begins by making reasonable initial estimates of all fuzzy trapezoid membership functions for the physical measurements. This is accomplished by assigning actual values of 0 for the membership functions for each training example and propagating the errors (which in this case would be equal to the desired evaluation measures) down to the ranges in the leaf nodes of the proof trees. From the collections of desired points, we make an initial estimate of each trapezoidal membership function. It is only important at this stage to place the edges of the constructed normal range (the n1 and n2 range parameters) somewhere within the actual normal range. The learning algorithm will make adjustments to the n1 and n2 points on subsequent training epochs. Additionally, Omlet may set minimum or maximum limits on the values of some of the range parameters (more on this shortly).

A training example with a desired evaluation measure of 1 is considered a "perfect" example of an object from a given category. Perfect training examples are desirable in the training set because all primitive measurements for perfect examples are known to fall in the range [n1, n2]. For example, if a conventional chair training example has a desired evaluation measure of 1, then we know that all of the membership functions in its proof tree (see Figure 4) must return values of 1. This is because the result of the Pand function can be no greater than the minimum input.

Omlet now examines the set of desired points that have been propagated to each range in the definition tree and determines "limit" points. These are defined as follows. If any two desired points have y values (memberships) of 1, then at least a segment of the normal range [n1, n2] is known. The n1 range parameter is set to the minimum x value of all desired points with y values of 1. Similarly, the n2 parameter is set to the maximum x value of all desired points with y values of 1.
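The limit-point bookkeeping just described can be sketched as follows. The data layout (a list of (x, y) desired points per range) and the dictionary of limits are our assumptions for illustration:

```python
def initial_limits(points):
    # points: list of (x, y) desired points propagated to one range.
    xs_full = [x for x, y in points if y >= 1.0]  # perfect-membership points
    xs_all = [x for x, _ in points]
    limits = {"z1_max": min(xs_all),   # z1 may never rise above this
              "z2_min": max(xs_all)}   # z2 may never fall below this
    if xs_full:
        # A known segment of the normal range [n1, n2].
        limits["n1_max"] = min(xs_full)
        limits["n2_min"] = max(xs_full)
    return limits

print(initial_limits([(0.3, 0.4), (0.4, 1.0), (0.55, 1.0), (0.7, 0.2)]))
# {'z1_max': 0.3, 'z2_min': 0.7, 'n1_max': 0.4, 'n2_min': 0.55}
```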
Note that if only one such desired point is found then n1 and n2 are set to the same value, and the membership function is initially triangular.

Since some portion of the normal range is known to be correct, an upper limit is set on the n1 value and a lower limit is set on the n2 value to assure that the known segment of the normal range is not reduced during subsequent training. Since training examples have desired membership values greater than 0, we know that all x input values must lie between z1 and z2. Omlet uses the minimum and maximum x values from the set of desired points to set limits on the z1 and z2 range parameters. The z1 range parameter is never permitted to increase above the minimum x value during training. Similarly, the z2 value may never decrease below the maximum x value in the set of desired points. Figure 7 shows the range parameters (limit points) Omlet sets during the initialization phase given a set of 10 examples. The limits on the range parameters serve several purposes. First, the limits assure that perfect training examples will not be assigned evaluation measures less than 1, and that all training examples will have evaluation measures greater than 0. More importantly, by limiting the changes that can be made to some range parameters, better approximations to the desired membership functions can be learned. In subsequent learning, the error is propagated down the proof tree with the assumption that equal amounts of the error come from each input to a node. This assumption is not always valid, and there is no way to directly determine the portion of the error that belongs to each input. If an error propagated to a membership function would cause a change in one or more of the range parameters (z1, n1, n2, z2) that moves the parameter past its set limit, the portion of the overall error assumed to be caused by the membership function has not been correctly estimated. When this occurs the parameter is set equal to its limit, effectively reducing the degree to which changes in the membership function would compensate for the overall error. This should allow the learning algorithm to find a good solution in the case where different membership functions contribute different amounts of error.

If a segment of the normal range is known for some membership function, then initialization of the range parameters is straightforward. The n1 and n2 values will have already been set. The z1 value is set simply by making the left leg of the trapezoid pass through the point (n1, 1.0) and the point from the set of desired points with the minimum x value. Similarly, the z2 value is set by making the right leg of the trapezoid pass through the point (n2, 1.0) and the desired point with the maximum x value. If there are no points to the left (right) of the n1 (n2) point, then the membership function is assumed to be one-legged (as for CONTIGUOUS SURFACE in Figure 4) and the parameters n1 and z1 (n2 and z2) are extended to a very large negative (positive) value and not permitted to change during training.

If no portion of the normal range of a membership function can be determined, then we attempt to fit a trapezoid to the set of desired points. First, the two desired points with the maximum y values are found. We assume that the normal range lies somewhere between them. A best-fit trapezoid is determined by varying the n1 and n2 range parameters over the assumed normal range, and selecting the normal range [n1, n2] that produces the lowest error for the set of desired points.
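One plausible rendering of this best-fit search follows, scoring candidates with the point-wise absolute error defined in the next paragraph. The step size is an assumption (the paper only says "small increments"), and the sketch reuses the trapezoid helper sketched earlier:

```python
import numpy as np

def leg_zero(n, x, y):
    # x-intercept of the leg through (n, 1.0) and the desired point (x, y).
    return x - y * (n - x) / (1.0 - y) if y < 1.0 else x

def best_fit_trapezoid(points, left, right, step=0.01):
    lo = min(points)  # desired point with the minimum x value
    hi = max(points)  # desired point with the maximum x value
    best, best_err = None, float("inf")
    for n1 in np.arange(left, right + step, step):
        for n2 in np.arange(n1, right + step, step):
            # z1 and z2 are re-derived by forcing each leg through the
            # extreme desired points, as described in the text.
            z1, z2 = leg_zero(n1, *lo), leg_zero(n2, *hi)
            err = sum(abs(y - trapezoid(x, z1, n1, n2, z2))
                      for x, y in points)
            if err < best_err:
                best, best_err = (z1, n1, n2, z2), err
    return best
```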
The error is the sum of the absolute values of the difference between the desired y value and the actual y value found for each point. The z1 (z2) range parameter is set in the same manner as before, where the left (right) trapezoid leg is forced to pass through the desired point with the minimum (maximum) x value. The n1 value is varied from the leftmost point of the assumed normal range to the rightmost point in small increments. For each different value of n1, the n2 value is varied from n1 to the rightmost point of the assumed normal range in small increments. So, we are simply testing a range of possible trapezoids (with the degree of accuracy, and number of trapezoids tested, defined by the increments in which n1 and n2 are varied) that have a normal range [n1, n2] somewhere within the assumed normal range. From these we select the set of range parameters that minimize the total error over the set of training examples. The use of a best-fit trapezoid approach is helpful, because we have no initial way to accurately associate error with any given trapezoid." }, { "figure_ref": [], "heading": "Adjusting Membership Functions", "publication_ref": [], "table_ref": [], "text": "To make adjustments to a membership trapezoid, each leg of the trapezoid is fit to a set of desired points using a least squares line fit. Recall that after every training epoch we have a set of desired points for each leg of each trapezoid. The new z1 (z2) value of the trapezoid is set to the point at which the left (right) leg intersects 0. The new n1 (n2) value is set to midway between the old n1 (n2) value and the value where the left (right) leg of the fitted line intersects y = 1. The new n1 and n2 values are not directly set to where the fitted trapezoid legs intersect 1 because overestimating the normal range [n1, n2] can eliminate some desired points that should be used in the least squares line fit for a trapezoid leg. Desired points in the normal [n1, n2] range by definition do not fall on a leg of the trapezoid, and are not used when adjusting the trapezoid legs. Therefore, if the normal range is overestimated, points that truly belong on a trapezoid leg will not be used to adjust the leg. By gradually moving the normal points n1 and n2, Omlet is better able to converge on an appropriate solution. After the new range parameter values (z1, n1, n2, z2) have been determined, Omlet checks to make sure that none of them lie outside any limits that may have been set in the initialization phase. Restrictions on new range parameters assure that the membership functions remain trapezoidal (or triangular if n1 = n2). First, z1 must be less than or equal to n1. Similarly z2 must be greater than or equal to n2. If z1 (z2) is greater (less) than n1 (n2) then z1 (z2) is set equal to n1 (n2). Also, n1 must be less than or equal to n2. In the case that there is only a single point in the set of desired points for a trapezoid leg, the leg is defined by the normal point for that leg (n1 for the left leg and n2 for the right leg) and the single desired point.

The training data may provide target points for only a portion of a trapezoid for some of the ranges. Omlet is capable of detecting this situation by observing the slope of the fitted line, and adjusting the membership function appropriately. The slope of the left trapezoid leg should be positive and the slope of the right leg should be negative.
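A sketch of one left-leg update at the end of a training epoch, under the scheme just described (the function name is ours; the near-horizontal and wrong-sign slope cases handled next are deliberately omitted here):

```python
import numpy as np

def update_left_leg(points, old_n1):
    xs = np.array([x for x, _ in points])
    ys = np.array([y for _, y in points])
    slope, intercept = np.polyfit(xs, ys, 1)  # least squares line fit
    new_z1 = -intercept / slope               # where the leg crosses y = 0
    fit_n1 = (1.0 - intercept) / slope        # where the leg crosses y = 1
    new_n1 = (old_n1 + fit_n1) / 2.0          # move only halfway toward the fit
    return min(new_z1, new_n1), new_n1        # enforce z1 <= n1

print(update_left_leg([(0.22, 0.15), (0.3, 0.66)], old_n1=0.4))
# approx (0.20, 0.38); the limit checks from initialization would follow
```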
If the slope of the fitted trapezoid leg is nearly horizontal (close to 0.0), or the sign of the slope is opposite what is expected, then the normal point on that leg is moved (again, n1 for the left leg and n2 for the right leg) outward. This adjustment allows Omlet to learn one-legged membership functions, and to handle (as well as possible) situations when not enough training data is available.

A method of escaping local minima was empirically found useful. Normally Omlet does not allow a trapezoid leg to change if the change causes an increase in total error for the training set. So, it is possible for zero, one or both trapezoid legs for each range to get adjusted on an epoch. If learning slows down sufficiently, then Omlet will temporarily allow trapezoid leg changes that cause an increase in overall error in hopes of escaping a possible local minimum. More precisely, if the total training set error for one epoch decreases by less than a specified threshold, then range changes that cause an increase in overall error are permitted for the next training epoch." }, { "figure_ref": [], "heading": "The Training Approach", "publication_ref": [ "b9" ], "table_ref": [], "text": "In order to learn all the various subcategories defined in a category definition tree, we utilize a machine learning approach which is based on an assumption about human learning known as one disjunct per lesson (Van Lehn, 1990). Perhaps it is easiest to understand the mechanics of our learning approach if we explain the one-disjunct-per-lesson assumption in the terminology of cognitive science. Since many of the terms in machine learning are derived from the cognitive sciences, it will not be difficult to show the similarities between our algorithm and this characterization of human learning. We will also examine some of the computational characteristics of our learning algorithm that support our choice of this approach." }, { "figure_ref": [ "fig_9", "fig_9", "fig_9" ], "heading": "One Disjunct Per Lesson", "publication_ref": [ "b9" ], "table_ref": [], "text": "Van Lehn (1990) tells us that an effective way of teaching more complicated concepts is to build them up from simple subconcepts, as opposed to an "all-at-once" approach. For our purposes, a disjunct can be considered one of these simple subconcepts. A lesson consists of an uninterrupted sequence of demonstrations, examples, and exercises. The length of a lesson varies. Thus, we might expect a human to better understand the concept of an armchair by presenting a series of lessons, each of which introduces a single new subconcept that builds upon the previous subconcepts. For example, a first lesson teaches the concept of a conventional chair which requires only a stable sittable surface in the correct orientation.

To learn what constitutes a straightback chair, we build upon the concept of conventional chair by introducing the subconcept of back support in a second lesson. So, the second lesson broadens our notion of chairs, in general. Finally, a third lesson builds upon our understanding of a straightback chair by introducing the subconcept of arm support. By contrast, the all-at-once approach may try to explain that an armchair provides a stable sittable surface in the correct orientation with some back and arm support. Here, we are trying to teach three subconcepts at one time, and show how the three subconcepts together form the more complex concept of an armchair.
Indeed, Van Lehn (1990) cites some laboratory studies which indicate that the learning task is more difficult when more than one disjunct (subconcept) is taught per lesson.

We have chosen to utilize a machine learning algorithm which has underpinnings similar to Van Lehn's one-disjunct-per-lesson assumption. In our case, concepts and subconcepts are represented by categories and subcategories. A lesson for our algorithm consists of numerous epochs of the training examples from one (sub)category. Thus, our lesson can be viewed as an uninterrupted sequence of positive examples that "teach" the functional requirements for a single (sub)category. The length, or number of training epochs, of our lessons may vary depending on the subcategory being learned. To learn all the ranges in a category definition tree, we begin by learning the simplest concepts first. Then we learn additional more complex subconcepts by building upon the notion of the more simple concept. For example, in the simplified proof tree in Figure 8, the parent category conventional chair will be learned before attempting to learn the subcategory (specialization) straightback chair. Since the subcategory straightback chair is itself a parent category, it will be learned before attempting to learn the even more complex subcategory armchair. The remainder of this subsection discusses our implementation in finer detail.

From an implementation standpoint, the simplest concepts are the functional properties associated with the categories that are directly linked to the root node in our category definition tree, such as provides sittable surface and provides stable support for the category conventional chair. In our first lesson, we use positive examples from these "first level" (or parent) categories to learn only those membership functions associated with these categories. Once the first level categories have been learned, their membership functions are "frozen" and not permitted to change during subsequent lessons.

In our second lesson, only the membership functions of the "second level" categories (i.e., the subcategories of the first level categories in the definition tree) are learned. In Figure 8, these membership functions belong to the node provides back support for the subcategory straightback chair. If we have learned the "simple" functional concept associated with the parent category, the values computed for a parent category node are assumed to be reasonably accurate. For example, when the actual values in a proof tree are computed for a straightback chair training example, the actual values emanating from the parent category node conventional chair should be accurate since the concepts associated with this node have already been learned. That is, the evaluation measures for the functional properties provides sittable surface and provides stable support of a straightback chair example are assumed to be correct. This implies that the membership functions making up the functional requirement subtree (i.e., provides back support) are responsible for the entire error for a subcategory training example. (This explains why Equations 3 and 4 are used to propagate error through Por nodes.) Hence, the error is propagated to the modifiable leaves under a functional requirement node through a Pand subtree and learning continues as before.

The lessons continue with each parent category being learned before any of its subcategories are learned, until all subcategories have been learned.
By freezing the parent category membership functions after they have been learned, we are adhering to the one-subconcept-per-lesson strategy. So in Figure 8, after learning straightback chair, the membership functions for that branch are frozen and the armchair subcategory is learned by modifying the membership functions under the provides arm support branch of the proof tree.

Omlet begins learning by evaluating the rule base in order to determine subcategory dependencies and assigns each (sub)category in the definition tree a level in the learning hierarchy. For example, Omlet determines that the category conventional chair has no parent category and its membership functions can be learned immediately (level 1). However, the evaluation measure of the subcategory straightback chair is dependent on the parent category conventional chair. The straightback chair subcategory is assigned to learning level 2. Subcategory armchair is dependent on parent category straightback chair, and is therefore assigned to learning level 3." }, { "figure_ref": [], "heading": "Practical Justification", "publication_ref": [], "table_ref": [], "text": "In order to understand why we have taken a one-disjunct-per-lesson approach rather than an all-at-once approach, let's make some observations concerning how accurately blame assignment for an error can be determined for a typical training example.

Recall that error propagation through a proof tree involves projecting desired node input values from a known node output value. Consider a Pand node with a known desired output of 0.9, and two unknown inputs. We know that both of the inputs must be at least 0.9. This means both inputs to the Pand node fall within the relatively small range [0.9, 1.0]. However, when the desired output of a two input Por node is 0.9, we can only be sure that both inputs fall in the range [0, 0.9]. If the known output to the Pand or Por node is very low, say 0.1, then there is an opposite effect. That is, the unknown inputs for a Por node would lie in the relatively small range [0.0, 0.1], and the unknown inputs for the Pand node would fall somewhere in the much larger range [0.1, 1.0]. These observations suggest that the blame assignment for error can be propagated through a Pand node with reasonable accuracy on examples that are relatively good, say 0.7 or above. However, for high evaluation measures, an error value cannot be reliably propagated through a Por node.

Since a subcategory evaluation measure is computed as the Por of a parent category evaluation measure and the combination of additional functional requirements, all Por nodes in a proof tree have two inputs. All Por nodes (in our proof trees) have at least 1 connecting node which consists of a parent (or more general) category whose membership calculation involves only Pand connectives. The structure of the proof trees permits the membership functions which contribute to the evaluation measure of a parent category to be accurately learned prior to learning those defined in the additional functional requirements of the subcategories. That is, we can determine one of the inputs to any Por node before we attempt to propagate an error through that node. With one input and the desired output of a Por node known, calculation of the unknown input is trivial. Thus, our learning approach eliminates the reliability problems associated with propagating blame assignment for error through Por nodes.
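The "trivial" calculation is just the inversion of the probabilistic or. A one-line sketch (our naming), using the already-learned parent-category measure $a$ and the desired output $D$:

```python
def por_desired_input(a, D):
    # Invert S(a, d) = a + d - a*d for the unknown input d.
    # Undefined when a = 1 (the exception noted in Section 2.3).
    return (D - a) / (1.0 - a)

# e.g. parent measure 0.85 and desired subcategory measure 0.9625:
print(por_desired_input(0.85, 0.9625))  # 0.75
```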
This will be verified in Section 6 with experimental results for the subcategories straightback chair and armchair.

The mechanics of our learning algorithm suggest that Omlet's performance depends on how accurately blame assignment can be propagated through the Pand nodes of a proof tree. Earlier, we observed that blame assignment is less reliably propagated through Pand nodes for "bad" training examples. Not surprisingly, this suggests that the quality of the training data will have an effect on system performance. This does not mean that "bad" examples of an object (sub)category cannot, or should not, be included in the training set. Since we use a least squares line fit to adjust the fuzzy membership functions, the use of some "bad" training examples (for which the blame may have been inaccurately distributed among the fuzzy membership functions) should not dramatically affect the overall reliability of the learned system parameters. Rather, it is just desirable to train the system with examples that, for the most part, are good examples of their labeled object category. However, this is not unreasonable as we might expect a machine (or a human for that matter) to better learn what constitutes a chair by observing good examples of chairs." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "Upon reading in the rule base, the knowledge primitive measurements of the training examples, and all training example goals, Omlet begins by learning the membership functions of all level 1 categories. The first learning epoch is used to make initial estimates of the membership functions, and then Omlet iterates for 1000 additional training epochs. A learning rate of 0.15 is used during the 1000 training epochs, so that 15 percent of the actual error for each training example is propagated to the adjustable ranges on each epoch. After the 1000 training epochs, the best range parameters (those that resulted in the lowest overall error) for level 1 categories are restored and frozen. The 1000 training epochs are then repeated for the level 2 categories, followed by the level 3 categories, and so on until all ranges in the category definition tree have been learned.

The performance task of the Omlet system is evaluated by how well the trained system recognizes objects that were not used in the training phase. One measurement of system performance is the error observed on the test examples. The error for a test example is computed as the absolute value of the difference between the desired and actual evaluation measures. Training/test sets are configured two ways: random partitioning of all labeled data into training and test sets, and leave-one-out testing. In the first case, for a given size training set, 10 train/test set pairs are created by randomly partitioning all the labeled data. The error for a single test set is the average error of all test examples. The results for a given size training set are reported as the average error of the 10 partitions. In leave-one-out testing, one example in the data set is used to test while all remaining samples form the training set. This is repeated using each example in the data as the test set, and results are reported as the average error of all test examples. The average error per example versus the training set size is plotted for training sets of 10, 20, 30, ..., N-1 samples. The point with N-1 training examples represents the leave-one-out test results."
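A schematic of this level-by-level schedule follows. The three injected callables are hypothetical stand-ins for the real machinery (one epoch of error propagation and leg adjustment, snapshotting the ranges, and freezing them); only the epoch count and learning rate come from the text:

```python
EPOCHS, LEARNING_RATE = 1000, 0.15

def train_levels(levels, run_epoch, snapshot, restore_and_freeze):
    """levels: (sub)categories ordered parent-first, as Omlet derives
    them from the rule base (level 1, then level 2, and so on)."""
    for level in levels:
        best_err, best = float("inf"), None
        for _ in range(EPOCHS):
            err = run_epoch(level, LEARNING_RATE)
            if err < best_err:                 # remember the best ranges
                best_err, best = err, snapshot(level)
        restore_and_freeze(level, best)        # frozen for later lessons
```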
}, { "figure_ref": [ "fig_10" ], "heading": "Test on the Gruff Chair Database", "publication_ref": [ "b20" ], "table_ref": [], "text": "From the evaluations of Gruff (Stark & Bowyer, 1991), a large database of 3-D shapes specified as polyhedral boundary representations has been built up. Figure 9 shows 52 chair shapes. A number of the 52 shapes can belong to more than one category or can function in more than one stable orientation. This results in a total of 110 training examples. There are 78 labeled instances for the category conventional chair. Some 28 of these instances additionally satisfy the function of straightback chair, and 4 instances satisfy the function of armchair. For each shape, we have the evaluation measure for the shape's membership in different object categories, as computed by Gruff with the hand-crafted functions for the primitive evaluation measures. This set of shapes and their evaluation measures make up the first set of training examples.

The first set of experiments will help determine how well Omlet learns a set of membership functions that minimize the overall error, and also how closely the learned membership functions approximate the original functions hand-crafted by an expert for Gruff. A question of great practical importance to vision researchers is whether a machine learning technique can derive a set of system parameters equivalent to the hand-crafted results of the system designer. If so, the manual effort in system construction could be greatly eased. When the learning task is formulated as duplicating the Gruff measures, the training data for these experiments is effectively "noiseless". (Noiseless in the sense that the desired evaluation measures that are used as input to Omlet are all derived in the same manner from the same set of hand-crafted fuzzy membership functions.)" }, { "figure_ref": [], "heading": "Test on a Synthetic Cup Database", "publication_ref": [ "b12", "b28", "b28" ], "table_ref": [], "text": "The definition and recognition of cups is a task that has been visited frequently in machine learning research (Mitchell, Keller, & Kedar-Cabelli, 1986; Winston, Binford, Katz, & Lowry, 1983). As Winston et al. (1983) observe, it is hard to tell vision systems what cups should look like. It is much easier to talk about the purpose and function of a cup. We convey the description of a cup by providing its functional definition. In particular, a cup is described as an object that can hold liquid, that is stable, liftable, and can be used to drink liquids. The physical identification can be made using this functional definition. In particular, for the synthetic set of objects created here, these functional properties are broken down into 19 knowledge primitives, 17 of which have range parameters.

We generated a database of 200 synthetic cup examples, for which the measurements of the knowledge primitives are randomly distributed. Hand-crafted range parameters (z1, n1, n2, z2) are supplied for all 17 ranges in the cup functional definition. To generate a cup example, a primitive measurement is randomly selected for each range. Approximately 80% of the time the primitive measurement is randomly chosen between n1 and n2. The other 20% of the time the measurement is randomly chosen outside n1 and n2, but inside z1 and z2. This cup generator program provides us with the capability to create a large number of cup examples without the time-consuming process of creating actual 3-D CAD models for each example."
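A sketch of such a generator is below. The 80/20 proportion is from the text; the even split between the two legs is our assumption:

```python
import random

def generate_cup(ranges):
    # ranges: list of (z1, n1, n2, z2) tuples, one per learnable range.
    example = []
    for z1, n1, n2, z2 in ranges:
        if random.random() < 0.8:
            example.append(random.uniform(n1, n2))   # inside the normal range
        elif random.random() < 0.5:
            example.append(random.uniform(z1, n1))   # on the left leg
        else:
            example.append(random.uniform(n2, z2))   # on the right leg
    return example
```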
}, { "figure_ref": [ "fig_11" ], "heading": "Learning from Human Evaluation Measures", "publication_ref": [], "table_ref": [], "text": "In object recognition it is important to test a system on real objects, if possible, for a number of reasons. First, we can see whether the system can approximate human judgment. Second, it is important to observe system performance in the presence of noise, which real-world data will inevitably contain. Finally, using real-world data will alleviate the need to completely hand-craft the system with synthetic data. This is actually a useful guide for the scenario where the "vision system engineer" gives the system a set of human-labeled examples, and lets the system learn the parameters. To test Omlet, we have used a set of 37 actual objects and human ratings of how well they might serve as a chair. Figure 10 shows some of the objects used in these experiments. In order to determine how well Omlet can learn to recognize the set of real chair-like objects, all the objects were collected together in a single room and each object was placed in the orientation in which it would most likely be recognized as a chair. For actual chairs, this is simply the orientation in which the chair would typically be used. For a metal trash can it would be an "upside down" orientation, etc. Then a group of 32 undergraduate students in an Artificial Intelligence class was given the following instructions:

You are asked to rate each of the thirty-seven objects according to the degree of "chair-ness" that is reflected in its 3-D shape. For our purposes, "chair-ness" measures if the object could be used as a chair. You are to consider only the 3-D shape in making your rating. You should assume that each object is made of appropriate materials, so that this is not a factor in your ratings. You are to consider the suitability of the object shape only in the orientation that you see it, rather than some other orientation. Examples of factors that you should consider in rating the "chair-ness" of a shape are height, width, depth, area, relative orientation and apparent stability.

You are asked to rate each shape against the requirements of three different aspects of "chair-ness". The first aspect is solely its ability to provide a stable seating surface. The second aspect is solely its ability to provide back support compatible with the seating surface. The third aspect is solely its ability to provide arm support compatible with the seat and back. Each aspect should be judged independently on a scale of 1 to 5, where 1 means it has no ability to provide the required function and 5 means that it seems ideal to provide the desired function. You may mark halfway between two numbers if you wish.

The ratings of each aspect of "chair-ness" were then averaged, normalized and rounded to the nearest multiple of 0.02 to result in values in the range [0, 1]. The overall evaluation measures for the objects for the conventional chair category are taken as the normalized evaluation measures for the first aspect of "chair-ness", that is the object's ability to provide a stable seating surface. Overall evaluation measures for the categories straightback chair and armchair are computed using the probabilistic or T-conorm to combine the three aspects of "chair-ness" in the manner described in Subsection 3.3. Hence, a comfortable, sturdy chair would have a value close to 1 for "chair-ness", while the upside-down trash can has a considerably lower value (approximately 0.5).
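As a worked sketch of this bookkeeping: the exact 1-to-5 to [0, 1] mapping is not spelled out in the text, so the linear (mean - 1)/4 mapping below is our assumption; the rounding grid and Por combination are from the text:

```python
def normalize(ratings):
    # ratings: the students' 1-to-5 scores for one aspect of one object.
    mean = sum(ratings) / len(ratings)
    value = (mean - 1.0) / 4.0              # assumed linear map onto [0, 1]
    return round(value / 0.02) * 0.02       # nearest multiple of 0.02

def overall_measure(seat, back=None, arms=None):
    measure = seat                          # conventional chair aspect
    for aspect in (back, arms):
        if aspect is not None:
            measure = measure + aspect - measure * aspect  # probabilistic or
    return measure

print(normalize([4.5, 5.0, 4.0]))           # 0.88
print(overall_measure(0.88, back=0.6))      # 0.952 (straightback chair)
```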
After the objects had been rated, measurements were taken for each of the primitives describing the chair in the Gruff system. The measurements were those required for the Omlet rules, such as the clearance from the ground, the area of the sittable surface, the height of the sittable surface, etc. Complete Omlet examples describing the objects were then created, including the aggregate evaluation measure of the objects for the categories conventional chair, straightback chair, and armchair. This resulted in 37 objects for the conventional chair category, 22 objects in the straightback chair category (15 objects had no back support at all), and 12 objects in the armchair category (10 objects that had back support did not have any arm support). There are at least two sources of noise in this experimental data: 1) the human evaluations, and 2) the actual measurements of the physical properties of the objects. For example, the standard deviations of the normalized human evaluations of the 37 objects for the conventional chair category are about 0.12, or 12%, on average. The results of leave-one-out testing on the 37 real-world objects are presented in the next section." }, { "figure_ref": [ "fig_12", "fig_13", "fig_14", "fig_14", "fig_14", "fig_12", "fig_17", "fig_18", "fig_12" ], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "There are at least four factors that may affect the performance of the Omlet system: 1) the number of training epochs, 2) the number of training samples for each category, 3) the number of ranges to be learned for each category, and 4) the quality of the training data for each category. Histograms of the desired evaluation measures of the training data are used to convey the concept of training set "quality". They are shown in Figure 11 for the Gruff chair data. The height of each histogram bin is the number of training samples with desired evaluation measures that fall within a particular range. So, the histogram of a "good" set of training data would be skewed towards the higher evaluation measures. Similarly, the histogram representing "bad" training data would be skewed towards the lower evaluation measures. The histogram of a parent category, such as conventional chair or cup, represents the distribution of the overall desired evaluation measures (which are the goal measures of the examples in the data set provided as input to Omlet). However, the histograms for subcategories, such as straightback chair and armchair, represent the distributions of the desired evaluation measures associated with the additional functional requirements defined for the subcategory. For example, the histogram for the straightback chair category represents the quality of the provides back support portion of the straightback chair examples in a data set, not the overall desired evaluation measures. Recall that the ranges associated with the parent category conventional chair will be frozen (and presumably accurate) before learning begins for the category straightback chair. So, Omlet only uses straightback chair examples to learn the ranges associated with the provides back support functional property. Thus, when learning the ranges for the category straightback chair, we want to observe the quality of the back supports of the training examples. Similarly, we want to observe the quality of the arm supports of the armchair examples, not the overall desired evaluation measures.
Figure 12 shows examples of the average training sample error plotted as a function of the number of training epochs for each of the three data sets (Gruff objects, synthetic cups, and real objects). From these plots, we can see that 1000 training epochs is more than sufficient for all of the categories in the three data sets. Training could most likely be stopped after 400 epochs for any of the categories without a degradation in system performance. Since the number of training epochs is the same for all categories, and has been shown to be sufficient, we can eliminate this factor as a possible cause for the different levels of performance among categories. Some experiments in addition to those described in Section 5 were run to examine the effect of the other performance factors.

6.1 The Gruff Chair Database

Figure 13 shows the plot of the average error per sample versus training set size for examples from the conventional chair category, and a separate plot for examples from the straightback chair category. Since there are only 28 straightback chair examples, only 3 different training set sizes (6, 12, 18) were evaluated in addition to the leave-one-out testing. All 78 conventional chair examples were used to train the ranges associated with the conventional chair category before the ranges for the straightback chair category were trained. No testing was done for the subcategory armchair since there were only four training samples available. The plot shows that increasing the number of training samples generally leads to a reduction in the average error. When more than 20 training examples are used, the actual evaluation measures of the test examples are within approximately 1% of the desired evaluation measures for both the conventional chair and straightback chair categories.

We should note here that the errors in overall evaluation measures found for categories at different learning levels are not directly comparable. So, the plot of the error rate for the straightback chair category is not directly comparable to the plot for the conventional chair category (Figure 13). As an example, consider an object with a desired overall evaluation measure of 0.85 for the category conventional chair. If Omlet computes an actual evaluation measure of 0.86, then the error for this example is 0.01. Let's assume the provides back support portion of this object has a desired evaluation measure of 0.75. The overall desired evaluation measure for this example in the category straightback chair would be 0.9625 (Por of 0.85 and 0.75). Now, suppose Omlet finds the actual evaluation measure for the back support of the object to be 0.76, or an error of 0.01. In this case, the actual overall evaluation measure of this example for the category straightback chair would be 0.9664 (Por of 0.86 and 0.76). As a result, the error of 0.01 attributed to the provides back support portion of the object is manifested as a much smaller error of 0.0039 in the overall evaluation measure of the object.

The original range parameters (z1, n1, n2, z2) hand-crafted by an expert for the three ranges in the conventional chair definition (see Figure 4) are the range values used by Gruff to determine the desired evaluation measures in the goals provided to Omlet. A typical example of the range parameters as learned by Omlet is close to these hand-crafted values. Omlet was able to determine that the CONTIGUOUS SURFACE range was a one-legged membership function, and the n2 and z2 values (i.e., the leg that does not exist) were set to arbitrarily large values. These results show that the Omlet system is capable of using labeled examples to automatically determine range parameters which are similar to those that would be hand-crafted by an expert.
This will facilitate the construction of other object category definitions.

In Figure 13, we can see that the number of training samples does indeed affect the error rate of test samples. With more than 20 or so training samples, the error rates for both the conventional chair and straightback chair categories begin to level off. So, the number of training samples becomes less of a factor affecting system performance if a sufficient number are used. What constitutes a sufficient number of training samples for a category may depend on the number of ranges to be learned and the quality of the training data. There are 3 ranges that must be learned for the category conventional chair, and 5 ranges that must be learned for the category straightback chair. The histograms of desired evaluation measures for the Gruff conventional chairs and the back supports of the Gruff straightback chairs in Figure 11 A and B, respectively, reflect the quality of the training data used for the leave-one-out tests. We can isolate the effect of the quality of the training data with some additional experiments utilizing two separate data sets of Gruff conventional chair examples. The number of training epochs, the number of training samples, and the number of ranges to be learned will be identical for each data set. One data set of 38 "bad" examples contains all conventional chair examples with desired evaluation measures less than 0.6. A second data set of "good" examples was created by selecting 38 of the remaining conventional chair examples. The histograms of desired evaluation measures for the examples used in the "good" and "bad" data sets are shown in Figure 11 C and D, respectively. Leave-one-out testing (37 training examples) resulted in an average error of 0.0001 for the examples in the "good" data set, and 0.1869 for the examples in the "bad" data set.

Using the set of 38 "good" conventional chair examples to train Omlet, the average error found using the 38 "bad" examples to test drops to 0.013 (compared to an average error of 0.1869 when 37 "bad" examples are used to train). A closer examination of the results reveals that one "bad" example contributes a relatively high error of 0.5 to the average. If this single example is excluded from the test results, the average error of the remaining 37 "bad" examples is only 0.00067. If the 38 "bad" examples are used to train Omlet, the average error found using the 38 "good" examples to test is 0.242. These results indicate that Omlet is not inherently biased to produce more accurate test results for "good" examples since we are able to achieve a low error rate for the "bad" examples when "good" training data is used. Rather, these results emphasize the importance of controlling the quality of the data used to train Omlet." }, { "figure_ref": [], "heading": "The Synthetic Cups Database", "publication_ref": [], "table_ref": [], "text": "Figure 14 shows the plot of the average error per sample versus training set size for examples from the randomly generated cup category. As before, Omlet's performance generally improves as the number of training samples is increased. A comparison of the error plots for the conventional chair data and the cup data reveals that the average error for the cups is higher for the same number of training samples, and the error rate decreases more erratically. The comparison of error rates between these two categories is valid since they are both at the same level in the learning hierarchy. As before, there are two performance factors that could be the cause of the different error rates. There are considerably more ranges that need to be learned for the cup category than for the Gruff conventional chair category (17 versus 3). Also, from Figure 15 A, we can see that the data set created by the cup generator program is of poor quality. Thus, due to the random nature of the synthetic cup generator program, the system was trained with shapes that, on average, are not very good examples of cups. Regardless of the poor training data, when more than 150 training samples are used, the actual evaluation measures for the cup test examples are within approximately 4% of the desired evaluation measures. In light of the "bad" set of shapes used as training examples and the large number of ranges that must be learned, the higher average error for cups seems reasonable. As an additional test, we generated a set of 78 synthetic cups in the same manner as before (see Section 5.2).
However, we required the distribution of the desired evaluation measures of the synthetic cups to be similar to that of the Gruff conventional chair examples (shown in Figure 11 A). The results of leave-one-out testing on this set show an average error of less than 0.01 per sample. Thus, it would seem that the number of ranges to be learned affects system performance considerably." }, { "figure_ref": [ "fig_21", "fig_21" ], "heading": "The Chair Database for Human Evaluation", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Leave-one-out test results for the real-object database with evaluation measures derived from human ratings of the objects are listed in Table 1. Recall that the error rates are not directly comparable among the three categories. The actual evaluation measures for the conventional chair objects are within approximately 7% of the human evaluation measures. The average error here is about 6% greater than for the Gruff data with a similar number of training samples. The histogram in Figure 16 A shows that the data set of real conventional chair objects contains mostly "good" examples. Thus, the higher average error can probably be attributed to the "noise" associated with the real-object evaluation measures. Considering an average standard deviation of 12% for the human evaluations of the conventional chair objects, a 7% average error per sample for the Omlet results does not seem unreasonable. The actual evaluation measures for the real-object straightback chairs and armchairs differ on average by less than 1% from the desired measures. As before, all conventional chair examples were used to train the ranges associated with the conventional chair category before the ranges for the straightback chair category were trained. The histograms of desired evaluation measures for the back support of the real straightback chair objects and the arm support of the real armchair objects are shown in Figure 16 B and C, respectively." }, { "figure_ref": [], "heading": "Summary and Discussion", "publication_ref": [ "b20", "b21", "b24", "b23", "b4" ], "table_ref": [], "text": "We have presented a system (Omlet) which uses labeled training examples to learn fuzzy membership functions embedded in a function-based object recognition system. The fuzzy membership functions are used to provide evaluation measures which determine how well a shape fits the functional description of an object category. The Omlet system is an example of using machine learning techniques to aid in the development of a computer vision system. We have shown that it is possible to accurately and automatically learn system parameters which would otherwise have to be provided by a human expert. Omlet may be used to aid in the construction of other object categories for the Gruff object recognition system. The expert does not need to concentrate on "hand-tweaking" the range parameters to improve system performance, but rather on providing a good set of example objects to "show" to Omlet. This is intuitively appealing in that we are deriving descriptions of objects we would like Gruff to recognize by providing examples from the object category.
Additionally, we have been able to demonstrate that the performance of the learning algorithm is affected by the number and quality of the training examples.

It should be possible for the learning approach described in this paper to be applied to other systems in which measurements (or other values) are combined in a tree structure. All cases are covered by our approach, except the case of 2 leaves leading directly to a Por node. However, a generalization of our method for treating Por nodes may be developed to handle this situation. The tree structure in our computer vision system is composed entirely of probabilistic and nodes and probabilistic or nodes, which are used to combine measurements. It is possible that a similar approach is applicable to tree structures in which other types of nodes (T-norms or T-conorms) are used.

The Omlet system should make it easier to adapt the Gruff system to new object domains. Early versions of Gruff performed object recognition starting from complete 3-D shape descriptions (Stark & Bowyer, 1991, 1994; Sutton et al., 1993) rather than from real sensory data. The task of reliably extracting accurate object shape descriptions from normal intensity images is beyond the current state of the art in computer vision. Although work in, for example, binocular stereo, is steadily progressing, accurate models of object shape are more readily extracted from range imagery. Whereas in normal imagery a pixel value represents the intensity of reflected light, in range imagery a pixel value represents the distance to a point in the scene. A version of Gruff has been developed which attempts to recognize object functionality from the shape model that is extracted from a single range image (Stark, Hoover, Goldgof, & Bowyer, 1993b). A major difficulty here is, of course, that a single range image does not yield a complete model of the 3-D shape of an object. The "back half" of the object shape is unseen (Hoover, Goldgof, & Bowyer, 1995). The accumulation of a complete 3-D shape model through a sequence of range images is a topic of current research. If this problem was solved, then it is conceivable that an Omlet training example might consist of a sequence of range images along with some operator annotations to identify which portions of the images correspond to the functionally important parts of the object (seating surface, back support surface, etc.)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research was supported by Air Force Office of Scientific Research grant F49620-92-J-0223 and National Science Foundation grant IRI-91-20895." } ]
[ { "authors": "H Berenji; P Khedkar", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b0", "title": "Learning and Tuning Fuzzy Logic Controllers Through Reinforcements", "year": "1992" }, { "authors": "P Bonissone; K Decker", "journal": "North-Holland Publishing Company", "ref_id": "b1", "title": "Selecting Uncertainty Calculi and Granularity: An Experiment in Trading-off Precision and Complexity", "year": "1986" }, { "authors": "M Brand", "journal": "", "ref_id": "b2", "title": "Bayesian Method for the Induction of Probabilistic Networks from Data", "year": "1992" }, { "authors": "M Di Manzo; E Trucco; F Giunchiglia; F Ricci", "journal": "International Journal of Intelligent Systems", "ref_id": "b3", "title": "FUR: Understanding FUnctional Reasoning", "year": "1989" }, { "authors": "A Hoover; D Goldgof; K Bowyer", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b4", "title": "Extracting a valid boundary representation from a segmented range image", "year": "1995" }, { "authors": "H Ishibuchi; K Nozaki; N Yamamoto", "journal": "", "ref_id": "b5", "title": "Selecting Fuzzy Rules by Genetic Algorithm for Classification Problems", "year": "1993" }, { "authors": "J S R Jang", "journal": "IEEE Transactions on Systems, Man and Cybernetics", "ref_id": "b6", "title": "ANFIS: Adaptive-Network-based Fuzzy Inference Systems", "year": "1993" }, { "authors": "J S R Jang; C T Sun", "journal": "", "ref_id": "b7", "title": "Neuro-Fuzzy Modeling and Control", "year": "1995" }, { "authors": "K Kise; H Hattori; T Kitahashi; K Fukunaga", "journal": "", "ref_id": "b8", "title": "Representing and Recognizing Simple Hand-tools Based on Their Functions", "year": "1993" }, { "authors": "K VanLehn", "journal": "The MIT Press", "ref_id": "b9", "title": "Mind Bugs: The Origins of Procedural Misconceptions", "year": "1990" }, { "authors": "S Mahadevan; J Connell", "journal": "", "ref_id": "b10", "title": "Automatic Programming of Behavior-Based Robots Using Reinforcement Learning", "year": "1991" }, { "authors": "R S Michalski", "journal": "Tioga Publishing Company", "ref_id": "b11", "title": "A Theory and Methodology of Inductive Learning", "year": "1983" }, { "authors": "T M Mitchell; R M Keller; S T Kedar-Cabelli", "journal": "Machine Learning", "ref_id": "b12", "title": "Explanation-Based Generalization: A Unifying View", "year": "1986" }, { "authors": "A Parodi; P Bonelli", "journal": "", "ref_id": "b13", "title": "A New Approach to Fuzzy Classifier Systems", "year": "1993" }, { "authors": "J Pearl", "journal": "Morgan Kaufmann", "ref_id": "b14", "title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference", "year": "1988" }, { "authors": "J R Quinlan", "journal": "Morgan Kaufmann", "ref_id": "b16", "title": "C4.5: Programs for Machine Learning", "year": "1992" }, { "authors": "E Rivlin; A Rosenfeld; D Perlis", "journal": "", "ref_id": "b18", "title": "Recognition of Object Functionality in Goal-Directed Robotics", "year": "1993" }, { "authors": "D Spiegelhalter; P Dawid; S Lauritzen; R Cowell", "journal": "Statistical Science", "ref_id": "b19", "title": "Bayesian Analysis in Expert Systems", "year": "1993" }, { "authors": "L Stark; K W Bowyer", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b20", "title": "Achieving generalized object recognition through reasoning about association of function to structure", "year": "1991" }, { "authors": "L Stark; K W Bowyer", "journal": "Image Understanding", "ref_id": "b21", "title": "Function-based recognition for multiple object categories", "year": "1994" }, { "authors": "L Stark; L O Hall; K W Bowyer", "journal": "Int. J. of Pattern Recognition and Artificial Intelligence", "ref_id": "b22", "title": "An investigation of methods of combining functional evidence for 3-D object recognition", "year": "1993" }, { "authors": "L Stark; A W Hoover; D B Goldgof; K W Bowyer", "journal": "", "ref_id": "b23", "title": "Function-based recognition from incomplete knowledge of shape", "year": "1993" }, { "authors": "M Sutton; L Stark; K W Bowyer", "journal": "Elsevier Science Publishers", "ref_id": "b24", "title": "Function-based generic recognition for multiple object categories", "year": "1993" }, { "authors": "L Vaina; M Jaulent", "journal": "Int. J. of Intelligent Systems", "ref_id": "b25", "title": "Object structure and action requirements: a compatibility model for functional recognition", "year": "1991" }, { "authors": "M Valenzuela-Rendon", "journal": "", "ref_id": "b26", "title": "The Fuzzy Classifier System: A Classifier System for Continuously Varying Variables", "year": "1991" }, { "authors": "C J Watkins", "journal": "", "ref_id": "b27", "title": "Models of Delayed Reinforcement Learning", "year": "1989" }, { "authors": "P H Winston; T O Binford; B Katz; M Lowry", "journal": "National Conference on Artificial Intelligence", "ref_id": "b28", "title": "Learning physical descriptions from functional definitions, examples, and precedents", "year": "1983" } ]
[ { "formula_coordinates": [ 10, 216.54, 131.49, 172.8, 40.1 ], "formula_id": "formula_0", "formula_text": "E_{subcategory} = \begin{cases} S(a, b), & \text{if } a > T, \\ 0, & \text{otherwise.} \end{cases}" }, { "formula_coordinates": [ 14, 264.42, 569.97, 82.8, 16.77 ], "formula_id": "formula_1", "formula_text": "\frac{a_1}{d_1} = \frac{a_2}{d_2}" }, { "formula_coordinates": [ 15, 90, 205.29, 432.18, 50.97 ], "formula_id": "formula_2", "formula_text": "a_2 + d_1 - a_2 d_1 = D \quad (3) \qquad a_1 + d_2 - a_1 d_2 = D \quad (4)" } ]
Learning Membership Functions in a Function-Based Object Recognition System
Functionality-based recognition systems recognize objects at the category level by reasoning about how well the objects support the expected function. Such systems naturally associate a "measure of goodness" or "membership value" with a recognized object. This measure of goodness is the result of combining individual measures, or membership values, from potentially many primitive evaluations of different properties of the object's shape. A membership function is used to compute the membership value when evaluating a primitive of a particular physical property of an object. In previous versions of a recognition system known as Gruff, the membership function for each of the primitive evaluations was hand-crafted by the system designer. In this paper, we provide a learning component for the Gruff system, called Omlet, that automatically learns membership functions given a set of example objects labeled with their desired category measure. The learning algorithm is generally applicable to any problem in which low-level membership values are combined through an and-or tree structure to give a final overall membership value.
Kevin Woods; Diane Cook; Kevin Bowyer; Louise Stark
[ { "figure_caption": "Figure 2 :2Figure 2: Fuzzy membership function returns an evaluation measure of a primitive physical property.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Category de nition tree for the basic level category chair.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "conventional chair ::= provides sittable surface Pand stability where provides sittable surface ::= p 1 Pand p 2 Pand p 3 Pand p 4 Pand p 5 Pand p 6 and stability := p 7", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "straightback chair ::= conventional chair Por provides back support where provides back support ::= p 8 Pand p 9 Pand p 10 Pand p 11 Pand p 12 Pand p 13 Pand p 14 Pand p 15", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "ranges that are used to compute the evaluation measures of the functional property \"provides sittable surface\". (conventional_chair ?a ?b ?c) ::= (provides_sittable_surface ?a ?b ?c) PAND (provides_stable_support ?a) (provides_sittable_surface ?a ?b ?c) ::= (dimensions AREA range_parameters ?b) PAND (WIDTH/DEPTH 1.0 ?b) PAND (dimensions CONTIGOUS SURFACE range_parameters ?b) PAND (dimensions HEIGHT range_parameters ?b) PAND (clearance ABOVE ?a ?b) PAND (clearance IN_FRONT ?a ?c) (provides_stable_support ?a) ::= (stability SELF ?a)", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Training goal input to Omlet for a conventional chair object.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Example of error propagation through a Pand tree. Actual values are found when an overall evaluation measure is computed for an object. Desired values are propagated down the tree, and error is computed as Desired Actual.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Omlet collects these desired points for each leg of each membership function by propagating the error for all training examples down the proof trees. The trapezoid/range parameters (z1,n1,n2,z2) are adjusted at the end of each training epoch. 
Training continues for a xed number of epochs or until some satisfactory level of performance, de ned by minimal classi cation error rate averaged over the training set, is achieved.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure7: Range parameter limits that may be set when initializing range parameters.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Simpli ed proof tree for an armchair object.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: The 52 object chair database.", "figure_data": "", "figure_id": "fig_10", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Some examples of the chair objects used for human evaluation tests.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Histograms of desired evaluation measures of the Gruff chair training sets.", "figure_data": "", "figure_id": "fig_12", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Average training sample error versus number of training epochs for A) Gruff chair objects, B) synthetic cups, and C) real chair objects. These plots are for a single leave-one-out test run.", "figure_data": "", "figure_id": "fig_13", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Omlet results for test samples from the Gruff chair database.", "figure_data": "", "figure_id": "fig_14", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "range values used by Gruff to determine the desired evaluation measures in the goals provided to Omlet. A typical example of the range parameters as learned by Omlet is", "figure_data": "", "figure_id": "fig_15", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "A and B, respectively, re ect the quality of the training data used for the leave-one-out tests. We can isolate the e ect of the quality of the training data with some additional experiments utilizing two separate data sets of Gruff conventional chair examples. The number of training epochs, the number of training samples, and the number of ranges to be learned will be identical for each data set. One data set of 38 \\bad\" examples contains all conventional chair examples with desired evaluation measures less than 0.6. A second data set of \\good\" examples was created by selecting 38 of the remaining conventional chair examples. The histograms of desired evaluation measures for the examples used in the \\good\" and \\bad\" data sets are shown in Figure 11 C and D, respectively. Leave-one-out testing (37 training examples) resulted in an average error of 0.0001 for the examples in the \\good\" data set, and 0.1869 for the examples in the \\bad\" data set. 
Thus, it would seem that the quality of the training data has a considerable e ect on the performance of the learning algorithm.", "figure_data": "", "figure_id": "fig_16", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Omlet results for test samples from the Gruff cup database.", "figure_data": "", "figure_id": "fig_17", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Histograms of desired evaluation measures of the synthetic cup training sets.", "figure_data": "", "figure_id": "fig_18", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 15 B shows the histogram of desired evaluation measures of the examples in this second synthetic cup data set. Since the number of training epochs, the number of training examples, and the quality of the training data are the same as for the rst test using the Gruff conventional chair examples, this experiment isolates the e ect of the number of ranges that must be learned. Performing a leave-one-out test (77 training examples), the average error per sample was found to be approximately 0.08. In Figure 13, the leave-one-out results on the 78 Gruff conventional chair examples", "figure_data": "", "figure_id": "fig_19", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Finally, we created a set of 200 synthetic cups with a similar distribution as the Gruff conventional chair examples. The histogram of desired evaluation measures of the examples in this third synthetic cup data set would look similar to the histograms in Figure 11 A, and Figure 15 B. Performing a leave-one-out test (199 training examples), the average error per sample was found to be approximately 0.023. Compared to the error rate of the original 200 synthetic cups (approximately 0.04), we again note that \\better\" training data improved system performance considerably. Compared to the error rate of the 78 synthetic cup data set (approximately 0.08), which is similar in quality, we see the increased number of training samples signi cantly improved system performance. The error rate for this third synthetic cup data set with 200 examples is still higher than the error rate for the Gruff data set of 78 conventional chair objects (less than 0.01), which has a similar quality distribution. Consider that for the Gruff data set we used 77 training examples to learn the 3 ranges of the conventional chair category, and for the synthetic cup data set, we used 199 training examples to learn the 17 ranges of the cup category.", "figure_data": "", "figure_id": "fig_20", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: Histograms of desired evaluation measures of the real-object training sets.", "figure_data": "", "figure_id": "fig_21", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Leave-one-out test results for real-object database with evaluation measures derived from human ratings of the objects.", "figure_data": "(Sub)Category Conventional Chair Straightback Chair ArmchairNumber of Training Samples Evaluation Measure per Sample Average Desired Average Error 36 0.8447 0.0715373 21 0.9927 0.0066456 11 0.9973 0.0022430", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" } ]
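The figure captions above outline Omlet's training loop: compute actual values bottom-up, propagate desired values down the tree (error = Desired - Actual), collect desired points per trapezoid leg, and adjust (z1, n1, n2, z2) once per epoch. The sketch below is one plausible reading of a single epoch for one Pand node; the equal split of error among children is an assumption (the captions do not specify Omlet's distribution rule), and all names are hypothetical.

```python
# Illustrative sketch (not Omlet's actual code): one training epoch for a
# Pand node whose children are trapezoidal primitives. Error is computed
# as desired - actual at the node and split equally among children here.
def pand(values):
    out = 1.0
    for v in values:
        out *= v
    return out

def train_epoch(examples, evaluate_children, adjust_ranges):
    """examples: list of (object, desired_overall) pairs."""
    desired_points = None
    for obj, desired in examples:
        children = evaluate_children(obj)       # actual child memberships
        if desired_points is None:
            desired_points = [[] for _ in children]
        error = desired - pand(children)        # propagated down the tree
        for i, child_value in enumerate(children):
            # each child collects a desired point shifted by its error share
            desired_points[i].append(child_value + error / len(children))
    adjust_ranges(desired_points)               # update (z1,n1,n2,z2) legs

# Toy usage with fixed child evaluations and a printing "adjuster":
train_epoch([("obj1", 0.9), ("obj2", 0.4)],
            evaluate_children=lambda obj: [0.8, 0.7],
            adjust_ranges=lambda pts: print(pts))
```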
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b34", "b15", "b63", "b64" ], "table_ref": [], "text": "The intelligent, autonomous agents of the future will be called upon to perform a wide and varying range of tasks, under a wide range of circumstances, over the course of their lifetimes. Performing these tasks requires knowledge. If the number of possible tasks and circumstances is large and variable over time (as it will be for a general agent), it becomes nearly impossible to preprogram all of the knowledge required. Thus, knowledge must be added during the agent's lifetime. Unfortunately, such knowledge cannot be added to current intelligent systems while they perform; they must be shut down and programmed for each new task.\nThis work examines an alternative: intelligent agents that can be taught to perform tasks through tutorial instruction, as a part of their ongoing performance. Tutorial instruction is a highly interactive dialogue that focuses on the speci c task(s) being performed. While working on tasks, a student may receive instruction as needed to complete tasks or to understand aspects of the domain or of previous instructions. This situated, interactive form of instruction produces very strong human learning (Bloom, 1984). Although it has received little attention in AI, it has the potential to be a powerful knowledge source for arti cial agents as well.\nMuch of tutorial instruction's power comes from its communicative exibility: The instructor can communicate whatever type of knowledge a student may need in whatever situation it is needed. The challenge in designing a tutorable agent is to support the breadth of interaction and learning abilities required by this exible communication.\nIn this paper, we present a theory of learning from tutorial instruction within an ongoing agent. In developing the theory, we have given special attention to supporting communicative exibility for the instructor (the human user). We began by identifying the properties of tutorial instruction from the instructor's perspective. From these properties, we have derived a set of requirements that an instructable agent must meet to support exible instructability. These requirements drove the development of the theory and its evaluation. Finally, we have implemented the theory in an instructable agent called Instructo-Soar (Hu man, 1994;Hu man & Laird, 1993, 1994), and evaluated its performance. 1Identifying requirements for exible instructability provides a target { a set of evaluation criteria { for instructable agents. The requirements encompass the ways an agent interacts with its instructor, comprehends instructions, and learns from them. The most general requirements are common to all interactive learning systems; e.g., the agent is expected to learn general knowledge from instructions, to learn quickly (with a minimal number of examples), to integrate what is learned with its previous knowledge, etc. Other requirements are speci c to tutorial instruction.\nOur theory of learning from tutorial instruction speci es how analytic and inductive learning techniques can be combined within an agent to meet the requirements, producing general learning from a wide range of instructional interactions. We present a learning framework called situated explanation that utilizes the situation an instruction applies to and the larger instructional context (the instruction's type and place in the current dialogue) to guide the learning process. 
Situated explanation combines a form of explanation-based learning (DeJong & Mooney, 1986; Mitchell, Keller, & Kedar-Cabelli, 1986) that is situated for each individual instruction, with a full suite of contextually guided responses to incomplete explanations. These responses include delaying explanation until more information is available, inducing knowledge to complete explanations, completing explanations through further instruction, or abandoning explanation in favor of weaker learning methods. Previous explanation-based learning systems have employed one, or in some cases a static sequence, of these options, but have not chosen dynamically among all the options based on the context of each example. Such dynamic selection is required for flexible instructability. The learning framework is cast within a computational model for general intelligent behavior called the problem space computational model.
Instructo-Soar is an implemented agent that embodies the theory. From interactive natural language instructions, Instructo-Soar learns to perform new tasks, extends known tasks to apply in new situations, and acquires a variety of other types of domain knowledge. It allows more flexible instruction than previous instructable systems (e.g., learning apprentice systems, Mitchell, Mahadevan, & Steinberg, 1990) by meeting three key requirements of tutorial instruction: (1) it can take known or unknown commands at any instruction point; (2) it can handle instructions that apply to either its current situation or to a hypothetical situation specified in language (as in, for instance, conditional instructions); and (3) it can learn, from instructions, each class of knowledge it uses to perform tasks.
In what follows, we first discuss the properties and requirements of tutorial instruction. Then, we present our approach and its implementation in Instructo-Soar, including a series of examples illustrating the instructional capabilities that are supported. We conclude with a discussion of limitations and areas for further research." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "[Figure 1: Example tutorial dialogue between instructor and agent in the robotic domain. Dialogue lines: \"To turn on the light, push the button.\" / \"Move above the green button.\" / \"Move the arm up.\" / \"Move down.\" / \"The operator is finished.\" / \"Does this mean pushing the button causes the light to come on?\" / \"Why not?\"]" }, { "figure_ref": [ "fig_1" ], "heading": "Properties of Tutorial Instruction", "publication_ref": [ "b28", "b14", "b29", "b67" ], "table_ref": [], "text": "Tutorial instruction is situated, interactive instruction given to an agent as it attempts to perform tasks. It is situated in that it applies to particular task situations that arise in the domain. It is interactive in that the agent may request instruction as needed. This type of instruction is common in task-oriented dialogues between experts and apprentices (Grosz, 1977). An example of tutorial instruction given to Instructo-Soar in a robotic domain is shown in Figure 1.
Tutorial instruction has a number of properties that make it flexible and easy for the instructor to produce: P1. Situation specificity. Instructions are given for particular tasks in particular situations. To teach a task, the instructor need only provide suggestions for the specific situation at hand, rather than producing a global procedure that includes general conditions for applicability of each step, that handles all possible contingencies, etc. The situation can also help to disambiguate an otherwise ambiguous instruction.
However, teaching a global method by casting it purely as a sequence of local decisions may be di cult. Other types of instruction, beyond the scope of this work, are required to teach global methods in a natural way. To acquire knowledge for tasks that involve a known global control strategy, it may be most e cient to use a method-based knowledge acquisition tool (e.g., Birmingham & Klinker, 1993;Birmingham & Siewiorek, 1989;Eshelman, Ehret, McDermott, & Tan, 1987;Marcus & McDermott, 1989;Musen, 1989) with that control strategy built in.\nlanguage. This is required for the agent to apply information communicated by instructions at the knowledge level (property P5, above) to its internal processing.\nSolving the mapping problem in general involves all of the complexities of natural language comprehension. As Just and Carpenter (1976) point out, instructions can be linguistically complex and di cult to interpret independent of the di culty of the task being instructed. Even in linguistically simple instructions, actions and objects are often incompletely speci ed, requiring the use of context and domain knowledge to produce a complete interpretation (Chapman, 1990;DiEugenio & Webber, 1992;Frederking, 1988;Martin & Firby, 1991).\nThe general requirement for the mapping problem on a tutorable agent is straightforward: M 1 . A tutorable agent must be able to comprehend and map all aspects of each instruction that fall within the scope of information it can possibly represent.\nThe agent cannot be expected to interpret aspects that fall outside its representation abilities (these abilities may be augmented through instruction, but this occurs by building up from existing abilities). A more detailed analysis could break this general requirement into a set of more speci c ones.\nThis work has not focused on the mapping problem. Rather, the agent we have implemented uses fairly standard natural language processing techniques to handle instructions that express a su cient range of actions and situations to demonstrate its other capabilities. We have concentrated our e orts on the interaction and transfer problems." }, { "figure_ref": [], "heading": "Supporting Interactive Dialogue: The Interaction Problem", "publication_ref": [], "table_ref": [], "text": "The interaction problem is the problem of supporting exible dialogue with an instructor. The properties of tutorial instruction indicate that this dialogue occurs during the agent's ongoing performance to address its lacks of knowledge (property P3); within the dialogue, the agent must handle instructions that apply to di erent kinds of situations (properties P1 and P2) and that structure tasks in di erent ways (property P4).\nAn instructable agent moves toward solving the interaction problem to the degree that it supports these properties. In this work, we concentrate on the instructor's utterances within the dialogue, since exibility for the instructor is the goal. We have not considered the potential complexity of the agent's utterances (e.g., to give the instructor various kinds of feedback) in much detail.\nThe properties of exible interaction can be speci ed in terms of individual instruction events, where an instruction event is the utterance of a single instruction at a particular point in the discourse. To support truly exible dialogue, an instructable agent must be able to handle any instruction event that is coherent at the current discourse point. 
Each instruction event is initiated by either the student or the teacher, and carries knowledge of some type to be applied to a particular task situation. Thus, a exible tutorable agent should support instruction events with: I 1 . Flexible initiation. Instruction events can be initiated by agent or instructor. I 2 . Flexibility of knowledge content. The knowledge carried by an instruction event can be any piece of any of the types of knowledge the agent uses that is applicable in some way within the ongoing task and discourse context. I 3 . Situation exibility. An instruction event can apply either to the current task situation or to some speci ed hypothetical situation.\nThe following sections discuss each of these requirements in more detail." }, { "figure_ref": [], "heading": "Flexible Initiation", "publication_ref": [ "b22", "b64", "b45", "b27", "b47" ], "table_ref": [], "text": "In human tutorial dialogues, initiation of instruction is mixed between student and teacher. One study indicates that teacher initiation is more prevalent early in instruction; student initiation increases as the student learns more, and then drops o again as the student masters the task (Emihovich & Miller, 1988).\nInstructor-initiated instruction is di cult to support because instruction events can interrupt the agent's ongoing processing. Upon interrupting the agent, an instruction event may alter the agent's knowledge in a way that could change or invalidate the reasoning in which the agent was previously engaged. Because of these di culties, instructable systems to date have not fully supported instructor-initiated instruction. 3 Likewise, Instructo-Soar does not handle instructor-initiated instruction.\nAgent-initiated instruction can be directed in (at least) two possible ways: by veri cation or by impasses. Some learning apprentice systems, such as LEAP (Mitchell et al., 1990) and DISCIPLE (Kodrato & Tecuci, 1987b) ask the instructor to verify or alter each reasoning step. The advantage of this approach is that each step is examined by the instructor; the disadvantage, of course, is that each step must be examined. An alternative approach is to drive instruction requests by impasses in the agent's task performance (Golding, Rosenbloom, & Laird, 1987;Laird, Hucka, Yager, & Tuck, 1990). This is the approach used by Instructo-Soar. An impasse indicates that the agent's knowledge is lacking and it needs instruction. The advantage of this approach is that as the agent learns, it becomes more autonomous; its need for instruction decreases over time. The disadvantage is that not all lacks of knowledge can be recognized by reaching impasses; e.g., no impasse will occur when performance is correct but ine cient." }, { "figure_ref": [], "heading": "Flexibility of Knowledge Content", "publication_ref": [ "b101" ], "table_ref": [], "text": "A exible tutorable agent must handle instruction events involving any knowledge that is applicable in some way within the ongoing task and discourse context. This requirement is di cult to meet in general, because of the wide range of knowledge that may be relevant to any particular situation. It requires a robust ability to relate each utterance to the ongoing discourse and task situation. No instructable systems have met this requirement fully.\nHowever, we can de ne a more constrained form of this requirement, limited to instructions that command actions (i.e., imperatives). Imperative commands are especially prevalent in tutorial instruction of procedures. 
Supporting exible knowledge content for commands means allowing the instructor to give any relevant command at any point in the dialogue for teaching a task. We call this ability command exibility.\nFor any command that is given, there are three possibilities: (1) the commanded action is known, and the agent performs it; (2) the commanded action is known, but the agent does not know how to perform it in the current situation (extra, unknown steps are needed); or (3) the commanded action is unknown. Thus, command exibility allows the instructor teaching a procedure to skip steps (2) or to command a subtask that is unknown (3) at any point. In such cases, the agent asks for further instruction. The interaction pattern that results, in which procedures are commanded and then taught as needed, has been observed in human instruction. Wertsch (1979) notes that \\...adults spontaneously follow a communication strategy in which they use directives that children do not understand and then guide the children through the behaviors necessary to carry out these directives.\"\nCommand exibility gives the instructor great exibility in teaching a set of tasks because the instructions can hierarchically structure the tasks in whatever way the instructor wishes. A mathematical analysis (Hu man, 1994) revealed that the number of possible sequences of instructions that can be used to teach a given procedure grows exponentially with the number of actions in the procedure. For a procedure with 6 primitive actions, there are over 100 possible instruction sequences; for 7, there are over 400." }, { "figure_ref": [], "heading": "Situation Flexibility", "publication_ref": [ "b35", "b35", "b17", "b25", "b39" ], "table_ref": [], "text": "A exible tutorable agent must handle instructions that apply to either the current task situation or some hypothetical situation that the instructor speci es. Instructors make frequent use of both of these options. For instance, analysis of a protocol of a student being taught to use a ight simulator revealed that 119 out of 508 instructions (23%) involved hypothetical situations, with the remainder applying to the current situation at the time they were given.\nInstructions that apply to the current situation, such as imperative commands (e.g., \\Move to the yellow table\"), are called implicitly situated (Hu man & Laird, 1992). Since the instruction itself says nothing about the situation to which it should be applied, the current situation (the task being performed and the current state) is implied.\nIn contrast, instructions that specify elements of the situation to which they are meant to apply are explicitly situated (Hu man & Laird, 1992). The agent is not meant to carry out these instructions immediately (as an implicitly situated instruction), but rather when a situation arises that is like the one speci ed. Examples include conditionals and instructions with purpose clauses (DiEugenio, 1993), such as the following:4 When using chocolate chips, add them to coconut mixture just before pressing into pie pan.\nTo restart this, you can hit R or shift-R.\nWhen you get to the interval that you want, you just center up the joystick again.\nAs a number of researchers have pointed out (Ford & Thompson, 1986;Haiman, 1978;Johnson-Laird, 1986), conditional clauses introduce a shared reference between speaker and hearer that forms an explicit background for interpreting or evaluating the consequent. 
5Here, the clauses in italics indicate a hypothetical situation to which the command in the remainder of the instruction is meant to apply. In most cases, the situation is only partially speci ed, with the remainder drawn from the current situation, as in \\When using chocolate chips and cooking this recipe, and at the current point in the process]...\"\nIn general, a hypothetical situation may be created and referred to across multiple utterances. The agent presented here handles both implicitly and single explicitly situated instructions, but does not deal with hypothetical situations that exist through multiple instructions." }, { "figure_ref": [], "heading": "Producing General Learning: The Transfer Problem", "publication_ref": [], "table_ref": [], "text": "The transfer problem is the problem of learning generally applicable knowledge from instructions, that will transfer to appropriate situations in the future. This general learning is based on instructions that apply to speci c situations (property P1, above). Many types of knowledge may be learned, since instructions can provide any type of knowledge that the agent is lacking (property P3).\nSolving this problem involves more than simply memorizing instructions for future use; rather, conditions for applying each instruction must be determined from the situation. Consider, for example, the following exchange between instructor and agent:\nBlock open our o ce door.\nHow do I do that? Pick up a red block. Now, drop it here, next to the door. What are the proper conditions for performing the \\pick up\" action? Simple memorization yields poor learning; e.g., whenever blocking open an office door, pick up a red block. However, the block's color, and even the fact that it is a block, are irrelevant in this case. Rather, the fact that the block weighs (say) more than ve pounds, giving it enough friction with the oor to hold open the door, is crucial. Thus, the proper learning might be:" }, { "figure_ref": [], "heading": "If trying to block open a door, and", "publication_ref": [], "table_ref": [], "text": "there is an object obj that is can be picked up, and obj weighs more than 5 pounds then propose picking up obj.\nHere, the original instruction is both generalized (color red and isa block drop out) and specialized (weight > 5 is added).\nThe transfer problem places a number of demands on a tutorable agent:\nT 1 . General learning from speci c cases. The agent is instructed in a particular situation, but is expected to learn general knowledge that will apply in su ciently similar situations.\nT 2 . Fast learning. An instructable agent is expected to learn new procedures quickly.\nTypically, a task should only have to be taught once.\nT 3 . Maximal use of prior knowledge. An agent must apply its prior knowledge in learning from instruction. This is a maxim for machine learning systems in general (if you have knowledge, use it), and is particularly relevant for learning from instruction because learning is expected to happen quickly.\nT 4 . Incremental learning. The agent must be able to continually increase in knowledge through instruction. New knowledge must be smoothly integrated with the agent's existing knowledge as a part of its ongoing performance.\nT 5 . Knowledge-type exibility. Since any type of knowledge (e.g., control knowledge, causal knowledge, etc.) might be communicated by instructions, a exible tutorable agent must be able to learn each type of knowledge it uses. 
We make this a testable criterion below by laying out the types of knowledge in an agent based on a particular computational model.\nT 6 . Dealing with incorrect knowledge. The agent's knowledge is clearly incomplete (otherwise, it would not need instruction); it may also be incorrect. A general tutorable agent must be able to perform and learn e ectively despite incorrect knowledge.\nT 7 . Learning from instruction coexisting with learning from other sources.\nIn addition to instruction, a complete agent should be able to learn from other sources of knowledge that are available. These might include learning from observation/demonstrations, experimentation in the environment, analogy, etc. The theory of learning from tutorial instruction presented here focuses on extending incomplete knowledge through instruction { requirements T 1 through T 5 of this list. Handling incorrect knowledge (T 6 ) and learning from other sources (T 7 ) are planned extensions in progress.\nTable 1 summarizes the requirements that must be met by an instructable agent to support exible tutorial instruction, and indicates the requirements targeted by Instructo-Soar. We have made two simpli cations in using the requirements to evaluate Instructo-Soar. First, we treat each requirement as binary; that is, as if either completely met or unmet. In reality, some requirements could be broken into ner-grained pieces to be evaluated separately. Second, we treat each requirement independently. The table indicates Instructo-Soar's performance on each requirement alone, but does not account for potential interactions between them. These interactions can be complex; for instance, in pursuing fast learning (T 2 ), an agent may sacri ce good general learning (T 1 ) because it bases its generalizations on too few examples. We have not addressed such tradeo s in our evaluation of Instructo-Soar." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b9", "b59", "b84", "b31", "b53", "b52", "b91", "b92", "b33", "b66", "b105", "b11", "b100", "b58", "b1", "b10", "b13", "b16", "b54", "b55", "b62", "b13", "b19", "b64", "b77", "b89", "b96", "b103", "b27", "b29", "b45", "b75", "b76", "b64", "b45", "b89", "b29", "b96", "b13", "b75", "b103" ], "table_ref": [], "text": "Although there has not been extensive research on agents that learn from tutorial instruction per se, learning from instruction-like input has been a long-time goal in AI ( Michalski, & Mitchell, 1983;McCarthy, 1968;Rychener, 1983). Early non-interactive systems learned declarative, ontological knowledge from language (Haas & Hendrix, 1983;Lindsay, 1963), simple tasks from unsituated descriptions (Lewis, Newell, & Polk, 1989;Simon, 1977;Simon & Hayes, 1976), and task heuristics from non-operational advice (Hayes-Roth, Klahr, & Mostow, 1981;Mostow, 1983).\nOther work has concentrated on behaving based on interactive natural language instructions. SHRDLU (Winograd, 1972) performed natural language commands and did a small amount of rote learning { e.g., learning new goal speci cations by directly transforming sentences into state descriptions. More recent systems that act in response to language (concentrating on the mapping problem) but do only minimal learning include SONJA (Chapman, 1990), AnimNL (DiEugenio & Webber, 1992), and Homer (Vere & Bickmore, 1990).\nSome recent work has focused more on learning from situated natural language instructions. 
Martin and Firby (1991) discuss an approach to interpreting and learning from elliptical instructions (e.g., \\Use the shovel\") by matching the instruction to expectations generated from a task execution failure. Alterman et al.'s FLOBN (Alterman, Zito-Wolf, & Carpenter, 1991;Carpenter & Alterman, 1994) searches for instructions in its environment in order to operate devices. FLOBN learns by relating a device's instructions to known procedures for operating similar devices. These systems do not target learning from exible interactive instructions or types of instructions other than imperatives, however.\nThe bulk of work on learning from instruction-like input has been under the rubric of learning apprentice systems (LASs), and closely related programming-by-demonstration (PbD) systems (Cypher, 1993) { as employed, for instance, in recent work on learning within software agents (Dent et al., 1992;Maes, 1994;Maes & Kozierok, 1993;Mitchell, Caruana, Freitag, McDermott, & Zabowski, 1994). These systems learn by interacting with an expert; either observing the expert solving problems (Cypher, 1993;Donoho & Wilkins, 1994;Mitchell et al., 1990;Redmond, 1992;Segre, 1987;VanLehn, 1987;Wilkins, 1990), or attempting to solve problems and allowing the expert to guide and critique decisions that are made (Golding et al., 1987;Gruber, 1989;Kodrato & Tecuci, 1987b;Laird et al., 1990;Porter, Bareiss, & Holte, 1990;Porter & Kibler, 1986). Each LAS has learned particular types of knowledge: e.g., operator implementations (Mitchell et al., 1990), goal decomposition rules (Kodrato & Tecuci, 1987b), operational versions of functional goals (Segre, 1987), control knowledge and control features (Gruber, 1989), procedure schemas (a combination of goal decomposition and control knowledge) (VanLehn, 1987), useful macrooperations (Cypher, 1993), heuristic classi cation knowledge (Porter et al., 1990;Wilkins, 1990), etc.\nTutorial instruction is a more exible type of instruction than that supported by past LASs, for three reasons. First, the instructor may command unknown tasks or tasks with unachieved preconditions to the agent at any instruction point (command exibility). Past LASs limit input to particular commands/observations at particular times (e.g., only commanding or observing directly executable actions) and typically do not allow unknown commands at all. Second, tutorial instruction allows the use of explicitly situated instructions (situation exibility), to teach about contingencies that may not be present in the current situation; past LASs do not. Third, tutorial instruction requires that all types of task knowledge can be learned (knowledge-type exibility). Past LASs learn only a subset of the types of knowledge they use to perform tasks." }, { "figure_ref": [], "heading": "A Theory of Learning from Tutorial Instruction", "publication_ref": [], "table_ref": [], "text": "Our theory of learning from tutorial instruction consists of a learning framework, situated explanation, placed within a computational model for general agenthood, the problem space computational model. We rst describe the computational model and then the learning framework." 
}, { "figure_ref": [ "fig_2" ], "heading": "The Problem Space Computational Model", "publication_ref": [ "b67", "b68", "b7", "b87", "b69", "b107", "b69", "b48", "b24", "b63", "b79", "b60", "b78" ], "table_ref": [ "tab_2" ], "text": "A computational model (CM) is a \\a set of operations on entities that can be interpreted in computational terms\" (Newell et al., 1990, p. 6). A computational model for a general instructable agent must meet two requirements:\n1. Support of general computation/agenthood. 2. Close correspondence to the knowledge level. Because tutorial instructions provide knowledge at the knowledge level (Newell, 1981), the further the CM com-ponents are from the knowledge level, the more di cult mapping and learning from instructions will be. In addition, a close correspondence to the knowledge level will allow us to use the CM to identify the types of knowledge the agent uses. Many potential CMs are ruled out by these requirements. Standard programming languages (e.g., Lisp) and theoretical CMs like Turing machines and push-down automata support general computation, but their operations and constructs are at the symbol level, without direct correspondence to the knowledge level. Similarly, connectionist and neural network models of computation (e.g., Rumelhart & McClelland, 1986) employ (by design) computational operations and entities at a level far below the knowledge level. Thus, these models are not appropriate as the top-level CM for an instructable agent. However, because higher levels of description of a computational system are implemented by lower levels (Newell, 1990), these CMs might be used as the implementation substrate for the higher level CM of an instructable agent.\nAnother alternative is logic, which has entities that are well matched to the knowledge level (e.g., propositions, well-formed formulas). Logics specify the set of legal operations (e.g., modus ponens), thus de ning the space of what can possibly be computed. However, logic provides no notion of what should be computed; that is, logics alone do not specify the control of the logical operations' application. It is desirable that the CM of an instructable agent include control knowledge, because control knowledge is a crucial type of knowledge for general agenthood, that can be communicated by instructions.\nSince one of our goals is to identify an agent's knowledge types, it might appear that selecting a theory of knowledge representation would be more appropriate than selecting a computational model. Such theories de ne the functions and structures used to represent knowledge (e.g., KL-ONE, Brachman, 1980); some also de ne the possible content of those structures (e.g., conceptual dependency theory, Schank, 1975;CYC, Guha & Lenat, 1990). However, computational structure must be added to these theories to produce working agents. Thus, rather than an alternative to specifying a computational model, a theory of knowledge representation is an addition. A content theory of knowledge representation would provide a more ne-grained breakdown of the knowledge to be learned by an instructable agent within each category of knowledge speci ed by its CM. We have not employed a particular content theory in this work thus far, however.\nThe computational model adopted here is called the problem space computational model (PSCM) (Newell et al., 1990;Yost, 1993). 
The PSCM is a general formulation of computation in a knowledge-level agent, and many applications have been built within it (Rosenbloom, Laird, & Newell, 1993a). It speci es an agent in terms of computation within problem spaces, without reference to the symbol level structures used for implementation. Because its components approximate the knowledge level (Newell et al., 1990), the PSCM is an apt choice for identifying an agent's knowledge types. Soar (Laird, Newell, & Rosenbloom, 1987) is a symbol level implementation of the PSCM.\nA schematic of a PSCM agent is shown in Figure 2. Perception and motor modules connect the agent to the external environment. A PSCM agent reaches a goal by moving through a sequence of states in a problem space. It progresses toward its goals by sequentially applying operators to the current state. Operators transform the state, and may produce motor commands. In PSCM, operators can be more powerful than simple STRIPS operators (Fikes, Hart, & Nilsson, 1972), because they can perform arbitrary computation (e.g., they can include conditional e ects, multiple substeps, reactivity to di erent situations, etc.).\nThe PSCM agent reaches an impasse when its immediately available knowledge is not su cient either to select or fully apply an operator. When this occurs, another problem space context { a subgoal { is created, with the goal of resolving the impasse. This second context may impasse as well, causing a third context to arise, and so on.\nThe only computational entities in the PSCM mediated by the agent's knowledge are states and operators. There are a small set of basic PSCM-level operations on these entities that the agent performs:\n1. State inference. Simple monotonic inferences that are always to be applied can be made without using a PSCM operator. Such inferences augment the agent's representation of the state it is in by inferring state properties based on other state properties (including those delivered by perception). For instance, an agent might know that a block is held if its gripper is closed and positioned directly above the block.\n2. Operator selection. The agent must select an operator to apply, given the current state. This process involves two types of knowledge: 2.1. Proposal knowledge: Indicates operators deemed appropriate for the current situation. This knowledge is similar to \\precondition\" knowledge in simple STRIPS operators. 2.2. Control knowledge: Orders proposed operators; e.g., \\A is better than B\"; \\C is best\"; \\D is rejected.\"\n3. Operator application. Once selected, an operator may be applied directly, or indirectly via a subgoal: 3.1. Operator e ects. The operator is applied directly in the current problem space.\nThe agent has knowledge of the e ects of the operator on the state and motor commands produced (if any). 3.2. Sub-operator selection. The operator is applied by reaching an impasse and selecting operators in a subgoal. Here, knowledge to apply the operator is selection knowledge (2, above) for the sub-operators.\n4. Operator termination. An operator must be terminated when its application has been completed. The termination conditions, or goal concept (Mitchell et al., 1986), of an operator indicate the state conditions that the operator is meant to achieve. For example, the termination conditions of pick-up(blk) might be that blk is held and the arm is raised. 6 Each of these functions is performed by the agent using knowledge; thus, they de ne the set of knowledge types present within a PSCM agent. 
The knowledge types ( ve types total) are summarized in Table 2. Because Soar is an implementation of the PSCM, all knowledge within Soar agents is of these types.\nIn Soar's implementation of the PSCM, learning occurs whenever results are returned from a subgoal to resolve impasses. The learning process, called chunking, creates rules (called chunks) that summarize the processing in the subgoal leading to the creation of the result. Depending on the type of result, chunks may correspond to any of the ve types of PSCM knowledge. When similar situations arise in the future, chunks allow the impasse that caused the original subgoal to be avoided by producing their results directly. Chunking is a form of explanation-based learning (Rosenbloom & Laird, 1986). Although it is a summarization mechanism, through taking both inductive and deductive steps in subgoals, chunking can produce both inductive and deductive learning (Miller, 1993;Rosenbloom & Aasman, 1990). Chunking occurs continuously, making learning a part of the ongoing activity of a Soar/PSCM agent.\nThe PSCM clari es the task of an instructable agent: it must be able to learn each of the ve types of PSCM knowledge from instruction. The next section discusses the learning process itself." }, { "figure_ref": [], "heading": "Learning from Instructions through Situated Explanation", "publication_ref": [ "b15", "b24", "b61", "b80", "b88", "b2", "b12", "b50", "b83" ], "table_ref": [], "text": "Learning from instruction involves both analytic learning (learning based on prior knowledge) and inductive learning (going beyond prior knowledge). Analytic learning is needed because the agent must learn from instructions that combine known elements { e.g., learning to pick up objects by combining known steps to pick up a particular object. The agent's prior knowledge of these elements can be used to produce better and faster learning. Inductive learning is needed because the agent must learn new task goals and domain knowledge 6. PSCM operators have explicit termination knowledge because they can have a string of conditional e ects that take place over time, they can respond to (or wait for) the external environment, etc. STRIPS operators, in contrast, do not need explicit termination knowledge, because they are de ned by a single list of e ects, and are \\terminated\" by de nition after applying those e ects. that are beyond the scope of its prior knowledge. The goal of this research is not to produce more powerful analytic or inductive techniques, but rather to specify how these techniques come together to produce a variety of learning in the variety of instructional situations faced by an instructable agent. The resulting approach is called situated explanation.\nInstruction requirements T 1 through T 3 specify that general learning (T 1 ) must occur from single, speci c examples (T 2 ), by making maximal use of prior knowledge (T 3 ). These requirements bode strongly for a learning approach based on explanation. The use of explanation to produce general learning has been a common theme in machine learning (e.g., DeJong & Mooney, 1986;Fikes et al., 1972;Minton, Carbonell, Knoblock, Kuokka, Etzioni, & Gil, 1989;Rosenbloom, Laird, & Newell, 1988;Schank & Leake, 1989; many others) and cognitive science (Anderson, 1983;Chi, Bassok, Lewis, Reimann, & Glaser, 1989;Lewis, 1988;Rosenbloom & Newell, 1986). 
Forming explanations enables general learning from speci c cases (requirement T 1 ) because the explanation indicates which features of a case are important and which can be generalized. Learning by explaining typically requires only a single example (requirement T 2 ) because the prior knowledge employed to construct the explanation (requirement T 3 ) provides a strong bias that allows this fast learning.\nThus, we use an explanation-based method as the core of our learning from instruction approach, and fall back on inductive methods when explanation fails. In standard explanation-based learning, explaining a reasoning step involves forming a \\proof\" (using prior knowledge) that the step leads from the current state of reasoning toward the current goal. The proof is a path of reasoning from the current state to the goal, through the step being explained, as diagrammed in Figure 3. General learning is produced by forming a rule that includes only the causally required features of the state, goal, and step appearing in the proof; features that do not appear are generalized away.\nFigure 3 indicates the three key elements of an explanation: the step being explained, the endpoints of the explanation (a state S and goal G to be reached), and the other steps required to complete the explanation. What form do these elements of an explanation take for situated explanation of an instruction?\nStep to be explained. In situated explanation, the step to be explained is an individual instruction given to the agent." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "S G ... (M ) k", "publication_ref": [ "b27", "b65", "b88", "b20", "b79", "b99", "b3", "b32", "b96", "b99", "b102", "b96", "b88", "b71", "b76" ], "table_ref": [], "text": "other steps, from agent's knowledge reasoning step to be explained (indicated by instruction I)\nFigure 3: Caricature of an explanation of how a reasoning step applies to a situation starting in a state S, with a goal G to be achieved.\nAlternatively, an entire instruction episode { e.g., the full sequence of instructions for a new procedure { could be explained at once. Applying explanation to single steps results in knowledge applicable at each step (as in Golding et al., 1987;Laird et al., 1990); explaining full sequences of reasoning steps results in learning schemas that encode the whole reasoning episode (as in Mooney, 1990;Schank & Leake, 1989;Van-Lehn, 1987). Learning factored pieces of knowledge rather than monolithic schemas allows more reactive behavior, since knowledge is accessed locally based on the current situation (Drummond, 1989;Laird & Rosenbloom, 1990). This meshes with the PSCM's local control structure. Explaining individual instructions is also supported by psychological results on the self-explanation e ect, which have shown that subjects who self-explain instructional examples do so by re-deriving individual lines of the example. \\Students virtually never re ect on the overall solution and try to recognize a plan that spans all the lines\" (VanLehn and Jones, 1991, p. 111). Endpoints of explanation. The endpoints of the explanation { a state S and a goal G to be achieved { correspond to the situation that the instruction applies to. Situation exibility (requirement I 3 ) stipulates that this situation may be either the current state of the world and goal being pursued or some hypothetical situation that is speci ed explicitly in the instruction. 
An instruction that does not specify any situational features is implicitly situated, and applies to the agent's current situation. Alternatively, an instruction can specify features of S or G, making for two kinds of explicitly situated instructions. For example, \\If the light is on, push the button\" indicates a hypothetical state with a light on; \\To turn on the machine, ip the switch\" indicates a hypothetical goal of turning on the machine. A situation S; G] is produced for each instruction, based on the current task situation and any situation features the instruction speci es. Other required steps. To complete an explanation of an instruction, an agent must bring its prior knowledge to bear to complete the path through the instruction to achievement of the situation goal. A PSCM agent's knowledge applies to its current situation to select and apply operators and to make inferences. When explaining an instruction I, this knowledge is applied internally to the situation S; G] associated with I. That is, explanation takes the form of forward internal projection within that situation. As depicted in Figure 3, the agent \\imagines\" itself in state S, and then runs forward, applying the instructed step and any knowledge that it has about subsequent states/operators. This knowledge includes both what is normally used in the external world and knowledge of operators' expected e ects that is used to produce those e ects in the projected world. If G is reached within the projection, then the projected path from S, through the step instructed by I, to G comprises an explanation of I. By indicating the features of I, S, and G causally required for success, the explanation allows the agent to learn general knowledge from I (as in standard EBL, realized in our agent by Soar's chunking mechanism, Rosenbloom & Laird, 1986). However, the agent's prior knowledge may be insu cient, causing an incomplete explanation, as described further below. Combining these elements produces an approach to learning from tutorial instruction that is conceptually quite simple. For each instruction I that is received, the agent rst determines what situation I is meant to apply to, and then attempts to explain why the step indicated by I leads to goal achievement in that situation (or prohibits it, for negative instructions). If an explanation can be made, it produces general learning of some knowledge I K by indicating the key features of the situation and instruction that cause success. If an explanation cannot be completed, it indicates that the agent is missing one or more pieces of prior knowledge M K (of any PSCM type) needed to explain the instruction.\nMissing knowledge (in Figure 3, missing arrows) causes an incomplete explanation by precluding achievement of G in the projection. For instance, the agent may not know a key e ect of an operator, or a crucial state inference, needed to reach G. More radically, the action commanded by I may be completely unknown and thus inexplicable.\nAs shown in Figure 4, there are four general options a learning agent might follow when it cannot complete an explanation. (O1) It could delay the explanation until later, in the hope that the missing knowledge (M K ) will be learned in the meantime. Alternatively, (O2-O3) it could try to complete the explanation now by somehow learning the missing knowledge. 
The missing knowledge could be learned (O2) inductively (e.g., by inducing over the "gap" in the explanation, as described by VanLehn, Jones & Chi, 1992, and many others), or, (O3) in an instructable agent's case, through further instruction. Finally, (O4) it could abandon the explanation altogether and try to learn the desired knowledge in another way instead.

Given only an incomplete explanation, it would be difficult to choose which option to follow. Identifying the missing knowledge M_K in the general case is a difficult credit assignment problem (with no algorithmic solution), and there is nothing in the incomplete explanation itself that predicts whether M_K will be learned later if the explanation is delayed. Thus, past machine learning systems have responded to incomplete explanations either in only a single way, or in multiple ways that are tried in a fixed sequence. Many authors (Bergadano & Giordana, 1988; Hall, 1988; VanLehn, 1987; VanLehn, Jones, & Chi, 1992; Widmer, 1989), for instance, describe systems that make inductions to complete incomplete explanations (option O2). Because of the difficulty of determining missing knowledge, these systems either base their induction on multiple examples, and/or bias the induction with an underlying theory or a teacher's help. SIERRA (VanLehn, 1987), for example, induces over multiple partially explained examples, and constrains the induction by requiring that each of the examples is unexplainable because of the same piece of missing knowledge (the same disjunct, in SIERRA's terminology). SWALE (Schank & Leake, 1989) uses an underlying theory of "anomalies" in explanations to complete incomplete explanations of events. OCCAM (Pazzani, 1991b) uses options O2 and O4 in a static order: it first attempts to fill in the gaps in an incomplete explanation inductively, biased by a naive theory; if that fails, it abandons explanation and falls back on correlational learning methods. PET (Porter & Kibler, 1986) is an example of a system that delays explanation of a reasoning step until it learns further knowledge (option O1).

However, as indicated in Figure 4, an instructable agent has additional information available to it besides the incomplete explanation itself: namely, the instructional context (that is, the type of instruction and its place within the dialogue), which often indicates which option is most appropriate for a given incomplete explanation. Thus, situated explanation includes all four of the options and dynamically selects between them based on the instructional context. For a situated explanation of an instruction I in a situation [S, G], where missing knowledge M_K precludes completing the explanation to learn knowledge I_K, options O1-O4 take the following form:

O1. Delay the explanation until later. The instructional context can indicate a likelihood that the missing knowledge M_K will be learned later. For instance, an instruction I given in teaching a new procedure cannot be immediately explained because the remaining steps of the procedure are unknown, but they will be known later (assuming the instructor completes teaching the procedure). In such cases, the agent discards its current, incomplete explanation and simply memorizes I's use in [S, G] (rote learning).
Later, after M_K is learned, I is recalled and explained in [S, G], causing I_K to be learned.

    Given instruction I from which knowledge I_K can be learned:
      1. Determine the situation [S, G] (current or hypothetical) to which I applies.
      2. Explain I in [S, G] by forward projecting from S, through the step
         indicated by I, toward G.
         - Success (G met): learn I_K from the complete explanation (EBL).
         - Failure: missing knowledge M_K. Options:
             O1. Delay explanation until later.
             O2. Induce M_K, completing the explanation.
             O3. Take instruction to learn M_K, completing the explanation.
             O4. Abandon explanation; instead, learn I_K inductively.

Table 3: Situated explanation.

O2. Induce M_K, completing the explanation. In some cases, the instructional context localizes the missing knowledge M_K to be part of a particular operator. For instance, a purpose clause instruction ("To do X, do Y") suggests that the single operator Y should cause X to occur. Because this localization tightly constrains the "gap" in the incomplete explanation, the agent can use heuristics to induce a strong guess at the M_K needed to span the gap. Inducing M_K allows the explanation of I to be completed and I_K to be learned.

O3. Take instruction to learn M_K, completing the explanation. The default response of the agent (when the other options are not deemed appropriate) is to ask the instructor to explain I further. The further instruction can teach the agent M_K. Again, learning M_K allows the explanation of I to be completed and I_K to be learned.

O4. Abandon the explanation and learn I_K in another way. The instructional context can indicate that the missing knowledge M_K would be very difficult to learn. This occurs either when the instructor refuses to give further information when asked, or when the agent has projected multiple operators that may be missing pieces of knowledge (multiple potential M_Ks). Since it is unknown whether M_K will ever be acquired, the agent abandons its explanation of I altogether. Instead, it attempts to learn I_K directly (using inductive heuristics), without an explanation to base the learning on. These options will be made clearer through examples presented in the following sections.

Situated explanation is summarized in Table 3. Unlike some knowledge acquisition approaches, it does not include an explicit check for consistency when newly learned knowledge is added to the agent's knowledge base. As Kodratoff and Tecuci (1987a) point out, techniques like situated explanation are biased toward consistency because they only acquire new knowledge when current knowledge is insufficient, and they use current knowledge when deriving new knowledge. However, in some domains, explicit consistency checks (such as those used by Wilkins' (1990) ODYSSEUS) may be required.

Situated explanation meets the requirement that learning be incremental (T4) because it occurs during the ongoing processing of the agent and adds new pieces of knowledge to the agent's memory in a modular way. The local control structure of the PSCM allows new knowledge to be added independent of current knowledge. If there is a conflict between pieces of knowledge (for example, proposing two different operators in the same situation), an impasse will arise that can be reasoned about or resolved with further instruction.
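The procedure in Table 3 can also be rendered as a small runnable sketch. The following Python fragment is only an illustration of the control structure: the fact-set states, the operator triples, and the bounded lookahead are invented for this sketch, whereas the real agent projects PSCM operators and generalizes complete explanations via Soar's chunking.

    # A minimal, runnable sketch of the explanation step of Table 3.
    # States are fact sets; operators are (preconditions, adds, deletes).
    def project(state, op):
        # Apply an operator to a state, or return None if its
        # preconditions do not hold.
        pre, adds, dels = op
        if not pre <= state:
            return None
        return (state - dels) | adds

    def explain(state, goal, instructed_op, known_ops, depth=3):
        # Forward internal projection: "imagine" state S, apply the
        # instructed step, then run known knowledge forward toward G.
        s = project(state, instructed_op)
        if s is None:
            return False
        frontier = [s]
        for _ in range(depth):                   # bounded lookahead
            if any(goal <= f for f in frontier):
                return True                      # complete explanation
            nxt = []
            for f in frontier:
                for op in known_ops:
                    t = project(f, op)
                    if t is not None:
                        nxt.append(t)
            frontier = nxt
        return False                             # incomplete: missing M_K

    # Toy example: "To turn on the light, push the red button."
    push_red = (frozenset({"at-table"}), frozenset({"red-pushed"}), frozenset())
    effect = (frozenset({"red-pushed"}), frozenset({"light-on"}),
              frozenset({"light-off"}))
    S = frozenset({"at-table", "light-off"})
    G = frozenset({"light-on"})
    print(explain(S, G, push_red, [effect]))   # True: explanation completes
    print(explain(S, G, push_red, []))         # False: choose among O1-O4

When explain returns False, it is the instructional context, not the failed projection itself, that determines which of options O1-O4 the agent follows.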
Instructo-Soar

Instructo-Soar is an instructable agent built within Soar, and thus the PSCM, that uses situated explanation to learn from tutorial instruction.7 Instructo-Soar engages in an interactive dialogue with its instructor, receiving natural language instructions and learning to perform tasks and extend its knowledge of the domain. This section and the next describe how Instructo-Soar meets the targeted requirements of tutorial instruction, which are shown in expanded form in Table 4. This section describes the system's basic performance when learning new procedures, and extending procedures to new situations, from imperative commands (implicitly situated instructions); the next describes learning other types of knowledge and handling explicitly situated instructions.

Figure 5: The robotic domain to which Instructo-Soar has been applied.

The Domain and the Agent's Initial Knowledge

The primary domain to which Instructo-Soar has been applied is the simulated robotic world shown in Figure 5.8 The agent is a simulated Hero robot, in a room with tables, buttons, blocks of different sizes and materials, an electromagnet, and a light. The magnet is toggled by closing the gripper around it. A red button toggles the light on or off; a green button toggles it dim or bright, when it is on.

Instructo-Soar consists of a set of problem spaces within Soar that contain three main categories of knowledge: natural language processing knowledge, originally developed for NL-Soar (Lewis, 1993); knowledge about obtaining and using instruction; and knowledge of the task domain itself. This task knowledge is extended through learning from instruction. Instructo-Soar does not expand its natural language capabilities per se as it takes instruction, although it does learn how sentences map onto new operators that it learns. It has complete, noiseless perception of its world, and can recognize a set of basic object properties (e.g., type, color, size) and relationships (e.g., robot docked-at a table, gripper holding an object, objects above, directly-above, left-of, or right-of one another). The set of properties and relations can be extended through instruction, as described below.

The agent begins with knowledge of a set of primitive operators that it can map natural language sentences onto and can execute. These include moving to tables, opening and closing the hand, and moving the arm up, down, and above, left of, or right of things. The agent can also internally project these operators. However, some of their effects under various conditions are unknown. For instance, the agent does not know which operators affect the light or magnet, or that the magnet will attract metal objects. Also, the agent begins with no knowledge of complex operators (that involve combinations of primitive operators), such as picking up or arranging objects, pushing buttons, etc.

Learning New Procedures through Delayed Explanation

Instructo-Soar learns new procedures (PSCM operators) from instructions like those shown in Figure 6, for picking up a block.
Since "pick up" is not a known procedure initially, when told to "Pick up the red block," the agent realizes that it must learn a new operator.

To perform a PSCM operator, the operator must be selected, implemented, and terminated. To select the operator in the future based on a command requires knowledge of the operator's argument structure (a template), and of how natural language maps to this structure. Thus, to learn a new operator, the agent must learn four things:

1. Template: Knowledge of the operator's arguments and how they can be instantiated. For picking up blocks, the agent acquires a new operator with a single argument, the object to be picked up.

2. Mapping from natural language: A mapping from natural language semantic structures to an instantiation of the new operator, so that the operator can be selected when commanded in the future. For picking up blocks, the agent learns to map the semantic object of "Pick up ..." to the single argument of its new operator template.

3. Implementation: How to perform the operator. New operators are performed by executing a sequence of smaller operators. The implementation takes the form of selection knowledge for these sub-operators (e.g., move to the proper table, move the arm, etc.).

4. Termination conditions: Knowledge to recognize when the new operator is achieved, i.e., the goal concept of the new operator. For "pick up," the termination conditions include holding the desired block, with the arm raised.

Requirement T2 ("fast learning") stipulates that after the first execution of a new procedure, the agent must be able to perform at least the same task without being re-instructed. Thus, the agent must learn, in some form, each of the four parts of a new operator during its first execution.

A general implementation of a new operator can be learned through situated explanation of each of its steps. During the first execution of a new operator, though, the instructions for performing it cannot be explained, because the agent does not yet know the goal of the operator (e.g., the agent does not know the termination conditions of "pick up") or the steps following the current one to reach that goal. However, in this instructional context (explaining instructed steps of a procedure being learned) it is clear that the missing knowledge of the remaining steps and the procedure's goal will be acquired later, because the instructor is expected to teach the procedure to completion. Thus, the agent delays explanation (option O1) and for now memorizes each implementation instruction in a rote, episodic form. At the end of the first execution of a new procedure, the agent induces the procedure's goal (its termination conditions) using a set of simple inductive heuristics. On later executions of the procedure, the original instructions are recalled and explained to learn a general implementation.

We describe the details of this process using the "pick up" example.

First Execution

The example, shown in Figure 6, begins with the instruction "Pick up the red block." The agent comprehends this instruction, producing a semantic structure and resolving "the red block" to a block in its environment. However, the semantic structure does not correspond to any known operator, indicating that the agent must learn a new operator (which it calls, say, new-op14).
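As a rough sketch, the four kinds of knowledge enumerated above might be grouped as follows. The field names and values here are invented for illustration; in Instructo-Soar each part is actually encoded as separate, independently matched rules rather than as a single record.

    # Illustrative grouping of the four things learned for a new operator.
    # All names are hypothetical; the agent stores each part as separate
    # Soar rules, not as one record.
    from dataclasses import dataclass, field

    @dataclass
    class NewOperator:
        name: str              # internal name, e.g. "new-op14"
        template: list         # argument slots, e.g. ["object"]
        nl_mapping: dict       # semantic arguments -> template slots
        # Sub-operator selection knowledge (a rote episode at first,
        # general proposal rules after explanation):
        implementation: list = field(default_factory=list)
        # Goal concept: how to recognize that the operator is achieved.
        termination: set = field(default_factory=set)

    pick_up = NewOperator(
        name="new-op14",
        template=["object"],
        nl_mapping={"semantic-object": "object"},
        termination={"holding(object)", "arm-raised"},
    )
    print(pick_up.name, pick_up.template)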
To learn a template for the new operator, the agent simply assumes that the argument structure of the command used to request the operator is the required argument structure of the operator itself. In this case, a template for the new operator is generated with an argument structure that directly corresponds to the semantic arguments of the "pick up" command (here, one argument, the object). The agent learns a mapping from the semantic structure to the new operator's template, to be used when presented with similar requests in the future. This simple approach to learning templates and mappings is sufficient for imperative sentences with direct arguments, but will fail for commands with complex arguments, such as path constraints ("Move the dynamite into the other room, keeping it as far from the heater as possible").

Next, the new operator is selected for execution. Since its implementation is unknown, the agent immediately reaches an impasse and asks for further instructions. Each instruction in Figure 6 is given, comprehended, and executed in turn. These instructions provide the implementation for the new operator. They are implicitly situated: each applies to the current situation in which the agent finds itself.

At any point, the agent may be given another command that cannot be directly completed, one that requests either another unknown procedure or a known procedure that the agent does not know how to perform in the current situation due to skipped steps. This is command flexibility (requirement I2). For example, within the instructions for "pick up," the command "Move above the red block" cannot be completed because of a skipped step (the arm must be raised to move above something). An impasse arises, in which the instructor indicates the needed step ("move up") and then continues instructing "pick up."

Ultimately, the implementation of a new operator can be learned at the proper level of generality by explaining each instructed step. However, as illustrated in Figure 7, during its initial execution forming this explanation is impossible, because the goal of the new operator and the other steps (further instructions) needed to reach it are not yet known. Since these missing pieces of the explanation are expected to be available later, the agent delays explanation and resorts to rote learning of each instructed step.

In Instructo-Soar, rote learning occurs as a side effect of language comprehension. While reading each sentence, the agent learns a set of rules that encode the sentence's semantic features. The rules allow NL-Soar to resolve referents in later sentences, implementing a simple version of Grosz's focus space mechanism (Grosz, 1977). The rules record each instruction, indexed by the goal to which it applies and its place in the instruction sequence. The result is essentially an episodic case that records the specific, lock-step sequence of the instructions given to perform the new operator. For instance, it is recorded that "to pick up (that is, new-op14) the red block, rb1, I was first told to move to the yellow table, yt1." Of course, the information contained within the case could be generalized, but at this point any generalization would be purely heuristic, because the agent cannot explain the steps of the episode. Thus, Instructo-Soar takes the conservative approach of leaving the case in rote form.

Finally, the agent is told "The operator is finished," indicating that the goal of the new operator has been achieved.
This instruction triggers the agent to learn termination conditions for the new operator. Learning termination conditions is an inductive concept formation problem: the agent must induce which of the features that hold in the current state imply a positive instance of the new operator's goal being achieved. Standard concept learning approaches may be used here, as long as they produce a strong hypothesis within a small number of examples (due to the "fast learning" requirement, T2). Instructo-Soar uses a simple heuristic to strongly bias its induction: it hypothesizes that everything that has changed between the initial state, when the new operator was requested, and the current state is part of the new operator's termination conditions. In this case, the changes are that the robot is docked at a table, holding a block, and the block and gripper are both up in the air. This heuristic gives a reasonable guess, but is clearly too simple. Conditions that changed may not matter; e.g., perhaps it doesn't matter to picking up blocks that the robot ends up at a table. Unchanged conditions may matter; e.g., if learning to build a "stoplight," block colors are important although they do not change. Thus, the agent presents the induced set of termination conditions to the instructor for possible alteration and verification. The instructor can add or remove conditions. For example, in the "pick up" case the instructor might say "The robot need not be docked at the yellow table" to remove a condition deemed unnecessary, before verifying the termination conditions.

Instructo-Soar performs induction by EBL (chunking) over an overgeneral theory that can make inductive leaps (similar to, e.g., Miller, 1993; Rosenbloom & Aasman, 1990; VanLehn, Ball, & Kowalski, 1990). This type of inductive learning has the advantage that the agent can alter the bias to reflect other available knowledge. In this case, the agent uses further instruction (the instructor's indications of features to add or remove) to alter the induction. Other knowledge sources that could be employed (but are not in the current implementation) include analogy to other known operators (e.g., pick up actions in other domains), domain-specific heuristics, etc.

Through the first execution of a new operator, then, the agent:

- Carries out a sequence of instructions achieving a new operator.
- Learns an operator template for the new operator.
- Learns the mapping from natural language to the new operator.
- Learns a rote execution sequence for the new operator.
- Learns the termination conditions of the new operator.

Since the agent has learned all of the necessary parts of an operator, it will be able to perform the same task again without instruction. However, since the implementation of the operator is rote, it can only perform the exact same task. It has not learned generally how to pick up things yet.
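The change-based heuristic for termination conditions amounts to a set difference between the initial and final states. A minimal sketch, with invented fact names:

    # Sketch of the termination-condition heuristic: guess that everything
    # that changed between the initial and final state belongs to the new
    # operator's termination conditions. Fact names are invented.
    def induce_termination(initial, final):
        became_true = final - initial
        became_false = initial - final
        # Added facts, plus negations of removed facts, form the guess.
        return became_true | {"not " + f for f in became_false}

    initial = {"arm-down", "gripper-open", "block-on-table"}
    final = {"docked-at-table", "holding-block", "arm-up", "block-up"}
    print(sorted(induce_termination(initial, final)))

The instructor's edits ("The robot need not be docked at the yellow table") then simply delete or insert elements of this guessed set before it is verified.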
Generalizing the New Operator's Implementation

The agent now knows the goal concept and full (though rote) implementation sequence for the new operator. Thus, it has the information that it needs to explain how each instruction in the implementation sequence leads to goal achievement, provided its underlying domain knowledge is sufficient.

Each instruction is explained by recalling it from episodic memory and internally projecting its effects and the rest of the path to achievement of the termination conditions of the new operator. The projection is a "proof" that the instructed operator will lead to goal achievement in the situation. Soar's chunking mechanism essentially computes the weakest preconditions of the situation and the instruction required for success (similar to standard EBL) to form a general rule proposing the instructed operator. The rule learned from the instruction "Move to the yellow table" is shown in Figure 8. The rule generalizes the original instruction by dropping the table's color, and specializes it by adding the facts that the table has the object sitting on it and that the object is small (only small objects can be grasped by the gripper). The rule also tests that the gripper is open, because this condition was important for grasping the block in the instructed case.9

    If the goal is new-op-14(?obj),
    and ?obj is on table ?t, and small(?obj),
    and the robot is not docked at ?t,
    and the gripper has status(open),
    then propose operator move-to-table(?t).

Figure 8: The general operator proposal rule learned from the instruction "Move to the yellow table" (new-op-14 is the newly learned "pick up" operator).

9. More technical details of how Soar's chunking mechanism forms such rules can be found in (Huffman, 1994; Laird, Congdon, Altmann, & Doorenbos, 1993).

After learning general proposal rules for each step in the instruction sequence, the agent can perform the task without reference to the rote case. For instance, if asked to "Pick up the green block," the agent selects new-op14, instantiated with the green block. Then, general sub-operator proposal rules like the one in Figure 8 fire one by one, as they match the current situation, to implement the operator. After performing all of the implementation steps, the agent recognizes that the termination conditions are met (the gripper is raised and holding the green block), and new-op14 is terminated.

Since the general proposal rules for implementing the task are directly conditional on the state, the agent can perform the task starting from any state along the implementation path and can react to unexpected conditions (e.g., another robot stealing the block). In contrast, the rote implementation that was initially learned applied only when starting from the original starting state, and was not reactive because its steps were not conditional on the current state.

Recall Strategies

We have described how our agent recalls and explains each step of a new operator's implementation sequence, after the operator's termination conditions are induced. There are still two open issues: (A) at what point after learning the termination conditions should the agent perform the recall/projection?, and (B) how many steps should be recalled and projected in sequence at a time?

To investigate these issues, we have implemented two different recall/project strategies:

1. Immediate/complete recall. The agent recalls and attempts to explain the full sequence of instructions for the new operator immediately after learning the new operator's termination conditions.

2. Lazy/single-step recall. The agent recalls and attempts to explain single instructions in the sequence when asked to perform the operator again starting from the same initial state. That is, at each point in the execution of the operator, the agent recalls the next instruction, and attempts to explain it by forward projecting it.
However, if this projection does not result in a path to goal achievement without any further instructions being recalled, then rather than recalling the next instruction in the sequence to continue the forward projection, the agent gives up on explaining this instruction and simply executes it in the external world.

These strategies represent the extremes of a continuum of strategies.10 The strategy to use is a parameter of the agent; it does not dynamically select between strategies while it is running. A possible extension would be to reason about the time pressure in different situations to select the appropriate strategy. Next, we briefly describe the implications of each recall strategy.

10. We also implemented a lazy/complete recall strategy, which will not be described here (see Huffman, 1994, for details).
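Before turning to those details, the contrast between the strategies, and the back-to-front generalization pattern that the lazy strategy produces (discussed below), can be caricatured in a short sketch. Everything here is a toy: "explaining" a step just marks it as generalized, standing in for internal projection plus chunking, and the step names are invented.

    # Toy contrast of the two recall strategies. A step's explanation can
    # succeed only if the remaining path to the goal is already known
    # generally.
    def immediate_complete(steps, generalized):
        # One long introspection after the first execution: the whole
        # sequence is recalled and explained, so every step generalizes.
        generalized.update(steps)

    def lazy_single_step_execution(steps, generalized):
        # One later execution: at each point, recall at most the next
        # un-generalized instruction and project it by itself.
        for i, s in enumerate(steps):
            if s not in generalized:
                if all(t in generalized for t in steps[i + 1:]):
                    generalized.add(s)   # path to goal closes: learn
                # otherwise: just execute s externally and move on

    steps = ["move-to-table", "move-above", "lower-arm",
             "close-hand", "raise-arm"]
    done = set()
    execs = 0
    while len(done) < len(steps):
        lazy_single_step_execution(steps, done)
        execs += 1
    print(execs)   # equals len(steps): one step generalized per run,
                   # starting from the end of the sequence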
Immediate/Complete Recall Strategy

Immediate/complete recall and explanation involves internally projecting multiple operators (the full instruction sequence) immediately after the first execution of the new operator. The projection begins at the state the agent was in when the new operator was first suggested. If the projection successfully achieves the termination conditions of the new operator, the agent learns general implementation rules for every step. The advantage of this strategy is that the agent learns a general implementation for the new operator immediately after its first execution (e.g., the agent can pick up other objects right away).

The strategy has three important disadvantages. First, it requires that the agent reconstruct the initial state in which it was commanded to perform the new operator. This reconstruction may be difficult if the amount of information in the state is large (although it is not in the small robotic domain being used here).

Second, recall and projection of the entire sequence of instructed steps is time-consuming, requiring time proportional to the length of the instruction sequence. During the process, the agent's performance of tasks at hand is suspended. This suspension could be awkward if the agent is under pressure to act quickly.

Third, as illustrated in Figure 9, multiple-step projections are susceptible to compounding of errors in underlying domain knowledge. The projection of each successive operator begins from a state that reflects the agent's knowledge of the effects of prior operators in the sequence. If this knowledge is incomplete or incorrect, the state will move further and further from reflecting the actual effects of prior operators. Minor domain knowledge problems in the knowledge of individual operators, that alone would not produce an error in a single-step explanation, may combine within the projection to cause an error. This can lead to incomplete explanations or (more rarely) to spuriously successful explanations (e.g., reaching success too early in the instruction sequence).

Lazy/Single-Step Recall Strategy

In the lazy/single-step recall strategy, the agent waits to recall and explain instructions until asked to perform the new operator a second time from the same initial state. In addition, the agent only recalls a single instruction to internally project at a time. After the recalled operator is projected, the agent applies whatever general knowledge it has about the rest of the implementation of the new operator. This general knowledge, however, does not include rote memories of other past instructions. That is, if the agent does not know the rest of the path to complete the new operator using general knowledge, it does not recall any further instructions in the sequence from its rote memories. Rather, the internal projection is terminated and the single recalled operator is applied in the external world.

This strategy addresses the three disadvantages of the immediate/complete strategy. First, it does not require reconstruction of the original instruction state; rather, it waits for a similar state to occur again.

Second, recalling and projecting a single instruction at a time does not require a time-consuming introspection that suspends the agent's ongoing activity. For "pick up," for instance, Table 5 shows the longest time that the agent's external action (movements or instruction requests) is suspended using each strategy (as measured in Soar decision cycles, which last about 35 milliseconds each for Instructo-Soar on an SGI R4400 Indigo). The immediate/complete strategy does no external actions for 304 decision cycles (about 11 seconds on our Indigo) immediately following the first execution, in order to recall and explain the complete instruction sequence. Using the lazy/single-step strategy, only one instruction is ever recalled/explained at a time before action is taken in the world; thus, the longest time without action is only 75 decision cycles (about 2 seconds). The total recall/explanation time is proportional to the length of the instruction sequence in both cases (304 vs. 294 decision cycles), but in the lazy/single-step strategy, that time is interleaved with the execution of the instructions rather than fully taken after the first execution.

                                             Immediate/complete    Lazy/single-step
    Largest time without external action            304                   75
    Largest total recall/explanation time           304                  294
    during an execution                      (end of 1st exec'n)  (during 2nd exec'n)

Table 5: Timing comparison, in Soar decision cycles, for learning "pick up" using the immediate/complete and lazy/single-step recall strategies.

Third, the lazy/single-step strategy overcomes the problem of compounding of domain theory errors by beginning the projection of each instruction from the current state of the world after external execution of the previous instructions. Thus, the beginning state of each projection correctly reflects the effects of the previous operators in the implementation sequence.

The major disadvantage of this strategy is that it requires a number of executions of the new operator equal to the length of the instruction sequence in order to learn the whole general implementation. This is because limiting recall to a single step allows only a single sub-operator per execution to be generalized. This disadvantage, however, leads to two interesting learning characteristics:

Back-to-front generalization. Generalized learning starts at the end of the implementation sequence and moves towards the beginning. On the second execution of the new operator, a path to the goal is known only for the last instruction in the sequence (it leads directly to goal completion), so a general proposal for that instruction is learned.
On the third execution, after the second-to-last instruction is projected, the proposal learned previously for the last operator applies, leading to goal achievement and allowing a general proposal for the second-to-last instruction to be learned. This pattern continues back through the entire sequence until the full implementation is learned generally. As Figure 10 shows, the resulting learning curves closely approximate the power law of practice (Rosenbloom & Newell, 1986); r = 0.98 for both (a) and (b).

Effectiveness of hierarchical instruction. Due to the back-to-front effect, the agent learns a new procedure more quickly when its steps are taught using a hierarchical organization than when they are taught as a flat sequence. Figure 11 shows a flat instruction sequence for teaching the agent to move one block to the left of another; Figure 12 depicts a hierarchical instruction sequence for the same procedure, which contains 13 instructed steps, but a maximum of 3 in any subsequence.

Figure 12: A graphical view of a hierarchical instruction sequence for move-left-of(block, block2). New operators are shown in bold.

By breaking the instruction sequence into shorter subsequences, a hierarchical organization allows multiple subtrees of the hierarchy to be generalized during each execution. General learning for an N-step operator takes N executions using a flat instruction sequence. Taught hierarchically as an H-level hierarchy with N^(1/H) subtasks in each subsequence, only H · N^(1/H) executions are required for full generalization. The hierarchy in Figure 12 has an irregular structure, but results in a speedup because the length of every subsequence is small (in this case, smaller than the square root of N). Empirically, the flat sequence of Figure 11 takes nine (N) executions to generalize, whereas the hierarchical sequence takes only six (for N = 9 and H = 2, H · N^(1/H) = 2 · 3 = 6). Hierarchical organization has the additional advantage that more operators are learned that can be used in future instructions.

Supporting Command Flexibility

Command flexibility (requirement I2) stipulates that the instructor may request either an unknown procedure, or a known procedure that the agent does not know how to perform in the current state (skipping steps), at any point. This can lead to multiple levels of embedded instruction. As we have seen, Instructo-Soar learns completely new procedures from instructions for unknown commands. In addition, when the agent is asked to perform a known procedure in an unfamiliar situation (one from which the agent does not know what step to take) it learns to extend its knowledge of the procedure to that situation.

An example is contained in the instructions for "Pick up the red block," when the agent is asked to "Move above the red block." The agent knows how to perform the operator when its arm is raised. However, in this case the arm is lowered, and so the agent reaches an impasse and asks for further instruction.11 When told to "Move up," the agent internally projects raising its arm, which allows it to achieve moving above the red block. From this projection it learns the general rule: move the arm up when trying to move above an object that is on the table the agent is docked at. This rule extends the "move above" procedure to cover this situation.

11. Another option would be to search, i.e., to apply a weak method such as means-ends analysis. In this example, the search would be easy; in other cases, it could be costly. In any event, since the goal of Instructo-Soar is to investigate the use of instruction, our agent always asks for instructions when it reaches an impasse in task performance. Nothing in Instructo-Soar precludes the use of search or knowledge from other sources, however.

Any operator, even one previously learned from instruction, may require extension to apply to a new situation.
This is because when the agent learns the general implementation for a new operator, it does not reason about all possible situations in which the operator might be performed, but limits its explanations to the series of situations that arises during the actual execution of the new operator while it is being learned.

Newly learned operators may be included in the instructions for later operators, leading to learning of operator hierarchies. One hierarchy of operators learned by Instructo-Soar is shown in Figure 13. Learning procedural hierarchies has been identified as a fundamental component of children's skill acquisition from tutorial instruction (Wood, Bruner, & Ross, 1976). In learning the hierarchy of Figure 13, Instructo-Soar learned four new operators, an extension of a known operator (move above), and an extension of a new operator (extending "pick up" to work if the robot already is holding a block). Because of command flexibility, this same hierarchy can be taught in exponentially many different ways (Huffman, 1994). For instance, new operators that appear as sub-operators (e.g., grasp) can be taught either before or during teaching of higher operators (e.g., pick up).

Abandoning Explanation when Domain Knowledge is Incomplete

All of the general operator implementation learning described thus far depends on explaining instructions using prior domain knowledge (as opposed to the learning of operator termination conditions, which is inductive). What if the domain knowledge is incomplete, making explanation impossible? For sequences of multiple operators, pinpointing what knowledge is missing is an extremely difficult credit assignment problem (sequences known to contain only one operator, however, are a more constrained case, as described in the next section).

In general, an explanation failure that is detected at the end of the projection of an instruction sequence could be caused by missing knowledge about any operator in the sequence. Thus, when faced with an incomplete explanation of a sequence of multiple instructions, Instructo-Soar abandons the explanation and instead tries to induce knowledge directly from the instructions (option O4).

As an example, consider a case in which all of Instructo-Soar's knowledge of secondary operator effects (frame-axiom-type knowledge) is removed before teaching it a procedure. For example, although the agent knows that closing the hand causes it to have status closed, it no longer knows that closing the hand around a block causes the block to be held. Now, the agent is taught a new procedure, such as to pick up the red block. After the first execution, the agent attempts to recall and explain the instructions as usual, but fails because of the missing knowledge. That is, the block is not picked up during the projection of the instructions, since the agent's knowledge does not indicate that it is held.
The agent records the fact that this procedure's instructions cannot be explained.12

Later, the agent is again asked to perform the procedure, and again recalls the instructions. However, it also recalls that explaining the instructions failed in the past. Thus, it abandons explanation and instead attempts to induce a general proposal rule directly from each instruction.

Figure 14: Operation of the OP-to-G-path heuristic for OP "move to the yellow table" and G "pick up the red block."

In the "pick up" example, the agent first recalls the command to move to the yellow table. To learn a proposal rule for this operator (call it OP), the agent must induce a set of conditions of the state under which performing OP will contribute to achieving the "pick up" goal (call it G). Instructo-Soar uses two simple heuristics to induce these state conditions:

OP-to-G-path. For each object Obj1 filling a slot of OP, and each object Obj2 attached to G, include the shortest existing path (heuristically of length less than three) of relationships between Obj1 and Obj2 in the set of induced conditions. This heuristic captures the intuition that if an operator involves some object, its relationship to the objects relevant to the goal is probably important. Figure 14 shows its operation for "move to the yellow table." As the figure indicates, there is a path between G's object, the red block, and the destination of OP, the yellow table, through the relationship that the block is on the table.

OP-features-unachieved. Each termination condition (essentially, each primary effect) of OP that is not achieved in the state before OP is performed is considered an important condition. This heuristic captures the intuition that all of the primary effects of OP are probably important; therefore, it matters that they are not achieved when OP is selected. In our example, OP's primary effect is that the robot ends up docked at the table; thus, the fact that the robot is not initially docked at the table is added to the inferred set of conditions for proposing OP.

These heuristics are implemented as Soar operators that compute the appropriate conditions. Once a set of conditions is induced, it is presented to the instructor, who can add or remove conditions before verifying them. Upon verification, a rule is learned proposing OP (e.g., move-to-table(?t)) when the induced conditions hold (e.g., goal is pick-up(?b), ?b isa block, on(?b,?t)). This rule is similar to the rule learned from explanation (Figure 8), but only applies to picking up a block (overspecific), and does not stipulate that the object must be small (overgeneral). A similar induction occurs for each step of "pick up," so that the agent learns a general implementation for the full "pick up" operator. However, unless corrections are made by the instructor, this induced implementation is not as correct as one learned from explanation; for instance, it applies (wrongly) to any block instead of to any small object. In a more complex domain, inferring implementation rules would be even less successful.

12. ... to learn new operator effects that could complete the explanation of the procedure. Learning effects of operators from observation has been explored by a number of researchers (Carbonell & Gil, 1987; Pazzani, 1991b; Shen, 1993; Sutton & Pinette, 1985; Thrun & Mitchell, 1993).
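The two heuristics above can be sketched compactly. In this sketch the object and relation names are invented, and paths are limited to single relationships, whereas the heuristic as described considers chains of length up to two.

    # Sketch of the two condition-induction heuristics. Relations are
    # (name, from, to) triples; only length-1 paths are handled here.
    def op_to_g_path(relations, op_objects, goal_objects):
        conds = []
        for (r, a, b) in relations:
            for o1 in op_objects:
                for o2 in goal_objects:
                    if {a, b} == {o1, o2}:
                        conds.append((r, a, b))
        return conds

    def op_features_unachieved(op_primary_effects, state):
        # Primary effects of OP not yet true when OP is selected matter.
        return [("not", e) for e in op_primary_effects if e not in state]

    relations = [("on", "red-block", "yellow-table")]
    print(op_to_g_path(relations, ["yellow-table"], ["red-block"]))
    # -> [('on', 'red-block', 'yellow-table')]
    print(op_features_unachieved({"docked-at(yellow-table)"}, {"light-off"}))
    # -> [('not', 'docked-at(yellow-table)')]

The union of the two outputs, after instructor edits, forms the condition set of the induced proposal rule.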
Not surprisingly, psychological research shows that human subjects' learning from procedural instructions also degrades if they lack domain knowledge (Kieras & Bovair, 1984).

Returning to the targeted instruction requirements in Table 4, Instructo-Soar's learning of procedures illustrates (T1) general learning from specific instructions, (T2) fast learning (because each procedure need only be instructed once) by (T3) using prior domain knowledge to construct explanations, and (T4) incremental learning during the agent's ongoing performance. Two types of PSCM knowledge are learned: (T5(b)) operator proposals for sub-operators of the procedure, and (T5(e)) the procedure's termination conditions. The learning involves either delayed explanation or, when domain knowledge is inadequate, abandoning explanation in favor of simple induction. The instructions are each (I3(a)) implicitly situated imperative commands, for either (I2(a)) known procedures, (I2(b)) known procedures where steps have been skipped, or (I2(c)) unknown procedures.

Beyond Imperative Commands

Next, we turn to learning the remaining types of PSCM knowledge (T5(a,c,d)) from various kinds of explicitly situated instructions (I3(b)). From an explicitly situated instruction, Instructo-Soar constructs a hypothetical situation (goal and state) that includes the objects, properties, and relationships mentioned explicitly in the instruction, as well as any features of the current situation that are needed to carry out the instruction.13 This hypothetical situation is used as the context for a situated explanation of the instruction.
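One way to picture this construction is the following sketch. The merge policy and fact names are invented for illustration; the real agent derives the situation from its comprehension of the instruction.

    # Sketch: build the hypothetical situation [S, G] for an explicitly
    # situated instruction from the current situation plus whatever the
    # instruction states explicitly.
    def hypothetical_situation(cur_state, cur_goal,
                               stated_state=frozenset(), stated_goal=None):
        state = frozenset(cur_state) | stated_state
        goal = stated_goal if stated_goal is not None else cur_goal
        if stated_goal is not None:
            state = state - stated_goal   # a stated goal must start unmet
        return state, goal

    # "If the light is on, push the button": hypothetical state feature.
    print(hypothetical_situation(frozenset({"at-table"}),
                                 frozenset({"happiness"}),
                                 stated_state=frozenset({"light-on"})))
    # "To turn on the light, push the red button": hypothetical goal,
    # projected from a state in which that goal is not yet achieved.
    print(hypothetical_situation(frozenset({"light-on", "at-table"}),
                                 frozenset({"happiness"}),
                                 stated_goal=frozenset({"light-on"})))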
Hypothetical Goals and Learning Effects of Operators

A goal is explicitly specified in an instruction by a purpose clause (DiEugenio, 1993): "To do X, do Y." The basic knowledge to be learned from such an instruction is an operator proposal rule for doing Y when the goal is to achieve X. Consider this example from Instructo-Soar's domain:

> To turn on the light, push the red button.

The agent has been taught how to push buttons, but does not know the red button's effect on the light. From a purpose clause instruction like this example, the agent creates a hypothetical situation with the goal stated in the purpose clause (here, "turn on the light"), and a state like the current state, but with that goal not achieved (here, with the light off). Within this situation, the agent attempts to explain the instruction by forward projecting the action of pushing the red button.

If the agent knew that pushing the red button toggles the light, then in the projection, the light would come on. Thus, the explanation would succeed, and a general operator proposal rule would be learned that proposed pushing the red button when the light is off and the goal is to turn it on. However, since in actuality the agent is missing the knowledge (M_K) that pushing the button affects the light, the light does not come on within the projection. The explanation is incomplete.

When Instructo-Soar's explanation of a sequence of operators fails, the agent does not try to induce the missing knowledge needed to complete the explanation, because it could be associated with any of the multiple operators. Rather, the explanation is simply abandoned, as described in Section 6.5. However, in this case, the unexplainable sequence contains only one operator. In addition, the form of the instruction gives the agent a strong expectation about that operator's intended effect. Based on the purpose clause, the agent expects that the specified action (pushing the button) will cause achievement of the specified goal (turning on the light). DiEugenio (1993) found empirically that this type of expectation holds for 95% of naturally occurring purpose clauses.

The expectation constrains the "gap" in the incomplete explanation: the state after pushing the button should be a state with the light on, and only one action was performed to produce this effect. Based on this constrained gap, the agent attempts to induce the missing knowledge M_K in order to complete the explanation (option O2). The most straightforward inference of M_K is simply that an unknown effect of the single action is to produce the expected goal conditions; e.g., pushing the button should cause the light to come on. The instructor is asked to verify this inference.14 Once it is verified, Instructo-Soar heuristically guesses at the state conditions under which the effect will occur. It uses the OP-to-G-path heuristic as a very naive causality theory (Pazzani, 1991a) to guess at the causes of the inferred operator effect. Here, OP-to-G-path notices that the light and the red button are both on the same table. In addition, the agent includes the fact that the inferred effect did not hold (the light was off) before the operator caused it. The result is presented to the instructor:

    I think that doing push the button causes:
      the light to be on
    under the following conditions:
      the light is not currently on,
      the light is on the table,
      the button is on the table
    Are those the right conditions?

Here, the heuristics have not recognized that it matters which button is pushed (the red one). The instructor can add this condition by saying "The button must be red." Once the instructor verifies the conditions, the agent adds the new piece of operator effect knowledge to its memory:

    if projecting push-button(?b),
    and ?l isa light with status off, on table ?t,
    and ?b isa button with color red, on table ?t,
    then light ?l now has status on.

14. If the inference is rejected, the agent abandons the explanation and directly induces a proposal rule for pushing the button from the instruction, as described in Section 6.5.

Immediately after being learned, this rule applies to the light in the forward projection for the current instruction. The light comes on, completing the instruction's explanation by achieving its goal. From this explanation, the agent learns the proposal rule that proposes pushing the red button when the goal is to turn on the light. Thus, the agent has acquired new knowledge at multiple levels; inferring an unknown effect of an operator supported learning a proposal for that operator. This example illustrates (I3(b)) the use of hypothetical goal instructions and the use of option O2 for dealing with incomplete explanations (inferring missing knowledge) to learn new operator effects (T5(d)), thus extending domain knowledge.
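The effect-inference step at the heart of this example can be sketched as follows. The names are invented, and the real agent additionally applies OP-to-G-path to guess enabling conditions and asks the instructor to verify and edit the result.

    # Sketch of option O2 for "To do X, do Y": guess that an unknown
    # effect of the single action Y produces X's conditions, with the
    # effect conditioned on those conditions not yet holding.
    def infer_effect(action, goal_conds, state):
        unmet = {c for c in goal_conds if c not in state}
        return {
            "operator": action,
            "new_effect": set(goal_conds),
            "when": {("not", c) for c in unmet},   # effect was false before
            # OP-to-G-path conditions (e.g., light and button on the same
            # table) would be added here, then instructor-verified.
        }

    rule = infer_effect("push(red-button)", {"light-on"},
                        state={"at-table", "light-off"})
    print(rule)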
Hypothetical States to Learn About Contingencies

Instructors use instructions with hypothetical states (e.g., conditionals: "If [state conditions], do ...") either to teach general policies ("If the lights are on when you leave the room, turn them off.") or to teach contingencies when performing a task. Instructo-Soar handles both of these; here, we will describe the latter.

A contingency instruction indicates a course of action to be followed when the current task is performed in a future situation different from the current situation. Instructors often use contingency instructions to teach about situations that differ from the current one in some crucial way that should alter the agent's behavior. Contingency instructions are very common in human instruction; Ford and Thompson (1986) found that 79% of the conditional statements in an instruction manual communicated contingency options to the student. Consider this interaction:

> Grasp the blue block.
That's a new one for me. How do I do that?
> If the blue block is metal, then pick up the magnet.

The blue block is not made of metal, but the instructor is communicating that if it were, a different course of action would be required.

From the conditional instruction "If the blue block is metal, then pick up the magnet," the agent needs to learn an operator proposal rule for picking up the magnet under appropriate conditions. The agent begins by constructing the hypothetical situation to which "pick up the magnet" applies. "If the blue block is metal" indicates a hypothetical state that is a variant of the current state with the blue block having material metal. The current goal ("Grasp the blue block") is also the goal in the hypothetical situation.

Within this situation, the agent projects picking up the magnet to explain how it will allow the block to be grasped. However, the agent is missing much of the knowledge needed to complete this explanation. It does not know the goal concept of "Grasp" yet, or the rest of the instructions to reach that goal.

Since the instruction being explained is for a contingency, the rest of the instructions that the agent is given to "Grasp the blue block" may not (and in this case, do not) apply to the contingent situation, where the block is metal. In the normal grasp sequence, for instance, the agent learns to close its hand around the grasped object, but when grasping a metal object, the hand is closed around the magnet. Since knowledge of how to complete grasping a metal object is needed to explain the contingency instruction, and the agent does not know when it might learn this missing knowledge, it abandons the explanation (option O4). Instead, it uses the heuristics described in Section 6.5 to directly induce an operator proposal rule for "Grasp the magnet." In addition to the conditions generated by the heuristics, the conditions indicated in the antecedent of the instruction are included. The result is presented to the instructor for alteration and verification:

    So I'm guessing the conditions for doing "pick up the magnet"
    when your goal is "grasp the block" are:
      the block is metal
    Is that right?
> Right.

From this interaction the agent learns a rule that proposes picking up the magnet when the goal is to grasp a metal block. After this learning is completed, since the agent has not yet finished grasping the blue block, it continues to receive instruction for that task.
Further contingencies can be indicated at any point. Learning contingencies illustrates (I3(b)) the handling of hypothetical state instructions.

Learning to Reject Operators

Our final examples illustrate learning to reject an operator, a type of operator control knowledge in the PSCM. The examples also detail the remaining option for dealing with incomplete explanations: (O3) completing an explanation through further instruction. Consider these instructions:

> Never grasp green blocks.
Why?
(a) > Trust me.
(b) > Green blocks are explosive.

A negative imperative prohibits a step from applying to a hypothetical situation in which it might apply. Thus, Instructo-Soar creates a hypothetical situation in which the prohibited action might be executed; in this case, a state with a graspable green block. Since no goal is specified by the instruction, and there is no other current goal, a default goal of "maintaining happiness" (which is always considered one of the agent's current goals) is used. From this hypothetical situation, the agent internally projects the "grasp" action, expecting an "unhappy" result. However, the resulting state, in which the agent is grasping a green block, is acceptable according to the agent's knowledge. Thus, the projection does not explain why the action is prohibited.

The agent deals with the incomplete explanation by asking for further instruction, in an attempt to learn M_K and complete the explanation. However, the instructor can decline to give further information by saying (a) "Trust me." Although the instructor will not provide M_K, because the prohibition of a single operator (grasping the green block) is being explained, the agent can induce a plausible M_K that will complete the explanation (option O2). Since the agent knows that the final state after the prohibited operator is meant to be "unhappy," it simply induces that this state is to be avoided. This is the converse of learning to recognize when a desired goal has been reached (learning an operator's termination conditions). The agent conservatively guesses that all of the features of the hypothetical state (here, that there is a green block that is held), taken together, make it a state to be avoided. Because this inference is so conservative, in the current implementation the instructor is not even asked to verify it. The state inference rule that results is as follows:

    if goal is "happiness",
    and ?b isa block with color green,
    and holding(gripper, ?b),
    then this state fails to achieve "happiness".

This rule applies to the final state in the projection of "Never grasp..." The state's failure to achieve happiness completes the agent's explanation of why it should "Never grasp...," and it learns a rule that rejects any proposed operator for grasping a green block.

Alternatively, the instructor could provide further instruction, as in (b) "Green blocks are explosive." Such instruction can provide the missing knowledge M_K needed to complete an incomplete explanation (option O3). From (b), the agent learns a state inference rule: blocks with color green have explosiveness high. Instructo-Soar learns state inferences from simple statements like (b), and from conditionals (e.g., "If the magnet is powered and directly above a metal block, then the magnet is stuck to the block") by essentially translating the utterance directly into a rule.15
Such state inference instructions can be used to introduce new features that extend the agent's representation vocabulary (e.g., stuck-to).

The rule learned from "Green blocks are explosive" adds explosiveness high to the block that the agent had simulated grasping in the hypothetical situation. The agent knows that touching an explosive object may cause an explosion, a negative result. This negative result completes the explanation of "Never grasp...," and from it the agent learns to avoid grasping objects with explosiveness high.

Completing an explanation through further instruction (as in (b)) can produce more general learning than heuristically inferring missing knowledge (as in (a)). In (b), if the agent is later told "Blue blocks are explosive," it will avoid grasping them as well. In general, multiple levels of instruction can lead to higher quality learning than a single level, because learning is based on an explanation composed from strong lower-level knowledge (M_K) rather than inductive heuristics alone. M_K (here, the state inference rule) is also available for future use.

Because the agent has learned not only to reject the "grasp" operator but to recognize the bad state that performing it would lead to, the agent can recognize the bad state if it is reached from another path. For instance, the agent can be led through the individual steps of grasping an explosive block without the instructor ever mentioning "grasp." When the agent is finally asked to "Close the gripper" around the explosive object, it does so, but then immediately recognizes the undesirable state it has arrived in and reverses the close-gripper action. In the process, it learns to reject close-gripper if the hand is around an explosive object, so that in the future it will not reach the undesirable state through this path.

Notice here the effect of the situated nature of Instructo-Soar's learning. The agent learns to avoid operators that lead to a bad state only when they arise in the agent's performance. Its initial learning about the bad state is recognitional rather than predictive. Alternatively, when the agent first learns about a bad state, it could do extensive reasoning to determine every possible operator that could lead to that state, from every possible previous state, to learn to reject those operators at the appropriate times. This unsituated reasoning would be very expensive; the agent would have to reason through a huge number of possible situations. In addition, whenever new operators were learned, the agent would have to reason about all the possible situations in which they could arise, to learn whether they could ever lead to a bad state. Rather than this costly reasoning, Instructo-Soar simply learns what it can from its situations as they arise.

Another alternative for completely avoiding bad states would be to think through the effects of every action before taking it, to see if a bad state will result. This highly cautious execution strategy would be appropriate in dangerous situations, but is not appropriate in safer situations where the agent is under time pressure.
(Moving between more or less cautious execution strategies is not currently implemented in Instructo-Soar.)
The "Never grasp..." examples have illustrated the agent's learning of one type of operator control knowledge, namely operator rejection (T5(c)), learning of state inferences (T5(a)), and the use of further instruction to complete incomplete explanations (option O3).
The final category of learning we will discuss is a second type of operator control knowledge." }, { "figure_ref": [], "heading": "Learning Operator Comparison Knowledge", "publication_ref": [], "table_ref": [], "text": "Another type of control knowledge besides operator rejection rules is operator comparison rules, which compare two operators and express a preference for one over the other in a given situation. Instructo-Soar learns operator comparison rules by asking for the instructor's feedback when multiple operators are proposed at the same point to achieve a particular goal. Multiple operators can be proposed, for instance, when the agent has been taught two different methods for achieving the same goal (e.g., to pick up a metal block either using the magnet or directly with the gripper). The instructor is asked to either select one of the proposed operators or to indicate that some other action is appropriate. Selecting one of the proposed choices causes the agent to learn a rule that prefers the selected operator over the other proposed operators in situations like the current situation. Alternatively, if the instructor indicates some other operator outside of the set of proposed operators, Instructo-Soar attempts to explain that operator in the usual way, to learn a general rule proposing it. In addition, the agent learns rules preferring the instructed operator to each of the other currently proposed operators.
There are two weaknesses to Instructo-Soar's learning of operator comparison rules. First, the instructor can be required to indicate a preference for each step needed to complete a procedure, rather than simply choosing between overall methods. That is, the instructor cannot say "Use the method where you grab the block with your gripper, instead of using the magnet," but must indicate a preference for each individual step of the method employing the gripper. This is because in the PSCM, knowledge about steps in a procedure is accessed independently, as separate proposal rules, rather than as an aggregate method. Independent access improves flexibility and reactivity (the agent can combine steps from different methods as needed based on the current situation), but a higher level grouping of steps would simplify instruction for selecting between complete methods.
The second weakness is that although the agent uses situated explanation to explain the selection the instructor makes, it does not explain why that selection is better than the other possibilities. Preferences between viable operators are often based on global considerations; e.g., "Prefer actions that lead to overall faster/cheaper goal achievement." Learning based on this type of global preference (which in turn may be learned through instruction) is a point for further research." }, { "figure_ref": [], "heading": "Discussion of Results", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "We have shown how Instructo-Soar learns from various kinds of instructions.
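As a concrete recap of the comparison-rule interaction described above, the sketch below shows one way an instructor's selection among proposed operators could be recorded as situation-conditioned preferences. The function and field names are assumptions made purely for illustration; the agent itself learns such preferences as Soar productions, not as code of this form.

# Illustrative sketch of learning operator comparison (preference) rules.
# Names and representation are invented for this example.

def learn_comparisons(situation, proposed, instructor_choice):
    """Return preference rules as (situation, better, worse) triples."""
    if instructor_choice not in proposed:
        # An operator outside the proposed set would also be explained and
        # generalized in the usual way; here we just record the preferences.
        proposed = proposed + [instructor_choice]
    return [(situation, instructor_choice, op)
            for op in proposed if op != instructor_choice]

rules = learn_comparisons(
    situation={"block": "small metal", "docked_at": "table-x"},
    proposed=["grasp-with-gripper", "fetch-magnet"],
    instructor_choice="grasp-with-gripper")
for sit, better, worse in rules:
    print("prefer", better, "over", worse, "in situations like", sit)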
Although the domain used to demonstrate this behavior is simple, it has enough complexity to exhibit a variety of the different types of instructional interactions that occur in tutorial instruction.
Of the 11 requirements that tutorial instruction places on an instructable agent (listed in Table 1), Instructo-Soar meets 7 (listed in expanded form in Table 4) either fully or partially. Three of these in particular distinguish Instructo-Soar from previous instructable systems:
Command flexibility: The instructor can give a command for any task at each instruction point, whether or not the agent knows the task or how to perform it in the current situation.
Situation flexibility: The agent can learn from both implicitly situated instructions and explicitly situated instructions specifying either hypothetical goals or states." }, { "figure_ref": [], "heading": "Knowledge-type flexibility:", "publication_ref": [], "table_ref": [], "text": "The agent is able to learn each of the types of knowledge it uses in task performance (the five PSCM types) from instruction.
Earlier, we claimed that handling tutorial instruction's flexibility requires a breadth of learning and interaction capabilities. Combining command, situation, and knowledge-type flexibility, Instructo-Soar displays 18 distinct instructional capabilities, as listed in Table 6. This variety of instructional behavior does not require 18 different learning techniques, but arises as one general technique, situated explanation in a PSCM-based agent, is applied in a range of instructional situations.
Our series of examples has illustrated how situated explanation uses an instruction's situation and context during the learning process. First, the situation to which an instruction applies provides the endpoints for attempting to explain the instruction. Second, the instructional context can indicate which option to follow when an explanation cannot be completed. The context of learning a new procedure indicates that delaying explanation (option O1) is best, since the full procedure will eventually be taught. If a step cannot be explained in a previously taught procedure, missing knowledge could be anywhere in the procedure, so it is best to abandon explanation (option O4) and learn another way. Instructions that provide an explicit context, such as through a purpose clause, localize missing knowledge by giving strong expectations about a single operator that should achieve a single goal. This localization makes it plausible to induce missing knowledge and complete the explanation (option O2). In other cases, the default is to ask for instruction about missing knowledge to complete the explanation (option O3)." }, { "figure_ref": [ "fig_7" ], "heading": "Empirical Evaluation", "publication_ref": [ "b34" ], "table_ref": [], "text": "Most empirical evaluations of machine learning systems take one of four forms, each appropriate for addressing different evaluation questions:
A. Comparison to other systems. This technique is useful for evaluating how overall performance compares to the state of the art. It can be used when there are other systems available that do the same learning task.
B. Comparison to an altered version of the same system. This technique evaluates the impact of some component of the system on its overall performance. Typically, the system is compared to a version of itself without the key component (sometimes called a "lesion study").
C. Measuring performance on a systematically generated series of problems.
This technique evaluates how the method is affected by different dimensions of the input (e.g., noise in training data).
D. Measuring performance on known hard problems. Known hard problems provide an evaluation of overall performance under extreme conditions. For instance, concept learners' performance is often measured on standard, difficult datasets.
These evaluation techniques have been applied in limited ways to Instructo-Soar. They are difficult to apply in great depth for two reasons. First, whereas most machine learning efforts concentrate on depth of a single type of learning from a single type of input, tutorial instruction requires a breadth of learning from a range of instructional interactions. Whereas depth can be measured by quantitative performance, breadth is measured by (possibly qualitative) coverage; here, our coverage of 7 out of 11 instructability requirements. Second, tutorial instruction has not been extensively studied in machine learning, so there is not a battery of standard systems and problems available. Nonetheless, evaluation techniques (B), (C), and (D) have been applied to Instructo-Soar to address specific evaluation questions:
B. Comparison to altered version: We removed frame-axiom knowledge to illustrate the effect of prior knowledge on the agent's performance, as described in Section 6.5. Without prior knowledge, the agent is unable to explain instructions and must resort to inductive methods. Thus, removing frame-axiom knowledge increased the amount of instruction required and reduced learning quality. We also compared versions of the agent that use different instruction recall strategies (Section 6.3).
C. Performance on systematically varied input: We examined the effects of varying three dimensions of the instructions given to the agent. First, we compared learning curves for instruction sequences of different lengths (Section 6.2). As the graphs in Figure 10 show, Instructo-Soar's execution time for an instructed procedure varies with the number of instructions in the sequence used to teach it. Total execution time drops each time the procedure is executed, according to a power law function, until the procedure has been learned in general form. Second, we compared teaching a procedure through hierarchical subtasks versus using a flat instruction sequence. Based on the power law result, we predicted that hierarchical instruction would allow faster general learning than flat instruction. This prediction was confirmed empirically. Third, we examined the number of instruction orderings that can be used to teach a given procedure to Instructo-Soar in order to measure the value of supporting command flexibility. Rather than an experimental measurement, we performed a mathematical analysis. The analysis showed that due to command flexibility, the number of instruction sequences that can be used to teach a given procedure is very large, growing exponentially with the number of primitive steps in the procedure (Huffman, 1994).
D. Performance on a known hard problem: Since learning from tutorial instruction has not been extensively studied in machine learning, there are no standard, difficult problems. We created a comprehensive instruction scenario by crossing the command flexibility, situation flexibility, and knowledge-type flexibility requirements.
The scenario, described in detail in (Huffman, 1994), contains 100 instructions and demonstrates 17 of Instructo-Soar's 18 instructional capabilities from Table 6 (it does not include learning indifference in selecting between two operators). The agent learns about 4,700 chunks during the scenario, including examples of each type of PSCM knowledge, that extend the agent's domain knowledge significantly." }, { "figure_ref": [], "heading": "Limitations and Further Research", "publication_ref": [], "table_ref": [], "text": "This work's limitations fall into three major categories: limitations to tutorial instruction as a teaching technique, limitations of the agent's general capabilities, and limitations because of incomplete solutions to the mapping, interaction, and transfer problems. We discuss each of these in turn." }, { "figure_ref": [], "heading": "Limitations of Tutorial Instruction", "publication_ref": [ "b10", "b12", "b96", "b93", "b86", "b93" ], "table_ref": [], "text": "Tutorial instruction is both highly interactive and situated. However, much of human instruction is either non-interactive or unsituated (or both), and these have not been considered in this work. In non-interactive instruction, the content and flow of information to the student is controlled primarily by the information source. Examples include classroom lectures, instruction manuals, and textbooks. One issue in using this type of instruction is locating and extracting the information that is needed for particular problems (Carpenter & Alterman, 1994). Non-interactive instruction can contain both situated information (e.g., worked-out example problems, Chi et al., 1989; VanLehn, 1987) and unsituated information (e.g., general expository text).
Unsituated instruction conveys general or abstract knowledge that can be applied in a large number of different situations. Such general-purpose knowledge is often described as "declarative" (Singley & Anderson, 1989). For example, in physics class, students are taught that F = ma; this general equation applies in specific ways to a great variety of situations. The advantage of unsituated instruction is precisely this ability to compactly communicate abstract knowledge that is broadly applicable (Sandberg & Wielinga, 1991). However, to use such abstract knowledge, students must learn how it applies to specific situations (Singley & Anderson, 1989).
9.2 Limitations of the Agent
An agent's inherent limitations constrain what it can be taught. We have developed our theory of learning from tutorial instruction within a particular computational model of agents (the PSCM), and within this computational model, we implemented an agent with a particular set of capabilities to demonstrate the theory. Thus, both the weaknesses of the computational model and the specific implemented agent must be examined." }, { "figure_ref": [], "heading": "Computational Model", "publication_ref": [], "table_ref": [], "text": "The problem space computational model is well suited for situated instruction because of its elements' close correspondence to the knowledge level (facilitating mapping from instructions to those elements), and its inherently local control structure.
However, the PSCM's local application of knowledge makes it difficult to learn global control regimes through instruction, because they must be translated into a series of local decisions that will each result in local learning.
A second weakness of the PSCM is that it provides a theory of the functional types of knowledge used by an intelligent agent, but gives no indication of the possible content of that knowledge. A content theory of knowledge would allow a finer grained analysis of an agent's instructability, within the larger-grained knowledge types analysis provided by the PSCM." }, { "figure_ref": [], "heading": "Implemented Agent's Capabilities", "publication_ref": [ "b73", "b100" ], "table_ref": [], "text": "Producing a definitive agent has not been the goal of this work. Rather, the Instructo-Soar agent's capabilities have been developed only as needed to demonstrate its instructional learning capabilities. Thus, it is limited in a number of ways. For instance, it performs simple actions serially in a static world. This would not be sufficient for a dynamic domain such as flying an airplane, where multiple goals at multiple levels of granularity, involving both achievement and/or maintenance of conditions in the environment, may be active at once (Pearson et al., 1993). Instructo-Soar's procedures are implemented by a series of locally decided steps, precluding instruction containing procedure-wide (i.e., nonlocal) path constraints (e.g., "Go to the other room, but don't walk on the carpeting!"). There is only a single agent in the world, precluding instructions that involve cooperation with other agents (e.g., two robots carrying a couch) and instructions that require reasoning about other agents' potential actions (e.g., "Don't go down the alley, because your enemy may block you in.")
The agent has complete perception (clearly unrealistic in real physical domains), so it never has to be told where to look, or asked to notice a feature that it overlooked. In contrast, our instruction protocols show that human students are often told where to attend or what features to notice. Instructo-Soar's world is noise-free, so the agent does not need to reason or receive instruction about failed actions. Because it has complete perception and a noise-free environment, the agent does not explicitly reason about uncertainty in its perceptions or actions, and we have not demonstrated handling instructions that explicitly describe uncertain or probabilistic outcomes. The agent also does not reason about time (as, e.g., Vere and Bickmore's (1990) Homer does), so it cannot be taught to perform tasks in a time-dependent way. It does not keep track of states it has seen or actions it performs (other than its episodic instruction memory), so it cannot be asked to "do what you did before." Similarly, it cannot learn procedures that are defined by a particular sequence of actions, rather than a set of state conditions to achieve. For example, it cannot be taught how to dance, because dancing does not result in a net change to the external world. Finally, whenever the agent does not know what to do next, it asks for more instruction. It never tries to determine a solution through search and weak methods such as means-ends analysis. Adding this capability would decrease its need for instruction.
In addition to the agent's capabilities, Instructo-Soar is limited because its solutions to the mapping, interaction, and transfer problems are incomplete in various ways.
These limitations are discussed next." }, { "figure_ref": [], "heading": "Mapping Problem", "publication_ref": [], "table_ref": [], "text": "Instructo-Soar employs a straightforward approach to mapping instructions into the agent's internal language, and leaves all of the problems of mapping difficult natural language constructions unaddressed. Some of the relevant problems include reference resolution, incompleteness, and the use of domain knowledge in comprehension. Mapping can even require further instruction, as in this interaction to resolve a referent:
> Grab the explosive block.
Which one is that?
> The red one.
This type of interaction is not supported by Instructo-Soar.
In addition to these general linguistic problems, Instructo-Soar makes only limited use of semantic information when learning new operators. For example, when it first reads "Move the red block left of the yellow block," it creates a new operator, but does not make use of the semantic information communicated by "Move...to the left of." A more complete agent would try to glean any information it could from the semantics of an unfamiliar command." }, { "figure_ref": [], "heading": "Interaction Problem", "publication_ref": [ "b56" ], "table_ref": [], "text": "The agent's shortcomings on the interaction problem center on its three requirements: (I1) flexible initiation of instruction, (I2) full flexibility of knowledge content, and (I3) situation flexibility.
(I1): In Instructo-Soar, instruction is initiated only by the agent. This limits the instructor's ability to drive the interaction or to interrupt the agent's actions with instruction: "No! Don't push that button!"
(I2): Instructo-Soar provides flexibility for commands, but not for instructions that communicate other kinds of information. Similar to the notion of discourse coherence (Mann & Thompson, 1988), a fully flexible tutorable agent needs to support any instruction event with knowledge coherence; that is, any instruction event delivering knowledge that makes sense in the current context. The great variety of knowledge that could be relevant at any point makes this requirement difficult.
(I3): Instructo-Soar provides situation flexibility by handling both implicitly and explicitly situated instructions, but hypothetical situations can only be referred to within a single instruction. Human tutors often refer to one hypothetical situation over the course of multiple instructions." }, { "figure_ref": [], "heading": "Transfer Problem", "publication_ref": [ "b54", "b62", "b72", "b74" ], "table_ref": [], "text": "This work has focused primarily on the transfer problem, producing general learning from tutorial instruction, and most of its requirements have been met. However, the inductive heuristics that Instructo-Soar uses are not very powerful.
In addition, two transfer problem requirements have not been achieved. First, (T7) Instructo-Soar has not yet demonstrated instructional learning in coexistence with learning from other knowledge sources. Nothing in Instructo-Soar's theory precludes this coexistence, however. Learning from other knowledge sources could be invoked and possibly enhanced through instruction. For instance, an instructor might invoke learning from observation by pointing to a set of objects and saying "This is a tower"; similarly, an instruction containing a metaphor could invoke analogical learning.
One application where instruction could potentially enhance other learning mechanisms is within "personal assistant" software agents that learn by observing their users (e.g., Maes, 1994; Mitchell et al., 1994). Adding the ability to learn from verbal instructions in addition to observations would allow users to explicitly train these agents in situations where learning from observation alone may be difficult or slow.
Second, (T6) Instructo-Soar cannot recover from incorrect knowledge that leads to either invalid explanations or incorrect external performance. Such incorrect knowledge may be a part of the agent's initial domain theory, or may be learned through faulty instruction. Inability to recover from incorrect knowledge precludes instruction by general case and exceptions; for instance, "Never grasp red blocks," and then later, "It's ok to grasp the ones with safety signs on them." In order to avoid learning anything incorrect, whenever Instructo-Soar attempts to induce new knowledge, it asks for the instructor's verification before adding the knowledge to its long-term memory. Human students do not ask for so much verification; they appear to jump to conclusions, and alter them later if they prove to be incorrect based on further information.
Rather than always verifying knowledge being learned, our next generation of instructable agents will learn from reasonable inferences without verification (although they may ask for verifications in extreme cases). We have recently produced such an agent (Pearson & Huffman, 1995) that incorporates current research on incremental recovery from incorrect knowledge (Pearson & Laird, 1995). This agent learns to correct overgeneral knowledge that it infers when completing explanations of instructions. The correction process is triggered when using the overgeneral knowledge results in incorrect performance (e.g., an action that the agent expects to succeed does not). In the long run, we believe this work could push research on incremental theory revision and error recovery, because instructable agents can be taught many types of knowledge that may need revision." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Although much work in machine learning aims for depth at a particular kind of learning, Instructo-Soar demonstrates breadth: interaction with an instructor to learn a variety of types of knowledge, all arising from one underlying technique. This kind of breadth is crucial in building an instructable agent because of the great variety of instructions and the variety of knowledge that they can communicate. Because instructable agents begin with some basic knowledge of their domain, Instructo-Soar uses an analytic, explanation-based approach to learn from instructions, which makes use of that knowledge. Because instructions may be either implicitly or explicitly situated, Instructo-Soar situates its explanations of each instruction within the situation indicated by the instruction. Finally, because the agent's knowledge is often deficient for explaining instructions, Instructo-Soar employs four different options for dealing with incomplete explanations, and selects between these options dynamically depending on the instructional context.
Because of its availability and effectiveness, tutorial instruction is potentially a powerful knowledge source for intelligent agents. Instructo-Soar illustrates this in a simple domain.
Realizing instruction's potential in fielded applications will require more linguistically able agents that incorporate robust techniques for not only acquiring knowledge from instruction, but also refining that knowledge as needed based on performance and further instruction." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was performed while the first author was a graduate student at the University of Michigan. It was sponsored by NASA/ONR under contract NCC 2-517, and by a University of Michigan Predoctoral Fellowship. Thanks to Paul Rosenbloom, Randy Jones, and our anonymous reviewers for helpful comments on earlier drafts." } ]
[ { "authors": "N Akatsuka", "journal": "Cambridge Univ. Press", "ref_id": "b0", "title": "Conditionals are discourse-bound", "year": "1986" }, { "authors": "R Alterman; R Zito-Wolf; T Carpenter", "journal": "Journal of the Learning Sciences", "ref_id": "b1", "title": "Interaction, comprehension, and instruction usage", "year": "1991" }, { "authors": "J R Anderson", "journal": "Harvard University Press", "ref_id": "b2", "title": "The architecture of cognition", "year": "1983" }, { "authors": "F Bergadano; A Giordana", "journal": "", "ref_id": "b3", "title": "A knowledge intensive approach to concept induction", "year": "1988" }, { "authors": "W Birmingham; G Klinker", "journal": "The Knowledge Engineering Review", "ref_id": "b4", "title": "Knowledge acquisition tools with explicit problemsolving methods", "year": "1993" }, { "authors": "W Birmingham; D Siewiorek", "journal": "Knowledge Acquisition", "ref_id": "b5", "title": "Automated knowledge acquisition for a computer hardware synthesis system", "year": "1989" }, { "authors": "B S Bloom", "journal": "Educational Researcher", "ref_id": "b6", "title": "The 2 sigma problem: The search for methods of group instruction as e ective as one-to-one tutoring", "year": "1984" }, { "authors": "R J Brachman", "journal": "Beranek and Newman Inc", "ref_id": "b7", "title": "An introduction to KL-ONE", "year": "1980" }, { "authors": "J G Carbonell; Y Gil", "journal": "", "ref_id": "b8", "title": "Learning by experimentation", "year": "1987" }, { "authors": "J G Carbonell; R S Michalski; T M Mitchell", "journal": "Morgan Kaufmann", "ref_id": "b9", "title": "An overview of machine learning", "year": "1983" }, { "authors": "T Carpenter; R Alterman", "journal": "", "ref_id": "b10", "title": "A reading agent", "year": "1994" }, { "authors": "D Chapman", "journal": "", "ref_id": "b11", "title": "Vision, Instruction, and Action", "year": "1990" }, { "authors": "M T H Chi; M Bassok; M W Lewis; P Reimann; R Glaser", "journal": "Cognitive Science", "ref_id": "b12", "title": "Selfexplanations: How students study and use examples in learning to solve problems", "year": "1989" }, { "authors": "A Cypher", "journal": "MIT Press", "ref_id": "b13", "title": "Watch what I do: Programming by demonstration", "year": "1993" }, { "authors": "R Davis", "journal": "Arti cial Intelligence", "ref_id": "b14", "title": "Interactive transfer of expertise: Acquisition of new inference rules", "year": "1979" }, { "authors": "G F Dejong; R J Mooney", "journal": "Machine Learning", "ref_id": "b15", "title": "Explanation-based learning: An alternative view", "year": "1986" }, { "authors": "L Dent; J Boticario; J Mcdermott; T Mitchell; D Zabowski", "journal": "", "ref_id": "b16", "title": "A personal learning apprentice", "year": "1992" }, { "authors": "B Dieugenio", "journal": "", "ref_id": "b17", "title": "Understanding natural language instructions: A computational approach to purpose clauses", "year": "1993" }, { "authors": "B Dieugenio; B Webber", "journal": "", "ref_id": "b18", "title": "Plan recognition in understanding instructions", "year": "1992" }, { "authors": "S K Donoho; D C Wilkins", "journal": "", "ref_id": "b19", "title": "Exploiting the ordering of observed problem-solving steps for knowledge ase re nement: An apprenticeship approach", "year": "1994" }, { "authors": "M Drummond", "journal": "", "ref_id": "b20", "title": "Situated control rules", "year": "1989" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b21", "title": "", "year": "" }, { "authors": "C 
Emihovich; G E Miller", "journal": "Discourse Processes", "ref_id": "b22", "title": "Talking to the turtle: A discourse analysis of Logo instruction", "year": "1988" }, { "authors": "L Eshelman; D Ehret; J Mcdermott; M Tan", "journal": "International Journal of Man-Machine Studies", "ref_id": "b23", "title": "MOLE: A tenacious knowledgeacquisition tool", "year": "1987" }, { "authors": "R E Fikes; P E Hart; N J Nilsson", "journal": "Arti cial Intelligence", "ref_id": "b24", "title": "Learning and executing generalized robot plans", "year": "1972" }, { "authors": "C A Ford; S A Thompson", "journal": "Cambridge Univ. Press", "ref_id": "b25", "title": "Conditionals in discourse: A text-based study from English", "year": "1986" }, { "authors": "R E Frederking", "journal": "Kluwer Academic Press", "ref_id": "b26", "title": "Integrated natural language dialogue: A computational model", "year": "1988" }, { "authors": "A Golding; P S Rosenbloom; J E Laird", "journal": "", "ref_id": "b27", "title": "Learning search control from outside guidance", "year": "1987" }, { "authors": "B J Grosz", "journal": "", "ref_id": "b28", "title": "The Representation and use of focus in dialogue understanding", "year": "1977" }, { "authors": "T Gruber", "journal": "Machine Learning", "ref_id": "b29", "title": "Automated knowledge acquisition for strategic knowledge", "year": "1989" }, { "authors": "R V Guha; D B Lenat", "journal": "AI Magazine", "ref_id": "b30", "title": "Cyc: A mid-term report", "year": "1990" }, { "authors": "N Haas; G G Hendrix", "journal": "Morgan Kaufmann. Haiman, J", "ref_id": "b31", "title": "Learning by being told: Acquiring knowledge for information management", "year": "1978" }, { "authors": "R J Hall", "journal": "Machine Learning", "ref_id": "b32", "title": "Learning by failing to explain", "year": "1988" }, { "authors": "F Hayes-Roth; P Klahr; D J Mostow", "journal": "", "ref_id": "b33", "title": "Advice taking and knowledge re nement: An iterative view of skill acquisition", "year": "1981" }, { "authors": "S B Hu Man", "journal": "", "ref_id": "b34", "title": "Instructable autonomous agents", "year": "1994" }, { "authors": "S B Hu Man; J E Laird", "journal": "SPIE", "ref_id": "b35", "title": "Dimensions of complexity in learning from interactive instruction", "year": "1992" }, { "authors": "S B Hu Man; J E Laird", "journal": "", "ref_id": "b36", "title": "Learning procedures from interactive natural language instructions", "year": "1993" }, { "authors": "S B Hu Man; J E Laird", "journal": "", "ref_id": "b37", "title": "Learning from highly exible tutorial instruction", "year": "1994" }, { "authors": "S B Hu Man; C S Miller; J E Laird", "journal": "", "ref_id": "b38", "title": "Learning from instruction: A knowledgelevel capability within a uni ed theory of cognition", "year": "1993" }, { "authors": "P N Johnson-Laird", "journal": "Cambridge Univ. 
Press", "ref_id": "b39", "title": "Conditionals and mental models", "year": "1986" }, { "authors": "R M Jones; M Tambe; J E Laird; P S Rosenbloom", "journal": "", "ref_id": "b40", "title": "Intelligent automated agents for ight training simulators", "year": "1993" }, { "authors": "M A Just; P A Carpenter", "journal": "", "ref_id": "b41", "title": "Verbal comprehension in instructional situations", "year": "1976" }, { "authors": "", "journal": "Lawrence Erlbaum Associates", "ref_id": "b42", "title": "Cognition and Instruction", "year": "" }, { "authors": "D E Kieras; S Bovair", "journal": "Cognitive Science", "ref_id": "b43", "title": "The role of a mental model in learning to operate a device", "year": "1984" }, { "authors": "Y Kodrato; G Tecuci", "journal": "", "ref_id": "b44", "title": "DISCIPLE-1: Interactive apprentice system in weak theory elds", "year": "1987" }, { "authors": "Y Kodrato; G Tecuci", "journal": "International Journal of Expert Systems", "ref_id": "b45", "title": "Techniques of design and DISCIPLE learning apprentice", "year": "1987" }, { "authors": "J E Laird; C B Congdon; E Altmann; R Doorenbos", "journal": "version", "ref_id": "b46", "title": "Soar user's manual", "year": "1993" }, { "authors": "J E Laird; M Hucka; E S Yager; C M Tuck", "journal": "", "ref_id": "b47", "title": "Correcting and extending domain knowledge using outside guidance", "year": "1990" }, { "authors": "J E Laird; A Newell; P S Rosenbloom", "journal": "Arti cial Intelligence", "ref_id": "b48", "title": "Soar: An architecture for general intelligence", "year": "1987" }, { "authors": "J E Laird; P S Rosenbloom", "journal": "AAAI Press", "ref_id": "b49", "title": "Integrating execution, planning, and learning in Soar for external environments", "year": "1990" }, { "authors": "C Lewis", "journal": "Cognitive Science", "ref_id": "b50", "title": "Why and how to learn why: Analysis-based generalization of procedures", "year": "1988" }, { "authors": "R L Lewis", "journal": "", "ref_id": "b51", "title": "An Architecturally-Based Theory of Human Sentence Comprehension", "year": "1993" }, { "authors": "R L Lewis; A Newell; T A Polk", "journal": "", "ref_id": "b52", "title": "Toward a Soar theory of taking instructions for immediate reasoning tasks", "year": "1989" }, { "authors": "R K Lindsay", "journal": "Oldenbourg KG", "ref_id": "b53", "title": "Inferential memory as the basis of machines which understand natural language", "year": "1963" }, { "authors": "P Maes", "journal": "Communications of the ACM", "ref_id": "b54", "title": "Agents that reduce work and information overload", "year": "1994" }, { "authors": "P Maes; R Kozierok", "journal": "", "ref_id": "b55", "title": "Learning interface agents", "year": "1993" }, { "authors": "W C Mann; S A Thompson", "journal": "Text", "ref_id": "b56", "title": "Rhetorical structure theory: Toward a functional theory of text organization", "year": "1988" }, { "authors": "S Marcus; J Mcdermott", "journal": "Arti cial Intelligence", "ref_id": "b57", "title": "SALT: A knowledge acquisition language for proposeand-revise systems", "year": "1989" }, { "authors": "C E Martin; R J Firby", "journal": "", "ref_id": "b58", "title": "Generating natural language expectations from a reactive execution system", "year": "1991" }, { "authors": "J Mccarthy", "journal": "MIT Press", "ref_id": "b59", "title": "The advice taker", "year": "1968" }, { "authors": "C M Miller", "journal": "", "ref_id": "b60", "title": "A model of concept acquisition in the context of a uni ed theory of 
cognition", "year": "1993" }, { "authors": "S Minton; J G Carbonell; C A Knoblock; D R Kuokka; O Etzioni; Y Gil", "journal": "Arti cial Intelligence", "ref_id": "b61", "title": "Explanation-based learning: A problem-solving perspective", "year": "1989" }, { "authors": "T Mitchell; R Caruana; D Freitag; J Mcdermott; D Zabowski", "journal": "Communications of the ACM", "ref_id": "b62", "title": "Experience with a learning personal assistant", "year": "1994" }, { "authors": "T M Mitchell; R M Keller; S T Kedar-Cabelli", "journal": "Machine Learning", "ref_id": "b63", "title": "Explanation-based generalization: A unifying view", "year": "1986" }, { "authors": "T M Mitchell; S Mahadevan; L I Steinberg", "journal": "Morgan Kaufmann", "ref_id": "b64", "title": "LEAP: A learning apprentice system for VLSI design", "year": "1990" }, { "authors": "R J Mooney", "journal": "Cognitive Science", "ref_id": "b65", "title": "Learning plan schemata from observation: Explanation-based learning for plan recognition", "year": "1990" }, { "authors": "D J Mostow", "journal": "", "ref_id": "b66", "title": "Learning by being told: Machine transformation of advice into a heuristic search procedure", "year": "1983" }, { "authors": "A Newell", "journal": "AI Magazine", "ref_id": "b67", "title": "The knowledge level", "year": "1981" }, { "authors": "A Newell", "journal": "Harvard University Press", "ref_id": "b68", "title": "Uni ed Theories of Cognition", "year": "1990" }, { "authors": "A Newell; G Yost; J E Laird; P S Rosenbloom; E Altmann", "journal": "", "ref_id": "b69", "title": "Formulating the problem space computational model", "year": "1990" }, { "authors": "M Pazzani", "journal": "Cognitive Science", "ref_id": "b70", "title": "A computational theory of learning causal relationships", "year": "1991" }, { "authors": "M Pazzani", "journal": "Journal of the Learning Sciences", "ref_id": "b71", "title": "Learning to predict and explain: An integration of similarity-based, theory driven, and explanation-based learning", "year": "1991" }, { "authors": "D J Pearson; S B Hu Man", "journal": "", "ref_id": "b72", "title": "Combining learning from instruction with recovery from incorrect knowledge", "year": "1995" }, { "authors": "D J Pearson; S B Hu Man; M B Willis; J E Laird; R M Jones", "journal": "IEEE Robotics and Autonomous Systems", "ref_id": "b73", "title": "A symbolic solution to intelligent real-time control", "year": "1993" }, { "authors": "D J Pearson; J E Laird", "journal": "Oxford University Press", "ref_id": "b74", "title": "Toward incremental knowledge correction for agents in complex environments", "year": "1995" }, { "authors": "B W Porter; R Bareiss; R C Holte", "journal": "Arti cial Intelligence", "ref_id": "b75", "title": "Concept learning and heuristic classication in weak-theory domains", "year": "1990" }, { "authors": "B W Porter; D F Kibler", "journal": "Machine Learning", "ref_id": "b76", "title": "Experimental goal regression: A method for learning problem-solving heuristics", "year": "1986" }, { "authors": "M A Redmond", "journal": "", "ref_id": "b77", "title": "Learning by observing and understanding expert problem solving", "year": "1992" }, { "authors": "P S Rosenbloom; J Aasman", "journal": "", "ref_id": "b78", "title": "Knowledge level and inductive uses of chunking (EBL)", "year": "1990" }, { "authors": "P S Rosenbloom; J E Laird", "journal": "", "ref_id": "b79", "title": "Mapping explanation-based generalization onto Soar", "year": "1986" }, { "authors": "P S Rosenbloom; J E Laird; A 
Newell", "journal": "Academic Press", "ref_id": "b80", "title": "The chunking of skill and knowledge", "year": "1988" }, { "authors": "", "journal": "MIT Press", "ref_id": "b81", "title": "The Soar Papers: Research on integrated intelligence", "year": "1993" }, { "authors": "", "journal": "MIT Press", "ref_id": "b82", "title": "The Soar Papers: Research on integrated intelligence", "year": "1993" }, { "authors": "P S Rosenbloom; A Newell", "journal": "MIT Press", "ref_id": "b83", "title": "The chunking of goal hierarchies: A generalized model of practice", "year": "1986" }, { "authors": "M D Rychener", "journal": "", "ref_id": "b84", "title": "The instructible production system: A retrospective analysis", "year": "1983" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b85", "title": "", "year": "" }, { "authors": "J Sandberg; B Wielinga", "journal": "", "ref_id": "b86", "title": "How situated is cognition?", "year": "1991" }, { "authors": "R C Schank", "journal": "American Elsevier", "ref_id": "b87", "title": "Conceptual Information Processing", "year": "1975" }, { "authors": "R C Schank; D B Leake", "journal": "Arti cial Intelligence", "ref_id": "b88", "title": "Creativity and learning in a case-based explainer", "year": "1989" }, { "authors": "A M Segre", "journal": "", "ref_id": "b89", "title": "A learning apprentice system for mechanical assembly", "year": "1987" }, { "authors": "W Shen", "journal": "Machine Learning", "ref_id": "b90", "title": "Discovery as autonomous learning from the environment", "year": "1993" }, { "authors": "H A Simon", "journal": "", "ref_id": "b91", "title": "Arti cial intelligence systems that understand", "year": "1977" }, { "authors": "H A Simon; J R Hayes", "journal": "Lawrence Erlbaum Associates", "ref_id": "b92", "title": "Understanding complex task instructions", "year": "1976" }, { "authors": "M K Singley; J R Anderson", "journal": "Harvard University Press", "ref_id": "b93", "title": "The transfer of cognitive skill", "year": "1989" }, { "authors": "R S Sutton; B Pinette", "journal": "", "ref_id": "b94", "title": "The learning of world models by connectionist networks", "year": "1985" }, { "authors": "S B Thrun; T M Mitchell", "journal": "", "ref_id": "b95", "title": "Integrating inductive neural network learning and explanation-based learning", "year": "1993" }, { "authors": "K Vanlehn", "journal": "Arti cial Intelligence", "ref_id": "b96", "title": "Learning one subprocedure per lesson", "year": "1987" }, { "authors": "K Vanlehn; W Ball; B Kowalski", "journal": "", "ref_id": "b97", "title": "Explanation-based learning of correctness: Towards a model of the self-explanation e ect", "year": "1990" }, { "authors": "K Vanlehn; R Jones", "journal": "", "ref_id": "b98", "title": "Learning physics via explanation-based learning of correctness and analogical search control", "year": "1991" }, { "authors": "K Vanlehn; R M Jones; M T H Chi", "journal": "Journal of the Learning Sciences", "ref_id": "b99", "title": "A model of the self-explanation e ect", "year": "1992" }, { "authors": "S Vere; T Bickmore", "journal": "Computational Intelligence", "ref_id": "b100", "title": "A basic agent", "year": "1990" }, { "authors": "J V Wertsch", "journal": "Human Development", "ref_id": "b101", "title": "From social interaction to higher psychological processes: A clarication and application of Vygotsky's theory", "year": "1979" }, { "authors": "G Widmer", "journal": "", "ref_id": "b102", "title": "A tight integration of deductive and inductive learning", 
"year": "1989" }, { "authors": "D C Wilkins", "journal": "", "ref_id": "b103", "title": "Knowledge base re nement as improving an incomplete and incorrect domain theory", "year": "1990" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b104", "title": "", "year": "" }, { "authors": "T Winograd", "journal": "Academic Press", "ref_id": "b105", "title": "Understanding Natural Language", "year": "1972" }, { "authors": "D Wood; J S Bruner; G Ross", "journal": "Journal of Child Psychology and Psychiatry", "ref_id": "b106", "title": "The role of tutoring in problem solving", "year": "1976" }, { "authors": "G R Yost", "journal": "IEEE Expert", "ref_id": "b107", "title": "Acquiring knowledge in Soar", "year": "1993" }, { "authors": "G R Yost; A Newell", "journal": "", "ref_id": "b108", "title": "A problem space approach to expert system speci cation", "year": "1989" } ]
[]
Flexibly Instructable Agents
This paper presents an approach to learning from situated, interactive tutorial instruction within an ongoing agent. Tutorial instruction is a flexible (and thus powerful) paradigm for teaching tasks because it allows an instructor to communicate whatever types of knowledge an agent might need in whatever situations might arise. To support this flexibility, however, the agent must be able to learn multiple kinds of knowledge from a broad range of instructional interactions. Our approach, called situated explanation, achieves such learning through a combination of analytic and inductive techniques. It combines a form of explanation-based learning that is situated for each instruction with a full suite of contextually guided responses to incomplete explanations. The approach is implemented in an agent called Instructo-Soar that learns hierarchies of new tasks and other domain knowledge from interactive natural language instructions. Instructo-Soar meets three key requirements of flexible instructability that distinguish it from previous systems: (1) it can take known or unknown commands at any instruction point; (2) it can handle instructions that apply to either its current situation or to a hypothetical situation specified in language (as in, for instance, conditional instructions); and (3) it can learn, from instructions, each class of knowledge it uses to perform tasks.
Scott B Huffman; John E Laird
[ { "figure_caption": "is off then turn it on. ... Oh, I see! What next? Ok. What next? Ok. What next? How do I do that? That's a new one. How do I do that? Push the green button. Move to the grey table.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: An example of tutorial instruction.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The processing of a PSCM-based agent. Triangles represent problem spaces; squares, states; arrows, operators; and ovals, impasses.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Options when faced with an incomplete explanation because of missing knowledge M K .", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Instructions given to Instructo-Soar to teach it to pick up a block.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Instructions teaching a new operator cannot be explained before the termination conditions of the new operator are learned.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Multiple step projections can result in incomplete explanations due to compounding of errors in domain knowledge.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Decision cycles versus execution number to learn to (a) pick up and (b) move objects left of one another, using the lazy/single-step strategy.", "figure_data": "", "figure_id": "fig_7", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "12 ", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: The use of the OP-to-G-path heuristic, with OP \\move to the yellow table,\"", "figure_data": "", "figure_id": "fig_9", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Entity Knowledge type Example state inference If gripper is closed & directly above obj ! holding obj. operator proposal If goal is to pick up obj on table-x, and not docked at tablex, then propose moving to table-x. operator control If goal is to pick up small metal obj on table-x, prefer moving to table-x over fetching magnet. operator e ects An e ect of the operator move to table-x is that the robot becomes docked at table-x. operator termination Termination conditions of pick up obj are that the gripper is raised & holding obj.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The ve types of knowledge of PSCM agents.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Expanded requirements of tutorial instruction met by Instructo-Soar.", "figure_data": "T 1 General learning from speci c cases T 2 Fast learning (each task instructed only once) T 3 Maximal use of prior knowledge T 4 Incremental learning T 5 Knowledge-type exibility a. state inference b. operator proposal c. operator control d. operator e ects e. 
operator termination I 2 Command exibility a. known command b. skipped steps c. unknown command I 3 Situation exibility a. implicitly situated b. explicitly situated: hypothetical state hypothetical goal", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Pick up the red block. Move to the yellow table. Move the arm above the red block.", "figure_data": "Move up.Move down.Close the hand.Move up.The operator is finished.", "figure_id": "tab_4", "figure_label": ",", "figure_type": "table" }, { "figure_caption": "", "figure_data": "pick up (block)move to table (table)move-left-of(arm,block1)put down (block)put down (blockX)grasp (block)move arm upmove arm downopen gripper(lg. metal)grasp (magnet)move above (block)move arm down(small)move to table (table)move above (blk/mag)move arm downclose grippermove arm up", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Instructional capabilities demonstrated by Instructo-Soar.", "figure_data": "Instructional capabilityExamplepick up 2. Extending a procedure to apply in a new situation move up to move above 1. Learning completely new procedures 3. Hierarchical instruction: handling instructions for a teaching pick up within line up procedure embedded in instruction for others 4. Altering induced knowledge based on further instruction removing docked-at from pick up's termination conditions 5. Learning procedures inductively when domain knowledge is incomplete learning with secondary operator e ects knowledge removed 6. Learning to avoid prohibited actions \\Never grasp red blocks.\" 7. More general learning due to further instruction Avoid grasping because \\Red blocks are explosive.\" 8. Learning to avoid indirect achievement of a bad state closing hand around explosive block 9. Inferences from simple speci c statements \\The grey block is metal.\" 10. Inferences from simple generic statements \\White magnets are powered.\" 11. Inferences from conditionals \\if condition and condition]* then concluded state feature\" 12. Learning an operator to perform for a hypothetical goal \\To turn on the light, push the red button.\" 13. Learning an operator to perform in a hypothetical state: general policy (active at all times) \\If the light is bright, then dim the light.\" 14. Learning an operator to perform in a hypothetical state: contingency within a particular procedure \\If the block is metal, then grasp the magnet\" to pick up 15. Learning operator e ects pushing the red button turns on the light 16. Learning non-perceivable operator e ects and asso-ciated inferences to recognize them the magnet becomes stuck-to a metal block when moved above it 17. Learning control knowledge: learning which of a set of operators to prefer two ways to grasp a small metal block 18. Learning control knowledge: learning operators are indi erent two ways to grasp a small metal block", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b31", "b20" ], "table_ref": [], "text": "Many artificial intelligence problems involve search. Consequently, the development of appropriate search algorithms is central to the advancement of the field. Due to the complexity of the search spaces involved, heuristic search is often employed. However, heuristic algorithms cannot guarantee that they will find the targets they seek. In contrast, an admissible search algorithm is one that is guaranteed to uncover the nominated target, if it exists (Nilsson, 1971). This greater utility is usually obtained at a significant computational cost.\nThis paper describes the OPUS (Optimized Pruning for Unordered Search) family of search algorithms. These algorithms provide efficient admissible search of search spaces in which the order of application of search operators is not significant. This search efficiency is achieved by the use of branch and bound techniques that employ domain specific pruning rules to provide a tightly focused traversal of the search space.\nWhile these algorithms have wide applicability, both within and beyond the scope of artificial intelligence, this paper focuses on their application in classification learning. Of particular significance, it is demonstrated that the algorithms can efficiently process many common classification learning problems. This contrasts with the seemingly widespread assumption that the sizes of the search spaces involved in machine learning require the use of heuristic search.\nThe use of admissible search is of potential value in machine learning as it enables better experimental evaluation of alternative learning biases. Search is used in machine learning in an attempt to uncover classifiers that satisfy a learning bias. When heuristic search is used it is difficult to determine whether the search technique introduces additional implicit biases that cannot be properly identified. Such implicit biases may confound experimental results. In contrast, if admissible search is employed the experimenter can be assured that c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.\nthe search technique is not introducing confounding unidentified implicit biases into the experimental situation.\nThe use of OPUS for admissible search has already led to developments in machine learning that may not otherwise have been possible. In particular, Webb (1993) compared classifiers developed through true optimization of Laplace accuracy estimate with those obtained through heuristic search that sought but failed to optimize this measure. In general, the latter proved to have higher predictive accuracy than the former. This surprising result, that could not have been obtained without the use of admissible search, led Quinlan and Cameron-Jones (1995) to develop a theory of oversearching.\nThis paper offers two distinct contributions to the fields of computing, artificial intelligence and machine learning. First, it offers a new efficient admissible search algorithm for unordered search. Second, it demonstrates that admissible search is possible for a range of machine learning tasks that were previously thought susceptible only to efficient exploration through non-admissible heuristic search." 
}, { "figure_ref": [ "fig_0" ], "heading": "Unordered Search Spaces", "publication_ref": [ "b15", "b12", "b23", "b22", "b5", "b3", "b14", "b23", "b28", "b30", "b18" ], "table_ref": [], "text": "For most search problems, the order in which operators are applied is significant. For example, when attempting to stack blocks it matters whether the red block is placed on the blue block before or after the blue block is placed on the green. When attempting to navigate from point A to point B, it is not possible to move from point C to point B before moving to point C. However, for some search problems, the order in which operators are applied is not significant. For example, when searching through a space of logical expressions, the effect of conjoining expression A with expression B and then conjoining the result with expression C is identical to the result obtained by conjoining A with C followed by B. Both sequences of operations result in expressions with equivalent meaning. In general, a search space is unordered if for any sequence O of operator applications and any state S, all states that can be reached from S by a permutation of O are identical. It is this type of search problem, search through unordered search spaces, that is the subject of this investigation. Special cases of search through unordered search spaces are provided by the subset selection (Narendra & Fukunaga, 1977) and minimum test-set (Moret & Shapiro, 1985) search problems. Subset selection involves the selection of a subset of objects that maximizes an evaluation criterion. The minimum test-set problem involves the selection of a set of tests that maximizes an evaluation criterion. Such search problems are encountered in many domains including machine learning, truth maintenance and pattern recognition. Rymon (1992) has demonstrated that Reiter's (1987) and de Kleer, Mackworth, and Reiter's (1990) approaches to diagnosis can be recast as subset selection problems.\nThe OPUS algorithms traverse the search space using a search tree. The root of the search tree is an initial state. Branches denote the application of search operators and the nodes that they lead to denote the states that result from the application of those operators. Different variants of OPUS are suited to each of optimization search and satisficing search. For optimization search, a goal state is an optimal solution. For satisficing search, a goal state is an acceptable solution. It is possible that a search space may include multiple goal states.\nThe OPUS algorithms take advantage of the properties of unordered search spaces to optimize the effect of any pruning of the search tree that may occur. In particular, when expanding a node n in a search tree, the OPUS algorithms seek to identify search operators that can be excluded from consideration in the search tree descending from n without excluding a sole goal node from that search tree. The OPUS algorithms differ from most previous admissible search algorithms employed in machine learning (Clearwater & Provost, 1990;Murphy & Pazzani, 1994;Rymon, 1992;Segal & Etzioni, 1994;Webb, 1990) in that when such operators are identified, they are removed from consideration in all branches of the search tree that descend from the current node. 
In contrast, the other algorithms only remove a single branch at a time without altering the operators considered below sibling branches, thereby pruning fewer nodes from the search space.\nIf it is not possible to apply an operator more than once on a path through the search space, search with unordered operators can be considered to be a subset selection problem-select a subset of operators whose application (in any order) leads to a goal state. If a single operator may be applied multiple times on a single path through the search space, search with unordered operators can be considered as a sub-multiset selection problem-select the multiset of operators whose application leads to the desired result.\nA search tree that traverses an unordered search space in which multiple applications of a single operator are not allowed may be envisioned as in Figure 1 (Figure 1: Simple unordered operator search tree). This example includes four search operators, named a, b, c and d. Each node in the search tree is labeled by the set of operators by which it is reached. Thus, the initial state is labeled with the empty set. At depth one are all sets containing a single operator, at depth two all sets containing two operators and so on, up to depth four. Any two nodes with identical labels represent equivalent states.\nThere is considerable duplication of nodes in this search tree (the label {a, b, c, d} occurs 24 times). In Figure 1 (and the following figures), the number of unique nodes is listed below each depth of the search tree. Where this number can be derived from the number of combinations to be considered, this derivation is also indicated.\nIt is common during search to prune regions of the search tree on the basis of investigations that determine that a goal state cannot lie within those regions. Figure 2 shows a search tree with the sub-tree below {c} pruned. Note that, due to the duplication inherent in such a search tree, the number of unique nodes remaining in the tree is identical to that in the unpruned tree. However, if it has been deemed that no node descending from {c} may be a goal, then all nodes elsewhere in the search tree that have identical labels (are reached via identical sets of operator applications) to any nodes that occur in the pruned region of the tree could also be pruned. Figure 3 shows the search tree remaining when all nodes below {c} and all their duplicates have been deleted. It can be seen that the number of unique nodes in the remaining search tree (the tree at depths 2, 3 and 4) has been pruned by more than half. Similar results are obtained in the case where multiple applications of a single operator are allowed and the nodes are consequently labeled with multisets.\nThe OPUS algorithms do not provide pruning rules-mechanisms for identifying sections of the search tree that may be pruned. Rather, they take pruning rules as input and seek to optimize the effect of each pruning action that results from application of those rules.\nThe OPUS algorithms were designed for use with admissible pruning rules. When used solely with admissible pruning rules the algorithms are admissible. That is, they are guaranteed to find a goal state if one exists in the search space. However, the algorithms may also be used with non-admissible pruning heuristics to obtain efficient non-admissible search.\nThe OPUS algorithms are not only admissible (when used with admissible pruning rules), they are systematic (Pearl, 1984).
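Before moving on, the duplicate-elimination arithmetic of Figures 1 to 3 can be made concrete with a minimal Python sketch (illustrative only, not code from the paper). It counts the unique states at each depth before and after deleting every state whose label contains the pruned operator c:

```python
from itertools import combinations

ops = ['a', 'b', 'c', 'd']

# Unique states at depth k are exactly the k-element subsets of the operators.
for depth in range(len(ops) + 1):
    states = list(combinations(ops, depth))
    surviving = [s for s in states if 'c' not in s]
    print(depth, len(states), len(surviving))
```

At depths 2 to 4 this reports 11 unique states shrinking to 4, confirming that deleting {c} together with all of its duplicates prunes more than half of the deeper tree.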
Systematicity means that, in addition to guaranteeing that a goal will be found if one exists, the algorithms guarantee that no state will be visited more than once during a search (so long as it is not possible to reach a single node by application of different sets of operators)." }, { "figure_ref": [], "heading": "Fixed-order Search", "publication_ref": [ "b3", "b24", "b26", "b28", "b30", "b0", "b26", "b26", "b26" ], "table_ref": [], "text": "A number of recent machine learning algorithms have performed restricted admissible search (Clearwater & Provost, 1990;Rymon, 1993;Schlimmer, 1993;Segal & Etzioni, 1994;Webb, 1990). All of these algorithms are based on an organization of the search tree that, when considering the search problem illustrated in Figures 1 to 3, traverses the search space in the manner depicted in Figure 4. Such an organization is achieved by arranging the operators in a predefined order, and then applying at a node all and only operators that have a higher order than any operator that appears in the path leading to the node. This strategy will be called fixed-order search. (Fixed-order search has also been used for non-admissible search, for example, Buchanan, Feigenbaum, & Lederberg, 1971.)\nFigure 5 illustrates the effect of pruning the sub-tree descending below operator c, under fixed-order search (Figure 5: Effect of pruning under fixed-order search). As can be seen, this is substantially less effective than the optimized pruning illustrated in Figure 3. Schlimmer (1993) ensures that the pruning effect illustrated in Figure 3 is obtained within the efficient search tree organization illustrated in Figure 4, by maintaining an explicit representation of all nodes that are pruned. The resulting search tree is depicted in Figure 6 (Figure 6: 'Optimal' pruning under fixed-order search). This approach requires the considerable computational overhead of identifying and marking all pruned states following every pruning action, and the restrictive storage overhead of maintaining the representation. (One of the search problems tackled below contains 2^162 states. To represent whether a state is pruned requires a single bit. Thus, 2^162 bits would be required to represent the required information for this problem, a requirement well beyond the capacity of computational machinery into the foreseeable future.) Further, it is open to debate whether this approach does truly prune all identified nodes from the search space. Nodes that have been 'pruned' will still need to be generated when encountered in previously unexplored regions of the search tree in order to be checked against the list of pruned nodes.
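The fixed-order organization, and the limited reach of a single pruning action under it, can be sketched as follows (a minimal sketch with assumed names; this is not code from any of the cited systems):

```python
from typing import List, Optional, Tuple

ops = ['a', 'b', 'c', 'd']  # the predefined operator order

def children(path: Tuple[str, ...]) -> List[Tuple[str, ...]]:
    # Apply all and only operators ranked after the last operator on the path.
    start = 0 if not path else ops.index(path[-1]) + 1
    return [path + (op,) for op in ops[start:]]

def count_nodes(path: Tuple[str, ...] = (), prune: Optional[str] = None) -> int:
    # Count the tree below (and including) `path`, optionally discarding the
    # branch introduced at the root by operator `prune`.
    total = 1
    for child in children(path):
        if prune is not None and child == (prune,):
            continue
        total += count_nodes(child, prune)
    return total

print(count_nodes())           # 16: one node per subset of {a, b, c, d}
print(count_nodes(prune='c'))  # 14: only {c} and {c, d} disappear
```

Because c retains its low rank everywhere else, it still appears in combination with every higher-ranked operator ({a, c}, {b, c}, {a, b, c}, ...), which is exactly the duplication that Schlimmer's bookkeeping, and later OPUS, set out to avoid.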
Consider, for example, the node labeled {a} in Figure 5. When expanding this node it will be necessary to generate the node labeled {a, c}, even if this node has been marked as pruned. Only once it is generated is it possible to identify it as a node that has been 'pruned'. This node could in principle be pruned anyway by application of some variant of the technique that identified it as prunable in the first place. Viewed in this light, it can be argued that Schlimmer's (1993) approach does not reduce the number of nodes that must be generated under fixed-order search. All that it saves is the computational cost of determining for some nodes whether they require pruning or not. (This assumes that the optimistic pruning mechanism will be able to determine for any node n from the search space below a pruned node m that n should also be pruned, irrespective of where n is encountered in the search tree. If the optimistic pruning mechanism is deficient in that it cannot do this, then Schlimmer's (1993) approach will increase the amount of true pruning performed to the extent that it overcomes this deficiency.)" }, { "figure_ref": [], "heading": "The Feature Subset Selection Algorithm", "publication_ref": [ "b15", "b26" ], "table_ref": [], "text": "Fixed-order search traverses the search space in a naive manner-the topology of the search tree is determined in advance and takes no account of the efficiency of the resulting search. In contrast, the Feature Subset Selection (FSS) algorithm (Narendra & Fukunaga, 1977) performs branch and bound search in unordered search spaces, traversing the search space so as to visit each state at most once and dynamically organizing the search tree so as to maximize the proportion of the search space placed under unpromising operators. It can be viewed as a form of fixed-order search in which the order is altered at each node of the search tree so as to manipulate the topology of the search tree for the sake of search efficiency. Unlike Schlimmer (1993), the pruning mechanism ensures that nodes that are identified as prunable are not generated. The power of this measure is illustrated by Figure 7 (Figure 7: Pruning under FSS-like search). In this figure, fixed-order search is performed on the simple example problem illustrated in Figures 1 to 6, with the order changed so that the operator to be pruned, c, is placed first. As can be seen, this achieves the amount of pruning achieved by optimal pruning. This effect can be achieved with negligible computational or storage overhead.\nHowever, FSS is severely limited in its applicability as-\n• it is restricted to optimization search;\n• it is restricted to tasks for which each operator may only be applied once (subset selection);\n• it is restricted to search for a single solution;\n• it requires that the values of states in the search space be monotonically decreasing. That is, the value of a state cannot increase as a result of an operator application; and\n• the only form of pruning that it supports is optimistic pruning." }, { "figure_ref": [], "heading": "The OPUS Algorithms", "publication_ref": [ "b8" ], "table_ref": [], "text": "The OPUS algorithms generalize the idea of search space reorganization from FSS. Two variations of OPUS are defined.
OPUS s is a variant for satisficing search (search in which any qualified object is sought). OPUS o is a variant for optimization search (search in which an object that optimizes an evaluation function is sought). Whereas FSS uses node values for pruning, OPUS o uses optimistic evaluation of the search space below a node. This removes the requirement that the values of states in the search space be monotonically decreasing and opens the possibility of performing other types of pruning in addition to optimistic pruning.\nIn the analysis to follow, where comments apply equally to both variants the name OPUS will be employed. When a comment applies to only one variant of the algorithm, it will be distinguished by its respective superscript.\nOPUS uses a branch and bound (Lawler & Wood, 1966) search strategy that traverses the search space in a manner similar to that illustrated in Figure 4 so as to guarantee that no two equivalent nodes in the search space are both visited. However, it organizes the search tree so as to optimize the effect of pruning, achieving the effect illustrated in Figure 6 without any significant computational or storage overhead.\nRather than maintaining an operator order, OPUS maintains at each node, n, the set of operators n.active that can be applied in the search space below n. When the node is expanded, the operators in n.active are examined to determine if any can be pruned. Any operators that can be pruned are removed from n.active. New nodes are then created for each of the operators remaining in n.active and their sets of active operators are initialized so as to ensure that every combination of operators will be considered at only one node in the search tree.\nIt should be kept in mind that it is possible that many states in a search space may be goal states. For satisficing search all states that satisfy a given criterion are goal states. For optimization search, all states that optimize the evaluation criterion are goal states. For efficiency's sake, the OPUS algorithms allow sections of the search space to be pruned even if they contain a goal state, so long as there remain other goal states in the remaining search space." }, { "figure_ref": [ "fig_2" ], "heading": "OPUS s", "publication_ref": [ "b18", "b5" ], "table_ref": [], "text": "The OPUS s algorithm is presented in Figure 8. This description of OPUS s follows the conventions employed in the search algorithm descriptions provided by Pearl (1984).\nThis definition of OPUS s assumes that a single operator cannot be applied more than once along a single path through the search space. If an operator may be applied multiple times, the order of Steps 8a and 8b should be reversed. Unless otherwise specified, the following discussion of OPUS assumes that each operator may be applied at most once along a single path.\nIf it is desired to obtain all solutions that satisfy the search criterion,\n• Step 2 should be altered to exit successfully, returning the set of all solutions;\n• Step 6b should be altered to not exit, but rather to add the current node to the set of solutions; and\n• The domain specific pruning mechanisms employed at Step 7 should also be modified so that no goal state may be pruned from the search space.\nThis form of search could be used in an assumption-based truth maintenance system to find the set of all maximally general consistent assumptions; a sketch of the control structure that these modifications adjust is given below.
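Since the step-by-step listing of Figure 8 does not survive in this copy, the following Python sketch is a hedged reconstruction of the OPUS s control structure from the surrounding prose. The function names, the exact placement of the goal test, and the step correspondences noted in the comments are all assumptions, not the paper's own pseudocode:

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Optional

@dataclass
class Node:
    state: Any                                   # n.state
    active: List[str]                            # n.active
    most_recent_operator: Optional[str] = None   # n.mostRecentOperator

def opus_s(start: Any, operators: List[str],
           apply_op: Callable[[Any, str], Any],
           is_goal: Callable[[Any], bool],
           prune_operator: Callable[[Node, str], bool]) -> Optional[Any]:
    open_nodes = [Node(start, list(operators))]
    while open_nodes:                # exit with failure when OPEN empties (cf. Step 2)
        node = open_nodes.pop()      # selection order deliberately unspecified (cf. Step 3)
        if is_goal(node.state):
            return node.state        # exit with success (cf. Step 6b)
        # Remove operators that cannot lead to a sole remaining goal (cf. Step 7).
        kept = [op for op in node.active if not prune_operator(node, op)]
        # Create a child per surviving operator and divide the remaining operators
        # among the children so that every combination of operators is considered
        # at exactly one node (cf. Steps 8a and 8b).
        for i, op in enumerate(kept):
            open_nodes.append(Node(apply_op(node.state, op), kept[i + 1:], op))
    return None
```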
Such all-solutions search would provide efficient search without the need to maintain and search an explicit database of inconsistent assumptions such as the ATMS no-good set (de Kleer et al., 1990). Unless otherwise specified, the discussion of OPUS below assumes that a single solution is sought. The algorithm does not specify the order in which nodes should be selected for expansion at Step 3. Nodes may be selected at random, by a domain specific selection function, or by a general regime such as depth-first or best-first traversal.\nEach node, n, in the search tree has associated with it three items of information: n.state, the state from the search space that is associated with the node; n.active, the set of operators to be explored in the sub-tree descending from the node; and n.mostRecentOperator, the operator that was applied to the parent node's state to create the current node's state.\nThe order of processing is also unspecified at Steps 7, 8 and 9. Depending upon the domain, practical advantage may be obtained by specific orderings at these steps.\nOPUS s has been used in a machine learning context to search the space of all generalizations that may be formed through deletion of conjuncts from a highly specific classification rule. The goal of this search is to uncover the set of all most general rules that cover identical objects in the training data to those covered by the original rule (Webb, 1994a)." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "OPUS o", "publication_ref": [], "table_ref": [], "text": "A number of changes are warranted if OPUS is to be applied to optimization search. The following definition of OPUS o , a variant of OPUS for optimization search, assumes that two domain specific functions are available. The first of these functions, value(n), returns the value of the state for node n, such that the higher the value returned, the higher the preference for the state. The second function, optimisticValue(n, o), returns a value such that if there exists a node, b, that can be created by application of any combination of operators in the set of operators o to the state for node n, and b represents a best solution (maximizes value for the search space), optimisticValue(n, o) will be no less than value(b). This is used for pruning sections of the search tree. In general, the lower the values returned by optimisticValue, the greater the efficiency of pruning. At any time, it is possible to prune any node with an optimistic value that is less than or equal to the best value of a node explored to date.\nOPUS o is able to take advantage of the presence of optimistic values to further optimize the effect of pruning beyond that obtained solely by maximizing the proportion of the search space placed under nodes that are immediately pruned. Generalizing a heuristic used in FSS, nodes with lower optimistic values are given more active operators and thus have greater proportions of the search space placed beneath them than nodes with higher optimistic values. This is achieved by the order of processing at Step 9, sketched below. The rationale for this strategy is that the lower the optimistic value the higher the probability that the node and its associated search tree will be pruned before it is expanded.
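A minimal sketch of this allocation strategy (illustrative names only; the example values are made up):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Child:
    most_recent_operator: str
    optimistic: float
    active: List[str] = field(default_factory=list)

def distribute(children: List[Child]) -> List[Child]:
    """Give children with lower optimistic values more active operators,
    so the least promising nodes carry the largest share of the space."""
    ordered = sorted(children, key=lambda c: c.optimistic)
    for i, child in enumerate(ordered):
        child.active = [c.most_recent_operator for c in ordered[i + 1:]]
    return ordered

# The most promising child ends up with no active operators at all:
for c in distribute([Child('p', 1.0), Child('q', 3.0), Child('r', 2.0)]):
    print(c.most_recent_operator, c.active)   # p: ['r', 'q']; r: ['q']; q: []
```

This mirrors the behavior seen in the worked example that follows, where of the two most promising children one receives a single active operator and the other receives none.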
Maximizing the proportion of the search space located below nodes with low optimistic value maximizes the proportion of the search space to be pruned and thus not explicitly explored.\nFigure 9 illustrates this effect with respect to a simple machine learning task-search for a propositional expression that describes the most target examples and no non-target examples. The seven search operators each represent conjunction with a specific proposition: male, female, single, married, young, mid and old, respectively. Search starts from the expression anything. A total of 128 expressions may be formed by conjunction of any combination of these expressions. Twelve objects are defined:\nmale, single, young: TARGET\nmale, single, mid: TARGET\nmale, single, old: TARGET\nmale, married, young: NON-TARGET\nmale, married, mid: NON-TARGET\nmale, married, old: NON-TARGET\nfemale, single, young: NON-TARGET\nfemale, single, mid: NON-TARGET\nfemale, single, old: NON-TARGET\nfemale, married, young: NON-TARGET\nfemale, married, mid: NON-TARGET\nfemale, married, old: NON-TARGET\nOf these objects, the first three are distinguished as targets. The value of an expression is determined by two functions, negCover and posCover. The negCover of an expression is the number of non-target objects that it matches. The posCover of an expression is the number of target objects that it matches. The expression anything matches all objects. The value of an expression is -∞ if negCover is not equal to zero. Otherwise the value equals posCover. This preference function avoids expressions that cover any negative objects and favors, of those expressions that cover no negative objects, those expressions that cover the most positive objects. The optimistic value of a node equals the posCover of the node's expression.\nFigure 9 depicts the nine nodes considered by OPUS o for this search task. For each node the following are listed:\n• the expression;\n• the number of target and the number of non-target objects matched (cover);\n• the value;\n• the potential value; and\n• the operators placed in the node's set of active operators and hence included in the search tree below the node.\nThe search space is traversed as follows. The first node, anything, is expanded, producing its seven children for which values and optimistic values are determined. No node can be pruned as all have potential values greater than the best value so far encountered. The active operators are then distributed, maximizing the proportion of the search space placed below nodes with low optimistic values. Of the two nodes with the highest optimistic values, male and single, one receives no active operator and the other receives the first as its sole active operator. One or the other is then expanded. If it is the one with no active operators, single, no further nodes are generated. Then the other, male, is expanded, generating a single node, male ∧ single, with a value of 3. Immediately this node is generated, all remaining open nodes can be pruned as none has an optimistic value greater than this new maximum value, 3.\nNote that no nodes can be pruned until the node for male ∧ single is considered as, up to that point, no node has been encountered with a lower optimistic value than the best actual value. Consequently, if the search tree was not distributed in accord with potential value, the set of active operators for the node male would be {female, single, married, young, mid, old}.
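As an aside, the goal of this example search can be checked by brute force. The sketch below (not the OPUS algorithm, and with assumed helper names) enumerates all 128 conjunctions and confirms that male ∧ single uniquely attains the maximum value of 3:

```python
from itertools import combinations, product

props = ['male', 'female', 'single', 'married', 'young', 'mid', 'old']
objects = [set(o) for o in product(['male', 'female'],
                                   ['single', 'married'],
                                   ['young', 'mid', 'old'])]
targets = [o for o in objects if {'male', 'single'} <= o]

def value(expr):
    pos = sum(set(expr) <= o for o in targets)
    neg = sum(set(expr) <= o for o in objects if o not in targets)
    return float('-inf') if neg else pos

exprs = [e for k in range(len(props) + 1) for e in combinations(props, k)]
best = max(exprs, key=value)
print(best, value(best))   # ('male', 'single') 3
```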
Returning to the counterfactual: instead of considering a single node when male was expanded, it would be necessary to consider six. If the search space was more complex and continued to depth three or beyond, there would be a commensurate increase in the proportion of the search space explored unnecessarily.\nNote also that the search in this example does not terminate when the goal node is first encountered, as the system cannot determine that it is a goal node until all other nodes that might have higher values have been explored or pruned." }, { "figure_ref": [ "fig_5" ], "heading": "The OPUS o Algorithm", "publication_ref": [], "table_ref": [], "text": "OPUS o , the algorithm for achieving the above effect, can be defined as in Figure 10. Note that optimistic pruning need not be performed at Step 8 as it is performed at Step 10a, irrespective.\nThis definition of OPUS o assumes that a single operator cannot be applied more than once along a single path through the search space. To allow multiple applications of a single operator, the order of Steps 9a and 9b should be reversed.\nThe algorithm could also be modified to identify and return all maximal solutions through a modification similar to that outlined above to allow OPUS s to return all solutions.\nIt is possible to further improve the performance of OPUS o if there is a lower limit on an acceptable solution. Then, the objective of the search is to find a highest valued node so long as that value is greater than a pre-specified minimum. In this case, all nodes whose potential value is less than or equal to the minimum may also be pruned at Step 10a.\nLike OPUS s , OPUS o does not specify the order in which nodes in OPEN should be expanded (Step 4). Selection of a node with the highest optimistic value will minimize the size of the search tree. If there is a single node n that optimizes the optimistic value, the search cannot terminate until n has been expanded. This is because no node with a lower optimistic value may yield a solution with a value higher than the optimistic value of n. However, an expansion of n may yield a solution that has a value higher than other candidates' optimistic values, allowing those other candidates to be discarded without expansion. Thus, selecting a single node with the highest optimistic value is optimal with respect to the number of nodes expanded because it maximizes the number of nodes that may be pruned without expansion. Where multiple nodes all maximize the optimistic value, at least one of these must be expanded before the search can terminate (and then the search will only terminate if expansion of that node leads to a node with a value equal to that optimistic value).\nIn many cases it is more important to consider the number of nodes explored by an algorithm, rather than the number of nodes expanded. A node is explored if it is evaluated. Every time a node is expanded, all of its children will be explored. Many of these children may be pruned, however, and never be expanded. In addition to minimizing the number of nodes expanded, this form of best-first search will also minimize the number of nodes explored (within the constraint that where nodes have equal optimistic values it is not possible to anticipate which one to select in order to minimize the number of nodes explored). This is due to the strategy that the algorithm employs to distribute operators beneath nodes. The nodes that OPUS o expands under best-first search will be those with highest optimistic value.
OPUS o always allocates fewer active operators to a node with higher optimistic value than to a node with lower optimistic value. The number of nodes examined when a node n is expanded equals the number of active operators at n. Hence, the number of nodes examined for those nodes expanded will be minimized (within the constraints of the use only of information that can be derived from the current state and the operators that are active at that state). However, while this best-first approach minimizes the number of nodes expanded, it may not be storage optimal due to the large potential storage overheads. If the storage overhead is of concern, depth-first rather than best-first traversal may be employed, at the cost of a potential increase in the number of nodes that must be expanded. If depth-first search is employed, nodes should be added to OPEN by order of optimistic value at Step 10. This will ensure that nodes open at a single depth will be expanded in a best-first manner, with the benefits outlined above." }, { "figure_ref": [], "heading": "Relation to Previous Search Algorithms", "publication_ref": [ "b15", "b6", "b24" ], "table_ref": [], "text": "OPUS o can be viewed as an amalgamation of FSS (Narendra & Fukunaga, 1977) with A* (Hart, Nilsson, & Raphael, 1968). FSS performs branch and bound search in unordered search spaces, traversing the search space so as to visit each state at most once and dynamically organizing the search tree so as to maximize the proportion of the search space placed under unpromising operators. However, FSS requires that the values of states in the search space be monotonically decreasing. That is, the value of a state cannot increase as a result of an operator application. OPUS o generalizes from FSS by employing both the actual values of states and optimistic evaluation of nodes in the search tree, in a manner similar to A*. In consequence, there are only two minor constraints upon the values of states and the optimistic values of nodes in the search spaces that OPUS o can search. These are the requirements that-\n• for at least one goal state g and for any node n, if g lies below a node n in the search tree, the optimistic value of n be no lower than the value of g; and\n• that any and only states of maximal value be goal states.\nIt follows that OPUS o has wider applicability than FSS. OPUS o also differs from FSS by integrating pruning mechanisms other than optimistic pruning into the search process. This facility is crucial when searching large search spaces such as those encountered in machine learning.\nA further innovation of the OPUS algorithms is the use of the restricted set of operators available at a node in the search tree to enable more focused pruning than would otherwise be the case. There may be circumstances in which it would be possible to reach a goal from the state at a node n, but only through application of operators that are not active at n. The pruning rules are able to take account of the active operators to provide pruning in this context-pruning that would not otherwise be possible.
Similarly, the set of active operators can be used to calculate a tighter estimate of the optimistic value than would otherwise be possible.\nOPUS o differs from A* in the manner in which it dynamically organizes the search tree so as to maximize the proportion of the search space placed under unpromising operators. It also differs from A* in that A* relies upon the value of a node being equivalent to the sum of costs of the operations that lead to that node, whereas OPUS allows any method for determining a node's value. Rymon (1993) discusses dynamic organization of the search tree during admissible search through unordered search spaces for the purpose of altering the topology of the data structure (SE-tree) produced. This contrasts with the use of dynamic organization of the search tree in OPUS o to increase search efficiency." }, { "figure_ref": [], "heading": "OPUS and Non-admissible Search", "publication_ref": [], "table_ref": [], "text": "As was pointed out above, although the OPUS algorithms were designed for admissible search, if they are applied with non-admissible pruning rules they may also be used for non-admissible search. This may be useful if efficient heuristic search is required. Most non-admissible heuristic search strategies embed the heuristics in the search algorithm itself. For example, beam search relies upon the use of a fixed maximum number of alternative options that are to be considered at any stage during the search. The heuristic is to prune all but the n best solutions at each stage during search. The precise implications of this heuristic for a particular search task may be difficult to evaluate. In contrast, the use of OPUS with non-admissible pruning rules places the non-admissible heuristic in a clearly defined rule which may be manipulated to suit the circumstances of a particular search problem.\nAnother feature of OPUS o is that at all stages it has available the best solution encountered to date during the search. This means that the search can be terminated at any time. When terminated prematurely, the current best solution would be returned on the understanding that this solution may not be optimal. If the algorithm is to be employed in this context, it may be desirable to employ best-first search, opening nodes with highest actual (as opposed to optimistic) value first, on the assumption that this should lead to early investigation of high valued nodes." }, { "figure_ref": [], "heading": "Complexity and Efficiency Considerations", "publication_ref": [ "b26", "b26" ], "table_ref": [], "text": "OPUS ensures that no state is examined more than once (unless identical states can be formed by different combinations of operator applications), using a similar search tree organization strategy to that of fixed-order search. It differs, however, in that instead of placing the largest subsection of the search space under the highest ordered operator, the second largest subsection under the second highest ordered operator, and so on, whenever pruning occurs, the largest possible proportion of the search space is placed under the pruned node, and hence is immediately pruned.\nIf there are n operators active at the node e being expanded, the search tree below and including that node will contain every combination of any number of those operators (the application of none of the operators results in e). Thus, the search tree below and including e will contain 2^n nodes. Exactly half of these, 2^(n-1), will have a label including any single operator o; the sketch below checks this count, together with the fixed-order counts derived in the paragraphs that follow.
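These counts are easy to verify empirically. In the following sketch (assumed representation: operators ranked 1, highest, to n), r is the rank of the pruned operator under fixed-order search, anticipating the comparison drawn next:

```python
from itertools import combinations

def pruned_counts(n: int, r: int):
    ops = range(1, n + 1)          # ranks 1 (highest) .. n (lowest)
    subsets = [set(s) for k in range(n + 1) for s in combinations(ops, k)]
    # OPUS: every combination containing the pruned operator r, bar {r} itself.
    opus = sum(1 for s in subsets if r in s) - 1
    # Fixed-order: r only in combination with lower ranked operators.
    fixed = sum(1 for s in subsets if r in s and min(s) == r) - 1
    return opus, fixed

for rank in (1, 2, 4, 8):
    print(rank, pruned_counts(8, rank))
# rank 1: (127, 127); rank 2: (127, 63); rank 4: (127, 15); rank 8: (127, 0)
```

The gap, 2^(n-1) - 2^(n-r), is exactly the set of combinations in which fixed-order search must still encounter, and re-prune, the operator deeper in the tree.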
OPUS ensures that if any operator o is pruned when a node e is expanded, all nodes containing o are removed from the search tree below e and are never examined (except, of course, the node reached by a single application of o that must be examined in order to determine that o should be pruned). Thus, the search tree below a node is almost exactly halved if a single operator can be pruned. Each subsequent operator pruned at that node reduces the remaining search tree by the same proportion. Thus, the size of the remaining search tree is divided by almost exactly 2^p, where p is the number of operators pruned.\nIn contrast, the number of nodes pruned under fixed-order search depends upon the ranking of the operator within the fixed operator ranking scheme. Only for the highest ranked operator will the same proportion of the search tree be pruned as under OPUS. In general, when an operator is pruned, only those nodes whose labels include that operator in combination exclusively with lower ranked operators will be pruned. This effect is illustrated in Figure 5 in which pruning below {c} removes only {c, d} from the search tree. Thus, 2^(n-r) - 1 nodes are immediately pruned from the search tree, where n is the number of operators active at the node being expanded and r is the ranking within those operators of the operator being pruned, with the highest rank being 1. This contrasts with the 2^(n-1) - 1 nodes pruned by OPUS.\nHowever, the difference in the number of nodes explored under the two strategies is not quite as great as this analysis might suggest, as (assuming the availability of a reasonable optimistic pruning mechanism) fixed-order search can also prune the operator every time that it is examined deeper in the search tree in combination with higher ranked operators. Thus, in Figure 5, when they were eventually examined, pruning would occur at nodes {a, b, c}, {a, c} and {b, c}. Thus, {a, b, c, d}, {a, c, d} and {b, c, d} would also be pruned from the search tree. In other words, under fixed-order search, if an operator is pruned it will not be considered in combination with any lower ranked operator, but will be considered with every combination of any number of higher ranked operators. There are 2^(r-1) combinations of higher ranked operators. It follows that fixed-order search considers this many more nodes than OPUS when a single operator is pruned. Thus, for each operator that can be pruned at a node n, OPUS explores 2^(r-1) fewer nodes below n than fixed-order search.\nAs the rank order of the operators pruned will tend to grow as the number of operators grows, it follows that, in the average case, the advantage accrued from the use of OPUS will grow exponentially as the number of operators grows. OPUS will tend to have the greatest relative advantage for the largest search spaces.\nNote that OPUS is not always able to guarantee that the maximal possible pruning occurs as the result of a single pruning action. For example, if OPUS is being used to search the space of subsets of a set of items, and it can be determined that no superset of the set s at a node may be a solution, but some items are not active at the current node, supersets of s that contain the items that are not active may be explored elsewhere in the search tree. An algorithm that could prune all such supersets could perform more pruning than OPUS.
While it might be claimed that Schlimmer's (1993) search method performs such pruning, it should be recalled that it does not prevent the 'pruned' nodes from being generated elsewhere in the tree, but rather, ensures that such nodes are pruned once generated. OPUS, if armed with suitable pruning rules, should also be able to prune such nodes when encountered. OPUS maximizes the pruning performed within the constraints of the localized information to which it has access.\nHowever, while the constraint provided by the active operators prevents OPUS from performing some pruning, it also enables it to perform other pruning that would not otherwise be possible. This is because it is only necessary when considering whether to prune a node to determine whether nodes that can be reached by active operators may contain a solution. Thus, to continue the example of subset search, even when supersets of the set at the current node n are potential solutions, it will still be possible to prune the search tree below n if all of the supersets that are potential solutions contain items that are not active at n. Schlimmer's (1993) approach does not allow pruning in such a context.\nTo illustrate this effect, let us revisit the search space examined in Figures 1 to 7. Even though the search space below {c} has been pruned, 'optimal pruning' cannot take this into account in its optimistic evaluations of other nodes as there is no mechanism by which this information can be communicated to the optimistic evaluation function (other than by actually exploring the space below the node to be evaluated, which defeats the purpose of optimistic evaluation). For example, when evaluating the optimistic value of the node {a}, the optimistic evaluation function cannot return a different value than would be the case if {c} had not been pruned. By contrast, the optimistic evaluation function employed by OPUS o can take account of this by taking the active operators for the current node into consideration. Such an optimistic evaluation function is described in Section 6.1 below. It will often be possible to use the information that particular operators are not available in the search tree below a node to substantially improve the quality of the optimistic evaluation of that node.\nIt should also be noted that no algorithm that does not employ backtracking can guarantee that it will minimize the number of nodes expanded under depth-first search. If a poor node is chosen for expansion under depth-first search, the system is stuck with having to explore the search space below that node before it can return to explore alternatives. No algorithm can guarantee against a poor selection unless the optimistic evaluation function has high enough accuracy to prevent the need for backtracking. It follows that no algorithm that requires backtracking can guarantee that it will minimize the number of nodes that are expanded. Thus, OPUS is heuristic with respect to minimizing computational complexity under depth-first search.\nThe storage requirements of OPUS will depend upon whether depth, breadth or bestfirst search is employed. If depth-first search is employed, the maximum storage requirement will be less than the maximum depth of the search tree multiplied by the maximum branching factor. However, if breadth or best-first search is employed, in the worst case, the storage requirement is exponential. At any stage during the search, the storage requirement is that of storing the frontier nodes of the search. 
The number of frontier nodes cannot exceed the number of leaf nodes in the complete search tree. For search in which no operator may be applied more than once (subset selection), if there is no pruning, the number of leaf nodes is 2^(n-1), where n is the number of operators. This assertion can be justified as follows. If the order in which operators are considered is invariant, all nodes reached via the last operator considered will be leaf nodes. As the search is admissible, the last operator must be considered with every combination of other operators. There are 2^(n-1) other combinations of operators. The order in which operators are considered will not alter the number of leaf nodes in the absence of pruning. For search in which there is no limit on the number of applications of a single operator (sub-multiset selection) there is no upper limit on the potential storage requirements.\nIrrespective of the storage requirements, in the worst case OPUS will have to explore every node in the search space. This will only occur if no pruning is possible during a search. If operators can only be applied once per solution, the number of nodes in the search space will equal 2^n, where n is the number of operators. Thus, the worst case computational complexity of OPUS is exponential, irrespective of whether depth, breadth or best-first search is employed.\nOPUS is clearly inappropriate, both in terms of computational and, when using breadth or best-first search, storage requirements, for search problems in which substantial proportions of the search space cannot be pruned. For domains in which substantial pruning is possible, however, the average case complexity (computational and/or storage) may turn out to be polynomial. Experimental evidence that this is indeed the case for some machine learning tasks is presented below in Section 6.3." }, { "figure_ref": [], "heading": "How the Search Efficiency of OPUS Might be Improved", "publication_ref": [], "table_ref": [], "text": "As is noted in Section 5.4, the OPUS algorithms are not always able to guarantee that the maximum possible amount of pruning is performed. As noted, one restriction upon the amount of pruning performed is the localization inherent in the use of active operators. While this localization allows some pruning that would not otherwise be possible, it also has the potential to restrict the number of supersets of the set of operators at a pruned node that are also pruned. There may be value in developing mechanisms that enable such pruning to be propagated beyond the node at which a pruning action occurs and the sub-tree below that node.\nAnother aspect of the algorithms that has both positive and negative aspects is the type of information returned by the pruning mechanisms. These mechanisms allow the pruning of any branch of the search tree so long as at least one goal is not below that branch. This contrasts with an alternative strategy of only pruning branches that do not lead to any goal. The strategy used can be beneficial, as it maximizes the amount of pruning that can be performed. However, it is always possible that a branch containing a goal that could be found with little exploration will be pruned in favor of a branch containing a goal that requires extensive exploration to uncover. There is potential for gain through augmenting the current pruning mechanisms with means of estimating the search cost of uncovering a goal beneath each branch in a tree."
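As an aside before the empirical evaluation, the leaf-count claim in the storage analysis above is easy to confirm with a small sketch of the unpruned, once-only operator tree (illustrative code, not from the paper):

```python
def count_leaves(remaining):
    # Each child receives only the operators after its own, so every
    # combination of operators is generated exactly once; leaves are the
    # nodes with no remaining operators.
    if not remaining:
        return 1
    return sum(count_leaves(remaining[i + 1:])
               for i in range(len(remaining)))

for n in range(1, 8):
    print(n, count_leaves(list(range(n))))   # prints 1, 2, 4, ..., 64 = 2^(n-1)
```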
}, { "figure_ref": [], "heading": "Evaluating the Effectiveness of the OPUS Algorithms", "publication_ref": [ "b2", "b31", "b33", "b19", "b10", "b7", "b11", "b24", "b17", "b3", "b28", "b3", "b28", "b29", "b11" ], "table_ref": [], "text": "Theoretical analysis has demonstrated that OPUS will explore fewer nodes than fixed-order search and that the magnitude of this advantage will increase as the size of the search space increases. However, the precise magnitude of this gain will depend upon the extent and distribution within the search tree of pruning actions. Of further interest, there are a number of distinct elements to each of the OPUS algorithms, including-optimistic pruning; other pruning (pruning in addition to optimistic pruning); dynamic reorganization of the search tree; and maximization of the proportion of the search space placed under nodes with low optimistic value. The following experiments evaluate the magnitude of the advantage to OPUS obtained for real world search tasks and explore the relative contribution of each of the distinct elements of the OPUS algorithms.\nTo this end, OPUS o was applied to a class of real search tasks-finding pure conjunctive expressions that maximize the Laplace accuracy estimate with respect to a training set of preclassified example objects. This is, for example, the search task that CN2 purports to heuristically approximate (Clark & Niblett, 1989) when forming the disjuncts of a disjunc-tive classifier. Machine learning systems have employed OPUS o in this manner to develop rules for inclusion both in sets of decision rules (Webb, 1993) and in decision lists (Webb, 1994b). (The current experiments were performed using the Cover learning system, which, by default, performs repeated search for pure conjunctive classifiers within a CN2-like covering algorithm that develops disjunctive rules. This more extended search for disjunctive rules was not used in the experiments, as it makes it difficult to compare alternative search algorithms. This is because, if two alternative algorithms find different pure conjunctive rules for the first disjunct, their subsequent search will explore different search spaces.)\nNumerous efficient admissible search algorithms exist for developing classifiers that are consistent with a training set of examples. The two classic algorithms for this purpose are the least generalization algorithm (Plotkin, 1970) and the version space algorithm (Mitchell, 1977). The least generalization algorithm finds the most specialized class description that covers all objects in a training set containing only positive examples. The version space algorithm finds all class descriptions that are complete and consistent with respect to a training set of both positive and negative examples. Hirsh (1994) has generalized the version space algorithm to find all class descriptions that are complete and consistent to within defined bounds of the training examples. The least generalization and version space algorithms will usually require a strong inductive bias in the class description language (restriction on the types of class descriptions that will be considered) if they are to find useful class descriptions (Mitchell, 1980). SE-tree-based learning (Rymon, 1993) demonstrates admissible search for a set of consistent class descriptions within more complex class description languages than may usefully be employed with the least generalization or version space algorithms. 
Oblow (1992) describes an algorithm that employs admissible search for pure conjunctive terms within a heuristic outer search for k-DNF class descriptions that are consistent with the training set.\nHowever, for many learning tasks it is desirable to consider class descriptions that are inconsistent with the training set. One reason for this is that the training set may contain noise (examples that are inaccurate). Another reason is that it may not be possible to accurately describe the target class in the available language for expressing class descriptions. In this case it is necessary to consider approximations to the target class. A further reason is that the training set may contain insufficient information to reliably determine the exact class description. In this case, the best solution may be an approximation that is known to be incorrect but for which there is strong evidence that the level of error is low.\nBoth Clearwater and Provost (1990) and Segal and Etzioni (1994) use admissible fixed-order search to explore classifiers that are inconsistent with the training set. However, the admissible search of Clearwater and Provost (1990) is not computationally feasible for large search spaces. Segal and Etzioni (1994) bound the depth of the search space considered in order to maintain computational tractability. Smyth and Goodman (1992) use optimistic pruning to search for optimal rules, but do not structure their search to ensure that states are not searched multiple times. No other previous admissible search algorithm has been employed in machine learning to find classifiers that are inconsistent with the training set and maximize an arbitrary preference function. The following experiments seek to demonstrate that such search is feasible using OPUS.\nWhere it is allowed that a class description may be inconsistent with the training set, it is helpful to employ an explicit preference function. Such a function is applied to a class description and returns a measure of its desirability. This evaluation will usually take account of how well the description fits the training set and may also include a bias toward particular types of class descriptions, for example, a preference for syntactic simplicity. Such a preference function expresses an inductive bias (Mitchell, 1980).\nOPUS o may be employed for admissible search in such contexts, provided a search space can be defined that may be traversed by a finite number of unordered search operators. For example, OPUS o may be employed to search for a class description in a language of pure-conjunctive descriptions by examining a search space starting with the most general possible class description true and employing search operators, each of which has the effect of conjoining a specific clause to the current description. Such search may be performed with an arbitrary preference function, provided appropriate optimistic evaluation functions can be defined.\nThe next section describes experiments in which OPUS o was applied in this manner." }, { "figure_ref": [], "heading": "The Search Task", "publication_ref": [ "b9", "b1" ], "table_ref": [], "text": "The pure conjunctive expressions consisted of conjunctions of clauses of the form attribute ≠ value. For attributes with more than two values, such a language is more expressive than a language allowing only conjunctions of clauses of the form attribute = value. Indeed, it has equivalent expressiveness to a language that supports internal disjunction.
For example, with respect to an attribute a with the values x, y and z, a language restricted to conjunctions of equality expressions cannot express a ≠ x, whereas a language restricted to conjunctions of inequality expressions can express a = x using the expression a ≠ y ∧ a ≠ z. In internal disjunctive (Michalski, 1984) terms, a ≠ x is equivalent to a = y or z.\nIt should be noted that-\n• For attributes with more than two values the search space for conjunctions of inequality expressions is far larger than the search space for conjunctions of equality expressions. For each attribute, the size of the search space is multiplied by 2^n for the former and by n + 1 for the latter, where n is the number of values for the attribute.\n• The software employed in this experimentation can also be used to search the smaller search spaces of equality expressions with the same effects as are demonstrated in the following experiments.\nSearch starts from the most general expression, true. Each operator performs conjunction of the current expression with a term A ≠ v, where A is an attribute and v is any single value for that attribute.\nThe Laplace (Clark & Boswell, 1991) preference function was used to determine the goal of the search. This function provides a conservative estimate of the predictive accuracy of a class description, e. It is defined as\nvalue(e) = (posCover(e) + 1) / (posCover(e) + negCover(e) + noOfClasses)\nwhere posCover(e) is the number of positive objects covered by e; negCover(e) is the number of negative objects covered by e; and noOfClasses is the number of classes for the learning task.\nThe Laplace preference function trades off accuracy against generality. It favors class descriptions that cover more positive objects over class descriptions that cover fewer, and favors class descriptions for which a lower proportion of the cover is negative over those for which it is higher. In the following study, the Laplace preference function was employed with a pruning mechanism at Step 10a of the OPUS o algorithm that pruned sections of the search space with optimistic values less than or equal to the value of a class description that covered no objects. If there was no solution with a value higher than that obtained by a class description that covered no objects, no rule was developed for the class.\nThe optimistic value function is derived from the observation that the cover of specializations of an expression must be subsets of the cover of that expression. Thus, specializations of an expression may not cover more positive objects, but may cover fewer negative objects than are covered by the original expression. As the Laplace preference function is maximized when positive cover is maximized and negative cover is minimized, no specialization of the expression at a node may have higher value than that obtained with the positive cover of that expression and the smallest negative cover within the sub-tree below the node. The smallest negative cover within a sub-tree below a node n is obtained by the expression formed by applying all operators active at n to the expression at n.\nOther pruning can be performed through the application of cannotImprove(n1, n2), a boolean function that is true of any two nodes n1 and n2 in the search tree such that n2 is either the child or sibling of n1 and no specialization of n2 may have a higher value than the highest value in the search tree below n1 inclusive but excluding the search tree below n2.
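Before turning to the definition of cannotImprove, the two evaluation functions just described can be sketched in a few lines (a hedged reconstruction; the helper names and parameters are assumptions, not the Cover system's API):

```python
def laplace(pos_cover: int, neg_cover: int, n_classes: int) -> float:
    # value(e) = (posCover(e) + 1) / (posCover(e) + negCover(e) + noOfClasses)
    return (pos_cover + 1) / (pos_cover + neg_cover + n_classes)

def optimistic_value(pos_cover: int, min_neg_cover: int, n_classes: int) -> float:
    # No specialization below the node can beat the value obtained with the
    # node's own positive cover and the smallest negative cover within its
    # sub-tree (the cover after conjoining all of the node's active operators).
    return laplace(pos_cover, min_neg_cover, n_classes)
```

A node is then prunable at Step 10a whenever this bound does not exceed the best value found so far.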
The cannotImprove function may be defined as\ncannotImprove(x, y) ← neg(x) ⊆ neg(y) ∧ pos(x) ⊇ pos(y)\nwhere neg(n) denotes the set of negative objects covered by the description for node n and pos(n) denotes the set of positive objects covered by the description for node n. If cannotImprove(n1, n2) then search below n2 cannot lead to a higher valued result than can be obtained by search through specializations of n1 excluding nodes in the search space below n2. This can be shown where n1 is the parent and n2 is the child node as follows. If n1 is the parent of n2 then the expression for n2 must be a specialization of the expression for n1 and all operators available for n2 must be available for n1. For any expression g and its specialization, s, if neg(g) ⊆ neg(s) then neg(g) = neg(s) (as specialization can only decrease cover). It follows that for any further specialization of n2, n3, obtained by applications of operators O, there must be a specialization of n1 obtained by application of operators O, n4, which is a generalization of n3 and which has identical negative cover to n3. As n4 is a generalization of n3, it must cover all positive objects covered by n3. Therefore, n4 must have equal or greater positive cover and equal negative cover to n3 and consequently must have an equal or greater value. It follows that it must be possible to reach from n1 a node of at least as great a value as the greatest valued node below n2 without applying the operator that led from n1 to n2.\nNext we consider the case where n1 and n2 are siblings. It follows from the definition of cannotImprove that neg(n1) ⊆ neg(n2) and pos(n1) ⊇ pos(n2). Let the operators o1 and o2 be those that led from the parent node p to n1 and n2, respectively. It follows that o2 cannot exclude any negative objects from expressions below p not also excluded by o1 and that o1 cannot exclude any positive objects from expressions below p not also excluded by o2. Therefore, application of o2 below n1 will have no effect on the negative cover of the expression but may reduce positive cover. For any expression e reached below n2 by a sequence of operator applications O, application of O to n1 cannot result in an expression with lower positive or higher negative cover than that of e.\nThe cannotImprove function was employed to prune nodes at Step 8 of the OPUS o algorithm." }, { "figure_ref": [], "heading": "Experimental Method", "publication_ref": [ "b13", "b3", "b24", "b26", "b28", "b30" ], "table_ref": [ "tab_3" ], "text": "This search was performed on fourteen data sets from the UCI repository of machine learning databases (Murphy & Aha, 1993). These were all the data sets from the repository that the researcher could at the time of the experiments identify as capable of being readily expressed as categorical attribute-value learning tasks. These fourteen data sets are described in Table 1. The number of attribute values (presented in column 3) treats missing values as distinct values. The space of class descriptions that OPUS considers for each domain (and hence the size of the search space examined for each pure conjunctive rule developed) is 2^n, where n is the number of attribute values. Thus, for the Audiology domain, for each class description developed, the search space was of size 2^162. Columns 4 and 5 present the number of objects and number of classes represented in the data set, respectively.\nThe search was repeated once for each class in each data set.
For each such search, the objects belonging to the class in question were treated as the positive objects and all other objects in the data set were treated as negative objects. This search was performed using each of the following search methods-OPUS o ; OPUS o without optimistic pruning; OPUS o without other pruning; OPUS o without optimistic reordering; and fixed-order search, such as performed by Clearwater and Provost (1990), Rymon (1993), Schlimmer (1993), Segal and Etzioni (1994) and Webb (1990).\nOptimistic pruning was disabled by removing the condition from Step 10a of the OPUS o algorithm. In other words, Step 10(a)i was always performed.\nOther pruning was disabled by removing Step 8 from the OPUS o algorithm. Optimistic reordering was disabled by changing Step 9 to process each node in a predetermined fixed order, rather than in order by optimistic value. Under this treatment, the topology of the search tree is organized in a fixed order, but operators that are pruned at a node are removed from consideration in the entire subtree below that node.\nFixed-order search was emulated by disabling Step 8b and disabling optimistic reordering, as described above.\nAll of the algorithms are to some extent under-specified. OPUS o , no optimistic pruning and no other pruning all leave unspecified the order in which operators leading to nodes with equal optimistic values should be considered at Step 9. Such ambiguities were resolved in the following experiments by ordering operators leading to nodes with higher actual values first. Where two operators tied on both optimistic and actual values, the operator mentioned first in the names file that describes the data was selected first.\nNo optimistic reordering and fixed-order search both leave unspecified the fixed order that should be employed for traversing the search space. As fixed-order search is representative of previous approaches to unordered search employed in machine learning, it is important to obtain a realistic evaluation of its performance; accordingly, ten alternative random orders were generated and all employed for each fixed-order search task. While, due to the high variability in performance under different orderings, it would have been desirable to explore more than ten alternative orderings, this was infeasible due to the tremendous computational demands of this algorithm. The comparison with no optimistic reordering was considered less crucial, as it is used solely to evaluate the effectiveness of one aspect of the OPUS o algorithm, and thus, due to the tremendous computational expense of this algorithm, a single fixed ordering was used, employing the order in which attribute values are mentioned in the names file.\nAll of the algorithms leave unspecified the order in which nodes with equal optimistic values should be selected from OPEN under best-first search, or directly expanded under depth-first search. Under best-first search, nodes with equal optimistic values were removed from OPEN in last-in-first-out order. Under depth-first search, nodes with equal optimistic value were expanded in the same order as was employed for allocating operators at Step 9.\nNote that the fixed-order search and OPUS o with disabled optimistic reordering conditions both used optimistic and other pruning. Note also that while fixed-order search ordered the topology of the search tree in the manner depicted in Figure 4, it explored that tree in either a best or depth-first manner."
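For concreteness, the five treatments can be summarized as configuration flags; the names below are illustrative only and are not taken from the Cover system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Treatment:
    optimistic_pruning: bool = True       # the Step 10a condition
    other_pruning: bool = True            # Step 8 (cannotImprove)
    optimistic_reordering: bool = True    # Step 9 ordering by optimistic value
    opus_operator_removal: bool = True    # False emulates fixed-order search

TREATMENTS = {
    'OPUS_o':                   Treatment(),
    'no optimistic pruning':    Treatment(optimistic_pruning=False),
    'no other pruning':         Treatment(other_pruning=False),
    'no optimistic reordering': Treatment(optimistic_reordering=False),
    'fixed-order search':       Treatment(optimistic_reordering=False,
                                          opus_operator_removal=False),
}
```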
}, { "figure_ref": [ "fig_6" ], "heading": "Experimental Results", "publication_ref": [], "table_ref": [ "tab_6", "tab_7", "tab_4", "tab_5" ], "text": "Tables 2 and 3 present the number of nodes examined by each search in this experiment. For each data set the total number of nodes explored under each condition is indicated. For fixed-order search, the mean of all ten runs is presented. Tables 4 and5 present for fixed-order search the number of runs that completed successfully, the minimum number of nodes examined by a successful run, the mean number of nodes examined by successful runs (repeated from Tables 2 and3) and the standard deviations for those runs. Every node generated at Step 7a is counted in the tally of the number of nodes explored. A hyphen (-) indicates that the search could not be completed as the number of open nodes made the system exceed a predefined virtual memory limit of 250 megabytes. An asterisk (*) indicates that the search was terminated due to exceeding a pre-specified compute time limit of twenty-four CPU hours. (For comparison, the longest CPU time taken for any data set by OPUS o was sixty-seven CPU seconds on the Wisconsin Breast Cancer data under depth-first search.) It should be noted that one pure conjunctive rule was developed for each class. As a separate search was performed for each rule, the number of searches performed equals the number of classes. Thus, for the Audiology data using best-first search OPUS o explored just 7,044 nodes to perform 24 admissible searches of the 2 162 node search space.\nFor only two search tasks does OPUS o with best-first search explore more nodes than an alternative. For the Lenses data, OPUS o explores 41 nodes while no optimistic reordering explores 38. For the Monk 2 data, OPUS o explores 4,326 nodes while the best of ten fixedorder runs with different random fixed orders explores 4,283 nodes. It is possible that these outcomes have arisen from situations where two sibling nodes share the same optimistic value. In such a case, if two approaches each select different nodes to expand first, one may turn out to be a better choice than the other, leading to the exploration of fewer nodes. To test the plausibility of this explanation, OPUS o was run again on the Lenses data set with Step 8 altered to ensure that where two siblings have equal optimistic value they are ordered in the same order as was employed with no optimistic reordering. This resulted in the exploration of just 36 nodes, fewer than any alternative. When OPUS o and fixed-order were run with fixed-order using the order of attribute declaration in the data file to determine operator order and OPUS o using the same order to order siblings with equal optimistic values, the numbers of nodes explored for the Monk 2 data were 4,302 for OPUS o and 8,812 for fixed-order search.\nIt is notable that this effect is only apparent for very small search spaces. This is significant because it suggests that there is only an effect of small magnitude resulting from a poor choice of node to expand when two nodes have equal optimistic value. This is to be expected. Consider the case where there are two nodes n 1 and n 2 with equal highest optimistic value, v, but n 1 leads to a goal whereas n 2 does not. If n 2 is expanded first, so long as no child of n 2 has an optimistic value greater than or equal to v, the next node to be expanded will be n 1 , as n 1 will now be the node with the highest optimistic value. 
In contrast to the case with best-first search, as discussed in Section 5.4, OPUS is only heuristic with respect to minimizing the number of nodes expanded under depth-first search. Nonetheless, for only one search task, the Monk 2 data set, does OPUS_o explore more nodes under depth-first search (16,345) than an alternative (no optimistic reordering and fixed-order search, which explore 12,879 and 12,791 nodes respectively). These results demonstrate that this heuristic is not optimal for this data. It should be noted, however, that the single exception for depth-first search again occurs only for a relatively small search space. This suggests that efficient exploration of the search space below a poor choice of node can do much to minimize the damage done by that poor choice, even when there is no backtracking, as is the case for depth-first search.
For five data sets (House Votes 84, Lymphography, Mushroom, Primary Tumor and Soybean Large), disabling optimistic pruning has little effect under best-first search. Disabling optimistic pruning always has a large effect under depth-first search. Under best-first search the smallest increase caused by disabling optimistic pruning is an increase of just one node for both the Lymphography and Mushroom data sets. Of those data sets for which it was possible to complete the search without optimistic pruning, the biggest effect was an almost 1,500-fold increase in the number of nodes explored for the Tic Tac Toe data. Under depth-first search, of those data sets for which processing could be completed without optimistic pruning, the smallest increase was a five-fold increase for the Monk 2 data and the largest increase was a 30,000-fold increase for the Lymphography data.
For seven data sets (House Votes 84, Lenses, Monk 1, Monk 2, Monk 3, F11 Multiplexor, Tic Tac Toe) disabling other pruning had little or no effect under best-first or depth-first search. The largest effects are 2.5-fold increases for the Soybean Large and Wisconsin Breast Cancer data sets under best-first search and for the Audiology, Soybean Large and Wisconsin Breast Cancer data sets under depth-first search.
From these results it is apparent that while there are some data sets for which each pruning technique has little effect (so long as the other is also employed), there are also data sets for which other pruning more than halves the amount of the search space explored and data sets for which optimistic pruning reduces the amount of the search space explored to thousandths of that which would otherwise be explored.
The effect of optimistic reordering was also highly variable. For two search tasks (best-first search for the Lenses data set and depth-first search for the Monk 2 data set) its use actually resulted in a slight increase in the number of nodes explored. This is discussed above. In many cases, however, the effect of disabling optimistic reordering was far greater than that of disabling optimistic pruning. Processing could not be completed without optimistic reordering for three of the best-first search tasks and one of the depth-first search tasks.
Of those tasks for which search could be completed, the largest effect for best-first search was a 2,500-fold increase in the number of nodes explored for the Soybean Large data. Under depth-first search, of those tasks for which search could be completed, the largest effect was an 8,000-fold increase for the Slovenian Breast Cancer data. While it would be desirable to evaluate the effect of alternative fixed orderings of operators on these results, it seems that optimistic reordering is critical to the general success of the algorithm.
For all but one data set (the Monk 2 data under depth-first search), fixed-order search on average explores substantially more nodes than OPUS_o. It was asserted in Section 5.4 that the average case advantage from the use of OPUS_o as opposed to fixed-order search will tend to grow exponentially as the number of search operators increases. As the number of attribute values increases, so does the relative advantage. For the four data sets with the greatest number of attribute values (Audiology, Mushroom, Soybean Large and Wisconsin Breast Cancer) in only one case (depth-first search of the Mushroom data) does the fixed-order search terminate. In this one case, OPUS_o enjoys a 350,000-fold advantage. These results lend credibility to the claim that OPUS_o's average case advantage over fixed-order search is exponential with respect to the size of the search space. This is illustrated in Figure 11. In this figure, for searches for which fixed-order search terminated within the resource constraints, the size of the search space is plotted against log_2(f/o), where f is the number of nodes explored by fixed-order search and o is the number of nodes explored by OPUS_o. It seems clear from these results that admissible fixed-order search is not practical for many of these search tasks within the scope of current technology.
It is interesting to observe that under best-first search, for all of the four artificial data sets (Monk 1, Monk 2, Monk 3 and F11 Multiplexor) fixed-order search often explores slightly fewer nodes than OPUS_o with optimistic reordering disabled. The difference between these two types of search is that the latter deletes pruned operators from the sets of active operators under higher ordered operators whereas the former does not. Thus the latter prunes more nodes from the search tree with each pruning operation. It seems counter-intuitive that this increased pruning should sometimes lead to the exploration of more nodes. To understand this effect it is necessary to recall that other pruning can prune solutions from the search tree so long as there are alternative solutions available. For the artificial data sets in question, retaining alternative solutions in the search tree in some cases leads to a slight increase in search efficiency, as an alternative can be encountered earlier than the first solution. Despite this minor advantage to fixed-order search over OPUS_o with optimistic reordering disabled for a number of artificial data sets, the latter enjoys a large advantage for all other data sets for which processing could be completed.
For the House Votes 84 data, fixed-order search explores over 3.5 times as many nodes under best-first search and over 350 times as many under depth-first search.
It can be seen that there is some reason to believe that the average-case number of nodes explored by OPUS_o is only polynomial with respect to the search space size for these machine learning search tasks. The numbers of nodes explored for the three largest search spaces are certainly not suggestive of an exponential explosion in the numbers of nodes examined (Audiology, 2^162 nodes in the search space: 7,044 and 7,011 nodes examined; Soybean Large, 2^135 nodes in the search space: 8,304 and 9,562 nodes examined; Mushroom, 2^126 nodes in the search space: 391 and 386 nodes examined).
It is interesting that there is little difference in the number of nodes explored by OPUS_o using either best-first or depth-first search for most data sets. Surprisingly, slightly fewer nodes are explored by depth-first search for three of the data sets (Audiology, Lenses and Mushroom). This will be for similar reasons to those presented above in the context of the occasional slight advantage enjoyed by fixed-order search over OPUS_o with optimistic reordering disabled. In some cases depth-first search fortuitously encounters alternative solutions to those found by best-first search. To evaluate the plausibility of this explanation, OPUS_o was run on the three data sets in question using the fixed-order ordering to order operators with equal optimistic values. The resulting numbers of nodes explored were Audiology: 6,678, Lenses: 36 and Mushroom: 385. As can be seen, these numbers are in all cases lower than the numbers of nodes explored under depth-first search. As in the cases where OPUS_o was outperformed by other best-first strategies, this effect appears to be of small magnitude and thus is only significant where small numbers of nodes need to be explored. For four of the data sets depth-first search explores substantially more nodes than best-first search (Slovenian Breast Cancer, 75%; Monk 2, 275%; Primary Tumor, 67%; and Tic Tac Toe, 33%)." }, { "figure_ref": [], "heading": "Summary of Experimental Results", "publication_ref": [], "table_ref": [], "text": "The experiments demonstrate that admissible search for pure conjunctive classifiers is feasible using OPUS_o for the types of learning task contained in the UCI repository.
They also support the theoretical findings that OPUS_o will in general explore fewer nodes than fixed-order search and that the magnitude of this advantage will tend to grow exponentially with respect to the size of the search space.
Optimistic pruning and other pruning are both demonstrated to individually provide large decreases in the number of nodes explored for some search spaces but to have little effect for others. Optimistic reordering is demonstrated to have a large impact upon the number of nodes explored.
The results with respect to the search of the largest search spaces suggest that the average case complexity of the algorithm is less than exponential with respect to search space size." }, { "figure_ref": [], "heading": "Summary and Future Research", "publication_ref": [ "b4", "b31", "b20" ], "table_ref": [], "text": "The OPUS algorithms have potential application in many areas of endeavor. They can be used to replace admissible search algorithms for unordered search spaces that maintain explicit lists of pruned nodes, such as those currently used in ATMS (de Kleer, 1986). They may also support admissible search in a number of application domains, such as learning classifiers that are inconsistent with a training set, that have previously been tackled by heuristic search.
In addition to their applications for admissible search, the OPUS algorithms may also be used for efficient non-admissible search through the application of non-admissible pruning rules. The OPUS_o algorithm is also able to return a solution if prematurely terminated at any time, although this solution may be non-optimal.
The availability of admissible search is an important step forward for machine learning research. While the studies in this paper have employed OPUS_o to optimize the Laplace preference function, the algorithm could be used to optimize any learning bias.
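For reference, the two-class Laplace accuracy estimate is commonly written value(e) = (P + 1)/(P + N + 2), where P and N are the numbers of positive and negative objects covered by expression e; this common CN2-style form is assumed here for illustration rather than quoted from the paper:

```python
def laplace_value(p, n):
    # Common two-class Laplace accuracy estimate for an expression
    # covering p positive and n negative objects.
    return (p + 1) / (p + n + 2)
```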
This means that for the first time it is possible to isolate the effect of an explicit learning bias from any implicit learning bias that might be introduced by a heuristic search algorithm and its interaction with that explicit bias.
The application of OPUS_o to provide admissible search in machine learning has already proved to be productive. Webb (1993) used OPUS_o to demonstrate that heuristic search that fails to optimize the Laplace accuracy estimate within a covering algorithm frequently results in the inference of better classifiers than found by admissible search that does optimize this preference function. It was to explain this result that Quinlan and Cameron-Jones (1995) developed their theory of oversearching.
The research reported herein has demonstrated that OPUS can provide efficient admissible search for pure conjunctive classifiers on all categorical attribute-value data sets in the UCI repository. It would be interesting to see if the techniques can be extended to more powerful machine learning paradigms such as continuous attribute-value and first-order logic domains.
The research has also demonstrated the power of pruning. This issue has been given scant attention in the context of search for machine learning. Although it is presented here in the context of admissible search, the pruning rules presented are equally applicable to heuristic search. The development of these and other pruning rules may prove important as machine learning tackles ever more complex search spaces.
OPUS provides efficient admissible search in unordered search spaces. When creating a machine learning system it is necessary to consider not only what to search for (the explicit learning biases) but also how to search for it (appropriate search algorithms). It has been assumed previously that such algorithms must necessarily be heuristic techniques for approximating the desired explicit biases. Admissible search decouples these two issues by removing confounding factors that may be introduced by the search algorithm. By guaranteeing that the search uncovers the defined target, admissible search makes it possible to systematically study explicit learning biases. By supporting efficient admissible search, OPUS for the first time brings to machine learning the ability to clearly and explicitly manipulate the precise inductive bias employed in a complex machine learning task." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research has been supported by the Australian Research Council. I am grateful to Riichiro Mizoguchi for pointing out the potential for application of OPUS in truth maintenance.
I am also grateful to Mike Cameron-Jones, Jon Patrick, Ron Rymon, Richard Segal, Jason Wells, Leslie Wells and Simon Yip for numerous helpful comments on previous drafts of this paper. I am especially indebted to my anonymous reviewers whose insightful, extensive and detailed comments greatly improved the quality of this paper.\nThe Breast Cancer, Lymphography and Primary Tumor data sets were provided by the Ljubljana Oncology Institute, Slovenia. Thanks to the UCI Repository, its maintainers, Patrick Murphy and David Aha, and its donors, for providing access to the data sets used herein." } ]
[ { "authors": "B G Buchanan; E A Feigenbaum; J Lederberg", "journal": "", "ref_id": "b0", "title": "A heuristic programming study of theory formation in science", "year": "1971" }, { "authors": "P Clark; R Boswell", "journal": "", "ref_id": "b1", "title": "Rule induction with CN2: Some recent improvements", "year": "1991" }, { "authors": "P Clark; T Niblett", "journal": "Machine Learning", "ref_id": "b2", "title": "The CN2 induction algorithm", "year": "1989" }, { "authors": "S H Clearwater; F J Provost", "journal": "IEEE Computer Society Pres", "ref_id": "b3", "title": "RL4: A tool for knowledge-based induction", "year": "1990" }, { "authors": "J De Kleer", "journal": "Artificial Intelligence", "ref_id": "b4", "title": "An assumption-based TMS", "year": "1986" }, { "authors": "J De Kleer; A K Mackworth; R Reiter", "journal": "", "ref_id": "b5", "title": "Characterizing diagnoses", "year": "1990" }, { "authors": "P Hart; N Nilsson; B Raphael", "journal": "IEEE Transactions on System Sciences and Cybernetics, SSC", "ref_id": "b6", "title": "A formal basis for the heuristic determination of minimum cost paths", "year": "1968" }, { "authors": "H Hirsh", "journal": "Artificial Intelligence", "ref_id": "b7", "title": "Generalizing version spaces", "year": "1994" }, { "authors": "E L Lawler; D E Wood", "journal": "Operations Research", "ref_id": "b8", "title": "Branch and bound methods: A survey", "year": "1966" }, { "authors": "R S Michalski", "journal": "Springer-Verlag", "ref_id": "b9", "title": "A theory and methodology of inductive learning", "year": "1984" }, { "authors": "T M Mitchell", "journal": "", "ref_id": "b10", "title": "Version spaces: A candidate elimination approach to rule learning", "year": "1977" }, { "authors": "T M Mitchell", "journal": "", "ref_id": "b11", "title": "The need for biases in learning generalizations", "year": "1980" }, { "authors": "B M E Moret; H D Shapiro", "journal": "SIAM Journal on Scientific and Statistical Computing", "ref_id": "b12", "title": "On minimizing a set of tests", "year": "1985" }, { "authors": "P Murphy; D Aha", "journal": "", "ref_id": "b13", "title": "UCI repository of machine learning databases", "year": "1993" }, { "authors": "P Murphy; M Pazzani", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b14", "title": "Exploring the decision forest: An empirical investigation of Occam's Razor in decision tree induction", "year": "1994" }, { "authors": "P Narendra; K Fukunaga", "journal": "IEEE Transactions on Computers", "ref_id": "b15", "title": "A branch and bound algorithm for feature subset selection", "year": "1977" }, { "authors": "N J Nilsson", "journal": "McGraw-Hill", "ref_id": "b16", "title": "Problem-solving Methods in Artificial Intelligence", "year": "1971" }, { "authors": "E M Oblow", "journal": "Machine Learning", "ref_id": "b17", "title": "Implementing Valiant's learnability theory using random sets", "year": "1992" }, { "authors": "J Pearl", "journal": "Addison-Wesley", "ref_id": "b18", "title": "Heuristics: Intelligent Search Strategies for Computer Problem Solving", "year": "1984" }, { "authors": "G D Plotkin", "journal": "Edinburgh University Press", "ref_id": "b19", "title": "A note on inductive generalisation", "year": "1970" }, { "authors": "J R Quinlan; R M Cameron-Jones", "journal": "", "ref_id": "b20", "title": "Oversearching and layered search in empirical learning", "year": "1995" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b21", "title": "", "year": "" }, { "authors": "R 
Reiter", "journal": "Artificial Intelligence", "ref_id": "b22", "title": "A theory of diagnosis from first principles", "year": "1987" }, { "authors": "R Rymon", "journal": "", "ref_id": "b23", "title": "Search through systematic set enumeration", "year": "1992" }, { "authors": "R Rymon", "journal": "", "ref_id": "b24", "title": "An SE-tree based characterization of the induction problem", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b25", "title": "", "year": "" }, { "authors": "J C Schlimmer", "journal": "", "ref_id": "b26", "title": "Efficiently inducing determinations: A complete and systematic search algorithm that uses optimal pruning", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b27", "title": "", "year": "" }, { "authors": "R Segal; O Etzioni", "journal": "", "ref_id": "b28", "title": "Learning decision lists using homogeneous rules", "year": "1994" }, { "authors": "P Smyth; R M Goodman", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b29", "title": "An information theoretic approach to rule induction from databases", "year": "1992" }, { "authors": "G I Webb", "journal": "Springer-Verlag", "ref_id": "b30", "title": "Techniques for efficient empirical induction", "year": "1990" }, { "authors": "G I Webb", "journal": "World Scientific", "ref_id": "b31", "title": "Systematic search for categorical attribute-value data-driven machine learning", "year": "1993" }, { "authors": "G I Webb", "journal": "Armidale. World Scientific", "ref_id": "b32", "title": "Generality is more significant than complexity: Toward alternatives to Occam's Razor", "year": "1994" }, { "authors": "G I Webb", "journal": "", "ref_id": "b33", "title": "Recent progress in learning decision lists by prepending inferred rules", "year": "1994" } ]
[ { "formula_coordinates": [ 5, 202.98, 225.48, 207.14, 187.18 ], "formula_id": "formula_0", "formula_text": "{ a, c, d } { b, c, d } { c, d } { d } { } 4 C 0 =1 4 C 1 =4 4 C 2 =6 4 C 3 =4 4 C 4 =1" }, { "formula_coordinates": [ 6, 202.99, 100.46, 213.6, 556.79 ], "formula_id": "formula_1", "formula_text": "{ a, b , d } { a, b } { a, b , d } { a, d } { a } { a, b , d } { a, b } { a, b , d } { b, d } { b } { c } { a, b , d } { a, d } { a, b , d } { b, d } { d } { } 4 C 0 =1 4 C 1 =4 3 C 2 =3 3 C 3 =1 3 C 4 =0 { a, b , c, d } { a, b, c } { a, b, d } { a , b } { a, c, d } { a , c } { a , d } { a } { b, c, d } { b , c } { b ,d } { b } { c, d } { c } { d } { } 4 C 0 =1 4 C 1 =4 4 C 2 =6 4 C 3 =4 4 C 4 =1" }, { "formula_coordinates": [ 7, 203.03, 92.45, 213.56, 280.21 ], "formula_id": "formula_2", "formula_text": "{ a, b , c, d } { a, b, c } { a, b, d } { a , b } { a, c, d } { a , c } { a , d } { a } { b, c, d } { b , c } { b ,d } { b } { c } { d } { } 4 C 0 =1 4 C 1 =4 5 4 C 3 =4 4 C 4 =1 { a, b, d } { a, b } { a, d } { a } { b,d } { b } { c } { d } { } 4 C 0 =1 4 C 1 =4 3 C 3 =1 3 C 4 =0 3 C 2 =3" }, { "formula_coordinates": [ 8, 203.03, 83.49, 208.21, 93.16 ], "formula_id": "formula_3", "formula_text": "{ c } { a,b, d } { a ,b } { a ,d } { a } { b , d } { b } { d } { } 4 C 0 =1 4 C 1 =4 4 C 2 =6 4 C 3 =4 4 C 4 =1" }, { "formula_coordinates": [ 22, 171.72, 639.19, 51.32, 10.54 ], "formula_id": "formula_4", "formula_text": "value(e) =" }, { "formula_coordinates": [ 30, 142.79, 113, 359.01, 198.21 ], "formula_id": "formula_5", "formula_text": "• • • • • • • • • • • • • • • • • • • • • best-first search • depth-first search" } ]
OPUS: An Efficient Admissible Algorithm for Unordered Search
OPUS is a branch and bound search algorithm that enables efficient admissible search through spaces for which the order of search operator application is not significant. The algorithm's search efficiency is demonstrated with respect to very large machine learning search spaces. The use of admissible search is of potential value to the machine learning community as it means that the exact learning biases to be employed for complex learning tasks can be precisely specified and manipulated. OPUS also has potential for application in other areas of artificial intelligence, notably, truth maintenance.
Geoffrey I Webb
[ { "figure_caption": "Figure 2 :2Figure 2: Simple unordered operator search tree with pruning beyond application of operator c", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Simple unordered operator search tree with maximal pruning beyond application of operator c", "figure_data": "", "figure_id": "fig_1", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: The OPUS s Algorithm", "figure_data": "", "figure_id": "fig_2", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Effect of pruning when search tree ordered on optimistic value", "figure_data": "", "figure_id": "fig_3", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "a list called OP EN of unexpanded nodes as follows, (a) Set OP EN to contain one node, the start node s. (b) Set s.active to the set of all operators, {o 1 , o 2 , ...o n } (c) Set s.state to the start state.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: The OPUS o algorithm", "figure_data": "", "figure_id": "fig_5", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Plot of difference in nodes explored by fixed-order and OPUS o search against search space size.", "figure_data": "", "figure_id": "fig_6", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "OP EN is empty, exit successfully with the solution represented by BEST .4. Remove from OP EN a node n, the next node to be expanded. 5. Initialize to n.active a set containing those operators that have yet to be examined, calledRemainingOperators.", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Summary of experimental data sets", "figure_data": "###", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Number of nodes explored under best-first search", "figure_data": "NoNoNooptimisticotheroptimistic Fixed-orderData setOPUS opruningpruning reordering(mean)Audiology7,044-24,199--House Votes 84533661554355,0401,319,911Lenses41176413864Lymphography1,1421,1431,684658,3352,251,652Monk 13579,156371925788Monk 24,3266,5784,33510,0125,895Monk 328125,775281682656Multiplexor (F11)2,76996,3712,7694,9324,948Mushroom391392788233,579-Primary Tumor10,89210,89313,1374,242,97829,914,840Slovenian B. C.17,418 4,810,12932,965-42,669,822 †Soybean Large8,304833821,418 21,551,436-Tic Tac Toe2,894 4,222,6412,90216,55916,471Wisconsin B. C.447,786-1,159,011---Execution terminated after exceeding virtual memory limit of 250 megabytes.", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Number of nodes explored under depth-first search", "figure_data": "NoNoNoData setOPUS ooptimistic pruningother pruningoptimistic reorderingFixed-order (mean)Audiology7,011*17,1913,502,475*House Votes 8456817,067,30259610,0463,674,418Lenses38513383866Lymphography1,20039,063,3031,825728,27622,225,745Monk 136454,2183789801,348Monk 216,34585,42516,42712,87912,791Monk 328963,0572895881,236Multiplexor (F11)2,914188,1202,9146,9616,130Mushroom386*7611,562,006 132,107,513 ‡Primary Tumor18,20934,325,23423,6683,814,42231,107,648Slovenian B. C.30,647 172,073,24161,391 271,328,080 308,209,464Soybean Large9,562*23,86017,138,467*Tic Tac Toe3,87611,496,7364,01093,521110,664Wisconsin B. 
C.465,058* 1,211,211*** Execution terminated after exceeding the 24 CPU hour limit.", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Number of nodes explored under best-first fixed-order search", "figure_data": "Data setRuns MinimumMeansdAudiology0---House Votes 8410451,0381,319,911624,957Lenses1051649Lymphography10597,8422,251,6521,454,583Monk 110463788225Monk 2104,2835,895931Monk 310527656110Multiplexor (F11)104,2104,948364Mushroom0---Primary Tumor10 10,552,129 29,914,840 12,390,146Slovenian B. C.1 42,669,822 42,669,8220Soybean Large0---Tic Tac Toe108,04616,4715,300Wisconsin B. C.0----Execution terminated for all ten runs after exceeding thevirtual memory limit of 250 megabytes.", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Number of nodes explored under depth-first fixed-order search", "figure_data": "Data setRunsMinimumMeansdAudiology0***House Votes 84101,592,3913,674,4182,086,159Lenses10506612Lymphography10484,69422,225,74527,250,834Monk 1105531,348922Monk 2109,27412,7912,686Monk 3106271,236891Multiplexor (F11)104,4676,1301,164Mushroom3 105,859,320 132,107,51322,749,211Primary Tumor1010,458,42131,107,64814,907,744Slovenian B. C.10 110,101,761 308,209,464 303,800,659Soybean Large0***Tic Tac Toe1049,328110,66465,809Wisconsin B. C.0**** Execution terminated for all ten runs after exceeding the24 CPU hour limit.", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The number of search operators for the search tasks above equal the number of attribute values in the", "figure_data": "Log Advantage-1 0 1 2 3 4 5 6 7 8 9 10 11 12 20 19 18 17 16 15 14 1310 20 30 40 50 60 70 80 90 100 110 120 130 140 •Search Space Size", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" } ]
null
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b1" ], "table_ref": [], "text": "The work discussed in this paper forms part of the Eureka Prometheus activities, aimed at improved road tra c safety. Since the processing of images is of fundamental importance in automotive applications, our current work has been aimed at the development of an embedded low-cost computer vision system. Due to the special eld of application, the vision system must be able to process data and produce results in real-time. It is therefore necessary to consider data structures, processing techniques, and computer architectures capable of reducing the response time of the system as a whole.\nThe system considered is currently integrated on the Mob-Lab land vehicle (Adorni, Broggi, Conte, & D'Andrea, 1995). The MOBile LABoratory, the result of Italian work within the Prometheus project (see Figure 1.a), comprises a camera for the acquisition and digitization of images, which pipelines data to an on-board massively parallel computer for processing. As illustrated in Figure 2, the current output con guration comprises a set of warnings to the driver, displayed by means of a set of Leds on a control-panel (shown in Figure 1.b). But, due to the high performance levels achieved, it will be possible to replace this output device with a heads-up display showing the enhanced features superimposed onto the original image.\nThis paper presents a move toward the use of top-down control (the following feature extraction mechanism is based on a model-driven approach), instead of the traditional datadriven approach, which is generally used for data-parallel algorithms. Starting from the experience gained in the development of a di erent approach (Broggi, 1995c) based on the parallel detection of image edges pointing to the Focus of Expansion, this work presents a model-driven low-level processing technique aimed at road detection and enhancement of the road (or lane) image acquired from a moving vehicle. The model which contains a-priori knowledge of the feature to be extracted (road or lane) is encoded in the traditional data structure handled in low-level processing: a two-dimensional array. In this case, a binary image representing two di erent regions (road and o -road) has been chosen. Hereinafter this image will be referred to as Synthetic Image. It is obvious that di erent synthetic images must be used according to di erent acquisition conditions (camera position, orientation, optics, etc., which are xed) and environment (number of lanes, one way or two way tra c, etc., which may change at run-time). In the nal system implementation some a-priori world knowledge enables the correct synthetic model selection. As an example, Figure 3 presents several di erent synthetic images for di erent conditions. The following Section presents a survey of vision-based lane detection systems; Section 3 explains the choice of the multiresolution approach; Section 4 presents the details of the complete algorithm; Section 5 discusses the performances of current implementation on the Paprica system; Section 6 presents some results and a critical analysis of the approach which leads to the present development; nally Section 7 presents some concluding remarks." 
}, { "figure_ref": [], "heading": "Comparison with Related Systems", "publication_ref": [ "b22", "b29", "b2", "b21", "b51", "b39", "b53", "b33", "b45", "b44", "b32", "b53", "b54", "b42", "b27", "b23", "b39", "b38", "b36", "b35", "b37", "b10" ], "table_ref": [], "text": "Many di erent vision-based road detection systems have been developed worldwide, each relying on various characteristics such as di erent road models (two or three dimensional), acquisition devices (color or monochromatic camera, using mono or stereo vision), hardware systems (special-or general-purpose, serial or parallel), and computational techniques (template matching, neural networks, etc.).\nThe Scarf system (tested on the Navlab vehicle at Carnegie Mellon University) uses two color cameras for color-based image segmentation; the di erent regions are classi ed and grouped together to form larger areas; nally a Hough-like transform is used to vote for di erent binary model candidates. Due to the extremely high amount of data to be processed, the two incoming color images (480 512) are reduced to 60 64 pixel. Nevertheless, a high performance computer architecture, a 10 cell Warp (Crisman & Webb, 1991;Hamey, Web, & Wu, 1988;Annaratone, Arnould, T.Gross, H.Kung, & J.Webb, 1987), has been chosen to speed-up the processing. The system, capable of detecting even unstructured roads, reaches a processing rate of 1 3 Hz (Crisman & Thorpe, 1993, 1991, 1990;Thorpe, 1989). In addition to its heavy computational load, the main problems with this approach are found in the implicit models assumed: if the road curves sharply or if it changes width, the assumed shape model becomes invalid and detection fails (Kluge & Thorpe, 1990). The Vits system (tested on the Alv vehicle and developed at Martin Marietta) also relies on two color cameras. It uses a combination of the red and blue color bands to segment the image, in an e ort to reduce the artifacts caused by shadows. Information on vehicle motion is also used to aid the segmentation process. Tested successfully on straight, single lane roads, it runs faster than Scarf, sacri cing general capability for speed (Turk, Morgenthaler, Gremban, & Marra, 1988). Alvinn (tested on Navlab, Cmu) is a neural network based 30 32 video retina designed, like Scarf, to detect unstructured roads, but it does not have any road model: it learns associations between visual patterns and steering wheel angles, without considering the road location. It has also been implemented on the Warp system, reaching a processing rate of about 10 Hz (Jochem, Pomerleau, & Thorpe, 1993;Pomerleau, 1993Pomerleau, , 1990)). A di erent neural approach has been developed at Cmu and tested on Navlab: a 256 256 color image is segmented on a 16k processor MasPar MP-2 (MasPar Computer Corporation, 1990). A trapezoidal road model is used, but the road width is assumed to be constant throughout the sequence: this means that although the trapezoid may be skewed to the left or right, the top and bottom edges maintain a constant length. The high performance o ered by such a powerful hardware platform is limited by its low I/O bandwidth; therefore a simpler reduced version (processing 128 128 images) has been implemented, working at a rate of 2.5 Hz (Jochem & Baluja, 1993).\nDue to the high amount of data (2 color images) and to the complex operations involved (segmentation, clustering, Hough transform, etc.) the system discussed, even if implemented on extremely powerful hardware machines, achieve a low processing rate. 
Many different methods have been considered to speed up the processing, including the processing of monochromatic images and the use of windowing techniques (Turk et al., 1988) to process only the regions of interest, thus implementing a Focus of Attention mechanism (Wolfe & Cave, 1990; Neumann & Stiehl, 1990).
As an example, in VaMoRs (developed at Universität der Bundeswehr, München) monochromatic images are processed by custom hardware, focusing only on the regions of interest (Graefe & Kuhnert, 1991). The windowing techniques are supported by strong road and vehicle models to predict features in incoming images (Dickmans & Mysliwetz, 1992). In this case, the vehicle was driven at high speeds (up to 100 kph) on German autobahns, which have constant lane width, and where the road has specific shapes: straight, constant curvature, or clothoidal. The use of a single monochromatic camera together with these simple road models allows fast processing based on simple edge detection; a match with a structured road model is then used to discard anomalous edges. This approach is disturbed in shadow conditions, when the overall illumination changes, or when road imperfections are found (Kluge & Thorpe, 1990). The Lanelok system (developed at General Motors) also relies on strong road models: it estimates the location of lane boundaries with a curve fitting method (Kenue & Bajpayee, 1993; Kenue, 1991, 1990), using a quadratic equation model. In addition to being disturbed by the presence of vehicles close to the road markings, lane detection generally fails in shadow conditions. An extension for the correct interpretation of shadows has therefore been introduced (Kenue, 1994); unfortunately this technique relies on fixed brightness thresholds, which is far from being a robust and general approach.
The main aim of the approach discussed in this paper, on the other hand, is to build a low-cost system capable of achieving real-time performance in the detection of structured roads (with painted lane markings), and robust enough to tolerate severe illumination changes such as shadows. Limiting the analysis to structured environments allows the use of simple road models which, together with the processing of monocular monochromatic images on special-purpose hardware, allows high performance to be achieved at low cost. The use of a high-performance general-purpose architecture, such as a 10-cell Warp or a 16k-processor MasPar MP-2 (as in the case of Carnegie Mellon's Navlab), involves high costs which are not compatible with widespread large-scale use. It is for this reason that the execution of low-level computations (efficiently performed by massively parallel systems) has usually been implemented on general-purpose processors, as in the case of VaMoRs. The design and implementation of special-purpose application-oriented architectures (like Paprica; Broggi, Conte, Gregoretti, Sansoè, & Reyneri, 1995, 1994), on the other hand, keep the production costs down, while delivering very high performance levels. More generally, the features that enable the integration of this architecture on a generic vehicle are: (a) its low production cost, (b) its low operational cost, and (c) its small physical size.
}, { "figure_ref": [], "heading": "The Computing Architecture", "publication_ref": [ "b25", "b49", "b18", "b15" ], "table_ref": [], "text": "Additional considerations on power consumption show that mobile computing is moving in the direction of massively parallel architectures comprising a large number of relatively slow-clocked processing elements. The power consumption of dynamic systems can be considered proportional to CfV 2 , where C represents the capacitance of the circuit, f is the clock frequency, and V is the voltage swing. Power can be saved in three di erent ways (Forman & Zahorjan, 1994), by minimizing C, f, and V respectively: using a greater level of Vlsi integration, thus reducing the capacitance C; trading computer speed (with a lower clock frequency f) for lower power consumption (already implemented on many portable PCs);\nreducing the supply voltage V DD .\nRecently, new technological solutions have been exploited to reduce the IC supply voltage from 5 V to 3.3 V. Unfortunately, there is a speed penalty to pay for this reduction: for a Cmos gate (Shoji, 1988), the device delay T d (following a rst order approximation) is proportional to V DD (V DD V T ) 2 , which shows that the reduction of V DD determines a quasilinear increment (until the device threshold value V T ) of the circuit delay T d . On the other hand, the reduction of V DD determines a quadratic reduction of the power consumption.\nThus, for power saving reasons, it is desirable to operate at the lowest possible speed, but, in order to maintain the overall system performance, compensation for these increased delays is required.\nThe use of a lower power supply voltage has been investigated and di erent architectural solutions have been considered so as to overcome the undesired side e ects caused by the reduction of V DD (Courtois, 1993;Chandrakasan, Sheng, & Brodersen, 1992). The reduction of power consumption, while maintaining computational power, can be achieved by using low cost Simd computer architectures, comprising a large number of extremely simple and relatively slow-clocked processing elements. These systems, using slower device speeds, provide an e ective mechanism for trading power consumption for silicon area, while maintaining the computational power unchanged. The 4 major drawbacks of this approach are: a solution based on hardware replication increases the silicon area, and thus it is not suitable for designs with extreme area constraints; parallelism must be accompanied by extra-routing, requiring extra-power; this issue must be carefully considered and optimized; the use of parallel computer architectures involves the redesigning of the algorithms with a di erent computational model; since the number of processing units must be high, if the system has size constraints the processing elements must be extremely simple, performing only simple basic operations.\nThis paper investigates a novel approach to real-time road following based on the use of low-cost massively parallel systems and data-parallel algorithms." }, { "figure_ref": [], "heading": "The Multiresolution Approach", "publication_ref": [ "b47", "b3", "b50", "b12", "b13", "b26", "b14", "b6" ], "table_ref": [], "text": "The image encoding the model (Synthetic Image) and the image from the camera (Natural Image) cannot be directly compared with local computations, because the latter contains much more detail than the former. 
" }, { "figure_ref": [], "heading": "The Multiresolution Approach", "publication_ref": [ "b47", "b3", "b50", "b12", "b13", "b26", "b14", "b6" ], "table_ref": [], "text": "The image encoding the model (Synthetic Image) and the image from the camera (Natural Image) cannot be directly compared with local computations, because the latter contains much more detail than the former. From all the known methods used to decrease the presence of details, it is necessary to choose one that does not decrease the strength of the feature to be extracted. For this purpose, a low-pass filter, such as a 3x3 neighborhood-based anisotropic average filter, would reduce not only the presence of details, but also the sharpness of the road boundaries, rendering their detection more difficult. Since the road boundaries exploit a long-distance correlation, a subsampling of both the natural and the synthetic image leads to a comparison which is less dependent on the detail content. More generally, it is much easier to detect large objects at a low resolution, where only their main characteristics are present, than at a high resolution, where the details of the specific represented object can make its detection more difficult. The complete recognition and description process, on the other hand, can only take place at high resolutions, where it is possible to detect even small details, thanks to the preliminary results obtained at a coarse resolution.
These considerations lead to the use of a pyramidal data structure (Rosenfeld, 1984; Ballard & Brown, 1982; Tanimoto & Kilger, 1980), comprising the same image at different resolutions. Many different architectures have been developed recently to support this computational paradigm (Cantoni & Ferretti, 1993; Cantoni, Ferretti, & Savini, 1990; Fountain, 1987; Cantoni & Levialdi, 1986); where the computing architecture contains a number of processing elements which is smaller than the number of image pixels, an external processor virtualization mechanism (Broggi, 1994) is used. A useful side effect of the resolution reduction is a decrease in the number of computations to be performed. Thus, the choice of a multiresolution approach is advantageous from a computational point of view as well." }, { "figure_ref": [ "fig_2" ], "heading": "Algorithm Structure", "publication_ref": [ "b46", "b30", "b48", "b4", "b17", "b16", "b34" ], "table_ref": [], "text": "As shown in Figure 4, before each subsampling the natural image is filtered. In this way it is possible to decrease both the influence of noise and redundant details, and the distortion due to aliasing (Pratt, 1978) introduced by the subsampling process. The image is partitioned into non-overlapping square subsets of 2x2 pixels each; the filter comprises a simple average of the pixel values for a given subset, which reduces the signal bandwidth. The set of resulting values forms the subsampled image.
The stretching of the synthetic image is performed through an iterative algorithm (Driven Binary Stretching, Dbs), a much simpler, morphology-based (Haralick, Sternberg, & Zhuang, 1987; Serra, 1982) version of the "snake" technique (Blake & Yuille, 1993; Cohen & Cohen, 1993; Cohen, 1991; Kass, Witkin, & Terzopolous, 1987). The result is then oversampled, and further improved using the same Dbs algorithm, until the original resolution is reached. The boundary of the stretched template represents the final result of the process.
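A minimal NumPy sketch of the 2x2 averaging-and-subsampling step (illustrative only, not the Paprica implementation; even image dimensions are handled by trimming):

```python
import numpy as np

def subsample(image):
    # Partition the image into non-overlapping 2x2 blocks and replace
    # each block by the average of its pixels, halving both dimensions.
    h, w = image.shape
    blocks = image[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# Building the pyramid: 256x256 -> 128x128 -> 64x64 -> 32x32
# pyramid = [frame]
# for _ in range(3):
#     pyramid.append(subsample(pyramid[-1]))
```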
}, { "figure_ref": [ "fig_3", "fig_3", "fig_4", "fig_5", "fig_7" ], "heading": "The DBS Filter", "publication_ref": [ "b24", "b30", "b48", "b5", "b17", "b16", "b34", "b30", "b48" ], "table_ref": [], "text": "The purpose of the Dbs lter, illustrated in Figure 5, is to stretch the binary input model in accordance with the data encoded in the grey-tone natural image and to produce a reshaped version of the synthetic model as an output.\nUsually the boundary of a generic object is represented by brightness discontinuities in the image reproducing it, therefore, in the rst version, the rst step of the Dbs lter comprises an extremely fast and simple gradient-based lter computed on the 3 3 neighborhood of each pixel. Then, as shown in Figure 5, a threshold is applied to the gradient image, in order to keep only the most signi cant edges. The threshold value is now xed, but its automatic tuning based on median ltering is currently being tested (Folli, 1994).\nMore precisely, two di erent threshold values are computed for the left and right halves of the image, in order to detect both road boundaries even under di erent illumination conditions. Since a 2D mesh-connected massively parallel architecture is used, an iterative algorithm must be performed in order to stretch the synthetic model toward the positions encoded in the thresholded image. A further advantage of the pyramidal approach is that the number of iterations required for successful stretching is low at a coarse resolution since the image size is small; and again, only a few iterations are required for high-resolution re nement, due to the initial coarse low-resolution stretching. Each border pixel of the synthetic image is attracted towards the position of the nearest foreground pixel of the thresholded image, as shown in Figure 6. For this purpose, a scalar eld V is de ned on E 2 . Field V : E 2 ! E, links the position of each pixel p 2 E 2 to a scalar value V (p) 2 E, which represents the potential associated to the pixel itself. The set of these values is encoded in a potential image. The di erence V (p) V (q) represents the cost of moving pixel p toward position q. The scalar eld is de ned in such a way that a negative cost corresponds to the movement toward the nearest position of the foreground pixels t i of the thresholded image. As a consequence, the iterative process is designed explicitly to enable all the pixel movements associated to a negative cost.\nThus, the value V (p) depends on the minimum distance between pixel p and pixels t i :\nV (p) = min i d(p; t i ) ;\n(1)\nwhere d : E 2 E 2 ! E represents the distance between two pixels, measured with respect to a given metric. In this case, due to the special simplicity of the implementation, a city block (Manhattan distance) metric has been chosen (see Figure 7). A very e cient method for the computation of the potential image using 2D mesh-connected architectures is based on the iterative application of morphological dilations (Haralick et al., 1987;Serra, 1982):\n1. a scalar counter is initialized to 0; for parallel architectures which cannot run any fragment of scalar code, this counter is associated to every pixel in the image, thus constituting a \\parallel\" counter; 2. the counter is decremented; (2) 4. the value of the counter is assigned to the potential image in the positions where the pixel, due to the previous dilation, changes its state from background to foreground. 5. 
This shows that the potential image can be generated by the application of a Distance Transform, DT (Borgefors, 1986), to the binary thresholded image. Due to a more efficient implementation-dependent data handling, the final version of the potential image DT is obtained by adding a constant to every coefficient, so as to work with only positive values: M represents the maximum value allowed for grey-tone images. Thus the new definition of the scalar field V is:
V(p) = M - min_i d(p, t_i). (3)
Furthermore, since the 'distance' information is used only by the pixels belonging to the border of the synthetic image and their neighbors, the iterative process is stopped when the DT has been computed for these pixels, producing a noticeable performance improvement.
As already mentioned, the crux of the algorithm is an iterative process whose purpose is to move the edge pixels of the synthetic image in the directions in which the DT gradient is at a maximum. Figure 8 shows an example of a monodimensional stretching. As shown in Appendix A, the strength of this approach lies in the fact that the stretching algorithm can be expressed simply by a sequence of morphological operations, and can therefore be mapped efficiently on mesh-connected massively parallel architectures, achieving higher performance levels than other approaches (Cohen & Cohen, 1993; Cohen, 1991; Kass et al., 1987).
In order to describe the Dbs algorithm, let us introduce some definitions using Mathematical Morphology operators (Haralick et al., 1987; Serra, 1982) such as dilation (⊕), erosion (⊖), and complement (¬). A two-dimensional binary image S is represented as a subset of E², whose elements correspond to the foreground pixels of the image:
S = { s ∈ E² | s = (x, y); x, y ∈ E }, (4)
where vector (x, y) represents the coordinates of the generic element s.
The external edge of S is defined as the set of elements representing the difference between the dilation of S by the 4-connected structuring element N shown in expression (2) and S itself:
B_e(S) = (S ⊕ N) ∩ ¬S. (5)
In a similar way, the set of elements representing the difference between S and its erosion by the same structuring element N is defined to be the internal edge of S:
B_i(S) = ¬(S ⊖ N) ∩ S. (6)" }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "The Iterative Rules", "publication_ref": [], "table_ref": [], "text": "The elements of B(S) = B_e(S) ∪ B_i(S) are the only elements which can be inserted into or removed from set S by the application of a single iteration of the Dbs algorithm. More precisely, two different rules are applied to the two edges: the first, applied to the external edge B_e(S), determines the elements to be included in set S, while the second, applied to the internal edge B_i(S), determines the elements to be removed from set S.
Rule for the external edge:
- each pixel of the external edge of S computes the minimum value of the DT associated to its 4-connected neighbors belonging to set S;
- all the pixels whose associated DT is greater than the value previously computed are inserted into set S.
The application of this rule has the effect of expanding the synthetic image towards the foreground pixels t_i which are not included in the synthetic model (see the right hand side of Figure 6.b).
Rule for the internal edge:
- each pixel of the internal edge of S computes the minimum value of the DT associated to its 4-connected neighbors not belonging to set S;
- all the pixels whose associated DT is greater than the value previously computed are removed from set S.
The application of this rule has the effect of shrinking the synthetic image (see the left hand side of Figure 6.b).
Note that rule 2 is the inverse of rule 1: the latter tends to stretch the foreground onto the background, while the former acts in the opposite way, using the complement of the synthetic image." }, { "figure_ref": [ "fig_7", "fig_8", "fig_8" ], "heading": "Flat Handling", "publication_ref": [], "table_ref": [], "text": "Figure 8 refers to a monodimensional stretching. Unfortunately, when dealing with 2D data structures, the DT image does not present a strictly increasing or decreasing behavior, even locally. Thus, an extension of the previous rules must be considered for correct flat handling in the 2D space. Since the movement of a generic pixel towards positions holding an equal DT coefficient is expressly disabled, the resulting binary image does not completely follow the shape encoded in the DT image. A minor revision of the definition of rule 1 is thus required. Figure 9.c is obtained with the following rule applied to the external edge.
Rule for the external edge, including flat handling:
- each pixel of the external edge of S computes the minimum value of the DT associated to its 4-connected neighbors belonging to set S;
- all the pixels whose associated DT is greater than the value previously computed, and all the pixels not belonging to the thresholded image whose associated DT is equal to the value previously computed, are inserted into set S.
The specific requirement, for the pixels moving toward a flat region, of not belonging to the thresholded image ensures that the binary image does not follow the DT chain of maxima. With such a requirement, in the specific case of Figure 9, the maxima in the upper-right hand area are not included in the resulting binary image. A sketch of one complete iteration is given below.
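Putting the two rules together, one Dbs iteration can be sketched in NumPy as follows (illustrative only: the real system expresses this as Paprica morphological instructions, and np.roll wraps at the image border, which a real implementation would avoid by padding):

```python
import numpy as np

def dilate4(s):
    out = s.copy()
    out[1:, :] |= s[:-1, :]; out[:-1, :] |= s[1:, :]
    out[:, 1:] |= s[:, :-1]; out[:, :-1] |= s[:, 1:]
    return out

def min_neighbour_dt(dt, mask):
    # Minimum DT value among the 4-connected neighbours of each pixel
    # that lie inside `mask` (+inf where there is no such neighbour).
    m = np.where(mask, dt, np.inf)
    best = np.full(dt.shape, np.inf)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        best = np.minimum(best, np.roll(m, shift, axis=(0, 1)))
    return best

def dbs_iteration(s, dt, thresholded):
    # s and thresholded are boolean images; dt is the potential image (3).
    # Rule 1 (external edge, with flat handling) grows S ...
    ext = dilate4(s) & ~s
    ref_in = min_neighbour_dt(dt, s)
    grow = ext & ((dt > ref_in) | ((dt == ref_in) & ~thresholded))
    # ... rule 2 (internal edge) shrinks it.
    inn = s & dilate4(~s)
    ref_out = min_neighbour_dt(dt, ~s)
    shrink = inn & (dt > ref_out)
    return (s | grow) & ~shrink
```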
The complete processing, first tested on a Connection Machine CM-2 (Hillis, 1985), is now implemented on the special-purpose massively parallel Simd architecture Paprica.

The Paprica system (PArallel PRocessor for Image Checking and Analysis), based on a hierarchical morphology computational model, has been designed as a specialized coprocessor to be attached to a general-purpose host workstation: the current implementation, consisting of a 16×16 square array of processing units, is connected to a Sparc-based workstation via a Vme bus, and installed on Mob-Lab. The current hardware board (a single 6U Vme board integrating the Processor Array, the Image and Program Memories, and a frame grabber device for the direct acquisition of images into the Paprica Image Memory) is the result of the full reengineering of the first Paprica prototype, which has been extensively analyzed and tested (Gregoretti, Reyneri, Sansoè, Broggi, & Conte, 1993) for several years.

The Paprica architecture has been developed explicitly to meet the specific requirements of real-time image processing applications (Broggi, 1995c, 1995b; Adorni, Broggi, Conte, & D'Andrea, 1993); the specific processor virtualization mechanism utilized by the Paprica architecture allows the handling of pyramidal data structures without any additional overhead.

As shown in Figure 2, the output device can be: (a) a heads-up display in which the road (or lane) boundaries are highlighted; (b) a set of Leds indicating the relative position of the road (or lane) and the vehicle.

Starting from 256×256 grey-tone images and after the resolution reduction process, in the first case (a) the initial resolution must be recovered in order to superimpose the result onto the original image. In the second case (b), on the other hand, due to the coarse quantization of the output device, the processing can be stopped at a low resolution (e.g., 64×64), where only a stripe of the resulting image is analyzed to drive the Leds. Table 1 presents the computational time required by each step of the algorithm (considering 5 DT iterations and 5 Dbs iterations for each pyramid level)."
" }, { "figure_ref": [], "heading": "Operation", "publication_ref": [ "b31" ], "table_ref": [], "text": "Table 1: Performance of the Paprica system; the numbers refer to 5 iterations of the DT process and 5 iterations of the Dbs filter for each pyramid level.

In the current Mob-Lab configuration, the output device (shown in Figure 1.b) consists of a set of 5 Leds: Table 1 shows that the 256² → 32² → 64² filtering of a single frame takes 150–180 ms (depending on the number of iterations required), allowing the acquisition and processing of about 6 frames per second. Moreover, due to the high correlation between two consecutive sequence frames, the final stretched template can be used as the input model to be stretched by the processing of the following frame. Under these conditions a lower number of Dbs and DT iterations is needed for the complete template reshaping, thus producing a noticeable performance improvement. The reduction in the time required to process a single frame also increases the correlation between the current and the following frame in the sequence, thus allowing a further reduction in computation time.
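The frame-to-frame seeding just described can be sketched as a simple loop in which each frame's Dbs is initialized with the template stretched on the previous frame. The helper `threshold_gradient` and the iteration counts below are hypothetical placeholders, not values from the paper.

```python
def process_sequence(frames, initial_model, iters_first=5, iters_next=2):
    """Reuse the previous stretched template as the next frame's input model;
    high inter-frame correlation lets later frames use fewer iterations."""
    model, results = initial_model, []
    for k, frame in enumerate(frames):
        thresholded = threshold_gradient(frame)   # hypothetical edge-extraction helper
        dt = distance_transform(thresholded)
        for _ in range(iters_first if k == 0 else iters_next):
            model = dbs_iteration(model, dt, thresholded)
        results.append(model.copy())
    return results
```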
Images acquired in many different conditions and environments have been used for extensive experimentation, which was performed first off-line on a functional simulator implemented on an 8k-processor Connection Machine CM-2 (Hillis, 1985) and then in real time on the Paprica hardware itself on Mob-Lab. The complete system was demonstrated at the final Prometheus project meeting in Paris, in October 1994: the Mob-Lab land vehicle was driven around the two-lane track at Mortefontaine, under different conditions (straight and curved roads, with shadows and changing illumination conditions, and with other vehicles on the path).

The performance obtained during this demonstration allowed the main limitations of the system to be detected and enabled a critical analysis of the approach, thus leading to proposals for its development."
" }, { "figure_ref": [ "fig_1", "fig_1", "fig_4" ], "heading": "Critical Analysis and Evolution", "publication_ref": [ "b55", "b54" ], "table_ref": [], "text": "The approach discussed achieves good performance in terms of output quality when the model matches (or is sufficiently similar to) the road conditions, namely when road markings are painted on the road surface (on structured roads, see Figures 11.a and 12.a), inducing a sufficiently high luminance gradient. This approach is successful when the road or lane boundaries can be extracted from the input image through a gradient thresholding operation (see Figures 11.b and 12.b). Unfortunately, this is not always possible, for example when the road region is a patch of shadow or sunlight, as in Figure 13.a. In this case the computation of the DT starting from the thresholded image (see Figure 13.b) is no longer significant: a different method must be devised for the determination of the binary image to be used as input for the Distance Transform. In a recent work (Broggi, 1995a) an approach based on the removal of the perspective effect is presented and its performance discussed. A transform, a non-uniform resampling similar to what happens in the human visual system (Zavidovique & Fiorini, 1994; Wolfe & Cave, 1990), is applied to the input image (Figure 10.a); assuming a flat road, every pixel of the resampled image (Figure 10.b) now represents the same portion of the road. Due to their constant width within the overall image, the road markings can now be easily enhanced and extracted by extremely simple morphological filters (Broggi, 1995a) (Figure 10.c). Since the removal (and reintroduction) of the perspective effect can be reduced to a mere image resampling, and the filter is based on simple morphological operators (Broggi, 1995a), the implementation on the Paprica system is straightforward. The preliminary results obtained by the current test version of the system are encouraging both for the output quality (the problems caused by shadows are now resolved) and for the computation time: a single frame is processed in less than 100 ms, thus allowing the processing of about 10 frames per second. The improvement of the Dbs process by means of the perspective-based filter, in addition to allowing correct road (or lane) detection in the presence of shadows, can be implemented extremely efficiently on the Paprica system, taking advantage of a specific hardware extension designed explicitly for this purpose (non-uniform resampling) (Broggi, 1995a). Figures 11, 12, 13, and 14 show the results of the processing in different conditions: straight road, curved road, shadows, and other vehicles in the path, respectively.
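The flat-road inverse-perspective resampling and the subsequent line-wise extraction of constant-width markings can be illustrated with a deliberately simplified sketch. The geometry below (row spacing, window widths, thresholds) is purely illustrative and does not reproduce the calibrated non-uniform resampling of (Broggi, 1995a) or its Paprica hardware support.

```python
def remove_perspective(image, horizon, out_shape=(64, 64)):
    """Toy flat-road resampling: each output row covers a constant road slice,
    so source rows crowd toward the horizon and the window widens downward."""
    h, w = image.shape
    oh, ow = out_shape
    out = np.zeros(out_shape, dtype=image.dtype)
    for r in range(oh):                 # r = 0 is the farthest road slice
        depth = oh - r                  # illustrative 1/depth row spacing
        sy = min(h - 1, horizon + (h - 1 - horizon) // depth)
        span = max(2, w // depth)       # far road appears narrow in the image
        xs = np.linspace((w - span) // 2, (w + span) // 2 - 1, ow)
        out[r] = image[sy, np.clip(xs.astype(int), 0, w - 1)]
    return out

def extract_markings(ipm, width=3, contrast=30):
    """Line-wise detection of dark-bright-dark transitions of near-constant width."""
    left = ipm[:, :-2 * width].astype(int)
    mid = ipm[:, width:-width].astype(int)
    right = ipm[:, 2 * width:].astype(int)
    mask = (mid - left > contrast) & (mid - right > contrast)
    out = np.zeros(ipm.shape, dtype=bool)
    out[:, width:-width] = mask
    return out
```

In the resampled image the markings have roughly constant width, which is why a fixed-width transition test per row is sufficient; the binary output can then feed the Distance Transform in place of the gradient-thresholded image.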
In the last case the lane cannot be detected successfully due to the presence of an obstacle. The use of a pair of stereo images is currently being investigated to overcome this problem: the removal of the perspective effect from both stereo images would lead to the same image if and only if the road is flat, namely if no obstacles are found on the vehicle path. A difference between the two reorganized images (namely when an obstacle is detected) would cause the algorithm to stop the lane detection and warn the driver.

The general nature of the presented approach enables the detection of other sufficiently large-sized features: for example, using different synthetic models it is possible to detect road or lane boundaries, as shown in Figure 16."
" }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper a novel approach for the detection of road (or lane) boundaries for vision-based systems has been presented. The multiresolution approach, together with top-down control, allows the achievement of remarkable performance in terms of both computation time (when mapped on massively parallel architectures) and output quality. A perspective-based filter has been introduced to improve system performance in shadow conditions. However, even if the perspective-based filter alone is able to extract the road markings with a high degree of confidence, the hierarchical application of the Dbs filter (and its extension to the handling of image sequences) is of basic importance, since it allows the exploitation of the temporal correlation between successive sequence frames (performing solution tracking).

The presence of an obstacle in the vehicle path is still an open problem, which is currently being approached using stereo vision (Broggi, 1995a).

The algorithm has been implemented on the Paprica system, a massively parallel low-cost Simd architecture; because of its specific hardware features, Paprica is capable of processing about 10 frames per second."
" }, { "figure_ref": [ "fig_16", "fig_17", "fig_16", "fig_17" ], "heading": "Appendix A. The Morphological Implementation of the DBS Filter", "publication_ref": [ "b30", "b30" ], "table_ref": [], "text": "In this appendix, the rule for the external edge will be considered, assuming step n in the iterative process. Recalling the Mathematical Morphology notations used to identify a grey-tone two-dimensional image, the DT image is a subset of $E^3$:

$$DT = \{ d \in E^3 \mid d = (u, v),\ v = V(u),\ \forall u \in E^2 \} \qquad (7)$$

where u represents the position of element d in $E^2$, and v represents its value.

The pixelwise masking operation between a binary and a grey-tone image is here defined as a function

$$\odot : E^2 \times E^3 \to E^3 \qquad (8)$$

such that $A \odot B$ represents the subset of B containing only the elements b = (u, v) whose position vector $u \in E^2$ also belongs to A:

$$A \odot B = \{ x \in E^3 \mid x = (u, v) \in B,\ u \in A \} \qquad (9)$$

In order to compute the minimum value of the DT in the specified neighborhood, let us consider the image $K^{(n)}_e$:

$$K^{(n)}_e = \left( S^{(n)}_e \odot DT \right) \cup \left( \overline{S^{(n)}_e} \odot L \right) \qquad (10)$$

where $S^{(n)}_e$ represents the binary image at step n, the subscript e indicates that the rule for the external edge is being considered, and finally

$$L = \{ l \in E^3 \mid l = (u, \infty),\ \forall u \in E^2 \} \qquad (11)$$

As shown in (Haralick et al., 1987), in order to compute the minimum value of a grey-tone image $K^{(n)}_e$ over a 4-connected neighborhood, the following grey-scale morphological erosion should be used:

$$M^{(n)}_e = K^{(n)}_e \ominus Q \qquad (12)$$

where

$$Q = \{ (1, 0, 0),\ (-1, 0, 0),\ (0, 1, 0),\ (0, -1, 0) \} \qquad (13)$$

as shown in Figure 17.
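In array terms, equations (10)–(13) say: mask the DT to S (with the constant image L, here taken as infinity, outside S) and erode the result by Q, which yields at each pixel the minimum DT over its 4-neighbors in S; rule 1 then follows as in equations (18)–(19) below. A minimal sketch, reusing the earlier hypothetical helpers:

```python
def rule1_morphological(s, dt):
    """Equations (10)-(19) for the external edge, in array form."""
    INF = np.iinfo(np.int32).max
    k_e = np.where(s, dt, INF)             # K_e = (S . DT) U (~S . L), eqs. (10)-(11)
    m_e = minimum4(k_e)                    # M_e = K_e eroded by Q, eqs. (12)-(13)
    e_e = (m_e < dt) & external_edge(s)    # E_e = M(M_e, DT) ∩ B_e(S), eq. (18)
    return s | e_e                         # S(n+1) = S(n) U E_e, eq. (19)
```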
In order to determine the set of elements in which $M^{(n)}_e$ has a value smaller than DT, a new function $\mathcal{M}$ is required:

$$\mathcal{M} : E^3 \times E^3 \to E^2 \qquad (14)$$

Such a function is defined as

$$\mathcal{M}(A, B) = \{ x \in E^2 \mid \mathcal{V}(A, x) < \mathcal{V}(B, x) \} \qquad (15)$$

where $\mathcal{V} : E^3 \times E^2 \to E$ is defined as

$$\mathcal{V}(A, x) = \begin{cases} a & \text{if } \exists\, a \in E \text{ such that } (x, a) \in T(A) \\ \infty & \text{otherwise} \end{cases} \qquad (16)$$

In equation (16), T(A) represents the top of A (Haralick et al., 1987), here defined as

$$T(A) = \{ t \in A,\ t = (u, v) \mid \nexists\, t' = (u, v') \in A \text{ for which } v' > v \} \qquad (17)$$

The set of elements which will be included in set $S^{(n+1)}_e$ is given by the logical intersection between $\mathcal{M}(M^{(n)}_e, DT)$ and the set of elements belonging to the external edge of $S^{(n)}_e$:

$$E^{(n)}_e = \mathcal{M}\left(M^{(n)}_e, DT\right) \cap B_e\left(S^{(n)}_e\right) \qquad (18)$$

Thus, the final result of iteration n is given by

$$S^{(n+1)}_e = S^{(n)}_e \cup E^{(n)}_e \qquad (19)$$

Figure 18.a shows the execution of an individual iteration of rule 1 on a monodimensional image profile.

Following similar steps, it is possible to formalize rule 2. In order to compute the maximum value of the DT in the specified neighborhood, let us consider the image $K^{(n)}_i$:

$$K^{(n)}_i = S^{(n)}_i \odot DT \qquad (20)$$

where the subscript i indicates that the rule for the internal edge is being considered. As shown above, in order to compute the maximum value of a grey-tone image $K^{(n)}_i$ over a 4-connected neighborhood, the following morphological dilation should be used:

$$M^{(n)}_i = K^{(n)}_i \oplus Q \qquad (21)$$

where Q is shown in Figure 17. The set of elements which will be removed from set $S^{(n+1)}_i$ is given by the logical intersection between $\mathcal{M}(M^{(n)}_i, DT)$ and the set of elements belonging to the internal edge of $S^{(n)}_i$:

$$E^{(n)}_i = \mathcal{M}\left(M^{(n)}_i, DT\right) \cap B_i\left(S^{(n)}_i\right) \qquad (22)$$

Thus, the final result of iteration n is given by

$$S^{(n+1)}_i = S^{(n)}_i \setminus E^{(n)}_i = S^{(n)}_i \cap \overline{E^{(n)}_i} \qquad (23)$$

Figure 18.b shows the execution of an individual iteration of rule 2 on a monodimensional profile of an image."
" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was partially supported by the Italian CNR within the framework of the Eureka Prometheus Project – Progetto Finalizzato Trasporti, under contracts n. 93.01813.PF74 and 94.01371.PF74.

The authors are indebted to Gianni Conte for the valuable and constructive discussions and for his continuous support throughout the project." } ]
[ { "authors": "G Adorni; A Broggi; G Conte; V & D'andrea", "journal": "", "ref_id": "b0", "title": "A self-tuning system for realtime Optical Flow detection", "year": "1993" }, { "authors": "G Adorni; A Broggi; G Conte; V & D'andrea", "journal": "IEEE and SPIE Press", "ref_id": "b1", "title": "Real-Time Image Processing for Automotive Applications", "year": "1995" }, { "authors": "M Annaratone; E Arnould; T Gross; H Kung; & J Webb", "journal": "IEEE Trans on Computers, C", "ref_id": "b2", "title": "The Warp Computer: Architecture, Implementation and Performance", "year": "1987" }, { "authors": "D H Ballard; C M Brown", "journal": "Prentice Hall", "ref_id": "b3", "title": "Computer Vision", "year": "1982" }, { "authors": "A Blake; A Yuille", "journal": "MIT Press", "ref_id": "b4", "title": "Active Vision", "year": "1993" }, { "authors": "G Borgefors", "journal": "Computer Vision, Graphics and Image Processing", "ref_id": "b5", "title": "Distance Transformations in Digital Images", "year": "1986" }, { "authors": "A Broggi", "journal": "", "ref_id": "b6", "title": "Performance Optimization on Low-Cost Cellular Array Processors", "year": "1994" }, { "authors": "A Broggi", "journal": "", "ref_id": "b7", "title": "A Massively Parallel Approach to Real-Time Vision-Based Road Markings Detection", "year": "1995" }, { "authors": "A Broggi", "journal": "Real-Time Imaging Journal", "ref_id": "b8", "title": "A Novel Approach to Lossy Real-Time Image Compression: Hierarchical Data Reorganization on a Low-Cost Massively Parallel System", "year": "1995" }, { "authors": "A Broggi", "journal": "IEEE Trans on Image Processing", "ref_id": "b9", "title": "Parallel and Local Feature Extraction: a Real-Time Approach to Road Boundary Detection", "year": "1995" }, { "authors": "A Broggi; G Conte; F Gregoretti; C Sanso E; L M Reyneri", "journal": "Integrated Computer-Aided Engineering Journal -Special Issue on Massively Parallel Computing", "ref_id": "b10", "title": "The Evolution of the PAPRICA System", "year": "1995" }, { "authors": "A Broggi; G Conte; F Gregoretti; C Sanso E; L M Reyneri", "journal": "", "ref_id": "b11", "title": "The PAPRICA Massively Parallel Processor", "year": "1994" }, { "authors": "V Cantoni; M Ferretti", "journal": "Plenum Press", "ref_id": "b12", "title": "Pyramidal Architectures for Computer Vision", "year": "1993" }, { "authors": "V Cantoni; M Ferretti; M Savini", "journal": "NATO ASI Series F", "ref_id": "b13", "title": "Compact Pyramidal Architectures", "year": "1990" }, { "authors": "V Cantoni; S Levialdi", "journal": "Springer Verlag", "ref_id": "b14", "title": "Pyramidal Systems for Computer Vision", "year": "1986" }, { "authors": "A Chandrakasan; S Sheng; R Brodersen", "journal": "IEEE Journal of Solid-State Circuits", "ref_id": "b15", "title": "Low-Power CMOS Digital Design", "year": "1992" }, { "authors": "L D Cohen", "journal": "CGVIP: Image Understanding", "ref_id": "b16", "title": "Note on Active Contour Models and Balloons", "year": "1991" }, { "authors": "L D Cohen; I Cohen", "journal": "IEEE Trans on PAMI", "ref_id": "b17", "title": "Finite-Element Methods for Active Contour Models and Balloons for 2-D and 3-D Images", "year": "1993" }, { "authors": "B Courtois", "journal": "TIMA & CMP", "ref_id": "b18", "title": "CAD and Testing of ICs and systems: Where are we going?", "year": "1993" }, { "authors": "J Crisman; C Thorpe", "journal": "Kluwer Academic Publishers", "ref_id": "b19", "title": "Color Vision for Road Following", "year": "1990" }, { "authors": "J Crisman; C 
Thorpe", "journal": "", "ref_id": "b20", "title": "UNSCARF, A Color Vision System for the Detection of Unstructured Roads", "year": "1991" }, { "authors": "J Crisman; C Thorpe", "journal": "IEEE Trans on Robotics and Automation", "ref_id": "b21", "title": "SCARF: A Color Vision System that Tracks Roads and Intersections", "year": "1993" }, { "authors": "J D Crisman; J A Webb", "journal": "IEEE Trans on PAMI", "ref_id": "b22", "title": "The Warp Machine on Navlab", "year": "1991" }, { "authors": "E D Dickmans; B D Mysliwetz", "journal": "IEEE Trans on PAMI", "ref_id": "b23", "title": "Recursive 3-D Road and Relative Ego-State Recognition", "year": "1992" }, { "authors": "A Folli", "journal": "", "ref_id": "b24", "title": "Elaborazione parallela di immagini per applicazioni in tempo reale su autoveicolo", "year": "1994" }, { "authors": "G H Forman; J Zahorjan", "journal": "Computer", "ref_id": "b25", "title": "The Challenge of Mobile Computing", "year": "1994" }, { "authors": "T Fountain", "journal": "Academic-Press", "ref_id": "b26", "title": "Processor Arrays: Architectures and applications", "year": "1987" }, { "authors": "V Graefe; K.-D Kuhnert", "journal": "Springer Verlag", "ref_id": "b27", "title": "Vision-based Autonomous Road Vehicles", "year": "1991" }, { "authors": "F Gregoretti; L M Reyneri; C Sanso E; A Broggi; G Conte", "journal": "", "ref_id": "b28", "title": "The PAPRICA SIMD array: critical reviews and perspectives", "year": "1993" }, { "authors": "L G C Hamey; J A Web; I Wu", "journal": "Kluwer Academic Publishers", "ref_id": "b29", "title": "Low-level vision on Warp and the Apply Programming Model", "year": "1988" }, { "authors": "R M Haralick; S R Sternberg; X Zhuang", "journal": "IEEE Trans on PAMI", "ref_id": "b30", "title": "Image Analysis Using Mathematical Morphology", "year": "1987" }, { "authors": "W D Hillis", "journal": "MIT Press", "ref_id": "b31", "title": "The Connection Machine", "year": "1985" }, { "authors": "T M Jochem; S Baluja", "journal": "", "ref_id": "b32", "title": "A Massively Parallel Road Follower", "year": "1993" }, { "authors": "T M Jochem; D A Pomerleau; C E Thorpe", "journal": "", "ref_id": "b33", "title": "MANIAC: A Next Generation Neurally Based Autonomous Road Follower", "year": "1993" }, { "authors": "M Kass; A Witkin; D Terzopolous", "journal": "Intl Journal of Computer Vision", "ref_id": "b34", "title": "Snakes: Active Contour Models", "year": "1987" }, { "authors": "S K Kenue", "journal": "", "ref_id": "b35", "title": "LANELOK: detection of lane boundaries and vehicle tracking using image-processing techniques", "year": "1990" }, { "authors": "S K Kenue", "journal": "", "ref_id": "b36", "title": "LANELOK: An Algorithm for Extending the Lane Sensing Operating Range to 100 Feet", "year": "1991" }, { "authors": "S K Kenue", "journal": "", "ref_id": "b37", "title": "Correction of Shadow Artifacts for Vision-based Vehicle Guidance", "year": "1994" }, { "authors": "S K Kenue; S Bajpayee", "journal": "", "ref_id": "b38", "title": "LANELOK: Robust Line and Curvature Fitting of Lane Boundaries", "year": "1993" }, { "authors": "K Kluge; C E Thorpe", "journal": "Kluwer Academic Publishers", "ref_id": "b39", "title": "Explicit Models for Robot Road Following", "year": "1990" }, { "authors": "K Kluge", "journal": "", "ref_id": "b40", "title": "Extracting Road Curvature and Orientation from Image Edge Points without Perceptual Grouping into Features", "year": "1994" }, { "authors": "", "journal": "", "ref_id": "b41", "title": "MP-1 Family 
Data-Parallel Computers", "year": "" }, { "authors": "H Neumann; H Stiehl", "journal": "Elsevier", "ref_id": "b42", "title": "Toward a computational architecture for monocular preattentive segmentation", "year": "1990" }, { "authors": "W M Newman; R F Sproull", "journal": "McGraw-Hill", "ref_id": "b43", "title": "Principles of Interactive Computer Graphics", "year": "1981" }, { "authors": "D A Pomerleau", "journal": "Kluwer Academic Publishers", "ref_id": "b44", "title": "Neural Network Based Autonomous Navigation", "year": "1990" }, { "authors": "D A Pomerleau", "journal": "Kluwer Academic Publishers", "ref_id": "b45", "title": "Neural Network Perception for Mobile Robot Guidance", "year": "1993" }, { "authors": "W K Pratt", "journal": "", "ref_id": "b46", "title": "Digital Image Processing", "year": "1978" }, { "authors": "A Rosenfeld", "journal": "Springer Verlag", "ref_id": "b47", "title": "Multiresolution Image Processing and Analysis", "year": "1984" }, { "authors": "J Serra", "journal": "Academic Press", "ref_id": "b48", "title": "Image Analysis and Mathematical Morphology", "year": "1982" }, { "authors": "M Shoji", "journal": "Prentice Hall", "ref_id": "b49", "title": "CMOS Digital Circuit Technology", "year": "1988" }, { "authors": "S L Tanimoto; K Kilger", "journal": "Academic Press", "ref_id": "b50", "title": "Structured Computer Vision: Machine Perception trough Hierarchical Compuation Structures", "year": "1980" }, { "authors": "C Thorpe", "journal": "", "ref_id": "b51", "title": "Outdoor Visual Navigation for Autonomous Robots", "year": "1989" }, { "authors": "R Tsai", "journal": "", "ref_id": "b52", "title": "An E cient and Accurate Camera Calibration Technique for 3D Machine Vision", "year": "1986" }, { "authors": "M A Turk; D G Morgenthaler; K D Gremban; M Marra", "journal": "IEEE Trans on PAMI", "ref_id": "b53", "title": "VITS -A Vision System for Autonomous Land Vehicle Navigation", "year": "1988" }, { "authors": "J M Wolfe; K R Cave", "journal": "", "ref_id": "b54", "title": "Deploying visual attention: the guided model", "year": "1990" }, { "authors": "B Zavidovique; P Fiorini", "journal": "Plenum Press", "ref_id": "b55", "title": "A Control View to Vision Architectures", "year": "1994" } ]
[ { "formula_coordinates": [ 20, 517.32, 617.16, 4.92, 15.2 ], "formula_id": "formula_1", "formula_text": ")" } ]
Vision-Based Road Detection in Automotive Systems: A Real-Time Expectation-Driven Approach
The main aim of this work is the development of a vision-based road detection system fast enough to cope with the difficult real-time constraints imposed by moving-vehicle applications. The hardware platform, a special-purpose massively parallel system, has been chosen to minimize system production and operational costs. This paper presents a novel approach to expectation-driven low-level image segmentation, which can be mapped naturally onto mesh-connected massively parallel Simd architectures capable of handling hierarchical data structures. The input image is assumed to contain a distorted version of a given template; a multiresolution stretching process is used to reshape the original template in accordance with the acquired image content, minimizing a potential function. The distorted template is the process output.
Alberto Broggi; Simona Bertè
[ { "figure_caption": "Figure 1 :Figure 2 :12Figure 1: (a) The Mob-Lab land vehicle; (b) the control panel used as output to display the processing results", "figure_data": "", "figure_id": "fig_0", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Synthetic images used as road models for: (a) di erent camera positions and/or orientations (b) di erent number of lanes (assuming driving on the right)", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Block diagram of the whole algorithm; for each step the depth of every image is shown (in bit/pixel)", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Block diagram of the Dbs lter", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The attraction of the boundary pixels of the model (in grey) toward the thresholded image (the rectangular contour): (a) two-dimensional case; (b) monodimensional section", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: City block or Manhattan distance from the central pixel", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "3. the binary input image is dilated using a 4-connected structuring element N, formed by the following elements:", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Example of 4 iterations of the Dbs algorithm (monodimensional case)", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Two-dimensional stretching: di erent square markings represent di erent states of input binary image; dark grey areas represent the stretching area", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: The detection of road markings through the removal of the perspective e ect in three di erent conditions: straight road with shadows, curved road with shadows, junction. (a) input image; (b) reorganized image, obtained by non-uniform resampling of (a); (c) result of the line-wise detection of black-white-black transitions in the horizontal direction; (d) reintroduction of the perspective e ect, where the grey areas represent the portion of the image shown in (c); (e) superimposition of (d) onto a brighter version of the original image (a).", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :Figure 12 :1112Figure 11: Lane detection on a straight road: (a) input image; (b) image obtained by thresholding the gradient image; (c) image obtained by the perspective-based lter; (d) stretched template; (e) superimposition of the edges of the stretched template onto the original image. 
In this case both the thresholded gradient (b) and the perspective-based ltered (c) images can be used as input to the DT.", "figure_data": "", "figure_id": "fig_10", "figure_label": "1112", "figure_type": "figure" }, { "figure_caption": "Figure 13 :Figure 14 :1314Figure13: Lane detection on a straight road with shadows: this is a case where the thresholded gradient (b) cannot be used to determine the DT image due to the noise caused by shadows. On the other hand, the perspective-based processing is able to lter out the shadows and extract the road markings: the DT is in fact determined using image (c) instead of (b).", "figure_data": "", "figure_id": "fig_11", "figure_label": "1314", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Diagram of the extended Dbs lter including the perspective-based ltering", "figure_data": "", "figure_id": "fig_13", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16: (a) input image; (b) synthetic model used for lane detection; (c) superimposition of the edges of the stretched model onto the original input image; (d) synthetic model used for road detection; (e) superimposition of the edged of the stretched model onto the original input image.", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: Structuring element Q", "figure_data": "", "figure_id": "fig_16", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 :18Figure 18: Monodimensional stretching in the case of external edge (left) and internal edge (right)", "figure_data": "", "figure_id": "fig_17", "figure_label": "18", "figure_type": "figure" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b29", "b31", "b43", "b35", "b23", "b22", "b26", "b2", "b5" ], "table_ref": [], "text": "The topic of this paper is generalization of clauses, which is a central problem in the area of Inductive Logic Programming (ILP) (Muggleton, 1991(Muggleton, , 1993)). ILP can be seen as the intersection of inductive machine learning and computational logic. In inductive machine learning the goal is to develop techniques for inducing hypotheses from examples (observations). By using the rich representation formalism of computational logic (clauses) for hypotheses and examples, ILP can overcome the limitations of classical machine learning representations, such as decision trees (Quinlan, 1986).\nBy using a clausal representation we have the ability to learn all types of hypotheses describable in rst-order logic, in particular the important class of recursive hypotheses. Another advantage of using a clausal representation is that clausal theories are easy to manipulate for machine learning algorithms. This is due to that changes to a clausal theory by adding or deleting clauses or literals have clear and simple e ects on the generality of the theory. The reader is referred to two introductions to ILP, one presented by Muggleton and De Raedt (1994), and one by Lavra c and D zeroski (1994). Lavra c and De Raedt (1995) present a recent survey of ILP research.\nWe use the following de nition of induction. A theory (background knowledge) 1) T 6 j = E + 1 ^: : : ^E+ n , 2) T ^H j = E + 1 ^: : : ^E+ n , and 3) T ^H 6 j = E 1 _ : : : _ E m .\nIn other words, the positive examples should not be a logical consequence of the theory alone, but a logical consequence of the theory together with the hypothesis, and no negative example should be a logical consequence of the theory and the hypothesis. Using clausal representation T, H, fE + 1 ; : : : ; E + n g and fE 1 ; : : : ; E m g are sets of clauses.\nIn this paper we concentrate on the subproblem in inductive learning of nding a clause that is a generalization of a set of positive examples. In other words, nding a clause C such that C j = E + 1 ^: : : ^E+ n :\nWe are particularly interested in least general generalizations, since every generalization of a set of clauses is also a generalization of the least general generalization of this set of clauses. Therefore a least general generalization in some sense represents all generalizations.\nA least general generalization is also consistent with the negative examples whenever there exists a consistent generalization.\nThe most natural and straightforward basis for generalization is implication, since induction is de ned in terms of logical consequence. Plotkin has described (1970Plotkin has described ( , 1971a) ) a technique for the computation of least general generalizations of clauses under a relation called -subsumption. This relation has been accorded much interest, and it is often used instead of implication, since it is easier to compute. 
However, there is a difference between θ-subsumption and implication, which sometimes causes the generalizations obtained by Plotkin's technique to be over-generalizations with respect to implication.

Consider the following clauses, in which s denotes the successor function:

$C_1 = (\textit{number}(s(0)) \leftarrow \textit{number}(0))$,
$C_2 = (\textit{number}(s^3(0)) \leftarrow \textit{number}(s(0)))$,
$D_1 = (\textit{number}(s(x)) \leftarrow \textit{number}(y))$, and
$D_2 = (\textit{number}(s(x)) \leftarrow \textit{number}(x))$.

The clause $D_1$ is a least general generalization under θ-subsumption (LGGθ) of $C_1$ and $C_2$, and the clause $D_2$ is a least general generalization under implication (LGGI) of $C_1$ and $C_2$. It is clear that $D_1$ is strictly more general than $D_2$, both under θ-subsumption and under implication. It is also clear that $D_2$ is more appropriate in a definition of natural number.

To learn recursive clauses, generalization under θ-subsumption is not very adequate, as illustrated above. The ability to learn recursive clauses is crucial, since recursion is the basic program structure of logic programs.

In section 2, we describe the most important results concerning generalization under θ-subsumption, and present a theoretical study of generalization under implication. In section 3, we present a technique to reduce implication to θ-subsumption based on or-introduction of literals. Finally, our results, computational complexity and future work are discussed in section 4.

We assume the reader to be familiar with the basic notions and notations of Logic Programming (Lloyd, 1987) and/or Automatic Theorem Proving (Chang & Lee, 1973; Gallier, 1986)."
" }, { "figure_ref": [], "heading": "Generalization of Clauses", "publication_ref": [ "b40" ], "table_ref": [], "text": "In the area of Inductive Logic Programming (ILP), the framework for generalization of clauses developed by Plotkin (1970, 1971b, 1971a) has been accorded much interest. In this section we will describe this framework, which is based on a relation known as θ-subsumption, and the most important results connected with it.

Since generalization under θ-subsumption is not sufficient for generalization of recursive clauses, as shown in the introduction, we will study the theory of generalization under implication. We note that implication between clauses is undecidable, and we will therefore introduce a restricted form of implication, called T-implication.

Example. Consider the following clauses:

$C = (p(x) \leftarrow q(x,y), q(y,z), q(z,w), q(w,x))$,
$D = (p(x) \leftarrow q(x,y), q(y,x), q(x,x))$, and
$E = (p(x) \leftarrow q(x,x))$.

We have $C \succeq D$ (C θ-subsumes D) since $C\{z/x, w/y\} \subseteq D$, and $D \succeq E$ since $D\{y/x\} \subseteq E$, and thus $C \succeq E$. We also have $E \succeq D$, since $E \subseteq D$. Hence D and E are equivalent under θ-subsumption, and still they are not variants of each other."
" }, { "figure_ref": [], "heading": "Generalization under θ-subsumption", "publication_ref": [], "table_ref": [], "text": "Theorem 1 states that θ-subsumption between clauses is decidable. This was first shown by Robinson (1965, page 39).

Theorem 1 (Decidability of θ-subsumption between clauses) Let C and D be clauses. Then there exists a procedure to decide if $C \succeq D$.

As mentioned in the introduction, we are particularly interested in least general generalizations. The main reason is that a least general generalization includes the information of all consistent generalizations.

Definition. A clause C is a generalization under θ-subsumption of a set of clauses $S = \{D_1, \ldots, D_n\}$ if and only if, for every $1 \le i \le n$, $C \succeq D_i$.
A generalization under θ-subsumption C of S is a least general generalization under θ-subsumption (LGGθ) of S if and only if, for every generalization under θ-subsumption $C'$ of S, $C' \succeq C$.

Example. Consider the following clauses:

$C = (p(a) \leftarrow q(a), q(b))$,
$D = (p(b) \leftarrow q(b), q(x))$,
$E = (p(y) \leftarrow q(y), q(b))$, and
$F = (p(y) \leftarrow q(y), q(b), q(z), q(w))$.

Both clauses E and F are LGGθs of $\{C, D\}$.

In general, an LGGθ is not unique, as shown by the example above. However, it is unique up to θ-subsumption equivalence. Plotkin has shown (1971a, page 82) that there exists an LGGθ of every finite set of clauses.

Theorem 2 (Existence of LGGθs) Let S be a finite set of clauses. Then there exists an LGGθ of S.

An LGGθ of a set of clauses is computable, and Plotkin (1971a) has described an algorithm for it. This algorithm is quite simple and easy to implement, but computationally expensive."
" }, { "figure_ref": [], "heading": "Generalization under Implication", "publication_ref": [ "b8", "b38", "b27" ], "table_ref": [], "text": "Implication is the most natural and straightforward basis for generalization in inductive learning, since the concept of induction can be defined as the inverse of logical entailment. It is well known that implication is reflexive and transitive. Two clauses may be equivalent under implication without being equivalent under θ-subsumption.

Example. Consider the following clauses:

$C = (p(x,y,z) \leftarrow p(y,z,x))$ and
$D = (p(x,y,z) \leftarrow p(z,x,y))$.

Then we have $C \Leftrightarrow D$, since D is a resolvent of C resolved with itself, and C is a resolvent of D resolved with itself. We also have that C and D are not equivalent under θ-subsumption, and even $C \not\succeq D$.

It has been claimed that implication and θ-subsumption are equivalent for function-free clauses (Helft, 1987). This is wrong, as shown by the example above. The above example also shows that if a clause C implies a clause D then C does not necessarily θ-subsume D. It is well known that implication is a strictly weaker relation between clauses than θ-subsumption.

Proposition 3 Let C and D be two clauses. If $C \succeq D$ then $C \Rightarrow D$.

Proposition 3 has been proved by Idestam-Almquist (1993a, page 21). Unfortunately, implication between clauses is problematic since it is undecidable, which has been proved by Schmidt-Schauss (1988, page 294).

Theorem 4 (Undecidability of implication between clauses) Let C and D be clauses. Then there exists no procedure to decide if $C \Rightarrow D$.

Niblett (1988) has claimed that implication between Horn clauses is decidable. This result has later been proved to be false (Marcinkowski & Pacholski, 1992).

The definition of a least general generalization under implication (LGGI) follows the definition of an LGGθ. The clause E is an LGGθ of $\{C, D\}$, and F is an LGGI of $\{C, D\}$. The LGGθ (clause E) is strictly more general than the LGGI (clause F), both under implication and under θ-subsumption, since $E \Rightarrow F$ but $F \not\Rightarrow E$, and $E \succeq F$ but $F \not\succeq E$.

Whether there exists an LGGI of every finite set of clauses is still an open problem. However, since implication between clauses is undecidable, it is clear that in general an LGGI is not computable."
" }, { "figure_ref": [], "heading": "T-implication", "publication_ref": [], "table_ref": [], "text": "Because implication between clauses is undecidable, we here introduce a stronger form of implication called T-implication, which is decidable between clauses. It is called T-implication since it is defined w.r.t. a finite set of ground terms T.
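Plotkin's algorithm mentioned above can be sketched compactly. The sketch below anti-unifies two clauses: terms are either strings (constants and variables) or tuples (functor, arguments...), and literals are (sign, predicate, argument-tuple) triples. This representation and the names are our own illustration, not the paper's notation; on the example clauses C and D above it returns (a variant of) the clause F.

```python
from itertools import product

def lgg_term(t1, t2, table):
    """Anti-unification of two terms under a shared pair-to-variable table."""
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg_term(a, b, table)
                                for a, b in zip(t1[1:], t2[1:]))
    # The same pair of differing subterms is always mapped to the same fresh
    # variable; this sharing is what makes the generalization *least* general.
    if (t1, t2) not in table:
        table[(t1, t2)] = "V%d" % len(table)
    return table[(t1, t2)]

def lgg_clause(c1, c2):
    """LGG under theta-subsumption of two clauses (sets of literals)."""
    table, out = {}, set()
    for (s1, p1, a1), (s2, p2, a2) in product(c1, c2):
        if s1 == s2 and p1 == p2 and len(a1) == len(a2):
            out.add((s1, p1, tuple(lgg_term(x, y, table)
                                   for x, y in zip(a1, a2))))
    return out
```

For instance, anti-unifying the representations of $C = (p(a) \leftarrow q(a), q(b))$ and $D = (p(b) \leftarrow q(b), q(x))$ yields $p(V_0) \leftarrow q(V_0), q(b), q(V_1), q(V_2)$, i.e. the clause F above up to renaming; the quadratic pairing of literals is also where the exponential blow-up on larger clause sets originates.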
In our presentation we use the notions of instance set of clauses, Skolem substitution, and term set of sets of clauses.

Definition. Let C be a clause, $\{x_1, \ldots, x_n\}$ the set of variables in C, and T a set of terms. Then the instance set of C w.r.t. T is $I(C, T) = \{C\sigma \mid \sigma = \{x_1/t_1, \ldots, x_n/t_n\}$, where $\{t_1, \ldots, t_n\} \subseteq T\}$.

Definition. Let $\sigma$ be a substitution, C a clause, $\{x_1, \ldots, x_n\}$ the set of variables occurring in C, S a set of clauses, and F the set of function symbols occurring in $S \cup \{C\}$. Then $\sigma$ is a Skolem substitution for C w.r.t. S if and only if $\sigma = \{x_1/a_1, \ldots, x_n/a_n\}$, where $a_1, \ldots, a_n$ are distinct constants and $F \cap \{a_1, \ldots, a_n\} = \emptyset$.

Definition. Let $\{D_1, \ldots, D_n\}$ be a set of clauses such that $D_1, \ldots, D_n$ have no variables in common, S a set of clauses, $\sigma$ a substitution, and T a set of terms. Then T is a term set of $\{D_1, \ldots, D_n\}$ by $\sigma$ w.r.t. S if and only if: a) $\sigma$ is a Skolem substitution for $\{D_1, \ldots, D_n\}$ w.r.t. S, and b) T is finite and includes all terms and subterms occurring in $\{D_1, \ldots, D_n\}\sigma$. If T is equal to the set of terms and subterms occurring in $\{D_1, \ldots, D_n\}\sigma$, then T is a minimal term set of $\{D_1, \ldots, D_n\}$ by $\sigma$ w.r.t. S.

Like implication, T-implication is reflexive, but unlike implication, T-implication is not transitive (Idestam-Almquist, 1993a). The relationship between implication and T-implication, described in Corollary 6 below, follows from Herbrand's theorem. For a proof of Herbrand's theorem the reader is referred to the book by Chang and Lee (1973, page 61). In our proof of Corollary 6 we use the notion of the complement of a clause. The clause E is an LGGθ of $\{C, D\}$, and F is both an LGGI and an LGGT of $\{C, D\}$. The LGGT is strictly more specific than the LGGθ, since $E \Rightarrow F$ and $F \not\Rightarrow E$."
" }, { "figure_ref": [], "heading": "Definition", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Definition Let C = (", "publication_ref": [], "table_ref": [], "text": "Below we prove that there exists an LGGT of every finite set of clauses. In fact we prove something stronger, namely that there exists what we call a complete LGGT of every finite set of non-tautological clauses. Note that a complete LGGT is θ-subsumed by any other generalization under T-implication.

Lemma 10 Let S be a finite set of non-tautological clauses, $T = \{t_1, \ldots, t_m\}$ a term set of S, $V = \{x_1, \ldots, x_m\}$ a set of variables, and $G = \{C_1, C_2, \ldots\}$ the (possibly infinite) set of all generalizations under T-implication of S w.r.t. T. Then the set $G' = I(C_1, V) \cup I(C_2, V) \cup \ldots$ is a finite set of clauses."
" }, { "figure_ref": [], "heading": "Definition", "publication_ref": [], "table_ref": [], "text": "Proof: Let d be the maximal depth of a clause in S, and $F_S$ and $F_G$ the sets of predicate and function symbols occurring in the clauses in S and G respectively. Then $F_G \cup V$ is the set of variables, predicate and function symbols occurring in the clauses in $G'$. By Corollary 6, G is a set of generalizations under implication of S. Then, by Proposition 9 and the definition of θ-subsumption, $F_G \subseteq F_S$, and the maximal depth of a clause in G is at most d. Hence $F_G \cup V$ is finite and the maximal depth of a clause in $G'$ is at most d, and consequently $G'$ is a finite set of clauses. □

Lemma 11 Let C be a clause, S a set of clauses, $V = \{x_1, \ldots, x_m\}$ a set of variables, and $T = \{t_1, \ldots, t_m\}$ a term set of S by $\sigma$ w.r.t. $\{C\}$, such that C is a generalization under T-implication of S w.r.t. T.
Then there exists an LGGθ E of $I(C, V)$ such that E is a generalization under T-implication of S w.r.t. T.

Proof: Let $I(C, V) = \{C\nu_1, \ldots, C\nu_k\}$, where $\nu_1, \ldots, \nu_k$ are variable-pure substitutions. For every LGGθ F of $I(C, V)$ and every $1 \le i \le k$, we have $C \succeq F$ and $F \succeq C\nu_i$. Then there exists an LGGθ E of $I(C, V)$ and variable-pure substitutions $\delta_1, \ldots, \delta_k$ such that, for every $1 \le i \le k$, $E\delta_i \subseteq C\nu_i$. Let $\theta = \{x_1/t_1, \ldots, x_m/t_m\}$; then $I(C, T) = \{C\nu_1\theta, \ldots, C\nu_k\theta\}$. Since C is a generalization under T-implication of S w.r.t. T, we have $\{C\nu_1\theta, \ldots, C\nu_k\theta\} \models S\sigma$. For every $1 \le i \le k$, $E\delta_i\theta \subseteq C\nu_i\theta$, that is, $E\delta_i\theta \succeq C\nu_i\theta$. Then, for every $1 \le i \le k$, by Proposition 3, $E\delta_i\theta \Rightarrow C\nu_i\theta$, and thus $\{E\delta_1\theta, \ldots, E\delta_k\theta\} \models \{C\nu_1\theta, \ldots, C\nu_k\theta\}$. Since $\delta_1, \ldots, \delta_k$ are variable-pure substitutions, we have $\{E\delta_1\theta, \ldots, E\delta_k\theta\} \subseteq I(E, T)$. Thus $I(E, T) \models \{C\nu_1\theta, \ldots, C\nu_k\theta\}$, and hence $I(E, T) \models S\sigma$. Consequently E is a generalization under T-implication of S w.r.t. T. □

Theorem 13 (Existence of complete LGGTs) Let S be a finite set of non-tautological clauses, and T a term set of S. Then there exists a complete LGGT of S w.r.t. T.

Proof: Let $T = \{t_1, \ldots, t_m\}$, let $V = \{x_1, \ldots, x_m\}$ be a set of variables, and let $G = \{C_1, C_2, \ldots\}$ be the (possibly infinite) set of all generalizations under T-implication of S w.r.t. T. By Lemma 10, the set $G' = I(C_1, V) \cup I(C_2, V) \cup \ldots$ is a finite set of clauses. Since $G'$ is finite, the set $\{I(C_1, V), I(C_2, V), \ldots\}$ is also finite. For every $i \ge 1$, by Lemma 11, there exists an LGGθ $E_i$ of $I(C_i, V)$ such that $E_i$ is a generalization under T-implication of S w.r.t. T. Then rename the variables in $E_1, E_2, \ldots$ such that, for every $k \ge 1$ and $p \ge 1$, $E_k = E_p$ whenever $I(C_k, V) = I(C_p, V)$, and otherwise $E_k$ and $E_p$ have no variables in common. Then the set $\{E_1, E_2, \ldots\}$ is finite, since $\{I(C_1, V), I(C_2, V), \ldots\}$ is finite. Let $F = E_1 \cup E_2 \cup \ldots$, which consequently is a clause. For every $i \ge 1$, by the definition of an LGGθ, $C_i \succeq E_i$, and thus $C_i \succeq F$. Then, for every $i \ge 1$, by Proposition 7, $C_i \Rightarrow_T F$. As shown above, for every $i \ge 1$, $E_i$ is a generalization under T-implication of S w.r.t. T. Then, by Lemma 12, F is a generalization under T-implication of S w.r.t. T. Consequently, F is a complete LGGT of S w.r.t. T. □

Theorem 14 (Existence of LGGTs) Let S be a finite set of clauses, and T a term set of S. Then there exists an LGGT of S w.r.t. T.

Proof: Let D be a tautology and $\sigma$ a Skolem substitution for D. Then $\models D\sigma$, and thus for every clause C, $C \Rightarrow_T D$. If every clause in S is a tautology, then every clause is a generalization under T-implication of S w.r.t. T, and every tautology is an LGGT of S w.r.t. T. Otherwise, let $S'$ be the set of clauses obtained from S by removing all tautologies. Since every clause T-implies a tautology, every generalization under T-implication of S w.r.t. T is also a generalization under T-implication of $S'$ w.r.t. T, and vice versa. By Theorem 13, there exists an LGGT of $S'$ w.r.t. T. Consequently there exists an LGGT of S. □"
" }, { "figure_ref": [], "heading": "Reduction of Implication to θ-subsumption", "publication_ref": [], "table_ref": [], "text": "There are generalizations under implication that are not generalizations under θ-subsumption. Our main idea to find all generalizations under implication is to reduce implication to θ-subsumption, which can be achieved by inverting self-resolution. In this section we will describe a technique for inverting resolution based on or-introduction of literals.
We will also introduce the notion of expansion of clauses, which summarizes our idea of reduction of implication to θ-subsumption."
" }, { "figure_ref": [], "heading": "Difference between θ-subsumption and Implication", "publication_ref": [ "b6", "b30", "b25", "b15", "b44" ], "table_ref": [], "text": "In section 2.2, we showed that $C \Rightarrow D$ follows from $C \succeq D$, but not the converse. Hence, there are generalizations under implication that are not generalizations under θ-subsumption. It follows from a result by Gottlob (1987) that the difference between θ-subsumption and implication only concerns ambivalent clauses, as defined below."
" }, { "figure_ref": [], "heading": "Definition", "publication_ref": [], "table_ref": [], "text": "Proposition 15 has been proved by Gottlob (1987, page 110). It follows from this proposition that an LGGθ and an LGGI of a set of clauses, including at least one non-ambivalent clause, are equivalent. Muggleton (1992) has investigated the relationship between resolution and implication between clauses. He describes the subsumption theorem (Lee, 1967) in terms of input resolution, and gives a corollary about the relationship between θ-subsumption and implication between clauses. Unfortunately, this formulation of the subsumption theorem, which has later also been used by Idestam-Almquist (1993c, 1993a), has been shown to be wrong. Nienhuys-Cheng and de Wolf (1995) have given a counter-example which shows that the subsumption theorem for input resolution does not even hold in the special case where the considered set of clauses contains only one clause. Below we give the correct formulation of the subsumption theorem, which is based on nth resolution (Robinson, 1965).

Definition. A substitution $\theta$ is a unifier for a finite set of literals S if and only if $S\theta$ is a singleton. A unifier $\theta$ for S is a most general unifier (mgu) for S if and only if for each unifier $\sigma$ of S there exists a substitution $\gamma$ such that $\sigma = \theta\gamma$.

Definition. Let C be a clause, $\mu \subseteq C$, and $\theta$ an mgu for $\mu$. Then $C\theta$ is a factor of C.

Definition. Let T be a set of clauses. Then the nth resolution of T, denoted $R^n(T)$, is defined as: a) $R^0(T) = T$, and b) $R^n(T) = R^{n-1}(T) \cup \{R \mid C, D \in R^{n-1}(T)$ and R is a resolvent of C and $D\}$ if $n > 0$.

Theorem 16 (Subsumption theorem) Let T be a set of clauses and C a non-tautological clause. Then $T \models C$ if and only if there exists a clause $D \in R^n(T)$ such that $D \succeq C$ for some $n \ge 0$.

Two different recent proofs of Theorem 16 have been presented, one by Nienhuys-Cheng and de Wolf (1995), and one by Bain and Muggleton (1992). There also exist at least two earlier proofs of this theorem in the literature, one by Slagle, Chang and Lee (1969), and one by Kowalski (1970). We are interested in the number of resolutions involved in the computation of a clause, and therefore we introduce the notion of nth resolution layer. A clause in the nth resolution layer has been obtained from the original set of clauses by $n - 1$ resolutions.

Definition. Let T be a set of clauses. Then the nth resolution layer of T, denoted $L^n(T)$, is defined as: a) $L^1(T) = T$, and b) $L^n(T) = \{R \mid R$ is a resolvent of $C \in L^m(T)$ and $D \in L^p(T)$, where $m + p = n$, $m \ge 1$ and $p \ge 1\}$ if $n > 1$.

Corollary 17 (Implication between clauses using resolution) Let C be a clause and D a non-tautological clause. Then $C \Rightarrow D$ if and only if there exists a clause $E \in L^n(\{C\})$ such that $E \succeq D$ for some $n \ge 1$.

Corollary 17 follows from Theorem 16 and the observation that, for every $n \ge 1$, if a clause $C \in L^n(T)$ then also $C \in R^n(T)$.
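The θ-subsumption test that Corollary 17 pairs with self-resolution is decidable (Theorem 1), although NP-complete in general. A naive backtracking decision procedure, in the same illustrative clause representation as the previous sketch (our own convention, not the paper's), looks as follows.

```python
def theta_subsumes(c, d):
    """Decide whether C theta-subsumes D, i.e. C under some theta is a subset
    of D.  Variables are strings starting with an uppercase letter."""
    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def match_term(s, t, theta):
        """Extend theta so that s instantiated by theta equals t, else None."""
        if is_var(s):
            if s in theta:
                return theta if theta[s] == t else None
            ext = dict(theta)
            ext[s] = t
            return ext
        if (isinstance(s, tuple) and isinstance(t, tuple)
                and s[0] == t[0] and len(s) == len(t)):
            for a, b in zip(s[1:], t[1:]):
                theta = match_term(a, b, theta)
                if theta is None:
                    return None
            return theta
        return theta if s == t else None

    def search(lits, theta):
        if not lits:
            return True
        (sign, pred, args), rest = lits[0], lits[1:]
        for (sg, pr, ag) in d:              # try to map the literal into D
            if sg == sign and pr == pred and len(ag) == len(args):
                ext = theta
                for a, b in zip(args, ag):
                    ext = match_term(a, b, ext)
                    if ext is None:
                        break
                else:
                    if search(rest, ext):   # backtrack on failure
                        return True
        return False

    return search(list(c), {})
```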
This corollary tells us that implication between clauses is equivalent to a combination of self-resolution and θ-subsumption. Muggleton (1992) has introduced the notions of powers and roots of clauses for specializations and generalizations of clauses in which the clauses are resolved with themselves. Below we present definitions of these and related concepts, modified w.r.t. the correct formulation of the subsumption theorem.

Definition. A clause D is an nth power of a clause C if and only if D is a variant of a clause in $L^n(\{C\})$ ($n \ge 1$). We also say that C is an nth root of D. A clause D is an indirect nth power of a clause C if and only if there exists a clause E such that $E \succeq D$ and E is an nth power of C. We also say that C is an indirect nth root of D. Let C be a clause and D an indirect nth power of C. Then D is a proper indirect nth power of C if and only if $C \not\succeq D$. We also say that C is a proper indirect nth root of D. To say that a clause implies another non-tautological clause, or to say that the clause is an indirect root of the other clause, is equivalent. However, to say that a clause is an indirect nth root for some specified n is more informative.

Implication between clauses can thus be described as a combination of self-resolution and θ-subsumption. Plotkin's algorithm to compute LGGθs gives us a suitable tool for finding generalizations under θ-subsumption. Hence, to be able to find generalizations under implication, we also need a technique to invert resolution."
" }, { "figure_ref": [], "heading": "Inverting One Resolution by Or-introduction", "publication_ref": [ "b33", "b46", "b50", "b28", "b9", "b11", "b45" ], "table_ref": [], "text": "Other work on inverting resolution has primarily considered the problem of constructing one parent clause given the resolvent and the other parent clause (Muggleton & Buntine, 1988; Rouveirol & Puget, 1989; Wirth, 1989; Muggleton, 1990; Hume & Sammut, 1991; Idestam-Almquist, 1992; Rouveirol, 1992). Below we will describe how or-introduction can be used to construct two parent clauses from the resolvent alone. Let C and D be clauses, and let the following clause R be a resolvent of C and D:

$$R = ((C\gamma - \{A\}) \cup (D\delta - \{B\}))\theta$$

where $C\gamma$ is a factor of C, $D\delta$ is a factor of D, $A \in C\gamma$, $B \in D\delta$, and $\theta$ is an mgu for $\{A, \overline{B}\}$. We seek parent clauses of R that are minimally general. Then we should let $\gamma$, $\delta$ and $\theta$ be empty substitutions, which corresponds to the assumption that no instantiation of variables has been done in the resolution of C and D; thus we have $A = \overline{B}$. We should also let $C - \{A\} = D - \{B\}$, which corresponds to the assumption that each literal in R is inherited both from C and D. Then we have

$$C = R \cup \{A\} \quad \text{and} \quad D = R \cup \{\overline{A}\}$$

where A could be any literal, and we say that C and D are obtained from R by or-introduction of the literal A.

Theorem 22 (Inverting resolution using or-introduction) Let T be a set of clauses and D a clause in $L^n(T)$. Then there exists a set of clauses S or-introduced from D such that for each $E \in S$ there exists a clause $C \in T$ such that $C \succeq E$.

Proof: The proof is by complete mathematical induction on n. It should be noted that D and S, in the statement of the theorem, are indexed by n in the proof.

Base step (n = 1): By the definition of nth resolution layer, $L^1(T) = T$, and thus $D_1 \in T$. We have that $S_1 = \{D_1\}$ is or-introduced from $D_1$ by the empty sequence of literals $\lambda_1 = [\,]$.
Hence, for $D_1 \in S_1$ there exists a clause $D_1 \in T$ such that $D_1 \succeq D_1$.

Induction hypothesis (n = k): For every $1 \le i \le k$, there exists a set of clauses $S_i$ or-introduced from $D_i$ by some sequence of literals $\lambda_i = [L_1, \ldots, L_{i-1}]$ such that for each $E \in S_i$ there exists a clause $C \in T$ such that $C \succeq E$.

Induction step (n = k + 1): By the definition of nth resolution layer, $D_{k+1}$ is a resolvent of some clauses $D_m \in L^m(T)$ and $D_p \in L^p(T)$ such that $m + p = k + 1$, $1 \le m \le k$ and $1 \le p \le k$. Then by Proposition 19, there exists a literal L such that $D_m \succeq D_{k+1} \cup \{L\}$ and $D_p \succeq D_{k+1} \cup \{\overline{L}\}$.

By the induction hypothesis, there exists a set of clauses $S_m$ or-introduced from $D_m$ by some sequence of literals $\lambda_m = [A_1, \ldots, A_{m-1}]$ such that for each $E \in S_m$ there exists a clause $C \in T$ such that $C \succeq E$. Then by Lemma 21, there exists a set of clauses $S'_m$ or-introduced from $D_{k+1} \cup \{L\}$ by some sequence of literals $\lambda'_m = [A'_1, \ldots, A'_{m-1}]$ such that for each $E' \in S'_m$ there exists a clause $C \in T$ such that $C \succeq E'$.

By the induction hypothesis, there also exists a set of clauses $S_p$ or-introduced from $D_p$ by some sequence of literals $\lambda_p = [B_1, \ldots, B_{p-1}]$ such that for each $E \in S_p$ there exists a clause $C \in T$ such that $C \succeq E$. Then by Lemma 21, there exists a set of clauses $S'_p$ or-introduced from $D_{k+1} \cup \{\overline{L}\}$ by some sequence of literals $\lambda'_p = [B'_1, \ldots, B'_{p-1}]$ such that for each $E' \in S'_p$ there exists a clause $C \in T$ such that $C \succeq E'$.

Then it follows from the definition of or-introduction that $S_{k+1} = S'_m \cup S'_p$ is a set of clauses or-introduced from $D_{k+1}$ by $\lambda_{k+1} = [L, A'_1, \ldots, A'_{m-1}, B'_1, \ldots, B'_{p-1}]$. Consequently, there exists a set of clauses $S_{k+1}$ or-introduced from $D_{k+1}$ such that for each $E \in S_{k+1}$ there exists a clause $C \in T$ such that $C \succeq E$. □"
" }, { "figure_ref": [], "heading": "Expansion of Clauses", "publication_ref": [ "b15" ], "table_ref": [], "text": "In section 3.3 it was described how a reduction of generalization can be achieved by replacing a clause by a set of clauses. Here we show how this set of clauses can equivalently be described by a single clause, which we call an expansion of the original clause. By definition, if a clause C θ-subsumes every clause in a set of clauses S, then C will also θ-subsume an LGGθ of S. This leads us to our definition of expansion of clauses. The idea of expansion of clauses was first presented by Idestam-Almquist (1993c). The set of clauses $\{D_1, D_2, D_3\}$ is or-introduced from the clause D by $[p(f^2(a)), p(f(a))]$, and E is an LGGθ of $\{D_1, D_2, D_3\}$. Consequently, E is an expansion of D by $[p(f^2(a)), p(f(a))]$."
" }, { "figure_ref": [], "heading": "Definition", "publication_ref": [], "table_ref": [], "text": "Note that implication has been reduced to θ-subsumption in the example above.

Proof: By the definition of expansion, we know that there exists a set of clauses S or-introduced from D by $\lambda$ such that E is an LGGθ of S. By Theorem 20, we have $\{D\} \equiv S$. By the definition of an LGGθ, $E \succeq F$ for each $F \in S$. Then by Proposition 3, $E \Rightarrow F$ for each $F \in S$. Thus $\{E\} \models S$, and consequently $E \Rightarrow D$. We also have $D \succeq F$ for each $F \in S$. Then by the definition of an LGGθ, we have $D \succeq E$, and by Proposition 3, $D \Rightarrow E$. Consequently, $E \Leftrightarrow D$. □

Below we prove that for every generalization under implication of a clause there exists an expansion of the clause such that the generalization under implication is reduced to a generalization under θ-subsumption.

Theorem 24 (Reduction of implication to θ-subsumption using expansion) Let C be a clause and D a non-tautological clause such that $C \Rightarrow D$.
Then there exists an expansion E of D such that $C \succeq E$.

Proof: By Corollary 17, there exists a clause $D' \in L^n(\{C\})$ such that $D' \succeq D$ for some $n \ge 1$. By Theorem 22, there exists a set of clauses $S'$ or-introduced from $D'$ such that for each $F' \in S'$ we have $C \succeq F'$. Then it follows from Lemma 21 that there exists a set of clauses S or-introduced from D such that for each $F \in S$ we have $C \succeq F$. Then let E be an LGGθ of S, and thus an expansion of D, and we have $C \succeq E$ by the definition of an LGGθ. □"
" }, { "figure_ref": [], "heading": "Complete Expansion", "publication_ref": [], "table_ref": [], "text": "Generalizations under implication of a clause can be reduced to generalizations under θ-subsumption of an expansion of the clause. We are particularly interested in expansions of clauses such that every generalization under implication is reduced to a generalization under θ-subsumption of that particular expansion.

Definition. Let D be a clause, and E an expansion of D. Then E is a complete expansion of D if and only if, for every clause C, $C \succeq E$ whenever $C \Rightarrow D$.

Recall that an expansion of a clause is a clause, and thus finite. Muggleton and Page (1994, page 166) have shown that complete expansions, which they call finite self-saturations, do not exist for all clauses.

Theorem 25 (Non-existence of complete expansions) There exist non-tautological clauses for which there exist no complete expansions.

The non-existence of complete expansions is due to the fact that for some clauses there are infinitely many distinct generalizations under implication. Because of this we turn to the problem of reducing every generalization under T-implication to a generalization under θ-subsumption of a single expansion. If we can compute T-complete expansions of a set of clauses, then we can use Plotkin's algorithm for computing an LGGθ to compute an LGGT. From the proof of Theorem 26 it follows that the candidate set of literals to be used to compute a T-complete expansion is finite. Since expansion is equivalence preserving, we could simply test all different ways of expanding a clause by sequences of literals from this candidate set, and in this way obtain a T-complete expansion. This is of course an extremely complex process, but at least theoretically, T-complete expansions and LGGTs are computable."
" }, { "figure_ref": [], "heading": "Definition", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Concluding Remarks", "publication_ref": [ "b40", "b36", "b17", "b36", "b13", "b32", "b20", "b0", "b16", "b4", "b18", "b7" ], "table_ref": [], "text": "We have studied the problem of generalization of clauses. In section 2, we described the framework for generalization of clauses developed by Plotkin (1970, 1971b, 1971a), which is based on θ-subsumption. Implication is the most natural basis for inductive generalization. In section 2, we therefore also studied the theory of generalization under implication. The contents of section 2 can be summarized as follows:

1. It is decidable whether a clause θ-subsumes another clause.
2. There exists a least general generalization under θ-subsumption (LGGθ) of every finite set of clauses.
3. It is undecidable whether a clause implies another clause.
4. It is an open problem whether there exists a least general generalization under implication (LGGI) of every finite set of clauses.
5.
5. T-implication is a strictly stronger relation between clauses than implication, and strictly weaker than θ-subsumption, and T-implication can become an arbitrarily good approximation of implication by extending the considered term set.
6. It is decidable whether a clause T-implies another clause.
7. There exists a least general generalization under T-implication (LGGT) of every finite set of clauses.

In Section 3, we studied the difference between θ-subsumption and implication on clauses. We presented our approach to find all generalizations under implication, by reducing implication to θ-subsumption. This can be achieved by inverting self-resolution, and we described a technique for inverting resolution based on or-introduction of literals. We also described expansion of clauses, which summarizes our idea of reduction of implication to θ-subsumption. The contents of Section 3 can be summarized as follows:

1. An expansion of a clause is an LGGθ of a set of clauses obtained by or-introduction from the clause.
2. For every generalization under implication of a clause there exists an expansion of the clause, logically equivalent to the clause, such that the generalization under implication is reduced to a generalization under θ-subsumption.
3. There exist non-tautological clauses for which there exist no complete expansions, which means that there are no expansions of the clauses such that every generalization under implication is reduced to a generalization under θ-subsumption of the expansions.
4. For each non-tautological clause there exists a T-complete expansion, which means that every generalization under T-implication of the clause is reduced to a generalization under θ-subsumption of the expansion.

As noted in Section 3.5, T-complete expansions and LGGTs are computable, but such a computation is extremely costly. This is not surprising, since our framework for generalization under implication is based on and extends Plotkin's framework for generalization under θ-subsumption, which already suffers from complexity problems. In general an LGGθ of a set of clauses may grow exponentially in the number of clauses in the set (Muggleton & Feng, 1990). Even an LGGθ reduced under θ-subsumption, which means that all literals that are redundant under θ-subsumption are removed, may grow exponentially in the number of clauses (Kietz, 1993). Since an expansion of a clause is an LGGθ of a set of or-introduced clauses, the computational cost of an expansion grows exponentially in the number of literals used in the or-introduction. In the computation of a T-complete expansion a large number of literals may be considered, and consequently such a computation would be extremely costly.
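The core of Plotkin's LGGθ algorithm, whose output size drives the complexity results just cited, can be sketched compactly. The following Python sketch anti-unifies terms pairwise, reusing one fresh variable per pair of distinct subterms; the term encoding and function names are our own illustrative choices.

```python
# A sketch of Plotkin's least general generalization under theta-subsumption.
# Terms: a variable is ('var', name); a constant/function is (functor, args...).

def lgg_term(t1, t2, table):
    """Anti-unify two terms; each pair of distinct terms maps to one fresh
    variable, looked up in `table` so repeated pairs share a variable."""
    if t1 == t2:
        return t1
    if t1[0] == t2[0] and t1[0] != 'var' and len(t1) == len(t2):
        return (t1[0],) + tuple(lgg_term(a, b, table) for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in table:
        table[(t1, t2)] = ('var', 'V%d' % len(table))
    return table[(t1, t2)]

def lgg_clause(c1, c2):
    """LGG of two clauses: generalizations of all compatible literal pairs,
    sharing one variable table across the whole clause."""
    table, out = {}, set()
    for (sign1, pred1, args1) in c1:
        for (sign2, pred2, args2) in c2:
            if sign1 == sign2 and pred1 == pred2 and len(args1) == len(args2):
                gargs = tuple(lgg_term(a, b, table) for a, b in zip(args1, args2))
                out.add((sign1, pred1, gargs))
    return out

a, b = ('a',), ('b',)
f = lambda t: ('f', t)
C1 = {('+', 'p', (f(a),)), ('-', 'p', (a,))}   # p(f(a)) <- p(a)
C2 = {('+', 'p', (f(b),)), ('-', 'p', (b,))}   # p(f(b)) <- p(b)
print(lgg_clause(C1, C2))                      # p(f(V0)) <- p(V0)
```

Because the result contains one literal per compatible pair of input literals, iterating this over a set of clauses can blow up exponentially, which is exactly the cost discussed above.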
However, although Plotkin's framework for generalization under θ-subsumption is computationally expensive, it has been widely used as a theoretical framework. To make it practical, a number of different restrictions on the clausal language have been considered, for example ij-determinacy (Muggleton & Feng, 1990). In a similar way we hope to find restrictions under which the framework for generalization under implication presented here can be practically useful. Idestam-Almquist (1993b, 1993a) has described a technique to efficiently compute a restricted form of generalizations under implication. Recently, Muggleton has presented another approach based on generating a number of clauses, so-called sub-saturants, which are candidates for being indirect roots, and then testing whether they are so or not (Muggleton, 1995). This approach might be a way to compute some generalizations under implication more efficiently. Some approaches to learning recursive definitions (recursive logic programs) by generalization under implication have been presented (Lapointe & Matwin, 1992; Aha, Lapointe, Ling, & Matwin, 1994; Idestam-Almquist, 1995). These approaches are based on structural analysis of the given examples, but can theoretically be described in our framework.

A study by Cohen (1995a, 1995b) of the learnability of recursive logic programs has previously been presented in this journal. In this study it was shown that a recursive logic program consisting of one constant-depth determinate closed k-ary recursive clause and one constant-depth determinate non-recursive clause is PAC-learnable given an additional "base-case oracle", which determines if a positive example is covered by the non-recursive base clause of the target program alone. It was also shown that generalizing this class of learning problems in any natural way leads to a computationally difficult problem. This result tells us that to efficiently learn more complex recursive hypotheses some extra information, such as rule models (Kietz & Wrobel, 1992) or program recursion schemes (Hamfelt & Nilsson, 1994), is needed.

The contributions of this paper are threefold. First, we have systematically reviewed and discussed the concepts relevant to generalization in a first-order setting. Second, we have introduced T-implication, a stronger form of implication which is decidable between clauses. Third, we have further developed previous work of the author (Idestam-Almquist, 1993c) on extending Plotkin's framework for generalization under θ-subsumption to generalization under implication.

Acknowledgements

The author wishes to thank Torkel Franzén for invaluable help concerning the work on T-implication. The author also wishes to thank Shan-Hwei Nienhuys-Cheng, Ronald de Wolf and the anonymous reviewers for a number of thoughtful comments and suggestions for improvements.

This work has been supported by the Swedish Research Council for Engineering Sciences (TFR) and the European Community ESPRIT BRA 6020 Inductive Logic Programming.

The above technique to find parent clauses can be used to reduce implication to θ-subsumption. This is of interest for ambivalent clauses, such as R in the example above, for which there are proper indirect roots. For example the clause $G = (p(f(x)) \leftarrow p(x))$ is a proper indirect second root of R, and it θ-subsumes both C and D.

Proposition 18 shows that the set of two clauses obtained by our technique for inverting one resolution is logically equivalent to the original clause. In Proposition 19 it is shown that or-introduction of one literal is a general technique for inverting one resolution.

Proposition 18: Let R be a clause and L a literal. Then $\{R\} \Leftrightarrow \{R \cup \{L\},\; R \cup \{\neg L\}\}$.

Proof: Since $R \subseteq R \cup \{L\}$ and $R \subseteq R \cup \{\neg L\}$, we have $R \succeq R \cup \{L\}$ and $R \succeq R \cup \{\neg L\}$. Then by Proposition 3, $R \Rightarrow R \cup \{L\}$ and $R \Rightarrow R \cup \{\neg L\}$. Thus $\{R\} \models \{R \cup \{L\}, R \cup \{\neg L\}\}$. The clause R is a resolvent of $R \cup \{L\}$ and $R \cup \{\neg L\}$. Then by soundness of resolution, $\{R \cup \{L\}, R \cup \{\neg L\}\} \models \{R\}$. □
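Proposition 18 is easy to check mechanically in the ground case: resolving the two or-introduced clauses on the introduced literal gives back exactly the original clause. A small Python sketch, using the same illustrative string encoding as before:

```python
# Clauses as frozensets of string literals; '~' marks negation.

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve_ground(c1, c2, lit):
    """Ground resolvent of c1 and c2 upon lit (lit in c1, ~lit in c2)."""
    assert lit in c1 and negate(lit) in c2
    return (c1 - {lit}) | (c2 - {negate(lit)})

R = frozenset({'p(f(a))', '~p(a)'})
L = 'q(b)'
left, right = R | {L}, R | {negate(L)}
assert resolve_ground(left, right, L) == R   # one resolution inverts the or-introduction
print(sorted(left), sorted(right))
```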
Inverting Multiple Resolutions by Or-introduction

The technique for inverting one resolution can be generalized to a technique for inverting a sequence of resolutions as follows. The set

$\{R \cup \{L_1\},\; R \cup \{\neg L_1\}\}$,

where R is a clause and $L_1$ is a literal, is a set of two clauses from which the clause R follows by one resolution. Similarly, the two sets

$\{R \cup \{L_1, L_2\},\; R \cup \{L_1, \neg L_2\},\; R \cup \{\neg L_1\}\}$ and $\{R \cup \{L_1\},\; R \cup \{\neg L_1, L_2\},\; R \cup \{\neg L_1, \neg L_2\}\}$,

where R is a clause and $L_1$ and $L_2$ are literals, are sets of three clauses from which R follows by two resolutions. In the same way, for given literals $L_1$, $L_2$ and $L_3$, we have six different sets of four clauses from which R follows by three resolutions, and so on. All these sets are or-introduced from the clause R.

Definition: A set of clauses S is or-introduced from a clause C by a sequence of literals $[L_1, \ldots, L_n]$ if and only if either $n = 0$ and $S = \{C\}$, or $S = (S' \setminus \{R\}) \cup \{R \cup \{L_n\}, R \cup \{\neg L_n\}\}$ for some set of clauses $S'$ or-introduced from C by $[L_1, \ldots, L_{n-1}]$ and some clause $R \in S'$.

Example: The set of clauses $\{D_1, D_2\}$ is or-introduced from C by $[\neg p(f^2(a))]$, and the set of clauses $\{E_1, E_2\}$ is or-introduced from $D_1$ by $[\neg p(f(a))]$. Consequently, the set of clauses $\{D_2, E_1, E_2\}$ is or-introduced from C by $[\neg p(f^2(a)), \neg p(f(a))]$.

In the example above, clause $D_1$ is a resolvent of $E_1$ and $E_2$, and C is a resolvent of $D_1$ and $D_2$. Consequently, C is derivable from $\{D_2, E_1, E_2\}$ by resolution. That a set of clauses or-introduced from a clause is logically equivalent to the clause is shown by the following theorem.

Theorem 20 (Equivalence preservation of or-introduction): Let S be a set of clauses or-introduced from a clause C by a sequence of literals $[L_1, \ldots, L_n]$. Then $S \Leftrightarrow \{C\}$.

Proof: The proof is by mathematical induction on n. It should be noted that S, in the statement of the theorem, is in the proof indexed by n.

Base step (n=0): $S_0$ is or-introduced from C by $[\,]$. Hence $S_0 = \{C\}$, and $S_0 \Leftrightarrow \{C\}$.

Induction step (n=k+1): By the definition of or-introduction, $S_{k+1} = (S_k \setminus \{R\}) \cup \{R \cup \{L_{k+1}\}, R \cup \{\neg L_{k+1}\}\}$ for some set $S_k$ or-introduced from C by $[L_1, \ldots, L_k]$ and some $R \in S_k$. By Proposition 18, $\{R\} \Leftrightarrow \{R \cup \{L_{k+1}\}, R \cup \{\neg L_{k+1}\}\}$, and hence $S_{k+1} \Leftrightarrow S_k$. By the induction hypothesis, $S_k \Leftrightarrow \{C\}$, and consequently $S_{k+1} \Leftrightarrow \{C\}$. □

In Section 3.2 we showed that it is possible to invert one resolution by or-introduction of one literal. Below we show that it is possible to invert a sequence of resolutions by or-introduction of a sequence of literals.

Lemma 21: Let D and E be clauses, $\{C_1, \ldots, C_n\}$ a set of clauses, and $\{D_1, \ldots, D_n\}$ a set of clauses or-introduced from D, such that $D \succeq E$ and, for every $1 \le i \le n$, $C_i \succeq D_i$. Then there exists a set of clauses $\{E_1, \ldots, E_n\}$ or-introduced from E, such that for every $1 \le i \le n$, $C_i \succeq E_i$.

Proof: Let $D_i$ be an arbitrary clause in $\{D_1, \ldots, D_n\}$. Then we have $D_i = D \cup \Gamma_i$ for some set of literals $\Gamma_i$, since $\{D_1, \ldots, D_n\}$ is or-introduced from D. Since $C_i \succeq D_i$, there exists a substitution $\theta_i$ such that $C_i\theta_i \subseteq D_i$. Since $D \succeq E$, there exists a substitution $\theta$ such that $D\theta \subseteq E$. Thus we have $(D \cup \Gamma_i)\theta \subseteq E \cup \Gamma_i\theta$, and consequently $C_i\theta_i\theta \subseteq E \cup \Gamma_i\theta$. Let $E_i = E \cup \Gamma_i\theta$ and we have $C_i \succeq E_i$. □
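Theorem 20 can be verified by brute force on the ground expansion example from Section 3.4, since logical equivalence of ground clause sets is a propositional question. A Python sketch, again with our own string encoding of literals:

```python
# A brute-force check of Theorem 20 on the expansion example: the set
# {D1, D2, D3} or-introduced from D is logically equivalent to {D}.
from itertools import product

D  = frozenset({'p(f3(a))', '~p(a)'})
D1 = frozenset({'p(f3(a))', '~p(f2(a))', '~p(a)'})
D2 = frozenset({'p(f3(a))', 'p(f2(a))', 'p(f(a))', '~p(a)'})
D3 = frozenset({'p(f3(a))', 'p(f2(a))', '~p(f(a))', '~p(a)'})

def atoms(clauses):
    return sorted({lit.lstrip('~') for c in clauses for lit in c})

def satisfies(model, clause):
    return any(model[lit.lstrip('~')] == (not lit.startswith('~')) for lit in clause)

def equivalent(s1, s2):
    """True iff the ground clause sets s1 and s2 have exactly the same models."""
    avs = atoms(s1 | s2)
    for bits in product([False, True], repeat=len(avs)):
        model = dict(zip(avs, bits))
        if all(satisfies(model, c) for c in s1) != all(satisfies(model, c) for c in s2):
            return False
    return True

print(equivalent({D1, D2, D3}, {D}))   # True, as Theorem 20 guarantees
```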
[ { "authors": "D W Aha; S Lapointe; C X Ling; S Matwin", "journal": "Springer-Verlag", "ref_id": "b0", "title": "Inverting implication with small training sets", "year": "1994" }, { "authors": "M Bain; S Muggleton", "journal": "Academic Press", "ref_id": "b1", "title": "Non-monotonic learning", "year": "1992" }, { "authors": "C Chang; R Lee", "journal": "Academic Press", "ref_id": "b2", "title": "Symbolic Logic and Mechanical Theorem Proving", "year": "1973" }, { "authors": "W Cohen", "journal": "Journal of Arti cial Intelligence Research", "ref_id": "b3", "title": "Pac-learning recursive logic programs: E cient algorithms", "year": "1995" }, { "authors": "W Cohen", "journal": "Journal of Arti cial Intelligence Research", "ref_id": "b4", "title": "Pac-learning recursive logic programs: Negative results", "year": "1995" }, { "authors": "J H Gallier", "journal": "Harper & Row Publishers", "ref_id": "b5", "title": "Logic for Computer Science -Foundations of Automatic Theorem Proving", "year": "1986" }, { "authors": "G Gottlob", "journal": "Information Processing Letters", "ref_id": "b6", "title": "Subsumption and implication", "year": "1987" }, { "authors": "A Hamfelt; J F Nilsson", "journal": "", "ref_id": "b7", "title": "Inductive metalogic programming", "year": "1994" }, { "authors": "N Helft", "journal": "Sigma Press", "ref_id": "b8", "title": "Inductive generalization: A logical framework", "year": "1987" }, { "authors": "D Hume; C Sammut", "journal": "", "ref_id": "b9", "title": "Using inverse resolution to learn relations from experiments", "year": "1991" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b10", "title": "", "year": "" }, { "authors": "P Idestam-Almquist", "journal": "Ohmsha Publishers", "ref_id": "b11", "title": "Learning missing clauses by inverse resolution", "year": "1992" }, { "authors": "P Idestam-Almquist", "journal": "", "ref_id": "b12", "title": "Generalization of Clauses", "year": "1993" }, { "authors": "P Idestam-Almquist", "journal": "", "ref_id": "b13", "title": "Generalization under implication by recursive antiuni cation", "year": "1993" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b14", "title": "", "year": "" }, { "authors": "P Idestam-Almquist", "journal": "Springer-Verlag", "ref_id": "b15", "title": "Generalization under implication by using or-introduction", "year": "1993" }, { "authors": "P Idestam-Almquist", "journal": "", "ref_id": "b16", "title": "E cient induction of recursive de nitions by structural analysis of saturations", "year": "1995" }, { "authors": "J.-U Kietz", "journal": "", "ref_id": "b17", "title": "A comparative study of structural most speci c generalizations used in machine learning", "year": "1993" }, { "authors": "J.-U Kietz; S Wrobel", "journal": "Academic Press", "ref_id": "b18", "title": "Controlling the complexity of learning in logic through syntactic and task-oriented models", "year": "1992" }, { "authors": "R Kowalski", "journal": "Springer-Verlag", "ref_id": "b19", "title": "The case for using equality axioms in automatic demonstration", "year": "1970" }, { "authors": "S Lapointe; S Matwin", "journal": "", "ref_id": "b20", "title": "Sub-uni cation: A tool for e cient induction of recursive programs", "year": "1992" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b21", "title": "", "year": "" }, { "authors": "N Lavra C; L De Raedt", "journal": "AI Communications", "ref_id": "b22", "title": "Inductive logic programming: A survey of European research", "year": "1995" }, { "authors": 
"N Lavra C; S ", "journal": "", "ref_id": "b23", "title": "Inductive Logic Programming: Techniques and Applications", "year": "1994" }, { "authors": "Ellis Horwood", "journal": "", "ref_id": "b24", "title": "", "year": "" }, { "authors": "C Lee", "journal": "", "ref_id": "b25", "title": "A Completeness Theorem and a Computer Program for Finding Theorems Derivable from Given Axioms", "year": "1967" }, { "authors": "J W Lloyd", "journal": "Springer-Verlag", "ref_id": "b26", "title": "Foundations of Logic Programming", "year": "1987" }, { "authors": "J Marcinkowski; L Pacholski", "journal": "", "ref_id": "b27", "title": "Undecidability of the horn clause implication problem", "year": "1992" }, { "authors": "S Muggleton", "journal": "Ohmsha Publishers", "ref_id": "b28", "title": "Inductive logic programming", "year": "1990" }, { "authors": "S Muggleton", "journal": "New Generation Computing Journal", "ref_id": "b29", "title": "Inductive logic programming", "year": "1991" }, { "authors": "S Muggleton", "journal": "", "ref_id": "b30", "title": "Inverting implication", "year": "1992" }, { "authors": "S Muggleton", "journal": "Springer-Verlag", "ref_id": "b31", "title": "Inductive logic programming: Derivations, successes and shortcomings", "year": "1993" }, { "authors": "S Muggleton", "journal": "New Generation Computing Journal", "ref_id": "b32", "title": "Inverse entailment and Progol", "year": "1995" }, { "authors": "S Muggleton; W Buntine", "journal": "", "ref_id": "b33", "title": "Machine invention of rst-order predicates by inverting resolution", "year": "1988" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b34", "title": "", "year": "" }, { "authors": "S Muggleton; L De Raedt", "journal": "Journal of Logic Programming", "ref_id": "b35", "title": "Inductive logic programming: Theory and methods", "year": "1994" }, { "authors": "S Muggleton; C Feng", "journal": "Ohmsha Publishers", "ref_id": "b36", "title": "E cient induction of logic programs", "year": "1990" }, { "authors": "S Muggleton; D Page", "journal": "", "ref_id": "b37", "title": "Self saturation of de nite clauses", "year": "1994" }, { "authors": "T Niblett", "journal": "Pitman", "ref_id": "b38", "title": "A study of generalization in logic programs", "year": "1988" }, { "authors": "S.-H Nienhuys-Cheng; R De Wolf", "journal": "", "ref_id": "b39", "title": "The subsumption theorem in inductive logic programming: Facts and fallacies", "year": "1995" }, { "authors": "G D Plotkin", "journal": "Edinburgh University Press", "ref_id": "b40", "title": "A note on inductive generalization", "year": "1970" }, { "authors": "G D Plotkin", "journal": "", "ref_id": "b41", "title": "Automatic Methods of Inductive Inference", "year": "1971" }, { "authors": "G D Plotkin", "journal": "Edinburgh University Press", "ref_id": "b42", "title": "A further note on inductive generalization", "year": "1971" }, { "authors": "J R Quinlan", "journal": "Machine Learning Journal", "ref_id": "b43", "title": "Induction of decision trees", "year": "1986" }, { "authors": "J A Robinson", "journal": "Journal of the ACM", "ref_id": "b44", "title": "A machine-oriented logic based on the resolution principle", "year": "1965" }, { "authors": "C Rouveirol", "journal": "Academic Press", "ref_id": "b45", "title": "Extensions of inversion of resolution applied to theory completion", "year": "1992" }, { "authors": "C Rouveirol; J.-F Puget", "journal": "", "ref_id": "b46", "title": "A simple solution for inverting resolution", "year": "1989" }, { "authors": "M 
Schmidt-Schauss", "journal": "Theoretical Computer Science", "ref_id": "b47", "title": "Implication of clauses is undecidable", "year": "1988" }, { "authors": "J R Slagle; C L Chang; R C T Lee", "journal": "", "ref_id": "b48", "title": "Completeness theorems for semantic resolution in consequence-nding", "year": "1969" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b49", "title": "", "year": "" }, { "authors": "R Wirth", "journal": "Pitman", "ref_id": "b50", "title": "Completing logic programs by inverse resolution", "year": "1989" } ]
Generalization of Clauses under Implication
In the area of inductive learning, generalization is a main operation, and the usual definition of induction is based on logical implication. Recently there has been a rising interest in clausal representation of knowledge in machine learning. Almost all inductive learning systems that perform generalization of clauses use the relation θ-subsumption instead of implication. The main reason is that there is a well-known and simple technique to compute least general generalizations under θ-subsumption, but not under implication. However, generalization under θ-subsumption is inappropriate for learning recursive clauses, which is a crucial problem since recursion is the basic program structure of logic programs. We note that implication between clauses is undecidable, and we therefore introduce a stronger form of implication, called T-implication, which is decidable between clauses. We show that for every finite set of clauses there exists a least general generalization under T-implication. We describe a technique to reduce generalizations under implication of a clause to generalizations under θ-subsumption of what we call an expansion of the original clause. Moreover we show that for every non-tautological clause there exists a T-complete expansion, which means that every generalization under T-implication of the clause is reduced to a generalization under θ-subsumption of the expansion.
Peter Idestam-Almquist
[ { "figure_caption": "De nition A clause C -subsumes a clause D, denoted C D, if and only if there exists a substitution such that C D. Two clauses C and D are equivalent under -subsumption, denoted C D, if and only if C D and D C. -subsumption is re exive and transitive. Two clauses may be equivalent undersubsumption without being variants. Two clauses C and D are variants, denoted C ' D, if they are equal up to variable renaming.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "De nition A clause C implies a clause D, denoted C ) D, if and only if every model for C is a model for D (fCg j = D). Two clauses C and D are equivalent under implication, denoted C , D, if and only if C ) D and D ) C.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "De nition A clause C is a generalization under implication of a set of clauses S = fD 1 ; : : : ; D n g if and only if, for every 1 i n, C ) D i . A generalization under implication C of S is a least general generalization under implication (LGGI) of S if and only if, for every generalization under implication C 0 of S, C 0 ) C. Example Consider the following clauses: C = ( p(f(a)) p(a) ); D = ( p(f 2 (b)) p(b) ); E = ( p(f(x)) p(y) ); and F = ( p(f(z)) p(z) ):", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Let C and D be clauses, and T a term set of fDg by w.r.t. fCg. Then C T-implies D w.r.t. T, denoted C ) T D, if and only if I(C; T) j = D . Two clauses C and D are equivalent under T-implication w.r.t. T 0 , denoted C , T 0 D, if and only if C ) T 0 D and D ) T 0 C, where T 0 is a term set of fC; Dg. Note that the de nition of T-implication is independent of the choice of the Skolem substitution . In the following, if we say that a clause C T-implies a clause D without explicitly stating T, we mean that C T-implies D w.r.t. a minimal term set of fDg. Note that if C T-implies D w.r.t. a minimal term set of D then C T-implies D w.r.t. any term set of D. Example Consider the following clauses C and D, substitution , set of terms T and set of clauses I(C; T): C = ( p(f(x)) p(x) ); D = ( p(f 2 (y)) p(y) ); = fy=ag; T = fa; f(a); f 2 (a)g; and I(C; T) = f ( p(f(a)) p(a) ); ( p(f 2 (a)) p(f(a)) ); ( p(f 3 (a)) p(f 2 (a)) ) g: Then T is a minimal term set of fDg by w.r.t. fCg, and I(C; T) is the instance set of C w.r.t. T. We have that I(C; T) j = D and thus C ) T D. Note that C ) D, and that C 6 D.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "A 1 ; : : : ; A m B 1 ; : : : ; B n ) be a clause, T a set of clauses, and = fx 1 =a 1 ; : : : ; x k =a k g a Skolem substitution for C w.r.t. T. Then the set of ground unit clauses f( A 1 ) ; : : : ; ( A m ) ; (B 1 ) ; : : : ; (B n ) g is the complement C of C by w.r.t. T.Theorem 5 (Herbrand's theorem) A set of clauses S is unsatis able if and only if there exists a nite unsatis able set S 0 of ground instances of clauses in S. Corollary 6 (Relationship between implication and T-implication) Let C and D be clauses. Then: a) if C ) T D for some term set T of fDg then C ) D, and b) if C ) D then there exists a term set T of fDg such that C ) T D. Proof: a) If C ) T D then I(C; T) j = D , where T is a term set of fDg by w.r.t. fCg. Hence, I(C; T) D j = ?, where D is the complement of D by . By Theorem 5, fCg D j = ?, and thus C ) D. 
b) If C ) D then fCg D j = ?, where D is the complement of D by w.r.t. fCg. Then by Theorem 5, there exists a term set T of fDg such that I(C; T) D j = ?, and thus I(C; T) j = D . Then by de nition C ) T D. 2 It follows from Corollary 6 that T-implication can become an arbitrary good approximation of implication by extending the considered term set. T-implication is a strictly stronger relation between clauses than implication. The following example illustrates that if a clause C implies a clause D then C does not necessarily T-imply D. Example Consider the following clauses C, D and E, and set of terms T: C = ( p(f(x); y) p(z; x) ); D = ( p(f(x); y) p(z; w) ); E = ( p(f(a); a) p(a; f(a)) ); and T = fa; f(a)g: Then C ) E since D is a resolvent of C resolved with itself and E is an instance of D. The set of terms T is a minimal term set of E. We do not show here the whole set I(C; T), but just point out that I(C; T) 6 j = E and thus C 6 ) T E. However if we extend T to T 0 = fa; f(a); f 2 (a)g then I(C; T 0 ) j = E, and thus C ) T 0 E. Below we show that if a clause C -subsumes a clause D then C also T-implies D. Thus, T-implication is a strictly weaker relation between clauses than -subsumption. We also show decidability of T-implication between clauses. Proposition 7 Let C and D be clauses and T a term set of fDg. If C D then C ) T D. Proof: If C D then there exists a substitution such that C D. Let T be a term set of fDg by w.r.t. fCg. Then we have C 2 I(C; T). We also have C D , and thus C D . Then by Proposition 3, C ) D (fC g j = D ). Consequently I(C; T) j = D , and then by de nition C ) T D. 2 Theorem 8 (Decidability of T-implication between clauses) Let C and D be clauses and T a term set of fDg. Then there exists a procedure to decide if C ) T D. Proof: By the de nition of T-implication we have I(C; T) j = D where T is a term set of D by w.r.t. fCg. We have that I(C; T) is a set of ground clauses and D is a ground clause. Thus, it follows from the decidability of logical consequence in propositional logic that T-implication is decidable. 2 2.4 Generalization under T-implication A least general generalization under T-implication (LGGT) is de ned similar to an LGG and an LGGI. De nition Let C be a clause, S = fD 1 ; : : : ; D n g a set of clauses, and T a term set of S w.r.t. fCg. Then C is a generalization under T-implication of S w.r.t. T if and only if, for every 1 i n, C ) T D i . A generalization under T-implication C of S w.r.t. T is a least general generalization under T-implication (LGGT) of S w.r.t. T if and only if, for every generalization under T-implication C 0 of S w.r.t. T, C 0 ) T 0 C, where T 0 is a minimal term set of C. Example Consider the following clauses: C = ( p(f(a)) p(a) ); D = ( p(f 2 (b)) p(b) ); E = ( p(f(x)) p(y) ); and F = ( p(f(z)) p(z) ):", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Let C be an LGGT of a set of clauses S w.r.t. a term set T. Then C is a complete LGGT of S w.r.t. T if and only if, for every generalization under T-implication C 0 of S w.r.t. T, C 0 C. If C is a clause then we let C + denote the set of positive literals in C, and C the set of negative literals in C. The following proposition has been proved by Gottlob (1987, page 110). Proposition 9 Let C = C + C be a clause and D = D + D a non-tautological clause. If C ) D then C + D + and C D .", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "t. T. 
2 Lemma 12 Let C, D and E be clauses such that C and D have no variables in common, and let T be a term set of fEg by w.r.t. fC; Dg. If C ) T E and D ) T E then C D ) T E. Proof: If C ) T E and D ) T E then by de nition I(C; T) j = E and I(D; T) j = E . Let I(C; T) = fC 1 ; : : : ; C n g and I(D; T) = fD 1 ; : : : ; D m g. Then I(C D; T) = fC i D j j 1 i n and 1 j mg. Let I be a model for I(C D; T). Then, for every 1 i n and 1 j m, I is a model for C i D j . Hence if I is not a model for C i for some 1 i n then I must be a model for D j for every 1 j m. Then it follows that either I is a model for I(C; T) or I is a model for I(D; T), and thus I is a model for E . Consequently, I(C D; T) j = E , and then by de nition C D ) T E. 2", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "A clause C is ambivalent if and only if there exist a positive literal A 2 C and a negative literal B 2 C such that A and B have the same predicate symbol. Example The clause C = ( p(f 2 (a)) q(b); p(a) ) is ambivalent since p(f 2 (a)) and :p(a) have the same predicate symbol. However, C is not recursive since neither p(a) nor q(b) is uni able with a variant of p(f 2 (a)). Proposition 15 Let C be a clause and D a non-ambivalent clause. Then C ) D if and only if C D.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "De nition A clause R is a resolvent of two clauses C and D if and only if there are C , D , A, B and such that: a) C is a factor of C and D is a factor of D, b) C and D have no variables in common, c) A is a literal in C and B is a literal in D , d) is an mgu of fA; Bg, and e) R is the clause ((C fAg) (D fBg)) . The clauses C and D are called parent clauses of R.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "ExampleConsider the following clauses: C = ( p(f(x)) p(x) ); D = ( p(f 2 (x)) p(x) ); E = ( p(f 3 (x)) p(x) ); F = ( p(f 2 (a)) p(a); p(b) ); and G = ( p(x) p(b) ): The clause C is a second root of D, and a third root of E. The clause C is also an indirect second root of F, since C is a second root of D and D -subsumes F. In fact C is a proper indirect second root of F, since C 6 F. For every n 1, the clause G is an indirect nth root of itself, but none of these indirect roots is a proper indirect root.", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Let D be a clause and a sequence of literals. Then a clause E is an expansion of D by if and only if E is an LGG of a set of clauses or-introduced from D by . Example Consider the following clauses: C = ( p(f(x)) p(x) ); D = ( p(f 3 (a)) p(a) ); D 1 = ( p(f 3 (a)) p(f 2 (a)); p(a) ); D 2 = ( p(f 3 (a)); p(f 2 (a)); p(f(a) p(a) ); D 3 = ( p(f 3 (a)); p(f 2 (a)) p(f(a)); p(a) ); and E = ( p(f(x)); p(f 3 (a)) p(a); p(x) ):", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "have C ) D and C 6 D, but for the expansion E of D we have C E.Expansion can be regarded as a transformation technique, since the expansion of a clause is logically equivalent to the clause itself.Theorem 23 (Equivalence preservation of expansion) Let D be a clause, and E an expansion of D. Then E , D.", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Let D be a clause, E an expansion of D and T a term set of fDg. 
Then E is a T-complete expansion of D w.r.t. T if and only if, for every clause C, C E whenever C ) T D. Example Consider the following clauses: C 1 = ( p(f(x)) p(x) ); C 2 = ( p(f 2 (y)) p(y) ); D = ( p(f 4 (a)) p(a) ); E 1 = ( p(f 2 (y)); p(f 4 (a)) p(a); p(y) ); and E 2 = ( p(f(x)); p(f 2 (y)); p(f 4 (a)) p(a); p(y); p(x) ): The clauses C 1 and C 2 are proper indirect roots of D, such that C 1 ) T D and C 2 ) T D. The clause E 1 is an expansion of D by p(f 2 (a))], and E 1 is an expansion of D by p(f 2 (a)); p(f 3 (a)); p(f(a))]. The expansion E 2 is a T-complete expansion but E 1 is not. In the example above the T-complete expansion E 2 of D is also a complete expansion of D. However, in contrast to complete expansions, T-complete expansions exist for all non-tautological clauses. Theorem 26 (Existence of T-complete expansions) Let D be a non-tautological clause and T a term set of fDg. Then there exists a T-complete expansion E of D w.r.t. T. Proof: By Theorem 13, there exists a complete LGGT F of fDg w.r.t. T. Hence, for every clause C, if C ) T D then C F. By the de nition of a complete LGGT, we have F ) T D, and then by Corollary 6, F ) D. By theorem 24, there exists an expansion E of D such that F E. Thus, for every clause C, if C ) T D then C E, and consequently E is a T-complete expansion of D w.r.t. T. 2", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" } ]
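The decision procedure of Theorem 8 can be sketched directly: build the instance set $I(C,T)$ and test propositional entailment of the ground clause $D\sigma$ by brute force over truth assignments. The term encoding and function names below are our own illustrative choices.

```python
# A brute-force sketch of deciding T-implication: terms are nested tuples,
# variables are ('var', name), and literals are (sign, predicate, args).
from itertools import product

def subst(term, theta):
    if term[0] == 'var':
        return theta.get(term[1], term)
    return (term[0],) + tuple(subst(a, theta) for a in term[1:])

def clause_vars(clause):
    vs = set()
    def walk(t):
        if t[0] == 'var':
            vs.add(t[1])
        else:
            for a in t[1:]:
                walk(a)
    for (_, _, args) in clause:
        for a in args:
            walk(a)
    return sorted(vs)

def instance_set(clause, terms):
    """All ground instances of `clause` with variables bound to `terms`."""
    vs = clause_vars(clause)
    return [frozenset((s, p, tuple(subst(a, dict(zip(vs, binding))) for a in args))
                      for (s, p, args) in clause)
            for binding in product(terms, repeat=len(vs))]

def entails(ground_clauses, goal):
    """Propositional check: every model of ground_clauses satisfies goal."""
    atoms = sorted({(p, args) for c in list(ground_clauses) + [goal]
                    for (s, p, args) in c})
    for bits in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, bits))
        sat = lambda c: any(model[(p, args)] == (s == '+') for (s, p, args) in c)
        if all(sat(c) for c in ground_clauses) and not sat(goal):
            return False
    return True

a = ('a',); f = lambda t: ('f', t); x = ('var', 'x')
C = {('+', 'p', (f(x),)), ('-', 'p', (x,))}                # p(f(x)) <- p(x)
D = frozenset({('+', 'p', (f(f(a)),)), ('-', 'p', (a,))})  # ground D-sigma
T = [a, f(a), f(f(a))]
print(entails(instance_set(C, T), D))                      # True: C T-implies D
```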
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b21", "b24", "b18", "b19", "b21", "b22", "b20", "b24", "b18", "b19", "b24", "b15", "b12", "b22", "b14", "b3", "b0", "b7" ], "table_ref": [], "text": "Knowledge of cause and e ect is crucial for modeling the a ects of actions. For example, if we observe a statistical correlation between smoking and lung cancer, we can not conclude from this observation alone that our chances of getting lung cancer will change if we stop smoking. If, however, we also believe that smoking is a cause for lung cancer, then we can conclude that our choice whether to continue or quit smoking will a ect whether we get lung cancer.\nWork by arti cial intelligence researchers, statisticians, and philosophers have emphasized the importance of identifying causal relationships for purposes of modeling the e ects of actions. For example, Simon (1977), Robins (1986), Spirtes et al. (1993), and Pearl (1993Pearl ( , 1995) ) have developed graphical models of cause and e ect, and have demonstrated how these models are important for reasoning about the e ects of actions. In addition, Robins (1986), Rubin (1978), Pearl and Verma (1991), and Spirtes et al. (1993) have developed approaches that embrace causality for learning the e ects of actions from data.\nOne useful framework for causal reasoning is that of Pearl (1993Pearl ( , 1995))|herein Pearl.\nUsing his framework, we construct a causal graph G. The nodes in G correspond to a set of variables U that we wish to model. Each variable has a set of mutually exclusive and collectively exhaustive values or instances. The arcs in G represent (informal) assertions of cause|in particular, the parents of x 2 U are direct causes of x. Pearl gives these informal assertions of cause an operational meaning by introducing a special class of actions on the variables U and then describing the a ects of these actions using the structure of the causal graph. Speci cally, he posits that, for every variable x 2 U, there exists another variable x, which we call an atomic intervention on x. The variable x has an instance set(x) for every instance x of x, and an instance idle. The instance set(x) corresponds to an action that c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.\nforces x to take on instance x and indirectly a ects other variables through the change in x. The instance idle corresponds to the action of doing nothing. 1 Pearl then asserts that the e ects of atomic interventions on the variables in U are determined by the structural equations x = f x (Pa G (x); x; x ) for all x 2 U, where (1) Pa G (x) are the parents of x in G|that is, the direct causes of x, (2) the variables x are exogenous and mutually independent random disturbances, and (3) the function f x has the property that x = x when x =set(x) regardless of the values of Pa G (x) and x . Following Pearl, we call this framework for de ning cause a structural-equation model.\nAnother useful framework for causal reasoning, closely related to Pearl's, is that of Spirtes et al. (1993)|herein SGS.\nDespite these and other important advances in reasoning about cause and e ect, foundations for such approaches are lacking. In any framework for causal reasoning, it is important to consider what concepts are primitive|that is, assumed to be self evident and used to de ne other concepts. 
As much as is possible, these primitives should have simple and universal meanings so that claims of causation can be empirically tested and causal inferences can be trusted. Unfortunately, the primitives used by Pearl, SGS, and other researchers are not ideal in this respect.

For example, SGS take cause itself to be a primitive. Given the controversies in statistics and other disciplines concerning the meaning of cause, we believe that a better primitive can be found. Pearl takes random disturbance, exogenous variable, and atomic intervention as primitives. One problem with this approach is that we need an understanding of cause and effect to identify an intervention as atomic. To illustrate the problem, suppose we wish to model the causal relationship between the binary variables w and h representing whether or not a person considers himself to be wealthy and happy, respectively. Further, suppose we can give this person a large sum of stolen money along with the knowledge that this money is stolen. Now we ask the question: Is this action an instance of an atomic intervention for w? If this person does not care about how he becomes wealthy, then the answer is "yes." If this person is more typical, however, then the answer is "no," because this action would affect both w and h directly. Thus, we must first determine whether or not the action is a direct cause of h to determine whether or not this action is an instance of an atomic intervention.

In this paper, we provide a principled foundation for causal reasoning. In particular, we explicate a set of primitives from decision theory, and use these primitives to define the concepts of cause and atomic intervention as well as those of random disturbance and exogenous variable. These primitives are simple to understand and used uniformly across many disciplines.

The basic idea behind our definition of cause is as follows. Following the paradigm of decision theory, we focus on a person, the decision maker, who has one or more decisions to make. For each variable that we wish to model in considering these decisions, we distinguish the variable as being either a decision variable or a chance variable. A decision variable is a variable whose instances correspond to possible actions among which the person can choose. A chance variable is any other variable. This framework is similar to Pearl's, where chance variables correspond to the variables U and decision variables correspond to interventions. The differences are that (1) we do not require there to be a decision variable for every chance variable, and (2) decision variables need not be atomic interventions.

Now, for simplicity, suppose that we have a model consisting of only one decision variable d and a set of chance variables U. Imagine that we choose one of the instances of d and subsequently observe $x \in U$. If we believe that x can be different for different choices, then, by our definition, we say that d is a cause for x. For example, suppose decision variable s represents the decision of whether or not to continue smoking and chance variable l represents whether or not we get lung cancer. If we believe that we will get lung cancer if we continue smoking and that we may not get lung cancer if we quit, then we can say that s is a cause of l.

Our definition is related to the notion of a counterfactual: a hypothetical statement or question that can not be verified or answered through observation (Lewis, 1973; Holland, 1986).
In our smoking example, we ask the question "Will deciding differently possibly change our health outcome?" This question can not be answered by any observation, because we must either quit or continue to smoke; we can not do both. Using counterfactuals, Rubin (1978) defines a notion of causal effect that is closely related to our definition of cause.

The problem with most definitions of cause based on intervention is that they do not allow chance variables to be the causes of other chance variables. Consider the variables g and c representing a person's gender at birth and whether or not that person gets breast cancer, respectively. Although g is a chance variable (we cannot choose our gender), we often hear people say in natural discourse that g causes c. In general, we would like to accommodate such assertions. The definition of cause that we present does indeed permit chance variables to be causes.

There is, however, one catch. Namely, when we assert that a set of chance variables X is a cause of chance variable y, we must also specify the decision or decisions that bring about the possible changes in X and y. In our breast-cancer problem, we can assert that g causes c, but we must explicate a decision that possibly leads to a change in gender and breast cancer. For example, we can say that g causes c with respect to decision variable d, where d represents the decision of whether or not to perform genetic surgery at conception.

By including a decision context in assertions of cause, our definition departs from the traditional view of causation. Nonetheless, this departure makes causal assertions more precise. For example, consider another decision that will likely lead to a change in gender: a decision o of whether or not to have a sex-change operation at birth. In this case, it may be reasonable to assert that g is not a cause of c with respect to o. Thus, causal relationships among chance variables may depend on the decisions available for intervention; and our definition accommodates this dependence.

Our paper is organized into four parts. In part 1 (Sections 2 and 3), we develop our definition of cause, using the decision-theoretic primitives of Savage (1954). In Section 2, we introduce a simpler relation than cause, which we call limited unresponsiveness. In Section 3, we define cause in terms of limited unresponsiveness.

In part 2 (Sections 4 through 7), we address the graphical representation of cause. In Section 4, we review a directed-acyclic-graph (DAG) representation, known as an influence diagram, which has been used for two decades by decision analysts to model the effects of decisions (Howard and Matheson, 1981). We demonstrate the inadequacies of the influence diagram as a representation of cause. In the following three sections, we develop a special condition on the influence diagram, known as canonical form, that improves the representation of cause.

In part 3 (Section 8), we use our definitions of cause, atomic intervention, and mapping variable, along with canonical form, to build a correspondence with (and thus a foundation for) Pearl's causal framework.

In part 4 (Section 9), we demonstrate an important use of canonical form. Namely, we show how to use influence diagrams in canonical form to do general counterfactual reasoning.

We present our framework in the traditional decision-analytic paradigm of a "one shot" decision. In particular, we do not consider experimental studies, where variables are measured repeatedly.
Nonetheless, one can easily extend our framework to such situations by introducing the assumption of exchangeability (de Finetti, 1937). Bayesian methods for learning models of cause that are based on this approach are discussed in Angrist et al. (1995) and Heckerman (1995).

Unresponsiveness

In this section, we introduce the notion of limited unresponsiveness, a fundamental relation that we use to define cause. We define limited unresponsiveness using primitives from decision theory as explicated (for example) by Savage (1954).

We begin with a description of the primitives act, consequence, and possible state of the world. Savage describes and illustrates these concepts as follows:

To say that a decision is to be made is to say that one or more acts is to be chosen, or decided on. In deciding on an act, account must be taken of the possible states of the world, and also of the consequences implicit in each act for each possible state of the world. A consequence is anything that may happen to the person.

Consider an example. Your wife has just broken five good eggs into a bowl when you come in and volunteer to finish making the omelet. A sixth egg, which for some reason must either be used for the omelet or wasted altogether, lies unbroken beside the bowl. You must decide what to do with this unbroken egg. Perhaps it is not too great an oversimplification to say that you must decide among three acts only, namely, to break it into the bowl containing the other five, to break it into a saucer for inspection, or to throw it away without inspection. Depending on the state of the egg, each of these three acts will have some consequence of concern to you, say that indicated by Table 1.

For purposes of our discussion, there are two points to emphasize from Savage's exposition. First, it is important to distinguish between that which we can choose (namely, acts) and that which we can not choose (namely, consequences). Second, once we choose an act, the consequence that occurs is logically determined by the state of the world. In the omelet story, the possible states of the world readily come to mind given the description of the problem. Furthermore, we can observe the state of the world (i.e., the condition of the egg). In many if not most situations, however, the state of the world is unobservable. That is, the assertion "the state of the world is x" is a counterfactual. In these situations, we can bring the possible states to mind by thinking about the acts and consequences. For example, suppose we have a decision to continue smoking or quit, and we model the consequences of getting cancer or not. These acts and consequences bring to mind four possible states of the world, as shown in Table 2. These possible states have no familiar names; we simply label them with numbers. The actual state of the world is not observable, because, if we decide to quit, then we won't know for sure what would have happened had we continued, and vice versa.

The acts and consequences in this problem may actually bring to mind more than four, even an infinite number, of states of the world. For example, the state of the world may correspond to degree of susceptibility of lung tissue to tar as measured by a biochemical assay. Nonetheless, given the discrete acts and consequences that we have chosen to model in the problem, the four states in Table 2 are sufficiently detailed.
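A state of the world, in this discrete setting, is just a function from acts to consequences, so the four states of Table 2 can be enumerated mechanically. A minimal Python sketch of that enumeration, with our own labels for the acts and consequences:

```python
# Enumerating possible states of the world for the smoking example: a state
# assigns one consequence to each act, so with a acts and c consequences
# there are c**a states (here 2**2 = 4, matching Table 2).
from itertools import product

acts = ['continue', 'quit']
consequences = ['cancer', 'no cancer']

states = [dict(zip(acts, outcome))
          for outcome in product(consequences, repeat=len(acts))]
for i, s in enumerate(states, 1):
    print(i, s)   # e.g. one state has cancer either way; another only if we continue
```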
Savage recognizes this issue of detail in his definition of state of the world: "a description of the world, leaving no relevant aspect undescribed." In general, if we have a decision problem with c consequences and a acts, then at most $c^a$ possible states of the world need be distinguished.

(The idea that the state of the world may not be observable can be traced to Neyman (1923), who derived statistical methods for estimating the differences in yields of different crops planted on the same plot of land, in circumstances where only one crop was actually planted on a plot. Rubin (1978) and Howard (1990) have formalized this idea. Savage (1954) defines an act to be "a function attaching a consequence to each state of the world." In contrast, we take act to be a primitive, as do many decision analysts (e.g., Howard, 1990).)

In practice, it is often cumbersome if not impossible to reason about a monolithic set of acts, possible states of the world, or consequences. Therefore, we typically describe each of these items in terms of a set of variables that take on two or more values or instances. We call a variable describing a set of consequences a chance variable. For example, in the omelet story, we can describe the consequences in terms of three chance variables: (1) number of eggs in the omelet? (o) having instances zero, five, and six, (2) number of good eggs destroyed? (g) having instances zero, one, and five, and (3) saucer to wash? (s) having instances no and yes. That is, every consequence corresponds to an assignment of an instance to each chance variable.

We call a variable describing a set of acts a decision variable (or decision, for short). For example, suppose we have a set of possible acts about how we are going to dress for work. In this case, we can describe the acts in terms of the decision variables shirt (plain or striped), pants (jeans or corduroy), and shoes (tennis shoes or loafers). In this example and in general, every act corresponds to a choice of an instance for each decision variable.

The description of possible states of the world in terms of component variables is a bit more complicated, and is not needed for our explication of unresponsiveness and limited unresponsiveness. We defer discussion of this issue to Section 6.

As a matter of notation, we use D to denote the set of decisions that describe the acts for a decision problem, and lower-case letters (e.g., d, e, f) to denote individual decisions in the set D. Also, we use U to denote the set of chance variables that describe the consequences, and lower-case letters (e.g., x, y, z) to denote individual chance variables in U. In addition, we use the variable S to denote the state of the world (the instances of S correspond to the possible states of the world). Thus, any given decision problem, or domain, as we sometimes call it, is described by the variables U, D, and S.

With this introduction, we can discuss the concept of limited unresponsiveness. To illustrate this concept, consider the following decision problem adapted from Angrist et al. (1995). Suppose we are a physician who has to decide whether to recommend for or against a particular treatment. Given our recommendation, our patient may or may not actually accept the treatment, and may or may not be cured as a result.

Table 3 (a decision problem about recommending a medical treatment) lists, for each state of the world S, the instances of t (taken?) and c (cured?) under each act of the recommendation r (take, don't take); only its first row is recoverable here: state 1 (complier, helped) has t = yes and c = yes under the act take.
Here, we use a single decision variable recommendation (r) to represent our acts (i.e., D = {r}), and two chance variables taken? (t) and cured? (c) to represent whether or not the patient actually accepts the treatment and whether or not the patient is cured, respectively (i.e., U = {t, c}).

The possible states of the world for this problem are shown in Table 3. For example, consider the first row in the table. Here, the patient will accept the treatment if and only if we recommend it, and will be cured if and only if he takes the treatment. We describe this state by saying that the patient is a complier and is helped by the treatment. We discuss the description of these states in more detail in Section 6.

As is indicated in the table, suppose that we believe the last four states of the world are impossible (i.e., have a probability of zero). These last four states share the property that t takes on the same instance for both acts, whereas c does not. Thus, this decision problem satisfies the following property: in all of the states of the world that are possible, if t is the same for the two acts, then c is also the same. We say that c is unresponsive to r in states limited by t.

In general, suppose we have a decision problem described by variables U, D, and S. Let X be a subset of U, and Y be a subset of $U \cup D$. We say that X is unresponsive to D in states limited by Y if we believe that, for all possible states of the world, if Y assumes the same instance for any two acts then X must also assume the same instance for those acts. We describe the notion of limited unresponsiveness in earlier work in terms of a conditional fixed set (Heckerman and Shachter, 1994). Angrist et al. (1995) discuss an instance of limited unresponsiveness, which they call the exclusion restriction.

To be more formal, let $X[S, D]$ be the instance that X assumes (with certainty) given the state of the world S and the act D. For example, in the omelet story, if S is the state of the world where the egg is good, and D is the act throw away, then $o[S, D]$ (the number of eggs in the omelet) assumes the instance five. Then, we have the following definition.

Definition 1 (Limited (Un)responsiveness): Given a decision problem described by chance variables U, decision variables D, and state of the world S, and variable sets $X \subseteq U$ and $Y \subseteq D \cup U$, X is said to be unresponsive to D in states limited by Y, denoted $X \not\leftarrow_Y D$, if we believe that

$\forall S \in S,\ D_1 \in D,\ D_2 \in D:\ Y[S, D_1] = Y[S, D_2] \implies X[S, D_1] = X[S, D_2]$

X is said to be responsive to D in states limited by Y, denoted $X \leftarrow_Y D$, if it is not the case that X is unresponsive to D in states limited by Y; that is, if we believe that

$\exists S \in S,\ D_1 \in D,\ D_2 \in D \text{ s.t. } Y[S, D_1] = Y[S, D_2] \text{ and } X[S, D_1] \neq X[S, D_2]$

When X is (un)responsive to D in states limited by $Y = \emptyset$, we simply say that X is (un)responsive to D. The notion of unresponsiveness is significantly simpler than that of limited unresponsiveness. That is, when $Y = \emptyset$, the equalities on the left-hand side of the implications in Definition 1 are trivially satisfied. Thus, X is unresponsive to D if we believe that, in each possible state of the world, X assumes the same instance for all acts; and X is responsive to D if there is some possible state of the world where X differs for two different acts.
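Definition 1 is directly mechanizable when the possible states are given as a table of outcomes per act. The following Python sketch checks (limited) unresponsiveness over such a table; the states listed are an illustrative possible subset of Table 3 (the full table is not reproduced here), and the encoding is our own.

```python
# A sketch of Definition 1: check limited (un)responsiveness from a table of
# possible states.  states[s][act] maps each variable to its instance.
from itertools import combinations

states = {
    'complier, helped':           {'take':  {'t': 'yes', 'c': 'yes'},
                                   "don't": {'t': 'no',  'c': 'no'}},
    'never-taker, never-cured':   {'take':  {'t': 'no',  'c': 'no'},
                                   "don't": {'t': 'no',  'c': 'no'}},
    'always-taker, always-cured': {'take':  {'t': 'yes', 'c': 'yes'},
                                   "don't": {'t': 'yes', 'c': 'yes'}},
}

def unresponsive(X, Y, states):
    """True iff X is unresponsive to the decisions in states limited by Y."""
    for outcome in states.values():
        for d1, d2 in combinations(outcome, 2):
            y_same = all(outcome[d1][y] == outcome[d2][y] for y in Y)
            x_same = all(outcome[d1][x] == outcome[d2][x] for x in X)
            if y_same and not x_same:
                return False
    return True

print(unresponsive(['c'], [], states))      # False: c is responsive to r
print(unresponsive(['c'], ['t'], states))   # True:  c unresponsive limited by t
```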
As examples of responsive variables, consider the omelet story. Let S denote the state where the egg is good, and $D_1$ and $D_2$ denote the acts break into bowl and throw away, respectively. Then, for the variable o (number of eggs in omelet?), we have $o[S, D_1] = $ six and $o[S, D_2] = $ five. Consequently, o is responsive to D. (Technically, we should say that {o} is responsive to D. For simplicity, however, we usually drop set notation for singletons.) In a similar manner, we can conclude that g (number of good eggs destroyed?) and s (saucer to wash?) are each responsive to D.

Note that if a chance variable x is responsive to D, then, to some degree, it is under the control of the decision maker. Consequently, the decision maker can not observe x prior to choosing an act for D. For example, in the omelet story, we can not observe any of the responsive variables o, g, or s before choosing an act. (To be more precise, the variable o represents the number of eggs in the omelet after we choose an act for D. This variable should not be confused with another variable, say o', corresponding to the number of eggs in the omelet before we choose D. Whereas o is responsive to D and cannot be observed before choosing an act, o' is unresponsive to D and can be observed before choosing D.)

As an example of an unresponsive variable, suppose we include S (the state of the world) as a variable in U. (E.g., in the omelet story, we can take U to be {S, o, g, s}.) By Savage's definition of S, it must be unresponsive to D. Note that including S in U creates no new states of the world.

As we have discussed, the notions of unresponsiveness and limited unresponsiveness are closely related to concepts in counterfactual reasoning. When we determine whether or not a chance variable x is unresponsive to decisions D, we essentially answer the query "Will the outcome of x be the same no matter how we choose D?" Furthermore, when we determine whether or not x is unresponsive to D in states limited by Y, we answer the query "If Y will not change as a result of our choice for D, will the outcome of x be the same?" One of the fundamental assumptions of our work presented here is that these counterfactual queries are easily answered. In our experience, we have found that decision makers are indeed comfortable answering such restricted counterfactual queries.

The concepts of responsiveness and probabilistic independence are related, as illustrated by the following theorem.

Theorem 1: If a set of chance variables X is unresponsive to a set of decision variables D, then X is probabilistically independent of D.

Proof: By definition of unresponsiveness, X assumes the same instance for all acts in any possible state of the world. Consequently, we can learn about X by observing S, but not by observing D. □

Nonetheless, the two concepts are not identical. In particular, the converse of Theorem 1 does not hold. For example, let us consider the simple decision of whether to bet heads or tails on the outcome of a coin flip. Assume that the coin is fair (i.e., the probabilities of heads and tails are both 1/2) and that the person who flips the coin does not know our bet. Here, the possible outcomes of the coin toss correspond to the possible states of the world. Further, let decision variable b denote our bet, and chance variable w describe the possible consequences that we win or not. In this situation, w is responsive to b, because for both possible states of the world, w will be different for the different bets. Nonetheless, the probability of w is 1/2, whether we bet heads or tails.
That is, w and b are probabilistically independent.

Limited unresponsiveness and conditional independence are less closely related than are their unqualified counterparts. Namely, limited unresponsiveness does not imply conditional independence. For example, in the medical-treatment story, c (cured?) is unresponsive to r (recommendation) in states limited by t (taken?), but it is reasonable for us to believe that c and r are not independent given t, perhaps because there is some factor that, partially or completely, determines how a person reacts to both recommendations and treatment.

We can derive several interesting properties of limited unresponsiveness from its definition:

1. $X \not\leftarrow_Y D \iff \forall x \in X,\ x \not\leftarrow_Y D$
2. $X \not\leftarrow_W D \iff X \cup W \not\leftarrow_W D$
3. $X \not\leftarrow_D D$
4. $X \not\leftarrow_Y D \implies X \not\leftarrow_{Y \cup Z} D$
5. $X \not\leftarrow_{Y \cup Z} D$ and $Y \not\leftarrow_Z D \implies X \not\leftarrow_Z D$
6. $X \leftarrow_Z D$ and $W \not\leftarrow_Z D \implies X \leftarrow_{W \cup Z} D$

where D is the set of decision variables in the domain, X and W are arbitrary sets of chance variables in U, and Y and Z are arbitrary sets of variables in $U \cup D$.

The proofs of these properties are straightforward. For example, consider property 5: if Z assumes the same instance for two acts in some possible state, then so does Y (because $Y \not\leftarrow_Z D$), hence so does $Y \cup Z$, and therefore so does X (because $X \not\leftarrow_{Y \cup Z} D$). Other properties follow from these. For example, it is true trivially that $\emptyset \not\leftarrow_Y D$. Consequently, by Property 2, we know that $Y \not\leftarrow_Y D$. As another example, a special case of Property 4 is that whenever X is unresponsive to D, then X will be unresponsive to D in states limited by any Z. Also, Properties 4 and 5 imply that limited unresponsiveness is transitive: $X \not\leftarrow_Y D$ and $Y \not\leftarrow_Z D$ imply $X \not\leftarrow_Z D$.

In closing this section, we note that the definition of limited unresponsiveness can be generalized in several ways. In one generalization, we can define what it means for $X \subseteq U$ to be unresponsive to D in states of the world limited by $\mathcal{Y}$, a set of instances of Y. Namely, we say that X is unresponsive to D in states limited by $\mathcal{Y}$ if, for all possible states of the world S, and for any two acts $D_1$ and $D_2$, $Y[S, D_1] = Y[S, D_2] \in \mathcal{Y}$ implies $X[S, D_1] = X[S, D_2]$.

In a second generalization, we can define what it means for a set of chance variables to be unresponsive to a subset of all of the decisions. In particular, given a domain described by U and D, we say that $X \subseteq U$ is unresponsive to $D' \subseteq D$ in states limited by Y if $X \not\leftarrow_{Y \cup (D \setminus D')} D$.

Definition of Cause

Given the notion of limited unresponsiveness, we can formalize our definition of cause.

Definition 2 (Causes with Respect to Decisions): Given a decision problem described by U and D, and a variable $x \in U$, the variables $C \subseteq (D \cup U) \setminus \{x\}$ are said to be causes for x with respect to D if C is a minimal set of variables such that $x \not\leftarrow_C D$.

In our framework, decision variables can not be caused, because they are under the control of the decision maker. Consequently, we define causes for chance variables only. Also, as we have discussed, our definition is an extension of existing intervention-based definitions of cause (e.g., Rubin [1978]) in that we allow causes to include chance variables. In addition, our definition of cause departs from traditional usage of the term in that cause-effect assertions may vary with the set of decisions available. We discuss the advantages of this departure shortly.

As an example of our definition, consider the decision to continue or quit smoking, described by the decision variable s (smoke) and the chance variable l (lung cancer?). If we believe that s and l are probabilistically dependent, then, by Theorem 1, it must be that $l \leftarrow s$.
3. Definition of Cause

Given the notion of limited unresponsiveness, we can formalize our definition of cause.

Definition 2 (Causes with Respect to Decisions) Given a decision problem described by U and D, and a variable x ∈ U, the variables C ⊆ D ∪ U \ {x} are said to be causes for x with respect to D if C is a minimal set of variables such that x ↚_C D.

In our framework, decision variables cannot be caused, because they are under the control of the decision maker. Consequently, we define causes for chance variables only. Also, as we have discussed, our definition is an extension of existing intervention-based definitions of cause (e.g., Rubin, 1978) in that we allow causes to include chance variables. In addition, our definition of cause departs from traditional usage of the term in that cause–effect assertions may vary with the set of decisions available. We discuss the advantages of this departure shortly.

As an example of our definition, consider the decision to continue or quit smoking, described by the decision variable s (smoke) and the chance variable l (lung cancer?). If we believe that s and l are probabilistically dependent then, by Theorem 1, it must be that l ← s. Furthermore, by Property 3, we know that l ↚_s s. Consequently, by Definition 2, we have that s is a cause of l with respect to s.

As another example, consider the medical-treatment story. We have that c (cured?) is responsive to r (recommendation), because (among other reasons) in the first row in Table 3, the patient is cured if and only if we recommend the treatment. Furthermore, as we discussed in the previous section, c is unresponsive to r in states limited by t (taken?). Consequently, we have that t is a cause of c with respect to r.

The advantage of defining cause relative to decisions is made clear by our breast-cancer example given in the introduction. Let g and c denote the chance variables gender? and breast cancer?, respectively. Now, imagine two decisions available to alter gender: o, a decision to have a sex-change operation at birth, and d, a decision to change chromosomes at conception by microsurgery. It is possible for someone to believe that c ↚ o and yet c ← d and c ↚_g d. That is, it is possible for someone to believe that gender is a cause of breast cancer with respect to the chromosome change but not with respect to the sex-change operation. In this situation, it does not make sense to make the unqualified statement "gender is a cause of breast cancer." In general, our decision-based definition provides added clarity.

Several consequences of Definition 2 are worth mentioning. First, although cause is irreflexive by definition, it is not always asymmetric. For example, in our story about the coin toss, consider another variable m that represents whether or not the outcome of the coin toss matches our bet b. In the story as we have told it, m is a deterministic function of w (win?), and vice versa. Consequently, we have w ↚_m b and m ↚_w b; and so m is a cause of w and w is a cause of m with respect to b. Note that any hint of uncertainty destroys this symmetry. For example, if there is a possibility that the person tossing the coin will cheat (so that we may lose even if we match), then we can conclude that m is a cause of w, but not vice versa. This symmetry would also be destroyed if we had a decision controlling w to which m is unresponsive.

Second, cause is transitive for single variables. In particular, if x is a cause for y and y is a cause for z with respect to D, then z ← D and (by the transitivity of unresponsiveness) z ↚_x D. Consequently, x is a cause for z with respect to D. Note that transitivity does not necessarily hold for causes containing sets of variables, because the minimality condition in Definition 2 may not be satisfied.

Third, C = ∅ is a set of causes for x with respect to D if and only if x is unresponsive to D.

Fourth, we have the following theorem, which follows from Definition 2 and several of the properties of limited unresponsiveness given in Section 2.

Theorem 2 Given any x ∈ U, if C is a set of causes for x with respect to D and w ∈ C ∩ U, then w must be responsive to D.

Proof: For any chance variable w ∈ C, let C′ = C \ {w}. By the minimality condition in our definition, we have

    x ←_{C′} D.  (1)

Suppose that w ↚ D. Then, by Property 4, we have

    w ↚_{C′} D.  (2)

Applying Equations 1 and 2 to Property 6, we have that x ←_C D, which contradicts the fact that C is a set of causes for x with respect to D. ∎
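Definition 2 can be read directly as a brute-force search. The sketch below — again our own toy encoding of the medical-treatment story, with the decision recorded as a variable "r" in each outcome — enumerates candidate sets C and keeps the minimal ones; note that it returns both {t} and {r} itself, the latter by Property 3, just as in the smoking example.

```python
# A brute-force reading of Definition 2 (a sketch; the state encoding is ours).
from itertools import combinations, product

acts = ["take", "don't take"]
t_maps = [dict(zip(acts, v)) for v in product(["yes", "no"], repeat=2)]
c_maps = [dict(zip(["yes", "no"], v)) for v in product(["cured", "not"], repeat=2)]
states = [{a: {"r": a, "t": tm[a], "c": cm[tm[a]]} for a in acts}
          for tm in t_maps for cm in c_maps]

def unresponsive(x, C, states):
    return all(S[d1][x] == S[d2][x]
               for S in states for d1 in acts for d2 in acts
               if all(S[d1][y] == S[d2][y] for y in C))

def causes(x, variables):
    if unresponsive(x, set(), states):
        return [set()]              # third consequence: the empty set is the cause
    cands = [set(c) for r in range(len(variables))
             for c in combinations([v for v in variables if v != x], r + 1)
             if unresponsive(x, set(c), states)]
    return [c for c in cands if not any(o < c for o in cands)]

print(causes("c", ["r", "t"]))   # [{'r'}, {'t'}]: both are minimal (cf. Property 3)
```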
To illustrate the use of this theorem, let us extend the medical-treatment example by imagining that there is some gene that affects how a person reacts to both our recommendation and to therapy. In this situation, it is reasonable for us to assert that the variable g (genotype?) is unresponsive to r. Thus, by Theorem 2, g cannot be among the causes for any variable.

This consequence of our definition may seem unappealing. Intuitively, we would like to be able to say that (in some sense) g is a cause of c. Indeed, our definition does not preclude the ability to make such assertions. Namely, there is no reason to require that the decisions D be implementable in practice or at all. If we want to think about whether or not the patient's genotype is a cause for his cure, then we can imagine an action that can alter one's genetic makeup — for example, retroviral therapy (v). In this case, it is reasonable to conclude that {r, g} is a cause for t with respect to the decisions {r, v}. Nonetheless, as we have discussed, we must be clear about the action(s) that alter genotype to make this statement of cause precise.

Finally, we can generalize our definition of what it means for a set of variables to cause x to a definition of what it means for a set of instances to cause x. Namely, we say that 𝒞, a set of instances of C, is a cause for x ∉ C with respect to D if C is a minimal set of variables such that x is unresponsive to D in states limited by 𝒞. That is, 𝒞 is a cause for x with respect to D if we replace our definition of cause with the weaker requirement that x be unresponsive to D in states limited by 𝒞.

4. Influence Diagrams

In this and the following three sections, we examine the graphical representation of cause within our framework. This study is useful in its own right, and it will also help to relate our framework to Pearl's structural-equation model. We begin, in this section, with a review of the influence-diagram representation.

An influence diagram is (1) an acyclic directed graph G containing decision and chance nodes corresponding to decision and chance variables, and information and relevance arcs, representing what is known at the time of a decision and probabilistic dependence, respectively; (2) a set of probability distributions associated with each chance node; and, optionally, (3) a utility node and a corresponding set of utilities (Howard & Matheson, 1981).

An information arc is one that points to a decision node. An information arc from chance or decision node a to decision node d indicates that variable a will be known when decision d is made. (We shall use the same notation to refer to a variable and its corresponding node in the diagram.) A relevance arc is one that points to a chance node. The absence of a possible relevance arc represents conditional independence. To identify relevance arcs, we start with an ordering of the variables in U = (x_1, ..., x_n). Then, for each variable x_i in order, we ask the decision maker to identify a set Pa_G(x_i) ⊆ {x_1, ..., x_{i−1}} ∪ D that renders x_i and {x_1, ..., x_{i−1}} ∪ D conditionally independent. That is,

    p(x_i | x_1, ..., x_{i−1}, D, ξ) = p(x_i | Pa_G(x_i), ξ)  (3)

where p(X | Y, ξ) denotes the probability distribution of X given Y for a decision maker with background information ξ. For every variable z in Pa_G(x_i), we place a relevance arc from z to x_i in graph G of the influence diagram. That is, the nodes Pa_G(x_i) are the parents of x_i in G.
Associated with each chance node x_i in an influence diagram are the probability distributions p(x_i | Pa_G(x_i), ξ). From the chain rule of probability, we know that

    p(x_1, ..., x_n | D, ξ) = ∏_{i=1}^{n} p(x_i | x_1, ..., x_{i−1}, D, ξ).  (4)

Combining Equations 3 and 4, we see that any influence diagram for U ∪ D uniquely determines a joint probability distribution for U given D. That is,

    p(x_1, ..., x_n | D, ξ) = ∏_{i=1}^{n} p(x_i | Pa_G(x_i), ξ).  (5)

Influence diagrams may also contain special chance nodes. A deterministic node corresponds to a variable that is a deterministic function of its parents. A utility node encodes the preferences of the decision maker. Finally, an influence diagram is unambiguous when its decision nodes are totally ordered — that is, when there is a directed path in the influence diagram that traverses all decisions. This total order corresponds to the order in which decisions are made.

In this paper, we concern ourselves neither with the ordering of decision nodes nor with the observation of chance variables before making decisions. Therefore, we are not concerned with information arcs. Likewise, although our new concepts apply to models that include utility nodes, we can illustrate these concepts with models containing only chance, deterministic, and decision variables.

Figure 1a contains an influence diagram for the omelet story. As illustrated in the figure, we use ovals, double ovals, and squares to represent chance, deterministic, and decision nodes, respectively. Among the possible relevance arcs in the influence diagram, several are missing. For example, there is no arc from D to S, representing the independence of D and S (which follows from the assertion that S is unresponsive to D). Figures 1b and 1c contain influence diagrams for the medical-treatment example. The chance variable g (genotype?) is explicitly modeled in Figure 1c.

The ordinary influence diagram was designed to be a representation of conditional independence. Furthermore, as we have discussed, the concepts of conditional independence and limited unresponsiveness are only loosely related. Consequently, the influence diagram is an inadequate representation of causal dependence, at least by our definition of cause. In particular, an influence diagram may contain an arc from node x to node y even though x is not among a set of causes for y. For example, the influence diagram of Figure 1b has arcs from r and t to c due to the dependencies in the domain. Nonetheless, we have established that the singleton {t} is a cause for c with respect to r. Furthermore, an influence diagram may contain no arc from x to y even though x is a cause of y. For example, consider the coin example, illustrated by the influence diagram in Figure 2a. If we believe that the coin is fair, and if we do not bother to model the variable c (the outcome of the coin toss) explicitly (as shown in Figure 2b), then we need not place an arc from b to w, because the probability of winning will be 1/2 regardless of our bet b. Nonetheless, b is a cause for w with respect to b, by our definition.

Despite these limitations, the influence diagram is adequate for purposes of making decisions under uncertainty. In the introduction, we argued that causal information is needed for predicting the effects of actions. Thus, the question arises: "Why do we need anything more than the influence diagram as a representation of the effects of actions?" We give an answer to this question in Section 9, where we discuss counterfactual reasoning. There, we show that the ordinary influence diagram is inadequate for purposes of counterfactual reasoning unless it is in canonical form — a form that accurately reflects cause.
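Equation 5 is the usual influence-diagram factorization, and it is straightforward to evaluate. The sketch below is ours: it encodes the structure of Figure 1b (arcs r → t, t → c, and r → c) with invented conditional probabilities, and multiplies the local distributions to recover the joint p(t, c | r).

```python
# A sketch of Equation 5; the numbers are made up for illustration, and
# only the factorization itself comes from the text.
from itertools import product

p_t_given_r = {("take", "yes"): 0.9, ("take", "no"): 0.1,
               ("don't", "yes"): 0.2, ("don't", "no"): 0.8}
p_c_given_rt = {("take", "yes"): 0.7, ("take", "no"): 0.3,
                ("don't", "yes"): 0.6, ("don't", "no"): 0.2}  # P(c = cured | r, t)

def joint(r):
    """p(t, c | r) = p(t | Pa(t)) * p(c | Pa(c)), per Equation 5."""
    table = {}
    for t, c in product(["yes", "no"], ["cured", "not"]):
        p_c = p_c_given_rt[(r, t)]
        table[(t, c)] = p_t_given_r[(r, t)] * (p_c if c == "cured" else 1 - p_c)
    return table

dist = joint("take")
print(dist)
print("sums to", sum(dist.values()))   # 1.0: a proper joint for each act
```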
5. Direct and Atomic Interventions

In order to define canonical form, we need the concept of a mapping variable. Likewise, in order to define a mapping variable, we need the concept of atomic intervention. We also need the concept of atomic intervention to explicate Pearl's structural-equation model. In this section, we define atomic intervention along with a more general concept called direct intervention.

Roughly speaking, we say that a set of decisions I is a direct intervention on a set of chance variables X if the effects of I on all chance variables are mediated only through the effects of I on X. SGS, who take cause to be a primitive, provide a formal definition of direct intervention (which they call a direct manipulation) that is consistent with our notion. We find it simpler to define direct intervention in terms of limited unresponsiveness.

Definition 3 (Direct Intervention) Given a domain described by U and D, a set of decisions I ⊆ D is said to be a direct intervention on X ⊆ U with respect to D if (1) for all x ∈ X, x ← I, and (2) for all y ∈ U, y ↚_X I.

For example, in the medical-treatment story, r is a direct intervention on t, because t ← r and c ↚_t r. As another example, suppose the physician has an additional decision p of whether or not to pay the patient to take the treatment. It is reasonable to expect that t ← p. Furthermore, if the amount of payment is small, it is reasonable that c ↚_t p. Consequently, p qualifies as a direct intervention on t. Nonetheless, if the amount of payment is sufficiently large, the patient may use that money to improve his health care. Thus, c ←_t p, and p does not satisfy condition (2) for a direct intervention on t.

Given the notion of direct intervention, we can define atomic intervention.

Definition 4 (Atomic Intervention) Given a domain described by U and D, a decision x̂ ∈ D is said to be an atomic intervention on x ∈ U with respect to D if (1) {x̂} is a direct intervention on {x} with respect to D, and (2) x̂ has precisely the instances (a) idle, which corresponds to the instance of doing nothing to x, and (b) set(x′) for every instance x′ of x, where x = x′ whenever x̂ = set(x′).

As we discussed in the introduction, Pearl takes the concept of atomic intervention to be primitive. Whether or not a decision is a direct (or atomic) intervention, however, depends on the underlying causal relationships in the domain. In the medical-treatment story, suppose the physician has a decision k of whether or not to administer the treatment (a drug) without the patient's knowledge. If we believe that the treatment is truly effective and has no placebo effect, then we can assert that k is a direct intervention on t. If, however, we believe that the treatment has only a placebo effect, then k will not be a direct intervention on t, because k will also directly affect c. Thus, the notions of direct and atomic intervention require definitions, lest the meaning of cause be hidden in these primitives.

We note that, when there are bi-directional causal relationships among variables in U, it is not always possible for every chance variable to have its own atomic intervention. For example, consider an adiabatic system consisting of a cylindrical chamber with a moveable top, in which we model the variables pressure? (p) and volume? (v). If we allow the top of the chamber to move freely, then placing various weights on the top of the chamber constitutes an atomic intervention on p, and we have that p is a cause of v with respect to p̂. In contrast, fixing the top of the chamber at particular locations constitutes an atomic intervention on v, and we have that v is a cause of p with respect to v̂. By the laws of physics, however, both decisions p̂ and v̂ cannot be available simultaneously.
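Both conditions of Definition 3 are checkable on an enumerated model. In the sketch below — the same toy encoding of the medical-treatment story used earlier, which is our construction — the single available decision plays the role of I, and the second call shows that the decision is not a direct intervention on c, since t still responds to it even when c is held fixed.

```python
# A sketch of Definition 3 on the medical-treatment story (toy encoding).
from itertools import product

acts = ["take", "don't take"]
t_maps = [dict(zip(acts, v)) for v in product(["yes", "no"], repeat=2)]
c_maps = [dict(zip(["yes", "no"], v)) for v in product(["cured", "not"], repeat=2)]
states = [{a: {"t": tm[a], "c": cm[tm[a]]} for a in acts}
          for tm in t_maps for cm in c_maps]

def unresponsive(x, C, states):
    return all(S[d1][x] == S[d2][x]
               for S in states for d1 in acts for d2 in acts
               if all(S[d1][y] == S[d2][y] for y in C))

def is_direct_intervention(X, U):
    responsive = all(not unresponsive(x, set(), states) for x in X)   # condition (1)
    mediated = all(unresponsive(y, set(X), states) for y in U)        # condition (2)
    return responsive and mediated

print(is_direct_intervention(["t"], ["t", "c"]))   # True:  r is direct on t
print(is_direct_intervention(["c"], ["t", "c"]))   # False: t responds to r even given c
```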
6. Mapping Variables

To understand the concept of a mapping variable, let us reexamine Savage's basic formulation of a decision problem. Recall that the chance variables U are a deterministic function of the decision variables D and the state of the world S. In effect, each possible state of the world defines a mapping from the decisions D to the chance variables U. Thus, S represents all possible mappings from D to U. We can characterize S as a mapping variable for U as a function of D, and use the suggestive notation U(D) to denote this mapping variable.

In general, given a domain described by U, D, and S, a set of decision variables Y ⊆ D, and a set of chance variables X ⊆ U, the mapping variable X(Y) is a variable that represents the possible mappings from Y to X.

As an example, consider the medical-treatment story. The mapping variable t(r) represents the possible mappings from the decision variable r (recommendation) to the chance variable t (taken?). In this example, the instances of t(r), shown in Table 4, have a natural interpretation. In particular, the instance where the patient accepts treatment if and only if we recommend it represents a patient who complies with our recommendation; the instance where the patient accepts treatment if and only if we recommend against it represents a patient who defies our recommendation; and so on.

The notion of a mapping variable is discussed by Heckerman and Shachter (1994), and by Balke and Pearl (1994) under the name "response function." A related counterfactual variable is described by Neyman (1923), Rubin (1978), and Howard (1990). They discuss what we would denote X(Y = y): the variable X if we choose instance y for Y.

An important property concerning mapping variables is that, given variables X, Y, and X(Y), we can always write X as a deterministic function of Y and X(Y). For example, t is a deterministic function of r and t(r); and U is a deterministic function of D and U(D) ≡ S.

In the discussions that follow, it is useful to extend the definition of a mapping variable to include chance variables as arguments. For example, in the medical-treatment story, it seems reasonable to define the mapping variable c(t) with instances helped, hurt, always cured, and never cured. Together, the mapping variables t(r) and c(t) describe the possible states of the world U(D) ≡ S. (E.g., t(r) = complier and c(t) = helped corresponds to state 1 in Table 3.) As we shall see, this decomposition of U(D) facilitates the graphical representation of causal relationships.
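The bookkeeping behind a mapping variable is simply the enumeration of all functions from the instances of Y to the instances of X — which is why X(Y) has as many as |X|^|Y-combinations| instances. The sketch below (ours) enumerates the four instances of t(r) from Table 4 and shows that t is recovered deterministically from r and t(r).

```python
# Enumerating the instances of the mapping variable t(r): all functions
# from instances of r to instances of t (a sketch; names are from the story).
from itertools import product

r_instances = ["take", "don't take"]
t_instances = ["yes", "no"]

# t(r) has |t| ** |r| = 2 ** 2 = 4 instances.
t_of_r = [dict(zip(r_instances, v)) for v in product(t_instances, repeat=2)]
labels = {("yes", "yes"): "always taker", ("yes", "no"): "complier",
          ("no", "yes"): "defier", ("no", "no"): "never taker"}
for m in t_of_r:
    print(labels[(m["take"], m["don't take"])], m)

# t is a deterministic function of r and t(r):
def t(r, t_map):
    return t_map[r]

assert t("take", t_of_r[1]) == "yes"   # t_of_r[1] is the complier
```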
Unfortunately, defining mapping variables with chance-variable arguments is not always possible. In the medical-treatment domain, when the patient is an always taker (states 10 and 11 in Table 3), t = yes regardless of r. Consequently, we cannot tell whether c(t) is helped or always cured — that is, c(t) is not uniquely identified. Because Savage's decision-theoretic framework requires that the state of the world and the act uniquely determine the instance of c(t) (a consequence), the instance of c(t) is not well defined. Nonetheless, c(t) is well defined whenever D includes an atomic intervention on t (t̂), guaranteeing that t will take on all instances (as t̂ varies) in every state of the world.

In general, we have the following definition of mapping variable.

Definition 5 (Mapping Variable) Given a domain described by U and D, chance variables X, and variables Y such that, for every y ∈ Y ∩ U, there exists an atomic intervention ŷ ∈ D, the mapping variable X(Y) is the chance variable that represents all possible mappings from Y to X.

There are several important points to be made about mapping variables as we have now defined them. First, as in the more specific case, X is always a deterministic function of Y and X(Y).

Second, additional probability assessments typically are required when introducing a mapping variable into a probabilistic model. For example, two independent assessments are needed to quantify the relationship between r and t in the medical-treatment story, whereas three independent assessments are required for the node t(r). In general, many additional assessments are required: if X has c instances and Y has a instances, then X(Y) has as many as c^a instances. In real-world domains, however, reasonable assertions of independence decrease the number of required assessments. In some cases, no additional assessments are necessary (see, e.g., Heckerman et al., 1994).

Third, we have the following theorem, which follows immediately from the definitions of limited unresponsiveness and mapping variable. In this and subsequent theorems that mention mapping variables, we assume that the atomic interventions required for the proper definition of the mapping variables are included in D.

Theorem 3 (Mapping Variable) Given a decision problem described by U and D, and variables X ⊆ U and Y ⊆ U ∪ D, X ↚_Y D if and only if X(Y) ↚ D.

For example, in the medical-treatment domain that includes the atomic intervention t̂, we have c ↚_t {r, t̂} and c(t) ↚ {r, t̂}. Roughly speaking, Theorem 3 says that X is unresponsive to D in states limited by Y if and only if the way X depends on Y does not depend on D. This equivalence provides us with an alternative set of conditions for cause.

Corollary 4 (Causes with Respect to Decisions) Given a decision problem described by U and D, and a chance variable x ∈ U, the variables C ⊆ D ∪ U \ {x} are causes for x with respect to D if and only if C is a minimal set of variables such that x(C) ↚ D.

When C are causes for x with respect to D, we call x(C) a causal mapping variable with respect to D. Thus, we have the following consequence of Theorem 3.

Corollary 5 (Causal Mapping Variable) If x(C) is a causal mapping variable for x with respect to D, then x(C) is unresponsive to D.
7. Canonical Form Influence Diagrams

We can now define what it means for an influence diagram to be in canonical form.

Definition 6 (Canonical Form) An influence diagram for a decision problem described by U and D is said to be in canonical form if (1) all chance nodes that are responsive to D are descendants of one or more decision nodes, and (2) all chance nodes that are descendants of one or more decision nodes are deterministic nodes.

An immediate consequence of this definition is that any chance node that is not a descendant of a decision node must be unresponsive to D.

We can construct an influence diagram in canonical form for a given problem by including in the influence diagram a causal mapping variable for every variable that is responsive to the decisions. In doing so, we can make every responsive variable a deterministic function of its mapping variable and the corresponding set of causes. For example, consider the medical-treatment story as depicted in the influence diagram of Figure 3a. The variables t and c are responsive to r, but their corresponding nodes are not deterministic. Consequently, this influence diagram is not in canonical form. To construct a canonical form influence diagram, we introduce the mapping variables t(r) and c(r), as shown in Figure 3b. The responsive variables are now deterministic, and the mapping variables are unresponsive to the decision. This example illustrates an important point: mapping variables may be probabilistically dependent. We return to this issue in Section 8.

In general, we can construct an influence diagram in canonical form for any decision problem characterized by U and D as follows.

Algorithm 1 (Canonical Form)
1. Add a node to the diagram corresponding to each variable in U ∪ D.
2. Order the variables x_1, ..., x_n in U so that the variables unresponsive to D come first.
3. For each variable x_i ∈ U that is responsive to D,
   (a) add a causal-mapping-variable chance node x_i(C_i) to the diagram, where C_i ⊆ D ∪ {x_1, ..., x_{i−1}};
   (b) make x_i a deterministic node with parents C_i and x_i(C_i).
4. Assess independencies among the variables that are unresponsive to D.[11]

[11] Because mapping variables are random variables, the assessment of dependencies among the unresponsive variables is, in principle, no different than that for assessing dependencies among ordinary random variables. Nonetheless, the counterfactual nature of the variables can be confusing. Howard (1990) describes a method of probability assessment that addresses this concern.

This algorithm is well defined, because it is always possible to find a set C_i satisfying the condition in step 3a. In particular, x_i ↚_D D by Property 3. Consequently, even when D contains no atomic intervention, we can always create a causal mapping variable for every responsive variable in U.

Also, the structure of any influence diagram constructed using Algorithm 1 will be valid. Namely, by Corollary 5, all causal mapping variables added in step 3 are unresponsive to D. Thus, suppose we identify the relevance arcs and deterministic nodes according to Equation 3 by using a variable ordering where the nodes in D are followed by the unresponsive nodes (including the causal mapping variables), which are in turn followed by the responsive nodes in the order specified at step 2. Then, (1) we would add no arcs from D to the unresponsive nodes, by Theorem 1 (and the algorithm adds none); (2) we would add arcs among the unresponsive nodes as described in step 4; and (3) for every responsive variable x_i, we would make x_i a deterministic node (as described in step 3b), by definition of a mapping variable.

In addition, the structure that results from Algorithm 1 will be in canonical form. In particular, because there are no arcs from D to the unresponsive nodes, only responsive variables can be descendants of D. Also, by Theorem 2, we know that every responsive node is a descendant of D and (by construction) a deterministic node.
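Algorithm 1 is easy to render as code once the two elicitation steps — judging responsiveness and picking the cause sets — are abstracted away. The sketch below is schematic and ours: `responsive_to_D` and `pick_causes` stand in for the judgments the decision maker supplies, and step 4 (assessing independencies) is left as a comment.

```python
# A schematic sketch of Algorithm 1; elicitation is stubbed out.
def canonical_form(U_ordered, D, responsive_to_D, pick_causes):
    """U_ordered: chance variables with the unresponsive ones first (step 2)."""
    nodes, deterministic, parents = list(U_ordered) + list(D), set(), {}
    for i, x in enumerate(U_ordered):
        if responsive_to_D(x):
            # Step 3a: add a causal mapping variable x(C),
            # with C a subset of D plus the earlier chance variables.
            C = pick_causes(x, set(D) | set(U_ordered[:i]))
            mapping = f"{x}({','.join(sorted(C))})"
            nodes.append(mapping)
            # Step 3b: x becomes deterministic with parents C and x(C).
            deterministic.add(x)
            parents[x] = set(C) | {mapping}
    # Step 4 (not modeled): assess independencies among the unresponsive
    # variables, including the new mapping variables.
    return nodes, deterministic, parents

# Medical-treatment story with D = {r, t^}: t gets t(r,t^), c gets c(t).
nodes, det, pa = canonical_form(
    ["g", "t", "c"], ["r", "t^"],
    responsive_to_D=lambda x: x != "g",
    pick_causes=lambda x, avail: {"r", "t^"} if x == "t" else {"t"})
print(det)   # {'t', 'c'}
print(pa)    # {'t': {'r', 't^', 't(r,t^)'}, 'c': {'t', 'c(t)'}}
```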
To illustrate the algorithm, consider the medical-treatment story as depicted by the influence diagram in Figure 4a, where the variable g (genotype?) is represented explicitly, and where c ↚_t {r, t̂} and g ↚ {r, t̂}. To construct an influence diagram in canonical form for this problem, we first add the variables {r, t̂, g, t, c} to the diagram and choose the ordering (g, t, c). Both t and c are responsive to D = {r, t̂}, and have causes {r, t̂} and {t}, respectively. Consequently, we add the causal mapping variables t(r, t̂) and c(t) to the new diagram, make t a deterministic function of r, t̂, and t(r, t̂), and make c a deterministic function of t and c(t). Finally, we assess the dependencies among the unresponsive variables {g, t(r, t̂), c(t)}, adding arcs from g to t(r, t̂) and c(t) under the assumption that the causal mapping variables are conditionally independent given g. The resulting canonical form influence diagram is shown in Figure 4b.

Canonical form is a generalization of Howard Canonical Form, which was developed by Howard (1990) to facilitate the computation of value of information. Before making important decisions, decision analysts investigate how useful it is to gather additional information. This investigation is typically done by computing the extra value the decision maker would obtain by observing one or more chance variables in the domain earlier. If the decision maker does not expect to observe chance variable x prior to making decision d, the value of information about x is the extra value he would obtain if he were able to observe x just before making decision d. The value of information is never negative, and it serves as a bound on the value of any experiment: it would never be worthwhile to spend more than the value of information about x to obtain any (possibly imperfect) observation about x just before making decision d.

Given an ordinary influence diagram, we cannot compute the value of information about variables responsive to D, because such variables cannot be observed before the decisions D are made. In contrast, we can always compute the value of information about mapping variables corresponding to responsive variables in a canonical form influence diagram, because such variables are unresponsive to D by definition. For example, consider the decision to continue or quit smoking, described by the decision variable s (smoke) and the chance variables l (lung cancer?) and l(s). Although we cannot compute the value of information about l, because it is responsive to D, we can compute the value of information about l(s).

At first glance, it may seem pointless to determine the value of information about a variable that cannot be observed (such as l(s)). Nonetheless, we can often learn something about a mapping variable. For example, imagine a test that measures the susceptibility of someone's lung tissue to lung cancer in the presence of tobacco smoke. Learning the result of such a test may well update our probability distribution over l(s). By computing the value of information of l(s), we obtain an upper bound on the most we would be willing to pay to undergo such a test.
8. Pearl's Causal Framework

We can now demonstrate the relationship between Pearl's causal framework and ours. As mentioned, Pearl's framework is similar to that of SGS (see the background notes in SGS for a discussion). Thus, many of the remarks in this section apply to SGS's model for cause as well. A notable exception is that SGS formally define direct intervention.

The following theorem outlines the relationship.

Theorem 6 Given chance variables U, suppose the set of decision variables D contains a unique atomic intervention x̂ for every x ∈ U and no other decisions. Given graph G, a directed acyclic graph with nodes corresponding to the variables in U, suppose that, for all x ∈ U, Pa_G(x) ∪ {x̂} are causes for x with respect to D. Then, the relationships among the variables in U ∪ D can be expressed by the set of simultaneous equations

    x = f_x(Pa_G(x), x̂, x(Pa_G(x), x̂))  for all x ∈ U,

where f_x is a deterministic function such that x = x′ if x̂ = set(x′).

Proof: The theorem follows by applying Algorithm 1 using an ordering over U consistent with the graph G. ∎

This correspondence permits several clarifications of Pearl's framework. First, we have a precise definition of atomic intervention. Unlike Pearl's model, where the concept of atomic intervention is primitive, our framework provides a way to verify that interventions are indeed atomic.

Second, we see what it means for the random disturbances to be exogenous. Namely, these random variables are unresponsive to the decisions D.

Third, we have a precise definition of random disturbance in terms of the causal mapping variable. Consequently, we have a means for assessing the joint probability distribution of these variables and — in particular — a means for assessing independencies among these variables. In fact, whereas Pearl requires that random disturbances be marginally independent, our definition imposes no such requirement.

Theorem 6 shows that any structural-equation model can be encoded as an influence diagram in canonical form. The converse is also true — that is, any influence diagram in canonical form can be encoded as a structural-equation model. This result may seem surprising, because in Pearl's model every domain variable must have an atomic intervention, all decision variables must be atomic interventions, and random disturbances must be independent. Given an influence diagram in canonical form, however, we can encode its chance and decision variables in a structural-equation model. Specifically, a chance variable x can be encoded as the variable pair {x, x̂} where x̂ is instantiated to idle, and a decision variable d can be encoded as the variable pair {d, d̂} where the act idle is forbidden. In addition, as noted by Pearl, we can remove dependencies among mapping variables (at least in practice) by introducing hidden common causes.[14]

[14] The assumption that the mapping variables are independent has the convenient consequence that the graph G can be interpreted as a Bayesian network in the traditional sense. That is, if variables X and Y are d-separated by Z in G, then X and Y are conditionally independent given Z according to the structural-equation model corresponding to G. (See Pearl, 1988, for a definition of d-separation.) SGS (p. 54) refer to this association as the causal Markov condition.

Nonetheless, because hidden common causes sometimes need to be introduced, Pearl's structural-equation model can be a less efficient representation than canonical form. For example, to represent the relationships in Figure 4b, we would use a structural-equation model with disturbance variables corresponding to g(ĝ), t(r, g, t̂), and c(t, g, ĉ). Assuming r, g, t, and c are binary variables, the disturbance variables have 2, 16, and 16 instances, respectively.[15] Assuming the disturbance variables are independent, the joint probability distribution of these variables contains 31 probabilities. In contrast, both mapping variables in Figure 4b have only four instances. Consequently, the joint probability distribution over the unresponsive variables in the canonical-form representation contains only 13 probabilities.

[15] Note that the mapping variable x(Y, x̂) has the same number of instances as does the mapping variable x(Y).

We note that Balke and Pearl (1994) relax the assumption that mapping variables are independent. Nonetheless, their generalization of the structural-equation model, which they call a functional model, is still less efficient than canonical form. The inefficiency comes from the fact that canonical form encodes a joint probability distribution among all unresponsive variables (possibly including both domain and mapping variables), whereas a functional model encodes a joint probability distribution among mapping variables only. For example, the canonical-form influence diagram in Figure 4b encodes the assertion that t(r, t̂) and c(t) are independent given g. This assertion cannot be encoded in the Balke–Pearl representation. When we represent the relationships in Figure 4b using a functional model, we can include the variable g, in which case we obtain the 31-probability model described in the previous paragraph. Alternatively, we can exclude the variable g from the model and encode the dependency between the mapping variables t(r, t̂) and c(t, ĉ) with an arc between these two variables. The resulting Balke–Pearl model has 15 probabilities, in contrast to the 13 required by canonical form.
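The parameter counts just quoted follow from simple arithmetic, which the sketch below (ours; all variables binary) verifies: a mapping variable X(Y) has |X| raised to the number of joint instances of Y, and by footnote 15 the hatted argument adds no instances.

```python
# Verifying the 31 / 13 / 15 probability counts quoted above (a sketch).
def n_instances(target_card, parent_cards):
    """X(Y) has |X| ** (product of the cardinalities of Y) instances."""
    n = 1
    for c in parent_cards:
        n *= c
    return target_card ** n

# Structural-equation model: disturbances g(g^), t(r,g,t^), c(t,g,c^).
sizes = [n_instances(2, []), n_instances(2, [2, 2]), n_instances(2, [2, 2])]
print(sizes)                             # [2, 16, 16]
print(sum(s - 1 for s in sizes))         # 31 independent probabilities

# Canonical form of Figure 4b: p(g), p(t(r,t^) | g), p(c(t) | g).
t_map, c_map = n_instances(2, [2]), n_instances(2, [2])    # 4 and 4 instances
print((2 - 1) + 2 * (t_map - 1) + 2 * (c_map - 1))         # 13 probabilities

# Balke-Pearl model without g: p(t(r,t^)) and p(c(t,c^) | t(r,t^)).
print((t_map - 1) + t_map * (c_map - 1))                   # 15 probabilities
```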
9. Counterfactual Reasoning

As we have noted, the ordinary influence diagram is adequate for making decisions under uncertainty, but is inadequate for counterfactual reasoning. In this section, we examine this form of reasoning and suggest how it can be facilitated by influence diagrams in canonical form.

Given a domain described by U and D with X, Y, Z ⊆ U, counterfactual reasoning addresses questions of the form: if we choose D = D₁ and observe X = x, what is the probability that Y = y if we choose D = D₂ and observe Z = z? For example, in the medical-treatment domain, we may wish to know: if we recommend the treatment and the patient takes the drug and is cured, what is the probability that the patient will be cured if we recommend against the treatment? Such reasoning is often important in the real world — for example, in legal argument (Ginsberg, 1986; Balke & Pearl, 1994; Goldszmidt & Darwiche, 1994; Heckerman et al., 1994).

We can answer such queries using influence diagrams in canonical form. To illustrate this approach, consider the medical-treatment question in the previous paragraph. To answer this query, we begin with the influence diagram in canonical form shown in Figure 4b. Then, we duplicate all decision variables and all chance variables that are responsive to the decisions, as shown in Figure 5. The original variables represent the act (r = take, t̂ = idle) and its consequences. The duplicate variables (denoted with primes) represent the act (r′ = don't take, t̂′ = idle) and its consequences. There is no need to duplicate the unresponsive variables (including the causal mapping variables) because, by definition, they cannot be affected by the decisions. Next, we copy the deterministic function associated with each original variable to its primed counterpart. Then, we instantiate the decision and chance variables as described in the query (r = take, t̂ = idle, t = taken, c = cured, r′ = don't take, and t̂′ = idle). Finally, we use a standard Bayesian-network inference method to compute the probability of the variable(s) of interest (c′ in our example).
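The inference step can be carried out by brute-force enumeration over the shared unresponsive variables. The sketch below follows the duplication procedure just described, but the prior over {g, t(r), c(t)} is invented for illustration and the atomic intervention t̂ is suppressed (held at idle throughout) — so only the structure, not the numbers, comes from the text.

```python
# A sketch of the twin-network computation for the query above.
from itertools import product

t_maps = list(product(["yes", "no"], repeat=2))     # t under (take, don't take)
c_maps = list(product(["cured", "not"], repeat=2))  # c when t is (yes, no)

def p_tm(tm, g):   # illustrative p(t(r) | g); each row sums to one
    return (0.7 if tm == ("yes", "no") else 0.1) if g == 1 else 0.25

def p_cm(cm, g):   # illustrative p(c(t) | g)
    return (0.7 if cm == ("cured", "not") else 0.1) if g == 1 else 0.25

def prior(g, tm, cm):
    return 0.5 * p_tm(tm, g) * p_cm(cm, g)          # p(g) p(t(r)|g) p(c(t)|g)

def t_of(act, tm): return tm[0] if act == "take" else tm[1]
def c_of(t_val, cm): return cm[0] if t_val == "yes" else cm[1]

# Factual branch: r = take, observe t = yes and c = cured.
# Twin branch:    r' = don't take; query c'.
num = den = 0.0
for g, tm, cm in product([0, 1], t_maps, c_maps):
    w = prior(g, tm, cm)
    if t_of("take", tm) == "yes" and c_of("yes", cm) == "cured":
        den += w
        if c_of(t_of("don't take", tm), cm) == "cured":
            num += w
print("P(c' = cured | evidence, r' = don't take) =", num / den)
```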
The canonical form influence diagram is a natural representation for counterfactual reasoning for two reasons. One, the deterministic relationship between a responsive chance variable and its parents remains the same for any choice of D. Two, the instances assumed by unresponsive variables are unaltered by the decisions. The ordinary influence diagram offers neither of these guarantees.

Our approach, described by Heckerman and Shachter (1994), is similar to that of Balke and Pearl (1994). The main difference between the two approaches is that Balke and Pearl use their functional model as the base representation, making their approach less efficient than ours. Goldszmidt and Darwiche (1994) describe a graphical language for modeling the evolution of real-world systems over time. Although their approach does not explicitly address counterfactual reasoning, it can be adapted to do so, yielding an alternative to our approach.

10. Conclusions

We have presented a definition of cause and effect in terms of the decision-theoretic primitives of act, state of the world, and consequence determined by act and state of the world, and have shown how this definition provides a foundation for causal reasoning. Our definition departs from the traditional view of causation in that our causal assertions are made relative to a set of decisions. Consequently, as we have argued, our definition allows for a more precise specification of causal relationships.

In addition, we have shown how our definition provides a basis for the graphical representation of cause. We have described a special class of influence diagrams, those in canonical form, and have shown that it is equally expressive and more efficient than Pearl's structural-equation model. Finally, we have shown how influence diagrams in canonical form, unlike ordinary influence diagrams, can be used for counterfactual reasoning.

Acknowledgments

We thank Jack Breese, Tom Chavez, Max Chickering, Eric Horvitz, Ron Howard, Christopher Meek, Judea Pearl, Mark Peot, Glenn Shafer, Peter Spirtes, Patrick Suppes, and anonymous reviewers for useful comments.
References

Angrist, J., Imbens, G., & Rubin, D. (1995). Identification of causal effects using instrumental variables. Journal of the American Statistical Association.
Balke, A., & Pearl, J. (1994). Probabilistic evaluation of counterfactual queries. Morgan Kaufmann.
de Finetti, B. (1937). La prévision: ses lois logiques, ses sources subjectives. Annales de l'Institut Henri Poincaré.
Ginsberg, M. (1986). Counterfactuals. Artificial Intelligence.
Goldszmidt, M., & Darwiche, A. (1994). Action networks: A framework for reasoning about actions and change under uncertainty. Morgan Kaufmann.
Heckerman, D. (1995). A Bayesian approach for learning causal networks. Morgan Kaufmann.
Heckerman, D., Breese, J., & Rommelse, K. (1994). Sequential troubleshooting under uncertainty.
Heckerman, D., & Shachter, R. (1994). A decision-based view of causality. Morgan Kaufmann.
Holland, P. (1986). Statistics and causal inference. Journal of the American Statistical Association.
Howard, R. (1990). From influence to relevance to knowledge. Wiley and Sons.
Howard, R., & Matheson, J. (1981). Influence diagrams. Strategic Decisions Group.
Lewis, D. (1973). Counterfactuals. Harvard University Press.
Neyman, J. (1923). On the application of probability theory to agricultural experiments. Translated in Statistical Science.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.
Pearl, J. (1993). Comment: Graphical models, causality, and intervention. Statistical Science.
Pearl, J. (1995). Causal diagrams for empirical research. Biometrika.
Pearl, J., & Verma, T. (1991). A theory of inferred causation. Morgan Kaufmann.
Robins, J. (1986). A new approach to causal inference in mortality studies with sustained exposure results. Mathematical Modelling.
Rubin, D. (1978). Bayesian inference for causal effects: The role of randomization. Annals of Statistics.
Simon, H. (1977). Models of Discovery and Other Topics in the Methods of Science. D. Reidel.
Spirtes, P., Glymour, C., & Scheines, R. (1993). Causation, Prediction, and Search. Springer-Verlag.
Decision-Theoretic Foundations for Causal Reasoning
Abstract: We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the traditional view of causation in that causal assertions may vary with the set of decisions available. We argue that this approach provides added clarity to the notion of cause. Also in this paper, we examine the encoding of causal relationships in directed acyclic graphs. We describe a special class of influence diagrams, those in canonical form, and show its relationship to Pearl's representation of cause and effect. Finally, we show how canonical form facilitates counterfactual reasoning.

[Table 3, fragment: states of the world for the medical-treatment story, each pairing an instance of t(r) with an instance of c(t) — 1: complier, helped; 2: complier, hurt; 3: complier, always cured; 4: complier, never cured; 5: defier, helped; ...]
David Heckerman; Ross Shachter
Figure 1: Influence diagrams for (a) the omelet story, and (b, c) the medical-treatment example.
Figure 2: Influence diagrams for betting on a coin flip.
Figure 3: (a) An influence diagram for the medical-treatment story. (b) A corresponding influence diagram in canonical form.
Figure 4: (a) Another influence diagram for the medical-treatment story. (b) A corresponding influence diagram in canonical form.
Figure 5: The use of canonical form to compute a counterfactual query. Shaded variables are instantiated.

Table 2: The four possible states of the world for a decision to continue or quit smoking.

    state of the world:   1           2           3           4
    quit:                 no cancer   no cancer   cancer      cancer
    continue:             no cancer   cancer      no cancer   cancer

[Footnote 3] When acts and consequences are continuous, the specification of S is more complicated. In this paper, we address only situations where acts and consequences are discrete.

Table 4: The mapping variable t(r).

    instance of t(r)    r = take   r = don't take
    1: complier         t = yes    t = no
    2: defier           t = no     t = yes
    3: always taker     t = yes    t = yes
    4: never taker      t = no     t = no
1. Introduction

The traditional form of representing knowledge in AI is through logical formulas (McCarthy, 1958; McCarthy & Hayes, 1969), where all the logical conclusions of a given formula are assumed to be accessible to an agent. Recently, an alternative way of capturing such information has been developed (Kautz, Kearns, & Selman, 1995; Khardon & Roth, 1994). Instead of using a logical formula, the knowledge representation is composed of a particular subset of its models, the set of characteristic models. This set retains all the information about the formula, and is useful for various reasoning tasks. In particular, using model evaluation with the set of characteristic models, one can deduce whether another formula, a query presented to an agent, is implied by the knowledge or not. While characteristic models exist for arbitrary propositional formulas, in this paper we limit our attention to logical formulas which are in Horn form and to their representation as characteristic models.

The characteristic models of Horn formulas have been shown to be useful. There is a linear time deduction algorithm using this set, and abduction can be performed in polynomial time, while using formulas it is NP-hard (Kautz et al., 1995). Furthermore, an algorithm for default reasoning using characteristic models has been developed, for cases where formula-based algorithms are not known (Khardon & Roth, 1995). Hence, the question arises whether one can efficiently translate a Horn formula into its set of characteristic models and then use this set for the reasoning task. We denote this translation problem by CCM (for Computing Characteristic Models).

On the other hand, given a set of assignments, it might be desirable to find the underlying structure behind this set of models. This is the case when one is trying to learn the structure of the world using a set of examples. This problem has been studied before under the name Structure Identification (Dechter & Pearl, 1992; Kautz et al., 1995; Kavvadias, Papadimitriou, & Sideri, 1993). Technically, the problem seeks an efficient translation from a set of characteristic models into a Horn expression that explains it. We denote this translation problem by SID (for Structure Identification).

Interestingly, the same constructs appear in the theory of relational databases. As shown in a companion paper (Khardon, Mannila, & Roth, 1995), there is a correspondence between Horn expressions and functional dependencies, and a correspondence between characteristic models and an Armstrong relation. The equivalent question of translating between functional dependencies and Armstrong relations has been studied before (Beeri, Dowd, Fagin, & Statman, 1984; Mannila & Raiha, 1986; Eiter & Gottlob, 1991; Gottlob & Libkin, 1990) and is relevant for the design of relational databases (Mannila & Raiha, 1986). While this paper does not discuss the problems in the database domain, some of the results presented here can be alternatively derived from previous results in database theory using the above-mentioned equivalence. (We identify those precisely, later on.) However, this paper makes these results more accessible without resorting to any results in database theory, and with simpler proofs.
On the other hand, some new results are presented, which resolve a question that was open both in AI and in the database domain.

1.1 An Example

Let us introduce the problems in question through an example. Suppose the world has 4 attributes denoted a, b, c, d, each taking a value in {0, 1} to denote whether it is "on" or "off", and our knowledge is given by the following constraints: W = (bc → d)(cd → b)(bc → a). Then W is a Horn expression, and it is normally used to decide whether certain constraints are implied by it or not. For example, W ⊨ (cd → a) and W ⊭ (bd → a), where the symbol ⊨ stands for implication. This is normally performed by deriving a proof for the constraint in question; if no such proof exists, then implication does not hold. In our example we would notice that (cd → b), and therefore (cd → bc → a). As for (bd → a), we would fail to find a proof and therefore conclude that it is not implied by W. This general approach is called theorem proving, and is efficient for Horn expressions (Dowling & Gallier, 1984).

An alternative approach is to check the implication relation by model checking. Implication is defined as follows: W ⊨ α if every model of W is also a model of α (where x ∈ {0,1}^n is a model of an expression f if f is evaluated to "truth" on x). So to decide whether W ⊨ α we can simply use all the models of W, and check, one by one, whether any of them does not satisfy α. In our example W has 11 models:

    models(W) = {0000, 0001, 0010, 0100, 0101, 1000, 1001, 1010, 1100, 1101, 1111}

(where the assignments denote the values assigned to abcd correspondingly), and we would have to test α on every one of them. Unfortunately, in general the number of models may be very large, exponential in the number of variables, and therefore this procedure will not be efficient.

The question arises, therefore, whether there is a small subset of models which still guarantees correct results when used with the model checking procedure. Such a subset is called the set of characteristic models of W, and its existence has been proved (Kautz et al., 1995; Khardon & Roth, 1994). In our example this set is:

    char(W) = {0010, 0101, 1001, 1010, 1100, 1101, 1111},

so it includes 7 out of the 11 models of W. Model checking with this set is guaranteed to produce correct results for any α which is a Horn expression, and using a slightly more complicated algorithm one can answer correctly for every α (Kautz et al., 1995). In our example, it is easy to check that (cd → a) is evaluated to "truth" on all the assignments in char(W) and that (bd → a) is falsified by 0101.

The utility of these representations, Horn expressions and characteristic models, is not comparable: each representation has its advantages over the other. First, the sizes of these representations are incomparable. There are short Horn expressions for which the set of characteristic models is of exponential size and, vice versa, there are also exponential size Horn expressions for which the set of characteristic models is small (Kautz et al., 1995). The representations also differ in the services which they support. On one hand, Horn expressions are more comprehensible. On the other hand, characteristic models are advantageous in that they allow for efficient algorithms for abduction and default reasoning.
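The whole worked example can be reproduced in a few lines. The sketch below is ours (the clause encoding as body/head pairs is an assumption): it enumerates models(W), extracts the characteristic models as the models that are not intersections of other models, and runs the two entailment queries by model checking on char(W).

```python
# The worked example above, end to end (a sketch).
from itertools import product

VARS = "abcd"
W = [({"b", "c"}, "d"), ({"c", "d"}, "b"), ({"b", "c"}, "a")]  # (body, head) pairs

def satisfies(m, clauses):
    return all(not body <= m or head in m for body, head in clauses)

models = [frozenset(v for v, bit in zip(VARS, bits) if bit)
          for bits in product([1, 0], repeat=4)
          if satisfies(frozenset(v for v, bit in zip(VARS, bits) if bit), W)]
print(len(models))   # 11 models, as listed in the text

def char(ms):
    """Models of ms that are not in the intersection-closure of the others."""
    out = []
    for m in ms:
        closure = {o for o in ms if o != m}
        grew = True
        while grew:
            grew = False
            for a in list(closure):
                for b in list(closure):
                    if (a & b) not in closure:
                        closure.add(a & b); grew = True
        if m not in closure:
            out.append(m)
    return out

cm = char(models)
print(len(cm))       # 7 characteristic models

# Deduction by model checking on char(W):
print(all(satisfies(m, [({"c", "d"}, "a")]) for m in cm))   # True:  W |= cd -> a
print(all(satisfies(m, [({"b", "d"}, "a")]) for m in cm))   # False: W =/= bd -> a
```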
In this paper we are asking how hard it is to translate between these representations, so as to enjoy the benefits of both.

1.2 Overview of the Paper

In this paper we study the complexity of the translation problems CCM and SID. For these problems, the output may be exponentially larger than the input. Therefore, it is appropriate to ask whether there are algorithms which can perform the above tasks in time which is polynomial in both the input size and the output size. These are called output polynomial algorithms.

Before starting our investigation we note that it has been shown (Kautz et al., 1995) that using the set of characteristic models one can answer abduction queries related to H in polynomial time, while given the formula H it is NP-hard to perform abduction (Selman & Levesque, 1990). This however does not imply that computing the set of characteristic models is NP-hard, since the construction in the proof yields a Horn formula whose set of characteristic models is of exponential size.

Our main result says that CCM and SID are equivalent to each other, and are also equivalent to the corresponding decision problem. The problem of Characteristic Models Identification (CMI) is the problem of deciding, given a Horn expression H and a set of models G, whether G = char(H). We show that CCM, SID, and CMI are equivalent under polynomial reductions. Namely, the translation problems are solvable in polynomial time if and only if the decision problem is solvable in polynomial time. These are new results which have immediate corollaries in the database domain.

We then show a close relationship between these problems and the Hypergraph Transversal Problem (HTR). Given a hypergraph G, a transversal of its edges is a set of nodes which touches every edge in the graph. In the HTR problem one is given a hypergraph as an input, and is required to compute the set of minimal transversals of its edges.

The HTR problem has many equivalent manifestations which appear in various branches of computer science. Examples in AI include computing abductive diagnoses (Reiter, 1987), enumerating prime implicants in ATMS (Reiter & de Kleer, 1987), and Horn approximations (Kavvadias et al., 1993), which are closely related to characteristic models. Other areas include database theory (Mannila & Raiha, 1986), Boolean complexity, and distributed systems (Eiter & Gottlob, 1991). A comprehensive study of these problems is presented by Eiter and Gottlob (1994). HTR is also equivalent to the problem of dualization of monotone Boolean expressions, which is the form in which we present it here. This problem requires translation between the CNF and DNF representations of monotone functions.

The complexity of the HTR problem has been studied before (Fredman & Khachiyan, 1994; Eiter & Gottlob, 1994; Kavvadias et al., 1993) and is still an open question. On one hand, a class of problems which are "HTR complete" has been defined and studied (Eiter & Gottlob, 1994). This class includes many problems from various application areas which are equivalent to HTR (under polynomial reductions). On the other hand, the problem is probably not NP-complete.
Recently, Fredman and Khachiyan (1994) have presented a sub-exponential n^{O(log n)} time algorithm for the HTR problem.

We first show that the problem CCM is at least as hard as HTR. By that we mean that if there is an output polynomial algorithm for CCM then there is an output polynomial algorithm for HTR. This has been stated as an open problem by Kavvadias et al. (1993), who proved a similar hardness result for SID. Both hardness results can be alternatively derived by combining previous results in database theory (Eiter & Gottlob, 1994; Bioch & Ibaraki, 1993) and its relation to our problems (Khardon et al., 1995).

We then consider two relaxations of these translation problems. The first is considering redundant Horn expressions which contain all the Horn prime implicates of a given expression. The output of SID is therefore altered to be the set of all prime implicates, and similarly the input of CCM includes all the prime implicates instead of a minimal subset. It is shown that in this special case, SID, CCM, and HTR are equivalent under polynomial reductions. Therefore, the algorithm presented by Fredman and Khachiyan (1994) can be used to solve CCM and SID in time n^{O(log n)}. This result can be alternatively derived from the results on functional dependencies in MAK form (Eiter & Gottlob, 1991). We show, however, that our argument generalizes to the larger family of k-quasi Horn expressions.

The second relaxation is the problem of computing all the prime implicants of a given Horn expression. This is a relaxation of CCM since, using the prime implicants, one can compute the characteristic models. Interestingly, the algorithm for HTR (Fredman & Khachiyan, 1994) can be adapted to this problem, resulting in an algorithm with time complexity n^{O(log² n)}.

It is shown, however, that both relaxations do not help in solving the general cases of CCM and SID, due to exponential gaps in the size of the corresponding representations.

Lastly, we consider a related problem, denoted EOC, which is a minor modification of CCM and SID. This problem is shown to be co-NP-complete. This serves to highlight some of the difficulty in finding the exact complexity of our problems. A variant of this result has already appeared in the database literature (Gottlob & Libkin, 1990). Our results are summarized in Figure 1, where a hierarchy of problems is depicted: the problem EOC is co-NP-complete; the problem CMI is a special case of EOC, and is equivalent to SID and CCM; the problem HTR is a special case of CMI, and is equivalent to SID and CCM under the restriction that the Horn expression is represented by the set of all prime implicates.

The rest of the paper is organized as follows. Section 2 defines characteristic models, describes some of their properties, and formally defines the problems in question. Section 3 discusses the relation between CCM, SID, and the corresponding decision problem. Section 4 discusses the relation to the HTR problem; we first establish the hardness result, and then consider the two relaxations mentioned above. Section 5 shows that EOC is co-NP-hard, and Section 6 concludes with a summary.

2. Preliminaries

This section includes the basic definitions, and introduces several previous results which are used in the paper.

We consider Boolean functions f : {0,1}^n → {0,1}. The elements in the set {x_1, ..., x_n} are called variables.
Assignments in f0; 1g n are denoted by x; y; z, and weight(x) denotes the number of 1 bits in the assignment x. A literal is either a variable x i (called a positive literal) or its negation x i (a negative literal). A clause is a disjunction of literals, and a CNF formula is a conjunction of clauses. For example (x 1 _x 2 ) ^(x 3 _ x 1 _ x 4 ) is a CNF formula with two clauses. A term is a conjunction of literals, and a DNF formula is a disjunction of terms. For example (x 1 ^x2 ) _ (x 3 ^x1 ^x4 ) is a DNF formula with two terms. A CNF formula is Horn if every clause in it has at most one positive literal. A formula is monotone if all the literals that appear in it are positive. The size of CNF and DNF representations is, respectively, the number of clauses and the number of terms in the representation. We denote by jDNF(f)j the size of the smallest DNF representation for f.\nAn assignment x 2 f0; 1g n satis es f if f(x) = 1. Such an assignment x is also called a model of f. By \\f implies g\", denoted f j = g, we mean that every model of f is also a model of g. Throughout the paper, when no confusion can arise, we identify a Boolean function f with the set of its models, namely f 1 (1). Observe that the connective \\implies\" (j =) used between Boolean functions is equivalent to the connective \\subset or equal\" ( ) used for subsets of f0; 1g n . That is, f j = g if and only if f g.\nA term t is an implicant of a function f, if t j = f. A term t is a prime implicant of a function f, if t is an implicant of f and the conjunction of any proper subset of the literals in t is not an implicant of f.\nA clause d is an implicate of a function f, if f j = d. A clause d is a prime implicate of a function f, if d is an implicate of f and the disjunction of any proper subset of the literals in d is not an implicate of f.\nIt is well known that, a minimal DNF representation of f is a disjunction of some of its prime implicants. A minimal CNF representation of f is a conjunction of some of its prime implicates.\nIf f is monotone, then it has a unique minimal DNF representation (using all the prime implicants), and a unique minimal CNF representation (using all its prime implicates)." }, { "figure_ref": [], "heading": "Characteristic Models", "publication_ref": [ "b13", "b5", "b14", "b16", "b2", "b19", "b11", "b7", "b22", "b12", "b12", "b5", "b13", "b5" ], "table_ref": [], "text": "The idea of using characteristic models as a knowledge representation was introduced by Kautz et. al. (1995). Characteristic models were studied in AI (Dechter & Pearl, 1992;Kavvadias et al., 1993;Khardon & Roth, 1994) and under a di erent manifestation in database theory (Beeri et al., 1984;Mannila & Raiha, 1986;Gottlob & Libkin, 1990;Eiter & Gottlob, 1991, 1994). This section de nes characteristic models and their basic properties.\nFor u; v 2 f0; 1g n , we de ne the intersection of u and v to be the assignment z 2 f0; 1g n such that z i = 1 if and only if u i = 1 and v i = 1 (i.e., the bitwise logical-and of u and v.).\nFor a set of assignments S, x = intersect(S) is the assignment we get by intersecting all the assignments in S. We say that S is redundant if there exists x 2 S and S 0 S such that x 6 2 S 0 and x = intersect(S 0 ). Otherwise S is non-redundant.\nThe closure of S f0; 1g n , denoted closure(S), is de ned as the smallest set containing S that is closed under intersection.\nTo illustrate these de nitions consider the set M = f1101; 1110; 0101g. 
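These operations are directly executable. The short Python sketch below is ours, not the paper's (all names are illustrative); it implements intersection, closure, and the redundancy test exactly as defined above, and its output reproduces the values stated next for M.

```python
# Assignments are fixed-length 0/1 strings; a minimal sketch of
# intersection, closure, and redundancy as defined above.
from itertools import combinations

def intersect(assignments):
    """Bitwise AND of a non-empty collection of equal-length 0/1 strings."""
    assignments = list(assignments)
    return "".join("1" if all(a[i] == "1" for a in assignments) else "0"
                   for i in range(len(assignments[0])))

def closure(S):
    """Smallest superset of S closed under intersection."""
    closed, changed = set(S), True
    while changed:
        changed = False
        for u, v in combinations(sorted(closed), 2):
            w = intersect((u, v))
            if w not in closed:
                closed.add(w)
                changed = True
    return closed

def is_redundant(S):
    """True if some x in S is an intersection of other members of S."""
    return any(x in closure(set(S) - {x}) for x in S)

M = {"1101", "1110", "0101"}
print(intersect(M), sorted(closure(M)), is_redundant(M))
# -> 0100 ['0100', '0101', '1100', '1101', '1110'] False
```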
Then M is non-redundant, intersect(M) = 0100, and closure(M) = f1101; 1110; 0101; 0100; 1100g.\nLet H be a Horn expression. The set of the Horn characteristic models of H, denoted here char(H) is de ned as the set of models of H that are not the intersection of other models of H. Note that char(H) is non-redundant. Formally, char(H) = fu 2 H j u 6 2 closure(H n fug) g:\n(1) For example, char(f1101; 1110; 0101; 0100g) = f1101; 1110; 0101g.\nIt is well known that the set of models of Horn expressions is closed under intersection. This result is due to McKinsey (1943), who proved it for a certain class of rst order sentences. Alfred Horn (1951) considered a more general class of sentences. (Lemma 7 by Horn (1951) deals with the propositional case. Dechter and Pearl (1992) present another proof for the propositional case.) Moreover, since characteristic models capture all the information about the closure, they also capture all the information about the Horn expression.\nTheorem 1 (Kautz et al., 1995;Dechter & Pearl, 1992) Let H be a Horn expression then H = closure(char(H))." }, { "figure_ref": [ "fig_1" ], "heading": "Monotone Theory and Characteristic Models", "publication_ref": [ "b4", "b16", "b16", "b13", "b16", "b16", "b25", "b16", "b5", "b13", "b16" ], "table_ref": [], "text": "The monotone theory was introduced by Bshouty (1993), and was later used for a theory for model-based reasoning (Khardon & Roth, 1994). This section explores the relations between the monotone theory and characteristic models.\nDe nition 1 (Order) We denote by the usual partial order on the lattice f0; 1g n , the one induced by the order 0 < 1. That is, for x; y 2 f0; 1g n , x y if and only if 8i; x i y i . For an assignment b 2 f0; 1g n we de ne x b y if and only if x b y b (Here is the bitwise addition modulo 2). We say that x > y if and only if x y and x 6 = y.\nIntuitively, if b i = 0 then the order relation on the ith bit is the normal order; if b i = 1, the order relation is reversed and we have that 1 < b i 0. For example 0101 < 1111 0100, and 0101 6 < 1111 0110. We now de ne: The monotone extension of z 2 f0; 1g n with respect to b: M b (z) = fx j x b zg:\nThe monotone extension of f with respect to b: M b (f) = fx j x b z; for some z 2 fg:\nThe set of minimal assignments of f with respect to b: min b (f) = fz j z 2 f; such that 8y 2 f; z 6 > b yg:\nFor example M 1111 (0101) = f0101; 0001; 0100; 0000g; and M 1111 (1100) = f1100; 0100; 1000; 0000g: Let f = bc(a _ d)(a _ d), then in the set notation f = f1100; 0101g, and M 1111 (f) = f0101; 0001; 0100; 0000; 1100; 1000g. The set min 1111 (f) = f1100; 0101g, and the set min 0001 (f) = f0101g.\nClearly, for every assignment b 2 f0; 1g n , f M b (f). Moreover, if b 6 2 f, then b 6 2 M b (f) (since b is the smallest assignment with respect to the order b ). Therefore:\nf = b2f0;1g n M b (f) = b6 2f M b (f):\nThe question is if we can nd a small set of negative examples, and use it to represent f as above. De nition 2 (Basis) A set B is a basis for f if f = V b2B M b (f). B is a basis for a class of functions F if it is a basis for all the functions in F.\nUsing this de nition, we get an alternative representation for functions\nf = b 2B M b (f) = b 2B _ z2min b (f) M b (z):\n(2)\nIt is known that the set B H = fu 2 f0; 1g n j weight(u) n 1g, is a basis for any Horn CNF function. For example consider the Horn expression W = (bc ! d)(cd ! b)(bc ! a) discussed in the introduction. 
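Before tracing the computation by hand, note that min_b can be read straight off a model set. The sketch below (names are ours; it assumes the models are given explicitly, which the paper of course avoids for efficiency) applies the definition x <=_b y iff x XOR b <= y XOR b bitwise, and reproduces two of the sets derived next.

```python
# min_b keeps the models that have no strictly <=_b-smaller model.
def leq_b(x, y, b):
    return all((xi != bi) <= (yi != bi)      # bit of x^b  <=  bit of y^b
               for xi, yi, bi in zip(x, y, b))

def min_b(models, b):
    return {z for z in models
            if not any(y != z and leq_b(y, z, b) for y in models)}

models_W = {"0000", "0001", "0010", "0100", "0101", "1000",
            "1001", "1010", "1100", "1101", "1111"}
print(sorted(min_b(models_W, "0111")))   # -> ['0010', '0101', '1111']
print(sorted(min_b(models_W, "1110")))   # -> ['1010', '1100', '1111']
```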
Recall that the satisfying assignments of W are: models(W) = f0000; 0001; 0010; 0100; 0101; 1000; 1001; 1010; 1100; 1101; 1111g: We have to compute the sets min b (W) for b 2 B H , where B H = f1111; 1110; 1101; 1011; 0111g. Note that if b satis es f then min b (f) = fbg, and M b (f) 1 (that is, 8x, M b (f)(x) = 1). Therefore, min 1111 (W) = f1111g, and min 1101 (W) = f1101g. One way to compute the sets of minimal assignments is by drawing the corresponding lattices and noting the relations there. Figure 2 shows the lattice with respect to b = 0111. The satisfying assignments of W are marked in bold face. The minimal assignments are underlined, and some of the order relations, which show that the rest of the assignments are not minimal, are drawn. To compute M b (W) we have to add any assignment which is above the minimal assignments. This is marked by the dotted lines which show that 1011 and 1110 are in M 0111 (W). Using the gure we observe that min 0111 (W) = f1111; 0101; 0010g. The other sets are min 1110 (W) = f1111; 1100; 1010g, and min 1011 (W) = f1111; 1001; 1010g.\nIt is known that the size of the basis for a function f is bounded by the size of its CNF representation, and that for every b the size of min b (f) is bounded by the size of its DNF representation.\nFor any function f and set of assignments B let:\nB f = min B (f) = b2B fz 2 min b (f)g:\nThe following theorem gives an alternative way to de ne char(H).\nTheorem 2 (Khardon & Roth, 1994) Let H be a Horn expression. Then char(H) = B H H .\nContinuing the above example with the function W = (bc ! d)(cd ! b)(bc ! a), we conclude that char(W) = f0010; 0101; 1001; 1010; 1100; 1101; 1111g. As the following theorem shows the set of characteristic models can be used to answer deduction queries.\nTheorem 3 (Kautz et al., 1995;Khardon & Roth, 1994) Let H 1 , H 2 be Horn expressions then H 1 j = H 2 if and only if for all x 2 char(H 1 ), H 2 (x) = 1.\nIt is useful to have the DNF representation of a function. If f is given in its DNF representation then it is easy to compute the set min b (f), for any b. Each term in the DNF representation can contribute at most one assignment, min b (t), where the variables that appear in the term are xed and the others are set to their minimal value. This is true since from every other satisfying assignment of the term we can \\walk down the lattice\" towards this assignment, on a path composed of satisfying assignments. For example, the minimal assignment for the term t = x 1 x 3 , with respect to the basis element b = 0011, is min 0011 (t) = f1001g. The assignment 1100 which also satis es t is not minimal since 1001 < 0011 1101 < 0011 1100. Further, once we have one assignment from each term, it is easy make sure that the set is non-redundant by checking which of the assignments generated is in the intersection of the others. We would use this algorithm later in some of our reductions.\nWe say that a function is b-monotone if it is monotone according to the order relation b . Namely, if whenever f(x) = 1 and y b x then f(y) = 1. Notice that if we rename the variable x i by its negation, for each i such that b i = 1 (i.e. where the order relation is reversed), then f becomes monotone. Therefore, b-monotone functions enjoy similar properties. For example, they have unique minimal DNF and CNF representations. 
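The per-term computation described above is a one-liner once one notices that b itself is the <=_b-minimum, so every bit not fixed by the term simply takes the corresponding bit of b. A sketch with invented names, together with the non-redundancy filter mentioned above:

```python
# A term maps variable indices to required bits, e.g. x1 /\ ~x3 over
# four variables is {0: 1, 2: 0}.
def min_b_of_term(term, b):
    return "".join(str(term.get(i, int(b[i]))) for i in range(len(b)))

def bitwise_and(assignments):
    return "".join("1" if all(a[i] == "1" for a in assignments) else "0"
                   for i in range(len(assignments[0])))

def non_redundant(assignments):
    """Drop every assignment that equals the intersection of the others."""
    keep = set()
    for x in assignments:
        above = [y for y in assignments
                 if y != x and all(xi <= yi for xi, yi in zip(x, y))]
        if not above or bitwise_and(above) != x:
            keep.add(x)
    return keep

print(min_b_of_term({0: 1, 2: 0}, "0011"))              # -> 1001
print(sorted(non_redundant({"1101", "1110", "0101", "0100"})))
# -> ['0101', '1101', '1110']
```

The first line reproduces min_0011(t) = 1001 for t = x1 /\ ~x3 as in the text, and the second reproduces char({1101, 1110, 0101, 0100}) = {1101, 1110, 0101} from the earlier example.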
Another property is that the minimal assignment which corresponds to every term is indeed part of the set min b (f).\nClaim 1 (Khardon & Roth, 1994) For any b-monotone function f, there is a 1-1 correspondence between the prime implicants of f and the set min b (f). Namely:\n(1) for every term t in the minimal DNF representation for f, the assignment min b (t) is in min b (f).\n(2) jmin b (f)j = jDNF(f)j.\nWe would also use the notion of a least upper bound of a Boolean function (Selman & Kautz, 1991), which can sometimes be characterized by the monotone theory.\nDe nition 3 (Least Upper-bound) Let F; G be classes of Boolean functions. Given f 2 F we say that g 2 G is a G-least upper bound of f if and only if f g and there is no f 0 2 G such that f f 0 g. Theorem 4 (Khardon & Roth, 1994) Let f be any Boolean function and G a class of all Boolean functions with basis B. Then, f B lub de ned as\nf B lub = b 2B M b (f)\nis the G-least upper bound of f.\nFor the class of Horn expressions we have two ways to express the least upper bound.\nOne using the monotone theory, and one using the closure operator:\nTheorem 5 (Dechter & Pearl, 1992;Kautz et al., 1995;Khardon & Roth, 1994) Let f : f0; 1g n ! f0; 1g be a Boolean function. Then f B H lub = closure(f), and char(f B H lub )\nf. For example consider the function f\n= (bc ! d)(cd ! b)(bc ! a)(a _ b _ c _ d).\nThe function f satis es all the assignments as W above except for 0001. However, intersect(f0101; 1001g) = 0001, and therefore f B H lub = W." }, { "figure_ref": [], "heading": "The Computational Problems", "publication_ref": [], "table_ref": [], "text": "This section includes de nitions for all the problems discussed in this paper. Let H be a CNF expression in Horn form, and let char(H) be its set of characteristic models. " }, { "figure_ref": [], "heading": "Polynomial Time Algorithms and Reductions", "publication_ref": [ "b8", "b10" ], "table_ref": [], "text": "As mentioned above we need to de ne algorithms that are polynomial with respect to their output. There is more than one way to give such a de nition. (A discussion of this issue is given by Eiter and Gottlob (1994).) We use the weakest 1 of those which is called an output polynomial algorithm.\nWhen the output of a problem P is uniquely de ned, we say that an algorithm A is an output polynomial algorithm for P if it solves P correctly in time which is polynomial in the size of its input and output. This is the case with HTR, and CCM.\nWhen the output of a problem P is not uniquely de ned, we consider the shortest permissible output O(I) for input I. We say that an algorithm A is an output polynomial algorithm for P if it solves P correctly in time which is polynomial in the size of its input I and the size of O(I). We note that for SID the output is not uniquely de ned since there is no unique minimal representation for Horn functions.\nWe de ne polynomial reductions with respect to an oracle (i.e. we use Turing reducibility (Garey & Johnson, 1979)). A problem P1 is polynomially reducible to a problem P2 if there is an output polynomial algorithm that solves P1 when given access to (1) an output polynomial subroutine for P2, and (2) a polynomial bound 2 on the running time of the subroutine." }, { "figure_ref": [], "heading": "Translating is Equivalent to Deciding", "publication_ref": [ "b10", "b8", "b8" ], "table_ref": [], "text": "In this section we show that the problems CCM, SID, CMI, and CMIC are equivalent under polynomial reductions. 
Namely, both translation problems are solvable in polynomial time if and only if the corresponding decision problem CMI is solvable in polynomial time.\nTheorem 6 The problems CCM,SID,CMI, and CMIC are equivalent under polynomial reductions.\nProof: The proof is established in a series of lemmas. In particular we show that CMIC CMI SID CMIC, and that CMI CCM CMIC, where denotes \\is polynomially reducible to\", in Lemma 1, Lemma 2, Lemma 3, Lemma 4, and Lemma 5 respectively.\nLemma 1 The problem CMIC is polynomially reducible to the problem CMI.\nBefore presenting the proof consider how a similar result is achieved for the satis ability problem (Garey & Johnson, 1979). Namely, how a decision procedure for satis ability can be used to construct an algorithm that nds a satisfying assignment if one exists. Suppose 1. Other related notions which we do not use here are \\enumeration with polynomial delay\" and \\enumeration with incremental polynomial delay\" (Eiter & Gottlob, 1994). These require that the algorithm will compute the elements of its output one at a time, and restrict the time delay between consecutive outputs. Incremental polynomial delay allows the delay to depend on the problem size and on the number of elements computed so far. Polynomial delay is stricter in that it requires dependence only on the problem size. Both of these notions are stricter than output polynomial algorithms since the latter may wait a long time before computing its rst output. Unfortunately, most of our reductions yield output polynomial algorithms, and we cannot guarantee that the stronger notions hold. 2. That is, a polynomial in the dimension of the problem (the number of variables), the input size, and the output size.\nwe have a formula C, and that we know that it is satis able. (We used the decision procedure to nd that out.) Our task is to nd a satisfying assignment for it. What we do is substitute\nx 1 = 0 into C yielding a formula C 0 with n 1 variables. The formula C 0 is satis able if and only if C has a satisfying assignment in which x i = 0. We run the decision procedure on C 0 . If the answer is Yes then we know that C has a satisfying assignment in which x i = 0. If the answer is No then since C is satis able, it must have a satisfying assignment in which x 1 = 1. In either case we found a substitution for x 1 which guarantees the existence of a satisfying assignment. All we have to do is to recurse with this procedure on C 0 . An example can clarify this a bit more. Suppose we have the expression C = (a_c)(b_c) which is satis able. To nd a satisfying assignment we substitute a = 0 to get C 0 = (c)(b_c), and run the decision procedure on C 0 . The answer is Yes, and therefore we continue with C 0 . We next substitute b = 0 to get C 00 = cc. We run the decision procedure again, and the answer is No. Therefore we conclude that we must substitute b = 1 instead of b = 0. This yields C 01 = c. We then continue to nd that c must be assigned 1 and altogether we nd the satisfying assignment abc = 011.\nWe would like to use the same trick here. However, G is given as a set of models and we cannot perform this substitution procedure as easily3 . Nevertheless, as the proof shows something similar can be done.\nProof: First observe that we have a solver for CMI. Therefore if the answer is Yes we have no problem, we can simply answer Yes. A problem arises in the case where the answer is No. 
In this case CMI is happy with saying No, but CMIC must provide a counter example.\nFormally, we get H; G as input to CMIC and an algorithm A to solve CMI. We run A on H; G as an input, and if A replies Yes we reply Yes. Otherwise we know that there exits an x 2 char(H) n G. We need to nd such a model and return it as the output of CMIC.\nConsider rst the easier task of nding x 2 H nclosure(G); the assignment x is a witness for the fact H 6 j = closure(G).\nRecall the substitution trick from above, and observe that for x i = 1 a similar substitution works. For H we simply perform the substitution to get an expression H, and for G we remove any z 2 G in which z i = 0 to get the set G. We claim that there is a witness for H; G with x i = 1 if and only if there is a witness for H; G. This follows from the fact that x 2 closure(G) and x i = 1 if and only if x 2 closure( G). To see that, let x 2 closure(G), such that x i = 1; if x = intersect(S), and y 2 S then y i = 1, and therefore x 2 closure( G). Also if x 2 closure( G) then x 2 closure(G). Therefore, if there is a witness x with x i = 1 then we can detect this fact by presenting A with H; G as input (on which it will say No). This however does not work for x i = 0. In this case an element in the closure requires at least one element in S with y i = 0, but we have no information on the other elements. Therefore we can not perform the recursion in the case where substitution of x i = 0 is required.\nWe circumvent this problem using the following iterative procedure. In each stage we try to turn one more variable to 1. For all i, we make the experiment described above of substituting x i = 1. If the answer is No, for some i, we can proceed to the next stage, just as before (ignoring tests for other values of i). If the answer is Yes for all i, then we know\nWe next consider the problem CCM:\nLemma 4 The problem CMI is polynomially reducible to the problem CCM. Proof: We are given an output polynomial algorithm C for CCM, and a polynomial bound on its running time (that is, a polynomial in the number of variables n, the input size, and the output size). Given H; G as input to CMI, we run C on H until it stops and outputs G 0 or until it exceeds its time bound (with respect to the size of G). In the rst case we compare G and G 0 and answer accordingly. In the second case we know that the set of characteristic models of H is larger than G and therefore we answer No.\nLemma 5 The problem CCM is polynomially reducible to the problem CMIC.\nProof: Given H as input for CCM, an algorithm for CMIC can be used repeatedly to produce the elements of char(H).\nWe start with G = ;. In each iteration we run CMIC on H; G to get a new characteristic model which we add to G. Once we nd all the characteristic models CMIC will answer Yes.\n(In fact, if CMIC is polynomial in its input size then we get an \\incremental polynomial algorithm\" (Eiter & Gottlob, 1994) which is even stronger than \\output polynomial\" as required here.)" }, { "figure_ref": [], "heading": "The Relation to Hypergraph Transversals", "publication_ref": [], "table_ref": [], "text": "In this section we establish the relation to the hypergraph transversal problem. We rst show that our problems are at least as hard as HTR. We then consider two relaxations of SID and CCM. The rst relaxation considers redundant representation for Horn expressions, which includes all the prime implicates. The second relaxation considers computing prime implicants instead of characteristic models. 
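Before examining the two relaxations, it may help to have a concrete reference point for HTR itself. The brute-force dualizer below is exponential and purely illustrative (the names are ours): it computes the minimal transversals of a monotone CNF, which are exactly the terms of the corresponding monotone DNF.

```python
from itertools import chain, combinations

def minimal_transversals(clauses):
    """All inclusion-minimal variable sets hitting every clause of a
    (non-empty) monotone CNF; clauses are sets of variable names."""
    variables = sorted(set(chain.from_iterable(clauses)))
    hitting = [frozenset(s) for r in range(1, len(variables) + 1)
               for s in combinations(variables, r)
               if all(set(s) & c for c in clauses)]
    return [t for t in hitting if not any(u < t for u in hitting)]

# dualizing (a v b)(b v c) gives b v (a /\ c):
print(minimal_transversals([{"a", "b"}, {"b", "c"}]))
# -> the transversals {b} and {a, c}
```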
Both of these relaxations enjoy sub-exponential algorithms. It is shown, however, that the relaxations do not help in the general case, as a result of exponential gap in the size of the corresponding representations." }, { "figure_ref": [], "heading": "The Reduction to HTR", "publication_ref": [ "b14", "b7", "b0" ], "table_ref": [], "text": "The problem HTR is de ned as computing a DNF representation for a monotone function given in its CNF form. It is easy to observe that this is equivalent to computing a CNF representation for a monotone function given in its DNF form. (We can simply exchange the _ and ^operations to get one problem from the other). We can therefore assume that the input for HTR is given as either a DNF or a CNF. Another useful observation is that renaming the variables does not change the problem. Therefore if we rename every variable as its negation (namely, replace every x i with x i ), we get the equivalent problem of translating between functions which are monotone with respect to the order relation 1 n. We call such functions anti-monotone. This is useful since anti-monotone functions have CNF representations in which all variables are negated, which is a special case of Horn expressions. Having these observations, the next two theorems follow almost immediately from the de nitions, given the correspondence between minimal elements and prime implicants described in Claim 1. The following result has been stated as an open problem by Kavvadias et. al. (1993).\nIn particular we relax the problems so as to use the largest Horn expression for a function instead of using a small Horn expression. In this case the problem SID amounts to computing all the (Horn) prime implicates of the function identi ed by . For CCM we have to compute the set of characteristic models given the set of all prime implicates rather than a small expression.\nWe would use the following example to illustrate the notions in this sub-section. Con-\nsider the function W = (a ! b)(c ! b)(b _ d).\nThe satisfying assignments of W are W = f0000; 0001; 0100; 0110; 1100; 1110g, and the characteristic models are char(W) = f0001; 0110; 1100; 1110g. One can verify that W j = (c _ d)(a _ d), and that these are the only additional Horn prime implicates of W.\nFor CCM, this section asks whether it is easier to compute the characteristic models starting with the equivalent expression W = (a ! b)(c ! b)(b_d)(c_d)(a_d). For SID the question is whether it is easier to output the whole set rather than just a minimal subset. These are relaxations of the problems since, an algorithm for SID is allowed more time to compute its output, and CCM is given more information and more time for its computation.\nLet f be a Horn expression, then using the monotone theory representation (Equation ( 2)) we know that f = ^b2B H M b (f):\n(3)\nRecall that B H = fu 2 f0; 1g n j weight(u) n 1g, and denote by b (i) , 1 i n, the assignment with x i set to zero and all other bits set to 1, and by b (0) the assignment 1 n . In our example b (0) = 1111, and b (1) = 0111. Let D i be the set of clauses that are falsi ed by b (i) , and let G i denote the language of all CNF expressions with clauses from D i . In our example, with four variables a; b; c; d, clauses in D 1 may have b; c; d as negative literals and a as a positive literal. That is, (a _ b) 6 2 D 1 , but (a _ b) 2 D 1 and (b _ c) 2 D 1 .\nTheorem 4 implies that M b (i) (f) is equal to the least upper bound of f in G i . 
Namely, the intersection of all clauses in D i which are implied by f. De ne PI(f; i) to be the set of prime implicates of f with respect to b (i) . Formally: PI(f; i) = fd 2 D i jf j = d and 8d 0 d; f 6 j = d 0 g:\nUsing this notation we get:\nM b (i) (f) = d2PI(f;i) d:\n(4) Going back to the example W, we have:\nPI(W; 0) = (b _ d)(c _ d)(a _ d) PI(W; 1) = (b _ d)(c _ d) PI(W; 2) = (a ! b)(c ! b)(c _ d)(a _ d) PI(W; 3) = (b _ d)(a _ d) PI(W; 4) = true.\nNote that the partition of the prime implicates of f is not disjoint. In particular, the anti-monotone prime implicates (except for x 1 _ x 2 _ : : : _ x n if it is a prime implicate) appear in several PI(f; i) sets. Equation (3) tells us that we can decompose the function into n + 1, b (i) -monotone functions. Equation (4) tells us how to decompose the clauses of the function, and the monotone theory tells us how to decompose the characteristic models. These observations lead to the following theorem:\nTheorem 10 The problem CCM, when the input is given as the set of all Horn prime implicates, is polynomially equivalent to HTR.\nProof: First observe that the reduction in Theorem 8 uses an anti-monotone function, which has a unique Horn representation. Namely the smallest and the largest representations are the same in this case. This implies that the problem remains as hard as HTR in this special case.\nFor the other direction, we rst partition the input into the sets PI(f; i), and then use a procedure for HTR in order to translate each set to a DNF representation. Then using Claim 1 we translate the DNF expression to the set of minimal assignments. The crucial point is that we have DNF representations for the functions M b (i) (f) rather than for f. This implies that each term in these DNF representations is represented as an element in char(f) and therefore the reduction is polynomial. (We may get some of the elements in char(f) more than once, but at most n times, which is still polynomial.)\nIn our example, we get the following DNF expressions and their translation into assignments: Theorem 11 The problem SID, when the output required is all Horn prime implicates, is polynomially equivalent to HTR.\nProof: The proof is similar to the proof of the previous theorem. The hardness follows from Theorem 9.\nFor the other direction, assume we get as input a set , and an algorithm A for HTR. We rst partition into sets i according to minimality with respect to b (i) . (Note that the sets are not disjoint.) Then we use Claim 1 to transform each i into a DNF expression for the function M b (i) (f). For each such DNF expression we run the procedure A to compute its CNF representation. By Equation (3), the intersection, with respect to i, of these CNF expressions is the Horn expression we need.\nIn the example, we simply start with the sets i and use the same equations as above going in the other direction. From the above two theorems we get the following corollary.\nCorollary 1 The problems CCM and SID, when the Horn expression is represented as the set of all Horn prime implicates, are polynomially equivalent, and are polynomially equivalent to HTR.\nThe equivalence of CCM and SID, in this special case, has been observed before in the database domain (Heikki Mannila, private communication). In fact this led us to the results of this section. 
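The clause partition that drives these reductions is mechanical. The sketch below is illustrative (names are ours); it reads the negations lost in extraction back into W, whose third clause is ~b v ~d, and into the two extra prime implicates ~c v ~d and ~a v ~d. Each clause is assigned to the sets D_i whose basis element falsifies it, and the resulting sizes match PI(W,0) through PI(W,4) above.

```python
def basis_element(i, n):
    """b(0) = 1^n; for i >= 1, b(i) has bit i-1 set to 0, the rest to 1."""
    return tuple(0 if j == i - 1 else 1 for j in range(n))

def falsified_by(clause, b):
    """A clause (positives, negatives) is falsified by b iff every
    positive literal sits on a 0-bit and every negative one on a 1-bit."""
    pos, neg = clause
    return all(b[j] == 0 for j in pos) and all(b[j] == 1 for j in neg)

def partition(clauses, n):
    return {i: [c for c in clauses if falsified_by(c, basis_element(i, n))]
            for i in range(n + 1)}

# the five Horn prime implicates of W, with a, b, c, d as indices 0..3:
prime_implicates = [({1}, {0}),       # ~a v b
                    ({1}, {2}),       # ~c v b
                    (set(), {1, 3}),  # ~b v ~d
                    (set(), {2, 3}),  # ~c v ~d
                    (set(), {0, 3})]  # ~a v ~d
for i, cls in partition(prime_implicates, 4).items():
    print(i, len(cls))    # sizes 3, 2, 4, 2, 0: exactly PI(W,0)..PI(W,4)
```

Note that the partition is indeed not disjoint: the purely negative clauses are collected by several basis elements.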
As mentioned above a similar result for relational databases is reported by Eiter and Gottlob (1991) where the restriction is called the MAK form for functional dependencies.\nLifting the Restriction: The polynomial equivalence to the problem HTR, implies the existence of sub-exponential n O(logn) algorithm for these problems which may have some practical implications. However, as the following example shows one cannot apply it to solve the general case of the problem SID. Aizenstein and Pitt (1995) present some functions with interesting properties. These functions can be manipulated to create examples with the following properties: (1) f has a short Horn expression, (2) jchar(f)j is small, (3) the number of Horn \\prime implicates\" is exponential. In particular f = (x 1 _ x 2 _ : : : _ x m ) ^(x1 _ y 1 ) ^(x2 _ y 2 ) ^: : : ^(x m _ y m ) has these properties. The set of prime implicates include all the disjunctions (b 1 _b 2 _: : :_b m ) where b i 2 fx i ; y i g.\nWe show by case analysis that the set of characteristic models is small. Observe that in order to satisfy f, at least one of the x i variables must be assigned 0, and that if x i = 0 then y i must also be assigned 0.\nConsider rst the set min 1 2m (f). Notice that if, for some j, x j = y j = 0 and all the other variables are set to 1, then f is satis ed. This contributes exactly m assignments to min 1 2m (f). For m = 3 and variable ordering x 1 x 2 x 3 y 1 y 2 y 3 , this yields the assignments 011011, 101101, 110110.\nConsider next min b (x i ) (f). Namely, the basis element in which x i = 0. To satisfy f, if x i = 0 then y i must be 0, and as before we can set all other variables to 1. If x i = 1 then there must be another variable x j which is set to 0. In this case y j must also be 0." }, { "figure_ref": [], "heading": "Therefore min b", "publication_ref": [ "b16" ], "table_ref": [], "text": "(x i ) (f) = min 1 2m (f).\nLastly, consider min b (y i ) (f). Namely the basis element in which y i = 0. Observe that f is anti-monotone in y i . Namely, given any satisfying assignment with y i = 1, by ipping y i to 0 we get another satisfying assignment, which is smaller than the original according to b (y i ) . Therefore, we may assume that y i = 0. If x i = 0 then we can set all other variables to 1. If x i = 1 then there must be another variable x j which is set to 0, and therefore also y j = 0. This assignment is 2 bits away from b (y i ) and it is minimal. We get m assignments in this case too. In our example with m = 3, and say i = 2, we get the assignments 101101, 011001, and 110100.\nAltogether we get m assignments from the rst two groups and m(m 1) new assignments from the last and therefore jchar(f)j = m 2 . This means that arbitrary enumeration of the prime implicates, for a given set of models , is not su cient for solving SID.\nA Generalization: While we concentrate in this paper on Horn expressions, we note that the same arguments and proofs hold in the more general case of k-quasi Horn expressions. These are expressions in CNF form where in every clause there are at most k positive literals (so that Horn expressions are 1-quasi Horn expressions). The set B H k = fu 2 f0; 1g n j weight(u) n kg is a basis for k-quasi Horn expressions, and B H k the set of characteristic models for f (Khardon & Roth, 1994). The generalized versions of CCM and SID, when restricted to hold all prime implicates are still equivalent to HTR." 
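Returning to the example above, the count |char(f)| = m^2 is easy to confirm mechanically. The brute-force check below is illustrative only; it reads the lost overbars back into the function as f = (~x1 v ... v ~xm) /\ (x1 v ~y1) /\ ... /\ (xm v ~ym), and keeps exactly the models that differ from the intersection of the models above them.

```python
from itertools import product

def models(m):
    """Models of f: some x_i is 0, and y_i = 1 forces x_i = 1."""
    out = []
    for bits in product((0, 1), repeat=2 * m):
        x, y = bits[:m], bits[m:]
        if 0 in x and all(xi or not yi for xi, yi in zip(x, y)):
            out.append(bits)
    return out

def characteristic(models_list):
    """u is characteristic iff it differs from the bitwise AND of the
    models strictly above it (or nothing lies above it at all)."""
    result = []
    for u in models_list:
        above = [v for v in models_list if v != u and
                 all(ui <= vi for ui, vi in zip(u, v))]
        if not above or tuple(min(col) for col in zip(*above)) != u:
            result.append(u)
    return result

for m in (2, 3, 4):
    print(m, len(characteristic(models(m))))   # -> 2 4, 3 9, 4 16
```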
}, { "figure_ref": [], "heading": "Enumerating Prime Implicants", "publication_ref": [ "b9", "b9", "b9", "b16", "b0", "b0" ], "table_ref": [], "text": "As mentioned above, given a DNF representation for f we can easily compute the set of characteristic models. One might therefore try to solve CCM by rst translating the Horn expression into a DNF expression and then computing the characteristic models from this set. Another possible relaxation is to rst compute all the prime implicants of the function and then to extract a DNF representation from it. We consider this problem here. Namely, we consider the problem of enumerating all the prime implicants of a Horn expression, and its application for the solution of CCM.\nWhile we have not found a general reduction from this problem to HTR, a simple adaption of the algorithm for HTR (Fredman & Khachiyan, 1994) yields an incremental n O(log 2 n) algorithm for this problem. However, as we discuss below, enumeration of prime implicants of a Horn expression is not su cient for solving CCM. The problem in such an application is an exponential gap in the sizes of these representations.\nFor completeness we sketch the main ideas of the enumeration algorithm here. Let H be a Horn expression, and let D be the DNF expression composed of the prime implicants enumerated so far. The algorithm nds an assignment x which satis es H and does not satisfy D. Using x it is easy to nd a new prime implicant of H. The algorithm to nd x uses the following combinatorial fact (Fredman & Khachiyan, 1994): either there is a variable x i that appears in H ^D with high frequency, or the expression H ^D has \\a lot\" of satisfying assignments. In the rst case, one can recursively solve two sub-problems arrived at by substituting x i = 0, and x i = 1 in the expressions H and D. In the second case it is easy to nd an assignment x (e.g. by sampling). The solution of the recursion yields the stated time bound. For complete details we refer the reader to the article by Fredman and Khachiyan (1994). While the analysis there is specialized for monotone functions it is easy to extend (the rst part of) it for Horn expressions 4 .\nLifting the Restriction: Denote by #PIs(f) the number of prime implicants of f. While the representations (1) Prime Implicants (PIs), (2) DNF representation, and (3) Characteristic models, satisfy the inequalities #PIs(f) jDNF(f)j jchar(f)j=n, each of the inequalities may allow for an exponential gap. The function f 1 = (x 1 _ x 2 : : : _ x p n 1 _ x p n ) ^: : : ^(x n p n+1 _ x n p n+2 _ : : : _ x n 1 _ x n ) (Khardon & Roth, 1994) shows a gap between (2) and (3). The function f 2 = x 1 x 2 : : :x m _ x 1 y 1 _ x 2 y 2 _ : : : _ x m y m (Aizenstein & Pitt, 1995) shows a gap between (1) and ( 2). (To observe that, notice the similarity between f 2 and the dual of the function from the previous sub-section.) Both functions are Horn (for f 2 by multiplying out we see that every clause for f is Horn,4. One caveat that we have to tackle is enumerating prime implicants after D is already equivalent to H.\nThis can be done using \\consensus\" operations, which can generate all the prime implicants (Aizenstein & Pitt, 1995) although its Horn expression is large) and both have a small set of characteristic models. These examples show that enumeration of prime implicants may be an ine cient way for producing the characteristic models for some functions." 
}, { "figure_ref": [], "heading": "A Related Problem", "publication_ref": [ "b11", "b10" ], "table_ref": [], "text": "In this section we show that a related problem, which is a minor variant of CCM and SID, is co-NP-Complete. Recall the de nition of EOC:\nEOC: Entailment of Closure Input: a Horn CNF H, a set G of assignments.\nOutput: Yes if and only if H j = closure(G).\nThe important di erence between CMI and EOC is that the set G is not required to include only satisfying assignments of H. This enables the following reduction for EOC, while the complexity of CMI is still open. A similar result in the database domain has been obtained by Gottlob and Libkin (1990).\nTheorem 12 The decision problem EOC is co-NP-Complete. Proof: The problem is trivially in co-NP (guess an assignment x and say \\No\" if x 2 H n closure(G)).\nTo show its hardness we reduce co-Monotone 3-SAT to EOC. Monotone 3-SAT (Garey & Johnson, 1979) is the problem of satis ability of CNF formulas in which in every clause (has 3 literals and) either all the literals are positive (we call these clauses monotone) or all the literals are negated (we call such clauses anti-monotone). Let f = M ^A an instance of Monotone 3-SAT where M denotes a conjunction of monotone clauses and A is a conjunction of anti-monotone clauses. We translate it to the instance of EOC: H = A and = b2B H min b (M). First we claim that the reduction is polynomial. Note that since M is a monotone CNF, M is a DNF formula in which all the variables are negated, and can therefore be written as an anti-monotone CNF formula. This implies that M is Horn, but we have it in a DNF representation. Further computing is easy given the DNF representation of M, and its size is bounded by (n + 1) times the number of clauses in M.\nWe now claim that f is satis able if and only if H 6 j = closure( ). Assume rst that f is satis able, and let x 2 A ^M. This implies that x 2 H and x 6 2 M. Since M is Horn, and the models of Horn functions are closed under intersection (Theorem 1) we get that x 6 2 closure(M), and since M x 6 2 closure( ). Therefore, H 6 j = closure( ).\nFor the other direction assume H 6 j = closure( ), and let x be an assignment such that x 2 H and x 6 2 closure( ). We get that x 2 A, and since by Theorem 1 and Theorem 2 M = closure( ) we have x 6 2 M. So, x 2 A ^M and f is satis able. The satisfying assignments of M are 0000; 0001; 0100, and = char(M) = f0001; 0100g. Now consider the assignment x = 1000 which satis es f. Clearly, x satis es H, and one can check that it is not in the closure of ." }, { "figure_ref": [ "fig_0" ], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "Horn expressions and characteristic models are two alternative representations for the same information and none of the two dominates the other in the computational services it can support. The same representations occur in database theory where they have a role in the design of relational databases. A natural question is whether we can translate back and forth between these representations so as to enjoy the bene ts of both worlds. In this paper we have studied the computational complexity of these problems.\nOur main result is that the two translation problems CCM, and SID, are equivalent to each other (under polynomial reductions), and that they are equivalent to the corresponding decision problem CMI. 
Namely, translating in either direction is equivalent to deciding whether a given set of models is the set of characteristic models for a given Horn expression.\nWe have also shown a close relation between our problems and the hypergraph transversal problem HTR. This is a translation problem which is related to many applications in computer science and in particular to AI. We have shown that in general CCM, and SID are at least as hard as HTR, and that in a special case CCM, SID, and HTR are equivalent.\nWe exhibited examples which show that simple algorithms for enumerating prime implicants cannot guarantee e cient solution for CCM, and similarly enumerating prime implicates may not be e cient for SID. Lastly, we discussed the problem EOC, a minor modi cation of CMI, which is co-NP-Complete. The complexity hierarchy of the problems discussed is depicted in Figure 1.\nSome of the results presented in this paper can be obtained from previous results in database theory, using the equivalence between Armstrong relations and characteristic models reported in a companion paper (Khardon et al., 1995). However, our proofs and exposition make these results much more accessible.\nThe exact complexity of CMI, and that of HTR are left as open problems. While HTR has a sub-exponential algorithm, the problems CMI might still be co-NP-Hard." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "I am grateful to Thomas Eiter, Heikki Mannila, and Dan Roth for their comments which lead to some of the results in this paper. I wish to thank the anonymous referees whose comments helped improve the presentation, and Dimitris Kehagias for his help in proofreading the paper. The research for this paper was supported by Center for Intelligent Control Systems under ARO contract DAAL03-92-G-0115." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b6", "b1", "b5", "b13", "b18" ], "table_ref": [], "text": "that for each x i that did not receive a value so far, there is no witness with x i = 1, so the only possible witness is the one assigning 0 to all the variables. We return the witness x 2 f0; 1g n arrived at, by the above substitutions, as the counter example of CMIC.\nFrom the construction it is clear that x 2 H n closure(G), but the requirement of CMIC is that x 2 char(H) n G. We claim that this stronger condition holds. Suppose not, and let S char(H) be such that x = intersect(S). Then clearly S is not a subset of G or otherwise x 2 closure(G). Let y 2 S n G, then since x = intersect(S), we get x < 0 n y. Namely, if x i = 1 then y i = 1. But this is a contradiction, since in the last run of the algorithm A for CMI, it was concluded that no more variables could be set to 1, while still maintaining a witness.\nWe exemplify the proof using the function W = (bc ! d)(cd ! b)(bc ! a) presented in the introduction. Recall that char(W) = f0010; 0101; 1001; 1010; 1100; 1101; 1111g, and suppose that so far we found G = f0010; 1001; 1010; 1100; 1101; 1111g. That is, all but the model 0101. We run CMI on W; G and, since G does not include all the characteristic models, it answers No. In order to nd the counter example we make 4 separate substitutions, one for each variable substituted to 1.\nConsider the substitution with b = 1. This yields W = (c ! d)(c ! a), and G = f1s00; 1s01; 1s11g, where we use s to mark that the variable b was substituted. 
We run CMI on W; G and it nds out that there is a counter example (the assignment 0s01 is in W but not in closure( G)), and therefore it answers No. That means we can continue our algorithm with b = 1. We forget all the information from the other substitutions (that were not considered in detail) and continue to the next step.\nIn the next step we substitute 1 to each of a; c; d. Consider rst the substitution for a. This yields W = (c ! d) and G = fss00; ss01; ss11g. Running CMI on this pair we get the answer Yes. Namely W = closure( G). Consider now the substitution for d. This yields W = (c ! a) and G = f1s0s; 1s1sg. Running CMI on this pair we get the answer No (since 0s0s is a counter example). We can therefore recurse on this value.\nIn the next iteration both substitutions for a and for c, yield the answer Yes, and therefore we substitute 0 to both to get the nal counter example abcd = 0101.\nUsing this example it is easy to see that one can improve the running time of the reduction by simply remembering the attributes for which we received the answer Yes. These attributes will have to get the value 0 in the end. In this way we can scan the variables one by one, and recurse on the rst that yields the answer No. This requires only n calls to CMI.\nLemma 2 The problem CMI is polynomially reducible to the problem SID.\nProof: We are given an output polynomial time algorithm A for SID, and a polynomial bound on its running time (that is, a polynomial in the number of variables n, the input size, and the output size). Given H; G as input to CMI, we run A on G until it stops and outputs H 0 or until it exceeds its time bound (with respect to the size of H). In the rst case we check whether H = H 0 (which can be done in polynomial time (Dowling & Gallier, 1984)) and answer accordingly. In the second case we know that the real Horn expression which corresponds to G is larger than H and therefore we answer No.\nThe proof of the next lemma draws on previous results in computational learning theory.\nIn this framework a function f : f0; 1g n ! f0; 1g is hidden from a learner that has to reproduce it by accessing certain \\oracles\". A membership query allows the learner to ask for the value of the function on a certain point.\nDe nition 4 A membership query oracle for a function f : f0; 1g n ! f0; 1g, denoted MQ(f), is an oracle that when presented with x 2 f0; 1g n returns f(x).\nAn equivalence query allows the learner to nd out whether a hypothesis he has is equivalent to f or not. In case it is not equivalent, the learner is supplied with a counter example.\nDe nition 5 An equivalence query oracle for a function f : f0; 1g n ! f0; 1g, denoted EQ(f), is an oracle that when presented with a hypothesis h : f0; 1g n ! f0; 1g, returns Yes if f h. 
Otherwise it returns No and a counter example x such that f(x) 6 = h(x).\nWe use a result that has been obtained in this framework.\nTheorem 7 (Angluin, Frazier, & Pitt, 1992) There is an algorithm A, that when given access to MQ(f) and EQ(f), where f is a hidden Horn expression, runs in time polynomial in the number of variables and in the size of f, and outputs a Horn expression H which is equivalent to f.\nThe hypothesis h, in the algorithm's accesses to EQ(f), is always a Horn expression.\nThe following lemma, and the simulation in its proof, are implicit in previous works (Dechter & Pearl, 1992;Kautz et al., 1995;Kivinen & Mannila, 1994).\nLemma 3 The problem SID is polynomially reducible to the problem CMIC.\nProof: We are given G as input to SID, and a polynomial time algorithm C for CMIC.\nOur algorithm will run the algorithm A from Theorem 7 and answer the MQ and EQ queries that A presents. Given x 2 f0; 1g n for MQ the algorithm tests whether x 2 closure(G). This can be done by testing whether x is equal to the intersection of all elements y in G such that y x.\nGiven a Horn expression h for EQ (the theorem guarantees that the hypothesis is a Horn expression), we have to test whether h closure(G). We rst test whether closure(G) h, which is equivalent to closure(G) j = h. Theorem 5 together with Theorem 3 imply that if the answer is No, then for some x 2 G, h(x) = 0. Such an x is a counter example for the equivalence query, and the test can be performed simply by evaluating h on all the assignments in G.\nIf closure(G) j = h, namely all the assignments in G satisfy h, we present h; G as input to the algorithm C for the problem CMIC. The input to CMIC is legal. C may answer Yes, meaning char(h) G, which implies h closure(G). In this case we answer Yes to the equivalence query. Otherwise C says No and supplies a counter example x 2 char(h) n G. Since G h we get x 2 hnclosure(G) and therefore we can pass x on as a counter example to the equivalence query." }, { "figure_ref": [], "heading": "Khardon", "publication_ref": [ "b14", "b14", "b8", "b3" ], "table_ref": [], "text": "Theorem 8 The problem HTR is polynomially reducible to the problem CCM. Proof: Let A be an algorithm for the problem CCM. We construct an algorithm B for the problem HTR. We may assume that the input is an anti-monotone CNF, C, and we want to compute its anti-monotone DNF representation.\nThe basic idea is that using Claim 1 we know how to compute the DNF from min 1 n(C), and that the latter is a subset of the characteristic models. So all we need to do is let A compute the characteristic models, identify the set min 1 n(C), and compute the DNF.\nMore formally, the algorithm B runs A to compute = char(C) = min B H (C), and computes the set 1 n = fz 2 j 8y 2 ; z 6 < 1 n yg. Namely the elements of which are minimal with respect to the order relation b = 1 n . It then computes the anti-monotone DNF expression D = _ z2 1 n ^zi =0 x i , which it outputs.\nThe correctness of the algorithm follows from Claim 1 which guarantees that the computation of the DNF from the set of characteristic models is correct.\nAs for the time complexity we observe, using Claim 1, that is not considerably larger than the size of the DNF. This is true since for all b, jDNF(f)j = jmin 1 n(f )j jmin b (f)j, and jB H j = n + 1.\nTo exemplify the above reduction, suppose that we have only three variables a; b; c, and that the input is C = (a _ b)(b _ c). 
(The satisfying assignments are 000; 001; 010; 100; 101, and the required DNF expression is a c _ b.) The algorithm A will compute the set of characteristic models char(C) = f101; 010; 100; 001g, from that we nd that min 1 n(C) = f101; 010g. The term which corresponds to 101 is b, and the term which corresponds to 010 is a c and indeed we get the right DNF expression.\nUsing the monotone theory one can give a simple proof for the following theorem, which has already been proved by Kavvadias et. al. (1993).\nTheorem 9 (Kavvadias et al., 1993) The problem HTR is polynomially reducible to the problem SID.\nWe note that both theorems can be deduced by combining results in database theory (Eiter & Gottlob, 1994, 1991;Bioch & Ibaraki, 1993) and using the above mentioned equivalence with problems in database theory (Khardon et al., 1995)." }, { "figure_ref": [], "heading": "Enumerating Prime Implicates", "publication_ref": [ "b7" ], "table_ref": [], "text": "Having obtained the hardness results in the previous sub-section, a natural question is whether CCM, and SID are as easy as HTR. This would help settle the exact complexity of the problems discussed, and more importantly would imply a sub-exponential algorithm for the problem. While no such reduction has been found, we show here that it holds in a special case. We show, however, that the solution obtained in this way may need exponential time in the general case.\nThis result has already been obtained in the database domain (Eiter & Gottlob, 1991), where restrictions of functional dependencies to be in MAK form is discussed. Our argument, however, can be generalized to richer languages, and in particular holds for the family of k-quasi Horn expressions de ned below." } ]
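The three-variable example above is easy to replay. The sketch below uses names of our own, and char(C) is taken as given rather than produced by a CCM subroutine; it performs the last two steps of algorithm B from the proof of Theorem 8, selecting the 1^n-minimal (that is, bitwise maximal) characteristic models and reading off one anti-monotone term per survivor.

```python
def maximal(models):
    """1^n-minimal = bitwise maximal, since <=_{1^n} reverses the order."""
    return [z for z in models
            if not any(y != z and all(zi <= yi for zi, yi in zip(z, y))
                       for y in models)]

def to_term(z, names="abc"):
    """One negated literal per 0-bit of the assignment."""
    return "".join("~" + names[i] for i, bit in enumerate(z) if bit == "0")

char_C = {"101", "010", "100", "001"}   # char(C) as computed in the text
print(sorted(to_term(z) for z in maximal(char_C)))   # -> ['~a~c', '~b']
```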
[ { "authors": "H Aizenstein; L Pitt", "journal": "Machine Learning", "ref_id": "b0", "title": "On the learnability of disjunctive normal form formulas", "year": "1995" }, { "authors": "D Angluin; M Frazier; L Pitt", "journal": "Machine Learning", "ref_id": "b1", "title": "Learning conjunctions of Horn clauses", "year": "1992" }, { "authors": "C Beeri; M Dowd; R Fagin; R Statman", "journal": "Journal of the ACM", "ref_id": "b2", "title": "On the structure of Armstorng relations for functional dependencies", "year": "1984" }, { "authors": "J Bioch; T Ibaraki", "journal": "", "ref_id": "b3", "title": "Complexity of identi cation and dualization of positive Boolean functions", "year": "1993" }, { "authors": "N H Bshouty", "journal": "", "ref_id": "b4", "title": "Exact learning via the monotone theory", "year": "1993" }, { "authors": "R Dechter; J Pearl", "journal": "Arti cial Intelligence", "ref_id": "b5", "title": "Structure identi cation in relational data", "year": "1992" }, { "authors": "W F Dowling; J H Gallier", "journal": "Journal of Logic Programming", "ref_id": "b6", "title": "Linear-time algorithm for testing satis ability of propositional Horn formulae", "year": "1984" }, { "authors": "T Eiter; G Gottlob", "journal": "", "ref_id": "b7", "title": "Identifying the minimal transversals of a hypergraph and related problems", "year": "1991" }, { "authors": "T Eiter; G Gottlob", "journal": "Siam Journal of Computing", "ref_id": "b8", "title": "Identifying the minimal transversals of a hypergraph and related problems", "year": "1994" }, { "authors": "M Fredman; L Khachiyan", "journal": "", "ref_id": "b9", "title": "On the complexity of dualization of monotone disjunctive normal forms", "year": "1994" }, { "authors": "M Garey; D Johnson", "journal": "W. H. 
Freeman", "ref_id": "b10", "title": "Computers and Intractability: A Guide to the Theory of NP-Completeness", "year": "1979" }, { "authors": "G Gottlob; L Libkin", "journal": "Acta Cybernetica", "ref_id": "b11", "title": "Investigations on Armstrong relations, dependency inference, and excluded functional dependencies", "year": "1990" }, { "authors": "A Horn", "journal": "Journal of Symbolic Logic", "ref_id": "b12", "title": "On sentences which are true on direct unions of algebras", "year": "1951" }, { "authors": "H Kautz; M Kearns; B Selman", "journal": "Arti cial Intelligence", "ref_id": "b13", "title": "Horn approximations of empirical data", "year": "1995" }, { "authors": "D Kavvadias; C Papadimitriou; M Sideri", "journal": "Springer-Verlag", "ref_id": "b14", "title": "On Horn envelopes and hypergraph transversals", "year": "1993" }, { "authors": "R Khardon; H Mannila; D Roth", "journal": "", "ref_id": "b15", "title": "Reasoning with examples: Propositional formulae and database dependencies", "year": "1995" }, { "authors": "R Khardon; D Roth", "journal": "", "ref_id": "b16", "title": "Reasoning with models", "year": "1994-01" }, { "authors": "R Khardon; D Roth", "journal": "", "ref_id": "b17", "title": "Default-reasoning with models", "year": "1995" }, { "authors": "J Kivinen; H Mannila", "journal": "", "ref_id": "b18", "title": "Approximate inference of functional dependencies from relations", "year": "1994" }, { "authors": "H Mannila; K J Raiha", "journal": "Journal of Computer and System Sciences", "ref_id": "b19", "title": "Design by example: An application of Armstrong relations", "year": "1986" }, { "authors": "J Mccarthy", "journal": "MIT Press", "ref_id": "b20", "title": "Programs with common sense", "year": "1958" }, { "authors": "J Mccarthy; P Hayes", "journal": "Edinburgh University Press", "ref_id": "b21", "title": "Some philosophical problems from the standpoint of arti cial intelligence", "year": "1969" }, { "authors": "J C C Mckinsey", "journal": "Journal of Symbolic Logic", "ref_id": "b22", "title": "The decision problem for some classes of sentences without quanti er", "year": "1943" }, { "authors": "R Reiter", "journal": "Arti cial Intelligence", "ref_id": "b23", "title": "A theory of diagnosis from rst principles", "year": "1987" }, { "authors": "R Reiter; J De Kleer", "journal": "", "ref_id": "b24", "title": "Foundations of assumption-based truth maintenance systems", "year": "1987" }, { "authors": "B Selman; H Kautz", "journal": "", "ref_id": "b25", "title": "Knowledge compilation using Horn approximations", "year": "1991" }, { "authors": "B Selman; H Levesque", "journal": "", "ref_id": "b26", "title": "Abductive and default reasoning: A computational core", "year": "1990" } ]
[ { "formula_coordinates": [ 7, 228.72, 631.56, 154.8, 37.92 ], "formula_id": "formula_0", "formula_text": "f = b2f0;1g n M b (f) = b6 2f M b (f):" }, { "formula_coordinates": [ 8, 214.56, 363.48, 183.12, 40.74 ], "formula_id": "formula_1", "formula_text": "f = b 2B M b (f) = b 2B _ z2min b (f) M b (z):" }, { "formula_coordinates": [ 8, 223.92, 688.86, 171.12, 19.6 ], "formula_id": "formula_2", "formula_text": "B f = min B (f) = b2B fz 2 min b (f)g:" }, { "formula_coordinates": [ 10, 265.2, 111.96, 81.84, 37.92 ], "formula_id": "formula_3", "formula_text": "f B lub = b 2B M b (f)" }, { "formula_coordinates": [ 10, 286.32, 245.4, 210.48, 16.8 ], "formula_id": "formula_4", "formula_text": "= (bc ! d)(cd ! b)(bc ! a)(a _ b _ c _ d)." }, { "formula_coordinates": [ 17, 90, 169.56, 240.96, 16.8 ], "formula_id": "formula_5", "formula_text": "sider the function W = (a ! b)(c ! b)(b _ d)." }, { "formula_coordinates": [ 17, 254.4, 520.2, 103.44, 28.42 ], "formula_id": "formula_6", "formula_text": "M b (i) (f) = d2PI(f;i) d:" }, { "formula_coordinates": [ 17, 203.52, 587.16, 205.2, 82.84 ], "formula_id": "formula_7", "formula_text": "PI(W; 0) = (b _ d)(c _ d)(a _ d) PI(W; 1) = (b _ d)(c _ d) PI(W; 2) = (a ! b)(c ! b)(c _ d)(a _ d) PI(W; 3) = (b _ d)(a _ d) PI(W; 4) = true." }, { "formula_coordinates": [ 19, 161.28, 462.84, 95.52, 16.8 ], "formula_id": "formula_8", "formula_text": "(x i ) (f) = min 1 2m (f)." } ]
Translating between Horn Representations and their Characteristic Models
Characteristic models are an alternative, model-based representation for Horn expressions. It has been shown that these two representations are incomparable and each has its advantages over the other. It is therefore natural to ask how hard it is to translate, back and forth, between these representations. Interestingly, the same translation questions arise in database theory, where they have applications to the design of relational databases. This paper studies the computational complexity of these problems. Our main result is that the two translation problems are equivalent under polynomial reductions, and that they are equivalent to the corresponding decision problem. Namely, translating is equivalent to deciding whether a given set of models is the set of characteristic models for a given Horn expression. We also relate these problems to the hypergraph transversal problem, a well-known problem which is related to other applications in AI and for which no polynomial-time algorithm is known. It is shown that in general our translation problems are at least as hard as the hypergraph transversal problem, and in a special case they are equivalent to it.
Roni Khardon
[ { "figure_caption": "Figure 1 :1Figure 1: Summary of Complexity Results", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Computing min b (f) and M b (f)", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "PI(W; 0) = a b c _ d ) 0 = 0001; 1110 PI(W; 1) = b c _ d ) 1 = 0001; 0110 PI(W; 2) = bd _ ac ) 2 = 1110; 0001 PI(W; 3) = a b _ d ) 3 = 0001; 1100 PI(W; 4) = true ) 4 = 1110Similarly we get for SID:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "To exemplify the above reduction consider the function f = (a _ b _ c)(b _ c _ d)(a _ c _ d)(a _ b _ c): This function will be translated into H = (a_b_c)(b_c_d). The function M = a c d_a b c.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "The condition is equivalent to H j = closure(G), and essentially also to G = char(H).", "figure_data": "EOC: Entailment of Closure Input: a Horn CNF H, a set G of assignments. Output: Yes if and only if H j = closure(G).We also discuss the following variant of CMI:CMIC: Characteristic Models Identi cation with Counter example", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b12", "b13", "b7", "b10", "b14", "b9", "b5", "b16", "b17", "b18", "b15" ], "table_ref": [], "text": "Programs playing games like chess, draughts, or Othello use evaluation functions to estimate the players' winning chances in positions at the leaves of game{trees. These values are propagated to the root according to the NegaMax principle in order to choose a move in the root position which leads to the highest score. Normally, evaluation functions combine features that measure properties of the position correlated with the winning chance, such as material in chess or mobility in Othello. Most popular are quickly computable linear feature combinations. In the early days of game programming, the feature weights were chosen intuitively and improved in a manual hill{climbing process until the programmer's patience gave out. This technique is laborious. Samuel (1959Samuel ( ,1967) ) was the rst to describe a method for automatic improvement of evaluation function parameters. Since then many approaches have been investigated. Two main strategies can be distinguished:\nMove adaptation: Evaluation function parameters are tuned to maximize the frequency with which searches yield moves that occur in the lists of moves belonging to training positions. The idea is to get the program to mimic experts' moves.\nValue adaptation: Given a set of labelled example positions, parameters are determined such that the evaluation function ts a speci c model. For instance, evaluation functions can be constructed in this way to predict the nal game result.\nIn move adaptation, proposed for instance by Marsland (1985), v.d. Meulen (1989), and Mysliwietz (1994), a linear feature combination has two degrees of freedom: it can be multiplied by a positive constant and any constant can be added to it without changing the move decision. If the evaluation function depends on the game phase, and positions from di erent phases are compared (for example within the framework of selective extensions or opening book play), these constants must be chosen suitably. Because evaluation functions optimized c 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved. by move adaptation for the moment have no global interpretation, a solution of this problem is not obvious. Schae er et al. (1992) presented an ad hoc and game{speci c approach.\nIn this respect, value adaptation is more promising. Here, evaluations from di erent phases are comparable if the example position labels have a phase{independent meaning. Mitchell (1984) labelled Othello positions occurring in a game with the nal game result in the form of the disc di erential and tried to approximate these values using a linear combination of features. Since a regression was used to determine the weights, it was also possible to investigate the features' statistical relevance. Another statistical approach for value adaptation was used by Lee & Mahajan (1988): example positions were classi ed as a win or loss for the side to move and | assuming the features to be multivariate normal | a quadratic discriminant function was used to predict the winning probability. This technique ensures the desired comparability and applies also to games without win degrees, i.e. that only know wins, draws, and losses.\nBesides these classical approaches which heavily rely on given feature sets, in recent years arti cial neural networks (ANNs) have been trained for evaluating game positions. 
For instance, Moriarty & Miikkulainen (1993) used genetic algorithms to evolve both the topology and weights of ANNs in order to learn Othello concepts by means of tournaments against fixed programs. After discovering the concept of mobility, their best 1-ply ANN-player was able to win 70% of the games against a 3-ply brute-force program that used an evaluation function without mobility features. The most important contribution in this field is by Tesauro (1992, 1994, 1995). Using temporal difference learning (Sutton, 1988) for updating the weights, his ANNs learned to evaluate backgammon positions at master level by means of self-play. Tesauro conjectured the stochastic nature of backgammon to be responsible for the success of this approach. Though several researchers obtained encouraging preliminary results applying Tesauro's learning procedure to deterministic games, this work has not yet led to strong tournament programs for tactical games such as Awari, draughts, Othello, or chess, that allow deep searches and for which powerful and quickly computable evaluation functions are known. It might be that due to tactics for these games more knowledgeable but slower evaluation functions are not necessarily more accurate than relatively simple and faster evaluation functions in conjunction with deeper searches.

In what follows, three well-known statistical models, namely the quadratic discriminant function for normally distributed features, Fisher's linear discriminant, and logistic regression, are described for the evaluation of game positions in the context of value adaptation. Thereafter, it is shown how example positions for parameter estimation were generated. Finally, the playing strengths of three versions of a world-class Othello program, LOGISTELLO, equipped with the resulting evaluation functions are compared in order to determine the strongest tournament player. It turns out that quadratic feature combinations do not necessarily lead to stronger programs than linear combinations, and that logistic regression gives the best results in this application." }, { "figure_ref": [], "heading": "Statistical Feature Combination", "publication_ref": [ "b3", "b4", "b0", "b8", "b8", "b5" ], "table_ref": [], "text": "The formal basis of statistical feature combination for position evaluation can be stated as follows:

$\Omega$ is the set of positions to evaluate. $Y : \Omega \to \{L, W\}$ classifies positions as a loss or win for the player to move, assuming optimal play by both sides. Draws can be handled in the manner outlined in Section 4.

$X_1, \ldots, X_n : \Omega \to \mathbb{R}$ are the features. The evaluation of a position $\omega \in \Omega$ with $x = (X_1, \ldots, X_n)(\omega)$ is the conditional winning probability $V(\omega) = P(Y = W \mid (X_1, \ldots, X_n) = x) =: P(W \mid x)$.

There are $N$ classified example positions $\omega_1, \ldots, \omega_N \in \Omega$ available with $x_i = (X_1, \ldots, X_n)(\omega_i)$ and $y_i = Y(\omega_i)$.

In the following subsections models which express $P(W \mid x)$ as a function of linear or quadratic feature combinations are briefly introduced in a way that is sufficient for practical purposes. Good introductions and further theoretical details are given for instance by Duda & Hart (1973), Hand (1981), Agresti (1990), and McCullagh & Nelder (1989). Both Fisher's classical method and logistic regression are used here to model $P(W \mid x)$ for the first time; the quadratic discriminant function has been used by Lee & Mahajan (1988), however, without considering Fisher's discriminant first."
}, { "figure_ref": [ "fig_1" ], "heading": "Discriminant Functions for Normally Distributed Features", "publication_ref": [], "table_ref": [], "text": "Bayes' rule gives P (W j x) = p(x j W)P(W) p(x) = p(x j W)P(W) p(x j W)P(W) + p(x j L)P(L) = 1 + p(x j L)P(L) p(x j W)P(W)\n1 ;\nwhere p(x j C) is the features' conditional density function and P (C) is the a priori probability of class C 2 fL; Wg. In the case that the a priori probablities are equal, and the features are multivariate normally distributed within each class, i.e.\np(x j C) = (2 ) n=2 j C j 1=2 exp n 1 2 (x C ) 1 C (x C ) 0 o\nwith mean vector C and covariance matrix C for C 2 fW; Lg, it follows\nP (W j x) = 1 1 + exp( f(x))\n; where f is the following quadratic discriminant function: If the covariance matrices are equal (= ), the expression can be simpli ed to a linear function:\nf(x) = n 1 2 x 1 W 1 L x 0 + L 1 L W 1 W x 0 + 1 2 W 1 W 0 W L 1 L 0 L + log j W j\nf(x) = ( W L ) 1 fx ( L + W )=2g 0 :\nInterestingly, this function is also a solution to the problem of nding a linear transformation which maximizes the ratio of the squared sample mean distance to the sum of the within{class sample variances after transformation. Therefore, it has good separator properties even if the features are not normally distributed. This is called Fisher's linear discriminant. Figure 1 illustrates the relation between the conditional densities and the winning probability.\nThe maximum likelihood (ML) parameter estimates are\n^ C = 1 jI C j X i2I C x i ^ C = 1 jI C j X i2I C (x i ^ C ) 0 (x i ^ C )\nwith I C = fi j y i = Cg. If the covariance matrices are equal,\n^ = 1 jI W j + jI L j X C 2fL;Wg X i2I C (x i ^ C ) 0 (x i ^ C ):" }, { "figure_ref": [ "fig_3" ], "heading": "Logistic Regression", "publication_ref": [ "b0", "b8", "b19" ], "table_ref": [], "text": "In logistic regression the conditional winning probability P (W j x) depends on a linear combination of the x i . Here, X 1 1 is assumed in order to be able to model constant o sets.\nThe simple approach P (W j x) = x using a parameter column vector is unusable because x 2 0;1] cannot be guaranteed generally. This requirement can be ful lled by means of a link{function g : (0;1) ! IR according to g(P (W j x)) = x . Figure 2 shows a typical nonlinear relation between the winning probability and one feature. Since the probability is usually a monotone increasing function of the features, g should satisfy lim x!0+ g(x) = 1 and lim x!1 g(x) = +1. The link{function g(t) = logit(t) := log(t=(1 t)) has these properties. Using g = logit, since g 1 (x) = f1 + exp( x)g 1 , it follows that\nP (W j x) = 1 1 + exp( x ) :\nHence, the winning probability has the same shape as for discriminant analysis. But logistic regression does not require the features to be multivariate normal; even the use of very discrete features is possible.\nAgain the parameter vector can be estimated using the ML approach. Unfortunately, in this case it is necessary to solve a system of nonlinear equations. In what follows, a known solving approach will be brie y described (cf. Agresti, 1990;McCullagh & Nelder, 1989).\nIn order to ensure convergence of the iterative algorithm given below, it is necessary to slightly generalize our model: from now on y i is the observed value of random variable Y i = P n i j=1 Y i;j , where the Y i;j : ! f0; 1g have mean i = f1 + exp( x i )g 1 and are stochastically independent. 
This definition includes the old model ($n_i = 1$ and $y_i \in \{0,1\}$).

The likelihood function $L(\beta)$, which is a probability density, measures how likely it is to see the realization $y$ of the stochastically independent random variables $Y_i$, if $\beta$ is the true parameter vector. In order to maximize $L$, it suffices to consider $\log(L)$:

$\log(L(\beta)) = \log \prod_{i=1}^{N} \mu_i^{y_i}(1 - \mu_i)^{n_i - y_i} = \sum_{i=1}^{N} y_i \log \mu_i + (n_i - y_i)\log(1 - \mu_i) = \sum_{j=1}^{n} \sum_{i=1}^{N} y_i x_{ij}\beta_j - \sum_{i=1}^{N} n_i \log\left[ 1 + \exp \sum_{j=1}^{n} x_{ij}\beta_j \right].$

This function is twice differentiable, is strictly concave up to rare border cases, and has a unique maximum location if $0 < y_i < n_i$ for all $i$ (cf. Wedderburn, 1976) that can be iteratively found using the Newton-Raphson method as follows:

$\hat{\beta}^{(t+1)} = (X'\Sigma^{(t)}X)^{-1} X'\Sigma^{(t)} z^{(t)}$

with the $(N \times n)$-matrix $X$ built from the $x_i$,

$\Sigma^{(t)} = diag[n_i \hat{\mu}_i^{(t)}(1 - \hat{\mu}_i^{(t)})], \qquad \hat{\mu}_i^{(t)} = \left\{ 1 + \exp\left( -\sum_{j=1}^{n} x_{ij}\hat{\beta}_j^{(t)} \right) \right\}^{-1}, \qquad z_i^{(t)} = \log\frac{\hat{\mu}_i^{(t)}}{1 - \hat{\mu}_i^{(t)}} + \frac{y_i - n_i\hat{\mu}_i^{(t)}}{n_i\hat{\mu}_i^{(t)}(1 - \hat{\mu}_i^{(t)})}.$

Starting with $\hat{\mu}_i^{(0)} = (y_i + 1/2)/(n_i + 1)$, the ML estimate $\hat{\beta}$ may usually be computed with high accuracy within a few steps since the method is quadratically convergent and relatively robust with respect to the choice of the starting vector. Unfortunately, if there is an $i$ with $y_i = 0$ or $y_i = n_i$ the estimates might not converge. But our original model can be approximated, for instance, by setting $n_i = 100$ and $y_i = 1$ or $99$, depending on whether the position in question is lost or won." }, { "figure_ref": [], "heading": "Generation and Classification of Example Positions", "publication_ref": [ "b5" ], "table_ref": [], "text": "Value adaptation requires labelled example positions. Here, some problems arise. First of all, for most nontrivial games only endgame positions can be classified correctly as won, drawn, or lost; for opening and midgame positions optimal play is out of reach due to the lack of game knowledge and time constraints. Furthermore, the example positions should contain significant feature variance since otherwise no discrimination is possible. Hence, it is problematic to use only high level games, which might be the first idea, since good players and programs know the relevant features and try to maximize them during a game. Therefore, these features tend to be constant most of the time and statistical methods would assign only small weights to them. As a final difficulty, estimating parameters accurately for different game phases requires many positions.

A pragmatic "solution" to these problems is indicated in Figure 3: over a period of two years, about 60,000 Othello games were played by early versions of LOGISTELLO and Igor Durdanović's program REV. Feature variance was ensured by examining all openings of length seven which led mostly to unbalanced starting positions. Since early program versions were used which had only 5-10 minutes thinking time, the games, though well played most of the time, are not error free. In some cases even big mistakes occurred in which, for example, one side fell into a corner losing trap caused by a lack of look-ahead. But without these errors, no reasonable weight estimation of principal features (such as corner possession in Othello) is possible as explained above. Following Lee & Mahajan (1988), all positions were then classified by the final game results. This approach is problematic because the classification reliability decreases from the endgame to the opening phase due to player mistakes. 
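Looking back at the Newton-Raphson iteration just given, it reduces to a short iteratively reweighted least-squares loop. The following is a minimal numpy sketch (my own code; variable names are assumptions): X is the N x n design matrix with a constant first column, y the observed win counts and n the group sizes n_i.

```python
import numpy as np

def logistic_ml(X, y, n, max_steps=25, tol=1e-8):
    """Newton-Raphson / IRLS for the grouped logistic model above."""
    mu = (y + 0.5) / (n + 1.0)            # mu^(0) as suggested in the text
    beta = np.zeros(X.shape[1])
    for _ in range(max_steps):
        z = np.log(mu / (1 - mu)) + (y - n * mu) / (n * mu * (1 - mu))
        w = n * mu * (1 - mu)             # diagonal of Sigma^(t)
        XtW = X.T * w                     # X' Sigma^(t)
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        done = np.max(np.abs(beta_new - beta)) < tol
        beta = beta_new
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        if done:
            break
    return beta
```

As noted above, inputs with y_i = 0 or y_i = n_i must be nudged (e.g. n_i = 100, y_i = 1 or 99) or the iteration may diverge.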
To reduce this effect, early outcome searches were performed for solving Othello positions 20 moves before game end. Furthermore, from time to time the game database was searched for "obvious" errors using new program versions and longer searches to correct these games. Since in this process many lines of play were repeated, the misclassification rate was further reduced by propagating the game results from the leaves to the root of the game-tree, which had been built from all games according to the NegaMax principle. In this way the classification of a position depends on that of all examined successors and is therefore more reliable.

The proposed classification method is relatively fast and allows us to label many positions in a reasonable time (on average about 42 new positions in 10-20 minutes). In addition to ensuring an accurate parameter estimation even for different game phases (which is indicated by small parameter confidence intervals), this method enabled us to develop new pattern features for Othello based on estimating the winning probability conditioned upon occurrence of sub-configurations, like edge or diagonal instances, of the board." }, { "figure_ref": [], "heading": "Parameter Estimation and Playing Strength Comparison", "publication_ref": [ "b6", "b1", "b11" ], "table_ref": [ "tab_2" ], "text": "Although only about 5% of the example positions were labelled as drawn, it was decided to use them for parameter estimation since these positions give exact information about feature balancing. A natural way to handle drawn positions within the statistical evaluation framework considered here is to define the winning probability to be 1/2 in this case. For this extension the logistic regression parameters can easily be determined by setting $y_i = n_i/2$ in case of a draw. Alternatively, doubling won or lost positions and incorporating drawn positions once as won and once as lost leads to the same estimate because both log likelihood functions are equal up to a constant factor. The latter technique was used for fitting the other models.

Previous experiments showed that the parameters depend on the game phase, for which disc count is an adequate measure in Othello. So the example positions were grouped according to the number of discs on the board, and adjacent groups were used for parameter estimation in order to smooth the data and to ensure almost equal numbers of won and lost positions.

The success of the Othello program BILL described by Lee & Mahajan (1990) shows that in Othello table-based features can be quite effective. For instance, the important edge structure can be quickly evaluated by adding four pre-computed edge evaluations which are stored in a table. All 13 features used by LOGISTELLO are table-based. They fall into two groups: in the first group pattern instances including the horizontal, vertical, and most diagonal lines of the board are evaluated while in the second group two mobility measures are computed (details are given by Buro, 1994; the postscript file of this thesis can be obtained via anonymous ftp). After parameter estimation for the three described models, tournaments between the players QUAD (which uses the quadratic discriminant function for normally distributed features), FISHER, and LOG were played in order to determine the best tournament player. Starting with 100 nearly even opening positions with 14 discs (i.e. before move 11) from LOGISTELLO's opening book, each game and its return game with colours reversed was played. (LOG's 11-ply evaluation of these positions lies in the range [-0.4, +0.4], which corresponds to winning probabilities in the range [0.4, 0.6]. Only nearly even starting positions should be used to compare programs of similar playing strength since in clear positions the colour determines the winner and the winning percentage would be 50% even if one player is stronger. Of the 100 starting positions only six always led to game pairs with a balanced score.) 
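The game-tree label propagation described at the start of this passage can be stated very compactly. The following is a minimal sketch under my own encoding (children maps a position key to its examined successors; leaf_value holds observed final results from the side to move's view), not the actual database code:

```python
# NegaMax convention: a position's value is the maximum of the negated
# values of its successors; leaves carry the observed final game result.
def negamax_label(node, children, leaf_value, cache=None):
    cache = {} if cache is None else cache
    if node in cache:
        return cache[node]
    succ = children.get(node, [])
    if not succ:
        v = leaf_value[node]          # leaf: final result from this side's view
    else:
        v = max(-negamax_label(c, children, leaf_value, cache)
                for c in succ)
    cache[node] = v
    return v
```

Sharing the cache across all roots makes every position's label depend on all examined successors, which is exactly why the propagated labels are more reliable than the raw game outcomes.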
In the opening and midgame phase all program versions performed their usual iterative deepening NegaScout searches (Reinefeld, 1983) with a selective corner quiescence search extension. Endgame positions with about 22 empty squares were solved by win-draw-loss searches.

There was no pattern learning during the tournaments, and the facility to think on opponent's time was turned off in order to speed up the tournaments, which were run in parallel on seven SUN SPARC-10 workstations. Applying a conservative statistical test it can be seen that all results listed in Table 1 stating a winning percentage greater than 59% are statistically significant at the 5% level. The first two results show a clear advantage for the linear combinations under normal tournament conditions (30 minutes per player per game). Furthermore, since LOG outperforms FISHER the features would not seem to be even approximately normally distributed. Here lies the advantage of logistic regression: even very discrete features like castling status in chess or parity in Othello can be used.

Further tournaments were played with more time for the weaker players FISHER and QUAD in order to determine the time factors which lead to an equal playing strength. As shown in Table 1, FISHER reaches LOG's strength if it is given about 20% more time, and QUAD needs about 50% more time to compete with LOG. With LOGISTELLO's optimized implementation, the search speed when using the quadratic combination is still about 20% slower than that with the linear combination. Thus, giving QUAD 25% more time (1/(1 - 0.2) = 1.25) balances the total number of nodes searched during a game. But even with this timing, LOG is stronger than QUAD, and FISHER can still compete with it. All in all, the quadratic combination is not only slower than the linear combination, but it also has no better discrimination properties. Indeed, a look at the estimated covariance matrices of each class revealed that they are almost equal, and therefore a better evaluation quality than that of Fisher's linear discriminant could not be expected." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b5", "b1", "b2" ], "table_ref": [], "text": "In this paper three statistical approaches for modelling evaluation functions with a game-phase-independent meaning have been presented and compared empirically using a world-class Othello program. Quadratic feature combinations do not necessarily lead to stronger programs than linear combinations since the evaluation speed can drop significantly. Of course, this effect depends on the number of features used and their evaluation speed: if only a few features are used or if it takes a long time to evaluate them, then the playing strength differences cannot be explained by different speeds because in this case the evaluation times are almost equal. In any case, before using quadratic combinations the covariance matrices should be compared; if they are (almost) equal, the quadratic terms can be omitted and Fisher's linear discriminant can be used. 
Therefore, the motivations of Lee & Mahajan (1988) need refinement, since an existing feature correlation does not necessarily justify the use of nonlinear combinations. Generally, possibly more accurate nonlinear feature combinations (such as ANNs) should be compared to simpler but faster approaches in practice, since their use does not always guarantee a greater playing strength.

Besides linear regression and discriminant analysis, logistic regression has proven to be a suitable tool for the construction of evaluation functions with a global interpretation. The drawback, that for parameter estimation a system of nonlinear equations has to be solved, is more than compensated for by the higher quality of the evaluation function in comparison to the other approaches, since in this application the parameters have to be determined only once. The current tournament version of LOGISTELLO uses feature weights estimated by means of logistic regression and profits from the comparability of evaluations from different game phases which is ensured by the use of value adaptation. As a result it is possible to perform selective searches in which values from different game phases are compared; moreover, values from the opening can be compared even with late midgame values in order to find promising move alternatives in the program's opening book (Buro 1994, 1995). In this sense, value comparability is a cornerstone of LOGISTELLO's strength." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "I wish to thank my wife Karen for competently answering many of my statistical questions. I also thank my colleague Igor Durdanović for many fruitful discussions which have led to considerable improvements of our Othello programs. Furthermore, I am grateful to Colin Springer, Richard E. Korf, and the anonymous referees for their useful suggestions on earlier versions of this paper, which helped improve both the presentation and the contents." } ]
[ { "authors": "A Agresti", "journal": "Wiley", "ref_id": "b0", "title": "Categorical Data Analysis", "year": "1990" }, { "authors": "M Buro", "journal": "", "ref_id": "b1", "title": "Techniken f ur die Bewertung von Spielsituationen anhand von Beispielen", "year": "1994" }, { "authors": "M Buro Buro", "journal": "Magazine de la F ed eration Fran caise d'Othello FFORUM", "ref_id": "b2", "title": "L'apprentissage des ouvertures chez Logistello", "year": "1995" }, { "authors": "R Duda; P Hart", "journal": "Wiley", "ref_id": "b3", "title": "Pattern Classi cation and Scene Analysis", "year": "1973" }, { "authors": "D J Hand", "journal": "Wiley", "ref_id": "b4", "title": "Discrimination and Classi cation", "year": "1981" }, { "authors": "K F Lee; S Mahajan", "journal": "Arti cial Intelligence", "ref_id": "b5", "title": "A Pattern Classi cation Approach to Evaluation Function Learning", "year": "1988" }, { "authors": "K F Lee; S Mahajan", "journal": "Arti cial Intelligence", "ref_id": "b6", "title": "The Development of a World Class Othello Program", "year": "1990" }, { "authors": "T A Marsland", "journal": "ICCA Journal", "ref_id": "b7", "title": "Evaluation Function Factors", "year": "1985" }, { "authors": "P Mccullagh; J A Nelder", "journal": "Elsevier Science Publishers", "ref_id": "b8", "title": "Weight Assesment in Evaluation Functions", "year": "1989" }, { "authors": "D H Mitchell", "journal": "", "ref_id": "b9", "title": "Evolving Complex Othello Strategies Using Marker{ Based Genetic Encoding of Neural Networks", "year": "1984" }, { "authors": "P Mysliwietz", "journal": "", "ref_id": "b10", "title": "Konstruktion und Optimierung von Bewertungsfunktionen beim Schach", "year": "1994" }, { "authors": "A Reinefeld", "journal": "ICCA Journal", "ref_id": "b11", "title": "An Improvement of the Scout Tree Search Algorithm", "year": "1983" }, { "authors": "A L Samuel", "journal": "IBM Journal of Research and Development", "ref_id": "b12", "title": "Some Studies in Machine Learning Using the Game of Checkers", "year": "1959" }, { "authors": "A L Samuel", "journal": "IBM Journal of Research and Development", "ref_id": "b13", "title": "Some Studies in Machine Learning Using the Game of Checkers II", "year": "1967" }, { "authors": "J Schae Er; J Culberson; N Treloar; B Knight; P Lu; D Szafron", "journal": "Arti cial Intelligence", "ref_id": "b14", "title": "A World Championship Caliber Checkers Program", "year": "1992" }, { "authors": "R S Sutton", "journal": "Machine Learning", "ref_id": "b15", "title": "Learning to Predict by the Methods of Temporal Di erences", "year": "1988" }, { "authors": "G Tesauro", "journal": "Machine Learning", "ref_id": "b16", "title": "Practical Issues in Temporal Di erence Learning", "year": "1992" }, { "authors": "G Tesauro", "journal": "Neural Computation", "ref_id": "b17", "title": "TD{Gammon, A Self{Teaching Backgammon Program, Achieves Master{Level Play", "year": "1994" }, { "authors": "G Tesauro", "journal": "Communications of the ACM", "ref_id": "b18", "title": "Temporal Di erence Learning and TD{Gammon", "year": "1995" }, { "authors": "R W M Wedderburn", "journal": "Biometrika", "ref_id": "b19", "title": "On the Existence and Uniqueness of Maximum Likelihood Estimates for Certain Generalized Linear Models", "year": "1976" } ]
[ { "formula_coordinates": [ 3, 158.64, 549.34, 294.72, 27.12 ], "formula_id": "formula_0", "formula_text": "p(x j C) = (2 ) n=2 j C j 1=2 exp n 1 2 (x C ) 1 C (x C ) 0 o" }, { "formula_coordinates": [ 3, 236.64, 605.5, 134.88, 30.08 ], "formula_id": "formula_1", "formula_text": "P (W j x) = 1 1 + exp( f(x))" }, { "formula_coordinates": [ 3, 147.36, 658.54, 286.32, 48.96 ], "formula_id": "formula_2", "formula_text": "f(x) = n 1 2 x 1 W 1 L x 0 + L 1 L W 1 W x 0 + 1 2 W 1 W 0 W L 1 L 0 L + log j W j" }, { "formula_coordinates": [ 4, 193.44, 277.9, 225.12, 19.92 ], "formula_id": "formula_3", "formula_text": "f(x) = ( W L ) 1 fx ( L + W )=2g 0 :" }, { "formula_coordinates": [ 4, 216.72, 389.74, 180.48, 68.28 ], "formula_id": "formula_4", "formula_text": "^ C = 1 jI C j X i2I C x i ^ C = 1 jI C j X i2I C (x i ^ C ) 0 (x i ^ C )" }, { "formula_coordinates": [ 4, 189.36, 484.78, 235.2, 35.16 ], "formula_id": "formula_5", "formula_text": "^ = 1 jI W j + jI L j X C 2fL;Wg X i2I C (x i ^ C ) 0 (x i ^ C ):" }, { "formula_coordinates": [ 4, 240.24, 677.98, 131.52, 30.08 ], "formula_id": "formula_6", "formula_text": "P (W j x) = 1 1 + exp( x ) :" }, { "formula_coordinates": [ 5, 116.88, 277.9, 378.48, 68.28 ], "formula_id": "formula_7", "formula_text": "log(L( )) = log N Y i=1 y i i (1 i ) n i y i = N X i=1 y i log i + (n i y i ) log(1 i ) = n X j=1 N X i=1 y i x ij j N X i=1 n i log h 1 + exp n X j=1 x ij j i :" }, { "formula_coordinates": [ 5, 230.4, 401.74, 151.92, 20.24 ], "formula_id": "formula_8", "formula_text": "^ (t+1) = (X 0 (t) X) 1 X 0 (t) z (t)" }, { "formula_coordinates": [ 5, 151.92, 478.54, 333.6, 72.84 ], "formula_id": "formula_9", "formula_text": "(t) = diag n i ^ (t) i (1 ^ (t) i )]; ^ (t) i = n 1 + exp n X j=1 x ij ^ (t) j o 1 ; and z (t) i = log ^ (t) i 1 ^ (t) i + y i n i ^ (t) i n i ^ (t) i (1 ^ (t)" } ]
Statistical Feature Combination for the Evaluation of Game Positions
This article describes an application of three well-known statistical methods in the field of game-tree search: using a large number of classified Othello positions, feature weights for evaluation functions with a game-phase-independent meaning are estimated by means of logistic regression, Fisher's linear discriminant, and the quadratic discriminant function for normally distributed features. Thereafter, the playing strengths are compared by means of tournaments between the resulting versions of a world-class Othello program. In this application, logistic regression, which is used here for the first time in the context of game playing, leads to better results than the other approaches.
Michael Buro
[ { "figure_caption": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Conditional densities and winning probability", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Typical shape of the winning probability", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3: The classi cation process", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x x x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x x x x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Tournament results", "figure_data": "PairingTime per game (Minutes)Result (Win Draw Loss) Percentage WinningLOG FISHER QUAD QUAD LOG FISHER30 30 30 30 30 30116 15 69 112 15 73 93 35 7261.8% 59.8% 55.3%LOGFISHER30 3686 24 9049.0%LOG FISHER QUAD QUAD30 38 30 3893 33 74 84 30 8654.8% 49.5%LOGQUAD30 4588 26 8650.5%", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b21", "b23", "b5", "b8", "b7", "b17", "b9", "b22", "b19", "b2", "b2", "b7" ], "table_ref": [], "text": "The problem of approximating the values of a continuous variable is described in the statistical literature as regression. Given samples of output (response) variable y and input (predictor) variables x = fx 1 :::x n g, the regression task is to nd a mapping y = f(x). Relative to the space of possibilities, nite samples are far from complete, and a prede ned model is needed to concisely map x to y. Accuracy of prediction, i.e. generalization to new cases, is of primary concern. Regression di ers from classi cation in that the output variable y in regression problems is continuous, whereas in classi cation y is strictly categorical. From this perspective, classi cation can be thought of as a subcategory of regression. Some machine learning researchers have emphasized this connection by describing regression as \\learning how to classify among continuous classes\" (Quinlan, 1993).\nThe traditional approach to the problem is classical linear least-squares regression (Sche e, 1959). Developed and re ned over many years, linear regression has proven quite e ective for many real-world applications. Clearly the elegant and computationally simple linear model has its limits, and more complex models may t the data better. With the increasing computational power of computers and with larger volumes of data, interest has grown in pursuing alternative nonlinear regression methods. Nonlinear regression models have been explored by the statistics research community and many new e ective methods have emerged (Efron, 1988), including projection pursuit (Friedman & Stuetzle, 1981) and MARS (Friedman, 1991). Methods for nonlinear regression have also been developed outside the mainstream statistics research community. A neural network trained by back-propagation (McClelland & Rumelhart, 1988) is one such model. Other models can be found in numerical analysis (Girosi & Poggio, 1990). An overview of many di erent regression models, with application to classi cation, is available in the literature (Ripley, 1993). Most of these methods produce solutions in terms of weighted models.\nIn the real-world, classi cation problems are more commonly encountered than regression problems. This accounts for the greater attention paid to classi cation than to regression. But many important problems in the real world are of the regression type. For instance, problems involving time-series usually involve prediction of real values. Besides the fact that regression problems are important on their own, another reason for the need to focus on regression is that regression methods can be used to solve classi cation problems. For example, neural networks are often applied to classi cation problems.\nThe issue of interpretable solutions has been an important consideration leading to development of \\symbolic learning methods.\" A popular format for interpretable solutions is the disjunctive normal form (DNF) model (Weiss & Indurkhya, 1993a). Decision trees and rules are examples of DNF models. Decision rules are similar in characteristics to decision trees, but they also have some potential advantages: (a) a stronger model (b) often better explanatory capabilities. Unlike trees, DNF rules need not be mutually exclusive. Thus, their solution space includes all tree solutions. These rules are potentially more compact and predictive than trees. 
Decision rules may also offer greater explanatory capabilities than trees because as a tree grows in size, its interpretability diminishes.

Among symbolic learning methods, decision tree induction, using recursive partitioning, is highly developed. Many of these methods developed within the machine learning community, such as ID3 decision tree induction (Quinlan, 1986), have been applied exclusively to classification tasks. Less widely known is that decision trees are also effective in regression. The CART program, developed in the statistical research community, induces both classification and regression trees (Breiman, Friedman, Olshen, & Stone, 1984). These regression trees are strictly binary trees, a representation which naturally follows from intensive modeling using continuous variables. In terms of performance, regression trees often are competitive in performance to other regression methods (Breiman et al., 1984). Regression trees are noted to be particularly strong when there are many higher order dependencies among the input variables (Friedman, 1991). The advantages of the regression tree model are similar to the advantages enjoyed by classification trees over other models. Two principal advantages can be cited: (a) dynamic feature selection and (b) explanatory capabilities. Tree induction methods are extremely effective in finding the key attributes in high dimensional applications. In most applications, these key features are only a small subset of the original feature set. Another characteristic of decision trees that is often cited is its capability for explanation in terms acceptable to people. On the negative side, decision trees cannot represent compactly many simple functions, for example linear functions. A second weakness is that the regression tree model is discrete, yet predicts a continuous variable. For function approximation, the expectation is a smooth continuous function, but a decision tree provides discrete regions that are discontinuous at the boundaries. All in all though, regression trees often produce strong results, and for many applications their advantages strongly outweigh their potential disadvantages.

In this paper we describe a new method for inducing regression rules. The method takes advantage of the close relationship between classification and regression and provides a uniform and general model for dealing with both problems. Additional gains can be obtained by extending this method in a manner that preserves the strengths of the partitioning schemes while compensating for their weaknesses. Rules can be used to search for the most relevant cases, and a subset of these cases can help determine the function value. Thus, some of the model's interpretability can be traded off for better performance. Empirical results suggest that these methods are effective and can induce solutions that are often superior to decision trees." }, { "figure_ref": [], "heading": "Measuring Performance", "publication_ref": [], "table_ref": [], "text": "The objective of regression is to minimize the distance between the sample output values $y_i$ and the predicted values $y'_i$. Two measures of distance are commonly used. The classical regression measure is equation 1, the average squared distance between $y_i$ and $y'_i$, i.e. the variance. It leads to an elegant formulation for the linear least squares model. 
The mean absolute distance (deviation) of equation 2 is used in least absolute deviation regression, and is perhaps the more intuitive measure. The mean absolute distance of equation 2 is the measure used in our studies; it gives the average error of prediction for each $y_i$ over $n$ cases:

$Variance = \frac{1}{n} \sum_{i=1}^{n} (y_i - y'_i)^2 \qquad (1)$

$MAD = \frac{1}{n} \sum_{i=1}^{n} |y_i - y'_i| \qquad (2)$

The regression problem is sometimes described as a signal and noise problem. The model is extended to include a stochastic component in equation 3. Thus, the true function may not produce a zero error distance. In contrast to classification where the labels are assumed correct, for regression the predicted y values could be explained by a number of factors including a random noise component, $\epsilon$, in the signal, y:

$y = f(x_1, \ldots, x_n) + \epsilon \qquad (3)$

Because prediction is the primary concern, estimates based on training cases alone are inadequate. The principles of predicting performance on new cases are analogous to classification, but here the mean absolute distance is used as the error rate. The best estimate of true performance of a model is the error rate on a large set of independent test cases. When large samples of data are unavailable, the process of train and test is simulated by random resampling. In most of our experiments, we used (10-fold) cross-validation to estimate predictive performance." }, { "figure_ref": [ "fig_0" ], "heading": "Regression by Tree Induction", "publication_ref": [ "b2", "b19", "b29", "b2" ], "table_ref": [], "text": "In this section, we contrast regression tree induction with classification tree induction. Like classification trees, regression trees are induced by recursive partitioning. The solution takes the form of equation 4, where $R_i$ are disjoint regions, $k_i$ are constant values, and $y_{ij}$ refers to the y-values of the training cases that fall within the region $R_i$:

if $x \in R_i$ then $f(x) = k_i = median\{y_{ij}\} \qquad (4)$

Regression trees have the same representation as classification trees except for the terminal nodes. The decision at a terminal node is to assign a case a constant y value. The single best constant value is the median of the training cases falling into that terminal node because for a partition, the median is the minimizer of mean absolute distance. Figure 1 is an example of a binary regression tree. All cases reaching shaded terminal node 1 ($x1 \le 3$) are assigned a constant value of y=10. Tree induction methods usually proceed by (a) finding a covering set for the training cases and (b) pruning the tree to the best size. Although classification trees have been more widely studied, a similar approach can be applied to regression trees. We assume the reader is familiar with classification trees, and we cite only the differences in binary tree induction (Breiman et al., 1984; Quinlan, 1986; Weiss & Kulikowski, 1991). In many respects, regression tree induction is more straightforward. For classification trees, the error rate is a poor choice for node splitting, and alternative functions such as entropy or gini are employed. For regression tree induction, the minimized function, i.e. absolute distance, is most satisfactory. At each node, the single best split that minimizes the mean absolute distance is selected. 
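A minimal Python sketch of this split-selection step may make it concrete (my own code, not the CART implementation; names are assumptions). Each candidate binary split is scored by the mean absolute distance that remains when both sides predict their median:

```python
import numpy as np

def mad_after_split(y_left, y_right):
    """MAD if each side of the split predicts its own median."""
    err = np.sum(np.abs(y_left - np.median(y_left)))
    err += np.sum(np.abs(y_right - np.median(y_right)))
    return err / (len(y_left) + len(y_right))

def best_split(X, y, min_cases=5):
    """Return (MAD, feature index, threshold) of the best binary split."""
    best = (np.inf, None, None)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            mask = X[:, j] <= t
            if min_cases <= mask.sum() <= len(y) - min_cases:
                m = mad_after_split(y[mask], y[~mask])
                if m < best[0]:
                    best = (m, j, t)
    return best
```

Recursing on the two sides of the winning split, with medians at the leaves, yields a covering regression tree in the sense of equation 4.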
Splitting continues until fewer than a minimum number of cases are covered by a node, or until all cases within the node have the identical value of y.

The goal is to find the tree that generalizes best to new cases, and this is often not a full covering tree, particularly in presence of noise or weak features. The pruning strategies employed for classification trees are equally valid for regression trees. Like the covering procedures, the only substantial difference is that the error rate is measured in terms of mean absolute distance. One popular method is the weakest-link pruning strategy (Breiman et al., 1984). For weakest-link pruning, a tree is recursively pruned so that the ratio delta/n is minimized, where n is the number of pruned nodes and delta is the increase in error." }, { "figure_ref": [], "heading": "Rule-based Functional Prediction", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_0" ], "heading": "Regression by Rule Induction", "publication_ref": [ "b18", "b3" ], "table_ref": [], "text": "Both tree and rule induction models find solutions in disjunctive normal form, and the model of equation 4 is applicable to both. Each rule in a rule-set represents a single partition or region $R_i$. However, unlike the tree regions, the regions for rules need not be disjoint. With non-disjoint regions, several rules may be satisfied for a single sample. Some mechanism is needed to resolve the conflicts in $k_i$, the constant values assigned, when multiple rules, $R_i$ regions, are invoked. One standard model (Weiss & Indurkhya, 1993a) is to order the rules. Such ordered rule-sets have also been referred to as decision lists. The first rule that is satisfied is selected, as in equation 5:

if $i < j$ and $x \in$ both $R_i$ and $R_j$, then $f(x) = k_i \qquad (5)$

Figure 2 is an example of an ordered rule-set corresponding to the tree of Figure 1. All cases satisfying rule 3, and not rules 1 and 2, are assigned a value of y=5.

Given this model of regression rule sets, the problem is to find procedures that effectively induce solutions. For rule-based regression, a covering strategy analogous to the classification tree strategy could be specified. A rule could be induced by adding a single component at a time, where each added component is the single best minimizer of distance. As usual, the constant value $k_i$ is the median of the region formed by the current rule. As the rule is extended, fewer cases are covered. When fewer than a minimal number of cases are covered, rule extension terminates. The covered cases are removed and rule induction can continue on the remaining cases. This is also the regression analogue of rule induction procedures for classification (Michalski, Mozetic, Hong, & Lavrac, 1986; Clark & Niblett, 1989).

However, instead of this approach, we propose a novel strategy of mapping the regression covering problem into a classification problem." }, { "figure_ref": [ "fig_3", "fig_4", "fig_2" ], "heading": "A Reformulation of the Regression Problem", "publication_ref": [], "table_ref": [], "text": "The motivation for mapping regression into classification is based on a number of factors related to the extra information given in the regression problem: the natural ordering of the $y_i$ by magnitude: if $i > j$ then $y_i > y_j$.

Let $\{C_i\}$ be a set consisting of an arbitrary number of classes, each class containing approximately equal values of $\{y_i\}$. 
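The ordered rule-set (decision list) model of equation 5, together with the median assignment just described, can be sketched in a few lines of Python (my own encoding; the thresholds in the example are illustrative, not the literal rules of Figure 2):

```python
import numpy as np

def predict(rules, x, default=None):
    """Equation 5: the first satisfied rule supplies the constant k_i."""
    return next((k for cond, k in rules if cond(x)), default)

def assign_medians(rules, X, y):
    """Replace each rule's constant by the median y of the cases for
    which that rule is the *first* one satisfied."""
    buckets = [[] for _ in rules]
    for xi, yi in zip(X, y):
        for i, (cond, _) in enumerate(rules):
            if cond(xi):
                buckets[i].append(yi)
                break
    return [(cond, float(np.median(b)) if b else k)
            for (cond, k), b in zip(rules, buckets)]

# illustrative ordered rule set in the spirit of Figure 2, with x = (x1, x2)
rules = [(lambda x: x[0] <= 3, 10.0),
         (lambda x: x[1] <= 6, 20.0),
         (lambda x: True, 5.0)]
```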
To solve a classification problem, we expect that the classes are different from each other, and that patterns can be found to distinguish these classes. Should we expect classes formed by an ordering of $\{y_i\}$ to be a reasonable classification problem? There are a number of reasons why the answer is yes, particularly for a rule induction procedure.

The most obvious situation is the classical linear relationship. In this instance, by definition, some ordering of $\{x_{1i} \ldots x_{ni}\}$ corresponds to the ordering of $y_i$. Although classical methods are very strong in compactly determining linear functions, most interest in modern methods centers around their potential for finding nonlinear relationships. For nonlinear functions, we know there is usually no such ordering of $\{x_{1i} \ldots x_{ni}\}$ corresponding to the $\{y_i\}$. Still, we expect that the true function is smooth, and in a local region the ordering relationship will hold. In terms of classification, we know that a class $C_j$ with similar values of y is quite different than class $C_k$ with much lower values of y. For a nonlinear function within a class of similar values of y, some of these y have very similar values of $\{x_{1i} \ldots x_{ni}\}$. These correspond to some local region of the function. However, it is also true that some identical values of y can have very different $\{x_{1i} \ldots x_{ni}\}$ so that multiple clusters can be found within the class. Because rule induction methods do not cover a class with a single rule, the expectation is that multiple patterns will be found to cover these clusters.

Once the cases have been assigned such (pseudo-)classes, the classification problem can be solved in the following stages: (a) find a covering set and (b) prune the rule set to an appropriate size, with improved results achieved when an additional technique is considered: (c) refine or optimize a rule set. The overall method is outlined in Figure 3:

1. Generate a set of Pseudo-classes using the P-class algorithm (Figure 4).
2. Generate a covering rule-set for the transformed classification problem using a rule induction method such as Swap-1 (Weiss & Indurkhya, 1993a).
3. Initialize the current rule set to be the covering rule set and save it.
4. If the current rule set can be pruned, iteratively do the following: a) Prune the current rule set. b) Optimize the pruned rule set (Figure 5) and save it. c) Make this pruned rule set the new current rule set.
5. Use test cases or cross-validation to pick the best of the saved rule sets." }, { "figure_ref": [], "heading": "Generating Pseudo-classes", "publication_ref": [ "b10", "b15" ], "table_ref": [], "text": "In the previous section, we described the motivation for pseudo-classes. The specification of these classes does not use any information beyond the ordering of y. No assumptions about the true nature of the underlying function are made. Within this environment, the goal is to make the y values within one class most similar and y values across classes most dissimilar. We wish to assign the y values to classes such that the overall distance between each $y_i$ and its class mean is minimum. Classes with identical means should be merged. P-Class is a variation of k-means clustering, a statistical method that minimizes a distance measure (Hartigan & Wong, 1979). Alternative methods that do not depend on distance measures (Lebowitz, 1985) may also be used.

Given a fixed number of k classes, this procedure will relatively quickly assign the $y_i$ to classes such that the overall distances are minimized. 
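A minimal one-dimensional k-means-style sketch of the P-Class idea follows (my own implementation; the paper's Figure 4 algorithm may differ in detail): y values are assigned to the nearest of k class centers, centers are re-estimated as class means, and classes whose means coincide are merged.

```python
import numpy as np

def p_class(y, k, iters=100):
    """Assign y values to k pseudo-classes by 1-D k-means."""
    y = np.asarray(y, dtype=float)
    centers = np.quantile(y, (np.arange(k) + 0.5) / k)  # spread starting centers
    for _ in range(iters):
        labels = np.argmin(np.abs(y[:, None] - centers[None, :]), axis=1)
        new = np.array([y[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    centers = np.unique(centers)      # merge classes with identical means
    labels = np.argmin(np.abs(y[:, None] - centers[None, :]), axis=1)
    return labels, centers
```

With binary 0/1 targets this collapses to exactly two non-empty classes, which is the sense in which the transformation subsumes ordinary classification.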
Because the underlying function is unknown, it is not critical to have a global minimum assignment of the $y_i$. This procedure matches well to our stated goals for ordering the $y_i$ values. The obvious remaining question is how do we determine k, the number of classes? Unfortunately, there is no direct answer, and some experimentation is necessary. However, as we shall see in Section 7, there is empirical evidence suggesting that results are quite similar within a local neighborhood of values of k. Moreover, relatively large values of k, which entail increased computational complexity for rule induction, are typically necessary only for noise-free functions that can be modeled exactly. Analogous to comparisons of neural nets with increasing numbers of hidden units, the trends for increasing numbers of partitions become evident during experimentation.

One additional variation on the classification theme arises for rule induction schemes that cover one class at a time. The classes must be ordered, and the last class typically becomes a default class to cover situations when no rule for other classes is satisfied. For regression, having one default partition for a class is unlikely to be the best covering solution, and instead the remaining cases for the last class are repeatedly partitioned (by P-Class) into 2 classes until fewer than m cases remain.

An interesting characteristic of this transformation of the regression problem is that we now have a uniform and general model that once again relates both classification and regression. If the $y_i$ values are discrete and categorical, P-Class merely restates the standard classification problem. For example, if all values of $y_i$ are either 0 or 1, then the result of P-Class will be 2 non-empty classes." }, { "figure_ref": [], "heading": "A Covering Rule Set", "publication_ref": [], "table_ref": [], "text": "With this transformation, rule induction algorithms for classification can be applied. We will consider those induction methods that fully cover a class before moving on to induce rules for the next class. At each step of the covering algorithm, the problem is considered a binary classification problem for the current class $C_i$ versus all $C_j$ where $j > i$, i.e. the current class versus the remaining classes. When a rule is induced, its corresponding cases are removed and the remaining cases are considered. When a class has been covered, the next class is considered. An example of such a covering algorithm is that used in Swap-1 (Weiss & Indurkhya, 1993a), and this is the procedure used in this paper. The covering method is identical for classification and regression. However, one distinction is that the regression classes are transient labels that are replaced with the median of the y values for the cases covered by each induced rule. Because the rules are ordered and multiple rules may be satisfied, the medians are derived only from those instances where the rule is the first to be satisfied.

Although this procedure may yield good, compact covering sets, additional procedures are necessary for a complete solution." }, { "figure_ref": [], "heading": "Pruning the Rule Set", "publication_ref": [ "b20" ], "table_ref": [], "text": "Typical real-world applications have noisy features that are not fully predictive. A covering set, particularly one composed of many continuous variables, can be far too over-specialized to produce the best results. For classification, relatively few classes are specified in advance. 
For regression, we expect many smaller groups because values of y_i are likely to be quite different.
We noted earlier that for regression trees the usual classification pruning techniques can be applied with the substitution of mean absolute distance for the classification error rate. As in weakest-link tree pruning, the same ratio of delta/n can be recursively minimized for weakest-link rule pruning. The intuitive rationale is to remove those parts of a rule set that have the least impact on increasing the error. Pruning rule sets is usually accomplished by either deleting complete rules or single rule components (Quinlan, 1987; Weiss & Indurkhya, 1993a). In general, rule pruning (for both classification and regression) is less natural and far more computationally expensive than tree pruning. Tree pruning has a natural flow from set to subset. Thus a tree can be pruned from the bottom up, typically considering the effect of removing a subtree. Non-disjoint rules have no such natural pruning order; for example, every component in a rule is a candidate for pruning and may affect all other rules that follow it in the specified rule order.
There is a major difference in pruning regression rules vs. classification rules. For classification, deleting a rule or a rule component has no effect on the class labels. For regression, pruning will change the median values of y for the regions. Even the deletion of a rule will affect other region medians because the rules are ordered and multiple rules may be satisfied. This characteristic of rule pruning for regression adds substantial complexity to the task. However, by assuming that the median values of y remain unchanged during the evaluation of candidate rules to prune, a pruning procedure can achieve reasonable computational efficiency at the expense of some loss in the accuracy of evaluation. Once the best rule or component for deletion is selected, the medians of all regions can then be re-evaluated.
Even for classification rules, rule pruning has some inherent weaknesses. For example, rule deletion will often create a gap in coverage. For classification rules, though, it is quite feasible to develop an additional procedure to refine and optimize a rule set. To a large extent, this overcomes the cited weakness in pruned rule sets. A similar refinement and optimization procedure can be developed for regression and is described next." }, { "figure_ref": [ "fig_4" ], "heading": "Rule Refinement and Optimization", "publication_ref": [ "b16", "b11", "b12", "b13", "b4", "b7" ], "table_ref": [], "text": "Given a rule set RS_i, can it be improved? This question applies to any rule set, although we are mostly motivated by trying to improve the pruned rule sets {RS_0, ..., RS_i, ..., RS_n}. This is a combinatorial optimization problem. Using an error measure Err(RS), can we improve RS_i without changing its size, i.e., the number of rules and components? Figure 5 describes an algorithm that minimizes Err(RS), the MAD of the model prediction on sample cases, by local swapping, i.e., replacing a single rule component with the best alternative. It is a variation of the techniques used in Swap-1 (Weiss & Indurkhya, 1993a).
The central theme is to hold a model configuration constant and make a single local improvement to that configuration. Local modifications are made until no further improvements are possible.
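To make this swapping loop concrete, here is a minimal Python sketch of best-improvement component swapping over an ordered rule set. The rule representation (lists of predicate callables), the `candidates` generator of alternative components, and the fixed per-rule values are illustrative assumptions rather than the Swap-1 data structures; as with pruning above, the rule values (medians) are held fixed while candidate swaps are evaluated.

```python
def optimize_rule_set(rules, values, X, y, candidates):
    """Sketch of rule optimization by single-component swapping."""
    def predict(case):
        for rule, v in zip(rules, values):
            if all(p(case) for p in rule):   # first satisfied rule wins
                return v
        return values[-1]                    # last rule acts as a default

    def mad():
        return sum(abs(yi - predict(xi)) for xi, yi in zip(X, y)) / len(y)

    err = mad()
    while True:
        best = None
        for i, rule in enumerate(rules):
            for j, old in enumerate(rule):
                for new in candidates(old):
                    rule[j] = new            # tentative local swap
                    e = mad()
                    rule[j] = old            # undo
                    if best is None or e < best[0]:
                        best = (e, i, j, new)
        if best is None or best[0] >= err:   # converged: no improving swap
            return rules
        err, i, j, new = best                # keep only the single best swap
        rules[i][j] = new
```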
Making local changes to a configuration is a widely-used optimization technique to approximate a global optimum and has been applied quite successfully, for example to find near-optimum solutions to traveling salesman problems (Lin & Kernighan, 1973). An analogous local optimization technique, called backfitting, has been used in the context of nonlinear statistical regression (Hastie & Tibshirani, 1990).
Variations on the selection of the next improvement move could include:
1. First local improvement encountered (such as in backfitting)
2. Best local improvement (such as in Swap-1)
In our experiments with rule induction methods, the results are consistently better for (2); (1) is more efficient, but the (pruned) rule induction environment is mostly stable, with relatively few local improvements prior to convergence. In a less stable environment, with very large numbers of possible configuration changes, (2) may not be feasible or even better. In the pruned rule set environment, if the covering procedure is effective, then each pruned solution should be relatively close to a local minimum solution. Weakest-link pruning results in a series of pruned rule sets RS_i that number far fewer than the sets that would result from single prunes of a rule or rule component. Each of the RS_i is optimized prior to continuing the pruning process. However, rule set optimization can usually be suspended until substantial segments of the covering set have already been pruned.
If (1) is used, then either sequentially ordered evaluations (as in backfitting) or stochastic evaluations can be considered. Empirical evidence in the optimization literature supports the superiority of stochastic evaluation (Jacoby, Kowalik, & Pizzo, 1972). Further improvements may be obtained by occasionally making random changes in configuration (Kirkpatrick, Gelatt, & Vecchi, 1983). These are general combinatorial optimization techniques that must be substantially reworked to fit a specific problem type. Most are expected to be applied throughout problem solving.
The result of pruning a covering rule set, RS_0, is a series of progressively smaller rule sets {RS_0, ..., RS_i, ..., RS_n}. The objective is to pick the best one, usually by some form of error estimation. Model complexity and future performance are highly related. Both too complex and too simple a model can yield poor results, the objective being to find just the right size of model. Independent test cases or resampling by cross-validation are effective for estimating future performance. In the absence of these estimates, approximations such as GCV (Craven & Wahba, 1979; Friedman, 1991), as described in equation 6, have been used in the statistics literature to estimate performance.² Both measures of training error and model complexity are used in the estimates. C(M) is a measure of model complexity expressed in terms of parameters estimated (such as the number of weights in a neural net) or tests performed, where C(M) is assumed to be less than n, the number of cases.
GCV(M) = [(1/n) Σ_{i=1}^{n} |y_i − y′_i|] / (1 − C(M)/n)   (6)
In our experiments we used cross-validated estimates to guide the final model selection process, but other measures such as GCV may also be used." }, { "figure_ref": [], "heading": "Potential Problems with Rule-based Regression", "publication_ref": [ "b21" ], "table_ref": [], "text": "Regression rules, like trees, are induced by recursive partitioning methods that approximate a function with constant-value regions.
They are relatively strong in dynamic feature selection in high-dimensional applications, sometimes using only a few highly predictive features. An essential weakness of these methods is the approximation of a partition or region by a constant value. For a continuous function, and even a moderately sized sample, this approximation can lead to increased error.
To deal with this limitation, instead of constant-value functions, linear functions can be substituted in a partition (Quinlan, 1993). However, a linear function has the obvious weakness that the true function may be far from linear even in the restricted context of a single region. In general, use of such linearity compromises the highly non-parametric nature of the DNF model. A better strategy might be to examine alternative non-linear methods." }, { "figure_ref": [], "heading": "An Alternative to Rules: k-Nearest Neighbors", "publication_ref": [], "table_ref": [], "text": "The k-nearest neighbor method is one of the simplest regression methods, relying on table lookup. To classify an unknown case x, the k cases that are closest to the new case are found in a sample database of stored cases. The predicted y(x) of equation 7 is the mean of the y values for the k nearest neighbors. The nearest neighbors are found by a distance metric such as Euclidean distance (usually with some feature normalization). The method is non-parametric and highly non-linear in nature:
y_knn(x) = (1/K) Σ_{k=1}^{K} y_k, for the K nearest neighbors of x   (7)
A major problem with this approach is how to limit the effect of irrelevant features. While limited forms of feature selection are sometimes employed in a preprocessing stage, the method itself cannot determine which features should be weighted more than others. As a result, the procedure is very sensitive to the distance measure used. In a high-dimensional feature space, k-nearest neighbor methods may perform very poorly. These limitations are precisely those that the partitioning methods address. Thus, in theory, the two methods potentially complement one another." }, { "figure_ref": [], "heading": "Model Combination", "publication_ref": [ "b32", "b1", "b14", "b21" ], "table_ref": [], "text": "In practice, one learning model is not always superior to others, and a learning strategy that examines the results of different models may do better. Moreover, by combining different models, enhanced results may be achieved. A general approach to combining learning models is a scheme referred to as stacking (Wolpert, 1992). Additional studies have been performed in applying the scheme to regression problems (Breiman, 1993; LeBlanc & Tibshirani, 1993). Using small training samples of simulated data, and linear combinations of regression methods, improved results were reported. Let M_i be the i-th model trained on the same sample, and w_i the weight to be given to M_i.³ If the new case vector is x, the predictions of different models can be combined as in equation 8 to produce an estimate of y. The models may use the same representation, such as k-nearest neighbors with variable-size k, or perhaps variable-size decision trees. The models could also be completely different, such as combining decision trees with linear regression models. Different models are applied independently to find solutions, and later a weighted vote is taken to reach a combined solution.
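A minimal sketch of this weighted combination (equation 8) follows. The models are any trained callables mapping a case to a prediction; how the weights w_k are fit (e.g., by regressing held-out predictions against the true values, as in stacking) is a separate step that is omitted here.

```python
def stacked_estimate(models, weights, x):
    """Weighted combination of independently trained models (equation 8)."""
    return sum(w * m(x) for m, w in zip(models, weights))
```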
This method of model combination is in contrast to the usual approach to the evaluation of different models, where the single best-performing model is selected.
y = Σ_{k=1}^{K} w_k M_k(x)   (8)
While stacking has been shown to give improved results on simulated data, a major drawback is that properties of the combined models are not retained. Thus, when interpretable models are combined, the result may not be interpretable at all. It is also not possible to compensate for weaknesses in one model by introducing another model in a controlled fashion.
As suggested earlier, partitioning regression methods and k-nearest neighbor regression methods are complementary. Hence one might expect that by suitably combining the two methods, one might obtain better performance. In one recent study (Quinlan, 1993), model trees (i.e., regression trees with linear combinations at the leaf nodes) and nearest neighbor methods were also combined. The combination method is described in equation 9, where N(x)_k is one of the K nearest neighbors of x, V(x) is the y-value of the stored instance x, and T(x) is the result of applying a model tree to x.
y = (1/K) Σ_{k=1}^{K} [V(N(x)_k) − (T(N(x)_k) − T(x))]   (9)
The k nearest neighbors are found independently of the induced regression tree (results were reported with K=3). In that sense, the approach is similar to the combination method of equation 8. The k nearest neighbors are passed down the tree, and the results are used to refine the nearest neighbor answer. Thus, we have a combination model formed by independently computing a global solution, and later combining results.
However, there are strong reasons for not determining the global nearest neighbor solution independently. While, at the limit, with large samples, the non-parametric k-nearest neighbor methods will correctly fit the function, in practice their weaknesses can be substantial. Finding an effective global distance measure may not be easy, particularly in the presence of many noisy features. Hence a different technique for combining the two methods is needed." }, { "figure_ref": [], "heading": "Integrating Rules with Table-lookup", "publication_ref": [], "table_ref": [], "text": "Consider the following strategy: to determine the y-value of a case x that falls in region R_i, instead of assigning the single constant value k_i for region R_i, where k_i is determined by the median y value of training cases in the region, assign y_knn^i(x), the mean of the k nearest (training set) instances of x in region R_i. Thus for regression trees, we now have equation 10. For regression rules, we have equation 11.
if x ∈ R_i then f(x) = y_knn^i(x)   (10)
if i < j and x ∈ both R_i and R_j then f(x) = y_knn^i(x)   (11)
An interesting aspect of this strategy is that k-nearest neighbor results need only be considered for the cases covered by a particular partition. While this increases the interaction between the models and eliminates the independent computation of the two models, the model rationale and, as we shall show, the empirical results are supportive of this approach.
We now have a representation which potentially alleviates the weakness of partitions being assigned single constant values. Moreover, some of the global distance measure difficulties of the k-nn methods may also be relieved, because the table lookup is reduced to partitioned and related groupings. This is the rationale for a hybrid partition and k-nn scheme.
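The following is a minimal sketch of the hybrid prediction of equations 10 and 11: the first (lowest-ordered) rule region covering x selects a pocket of stored training cases, and the prediction is the mean y of the k nearest of them. The data structures (`region_rules` as ordered predicates, `region_cases` as per-region lists of (vector, y) pairs) and the unweighted squared-Euclidean distance are illustrative assumptions, not the paper's implementation.

```python
def hybrid_predict(region_rules, region_cases, x, k=5):
    """Hybrid partition + k-nn prediction (equations 10-11)."""
    def dist(a, b):                          # simple, unweighted distance
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for i, covers in enumerate(region_rules):  # rules are ordered: i < j
        if covers(x):                        # x falls in region R_i
            nearest = sorted(region_cases[i],
                             key=lambda cy: dist(cy[0], x))[:k]
            return sum(yv for _, yv in nearest) / len(nearest)
    raise ValueError("no rule covers this case")
```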
Note that unlike stacking, our hybrid models are not independently determined, but interact very strongly with one another. However, it must be demonstrated that these methods are in fact complementary, preserving the strengths of the partitioning schemes while compensating for the weaknesses that would be introduced if constant values were used for each region. With respect to model combination, two principal questions need to be addressed by empirical experimentation: Are results improved relative to using each model alone? Are these methods competitive with alternative regression methods?" }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b21", "b21", "b21", "b26", "b28", "b27" ], "table_ref": [ "tab_2", "tab_3" ], "text": "Experiments were conducted to assess the competitiveness of rule-based regression compared to other procedures (including less interpretable ones), as well as to evaluate the performance of the integrated partition and k-nn regression method. Experiments were performed using seven datasets, six of which are described in previous studies (Quinlan, 1993). In addition to these six datasets, new experiments were done on a very large telecommunications application, which is labeled pole. In each of the seven datasets, there was one continuous real-valued response variable. Experimental results are reported in terms of the MAD, as measured using 10-fold cross-validation. For pole, 5,000 cases were used for training and 10,000 for independent testing. The features from the different datasets were a mixture of continuous and categorical features. For pole, all 48 features were continuous. Descriptions of the other datasets can be found in the literature (Quinlan, 1993).⁴ Table 1 summarizes the key characteristics of the datasets used in this study. Table 2 summarizes the original results reported (Quinlan, 1993). These include model trees (MT), which are regression trees with linear fits at the terminal nodes; neural nets (NNET); 3-nearest neighbors (3-nn); and the combined results of model trees and 3-nearest neighbors (MT/3-nn).⁵ Table 3 summarizes the additional results that we obtained. These include the CART regression tree (RT); 5-nearest neighbors with Euclidean distance (5-nn); rule regression using Swap-1 (Rule); rule regression with 5-nn applied to the rule region (Rule/5-nn); and MARS. 5-nn was used because the expectation is that the nearest neighbor method incrementally improves a constant-value region when the region has a moderately large sample of neighbors to average.
For the rule-based method, the parameter m, the number of pseudo-classes, must be determined. This can be found using cross-validation or independent test cases (in our experiments, cross-validation was used). Figure 6 represents a typical plot of the relative error vs. the number of pseudo-classes (Weiss & Indurkhya, 1993b). As the number of partitions increases, results improve until they reach a relative plateau and then deteriorate somewhat. Similar complexity plots can be found for other models, for example neural nets (Weiss & Kapouleas, 1989).
The MARS procedure has several adjustable parameters.⁶ For the parameter mi, the values tried were 1 (additive modeling), 2, 3, 4, and the number of inputs. For df, the default value of 3.0 was tried, as well as the optimal value estimated by cross-validation. The parameter nk was varied from 20 to 100 in steps of 10. Lastly, both piece-wise linear as well as piece-wise cubic solutions were tried.
For each of the above settings of the parameters, the cross-validated accuracy was monitored, and the value for the best MARS model is reported.
For each method, besides the MAD, the relative error is also reported. The relative error is simply the estimated true mean absolute distance (measured by cross-validation) normalized by the initial mean absolute distance from the median. Analogous to classification, where predictions must have fewer errors than simply predicting the largest class, in regression too we must do better than the average distance from the median to have meaningful results.
4. The peptide dataset is a slightly modified version of the one Quinlan refers to as lhrh-att in his paper. In the version used in our experiments, cases with missing values were removed.
5. Because peptide was a slightly modified version of the lhrh-att dataset, the result listed is one that was provided by Quinlan in a personal communication.
6. The particular program used was MARS 3.5.
In comparing the performance of two methods for a dataset, the standard error for each method was independently estimated, and the larger one was used in comparisons. If the difference in performance was greater than 2 standard errors, the difference was considered statistically significant. As with any significance test, one must also consider the overall pattern of performance and the relative advantages of competing solutions (Weiss & Indurkhya, 1994).
For each dataset, Figure 7 plots the relative best error, found by taking the ratio of the best reported result to each model's result. A relative best error of 1 indicates that the result is the best reported result for any regression model. The model results that are compared to the best results are those for regression rules, 5-nn, and the mixed model.
[Table 3: MAD and relative error for RT, 5-nn, Rule, Rule/5-nn, and MARS on each dataset.]
1. How does rule-based regression perform compared to tree-based regression? Comparing the results for Rule with RT, one can see that, except for servo, Rule does consistently better than RT on all the remaining six datasets. The difference in performance also tests as significant. The results of the significance tests, and the general trend (which can be seen visually in Figure 7), lead us to conclude that rule-based regression is definitely competitive with trees and often yields superior performance.
2. Does integrating 5-nn with rules lead to improved performance relative to using each model alone? A comparison of Rule/5-nn with 5-nn shows that for all datasets, Rule/5-nn is significantly better. In comparing Rule/5-nn with Rule, the results indicate that for three datasets (mpg, pole and housing), Rule/5-nn was significantly better than Rule, and for the remaining three datasets both were about the same. The overall pattern of performance also appears to favor Rule/5-nn over Rule. Thus the empirical results indicate that our method improved results relative to using each model alone. The general trend can be seen in Figure 7.
3. Are the new methods competitive with alternative regression methods? Among the previously reported results, MT/3-nn is the best performer. Other alternatives to consider are regression trees (RT) and MARS. None of these three methods was significantly better than Rule/5-nn on any of the datasets under consideration, except for RT doing significantly better on servo. Furthermore, Rule/5-nn was significantly better than MT/3-nn on three of five datasets (servo, cpu and mpg) on which comparison is possible. The overall trend also is in favor of Rule/5-nn. Comparing RT to Rule/5-nn, we find that, except for servo, Rule/5-nn is significantly better than RT on all the remaining datasets.
Comparing MARS to Rule/5-nn, we find that for three of the datasets (price, peptide and pole), Rule/5-nn is significantly better. Hence the empirical results overwhelmingly suggest that our new method is competitive with alternative regression methods, with hints of superiority over some methods." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b5", "b24", "b31" ], "table_ref": [ "tab_3" ], "text": "We have considered a new model for rule-based regression and provided comparisons with tree-based regression. For many applications, strong explanatory capabilities and high-dimensional feature selection can make a DNF model quite advantageous. This is particularly true for knowledge-based applications, for example equipment repair or medical diagnosis, in contrast to pure pattern recognition applications such as speech recognition.
While rules are similar to trees, the rule representation is potentially more compact because the rules are not mutually exclusive. This potential for finding a more compact solution can be particularly important for problems where model interpretation is crucial. Note that the space of all rules includes the space of all trees. Thus, if a tree solution is the best, theoretically the rule induction procedure has the potential to find it.
In our experiments, the regression rules generally outperformed the regression trees. Fewer constant regions were required and the estimated error rates were generally lower. Finding the DNF regions was substantially more computationally expensive for the regression rules than the regression trees. For the regression rules, fairly complex optimization techniques were necessary. In addition, experiments must be performed to find the appropriate number of pseudo-classes. This is more a matter of scale: the scale of the application versus the scale of available computing. Excluding the telecommunications application, none of the cited applications takes more than 15 minutes of cpu time on a SS-20 for a single pseudo-classification problem and a full cross-validation.⁷ As computing power increases, the timing distinction is less important. Even a small percentage gain can be quite valuable for the appropriate application (Apté, Damerau, & Weiss, 1994) and computational requirements are a secondary factor.
We have provided results on several real-world datasets. Mostly, these involve nonlinear relationships. One may wonder how the rule-based method would perform on data with obvious linear relationships. In our earlier experiments with data exhibiting linear relationships (for example, the drug study data (Efron, 1988)), the rule-based solutions did slightly better than trees. However, the true test is real-world data, which often involve complex non-linear relationships. Comparisons with alternative models can help assess the effectiveness of the new techniques.
Looking at Figure 7 and Tables 2 and 3, we see that the pure rule-based solutions are competitive with other models. Additional gains are made when rules are used not for obtaining the function values directly, but instead to find the relevant cases, which are then used to compute the function value. The results of these experiments support the view that this strategy of combining different methods can improve predictive performance. Strategies similar to ours have been applied before for classification problems (Ting, 1994; Widmer, 1993) and similar conclusions were drawn from those results. Our results indicate that the strategy is useful in the regression context too.
Our empirical results also support the contention that for regression, partitioning methods and nearest neighbor methods are complementary. A solution can be found by partitioning alone, and then the incremental improvement can be observed when substituting the average y of the k nearest neighbors for the median y of a partition. From the perspective of nearest neighbor regression methods, the sample cases are compartmentalized, simplifying the table lookup for a new case.
While not conclusive, there are hints that our combination strategy is most effective for small to moderate samples: it is likely that when the sample size grows large, increased numbers of partitions, in terms of rules or terminal nodes, can compensate for having single constant-valued regions. This conjecture is supported by the large-sample pole application, where the incremental gain for the addition of k-nn is small.⁸
In our experiments we used k-nn with k=5. Depending on the application, a different value of k might produce better results. The optimal value might be estimated by cross-validation in a strategy that systematically varies k and picks the value that gives the best results overall. However, it is unclear whether the increased computational effort will result in any significant performance gain.
Another practical issue with large samples is the storage requirement: all the cases must be stored. This can be a serious drawback in real-world applications with limited memory. However, we tried experiments in which the cases associated with a partition are replaced by a smaller number of "typical cases". This results in considerable savings in terms of storage requirements. Results are slightly weaker (though not significantly different).
It would appear that further gains might be obtained by restricting the k-nn to consider only those features that appear in the path to the leaf node under examination. This might seem like a good idea because it attempts to ensure that only features that are relevant to the cases in the node are used in the distance calculations. However, we found results for this to be weaker.
7. A 10-fold cross-validation requires solving a problem essentially 11 times: once on all training cases and 10 times for each group of test cases.
8. Although small, this difference tests as significant because the sample is large.
A number of regression techniques have been presented by others to demonstrate the advantages of combined models. Most of these combine methods that are independently invoked. Instead of a typical election where there is one winner, the alternative models are combined and weighted. These combination techniques have the advantage that the outputs of different models can be treated as independent variables. They can be combined in a form of post-processing, after all model outputs are available.
In no way do we contradict the value of these alternative combination techniques. Both approaches show improved results for various applications. We do conclude, however, that there are advantages for more complex regression procedures that dynamically mix the alternative models. These procedures may be particularly strong when there is a fundamental rationale for the choice of methods, such as partitioning methods, or when properties of the combined models must be preserved.
We have presented the regression problem with one output variable. This is the classical form for linear models and regression trees.
The issue of multiple outputs has not been directly addressed, although such extensions are feasible. This issue and further experimentation await future work. Our model of regression can provide a basis for these efforts, while leveraging current strong methods in classification rule induction." } ]
[ { "authors": "C Apt; F Damerau; S Weiss", "journal": "ACM Transactions on O ce Information Systems", "ref_id": "b0", "title": "Automated Learning of Decison Rules for Text Categorization", "year": "1994" }, { "authors": "L Breiman", "journal": "", "ref_id": "b1", "title": "Stacked regression", "year": "1993" }, { "authors": "L Breiman; J Friedman; R Olshen; C Stone", "journal": "", "ref_id": "b2", "title": "Classi cation and Regression Tress", "year": "1984" }, { "authors": "P Clark; T Niblett", "journal": "Machine Learning", "ref_id": "b3", "title": "The CN2 induction algorithm", "year": "1989" }, { "authors": "P Craven; G Wahba", "journal": "Numer. Math", "ref_id": "b4", "title": "Smoothing noisy data with spline functions. estimating the correct degree of smoothing by the method of generalized cross-validation", "year": "1979" }, { "authors": "B Efron", "journal": "SIAM Review", "ref_id": "b5", "title": "Computer-intensive methods in statistical regression", "year": "1988" }, { "authors": "U Fayyad; K Irani", "journal": "", "ref_id": "b6", "title": "The attribute selection problem in decision tree generation", "year": "1992" }, { "authors": "J Friedman", "journal": "Annals of Statistics", "ref_id": "b7", "title": "Multivariate adaptive regression splines", "year": "1991" }, { "authors": "J Friedman; W Stuetzle", "journal": "J. Amer. Stat. Assoc", "ref_id": "b8", "title": "Projection pursuit regression", "year": "1981" }, { "authors": "F Girosi; T Poggio", "journal": "Biological Cybernetics", "ref_id": "b9", "title": "Networks and the best approximation property", "year": "1990" }, { "authors": "J Hartigan; M Wong", "journal": "Applied Statistics", "ref_id": "b10", "title": "A k-means clustering algorithm", "year": "1979" }, { "authors": "T Hastie; R Tibshirani", "journal": "Chapman and Hall", "ref_id": "b11", "title": "Generalized Additive Models", "year": "1990" }, { "authors": "S Jacoby; J Kowalik; J Pizzo", "journal": "Prentice-Hall", "ref_id": "b12", "title": "Iterative Methods for Non-linear Optimization Problems", "year": "1972" }, { "authors": "S Kirpatrick; C Gelatt; M Vecchi", "journal": "Science", "ref_id": "b13", "title": "Optimization by simulated annealing", "year": "1983" }, { "authors": "M Leblanc; R Tibshirani", "journal": "", "ref_id": "b14", "title": "Combining estimates in regression and classi cation", "year": "1993" }, { "authors": "M Lebowitz", "journal": "Cognitive Science", "ref_id": "b15", "title": "Categorizing numeric information for generalization", "year": "1985" }, { "authors": "S Lin; B Kernighan", "journal": "Operations Research", "ref_id": "b16", "title": "An e cient heuristic for the traveling salesman problem", "year": "1973" }, { "authors": "J Mcclelland; D Rumelhart", "journal": "MIT Press", "ref_id": "b17", "title": "Explorations in Parallel Distributed Processing", "year": "1988" }, { "authors": "R Michalski; I Mozetic; J Hong; N Lavrac", "journal": "", "ref_id": "b18", "title": "The multi-purpose incremental learning system AQ15 and its testing application to three medical domains", "year": "1986" }, { "authors": "J Quinlan", "journal": "Machine Learning", "ref_id": "b19", "title": "Induction of decision trees", "year": "1986" }, { "authors": "J Quinlan", "journal": "International Journal of Man-Machine Studies", "ref_id": "b20", "title": "Simplifying decision trees", "year": "1987" }, { "authors": "J Quinlan", "journal": "", "ref_id": "b21", "title": "Combining instance-based and model-based learning", "year": "1993" }, { "authors": "B Ripley", 
"journal": "Chapman and Hall", "ref_id": "b22", "title": "Statistical aspects of neural networks", "year": "1993" }, { "authors": "H Sche E", "journal": "Wiley", "ref_id": "b23", "title": "The Analysis of Variance", "year": "1959" }, { "authors": "K Ting", "journal": "", "ref_id": "b24", "title": "The problem of small disjuncts: Its remedy in decision trees", "year": "1994" }, { "authors": "S Weiss; N Indurkhya", "journal": "IEEE Expert", "ref_id": "b25", "title": "Optimized Rule Induction", "year": "1993" }, { "authors": "S Weiss; N Indurkhya", "journal": "", "ref_id": "b26", "title": "Rule-based regression", "year": "1993" }, { "authors": "S Weiss; N Indurkhya", "journal": "", "ref_id": "b27", "title": "Decision tree pruning: Biased or optimal?", "year": "1994" }, { "authors": "S Weiss; I Kapouleas", "journal": "", "ref_id": "b28", "title": "An empirical comparison of pattern recognition, neural nets, and machine learning classi cation methods", "year": "1989" }, { "authors": "S Weiss; C Kulikowski", "journal": "", "ref_id": "b29", "title": "Computer Systems that Learn: Classi cation and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems", "year": "1991" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b30", "title": "", "year": "" }, { "authors": "G Widmer", "journal": "Informatica", "ref_id": "b31", "title": "Combining knowledge-based and instance-based learning to exploit qualitative knowledge", "year": "1993" }, { "authors": "D Wolpert", "journal": "Neural Networks", "ref_id": "b32", "title": "Stacked generalization", "year": "1992" } ]
[ { "formula_coordinates": [ 3, 252.12, 357.36, 269.76, 84.48 ], "formula_id": "formula_0", "formula_text": "= 1 n n X i=1 (y i y 0 i ) 2 (1) MAD = 1 n n X i=1 jy i y 0 i j (2)" }, { "formula_coordinates": [ 11, 244.68, 95.4, 277.2, 48 ], "formula_id": "formula_1", "formula_text": "GCV (M) = n X i=1 jy i y 0 i j n 1 C(M) n (6)" }, { "formula_coordinates": [ 12, 265.32, 266.88, 256.56, 46.68 ], "formula_id": "formula_2", "formula_text": "y = K X k=1 w k M k (x) (8)" }, { "formula_coordinates": [ 12, 204, 478.2, 317.88, 46.8 ], "formula_id": "formula_3", "formula_text": "y = 1 K K X k=1 V (N(x) k ) (T (N(x) k ) T(x)) (9)" }, { "formula_coordinates": [ 13, 232.92, 194.76, 288.84, 19.2 ], "formula_id": "formula_4", "formula_text": "if x R i then f(x) = y i knn (x)(10)" }, { "formula_coordinates": [ 13, 376.92, 237.12, 144.84, 19.2 ], "formula_id": "formula_5", "formula_text": ") = y i knn (x)(11)" } ]
Rule-based Machine Learning Methods for Functional Prediction
We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance.
Sholom M Weiss; Nitin Indurkhya
[ { "figure_caption": "Figure 1 :1Figure 1: Example of Regression Tree", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Example of Regression Rules", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Overview of Method for Learning Regression Rules", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Composing Pseudo-Classes (P-Class)", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Optimization by Rule Component Swapping", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Input: fy i g a set of output values Initialize n := number of cases, k := number of classes For each Class i Class i := next n/k cases from list of sorted y values end-for Compute Err new Repeat Err old = Err new For each Case j When it is in Class i 1. If Dist Case j , Mean(Class i 1 )] < Dist Case j , Mean(Class i )] Move Case j to Class i 1 2. If Dist Case j , Mean(Class i+1 )] < Dist Case j , Mean(Class i )] Move Case j to Class i+1 Next Case j Compute Err new Until Err new is not less than Err old", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Dataset Characteristics of the other datasets can be found in the literature", "figure_data": "Dataset Cases Vars price 159 16 servo 167 19 cpu 209 6 mpg 392 13 peptide 431 128 housing 506 13 pole 15000 48", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Previous Resultscation, where predictions must have fewer errors than simply predicting the largest class, in regression too we must do better than the average distance from the median to have meaningful results.", "figure_data": "Relative Error0.650.60.550.50.450.40.350.32345678910Number of Pseudo-ClassesFigure 6: Prototypical Performance for Varying Pseudo-ClassesDataset MT NNET 3-nn MT/3-nn price 1562 1833 1689 1386 servo .45 .30 .52 .30 cpu 28.9 28.7 34.0 28.1 mpg 2.11 2.02 2.72 2.18 peptide .95 ---housing 2.45 2.29 2.90 2.32", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b24" ], "table_ref": [], "text": "Temporal reasoning is an essential part of many arti cial intelligence tasks. It is desirable, therefore, to develop a temporal reasoning component that is useful across applications. Some applications, such as planning and scheduling, can rely heavily on a temporal reasoning component and the success of the application can depend on the e ciency of the underlying temporal reasoning component. In this paper, we discuss the design and empirical analysis of two algorithms for a temporal reasoning system based on Allen's (1983) in uential interval-based framework for representing temporal information. The two algorithms, a path consistency algorithm and a backtracking algorithm, are important for two fundamental tasks: determining whether the temporal information is consistent, and, if so, nding one or more scenarios that are consistent with the temporal information.\nOur stress is on designing algorithms that are robust and e cient in practice. For the path consistency algorithm, we develop techniques that can result in up to a ten-fold speedup over an already highly optimized implementation. For the backtracking algorithm, we develop variable and value ordering heuristics that are shown empirically to dramatically improve the performance of the algorithm. As well, we show that a previously suggested reformulation of the backtracking search problem (van Beek, 1992) can reduce the time and space requirements of the backtracking search. Taken together, the techniques we develop Relation Symbol Inverse Meaning x before y b bi x y\nx meets y m mi x y\nx overlaps y o oi x y\nx starts y s si x y\nx during y d di x y\nx nishes y f x y\nx equal y eq eq x y\nFigure 1: Basic relations between intervals allow a temporal reasoning component to solve problems that are of realistic size. As part of the evidence to support this claim, we evaluate the techniques for improving the algorithms on a large problem that arises in molecular biology." }, { "figure_ref": [], "heading": "Representing Temporal Information", "publication_ref": [ "b1" ], "table_ref": [], "text": "In this section, we review Allen's (1983) framework for representing relations between intervals. We then discuss the set of problems that was chosen to test the algorithms." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Allen's framework", "publication_ref": [ "b1", "b5", "b2", "b11", "b3", "b11" ], "table_ref": [], "text": "There are thirteen basic relations that can hold between two intervals (see Figure 1; Allen, 1983;Bruce, 1972). In order to represent inde nite information, the relation between two intervals is allowed to be a disjunction of the basic relations. Sets are used to list the disjunctions. For example, the relation fm,o,sg between events A and B represents the disjunction, (A meets B) _ (A overlaps B) _ (A starts B): Let I be the set of all basic relations, fb,bi,m,mi,o,oi,s,si,d,di,f, ,eqg. Allen allows the relation between two events to be any subset of I .\nWe use a graphical notation where vertices represent events and directed edges are labeled with sets of basic relations. As a graphical convention, we never show the edges (i; i), and if we show the edge (i; j ), we do not show the edge (j; i). Any edge for which we have no explicit knowledge of the relation is labeled with I ; by convention such edges are also not shown. 
We call networks with labels that are arbitrary subsets of I interval algebra or IA networks.
Example 1. Allen and Koomen (1983) show how IA networks can be used in non-linear planning with concurrent actions. As an example of representing temporal information using IA networks, consider the following blocks-world planning problem. There are three blocks, A, B, and C. In the initial state, the three blocks are all on the table. The goal state is simply a tower of the blocks with A on B and B on C. We associate states, actions, and properties with the intervals they hold over, and we can immediately write down temporal information such as: Stack(B,C) {m} On(B,C). A graphical representation of the IA network for this planning problem is shown in Figure 2a. Two fundamental tasks are determining whether the temporal information is consistent, and, if so, finding one or more scenarios that are consistent with the temporal information. An IA network is consistent if and only if there exists a mapping M assigning a real interval M(u) to each event or vertex u in the network such that the relations between events are satisfied (i.e., one of the disjuncts is satisfied). For example, consider the small subnetwork in Figure 2a consisting of the events On(A,B), On(B,C), and Goal. This subnetwork is consistent, as demonstrated by the assignment M(On(A,B)) = [1, 5], M(On(B,C)) = [2, 5], and M(Goal) = [3, 4]. If we were to change the subnetwork and insist that On(A,B) must be before On(B,C), no such mapping would exist and the subnetwork would be inconsistent. A consistent scenario of an IA network is a non-disjunctive subnetwork (i.e., every edge is labeled with a single basic relation) that is consistent. In our planning example, finding a consistent scenario of the network corresponds to finding an ordering of the actions that will accomplish the goal of stacking the three blocks. One such consistent scenario can be reconstructed from the qualitative mapping shown in Figure 2b.
Example 2. Golumbic and Shamir (1993) discuss how IA networks can be used in a problem in molecular biology: examining the structure of the DNA of an organism (Benzer, 1959). The intervals in the IA network represent segments of DNA. Experiments can be performed to determine whether a pair of segments is either disjoint or intersects. Thus, the IA networks that result contain edges labeled with disjoint ({b, bi}), intersects ({m, mi, o, oi, s, si, d, di, f, fi, eq}), or I, the set of all basic relations, which indicates that no experiment was performed. If the IA network is consistent, this is evidence for the hypothesis that DNA is linear in structure; if it is inconsistent, DNA is nonlinear (it forms loops, for example). Golumbic and Shamir (1993) show that determining consistency in this restricted version of IA networks is NP-complete. We will show that problems that arise in this application can often be solved quickly in practice.
[Figure 2: (a) IA network for the block-stacking example; (b) a consistent scenario]" }, { "figure_ref": [], "heading": "Test problems", "publication_ref": [ "b14", "b3", "b3" ], "table_ref": [], "text": "We tested how well the heuristics we developed for improving path consistency and backtracking algorithms perform on a test suite of problems. The purpose of empirically testing the algorithms is to determine the performance of the algorithms and the proposed improvements on "typical" problems.
There are two approaches: (i) collect a set of "benchmark" problems that are representative of problems that arise in practice, and (ii) randomly generate problems and "investigate how algorithmic performance depends on problem characteristics ... and learn to predict how an algorithm will perform on a given problem class" (Hooker, 1994).
For IA networks, there is no existing collection of large benchmark problems that actually arise in practice, as opposed to, for example, planning in a toy domain such as the blocks world. As a start to a collection, we propose an IA network with 145 intervals that arose from a problem in molecular biology (Benzer, 1959, pp. 1614-15; see Example 2, above). The proposed benchmark problem is not, strictly speaking, a temporal reasoning problem, as the intervals represent segments of DNA, not intervals of time. Nevertheless, it can be formulated as a temporal reasoning problem. The value is that the benchmark problem arose in a real application. We will refer to this problem as Benzer's matrix.
In addition to the benchmark problem, in this paper we use two models of a random IA network, denoted B(n) and S(n, p), to evaluate the performance of the algorithms, where n is the number of intervals and p is the probability of a (non-trivial) constraint between two intervals. Model B(n) is intended to model the problems that arise in molecular biology (as estimated from the problem discussed in Benzer, 1959). Model S(n, p) allows us to study how algorithm performance depends on the important problem characteristic of the sparseness of the underlying constraint graph. Both models, of course, allow us to study how algorithm performance depends on the size of the problem.
For B(n), the random instances are generated as follows.
Step 1. Generate a "solution" of size n as follows. Generate n real intervals by randomly generating values for the end points of the intervals. Determine the IA network by determining, for each pair of intervals, whether the two intervals either intersect or are disjoint.
Step 2. Change some of the constraints on edges to be the trivial constraint by setting the label to be I, the set of all 13 basic relations. This represents the case where no experiment was performed to determine whether a pair of DNA segments intersect or are disjoint. Constraints are changed so that the percentage of non-trivial constraints (approximately 6% are intersects and 17% are disjoint) and their distribution in the graph are similar to those in Benzer's matrix.
For S(n, p), the random instances are generated as follows (a sketch of a generator in this style is given after the steps).
Step 1. Generate the underlying constraint graph by indicating which of the possible n(n-1)/2 edges is present. Let each edge be present with probability p, independently of the presence or absence of other edges.
Step 2. If an edge occurs in the underlying constraint graph, randomly choose a label for the edge from the set of all possible labels (excluding the empty label), where each label is chosen with equal probability. If an edge does not occur, label the edge with I, the set of all 13 basic relations.
Step 3. Generate a "solution" of size n as follows. Generate n real intervals by randomly generating values for the end points of the intervals. Determine the consistent scenario by determining the basic relations which are satisfied by the intervals. Finally, add the solution to the IA network generated in Steps 1-2.
Hence, only consistent IA networks are generated from S(n, p).
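The following Python sketch illustrates a generator in the style of S(n, p). The integer endpoint range and the rejection loop for drawing a uniform non-empty label are illustrative choices, not the paper's implementation; the `basic_relation` helper encodes the standard definitions of the thirteen basic relations for concrete intervals.

```python
import random

BASIC = ["b", "bi", "m", "mi", "o", "oi", "s", "si", "d", "di", "f", "fi", "eq"]

def basic_relation(x, y):
    """Basic relation holding between concrete intervals x and y."""
    (a, b), (c, d) = x, y
    if b < c: return "b"
    if d < a: return "bi"
    if b == c: return "m"
    if d == a: return "mi"
    if (a, b) == (c, d): return "eq"
    if a == c: return "s" if b < d else "si"
    if b == d: return "f" if a > c else "fi"
    if a > c and b < d: return "d"
    if a < c and b > d: return "di"
    return "o" if a < c else "oi"

def generate_s(n, p):
    """S(n, p)-style instance: random graph, random labels, hidden solution."""
    # Step 3's hidden solution: n random intervals with distinct endpoints.
    solution = [tuple(sorted(random.sample(range(10 * n), 2))) for _ in range(n)]
    C = {}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:          # Step 1: edge is present
                label = set()
                while not label:             # Step 2: uniform non-empty label
                    label = {r for r in BASIC if random.random() < 0.5}
            else:
                label = set(BASIC)           # trivial constraint I
            # Step 3: add the hidden solution back, guaranteeing consistency.
            label.add(basic_relation(solution[i], solution[j]))
            C[i, j] = label
    return C
```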
If we omit Step 3, it can be shown both analytically and empirically that almost all of the different possible IA networks generated by this distribution are inconsistent, and that the inconsistency is easily detected by a path consistency algorithm. To avoid this potential pitfall, we test our algorithms on consistent instances of the problem. This method appears to generate a reasonable test set for temporal reasoning algorithms, with problems that range from easy to hard. It was found, for example, that instances drawn from S(n, 1/4) were hard problems for the backtracking algorithms to solve, whereas for values of p on either side (S(n, 1/2) and S(n, 1/8)) the problems were easier." }, { "figure_ref": [], "heading": "Path Consistency Algorithm", "publication_ref": [ "b0", "b18", "b19", "b1", "b18", "b15", "b20", "b15", "b1", "b1", "b1", "b13", "b29", "b22" ], "table_ref": [], "text": "Path consistency or transitive closure algorithms (Aho, Hopcroft, & Ullman, 1974; Mackworth, 1977; Montanari, 1974) are important for temporal reasoning. Allen (1983) shows that a path consistency algorithm can be used as a heuristic test for whether an IA network is consistent (sometimes the algorithm will report that the information is consistent when really it is not). A path consistency algorithm is also useful in a backtracking search for a consistent scenario, where it can be used as a preprocessing algorithm (Mackworth, 1977; Ladkin & Reinefeld, 1992) and as an algorithm that can be interleaved with the backtracking search (see the next section; Nadel, 1989; Ladkin & Reinefeld, 1992). In this section, we examine methods for speeding up a path consistency algorithm.
The idea behind the path consistency algorithm is the following. Choose any three vertices i, j, and k in the network. The labels on the edges (i, j) and (j, k) potentially constrain the label on the edge (i, k) that completes the triangle. For example, consider the three vertices Stack(A,B), On(A,B), and Goal in Figure 2a. From Stack(A,B) {m} On(A,B) and On(A,B) {di} Goal we can deduce that Stack(A,B) {b} Goal and therefore can change the label on that edge from I, the set of all basic relations, to the singleton set {b}. To perform this deduction, the algorithm uses the operations of set intersection (∩) and composition (∘) of labels and checks whether C_ik = C_ik ∩ C_ij ∘ C_jk, where C_ik is the label on edge (i, k). If C_ik is updated, it may further constrain other labels, so (i, k) is added to a list to be processed in turn, provided that the edge is not already on the list. The algorithm iterates until no more such changes are possible. A unary operation, inverse, is also used in the algorithm. The inverse of a label is the inverse of each of its elements (see Figure 1 for the inverses of the basic relations).
We designed and experimentally evaluated techniques for improving the efficiency of a path consistency algorithm. Our starting point was the variation on Allen's (1983) algorithm shown in Figure 3. For an implementation of the algorithm to be efficient, the intersection and composition operations on labels must be efficient (Steps 5 & 10). Intersection was made efficient by implementing the labels as bit vectors, as sketched below. The intersection of two labels is then simply the logical AND of two integers. Composition is harder to make efficient.
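A minimal sketch of the bit-vector representation just described: one bit per basic relation, so a label is a 13-bit integer, intersection is a single AND, and the "has the label changed?" test is an integer comparison. Composition still requires Allen- or Hogge-style tables and is not shown; the names below are illustrative.

```python
RELS = ["b", "bi", "m", "mi", "o", "oi", "s", "si", "d", "di", "f", "fi", "eq"]
BIT = {r: 1 << i for i, r in enumerate(RELS)}
I_LABEL = (1 << 13) - 1                      # the trivial label I

def encode(label):
    """Encode a set such as {"m", "o", "s"} as a bitmask."""
    mask = 0
    for r in label:
        mask |= BIT[r]
    return mask

def intersect(l1, l2):
    return l1 & l2                           # logical AND of two integers
```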
Unfortunately, it is impractical to implement the composition of two labels using table lookup, as the table would need to be of size 2^13 × 2^13, there being 2^13 possible labels.
We experimentally compared two practical methods for composition that have been proposed in the literature. Allen (1983) gives a method for composition which uses a table of size 13 × 13. The table gives the composition of the basic relations (see Allen, 1983, for the table). The composition of two labels is computed by a nested loop that forms the union of the pairwise composition of the basic relations in the labels. Hogge (1987) gives a method for composition which uses four tables of size 2^7 × 2^7, 2^7 × 2^6, 2^6 × 2^7, and 2^6 × 2^6. The composition of two labels is computed by taking the union of the results of four array references (H. Kautz independently devised a similar scheme). In our experiments, the implementations of the two methods differed only in how composition was computed. In both, the list, L, of edges to be processed was implemented using a first-in, first-out policy (i.e., a queue).
We also experimentally evaluated methods for reducing the number of composition operations that need to be performed.
[Figure 3: Path consistency algorithm for IA networks.
Path-Consistency(C, n)
1. L ← {(i, j) | 1 ≤ i < j ≤ n}
2. while (L is not empty)
3.   do select and delete an (i, j) from L
4.      for k ← 1 to n, k ≠ i and k ≠ j
5.        do t ← C_ik ∩ C_ij ∘ C_jk
6.           if (t ≠ C_ik)
7.             then C_ik ← t
8.                  C_ki ← Inverse(t)
9.                  L ← L ∪ {(i, k)}
10.          t ← C_kj ∩ C_ki ∘ C_ij
11.          if (t ≠ C_kj)
12.            then C_kj ← t
13.                 C_jk ← Inverse(t)
14.                 L ← L ∪ {(k, j)}]
One idea we examined for improving the efficiency is to avoid the computation when it can be predicted that the result will not constrain the label on the edge that completes the triangle. Three such cases we identified are shown in Figure 4. Another idea we examined, as first suggested by Mackworth (1977, p. 113), is that the order in which the edges are processed can affect the efficiency of the algorithm. The reason is the following. The same edge can appear on the list, L, of edges to be processed many times as it progressively gets constrained. The number of times a particular edge appears on the list can be reduced by a good ordering. For example, consider the edges (3, 1) and (3, 5) in Figure 2a. If we process edge (3, 1) first, edge (3, 2) will be updated to {o, oi, s, si, d, di, f, fi, eq} and will be added to L (k = 2 in Steps 5-9). Now if we process edge (3, 5), edge (3, 2) will be updated to {o, s, d} and will be added to L a second time. However, if we process edge (3, 5) first, (3, 2) will be immediately updated to {o, s, d} and will only be added to L once. Three heuristics we devised for ordering the edges are shown in Figure 9. The edges are assigned a heuristic value and are processed in ascending order. When a new edge is added to the list (Steps 9 & 14), the edge is inserted at the appropriate spot according to its new heuristic value. There has been little work on ordering heuristics for path consistency algorithms. Wallace and Freuder (1992) discuss ordering heuristics for arc consistency algorithms, which are closely related to path consistency algorithms. Two of their heuristics cannot be applied in our context, as the heuristics assume a constraint satisfaction problem with finite domains, whereas IA networks are examples of constraint satisfaction problems with infinite domains. A third heuristic (due to B.
Nudel, 1983) closely corresponds to our cardinality heuristic.
All experiments were performed on a Sun 4/25 with 12 megabytes of memory. We report timings rather than some other measure, such as the number of iterations, as we believe this gives a more accurate picture of whether the results are of practical interest. Care was taken to always start with the same base implementation of the algorithm and only add enough code to implement the composition method, new technique, or heuristic that we were evaluating. As well, every attempt was made to implement each method or heuristic as efficiently as we could.
[Figure 4: Skipping techniques. The computation, C_ik ∩ C_ij ∘ C_jk, can be skipped when it is known that the result of the composition will not constrain the label on the edge (i, k):
a. If either C_ij or C_jk is equal to I, the result of the composition will be I and therefore will not constrain the label on the edge (i, k). Thus, in Step 1 of Figure 3, edges that are labeled with I are not added to the list of edges to process.
b. If the condition (b ∈ C_ij ∧ bi ∈ C_jk) ∨ (bi ∈ C_ij ∧ b ∈ C_jk) ∨ (d ∈ C_ij ∧ di ∈ C_jk) is true, the result of composing C_ij and C_jk will be I. The condition is quickly tested using bit operations. Thus, if the above condition is true just before Step 5, Steps 5-9 can be skipped. A similar condition can be formulated and tested before Step 10.
c. If at some point in the computation of C_ij ∘ C_jk it is determined that the result accumulated so far would not constrain the label C_ik, the rest of the computation can be skipped.]
Given our implementations, Hogge's method for composition was found to be more efficient than Allen's method for both the benchmark problem and the random instances (see Figures 5-8). This much was not surprising. However, with the addition of the skipping techniques, the two methods became close in efficiency. The skipping techniques sometimes dramatically improved the efficiency of both methods. The ordering heuristics can improve the efficiency, although here the results were less dramatic. The cardinality heuristic and the constraintedness heuristic were also tried for ordering the edges. It was found that the cardinality heuristic was just as costly to compute as the weight heuristic but did not outperform it. The constraintedness heuristic reduced the number of iterations but proved too costly to compute. This illustrates the balance that must be struck between the effectiveness of a heuristic and the additional overhead the heuristic introduces.
For S(n, p), the skipping techniques and the weight ordering heuristic together can result in up to a ten-fold speedup over an already highly optimized implementation using Hogge's method for composition. The largest improvements in efficiency occur when the IA networks are sparse (p is smaller). This is encouraging, for it appears that the problems that arise in planning and molecular biology are also sparse. For B(n) and Benzer's matrix, the speedup is approximately four-fold. Perhaps most importantly, the execution times reported indicate that the path consistency algorithm, even though it is an O(n^3) algorithm, can be used on practical-sized problems. In Figure 8, we show how well the algorithms scale up. It can be seen that the algorithm that includes the weight ordering heuristic outperforms all others. However, this algorithm requires much space, and the largest problem we were able to solve was with 500 intervals.
In Figure 8, we show how well the algorithms scale up. It can be seen that the algorithm that includes the weight ordering heuristic outperforms all others. However, this algorithm requires much space and the largest problem we were able to solve was with 500 intervals. The algorithms that included only the skipping techniques were able to solve much larger problems before running out of space (up to 1500 intervals), and here the constraint was the time it took to solve the problems.

4. Backtracking Algorithm

Allen (1983) was the first to propose that a backtracking algorithm (Golomb & Baumert, 1965) could be used to find a consistent scenario of an IA network. In the worst case, a backtracking algorithm can take an exponential amount of time to complete. This worst case also applies here, as Vilain and Kautz (1986, 1989) show that finding a consistent scenario is NP-complete for IA networks. In spite of the worst case estimate, backtracking algorithms can work well in practice. In this section, we examine methods for speeding up a backtracking algorithm for finding a consistent scenario and present results on how well the algorithm performs on different classes of problems. In particular, we compare the efficiency of the algorithm on two alternative formulations of the problem: one that has previously been proposed by others and one that we have proposed (van Beek, 1992). We also improve the efficiency of the algorithm by designing heuristics for ordering the instantiation of the variables and for ordering the values in the domains of the variables.

As our starting point, we modeled our backtracking algorithm after that of Ladkin and Reinefeld (1992), as the results of their experimentation suggest that it is very successful at finding consistent scenarios quickly. Following Ladkin and Reinefeld, our algorithm has the following characteristics: preprocessing using a path consistency algorithm, static order of instantiation of the variables, chronological backtracking, and forward checking or pruning using a path consistency algorithm. In chronological backtracking, when the search reaches a dead end, the search simply backs up to the next most recently instantiated variable and tries a different instantiation. Forward checking (Haralick & Elliott, 1980) is a technique where it is determined and recorded how the instantiation of the current variable restricts the possible instantiations of future variables. This technique can be viewed as a hybrid of tree search and consistency algorithms (see Nadel, 1989; Nudel, 1983). (See Dechter, 1992, for a general survey on backtracking.)

Alternative formulations

Let C be the matrix representation of an IA network, where C_ij is the label on edge (i, j). The traditional method for finding a consistent scenario of an IA network is to search for a subnetwork S of a network C such that (a) S_ij is a subset of C_ij, (b) |S_ij| = 1, for all i, j, and (c) S is consistent. To find a consistent scenario we simply search through the different possible S's that satisfy conditions (a) and (b) (it is a simple matter to enumerate them) until we find one that also satisfies condition (c). Allen (1983) was the first to propose using backtracking search to search through the potential S's.
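For contrast with the decomposition method introduced next, here is a deliberately naive sketch of the traditional formulation: enumerate the singleton subnetworks S and test each one. The consistency check is left as a stub (in the actual algorithm it is a path consistency test interleaved with the search); all names here are our own.

```python
# Generate-and-test over singleton labelings: the search space is the
# product of the label cardinalities over all edges.

from itertools import product

def find_scenario(edges, labels, consistent=lambda s: True):
    """edges: list of (i, j); labels: dict mapping each edge to a set of
    basic relations.  Returns the first singleton labeling accepted by
    `consistent`, or None."""
    for choice in product(*(sorted(labels[e]) for e in edges)):
        scenario = dict(zip(edges, choice))
        if consistent(scenario):
            return scenario
    return None

edges = [(1, 2), (1, 3), (2, 3)]
labels = {(1, 2): {"b"}, (1, 3): {"b", "m"}, (2, 3): {"b", "o"}}
print(find_scenario(edges, labels))  # first of the 1*2*2 = 4 candidates
```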
Our alternative formulation is based on results for two restricted classes of IA networks, denoted here as SA networks and NB networks. In IA networks, the relation between two intervals can be any subset of I, the set of all thirteen basic relations. In SA networks (Vilain & Kautz, 1986), the allowed relations between two intervals are only those subsets of I that can be translated, using the relations {<, <=, =, >, >=, !=}, into conjunctions of relations between the endpoints of the intervals. For example, the IA network in Figure 2a is also an SA network. As a specific example, the interval relation "A {bi,mi} B" can be expressed as the conjunction of point relations, (B- < B+) and (A- < A+) and (A- >= B+), where A- and A+ represent the start and end points of interval A, respectively. (See Ladkin & Maddux, 1988, and van Beek & Cohen, 1990, for an enumeration of the allowed relations for SA networks.) In NB networks (Nebel & Bürckert, 1995), the allowed relations between two intervals are only those subsets of I that can be translated, using the relations {<, <=, =, >, >=, !=}, into conjunctions of Horn clauses that express the relations between the endpoints of the intervals. The set of NB relations is a strict superset of the SA relations.

Our alternative formulation is as follows. We describe the method in terms of SA networks, but the same method applies to NB networks. The idea is that, rather than search directly for a consistent scenario of an IA network as in previous work, we first search for something more general: a consistent SA subnetwork of the IA network. That is, we use backtrack search to find a subnetwork S of a network C such that (a) S_ij is a subset of C_ij, (b) S_ij is an allowed relation for SA networks, for all i, j, and (c) S is consistent.

In previous work, the search is through the alternative singleton labelings of an edge, i.e., |S_ij| = 1. The key idea in our proposal is that we decompose the labels into the largest possible sets of basic relations that are allowed for SA networks and search through these decompositions. This can considerably reduce the size of the search space. For example, suppose the label on an edge is {b,bi,m,o,oi,si}. There are six possible ways to label the edge with a singleton label: {b}, {bi}, {m}, {o}, {oi}, {si}, but only two possible ways to label the edge if we decompose the labels into the largest possible sets of basic relations that are allowed for SA networks: {b,m,o} and {bi,oi,si}. As another example, consider the network shown in Figure 2a. When searching through alternative singleton labelings, the worst case size of the search space is the product of the label cardinalities, |C_12| |C_13| ... |C_89| (the edges labeled with I must be included in the calculation). But when decomposing the labels into the largest possible sets of basic relations that are allowed for SA networks and searching through the decompositions, the size of the search space is 1, so no backtracking is necessary (in general, the search is, of course, not always backtrack free).

To test whether an instantiation of a variable is consistent with instantiations of past variables and with possible instantiations of future variables, we use an incremental path consistency algorithm (in Step 1 of Figure 3, instead of initializing L to be all edges, it is initialized to the single edge that has changed). The result of the backtracking algorithm is a consistent SA subnetwork of the IA network, or a report that the IA network is inconsistent. After backtracking completes, a solution of the SA network can be found using a fast algorithm given by van Beek (1992).
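The following sketch illustrates the decomposition step on the running example. The table of SA-allowed labels is a tiny illustrative fragment (the text above establishes that {b,m,o} and {bi,oi,si} are allowed; the full enumeration is given by Ladkin and Maddux, 1988, and van Beek and Cohen, 1990), and the greedy cover is one simple way to compute a decomposition, not necessarily the procedure used in the paper.

```python
# Decompose a label into large SA-allowed subsets: branching factor 2
# instead of 6 for the label {b,bi,m,o,oi,si}.

SA_ALLOWED = [frozenset("b m o".split()), frozenset("bi oi si".split()),
              frozenset(["b"]), frozenset(["bi"]), frozenset(["m"]),
              frozenset(["o"]), frozenset(["oi"]), frozenset(["si"])]

def decompose(label):
    """Greedily cover `label` with the largest SA-allowed subsets."""
    label, parts = set(label), []
    while label:
        best = max((s for s in SA_ALLOWED if s <= label), key=len)
        parts.append(best)
        label -= best
    return parts

print(decompose(frozenset("b bi m o oi si".split())))
# -> [{b,m,o}, {bi,oi,si}] (set display order may vary)
```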
Ordering heuristics

Backtracking proceeds by progressively instantiating variables. If no consistent instantiation exists for the current variable, the search backs up. The order in which the variables are instantiated and the order in which the values in the domains are tried as possible instantiations can greatly affect the performance of a backtracking algorithm, and various methods for ordering the variables (e.g., Bitner & Reingold, 1975; Freuder, 1982; Nudel, 1983) and ordering the values (e.g., Dechter & Pearl, 1988; Ginsberg et al., 1990; Haralick & Elliott, 1980) have been proposed.

    Weight. The weight heuristic is an estimate of how much the label on an edge will
    restrict the labels on other edges. Restrictiveness was measured for each basic
    relation by successively composing the basic relation with every possible label
    and summing the cardinalities of the results. The results were then suitably
    scaled to give the table shown below.

        relation:  b  bi  m  mi  o  oi  s  si  d  di  f  fi  eq
        weight:    3   3  2   2  4   4  2   2  4   3  2   2   1

    The weight of a label is then the sum of the weights of its elements. For
    example, the weight of the relation {m,o,s} is 2 + 4 + 2 = 8.

    Cardinality. The cardinality heuristic is a variation on the weight heuristic.
    Here, the weight of every basic relation is set to one.

    Constraint. The constraintedness heuristic is an estimate of how much a change in
    a label on an edge will restrict the labels on other edges. It is determined as
    follows. Suppose the edge we are interested in is (i, j). The constraintedness of
    the label on edge (i, j) is the sum of the weights of the labels on the edges
    (k, i) and (j, k), k = 1, ..., n, k != i, k != j. The intuition comes from
    examining the path consistency algorithm (Figure 3), which would propagate a
    change in the label C_ij. We see that C_ij will be composed with C_jk (Step 5)
    and with C_ki (Step 10), k = 1, ..., n, k != i, k != j.

    Figure 9: Ordering heuristics

The idea behind variable ordering heuristics is to instantiate variables first that will constrain the instantiation of the other variables the most. That is, the backtracking search attempts to solve the most highly constrained part of the network first. Three heuristics we devised for ordering the variables (edges in the IA network) are shown in Figure 9. For our alternative formulation, cardinality is redefined to count the decompositions rather than the elements of a label. The variables are put in ascending order. In our experiments the ordering is static: it is determined before the backtracking search starts and does not change as the search progresses. In this context, the cardinality heuristic is similar to a heuristic proposed by Bitner and Reingold (1975) and further studied by Purdom (1983).
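The weight heuristic of Figure 9 is straightforward to compute. The sketch below reproduces the published weight table and the {m,o,s} example; the function names are our own.

```python
# Weight heuristic: the weight of a label is the sum of the weights of
# its basic relations; edges are processed in ascending weight order.

WEIGHT = {"b": 3, "bi": 3, "m": 2, "mi": 2, "o": 4, "oi": 4,
          "s": 2, "si": 2, "d": 4, "di": 3, "f": 2, "fi": 2, "eq": 1}

def label_weight(label):
    return sum(WEIGHT[r] for r in label)

print(label_weight({"m", "o", "s"}))  # -> 8, as in the running example

def order_edges(labels):
    """Static variable ordering: sort edges by the weight of their labels."""
    return sorted(labels, key=lambda e: label_weight(labels[e]))
```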
The idea behind value ordering heuristics is to order the values in the domains of the variables so that the values most likely to lead to a solution are tried first. Generally, this is done by putting values first that constrain the choices for other variables the least. Here we propose a novel technique for value ordering that is based on knowledge of the structure of solutions. The idea is to first choose a small set of problems from a class of problems, and then find a consistent scenario for each instance without using value ordering. Once we have a set of solutions, we examine the solutions and determine which values in the domains occur most frequently in the solutions. As an example of using this information to order the values in a domain, suppose that the label on an edge is {b,bi,m,o,oi,si}. If we are decomposing the labels into singleton labels, we would order the values in the domain as follows (most preferred first): {b}, {bi}, {o}, {oi}, {m}, {si}. If we are decomposing the labels into the largest possible sets of basic relations that are allowed for SA networks, we would order the values in the domain as follows: {b,m,o}, {bi,oi,si}, since 1900 + 20 + 220 > 1900 + 220 + 14. This technique can be used whenever something is known about the structure of solutions.
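A sketch of the solution-frequency value ordering follows. Only the counts quoted above are grounded in the text (1900, 220, 20, and 14, with the pairing to b/bi, o/oi, m, and si inferred from the stated sums and the singleton ordering); the remaining entries of FREQ are placeholders we made up for the demo.

```python
# Order candidate labels by how often their basic relations occurred in
# previously found solutions, most promising first.

FREQ = {"b": 1900, "bi": 1900, "o": 220, "oi": 220, "m": 20, "si": 14,
        "mi": 20, "s": 30, "d": 50, "di": 50, "f": 10, "fi": 10, "eq": 5}

def order_values(decompositions):
    """Sort candidate labels by total solution frequency, highest first."""
    return sorted(decompositions,
                  key=lambda label: -sum(FREQ[r] for r in label))

parts = [frozenset("bi oi si".split()), frozenset("b m o".split())]
print(order_values(parts))  # {b,m,o} first: 1900+20+220 > 1900+220+14
```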
Experiments

All experiments were performed on a Sun 4/20 with 8 megabytes of memory. The first set of experiments, summarized in Figure 10, examined the effect of problem formulation on the execution time of the backtracking algorithm. We implemented three versions of the algorithm that were identical except that one searched through singleton labelings (denoted hereafter and in Figure 10 as the SI method) and the other two searched through decompositions of the labels into the largest possible allowed relations for SA networks and NB networks, respectively. All of the methods solved the same set of random problems drawn from B(n) and were also applied to Benzer's matrix (shown as separate points in Figure 10). For each problem, the amount of time required to solve the given IA network was recorded. As mentioned earlier, each IA network was preprocessed with a path consistency algorithm before backtracking search. The timings include this preprocessing time. The experiments indicate that the speedup by using the SA decomposition method can be up to three-fold over the SI method. As well, the SA decomposition method was able to solve larger problems before running out of space (n = 250 versus n = 175). The NB decomposition method gives exactly the same result as for the SA method on these problems because of the structure of the constraints. We also tested all three methods on a set of random problems drawn from S(100, p), where p = 1, 3/4, 1/2, and 1/8. In these experiments, the SA and NB methods were consistently twice as fast as the SI method. As well, the NB method showed no advantage over the SA method on these problems. This is surprising as the branching factor, and hence the size of the search space, is smaller for the NB method than for the SA method.

The second set of experiments, summarized in Figure 11, examined the effect on the execution time of the backtracking algorithm of heuristically ordering the variables and the values in the domains of the variables before backtracking search begins. For variable ordering, all six permutations of the cardinality, constraint, and weight heuristics were tried as the primary, secondary, and tertiary sorting keys, respectively. As a basis of comparison, the experiments included the case of no heuristics. Figure 11 shows approximate cumulative frequency curves for some of the experimental results. Thus, for example, we can read from the curve representing heuristic value ordering and best heuristic variable ordering that approximately 75% of the tests completed within 20 seconds, whereas with random value and variable ordering only approximately 5% of the tests completed within 20 seconds. We can also read from the curves the 0, 10, ..., 100 percentiles of the data sets (where the value of the median is the 50th percentile, or the value of the 50th test). The curves are truncated at time = 1800 (1/2 hour), as the backtracking search was aborted when this time limit was exceeded.

In our experiments we found that S(100, 1/4) represents a particularly difficult class of problems, and it was here that the different heuristics resulted in dramatically different performance, both over the no heuristic case and also between the different heuristics. With no value ordering, the best heuristic for variable ordering was the combination constraintedness/weight/cardinality, where constraintedness is the primary sorting key and the remaining keys are used to break subsequent ties. Somewhat surprisingly, the best heuristic for variable ordering changes when heuristic value ordering is incorporated. Here the combination weight/constraintedness/cardinality works much better. This heuristic together with value ordering is particularly effective at "flattening out" the distribution and so allowing a much greater number of problems to be solved in a reasonable amount of time. For S(100, p), where p = 1, 3/4, 1/2, and 1/8, the problems were much easier and all but three of the hundreds of tests completed within 20 seconds. In these problems, the heuristic used did not result in significantly different performance.

In summary, the experiments indicate that by changing the decomposition method we are able to solve larger problems before running out of space (n = 250 vs n = 175 on a machine with 8 megabytes; see Figure 10). The experiments also indicate that good heuristic orderings can be essential to being able to find a consistent scenario of an IA network in reasonable time. With a good heuristic ordering we were able to solve much larger problems before running out of time (see Figure 11). The experiments also provide additional evidence for the efficacy of Ladkin and Reinefeld's (1992, 1993) algorithm. Nevertheless, even with all of our improvements, some problems still took a considerable amount of time to solve. On consideration, this is not surprising. After all, the problem is known to be NP-complete.

Conclusions

Temporal reasoning is an essential part of tasks such as planning and scheduling. In this paper, we discussed the design and an empirical analysis of two key algorithms for a temporal reasoning system. The algorithms are a path consistency algorithm and a backtracking algorithm. The temporal reasoning system is based on Allen's (1983) interval-based framework for representing temporal information. Our emphasis was on how to make the algorithms robust and efficient in practice on problems that vary from easy to hard. For the path consistency algorithm, the bottleneck is in performing the composition operation. We developed methods for reducing the number of composition operations that need to be performed. These methods can result in almost an order of magnitude speedup over an already highly optimized implementation of the algorithm. For the backtracking algorithm, we developed variable and value ordering heuristics and showed that an alternative formulation of the problem can considerably reduce the time taken to find a solution. The techniques allow an interval-based temporal reasoning system to be applied to larger problems and to perform more efficiently in existing applications.
References

Aho, A. V., Hopcroft, J. E., & Ullman, J. D. (1974). The Design and Analysis of Computer Algorithms. Addison-Wesley.
Allen, J. F. (1983). Maintaining knowledge about temporal intervals. Comm. ACM.
Allen, J. F., & Koomen, J. A. (1983). Planning using a temporal world model.
Benzer, S. (1959). On the topology of the genetic fine structure. Proc. Nat. Acad. Sci. USA.
Bitner, J. R., & Reingold, E. M. (1975). Backtrack programming techniques. Comm. ACM.
Bruce, B. C. (1972). A model for temporal references and its application in a question answering program. Artificial Intelligence.
Dechter, R. (1992). From local to global consistency. Artificial Intelligence.
Dechter, R., & Pearl, J. (1988). Network-based heuristics for constraint satisfaction problems. Artificial Intelligence.
Freuder, E. C. (1982). A sufficient condition for backtrack-free search. J. ACM.
Ginsberg, M. L., Frank, M., Halpin, M. P., & Torrance, M. C. (1990). Search lessons learned from crossword puzzles.
Golomb, S., & Baumert, L. (1965). Backtrack programming. J. ACM.
Golumbic, M. C., & Shamir, R. (1993). Complexity and algorithms for reasoning about time: A graph-theoretic approach. J. ACM.
Haralick, R. M., & Elliott, G. L. (1980). Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence.
Hogge, J. C. (1987). TPLAN: A temporal interval-based planner with novel extensions.
Hooker, J. N. (1994). Needed: An empirical science of algorithms. Operations Research.
Ladkin, P., & Reinefeld, A. (1992). Effective solution of qualitative interval constraint problems. Artificial Intelligence.
Ladkin, P., & Reinefeld, A. (1993). A symbolic approach to interval constraint problems. Springer-Verlag.
Ladkin, P. B., & Maddux, R. D. (1988). On binary constraint networks.
Mackworth, A. K. (1977). Consistency in networks of relations. Artificial Intelligence.
Montanari, U. (1974). Networks of constraints: Fundamental properties and applications to picture processing. Inform. Sci.
Nadel, B. A. (1989). Constraint satisfaction algorithms. Computational Intelligence.
Nebel, B., & Bürckert, H.-J. (1995). Reasoning about temporal relations: A maximal tractable subclass of Allen's interval algebra. J. ACM.
ACM", "ref_id": "b21", "title": "Reasoning about temporal relations: A maximal tractable subclass of Allen's interval algebra", "year": "1995" }, { "authors": "B Nudel", "journal": "Arti cial Intelligence", "ref_id": "b22", "title": "Consistent-labeling problems and their algorithms: Expected-complexities and theory-based heuristics", "year": "1983" }, { "authors": "P W Purdom", "journal": "Arti cial Intelligence", "ref_id": "b23", "title": "Search rearrangement backtracking and polynomial average time", "year": "1983" }, { "authors": "P Van Beek", "journal": "Arti cial Intelligence", "ref_id": "b24", "title": "Reasoning about qualitative temporal information", "year": "1992" }, { "authors": "P Van Beek; R Cohen", "journal": "Computational Intelligence", "ref_id": "b25", "title": "Exact and approximate reasoning about temporal relations", "year": "1990" }, { "authors": "M Vilain; H Kautz", "journal": "", "ref_id": "b26", "title": "Constraint propagation algorithms for temporal reasoning", "year": "1986" }, { "authors": "M Vilain; H Kautz; P Van Beek", "journal": "", "ref_id": "b27", "title": "Constraint propagation algorithms for temporal reasoning: A revised report", "year": "1989" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b28", "title": "", "year": "" }, { "authors": "R J Wallace; E C Freuder", "journal": "", "ref_id": "b29", "title": "Ordering heuristics for arc consistency algorithms", "year": "1992" } ]
The Design and Experimental Analysis of Algorithms for Temporal Reasoning
Many applications, from planning and scheduling to problems in molecular biology, rely heavily on a temporal reasoning component. In this paper, we discuss the design and empirical analysis of algorithms for a temporal reasoning system based on Allen's influential interval-based framework for representing temporal information. At the core of the system are algorithms for determining whether the temporal information is consistent, and, if so, finding one or more scenarios that are consistent with the temporal information. Two important algorithms for these tasks are a path consistency algorithm and a backtracking algorithm. For the path consistency algorithm, we develop techniques that can result in up to a ten-fold speedup over an already highly optimized implementation. For the backtracking algorithm, we develop variable and value ordering heuristics that are shown empirically to dramatically improve the performance of the algorithm. As well, we show that a previously suggested reformulation of the backtracking search problem can reduce the time and space requirements of the backtracking search. Taken together, the techniques we develop allow a temporal reasoning component to solve problems that are of practical size.
Peter van Beek and Dennis W. Manchak
Figure 2: Representing qualitative relations between intervals.
Figure 5: Effect of heuristics on time (sec.) of path consistency algorithms applied to Benzer's matrix.
Figure 7: Effect of heuristics on average time (sec.) of path consistency algorithms. Each data point is the average of 100 tests on random instances of IA networks drawn from S(100, p); the coefficient of variation (standard deviation / average) for each set of 100 tests is bounded by 0.25.
Figure 10: Effect of decomposition method on average time (sec.) of backtracking algorithm. Each data point is the average of 100 tests on random instances of IA networks drawn from B(n); the coefficient of variation (standard deviation / average) for each set of 100 tests is bounded by 0.15.
Figure 11: Effect of variable and value ordering heuristics on time (sec.) of backtracking algorithm. Each curve represents 100 tests on random instances of IA networks drawn from S(100, 1/4), where the tests are ordered by time taken to solve the instance. The backtracking algorithm used the SA decomposition method.

Table: There is an action called "Stack". The effect of the stack action is On(x, y): block x is on top of block y. For the action to be successfully executed, the conditions Clear(x) and Clear(y) must hold: neither block x nor block y has a block on them. Planning introduces two stacking actions and the following temporal constraints.

    Initial Conditions            Goal Conditions
    Initial {d} Clear(A)          Goal {d} On(A,B)
    Initial {d} Clear(B)          Goal {d} On(B,C)
    Initial {d} Clear(C)

    Stacking Action               Stacking Action
    Stack(A,B) {bi,mi} Initial    Stack(B,C) {bi,mi} Initial
    Stack(A,B) {d} Clear(A)       Stack(B,C) {d} Clear(B)
    Stack(A,B) {f} Clear(B)       Stack(B,C) {f} Clear(C)
1. Introduction: Why Dynamic Preferences are Needed

Preferences among defaults play a crucial role in nonmonotonic reasoning. One source of preferences that has been studied intensively is specificity (Poole, 1985; Touretzky, 1986; Touretzky, Thomason, & Horty, 1991). In case of a conflict between defaults we tend to prefer the more specific one since this default provides more reliable information. E.g., if we know that students are adults, adults are normally employed, and students are normally not employed, we want to conclude "Peter is not employed" from the information that Peter is a student, thus preferring the student default over the conflicting adult default.

Specificity is an important source of preferences, but not the only one, and at least in some applications not necessarily the most important one. In the legal domain it may, for instance, be the case that a more general rule is preferred since it represents federal law as opposed to state law (Prakken, 1993). In these cases preferences may be based on some basic principles regulating how conflicts among rules are to be resolved.

Also in other application domains, like model based diagnosis or configuration, preferences play a fundamental role. Model based diagnosis uses logical descriptions of the normal behaviour of components of a device together with a logical description of the actually observed behaviour. One tries to assume normal behaviour for as many components as possible. A diagnosis corresponds to a set of components for which these normalcy assumptions lead to inconsistency. Very often a large number of possible diagnoses is obtained. In real life some components are less reliable than others. To eliminate less plausible diagnoses one can give the normalcy assumptions for reliable components higher priority.

In configuration tasks it is often impossible to achieve all of the design goals. Often one can distinguish more important goals from less important ones. To construct the best possible configurations, goals then have to be represented as defaults with different preferences according to their desirability.

The relevance of preferences is well-recognized in nonmonotonic reasoning, and prioritized versions of most of the nonmonotonic logics have been proposed, e.g., prioritized circumscription (Lifschitz, 1985), hierarchic autoepistemic logic (Konolige, 1988), and prioritized default logic (Brewka, 1994a). In these approaches preferences are handled in an "external" manner in the following sense: some ordering among defaults is used to control the generation of the nonmonotonic conclusions. For instance, in the case of prioritized default logic this information is used to control the generation of extensions. However, the preference information itself is not expressed in the logical language. This means that this kind of information has to be fully pre-specified; there is no way of reasoning about (as opposed to reasoning with) preferences. This is in stark contrast to the way people reason and argue with each other.
In legal argumentation, for instance, preferences are context-dependent, and the assessment of the preferences among the involved conflicting laws is a crucial (if not the most crucial) part of the reasoning.

What we would like to have, therefore, is an approach that allows us to represent preference information in the language and derive such information dynamically. In a recent paper (Brewka, 1994b) the author has described a variant of normal default logic in which reasoning about preferences is possible. Although the version of default logic presented in this earlier paper produces reasonable results in most cases, this approach has several drawbacks:

1. The approach is computationally extremely demanding, as it involves the construction of the Reiter extensions and an additional compatibility check for each extension guaranteeing that the preference information was taken into account adequately.

2. It may happen that consistent default theories, i.e., theories whose strict part is satisfiable, possess no extensions at all. This is astonishing since in that paper we only dealt with normal defaults. The non-existence of extensions is due to defeasible preference information. It is highly questionable whether such information should be able to destroy all extensions.

3. The earlier paper did not take non-normal defaults into account; it is thus not general enough to cover normal logic programs with negation as failure.

The approach presented in this paper will be based on extended logic programs with two types of negation. This means that in comparison with our earlier proposal we are more restrictive in one respect and more general in another: we are more restrictive since we do not allow arbitrary first order formulas as in normal default logic; we are more general since we admit negation as failure and hence rules which correspond to non-normal defaults in Reiter's logic. We also switch from the extension based semantics of default logic to well-founded semantics (van Gelder, Ross, & Schlipf, 1991; Przymusinski, 1991; Lifschitz, 1996), i.e., to an inherently skeptical approach where the nonmonotonic conclusions are defined directly, not through the notion of extensions. It is well-known that well-founded semantics sometimes loses intuitively expected conclusions. This is also the case in our proposal. However, this is outweighed by a tremendous gain in efficiency: the well-founded conclusions can be computed in polynomial time.

The outline of the rest of the paper is as follows: in Section 2 we first review a definition of well-founded semantics for logic programs with two types of negation which is based on the double application of a certain anti-monotone operator. The definition extends Baral and Subrahmanian's formulation of well-founded semantics for normal logic programs (Baral & Subrahmanian, 1991) and was used by several authors (Baral & Gelfond, 1994; Lifschitz, 1996). We show that this definition suffers from an unnecessary weakness and present a reformulation that leads to better results. Section 3, the main section of the paper, introduces our dynamic treatment of preferences together with several small motivating examples. We show that our conclusions are, in general, a superset of the well-founded conclusions. Section 4 illustrates the expressive power of our approach using a more realistic example from legal reasoning. Section 5 shows that the worst case time complexity for generating well-founded conclusions for prioritized programs is polynomial.
Section 6 investigates the relationship to Gelfond and Lifschitz's answer set semantics (Gelfond & Lifschitz, 1990). Section 7 discusses related work and concludes.

2. Well-Founded Semantics for Extended Logic Programs

A (propositional) extended logic program consists of rules of the form

    c <- a_1, ..., a_n, not b_1, ..., not b_m

where the a_i, b_j and c are propositional literals, i.e., either propositional atoms or such atoms preceded by the classical negation sign. The symbol not denotes negation by failure (weak negation); ¬ denotes classical (strong) negation. For convenience we will sometimes use a rule schema to represent a set of propositional rules, namely the set of all ground instances of the schema.

Extended logic programs are very useful for knowledge representation purposes; see for instance (Baral & Gelfond, 1994) for a number of illustrative examples. Two major semantics for extended logic programs have been defined: (1) answer set semantics (Gelfond & Lifschitz, 1990), an extension of stable model semantics, and (2) a version of well-founded semantics (Przymusinski, 1991). The second approach can be viewed as an efficient approximation of the first.

Let us first introduce answer sets. We say a rule r in P of the form above is defeated by a literal l if l = b_i for some i in {1, ..., m}. We say r is defeated by a set of literals X if X contains a literal that defeats r. Furthermore, we call the rule obtained by deleting weakly negated preconditions from r the monotonic counterpart of r and denote it with Mon(r). We also apply Mon to sets of rules with the obvious meaning.

Definition 1. Let P be a logic program, X a set of literals. The X-reduct of P, denoted P^X, is the program obtained from P by deleting each rule defeated by X, and replacing each remaining rule r with its monotonic counterpart Mon(r).

Definition 2. Let R be a set of rules without negation as failure. Cn(R) denotes the smallest set of literals that is

1. closed under R, and
2. logically closed, i.e., either consistent or equal to the set of all literals.

Definition 3. Let P be a logic program, X a set of literals. Define an operator γ_P as follows:

    γ_P(X) = Cn(P^X).

X is an answer set of P iff X = γ_P(X). A literal l is a consequence of a program P under answer set semantics, denoted l ∈ Ans(P), iff l is contained in all answer sets of P.

The second major semantics for extended logic programs, well-founded semantics, is an inherently skeptical semantics that refrains from drawing conclusions whenever there is a potential conflict. The original formulation of well-founded semantics for general logic programs by van Gelder, Ross and Schlipf (1991) is based on a certain partial model. Przymusinski reconstructed this definition in 3-valued logic (Przymusinski, 1990). The formulation using an anti-monotone operator was first given by Baral and Subrahmanian (1991) for general logic programs, together with a corresponding definition for default logic. The straightforward extension of this formulation (respectively, the restriction of the default logic definition) to extended logic programs that will be introduced now was used by several authors, e.g. (Baral & Gelfond, 1994; Lifschitz, 1996).
(Note that in this paper we will only consider the literals that are true in the corresponding 3-valued semantics.)

Like answer set semantics, the well-founded semantics for extended logic programs is based on the operator γ_P. However, the operator is used in a totally different way. Since γ_P is anti-monotone, the function Γ_P = (γ_P)^2 is monotone. According to the famous Knaster-Tarski theorem (Tarski, 1955), every monotone operator has a least fixpoint. The set of well-founded conclusions of a program P, denoted WFS(P), is defined to be this least fixpoint of Γ_P. The fixpoint can be approached from below by iterating Γ_P on the empty set. In case P is finite, this iteration is guaranteed to actually reach the fixpoint.

The intuition behind this use of the operator is as follows: whenever γ_P is applied to a set of literals X known to be true, it produces the set of all literals that are still potentially derivable. Applying it to such a set of potentially derivable literals, it produces a set of literals known to be true, often larger than the original set X. Starting with the empty set and iterating until the fixpoint is reached thus produces a set of true literals. It can be shown that every well-founded conclusion is a conclusion under the answer set semantics. Well-founded semantics can thus be viewed as an approximation of answer set semantics.

Unfortunately, it turns out that for many programs the set of well-founded conclusions is extremely small and provides a very poor approximation of answer set semantics. Consider the following program P_0, which has also been discussed by Baral and Gelfond (1994):

    (1) b <- not ¬b
    (2) a <- not ¬a
    (3) ¬a <- not a

The set of well-founded conclusions is empty since γ_{P_0}(∅) equals Lit, the set of all literals, and the Lit-reduct of P_0 contains no rule at all. This is surprising since, intuitively, the conflict between (2) and (3) has nothing to do with ¬b and b.

This problem arises whenever the following conditions hold:

1. a complementary pair of literals is provable from the monotonic counterparts of the rules of a program P, and
2. there is at least one proof for each of the complementary literals whose rules are not defeated by Cn(P'), where P' consists of the "strict" rules in P, i.e., those without negation as failure.

In this case well-founded semantics concludes l iff l ∈ Cn(P'). It should be obvious that such a situation is not just a rare limiting case. To the contrary, it can be expected that many commonsense knowledge bases will give rise to such undesired behaviour. For instance, assume a knowledge base contains the information that birds normally fly and penguins normally don't, expressed as the set of ground instances of the following rule schemata:

    (1) fly(x) <- not ¬fly(x), bird(x)
    (2) ¬fly(x) <- not fly(x), penguin(x)

Assume further that the knowledge base contains the information that Tweety is a penguin bird. Now if neither fly(Tweety) nor ¬fly(Tweety) follows from strict rules in the knowledge base, we are in the same situation as with P_0: well-founded semantics does not draw any "defeasible" conclusion, i.e., a conclusion derived from a rule with weak negation in the body, at all.

We want to show that a minor reformulation of the fixpoint operator can overcome this intolerable weakness and leads to much better results. Consider the following operator:

    γ*_P(X) = Cl(P^X)

where Cl(R) denotes the minimal set of literals closed under the (classical) rules R. Cl(R) is thus like Cn(R) without the requirement of logical closedness.
Now define

    Γ*_P(X) = γ_P(γ*_P(X)).

Again we iterate on the empty set to obtain the well-founded conclusions of a program P, which we will denote WFS*(P).

Consider the effects of this modification on our example P_0. γ*_{P_0}(∅) = {a, ¬a, b}. Rule (1) is contained in the {a, ¬a, b}-reduct of P_0, and thus Γ*_{P_0}(∅) = {b}. Since b is also the only literal contained in all answer sets of P_0, our approximation actually coincides with answer set semantics in this case.

In the Tweety example, both fly(Tweety) and ¬fly(Tweety) are provable from the ∅-reduct of the knowledge base. However, this has no influence on whether a rule not containing the weak negation of one of these two literals in the body is used to produce Γ*_P(∅) or not. The effect of the conflicting information about Tweety's flying ability is thus kept local and does not have the disastrous consequences it has in the original formulation of well-founded semantics.

It is not difficult to see that the new monotone operator is equivalent to the original one whenever P does not contain negation as failure. In this case the X-reduct of P, for arbitrary X, is equivalent to P, and for this reason it does not make any difference whether we use γ_P or γ*_P as the operator to be applied first in the definition of Γ_P. The same is obviously true for programs without classical negation: for such programs Cn can never produce complementary pairs of literals, and for this reason the logical closedness condition is obsolete.

In the general case the new operator produces more conclusions than the original one:

Proposition 1. Let P be an extended logic program. For an arbitrary set of literals X we have Γ_P(X) ⊆ Γ*_P(X).

Proof: We have γ*_P(X) ⊆ γ_P(X), thus γ_P(γ_P(X)) ⊆ γ_P(γ*_P(X)). From this the result follows immediately. □

It remains to be shown that the new operator produces no unwanted results, i.e., that our new semantics can still be viewed as an approximation of answer set semantics.

Proposition 2. Let P be an extended logic program. Let Ans(P) be the set of literals contained in all answer sets of P. WFS* is correct wrt. answer set semantics, i.e., WFS*(P) ⊆ Ans(P).

Proof: The proposition is trivially satisfied whenever P has no answer set at all, or when Lit is the single answer set of P. So assume P possesses a non-empty set of consistent answer sets, the only remaining possibility according to results in (Gelfond & Lifschitz, 1990). To show that iterating Γ*_P on the empty set cannot produce a literal s not in Ans(P), it suffices to show that X ⊆ Ans(P) implies Γ*_P(X) ⊆ Ans(P). Let A be an arbitrary answer set and assume X ⊆ Ans(P). Since X ⊆ A we have P^A ⊆ P^X. Since by assumption A is consistent, we have A = Cn(P^A) ⊆ Cl(P^X). Therefore Γ*_P(X) = Cn(P^{Cl(P^X)}) ⊆ Cn(P^A) = A. □

For the rest of the paper a minor reformulation turns out to be convenient. Instead of using the monotonic counterparts of undefeated rules, we will work with the original rules and extend the definitions of the two operators Cn and Cl accordingly, requiring that weakly negated preconditions be neglected, i.e., for an arbitrary set of rules P with weak negation we define Cn(P) = Cn(Mon(P)) and Cl(P) = Cl(Mon(P)). We can now equivalently characterize γ_P and γ*_P by the equations

    γ_P(X) = Cn(P_X)        γ*_P(X) = Cl(P_X)

where P_X denotes the set of rules not defeated by X.
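As an executable illustration of the modified semantics, the following sketch iterates Γ*_P on program P_0 using the rule-set characterization just given. It is our own simplified encoding, not code from the paper: the logical-closedness case of Cn (the explosion to Lit) is omitted, which is harmless here because the outer closure is consistent.

```python
# WFS* sketch: iterate Gamma*_P(X) = Cn(P_{Cl(P_X)}) from the empty set.
# Rules are (head, strict_body, weak_body) triples over string literals;
# "-a" stands for the classical negation of a.

def undefeated(rules, X):
    return [r for r in rules if not any(b in X for b in r[2])]

def closure(rules):
    """Cl: least literal set closed under the rules, weak bodies ignored."""
    lits, changed = set(), True
    while changed:
        changed = False
        for head, body, _ in rules:
            if set(body) <= lits and head not in lits:
                lits.add(head); changed = True
    return lits

def gamma_star(rules, X):
    # outer Cn approximated by Cl (consistency check omitted for brevity)
    return closure(undefeated(rules, closure(undefeated(rules, X))))

P0 = [("b", (), ("-b",)), ("a", (), ("-a",)), ("-a", (), ("a",))]
X = set()
while True:                      # iterate up to the least fixpoint
    nxt = gamma_star(P0, X)
    if nxt == X:
        break
    X = nxt
print(X)  # -> {'b'}, matching the discussion of P_0 above
```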
Before we turn to the treatment of preferences, we give an alternative characterization of Γ*_P based on the following notion:

Definition 4. Let P be a logic program, X a set of literals. A rule r is X-safe wrt. P (r ∈ SAFE_X(P)) if r is not defeated by γ*_P(X) or, equivalently, if r ∈ P_{γ*_P(X)}.

With this new notion we can obviously characterize Γ*_P as follows:

    Γ*_P(X) = Cn(P_{γ*_P(X)}) = Cn(SAFE_X(P)).

It is this last formulation that we will modify later. More precisely, the notion of X-safeness will be weakened to handle preferences adequately.

3. Adding Preferences

In order to handle preferences we need to be able to express preference information explicitly. Since we want to do this in the logical language, we have to extend the language. We do this in two respects:

1. we use a set of rule names N together with a naming function name to be able to refer to particular rules,
2. we use a special (infix) symbol ≺ that can take rule names as arguments to represent preferences among rules.

Intuitively, n_1 ≺ n_2, where n_1 and n_2 are rule names, means that the rule with name n_1 is preferred over the rule with name n_2. A prioritized logic program is a pair (R, name) where R is a set of rules and name a naming function. To make sure that the symbol ≺ has its intended meaning, i.e., represents a transitive and anti-symmetric relation, we assume that R contains all ground instances of the schemata

    N_1 ≺ N_3 <- N_1 ≺ N_2, N_2 ≺ N_3
    ¬(N_2 ≺ N_1) <- N_1 ≺ N_2

where the N_i are parameters for names. Note that in our examples we won't mention these rules explicitly.

The function name is a partial injective naming function that assigns a name n ∈ N to some of the rules in R. Note that not all rules necessarily have a name. The reason is that names will only play a role in conflict resolution among defeasible rules, i.e., rules with weakly negated preconditions. For this reason, names for strict rules, i.e., rules in which the symbol not does not appear, won't be needed. A technical advantage of leaving some rules unnamed is that the use of rule schemata with parameters for rule names does not necessarily make programs infinite. If we required names for all rules, we would have to use a parameterized name for each schema and thus end up with an infinite set N of names.

In our examples we assume that N is given implicitly. We also define the function name implicitly. We write

    n_i: c <- a_1, ..., a_n, not b_1, ..., not b_m

to express that name(c <- a_1, ..., a_n, not b_1, ..., not b_m) = n_i. For convenience we will simply speak of programs instead of prioritized logic programs whenever this does not lead to misunderstandings.

Before introducing new definitions, we would like to point out how we want the new explicit preference information to be used. Our approach follows two principles:

1. we want to extend well-founded semantics, i.e., we want every WFS*-conclusion to remain a conclusion in the prioritized approach,
2. we want to use preferences to solve conflicts whenever this is possible without violating principle 1.

Let us first explain what we mean by conflict here. Rules may be conflicting in several ways. In the simplest case two rules may have complementary literals in their heads. We call this a type-I conflict. Conflicts of this type may render the set of well-founded conclusions inconsistent, but do not necessarily do so. If, for instance, a precondition of one of the rules is not derivable, or a rule is defeated, the conflict is implicitly resolved.
In that case the preference information will simply be neglected. Consider the following program P_1:

    n_1: b <- not c
    n_2: ¬b <- not b
    n_3: n_2 ≺ n_1

There is a type-I conflict between n_1 and n_2. Although the explicit preference information gives precedence to n_2, we want to apply n_1 here to comply with the first of our two principles. Technically, this means that we can apply a preferred rule r only if we are sure that r's application actually leads to a situation where literals defeating r can no longer be derived.

The following two rules exhibit a different type of conflict:

    a <- not b
    b <- not a

The heads of these rules are not complementary. However, the application of one rule defeats the other and vice versa. We call this a direct type-II conflict. Of course, in the general case the defeat of the conflicting rule may be indirect, i.e., based on the existence of additional rules. We say r_1 and r_2 are type-II conflicting wrt. a set of rules R iff

1. Cl(R) neither defeats r_1 nor r_2,
2. Cl(R + r_1) defeats r_2, and
3. Cl(R + r_2) defeats r_1.

Here R + r abbreviates R ∪ {r}. A direct type-II conflict is thus a type-II conflict wrt. the empty set of rules. The rule sets R that have to be taken into account in our well-founded semantics based approach are subsets of the rules which are undefeated by the set of literals known to be true. Note that the two types of conflict are not disjoint, i.e., two rules may be in conflict of both type-I and type-II. Consider the following program P_2, a slight modification of P_1:

    n_1: b <- not c, not ¬b
    n_2: ¬b <- not b
    n_3: n_2 ≺ n_1

Now we have a type-II conflict between n_1 and n_2 (more precisely, a direct type-II and a type-I conflict) that is not solvable by the implicit mechanisms of well-founded semantics alone. It is this kind of conflict that we try to solve by the explicit preference information. In our example, n_2 will be used to derive ¬b. Note that now the application of n_2 defeats n_1 and there is no danger that a literal defeating n_2 might become derivable later. Generally, a type-II conflict between r_1 and r_2 (wrt. some undefeated rules of the program) will be solved in favour of the preferred rule, say r_1, only if applying r_1 excludes any further possibility of deriving an r_1-defeating literal.

Note that every type-I conflict can be turned into a direct type-II conflict by a (nonequivalent!) rerepresentation of the rules: if each conflicting rule r is replaced by its seminormal form, then all conflicts become type-II conflicts and are thus amenable to conflict resolution through preference information. (The seminormal form of c <- a_1, ..., a_n, not b_1, ..., not b_m is c <- a_1, ..., a_n, not b_1, ..., not b_m, not c', where c' is the complement of c. The term seminormal is taken from Reiter, 1980.)

After this motivating discussion, let us present the new definitions. Our treatment of priorities is based on a weakening of the notion of X-safeness. In Section 2 we considered a rule r as X-safe whenever there is no proof for a literal defeating r from the monotonic counterparts of X-undefeated rules. Now, in the context of a prioritized logic program, we will consider a rule r as X-safe if there is no such proof from the monotonic counterparts of a certain subset of the X-undefeated rules. The subset to be used depends on the rule r and consists of those rules that are not "dominated" by r. Intuitively, r' is dominated by r iff r' is (1) known to be less preferred than r, and (2) defeated when r is applied together with rules that have already been established to be X-safe.
(2) is necessary to make sure that explicit preference information is used the right way, according to our discussion of P_1.

It is obvious that whenever there is no proof for a defeating literal from all X-undefeated rules, there can be no such proof from a subset of these rules. Rules that were X-safe according to our earlier definition thus remain X-safe. Here are the precise definitions:

Definition 5. Let P = (R, name) be a prioritized logic program, X a set of literals, Y a set of rules, and r ∈ R. The set of rules dominated by r wrt. X and Y, denoted Dom_{X,Y}(r), is the set

    {r' ∈ R | name(r) ≺ name(r') ∈ X and Cl(Y + r) defeats r'}.

Note that Dom_{X,Y}(r) is monotonic in both X and Y. We can now define the X-safe rules inductively:

Definition 6. Let P = (R, name) be a prioritized logic program, X a set of literals. The set of X-safe rules of P, denoted SAFE^pr_X(P), is defined as follows: SAFE^pr_X(P) is the union of the R_i, i >= 0, where R_0 = ∅ and, for i > 0,

    R_i = {r ∈ R | r is not defeated by Cl(R_X \ Dom_{X,R_{i-1}}(r))}.

Definition 7. The prioritized counterpart of Γ*_P is obtained from the characterization at the end of Section 2 by replacing SAFE with SAFE^pr: Γ^pr_P(X) = Cn(SAFE^pr_X(P)). The set of well-founded conclusions of a prioritized program P, denoted WFS^pr(P), is the least fixpoint of Γ^pr_P.

Let us reconsider P_1:

    n_1: b <- not c
    n_2: ¬b <- not b
    n_3: n_2 ≺ n_1

Since γ*_{P_1}(∅) does not defeat n_1, this rule is safe from the beginning, i.e., n_1 ∈ SAFE^pr_∅(P_1). Γ^pr_{P_1}(∅) yields {n_2 ≺ n_1, ¬(n_1 ≺ n_2), b}, which is also the least fixpoint. The explicit preference does not interfere with the implicit one, as intended.

The situation changes in P_2, where the first rule of P_1 is replaced by

    n_1: b <- not c, not ¬b

The new rule n_1 is not in SAFE^pr_∅(P_2) since it is defeated by the consequence of n_2, and n_2 is not dominated by n_1. Γ^pr_{P_2}(∅) yields

    S_1 = {n_2 ≺ n_1, ¬(n_1 ≺ n_2)}.

Now n_2 ∈ SAFE^pr_{S_1}(P_2) since n_2 dominates n_1 wrt. S_1 and the empty set of rules. We thus conclude ¬b, as intended. The least fixpoint is

    S_2 = {n_2 ≺ n_1, ¬(n_1 ≺ n_2), ¬b}.

In (Brewka, 1994b) we used an example to illustrate the possible non-existence of extensions in our earlier approach. This example involved two normal defaults, each of which had the conclusion that the other one is to be preferred. The prioritized logic programming representation of this example is the following:

    n_1: n_2 ≺ n_1 <- not ¬(n_2 ≺ n_1)
    n_2: n_1 ≺ n_2 <- not ¬(n_1 ≺ n_2)

It is straightforward to verify that the set of well-founded conclusions for this example is empty.
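To spell out the dominance check used in the P_2 computation above, here is a minimal encoding of Definition 5 for that program. The tuple encoding and function names are our own; the closure here handles only body-less rules, which is all P_2 needs.

```python
# Definition 5 on P_2: with X = S_1, rule n2 dominates n1 because
# Cl({} + n2) derives -b, which defeats n1.  Rules are (name, head,
# weak_body) pairs with empty strict bodies; "-b" is classical negation.

N1 = ("n1", "b", ("c", "-b"))
N2 = ("n2", "-b", ("b",))

def cl(rules):
    # strict bodies are empty here, so the closure is the set of heads
    return {head for _, head, _ in rules}

def dom(r, X, Y, rules):
    heads = cl(Y + [r])
    return [r2 for r2 in rules
            if ("prec", r[0], r2[0]) in X and any(b in heads for b in r2[2])]

S1 = {("prec", "n2", "n1")}
print([r[0] for r in dom(N2, S1, [], [N1, N2])])  # -> ['n1']
```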
4. A Legal Reasoning Example

In this section we want to show that the additional expressiveness provided by our approach actually helps representing real world problems. We will use an example first discussed by Gordon (1993, p. 7). We somewhat simplified it for our purposes. The same example was also used in (Brewka, 1994b) to illustrate the approach presented there.

Assume a person wants to find out if her security interest in a certain ship is perfected. She currently has possession of the ship. According to the Uniform Commercial Code (UCC, §9-305), a security interest in goods may be perfected by taking possession of the collateral. However, there is a federal law called the Ship Mortgage Act (SMA) according to which a security interest in a ship may only be perfected by filing a financing statement. Such a statement has not been filed. Now the question is whether the UCC or the SMA takes precedence in this case. There are two known legal principles for resolving conflicts of this kind. The principle of Lex Posterior gives precedence to newer laws. In our case the UCC is newer than the SMA. On the other hand, the principle of Lex Superior gives precedence to laws supported by the higher authority. In our case the SMA has higher authority since it is federal law.

The available information can nicely be represented in our approach. To make the example somewhat shorter we use the notation

    c <= a_1, ..., a_n, not b_1, ..., not b_m

as an abbreviation for the rule

    c <- a_1, ..., a_n, not b_1, ..., not b_m, not c'

where c' is the complement of c, i.e., ¬c if c is an atom and a if c = ¬a. Such rules thus correspond to semi-normal or, if m = 0, normal defaults in Reiter's default logic (Reiter, 1980).

We use the ground instances of the following named rules to represent the relevant article of the UCC, the SMA, Lex Posterior (LP), and Lex Superior (LS):

    UCC:          perfected <= possession
    SMA:          ¬perfected <= ship, ¬fin-statement
    LP(d_1, d_2): d_1 ≺ d_2 <= more-recent(d_1, d_2)
    LS(d_1, d_2): d_1 ≺ d_2 <= fed-law(d_1), state-law(d_2)

The following facts are known about the case and are represented as rules without body (and without name):

    possession
    ship
    ¬fin-statement
    more-recent(UCC, SMA)
    fed-law(SMA)
    state-law(UCC)

Let's call the above set of literals H. Iterated application of Γ^pr_P yields the following sequence of literal sets (in each case S_i = (Γ^pr_P)^i(∅)):

    S_1 = H
    S_2 = S_1

The iteration produces no new results besides the facts already contained in the program. The reason is that UCC and SMA block each other, and that no preference information is produced since the relevant instances of Lex Posterior and Lex Superior also block each other. The situation changes if we add information telling us how conflicts between the latter two are to be resolved. Assume we add the following information (in realistic settings one would again use a schema here; in order to keep the example simple we use the relevant instance of the schema directly):

    LS(SMA, UCC) ≺ LP(UCC, SMA)

We obtain the following sequence:

    S_1 = H ∪ {LS(SMA, UCC) ≺ LP(UCC, SMA), ¬(LP(UCC, SMA) ≺ LS(SMA, UCC))}
    S_2 = S_1 ∪ {SMA ≺ UCC, ¬(UCC ≺ SMA)}
    S_3 = S_2 ∪ {¬perfected}
    S_4 = S_3

This example nicely illustrates how in our approach conflict resolution strategies can be specified declaratively, by simply asserting relevant preferences among the involved conflicting rules.

5. Complexity

The time complexity of well-founded semantics for a general logic program P is known to be quadratic in the size of P, a result attributed to folklore in (Baral & Gelfond, 1994). A proof was given by Witteveen (1991). His analysis is based on Dowling and Gallier's result whereby satisfiability of Horn clauses can be tested in linear time (Dowling & Gallier, 1984). In Dowling and Gallier's approach it is actually a minimal model of a Horn theory that is computed in linear time. Since minimal models of Horn theories are equivalent to closures of rules without negation, the result is directly applicable to well-founded semantics for general logic programs. It also applies to well-founded semantics for extended logic programs since, for the computation of the least fixed point of Γ_P respectively Γ*_P, the complementary literals l and ¬l can be viewed as two distinct atoms.

For the complexity analysis of our prioritized approach, let n be the number of rules in a prioritized program P = (R, name). A straightforward implementation would model the application of Γ^pr_P in an outer loop and the computation of SAFE^pr_X in an inner loop.
Fortunately, we can combine the two loops into a single loop whose body is executed at most n times. The reason is that $SAFE^{pr}_X$ grows monotonically with X, and $\gamma^{pr}_P$ grows monotonically with $SAFE^{pr}_X$. Here is a nondeterministic algorithm for computing the least fixed point of $\gamma^{pr}_P$:

Procedure WFS+
Input: a prioritized logic program $P = (R, name)$ with $|R| = n$
Output: the least fixed point of $\gamma^{pr}_P$

  $S_0 := \emptyset$; $R_0 := \emptyset$
  for $i = 1$ to $n$ do
    if there is a rule $r \in R_{S_{i-1}} \setminus R_{i-1}$ such that
       $Cl(R_{S_{i-1}} \setminus Dom_{S_{i-1},R_{i-1}}(r))$ does not defeat $r$
    then $R_i := R_{i-1} + r$; $S_i := Cn(R_i)$
    else return $S_{i-1}$
  endfor
end WFS+

In each step $S_i$ and $R_i$ denote the well-founded conclusions, respectively the safe rules, established so far. The body of the for-loop is executed at most n times, and there are at most n rules that have to be checked for satisfaction of the if-condition. The if-condition itself can, according to the results of Dowling and Gallier, be checked in linear time: we need to establish $Dom_{S_{i-1},R_{i-1}}(r)$, which involves the computation of a minimal model of the monotonic counterparts of $R_{i-1} + r$. We then have to eliminate the rules dominated by r from $R_{S_{i-1}}$ and compute another minimal model to see whether r is defeated.

More precisely, Dowling and Gallier show that the needed time is linear in the number of propositional constants. This number may be greater than n in principle. However, since literals that do not appear in the head of a rule must be false in the minimal model, we can eliminate them accordingly and work with a set of rules that has at most n literals. This leads to an overall time complexity of $O(n^3)$.

It should be mentioned, however, that due to the use of rule schemata for transitivity and anti-symmetry, prioritized programs can be considerably larger than corresponding unprioritized programs. The transitivity schema, for instance, has $|N|^3$ instances. An implementation should, therefore, be based on an approach where instances are only generated when actually needed, or on other built-in techniques that handle transitivity and anti-symmetry. Such techniques are beyond the scope of this paper.

Relation to Answer Sets

In this section we will investigate the relation of our modification of well-founded semantics to answer set semantics (Gelfond & Lifschitz, 1990). Since our approach handles an extended language in which certain symbols are given a particular pre-defined meaning, a thorough investigation of this relationship is only possible after a corresponding extension of answer set semantics to prioritized logic programs has been defined. We are not planning to introduce and defend such an extension in this paper. Nevertheless, we can give some preliminary results here. More precisely, we will show that the conclusions produced in our proposal are correct wrt. a particular subclass of answer sets, the so-called priority-preserving answer sets.

Definition 8: Let R be a logic program, A an answer set of R, and let $r = c \leftarrow a_1, \dots, a_n, not\ b_1, \dots, not\ b_m$ be a rule in R. We say r is rebutted in A ($r \in re_R(A)$) iff $\{a_1, \dots, a_n\} \subseteq A$ and r is defeated in A.

Definition 9: Let $P = (R, name)$ be a prioritized logic program, A an answer set of R.
A is called priority preserving iff for every $r \in re_R(A)$ the set $Cl(R_A \setminus Dom_{A,R_A}(r))$ defeats r.

The intuition behind the definition is the following: whenever a rule r is rebutted in an answer set A, but its rebuttal is based solely on rules dominated by r (wrt. A and the rules not defeated by A), we consider this as a violation of the available preference information and "reject" the answer set.

We can now show correctness of our approach wrt. priority preserving answer sets.

Proposition 4: Let $P = (R, name)$ be a prioritized logic program. $l \in WFS^{pr}(P)$ implies that l is contained in all priority preserving answer sets of R.

Proof: The proof is similar to the correctness proof of $WFS^*$ wrt. answer set semantics (Proposition 2). Again, the proposition is trivially satisfied whenever there is no priority preserving answer set, or Lit is the single priority preserving answer set. We may therefore assume that every priority preserving answer set of R is consistent.

In the inductive step we show that, for an arbitrary priority preserving answer set A, a rule r is not defeated in A whenever $r \in SAFE^{pr}_X(P)$, given that X is a set of literals true in A. From this it follows that $Cn(SAFE^{pr}_X(P))$ contains only literals true in all priority preserving answer sets.

Let $R_i$ be defined as in Def. 6 (the inductive definition of X-safeness) and assume it is already known that the rules in $R_{i-1}$ are not defeated in A. By definition, $r = c \leftarrow a_1, \dots, a_n, not\ b_1, \dots, not\ b_m \in R_i$ iff r is not defeated by $Cl(R_X \setminus Dom_{X,R_{i-1}}(r))$. We distinguish two cases:

Case 1: $a_1, \dots, a_n \in A$. Since $X \subseteq A$ and Dom is monotonic in both indices, we have $Cl(R_A \setminus Dom_{A,R_A}(r)) \subseteq Cl(R_X \setminus Dom_{X,R_{i-1}}(r))$. Therefore r cannot be defeated in A, since A is priority preserving.

Case 2: $a_1, \dots, a_n \notin A$. Since the prerequisites of r cannot be derived from $R_A$, the set $Dom_{A,R_A}(r)$ contains only rules defeated by $Cl(R_A)$ alone. Since A is an answer set, these rules can't be contained in $R_A$. Therefore $Cl(R_A) = Cl(R_A \setminus Dom_{A,R_A}(r))$, and thus $Cl(R_A) \subseteq Cl(R_X \setminus Dom_{X,R_{i-1}}(r))$. Since by assumption A is consistent, we also have $Cl(R_A) = Cn(R_A)$, and therefore r cannot be defeated in A. □

We have seen that our approach is guaranteed to produce only conclusions contained in all priority preserving answer sets. We can also ask the opposite question: given a particular answer set A, is it always possible to obtain A (or, more precisely, a superset of A containing additional preference information) through prioritized well-founded semantics by adding adequate preference information?

The answer to this question is no. The reason is that, for the sake of tractability, we always consider single rules when determining X-safeness in our approach. Here is an example:
$$n_1: b \leftarrow not\ a \qquad n_2: c \leftarrow not\ b \qquad n_3: d \leftarrow not\ c \qquad n_4: a \leftarrow not\ d$$
This program has two answer sets, $S_1 = \{b, d\}$ and $S_2 = \{c, a\}$. Consider $S_1$. Even if we add the preference information that both $n_1$ and $n_3$ are preferred to each of $n_2$ and $n_4$, we are unable to derive b and d. For instance, $n_1$ is not X-safe because its head does not defeat $n_4$.

In order to derive $S_1$ it would be necessary to take into account the possibility of sets of rules (here $n_1$ and $n_3$) defeating less preferred sets of rules (here $n_2$ and $n_4$). Although this is possible in principle, it would clearly lead to intractability, since in the worst case an exponential number of subsets of rules would have to be checked.
Giving up tractability seems too high a price for what is gained, and we stick to our more cautious approach for this reason.

Related Work and Conclusions

Several approaches treating preferences in the context of logic programming have been described in the literature. We now discuss how they relate to our proposal. Kowalski and Sadri (1991) proposed to consider rules with negation in the head as exceptions to more general rules and to give them higher priority. Technically, this is achieved by a redefinition of answer sets. It turns out that the original answer sets remain answer sets according to the new definition whenever they are consistent. The main achievement is that programs whose single answer set is inconsistent become consistent in the new semantics. The approach can hardly be viewed as a satisfactory treatment of preferences, for several reasons:

1. preferences are implicit and highly restricted; the asymmetric treatment of positive and negative information seems unjustified,
2. it is difficult to see how, for instance, exceptions to exceptions can be represented,
3. fewer conclusions are obtained than in the original answer set semantics, contrary to what one would expect when preferences are taken into account.

It is, therefore, more reasonable to view Kowalski and Sadri's approach as a contribution to inconsistency handling rather than preference handling. An approach that is closer in spirit to ours is ordered logic programming (Buccafurri, Leone, & Rullo, 1996). An ordered logic program is a set of components forming an inheritance hierarchy. Each component consists of a set of rules. The inheritance hierarchy is used to settle conflicts among rules: rules lower in the hierarchy have preference over those higher up in the hierarchy, since the former are considered more specific. A notion of a stable model for ordered logic programs can be defined (see Buccafurri et al., 1996, for the details).

There are two main differences between ordered logic programs and our extension of well-founded semantics:

1. ordered logic programs use only one kind of negation; the distinction between negation as failure and classical negation is not expressible in the language,
2. the preferences of ordered logic programs are predefined through the inheritance hierarchy; there is no way of deriving context-dependent preferences dynamically.

Finally, we would like to mention an approach recently presented by Prakken and Sartor (1995). They extend Dung's argument-system-style reconstruction of logic programming (Dung, 1993) with a preference handling method that is very close to ours. This is not astonishing since, as the authors point out, their approach is based on "unpublished ideas of Gerhard Brewka". In fact, it was a preliminary version of this paper that led to their formulation.

We have presented in this paper an extension of logic programs with two types of negation in which preference information among rules can be expressed in the logical language. This extension is very useful for practical applications, as was demonstrated using an example from legal reasoning. The main advantage of our approach is that also this type of information is context-dependent and can be reasoned upon and derived dynamically.

From well-founded semantics we inherit some drawbacks and advantages. Sometimes reasonable conclusions are not obtained.
On the other hand, the addition of preference information can make the set of conclusions considerably larger, as we have shown. Moreover (and this certainly is the greatest advantage of well-founded semantics and our proposed extension), reasoning can be done in polynomial time.

The simple and natural representation of the legal example discussed in Sect. 4 seems to indicate that our generalization of well-founded semantics may provide a new, attractive compromise between expressiveness and efficiency, with a number of interesting potential applications.

Acknowledgements

I would like to thank Franz Baader, Jürgen Dix, Tom Gordon, Henry Prakken, Cees Witteveen, and two anonymous referees for interesting comments helping to improve the quality of this paper.

Note that X-safeness is obviously monotonic in X. Based on this notion we introduce a new monotonic operator $\gamma^{pr}_P$:

Definition 7: Let $P = (R, name)$ be a prioritized logic program, X a set of literals. The operator $\gamma^{pr}_P$ is defined as follows:
$$\gamma^{pr}_P(X) = Cn(SAFE^{pr}_X(P))$$

As before, we define the (prioritized) well-founded conclusions of P, denoted $WFS^{pr}(P)$, as the least fixpoint of $\gamma^{pr}_P$. If a program does not contain preference information at all, i.e., if the symbol $\prec$ does not appear in R, the new semantics coincides with $WFS^*$, since in that case no rule can dominate another rule. In the general case, since the new definition of X-safeness is weaker than the one used earlier in Sect. 2, we may have more X-safe rules and for this reason obtain more conclusions than via $\gamma^*_P$. The following result is thus obvious:

Proposition 3: Let $P = (R, name)$ be a prioritized logic program. For every set of literals X we have $\gamma^*_R(X) \subseteq \gamma^{pr}_P(X)$.

From this and the monotonicity of both operators it follows immediately that $WFS^*(R) \subseteq WFS^{pr}(P)$.⁴

Well-founded semantics has sometimes been criticized for being too weak and missing intended conclusions. The proposition shows that we can strengthen the obtained results by adding adequate preference information. As a first simple example let us consider the following program $P_3$:
$$n_1: b \leftarrow not\ c \qquad n_2: c \leftarrow not\ b \qquad n_3: n_2 \prec n_1$$

We first apply $\gamma^{pr}_{P_3}$ to the empty set. Besides the instances of the transitivity and antisymmetry schema that we implicitly assume, only $n_3$ is in $SAFE^{pr}_\emptyset(P_3)$. We thus obtain
$$S_1 = \{n_2 \prec n_1, \neg(n_1 \prec n_2)\}$$

We next apply $\gamma^{pr}_{P_3}$ to $S_1$. Since $n_2 \prec n_1 \in S_1$ we have $n_1 \in Dom_{S_1,\emptyset}(n_2)$. $n_2 \in SAFE^{pr}_{S_1}(P_3)$, since $Cl((P_3)_{S_1} \setminus \{n_1\})$ does not defeat $n_2$, and we obtain
$$S_2 = \{n_2 \prec n_1, \neg(n_1 \prec n_2), c\}$$

Further iteration of $\gamma^{pr}_{P_3}$ yields no new literals, i.e., $S_2$ is the least fixpoint. Note that c is not a conclusion under the original well-founded semantics.

We next show that the programs $P_1$ and $P_2$ discussed earlier are handled as intended. Here is $P_1$:

4. Pereira and Alferes (1992) argue that each extension of well-founded semantics to two types of negation should satisfy what they call the coherence principle: a weakly negated precondition should be considered satisfied whenever the corresponding strongly negated literal is derived. To model this principle in our approach one would have to weaken the notion of X-safeness even further.
In the inductive definition, a rule r would have to be considered a member of $R_i$ whenever, for each weak precondition $not\ b$ of r, either $b \notin Cl(R_X \setminus Dom_{X,R_{i-1}}(r))$ or $b' \in X$, where $b'$ is the complement of b, i.e., $\neg b$ if b is an atom and $a$ if $b = \neg a$.
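To make the preceding constructions concrete, the following is a minimal, illustrative Python sketch of the WFS+ procedure of the Complexity section, i.e., of iterating the operator $\gamma^{pr}$ of Definition 7 to its least fixpoint. The rule encoding, the string form prec(n1,n2) for preference literals, and all function names are our own; the sketch also omits the transitivity and antisymmetry schemata and the inconsistency (Lit) case, so it is a simplified reading of the definitions rather than a complete implementation.

```python
def defeated(rule, lits):          # a set of literals defeats r iff it
    return any(b in lits for b in rule["neg"])   # contains some not-b of r

def cl(rules):                     # Cl: closure of the monotonic counterparts
    lits, changed = set(), True    # (weak preconditions are simply dropped)
    while changed:
        changed = False
        for r in rules:
            if r["head"] not in lits and all(a in lits for a in r["pos"]):
                lits.add(r["head"])
                changed = True
    return lits

def dom(r, rules, X, Y):           # Dom_{X,Y}(r) of Definition 5
    cl_Yr = cl(Y + [r])            # Cl(Y + r)
    return {r2["name"] for r2 in rules
            if "prec(%s,%s)" % (r["name"], r2["name"]) in X
            and defeated(r2, cl_Yr)}

def safe(rules, X):                # SAFE^pr_X(P) of Definition 6
    RX = [r for r in rules if not defeated(r, X)]       # the X-reduct of R
    Ri = []                        # the R_i grow monotonically, so comparing
    while True:                    # sizes is a valid fixpoint test
        nxt = [r for r in rules if not defeated(
                   r, cl([r2 for r2 in RX
                          if r2["name"] not in dom(r, rules, X, Ri)]))]
        if len(nxt) == len(Ri):
            return Ri
        Ri = nxt

def wfs_pr(rules):                 # least fixpoint of gamma^pr (Definition 7)
    X = set()
    while True:
        nxt = cl(safe(rules, X))   # Cn coincides with Cl on consistent sets
        if nxt == X:
            return X
        X = nxt

# Program P3 from the text: the explicit preference n2 < n1 makes c derivable.
P3 = [dict(name="n1", head="b", pos=[], neg=["c"]),
      dict(name="n2", head="c", pos=[], neg=["b"]),
      dict(name="n3", head="prec(n2,n1)", pos=[], neg=[])]
print(wfs_pr(P3))                  # -> {'prec(n2,n1)', 'c'}
```

On $P_3$ the sketch reproduces the conclusions derived in the text, except for the literal $\neg(n_1 \prec n_2)$, which would come from the omitted antisymmetry schema.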
References

Baral, C., & Gelfond, M. (1994). Logic programming and knowledge representation. Journal of Logic Programming.
Baral, C., & Subrahmanian, V. (1991). Duality between alternative semantics of logic programs and nonmonotonic formalisms. Springer.
Brewka, G. (1994a). Adding priorities and specificity to default logic. Springer.
Brewka, G. (1994b). Reasoning about priorities in default logic.
Buccafurri, F., Leone, N., & Rullo, P. (1996). Stable models and their computation for logic programming with inheritance and true negation. Journal of Logic Programming.
Dowling, W., & Gallier, J. (1984). Linear time algorithms for testing the satisfiability of propositional Horn formulae. Journal of Logic Programming.
Dung, P. (1993). The acceptability of arguments and its fundamental role in nonmonotonic reasoning and logic programming.
Gelfond, M., & Lifschitz, V. (1990). Logic programs with classical negation.
Gordon, T. F. (1993). The Pleadings Game: An Artificial Intelligence Model of Procedural Justice.
Konolige, K. (1988). Hierarchic autoepistemic theories for nonmonotonic reasoning.
Kowalski, R., & Sadri, F. (1991). Logic programs with exceptions. New Generation Computing.
Lifschitz, V. (1985). Computing circumscription.
Lifschitz, V. (1996). Foundations of declarative logic programming. CSLI Publishers.
Pereira, L., & Alferes, J. (1992). Well founded semantics for logic programs with explicit negation.
Poole, D. (1985). On the comparison of theories: Preferring the most specific explanation.
Prakken, H. (1993). Logical Tools for Modelling Legal Argument.
Prakken, H., & Sartor, G. (1995). On the relation between legal language and legal argument: Assumptions, applicability and dynamic priorities.
Przymusinski, T. (1990). The well-founded semantics coincides with the three-valued stable semantics. Fundamenta Informaticae.
Przymusinski, T. (1991). Stable semantics for disjunctive programs. New Generation Computing.
Reiter, R. (1980). A logic for default reasoning. Artificial Intelligence.
Tarski, A. (1955). A lattice-theoretical fixpoint theorem and its applications. Pacific Journal of Mathematics.
Touretzky, D. S. (1986). The Mathematics of Inheritance.
Touretzky, D. S., Thomason, R. H., & Horty, J. F. (1991). A skeptic's menagerie: Conflictors, preemptors, reinstaters, and zombies in nonmonotonic inheritance.
van Gelder, A., Ross, K., & Schlipf, J. (1991). The well-founded semantics for general logic programs. Journal of the ACM.
Witteveen, C. (1991). Partial semantics for truth maintenance. Springer LNAI.
Well-Founded Semantics for Extended Logic Programs with Dynamic Preferences
The paper describes an extension of well-founded semantics for logic programs with two types of negation. In this extension, information about preferences between rules can be expressed in the logical language and derived dynamically. This is achieved by using a reserved predicate symbol and a naming technique. Conflicts among rules are resolved whenever possible on the basis of derived preference information. The well-founded conclusions of prioritized logic programs can be computed in polynomial time. A legal reasoning example illustrates the usefulness of the approach.
Gerhard Brewka
Introduction

Probabilistic (Bayesian) networks are an increasingly popular modeling technique that has been used successfully in numerous applications of intelligent systems such as real-time planning and navigation, model-based diagnosis, information retrieval, classification, Bayesian forecasting, natural language processing, computer vision, medical informatics and computational biology. Probabilistic networks allow the user to describe the environment using a "probabilistic database" that consists of a large number of random variables, each corresponding to an important parameter in the environment. Some random variables could in fact be hidden and may correspond to some unknown parameters (causes) that influence the observable variables. Probabilistic networks are quite general and can store information such as the probability of failure of a particular component in a computer system, the probability of page i in a computer cache being requested in the near future, the probability of a document being relevant to a particular query, or the probability of an amino-acid subsequence in a protein chain folding into an alpha-helix conformation.

The applications we have in mind include networks that are dynamically maintained to keep track of a probabilistic model of a changing system. For instance, consider the task of automated detection of power-plant failures. We might repeat a cycle that consists of the following sequence of operations. First we perform sensing operations; these operations cause updates to be performed to specific variables in the probabilistic database. Based on this evidence we estimate (query) the probability of failure at certain sites. More precisely, we query the probability distribution of the random variables that measure the probability of failure at these sites based on the evidence. Since the plant requires constant monitoring, we must repeat the cycle of sense/evaluate on a frequent basis.

A conventional (non-probabilistic) database tracking the plant's state would not be appropriate here, because it is not possible to directly observe whether a failure is about to occur. On the other hand, a probabilistic "database" based on a Bayesian network will only be useful if the operations (update and query) can be performed very quickly. Because real-time or near real-time is so often necessary, the question of doing extremely fast reasoning in probabilistic networks is important.

Traditional (non-probabilistic) databases support efficient query and update procedures that often operate in time which is sublinear in the size of the database (e.g., using binary search). Our goal in this paper is to take a step toward systems that can perform dynamic probabilistic reasoning (such as: what is the probability of an event given a set of observations?) in time which is sublinear in the size of the probabilistic network. Typically, sublinear performance in complex networks is attained by using parallelism. This paper relies on preprocessing.

Specifically, we describe new algorithms for performing queries and updates in belief networks in the form of trees (causal trees, polytrees and join trees). We define two natural database operations on probabilistic networks:
1. Update-Node: Perform sensory input, modify the evidence at a leaf node (single variable) in the network and absorb this evidence into the network.

2. Query-Node: Obtain the marginal probability distribution over the values of an arbitrary node (single variable) in the network.

The standard algorithms introduced by Pearl (1988) require time linear in the size N of the network to process these operations. In this paper we describe an approach to perform both queries and updates in O(log N) time. This can be very significant in some systems, since we improve the ability of a system to respond after a change has been encountered from O(N) time to O(log N). Our approach is based on preprocessing the network, using a form of node absorption in a carefully structured way, to create a hierarchy of abstractions of the network. Previous uses of node absorption techniques were reported by Peot and Shachter (1991).

We note that measuring complexity only in terms of the size of the network, N, can overlook some important factors. Suppose that each variable in the network has domain size k or less. For many purposes, k can be considered constant. Nevertheless, some of the algorithms we consider have a slowdown which is some power of k, which can become significant in practice unless N is very large. Thus we will be careful to state this slowdown where it exists. Section 2 considers the case of causal trees, i.e., singly connected networks in which each node has at most one parent. The standard algorithm (see Pearl, 1988) must use $O(k^2 N)$ time for either updates or for retrieval, although one of these operations can be done in O(1) time. As we discuss briefly in Section 2.1, there is also a straightforward variant on this algorithm that takes $O(k^2 D)$ time for both queries and updates, where D is the height of the tree.

We then present an algorithm that takes $O(k^3 \log N)$ time for updates and $O(k^2 \log N)$ time for queries in any causal tree. This can of course represent a tremendous speedup, especially for large networks. Our algorithm begins with a polynomial-time preprocessing step (linear in the size of the network), constructing another data structure (which is not itself a probabilistic tree) that supports fast queries and updates. The techniques we use are motivated by earlier algorithms for dynamic arithmetic trees, and involve "caching" sufficient intermediate computations during the update phase so that querying is also relatively easy. We note, however, that there are substantial and interesting differences between the algorithm for probabilistic networks and those for arithmetic trees. In particular, as will be apparent later, computation in probabilistic trees requires both bottom-up and top-down processing, whereas arithmetic trees need only the former. Perhaps even more interesting is that the relevant probabilistic operations have a different algebraic structure than arithmetic operations (for instance, they lack distributivity). Bayesian trees have many applications in the literature, including classification. For instance, one of the most popular methods for classification is the Bayes classifier, which makes an independence assumption on the features that are used to perform classification (Duda & Hart, 1973; Rachlin, Kasif, Salzberg, & Aha, 1994). Probabilistic trees have been used in computer vision (Hel-Or & Werman, 1992; Chelberg, 1990), signal processing (Wilsky, 1993), game playing (Delcher & Kasif, 1992), and statistical mechanics (Berger & Ye, 1990). Nevertheless, causal trees are fairly limited for modeling purposes.
However, similar structures, called join trees, arise in the course of one of the standard algorithms for computing with arbitrary Bayesian networks (see Lauritzen and Spiegelhalter, 1988). Thus our algorithm for join trees has potential relevance to many networks that are not trees. Because join trees have some special structure, they allow some optimization of the basic causal-tree algorithm. We elaborate on this in Section 5.

In Section 6 we consider the case of arbitrary polytrees. We give an O(log N) algorithm for updates and queries, which involves transforming the polytree to a join tree and then using the results of Sections 2 and 5. The join tree of a polytree has a particularly simple form, giving an algorithm in which updates take $O(k^{p+3} \log N)$ time and queries $O(k^{p+2} \log N)$, where p is the maximum number of parents of any node. Although the constant appears large, it must be noted that the original polytree takes $O(k^{p+1} N)$ space merely to represent, if conditional probability tables are given as explicit matrices.

(Figure 1: A segment of a causal tree. The figure shows a node U with children V and X, and X with children Y and Z; the edges are labeled by the conditional probability matrices $M_{V|U}$, $M_{X|U}$, $M_{Y|X}$ and $M_{Z|X}$.)

Finally, we discuss a specific modeling application in computational biology, where probabilistic models are used to describe, analyze and predict the functional behavior of biological sequences such as protein chains or DNA sequences (see Delcher, Kasif, Goldberg, and Hsu, 1993, for references). Much of the information in computational biology databases is noisy. However, a number of successful attempts to build probabilistic models have been made. In this case, we use a probabilistic tree of depth 300 that consists of 600 nodes, and all the matrices of conditional probabilities are 2×2. The tree is used to model the dependence of a protein's secondary structure on its chemical structure. The detailed description of the problem and experimental results are given by Delcher et al. (1993). For this problem we obtain an effective speed-up of about a factor of 10 to perform an update, as compared to the standard algorithm. Clearly, getting an order of magnitude improvement in the response time of a probabilistic real-time system could be of tremendous importance in future use of such systems.

Causal Trees

A probabilistic causal tree is a directed tree in which each node represents a discrete random variable X, and each directed edge is annotated by a matrix of conditional probabilities $M_{Y|X}$ (associated with edge $X \to Y$). That is, if x is a possible value of X, and y of Y, then the $(x, y)$th component of $M_{Y|X}$ is $\Pr(Y = y \mid X = x)$. Such a tree represents a joint probability distribution over the product space of all variables; for detailed definitions and discussion see Pearl (1988). Briefly, the idea is that we consider the product, over all nodes, of the conditional probability of the node given its parents. For example, in Figure 1 the implied distribution is:
$$\Pr(U = u, V = v, X = x, Y = y, Z = z) = \Pr(U = u)\,\Pr(V = v \mid U = u)\,\Pr(X = x \mid U = u)\,\Pr(Y = y \mid X = x)\,\Pr(Z = z \mid X = x).$$
Given particular values of $u, v, x, y, z$, the conditional probabilities can be read from the appropriate matrices M. One advantage of such a product representation is that it is very concise.
In this example, we need four matrices and the unconditional probability over U, but the size of each is at most the square of the largest variable's domain size. In contrast, a general distribution over N variables requires an exponential (in N) representation.

Of course, not every distribution can be represented as a causal tree. But it turns out that the product decomposition implied by the tree corresponds to a particular pattern of conditional independencies which often hold (if perhaps only approximately) in real applications. Intuitively speaking, in Figure 1 some of these implied independencies are that the conditional probability of U given V, X, Y and Z depends only on the values of V and X, and the probability of Y given U, V, X, and Z depends only on X. Independencies of this sort can arise for many reasons, for instance from a causal modeling of the interactions between the variables. We refer the reader to Pearl (1988) for details related to the modeling of independence assumptions using graphs.

In the following, we make several assumptions that significantly simplify the presentation but do not sacrifice generality. First, we assume that each variable ranges over the same, constant, number of values k. It follows that the marginal probability distribution for each variable can be viewed as a k-dimensional vector, and each conditional probability matrix such as $M_{Y|X}$ is a square k×k matrix. A common case is that of binary random variables (k = 2); the distribution over the values (TRUE, FALSE) is then $(p, 1-p)$ for some probability p.

The next assumption is that the tree is binary and complete, so that each node has 0 or 2 children. Any tree can be converted into this form by at most doubling the number of nodes. For instance, suppose node p has children $c_1, c_2, c_3$ in the original tree. We can create another "copy" of p, p′, and rearrange the tree such that the two children of p are $c_1$ and p′, and the two children of p′ are $c_2$ and $c_3$. We can constrain p′ always to have the same value as p simply by choosing the identity matrix for the conditional probability table between p and p′. Then the distribution represented by the new tree is effectively the same as the original. Similarly, we can always add "dummy" leaf nodes if necessary to ensure a node has two children. As explained in the introduction, we are interested in processes in which certain variables' values are observed, upon which we wish to condition. Our final assumption is that these observed evidence nodes are all leaves of the tree. Again, because it is possible to "copy" nodes and to add dummy nodes, this is not restrictive.

The product distribution alluded to above corresponds to the distribution over variables prior to any observations. In practice, we are more interested in the conditional distribution, which is simply the result of conditioning on all the observed evidence (which, by the earlier assumption, corresponds to seeing values for all the leaf nodes). Thus, for each non-leaf node X we are interested in the conditional marginal probability over X, i.e., the k-dimensional vector:
$$Bel(X) = \Pr(X \mid \text{all evidence values})$$
The main algorithmic problem is to compute $Bel(X)$ for each (non-evidence) node X in the tree, given the current evidence.
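Before turning to the algorithms, it may help to fix a concrete representation. The following Python/NumPy sketch encodes a causal tree under the assumptions above, together with a brute-force computation of $Bel(X)$ by direct enumeration of the product distribution. It only pins down the semantics and is exponential in the number of nodes, so it is a reference for testing, not the algorithm of this paper; the names (Node, add_child, bel_brute_force) are our own.

```python
import numpy as np
from itertools import product

class Node:
    """One variable of a causal tree with domain size k."""
    def __init__(self, name, k=2, prior=None):
        self.name, self.k = name, k
        self.parent, self.children = None, []
        self.M = None          # M[x, y] = Pr(self = y | parent = x)
        self.prior = prior     # k-vector of prior probabilities, root only
        self.evidence = None   # index of the observed value (leaves only)

def add_child(parent, child, M):
    child.parent, child.M = parent, np.asarray(M)
    parent.children.append(child)

def bel_brute_force(root, nodes, query):
    """nodes: every node of the tree, root included.  Enumerate the product
    distribution, condition on the evidence at the leaves, and marginalize
    onto the query node."""
    bel = np.zeros(query.k)
    for vals in product(*(range(n.k) for n in nodes)):
        v = dict(zip(nodes, vals))
        if any(n.evidence is not None and v[n] != n.evidence for n in nodes):
            continue                       # inconsistent with the evidence
        p = root.prior[v[root]]
        for n in nodes:
            if n.parent is not None:
                p *= n.M[v[n.parent], v[n]]
        bel[v[query]] += p
    return bel / bel.sum()                 # normalize by Pr(evidence)
```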
It is well known that the probability vector $Bel(X)$ can be computed in linear time (in the size of the tree) by a popular algorithm based on the following equation:
$$Bel(X) = \Pr(X \mid \text{all evidence}) = \alpha\,\lambda(X) \otimes \pi(X)$$
Here $\alpha$ is a normalizing constant, $\lambda(X)$ is the probability of all the evidence in the subtree below node X given X, and $\pi(X)$ is the probability of X given all evidence in the rest of the tree. To interpret this equation, note that if $X = (x_1, x_2, \dots, x_k)$ and $Y = (y_1, y_2, \dots, y_k)$ are two vectors, we define $\otimes$ to be the operation of component-wise product (pairwise or dyadic product of vectors):
$$X \otimes Y = (x_1 y_1, x_2 y_2, \dots, x_k y_k).$$

The usefulness of $\lambda(X)$ and $\pi(X)$ derives from the fact that they can be computed recursively, as follows:

1. If X is the root node, $\pi(X)$ is the prior probability of X.
2. If X is a leaf node, $\lambda(X)$ is a vector with 1 in the ith position (where the ith value has been observed) and 0 elsewhere. If no value for X has been observed, then $\lambda(X)$ is a vector consisting of all 1's.
3. Otherwise, if, as shown in Figure 1, the children of node X are Y and Z, its sibling is V and its parent is U, we have:
$$\lambda(X) = (M_{Y|X}\,\lambda(Y)) \otimes (M_{Z|X}\,\lambda(Z))$$
$$\pi(X) = M^T_{X|U}\,(\pi(U) \otimes (M_{V|U}\,\lambda(V)))$$

Our presentation of this technique follows that of Pearl (1988). However, we use a somewhat different notation, in that we don't describe messages sent to parents or successors but rather discuss the direct relations among the $\lambda$ and $\pi$ vectors in terms of simple algebraic equations. We will take advantage of algebraic properties of these equations in our development.

It is very easy to see that the equations above can be evaluated in time proportional to the size of the network. The formal proof is given by Pearl (1988).

Theorem 1: The belief distribution of every variable (that is, the marginal probability distribution for each variable, given the evidence) in a causal tree can be evaluated in $O(k^2 N)$ time, where N is the size of the tree. (The factor $k^2$ is due to the multiplication of a matrix by a vector that must be performed at each node.)

This theorem shows that it is possible to perform evidence absorption in O(N) time, and queries in constant time (i.e., by retrieving the previously computed values from a lookup table). In the next sections we will show how to perform both queries and updates in worst-case O(log N) time. Intuitively, we will not recompute all the marginal distributions after an update, but rather make only a small number of changes, sufficient, however, to compute the value of any variable with only a logarithmic delay.

A Simple Preprocessing Approach

To obtain intuition about the new approach, we begin with a very simple observation. Consider a causal tree T of depth D. For each node X in the tree we initially compute its $\lambda(X)$ vector; $\pi$ vectors are left uncomputed. Given an update to a node Y, we recalculate the $\lambda(X)$ vectors for all nodes X that are ancestors of Y in the tree. This clearly can be done in time proportional to the depth of the tree, i.e., O(D). The rest of the information in the tree remains unchanged. Now consider a Query-Node operation for some node V in the tree. We obviously already have the accurate $\lambda$ vector for every node in the tree, including V. However, in order to compute its $\pi(V)$ vector we need to compute only the $\pi(Y)$ vectors for all the nodes above V in the tree and multiply these by the appropriate $\lambda$ vectors that are kept current.
This means that to compute the accurate $\pi(V)$ vector we need to perform O(D) work as well. Thus, in this approach we don't perform the complete update of every $\lambda(X)$ and $\pi(X)$ vector in the tree.

Lemma 2: Update-Node and Query-Node operations in a causal tree T can be performed in $O(k^2 D)$ time, where D is the depth of the tree.

This implies that if the tree is balanced, both operations can be done in O(log N) time. However, in some important applications the trees are not balanced (e.g., models of temporal sequences; Delcher et al., 1993). The obvious question therefore is: given a causal tree T, can we produce an equivalent balanced tree T′? While the answer to this question appears to be difficult, it is possible to use a more sophisticated approach to produce a data structure (which is not a causal tree) to process queries and updates in O(log N) time. This approach is described in the subsequent sections.

A Dynamic Data Structure For Causal Trees

The data structure that will allow efficient incremental processing of a probabilistic tree $T = T_0$ will be a sequence of trees, $T_0, T_1, T_2, \dots, T_i, \dots, T_{\log N}$. Each $T_{i+1}$ will be a contracted version of $T_i$, whose nodes are a subset of those in $T_i$. In particular, $T_{i+1}$ will contain about half as many leaves as its predecessor.

We defer the details of this contraction process until the next section. However, one key idea is that we maintain consistency, in the sense that $Bel(X)$, $\lambda(X)$, and $\pi(X)$ are given the same values by all the trees in which X appears. We choose the conditional probability matrices in the contracted trees (i.e., all trees other than $T_0$) to ensure this.

Recall that the $\lambda$ and $\pi$ equations have the form
$$\lambda(X) = (M_{Y|X}\,\lambda(Y)) \otimes (M_{Z|X}\,\lambda(Z))$$
$$\pi(X) = M^T_{X|U}\,(\pi(U) \otimes (M_{V|U}\,\lambda(V)))$$
if Y and Z are children of X, X is a right child of U, and V is X's sibling (Figure 1). However, these equations are not in the most convenient form, and the following notational conventions will be very helpful. First, let $A_i(x)$ (resp., $B_i(x)$) denote the conditional probability matrix between X and X's left (resp., right) child in the tree $T_i$. Note that the identity of these children can differ from tree to tree, because some of X's original children might be removed by the contraction process. One advantage of the new notation is that the explicit dependence on the identity of the children is suppressed. Next, suppose X's parent in $T_i$ is u. Then we let $C_i(x)$ denote either $A_i(u)$ or $B_i(u)$, and $D_i(x)$ denote either $B_i(u)^T$ or $A_i(u)^T$, depending on whether X is the right or left child, respectively, of U. It will not be necessary to keep careful track of these correspondences, but simply to note that the above equations become:³
$$\lambda(x) = A_i(x)\,\lambda(y) \otimes B_i(x)\,\lambda(z)$$
$$\pi(x) = D_i(x)\,(\pi(u) \otimes C_i(x)\,\lambda(v))$$
In the next section we describe the preprocessing step that creates the dynamic data structure.

Rake Operation

The basic operation used to contract the tree is Rake, which removes both a leaf and its parent from the tree. The effect of this operation on the tree is shown in Figure 2.

(Figure 2: The effect of the operation Rake(e, x). In $T_i$ the leaf e and its parent x, with sibling v and remaining child z, hang below u; in $T_{i+1}$ the nodes e and x are gone and z is a child of u. e must be a leaf, but z may or may not be a leaf.)
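For concreteness, the pointer surgery performed by Rake can be sketched in a few lines of Python/NumPy. The dictionary-based bookkeeping (A, B, lam, right, parent, keyed by node names) is our own; the matrix absorption rule $B_{i+1}(u) = B_i(u)\,\mathrm{Diag}_{A_i(x)\lambda(e)}\,B_i(x)$ that the code applies is exactly the one derived in the next paragraphs.

```python
import numpy as np

def rake(A, B, lam, right, parent, u, x, e, z):
    # Rake(e, x) as in Figure 2: e is a leaf and the left child of x, x is
    # the right child of u, and z is x's other child.  The matrices of the
    # two removed nodes are absorbed into the edge from u to z, so that the
    # lambda and pi values of all surviving nodes are unchanged.
    B[u] = B[u] @ np.diag(A[x] @ lam[e]) @ B[x]   # absorb A(x), B(x), lam(e)
    right[u] = z                                   # splice z into x's place
    parent[z] = u
```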
We now define the algebraic effect of this operation on the equations associated with this tree. Recall that we want to define the conditional probability matrices in the raked tree so that the distribution over the remaining variables is unchanged. We achieve this by substituting the equations for $\lambda(x)$ and $\pi(x)$ into the equations for $\lambda(u)$, $\pi(z)$, and $\pi(v)$. In the following, it is important to note that $\pi(u)$, $\lambda(z)$ and $\lambda(v)$ are unaffected by the rake operation.

In the following, let $\mathrm{Diag}_\alpha$ denote the diagonal matrix whose diagonal entries are the components of the vector $\alpha$. We derive the algebraic effect of the rake operation as follows:
$$\lambda(u) = A_i(u)\,\lambda(v) \otimes B_i(u)\,\lambda(x)$$
$$= A_i(u)\,\lambda(v) \otimes B_i(u)\,(A_i(x)\,\lambda(e) \otimes B_i(x)\,\lambda(z))$$
$$= A_i(u)\,\lambda(v) \otimes B_i(u)\,\mathrm{Diag}_{A_i(x)\lambda(e)}\,B_i(x)\,\lambda(z)$$
$$= A_{i+1}(u)\,\lambda(v) \otimes B_{i+1}(u)\,\lambda(z)$$
where $A_{i+1}(u) = A_i(u)$ and $B_{i+1}(u) = B_i(u)\,\mathrm{Diag}_{A_i(x)\lambda(e)}\,B_i(x)$. (Of course, the case where the leaf being raked is a right child generates analogous equations.)

3. Throughout, we assume that $\otimes$ has lower precedence than matrix multiplication (indicated by juxtaposition).

Thus, by defining $A_{i+1}(u)$ and $B_{i+1}(u)$ in this way, we ensure that all $\lambda$ values in the raked tree are identical to the corresponding values in the original tree. This is not yet enough, because we must check that $\pi$ values are similarly preserved. The only two $\pi$ values that could possibly change are $\pi(z)$ and $\pi(v)$, so we check them both. For the former, we must have
$$\pi(z) = D_i(z)\,(\pi(x) \otimes C_i(z)\,\lambda(e)) = D_{i+1}(z)\,(\pi(u) \otimes C_{i+1}(z)\,\lambda(v)).$$
After substituting for $\pi(x)$ and some algebraic manipulation, we see that this is assured if $C_{i+1}(z) = C_i(x)$ and $D_{i+1}(z) = D_i(z)\,\mathrm{Diag}_{C_i(z)\lambda(e)}\,D_i(x)$. However, recall that, by definition, $C_{i+1}(z) = A_{i+1}(u)$ and $C_i(x) = A_i(u)$, and so $C_{i+1}(z) = C_i(x)$ follows. Furthermore,
$$D_{i+1}(z) = B_{i+1}(u)^T = (B_i(u)\,\mathrm{Diag}_{A_i(x)\lambda(e)}\,B_i(x))^T = B_i(x)^T\,\mathrm{Diag}_{A_i(x)\lambda(e)}\,B_i(u)^T = D_i(z)\,\mathrm{Diag}_{C_i(z)\lambda(e)}\,D_i(x)$$
as required.

For $\pi(v)$ it is necessary to verify that
$$\pi(v) = D_i(v)\,(\pi(u) \otimes C_i(v)\,\lambda(x)) = D_{i+1}(v)\,(\pi(u) \otimes C_{i+1}(v)\,\lambda(z)).$$
By substituting for $\lambda(x)$, this can be shown to be true if $D_{i+1}(v) = D_i(v) = A_i(u)^T = A_{i+1}(u)^T$ and $C_{i+1}(v) = C_i(v)\,\mathrm{Diag}_{A_i(x)\lambda(e)}\,B_i(x) = B_{i+1}(u)$. But these identities follow by definition, so we are done.

Beginning with the given tree $T = T_0$, each successive tree is constructed by performing a sequence of rakes, so as to rake away about half of the remaining evidence nodes. More specifically, let Contract be the operation in which we apply the Rake operation to every other leaf of a causal tree, in left-to-right order, excluding the leftmost and the rightmost leaf. Let $\{T_i\}$ be the set of causal trees constructed so that $T_{i+1}$ is the causal tree generated from $T_i$ by a single application of Contract. The following result is proved using an easy inductive argument:

Theorem 3: Let $T_0$ be a causal tree of size N. Then the number of leaves in $T_{i+1}$ is equal to half the leaves in $T_i$ (not counting the two extreme leaves), so that starting with $T_0$, after $O(\log N)$ applications of Contract, we produce a three-node tree: the root, the leftmost leaf and the rightmost leaf.

Below are a few observations about this process:

1. The complexity of Contract is linear in the size of the tree. Additionally, $\log N$ applications of Contract reduce the set of tree equations to a single equation involving the root in O(N) total time.
2. The total space to store all the sets of equations associated with $\{T_i\}_{0 \le i \le \log N}$ is about twice the space required to store the equations for $T_0$.

3. With each equation in $T_{i+1}$ we also store equations that describe the relationship of the conditional probability matrices in $T_{i+1}$ to the matrices in $T_i$. Notice that, even though $T_{i+1}$ is produced from $T_i$ by a series of rake operations, each matrix in $T_{i+1}$ depends directly on matrices present in $T_i$. This would not be the case if we attempted to simultaneously rake adjacent children.

We regard these equations as part of $T_{i+1}$. So, formally speaking, the $\{T_i\}$ are causal trees augmented with some auxiliary equations. Each of the contracted trees describes a probability distribution on a subset of the first set of variables that is consistent with the original distribution.

We note that the ideas behind the Rake operation were originally developed by Miller and Reif (1985) in the context of parallel computation of bottom-up arithmetic expression trees (Kosaraju & Delcher, 1988; Karp & Ramachandran, 1990). In contrast, we are using it in the context of incremental update and query operations in sequential computing. A similar data structure to ours was independently proposed by Frederickson (1993) in the context of dynamic arithmetic expression trees, and a different approach for incremental computing on arithmetic trees was developed by Cohen and Tamassia (1991). There are important and interesting differences between the arithmetic expression-tree case and our own. For arithmetic expressions all computation is done bottom-up; however, in probabilistic networks $\pi$-messages must be passed top-down. Furthermore, in arithmetic expressions when two algebraic operations are allowed, we typically require the distributivity of one operation over the other, but the analogous property does not hold for us. In these respects our approach is a substantial generalization of the previous work, while remaining conceptually simple and practical.

Example: A Chain

To obtain an intuition about the algorithms, we sketch how to generate and utilize the $T_i$, $0 \le i \le \log N$, and their equations to perform $\lambda$-value queries and updates in $O(\log N)$ time on an $N = 2L + 1$ node chain of length L. Consider the chain of length 4 in Figure 3, and the trees that are generated by repeated application of Contract to the chain.

The equations that correspond to the contracted trees in the figure are as follows (ignoring trivial equations). Recall that $A_i(x_j)$ is the matrix associated with the left edge of random variable $x_j$ in $T_i$.

For $T_0$:
$$\lambda(x_1) = A_0(x_1)\,\lambda(e_1) \otimes B_0(x_1)\,\lambda(x_2)$$
$$\lambda(x_2) = A_0(x_2)\,\lambda(e_2) \otimes B_0(x_2)\,\lambda(x_3)$$
$$\lambda(x_3) = A_0(x_3)\,\lambda(e_3) \otimes B_0(x_3)\,\lambda(x_4)$$
$$\lambda(x_4) = A_0(x_4)\,\lambda(e_4) \otimes B_0(x_4)\,\lambda(e_5)$$

For $T_1$:
$$\lambda(x_1) = A_1(x_1)\,\lambda(e_1) \otimes B_1(x_1)\,\lambda(x_3)$$
$$\lambda(x_3) = A_1(x_3)\,\lambda(e_3) \otimes B_1(x_3)\,\lambda(e_5)$$
where
$$B_1(x_1) = B_0(x_1)\,\mathrm{Diag}_{A_0(x_2)\lambda(e_2)}\,B_0(x_2)$$
$$B_1(x_3) = B_0(x_3)\,\mathrm{Diag}_{A_0(x_4)\lambda(e_4)}\,B_0(x_4)$$

For $T_2$:
$$\lambda(x_1) = A_2(x_1)\,\lambda(e_1) \otimes B_2(x_1)\,\lambda(e_5)$$
where
$$B_2(x_1) = B_1(x_1)\,\mathrm{Diag}_{A_1(x_3)\lambda(e_3)}\,B_1(x_3)$$

We have not listed the A matrices because, in this example, they are constant. Now consider a query operation on $x_2$. Rather than performing the standard computation, we will find the level where $x_2$ was "raked".
Since this occurred on level 0, we obtain the equation
$$\lambda(x_2) = A_0(x_2)\,\lambda(e_2) \otimes B_0(x_2)\,\lambda(x_3)$$
Thus we must compute $\lambda(x_3)$, and to do this we find where $x_3$ is "raked". That happened on level 1. However, on that level the equation associated with $x_3$ is:
$$\lambda(x_3) = A_1(x_3)\,\lambda(e_3) \otimes B_1(x_3)\,\lambda(e_5)$$
That means that we need not follow down the chain. In general, for a chain of N nodes we can answer any query to a node on the chain by evaluating $\log N$ equations instead of N equations. Now consider an update for $e_4$. Since $e_4$ was raked immediately, we first modify the equation $B_1(x_3) = B_0(x_3)\,\mathrm{Diag}_{A_0(x_4)\lambda(e_4)}\,B_0(x_4)$ on the first level where $e_4$ occurs on the right-hand side. Since $B_1(x_3)$ is affected by the change to $e_4$, we subsequently modify the equation $B_2(x_1) = B_1(x_1)\,\mathrm{Diag}_{A_1(x_3)\lambda(e_3)}\,B_1(x_3)$ on the second level. In general, we clearly need to update at most $\log N$ equations, i.e., one per level. We now generalize this example and describe general algorithms for queries and updates in causal trees.

Performing Queries And Updates Efficiently

In this section we shall show how to utilize the contracted trees $T_i$, $0 \le i \le \log N$, to perform queries and updates in $O(\log N)$ time in general causal trees. We shall show that a logarithmic amount of work is necessary and sufficient to compute enough information in our data structure to update and query any $\lambda$ or $\pi$ value.

λ Queries

To compute $\lambda(x)$ for some node x we can do the following. We first locate $ind(x)$, which is defined to be the highest level i such that x appears in $T_i$. The equation for $\lambda(x)$ is of the form:
$$\lambda(x) = A_i(x)\,\lambda(y) \otimes B_i(x)\,\lambda(z)$$
where y and z are the left and right children, respectively, of x in $T_i$.

Since x does not appear in $T_{i+1}$, it was raked at this level of equations, which implies that one child (we assume z) is a leaf. We therefore only need to compute $\lambda(y)$, which can be done recursively. If instead y was the raked leaf, we would compute $\lambda(z)$ recursively.

In either case O(1) operations are done in addition to one recursive call, which is to a $\lambda$ value at a higher level of equations. Since there are $O(\log N)$ levels, and the only operations are matrix-by-vector multiplications, the procedure takes $O(k^2 \log N)$ time. The function λ-Query(x) is given in Figure 4.

Updates

We now describe how the update operations can modify enough information in the data structure to allow us to query the $\lambda$ vectors and $\pi$ vectors efficiently. Most importantly, the reader should note that the update operation does not try to maintain the correct $\lambda$ and $\pi$ values. It is sufficient to ensure that, for all i and x, the matrices $A_i(x)$ and $B_i(x)$ (and thus also $C_i(x)$ and $D_i(x)$) are always up to date.

When we update the value of an evidence node, we are simply changing the value of some leaf e. At each level of equations, the value of $\lambda(e)$ can appear at most twice: once in the $\lambda$-equation of e's parent and once in the $\pi$-equation of e's sibling in $T_i$. When e disappears, say at level i, its value is incorporated into one of the constant matrices $A_{i+1}(u)$ or $B_{i+1}(u)$, where u is the grandparent of e in $T_i$. This constant matrix in turn affects exactly one constant matrix in the next higher level, and so on.
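In Python/NumPy terms, the update cascade for the chain example of the previous subsection looks as follows. The dictionaries A and B, keyed by (node, level) pairs, and lam, keyed by leaf names, are our own bookkeeping:

```python
import numpy as np

def update_e4(A, B, lam, new_lam_e4):
    # Changing lam(e4) touches exactly one constant matrix per level:
    # first B_1(x3), whose new value then enters B_2(x1).
    lam["e4"] = new_lam_e4
    B[("x3", 1)] = B[("x3", 0)] @ np.diag(A[("x4", 0)] @ lam["e4"]) @ B[("x4", 0)]
    B[("x1", 2)] = B[("x1", 1)] @ np.diag(A[("x3", 1)] @ lam["e3"]) @ B[("x3", 1)]
```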
Since the effect at each level can be computed in $O(k^3)$ time (due to matrix multiplication) and there are $O(\log N)$ levels of equations, the update can be accomplished in $O(k^3 \log N)$ time. The constant $k^3$ is actually pessimistic, because faster matrix multiplication algorithms exist.

The update procedure is given in Figure 5. Update is initially called as Update(λ(E) = e, i), where E is a leaf, i is the level at which it was raked, and e is the new evidence. This operation will start a sequence of $O(\log N)$ calls to the function λ-Update(X = Term, i), as the change will propagate to $\log N$ equations.

FUNCTION λ-Query(x)

We look up the equation associated with $\lambda(x)$ in $T_{ind(x)}$.

Case 1: x is a leaf. Then the equation is of the form $\lambda(x) = e$, where e is known. In this case we return e.

Case 2: The equation associated with $\lambda(x)$ is of the form
$$\lambda(x) = A_i(x)\,\lambda(y) \otimes B_i(x)\,\lambda(z)$$
where z is a leaf and therefore $\lambda(z)$ is known. In this case we return
$$A_i(x)\,\text{λ-Query}(y) \otimes B_i(x)\,\lambda(z)$$
The case where y is the leaf is analogous.

Figure 4: The function λ-Query.

FUNCTION λ-Update(Term = Value, i)

1. Find the (at most one) equation in $T_i$, defining some $A_i$ or $B_i$, in which Term appears on the right-hand side; let Term′ be the matrix defined by this equation (i.e., its left-hand side).
2. Update Term′; let Value′ be the new value.
3. Call λ-Update(Term′ = Value′, i + 1) recursively.

Figure 5: The function λ-Update.

π Queries

It is relatively easy to use a similar recursive procedure to perform $\pi(x)$ queries. Unfortunately, this approach yields an $O(\log^2 N)$-time algorithm if we simply use recursion to calculate $\pi$ terms and calculate $\lambda$ terms using our earlier procedure. This is because there will be $O(\log N)$ recursive calls to calculate $\pi$ values, but each is defined by an equation that also involves a $\lambda$ term taking $O(\log N)$ time to compute.

To achieve $O(\log N)$ time, we shall instead implement $\pi(x)$ queries by defining a procedure Calc(x, i) which returns a triple of vectors ⟨P, L, R⟩ such that $P = \pi(x)$, $L = \lambda(y)$ and $R = \lambda(z)$, where y and z are the left and right children, respectively, of x in $T_i$.

To compute $\pi(x)$ for some node x we can do the following. Let $i = ind(x)$. The equation for $\pi(x)$ in $T_i$ is of the form:
$$\pi(x) = D_i(x)\,(\pi(u) \otimes C_i(x)\,\lambda(v))$$
where u is the parent of x in $T_i$ and v its sibling. We then call procedure Calc(u, i + 1), which will return the triple ⟨π(u), λ(v), λ(x)⟩, from which we can immediately compute $\pi(x)$ using the above equation. Procedure Calc(x, i) can be implemented in the following fashion.

Case 1: If $T_i$ is a 3-node tree with x as its root, then both children of x are leaves, hence their $\lambda$ values are known, and $\pi(x)$ is a given sequence of prior probabilities for x.

Case 2: If x does not appear in $T_{i+1}$, then one of x's children is a leaf, say e, which is raked at level i. Let z be the other child. We call Calc(u, i + 1), where u is the parent of x in $T_i$, and receive back ⟨π(u), λ(z), λ(v)⟩ or ⟨π(u), λ(v), λ(z)⟩ according to whether x was a left or right child of u in $T_i$ (and v is u's other child). We can now compute $\pi(x)$ from $\pi(u)$ and $\lambda(v)$, and we have $\lambda(e)$ and $\lambda(z)$, so we can return the necessary triple. Specifically,
$$\pi(x) = D_i(x)\,(\pi(u) \otimes A_{i+1}(u)\,\lambda(v)) \quad\text{or}\quad \pi(x) = D_i(x)\,(\pi(u) \otimes B_{i+1}(u)\,\lambda(v))$$
where the choice depends on whether x is the right or left child, respectively, of u in $T_i$.

Case 3: If x does appear in $T_{i+1}$, then we call Calc(x, i + 1). This returns the correct value of $\pi(x)$.
For any child z of x in $T_i$ that remains a child of x in $T_{i+1}$, it also returns the correct value of $\lambda(z)$. If z is a child of x that does not occur in $T_{i+1}$, then it must be the case that z was raked at level i, so that one of z's children, say e, is a leaf; let the other child be q. In this situation Calc(x, i + 1) has returned the value of $\lambda(q)$, and we can compute
$$\lambda(z) = A_i(z)\,\lambda(e) \otimes B_i(z)\,\lambda(q)$$
and return this value.

In all three cases, there is a constant amount of work done in addition to a single recursive call that uses equations at a higher level. Since there are $O(\log N)$ levels of equations, each requiring only matrix-by-vector multiplication, the total work done is $O(k^2 \log N)$.

Extended Example

In this section we illustrate the application of our algorithms to a specific example. Consider the sequence of contracted trees shown in Figure 6. Corresponding to these trees we have such equations as the following:

For $T_0$:
$$\lambda(x_1) = A_0(x_1)\,\lambda(x_2) \otimes B_0(x_1)\,\lambda(x_3)$$
$$\pi(x_2) = D_0(x_2)\,(\pi(x_1) \otimes C_0(x_2)\,\lambda(x_3))$$
and so on.

For $T_1$:
$$\lambda(x_1) = A_1(x_1)\,\lambda(x_2) \otimes B_1(x_1)\,\lambda(e_9)$$
$$\pi(x_2) = D_1(x_2)\,(\pi(x_1) \otimes C_1(x_2)\,\lambda(e_9))$$
and so on.

For $T_2$:
$$\lambda(x_1) = A_2(x_1)\,\lambda(x_4) \otimes B_2(x_1)\,\lambda(e_9)$$
$$\pi(x_4) = D_2(x_4)\,(\pi(x_1) \otimes C_2(x_4)\,\lambda(e_9))$$
and so on.

For $T_3$:
$$\lambda(x_1) = A_3(x_1)\,\lambda(e_1) \otimes B_3(x_1)\,\lambda(e_9)$$

Now consider, for instance, the effect of an update for $e_2$. Since it is raked immediately, the new value of $\lambda(e_2)$ is incorporated in:
$$B_1(x_6) = B_0(x_6)\,\mathrm{Diag}_{A_0(x_8)\lambda(e_2)}\,B_0(x_8)$$
From subsequent Rake operations we know that $A_2(x_4)$ depends on $B_1(x_6)$, and $A_3(x_1)$ depends on $A_2(x_4)$, so we must also update these values as follows:
$$A_2(x_4) = A_1(x_4)\,\mathrm{Diag}_{B_1(x_6)\lambda(e_3)}\,A_1(x_6)$$
$$A_3(x_1) = A_2(x_1)\,\mathrm{Diag}_{B_2(x_4)\lambda(e_5)}\,A_2(x_4)$$
Finally, consider a query for $x_7$. Since $x_7$ is raked together with $e_5$ in $T_0$, we follow the steps outlined above and generate the following calls: Calc($x_7$, 0), Calc($x_4$, 1), Calc($x_4$, 2), and Calc($x_1$, 3). This provides us with $\pi(x_7)$. In this case, $\lambda(x_7)$ is particularly easy to compute, since both of $x_7$'s children are leaf nodes. Then we simply compute $\lambda(x_7) \otimes \pi(x_7)$ and normalize, giving us the conditional marginal distribution $Bel(x_7)$ as required.

Join Trees

Perhaps the best-known technique for computing with arbitrary (i.e., not singly-connected) Bayesian networks uses the idea of join trees (junction trees) (Lauritzen & Spiegelhalter, 1988). In many ways a join tree can be thought of as a causal tree, albeit one with somewhat special structure. Thus the algorithm in the previous section can be applied. However, the structure of a join tree permits some optimization, which we describe in this section. This becomes especially relevant in the next section, where we use the join-tree technique to show how $O(\log N)$ updates and queries can be done for arbitrary polytrees. Our review of join trees and their utility is extremely brief and quite incomplete; for clear expositions see, for instance, Spiegelhalter et al. (1993) and Pearl (1988).

Given any Bayesian network, the first step towards constructing a join tree is to moralize the network: insert edges between every pair of parents of a common node, and then treat all edges in the graph as being undirected (Spiegelhalter et al., 1993).
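A minimal sketch of this moralization step (plain Python; the parents-dictionary input format is our own assumption):

```python
def moralize(parents):
    """Moral graph of a Bayesian network.  parents maps every node to the
    list of its parents (empty for roots).  Returns undirected edges."""
    edges = set()
    for child, ps in parents.items():
        for p in ps:
            edges.add(frozenset((p, child)))           # keep original arcs
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                edges.add(frozenset((ps[i], ps[j])))   # "marry" co-parents
    return edges
```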
Join Trees

Perhaps the best-known technique for computing with arbitrary (i.e., not singly-connected) Bayesian networks uses the idea of join trees (junction trees) (Lauritzen & Spiegelhalter, 1988). In many ways a join tree can be thought of as a causal tree, albeit one with somewhat special structure. Thus the algorithm in the previous section can be applied. However, the structure of a join tree permits some optimization, which we describe in this section. This becomes especially relevant in the next section, where we use the join-tree technique to show how O(log N) updates and queries can be done for arbitrary polytrees. Our review of join trees and their utility is extremely brief and quite incomplete; for clear expositions see, for instance, Spiegelhalter et al. (1993) and Pearl (1988).

Given any Bayesian network, the first step towards constructing a join tree is to moralize the network: insert edges between every pair of parents of a common node, and then treat all edges in the graph as being undirected (Spiegelhalter et al., 1993). The resulting undirected graph is called the moral graph. We are interested in undirected graphs that are chordal: every cycle of length 4 or more should contain a chord (i.e., an edge between two nodes that are non-adjacent in the cycle). If the moral graph is not chordal, it is necessary to add edges to make it so; various techniques for this triangulation stage are known (for instance, see Spiegelhalter et al., 1993).

If p is a probability distribution represented in a Bayesian network G = (V, E), and M = (V, F) is the result of moralizing and then triangulating G, then:

1. M has at most |V| cliques, say C_1, ..., C_{|V|}.
2. The cliques can be ordered so that for each i > 1 there is some j(i) < i such that C_i ∩ C_j(i) = C_i ∩ (C_1 ∪ C_2 ∪ ... ∪ C_{i−1}). The tree T formed by treating the cliques as nodes, and connecting each node C_i to its "parent" C_j(i), is called a join tree.
3. p = ∏_i p(C_i | C_j(i)).
4. p(C_i | C_j(i)) = p(C_i | C_j(i) ∩ C_i).

From 2 and 3, we see that if we direct the edges in T away from the "parent" cliques, the resulting directed tree is in fact a Bayesian causal tree that can represent the original distribution p. This is true no matter what the form of the original graph. Of course, the price is that the cliques may be large, and so the domain size (the number of possible values of a clique node) can be of exponential size. This is why this technique is not guaranteed to be efficient.

We can use the Rake technique of Section 2 on the directed join tree without any modification. However, property 4 above shows that the conditional probability matrices in the join tree have a special structure. We can use this to gain some efficiency. In the following, let k be the domain size of the variables in G as usual. Let n be the maximum size of cliques in the join tree; without loss of generality we can assume that all cliques are of the same size (because we can add "dummy" variables). Thus the domain size of each clique is K = k^n. Finally, let c be the maximum intersection size of a clique and its parent (i.e., |C_j(i) ∩ C_i|) and let L = k^c.

In the standard algorithm, we would represent p(C_i | C_j(i)) as a K × K matrix, M_{C_i|C_j(i)}. However, p(C_i | C_j(i) ∩ C_i) can be represented as a smaller L × K matrix, M_{C_i|C_j(i)∩C_i}. By property 4 above, M_{C_i|C_j(i)} is identical to M_{C_i|C_j(i)∩C_i}, except that many rows are repeated. Thus there is a K × L matrix J such that M_{C_i|C_j(i)} = J · M_{C_i|C_j(i)∩C_i}. (J is actually a simple matrix whose entries are 0 and 1, with exactly one 1 per row; however, we do not use this fact.)

Our claim is that, in the case of join trees, the following is true. First, the matrices A_i and B_i used in the Rake algorithm can be stored in factored form, as the product of two matrices of dimension K × L and L × K respectively. So, for instance, we factor A_i as A_i^l · A_i^r. We never need to explicitly compute, or store, the full matrices. As we have just seen, this claim is true when i = 0, because the M matrices factor this way. The proof for i > 1 uses an inductive argument, which we illustrate below.
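Before turning to the second claim, a small numerical check of this row-replication structure may help; the sizes below are illustrative (in the paper, K = k^n and L = k^c).

```python
import numpy as np

K, L = 4, 2
rng = np.random.default_rng(1)
proj = np.array([0, 0, 1, 1])      # state of C_j(i) -> state of C_j(i) ∩ C_i
J = np.zeros((K, L))
J[np.arange(K), proj] = 1.0        # 0/1 matrix, exactly one 1 per row
M_small = rng.random((L, K))       # stands in for p(C_i | C_j(i) ∩ C_i)
M_full = J @ M_small               # stands in for p(C_i | C_j(i)): rows repeated
assert np.allclose(M_full, M_small[proj])
```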
The second claim is that, when the matrices are stored in factored form, all the matrix multiplications used in the Rake algorithm are of one of the following types: 1) an L × K matrix times a K × L matrix, 2) an L × K matrix times a K × K diagonal matrix, 3) an L × L matrix times an L × K matrix, or 4) an L × K matrix times a vector.

To prove these claims consider, for instance, the equation defining B_{i+1} in terms of lower-level matrices. From Section 2,

B_{i+1}(u) = B_i(u) Diag_{A_i(x) λ(e)} B_i(x).

But, by assumption, this is

(B_i^l(u) B_i^r(u)) Diag_{(A_i^l(x) A_i^r(x)) λ(e)} (B_i^l(x) B_i^r(x)),

which, using associativity, is clearly equivalent to

B_i^l(u) [ ((B_i^r(u) Diag_{A_i^l(x) (A_i^r(x) λ(e))}) B_i^l(x)) B_i^r(x) ].

However, every multiplication in this expression is of one of the forms stated earlier. Identifying B_{i+1}^l(u) as B_i^l(u) and B_{i+1}^r(u) as the bracketed part of the expression proves this case, and of course the case where we rake a left child (so that A_{i+1}(u) is updated) is analogous.

Thus, even using the most straightforward technique for matrix multiplication, the cost of updating B_{i+1} is O(KL²) = O(k^{n+2c}). This contrasts with O(K³) if we do not factor the matrices, and may represent a worthwhile speedup if c is small. Note that the overall time for an update using this scheme is O(k^{n+2c} log N). Queries, which only involve matrix-by-vector multiplication, require O(k^{n+c} log N) time.

For many join trees the difference between N and log N is unimportant, because the clique domain size K is often enormous and dominates the complexity. Indeed, K and L may be so large that we cannot represent the required matrices explicitly. Of course, in such cases our technique has little to offer. But there will be other cases in which the benefits will be worthwhile. The most important general class in which this is so, and our immediate reason for presenting the technique for join trees, is the case of polytrees.
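As a sanity check on the factored update, the following sketch verifies the associativity argument numerically; the dimensions and matrices are arbitrary stand-ins.

```python
import numpy as np

K, L = 8, 2
rng = np.random.default_rng(2)
Bl_u, Bl_x, Al_x = (rng.random((K, L)) for _ in range(3))
Br_u, Br_x, Ar_x = (rng.random((L, K)) for _ in range(3))
lam_e = rng.random(K)

D = np.diag(Al_x @ (Ar_x @ lam_e))        # Diag of A_i(x) lambda(e), factored
full = (Bl_u @ Br_u) @ D @ (Bl_x @ Br_x)  # unfactored B_{i+1}(u), K x K
Br_new = ((Br_u @ D) @ Bl_x) @ Br_x       # the L x K bracketed part
assert np.allclose(full, Bl_u @ Br_new)   # B^l is kept, only B^r is updated
```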
Polytrees

A polytree is a singly connected Bayesian network; we drop the assumption of Section 2 that each node has at most one parent. Polytrees offer much more flexibility than causal trees, and yet there is a well-known process that can update and query in O(N) time, just as for causal trees. For this reason polytrees are an extremely popular class of networks.

We suspect that it is possible to present an O(log N) algorithm for updates and queries in polytrees as a direct extension of the ideas in Section 2. Instead we propose a different technique, which involves converting a polytree to its join tree and then using the ideas of the preceding section. The basis for this is the simple observation that the moral graph of a polytree is already chordal. Thus (as we show in detail below) little is lost by considering the join tree instead of the original polytree. The specific property of polytrees that we require is the following. We omit the proof of this well-known proposition.

Proposition 4: If T is the moral graph of a polytree P = (V, E), then T is chordal, and the set of maximal cliques in T is {{v} ∪ parents(v) : v ∈ V}.

Let p be the maximum number of parents of any node. From the proposition, every maximal clique in the join tree has at most p + 1 variables, and so the domain size of a node in the join tree is K = k^{p+1}. This may be large, but recall that the conditional probability matrix in the original polytree, for a variable with p parents, has K entries anyway, since we must give the conditional distribution for every combination of the node's parents. Thus K is really a measure of the size of the polytree itself.

It now follows from the proposition above that we can perform query and update in polytrees in time O(K³ log N), simply by using the algorithm of Section 2 on the directed join tree. But, as noted in Section 5, we can do better. Recall that the savings depend on c, the maximum size of the intersection between any node and its parent in the join tree.

However, when the join tree is formed from a polytree, no two cliques can share more than a single node. This follows immediately from Proposition 4, for if two cliques had more than one node in common then there would have to be either two nodes that share more than one parent, or else a node and one of its parents that both share yet another parent. Neither of these is consistent with the network being a polytree. Thus in the complexity bounds of Section 5 we can put c = 1. It follows that we can process updates in O(K k^{2c} log N) = O(k^{p+3} log N) time and queries in O(k^{p+2} log N) time.
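A quick way to see the c = 1 property is to form the cliques {v} ∪ parents(v) directly and check pairwise intersections; the small polytree below is our own example, not one from the paper.

```python
from itertools import combinations

# a small polytree given by its parent sets (hypothetical example):
# edges a->c, b->c, c->e, d->e form a singly connected network
parents = {"a": [], "b": [], "c": ["a", "b"], "d": [], "e": ["c", "d"]}
cliques = [{v, *ps} for v, ps in parents.items()]

# no two cliques of a polytree's join tree share more than one node
assert all(len(p & q) <= 1 for p, q in combinations(cliques, 2))
```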
Application: Towards Automated Site-Specific Muta-Genesis

An experiment commonly performed in biology laboratories is a procedure in which a particular site in a protein is changed (i.e., a single amino acid is mutated) and the protein is then tested to see whether it settles into a different conformation. In many cases, with overwhelming probability the protein does not change its secondary structure outside the mutated region. This process is often called muta-genesis. Delcher et al. (1993) developed a probabilistic model of a protein structure which is basically a long chain. The length of the chain varies between 300–500 nodes. The nodes in the network are either protein-structure nodes (PS-nodes) or evidence nodes (E-nodes). Each PS-node in the network is a discrete random variable X_i that assumes values corresponding to descriptors of secondary sequence structure: helix, sheet, or coil. With each PS-node the model associates an evidence node that corresponds to an occurrence of a particular subsequence of amino acids at a particular location in the protein.

In our model, protein-structure nodes are finite strings over the alphabet {h, e, c}. For example, the string hhhhhh is a string of six residues in an α-helical conformation, while eecc is a string of two residues in a β-sheet conformation followed by two residues folded as a coil. Evidence nodes are nodes that contain information about a particular region of the protein. Thus, the main idea is to represent physical and statistical rules in the form of a probabilistic network.

In our first set of experiments we converged on the following model that, while clearly biologically naive, seems to match in prediction accuracy many existing approaches such as neural networks. The network looks like a set of PS-nodes connected as a chain. To each such node we connect a single evidence node. In our experiments the PS-nodes are strings of length two or three over the alphabet {h, e, c}, and the evidence nodes are strings of the same length over the set of amino acids. The following example clarifies our representation. Assume we have a string of amino acids GSAT. We model the string as a network comprising three evidence nodes GS, SA, and AT, and three PS-nodes. The network is shown in Figure 7.

A correct prediction will assign the values cc, ch, and hh to the PS-nodes, as shown in the figure. Now that we have a probabilistic model, we can test the robustness of the protein, i.e., whether small changes in the protein affect the structure of certain critical sites in the protein. In our experiments, the probabilistic network performs a "simulated evolution" of the protein: the simulator repeatedly mutates a region in the chain and then tests whether some designated sites in the protein that are coiled into a helix are predicted to remain in this conformation. The main goal of the experiment was to test whether stable bonds far away from the mutated location were affected. Our previous results (Delcher et al., 1993) support the current thesis in the biology community, namely that local changes rarely affect distant structure.

The algorithms we presented in the previous sections of the paper are perfectly suited to this type of application and are predicted to generate a factor-of-10 improvement in efficiency over the brute-force implementation presented by Delcher et al. (1993), where each change is propagated throughout the network.
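For concreteness, here is a toy construction of the causal chain for the GSAT fragment; the encoding is our own simplification of the model, not the authors' code.

```python
from itertools import product

sequence = "GSAT"
pairs = [sequence[i:i + 2] for i in range(len(sequence) - 1)]  # evidence nodes
assert pairs == ["GS", "SA", "AT"]

# each PS-node ranges over pairs of secondary-structure symbols, so k = 9
ps_domain = ["".join(p) for p in product("hec", repeat=2)]
chain = [{"ps_domain": ps_domain, "evidence": ev} for ev in pairs]  # 3 PS-nodes
```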
Summary

This paper has proposed several new algorithms that yield a substantial improvement in the performance of probabilistic networks in the form of causal trees. Our updating procedures absorb sufficient information in the tree that our query procedure can compute the correct probability distribution of any node given the current evidence. In addition, all procedures execute in time O(log N), where N is the size of the network. Our algorithms are expected to generate orders-of-magnitude speed-ups for causal trees that contain long paths (not necessarily chains) and for which the matrices of conditional probabilities are relatively small. We are currently experimenting with our approach on singly connected networks (polytrees). It is likely to be more difficult to generalize the techniques to general networks. Since it is known that the general problem of inference in probabilistic networks is NP-hard (Cooper, 1990), it obviously is not possible to obtain polynomial-time incremental solutions of the type discussed in this paper for general probabilistic networks. The other natural open question is extending the approach developed in this paper to other dynamic operations on probabilistic networks, such as addition and deletion of nodes and modification of the matrices of conditional probabilities (as a result of learning).

It would also be interesting to investigate practical logarithmic-time parallel algorithms for probabilistic networks on realistic parallel models of computation. One of the main goals of massively parallel AI research is to produce networks that perform real-time inference over large knowledge bases very efficiently (i.e., in time proportional to the depth of the network rather than the size of the network) by exploiting massive parallelism. Jerry Feldman pioneered this philosophy in the context of neural architectures (see Stanfill and Waltz, 1986, Shastri, 1993, and Feldman and Ballard, 1982). To achieve this type of performance in the neural network framework, we typically postulate parallel hardware that associates a processor with each node in a network and typically ignores communication requirements. With careful mapping to parallel architectures one can indeed achieve efficient parallel execution of specific classes of inference operations (see Mani and Shastri, 1994, Kasif, 1990, and Kasif and Delcher, 1992). The techniques outlined in this paper present an alternative architecture that supports very fast (sub-linear time) response capability on sequential machines, based on preprocessing. However, our approach is obviously limited to applications where the number of updates and queries pending at any time is constant. One would naturally hope to develop parallel computers that support real-time probabilistic reasoning for general networks.

Acknowledgements

Simon Kasif's research at Johns Hopkins University was sponsored in part by the National Science Foundation under Grants No. IRI-9116843, IRI-9223591, and IRI-9220960.
[ { "authors": "T Berger; Z Ye", "journal": "IEEE Trans. on Information Theory", "ref_id": "b0", "title": "Entropic aspects of random elds on trees", "year": "1990" }, { "authors": "D M Chelberg", "journal": "", "ref_id": "b1", "title": "Uncertainty in interpretation of range imagery", "year": "1990" }, { "authors": "R F Cohen; R Tamassia", "journal": "", "ref_id": "b2", "title": "Dynamic trees and their applications", "year": "1991" }, { "authors": "G Cooper", "journal": "Arti cial Intelligence", "ref_id": "b3", "title": "The computational complexity of probabilistic inference using bayes belief networks", "year": "1990" }, { "authors": "A Delcher; S Kasif", "journal": "", "ref_id": "b4", "title": "Improved decision making in game trees: Recovering from pathology", "year": "1992" }, { "authors": "A L Delcher; S Kasif; H R Goldberg; B Hsu", "journal": "", "ref_id": "b5", "title": "Probabilistic prediction of protein secondary structure using causal networks", "year": "1993" }, { "authors": "R Duda; P Hart", "journal": "Wiley", "ref_id": "b6", "title": "Pattern Classi cation and Scene Analysis", "year": "1973" }, { "authors": "J A Feldman; D Ballard", "journal": "Cognitive Science", "ref_id": "b7", "title": "Connectionist models and their properties", "year": "1982" }, { "authors": "G N Frederickson", "journal": "", "ref_id": "b8", "title": "A data structure for dynamically maintaining rooted trees", "year": "1993" }, { "authors": "Y Hel-Or; M Werman", "journal": "", "ref_id": "b9", "title": "Absolute orientation from uncertain data: A uni ed approach", "year": "1992" }, { "authors": "R M Karp; V Ramachandran", "journal": "North-Holland", "ref_id": "b10", "title": "Parallel algorithms for shared-memory machines", "year": "1990" }, { "authors": "S Kasif", "journal": "Arti cial Intelligence", "ref_id": "b11", "title": "On the parallel complexity of discrete relaxation in constraint networks", "year": "1990" }, { "authors": "S Kasif; A Delcher", "journal": "Arti cial Intelligence", "ref_id": "b12", "title": "Analysis of local consistency in parallel constraint networks", "year": "1994" }, { "authors": "S R Kosaraju; A L Delcher", "journal": "Springer Verlag", "ref_id": "b13", "title": "Optimal parallel evaluation of tree-structured computations by raking", "year": "1988" }, { "authors": "S Lauritzen; D Spiegelhalter", "journal": "J. Royal Statistical Soc. Ser. 
B", "ref_id": "b14", "title": "Local computations with probabilities on graphical structures and their applications to expert systems", "year": "1988" }, { "authors": "D Mani; L Shastri", "journal": "", "ref_id": "b15", "title": "Massively parallel reasoning with very large knowledge bases", "year": "1994" }, { "authors": "G L Miller; J Reif", "journal": "", "ref_id": "b16", "title": "Parallel tree contraction and its application", "year": "1985" }, { "authors": "J Pearl", "journal": "", "ref_id": "b17", "title": "Probabilistic Reasoning in Intelligent Systems", "year": "1988" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b18", "title": "", "year": "" }, { "authors": "M A Peot; R D Shachter", "journal": "Arti cial Intelligence", "ref_id": "b19", "title": "Fusion and propagation with multiple observations in belief networks", "year": "1991" }, { "authors": "J Rachlin; S Kasif; S Salzberg; D Aha", "journal": "", "ref_id": "b20", "title": "Towards a better understanding of memory-based and bayesian classi ers", "year": "1994" }, { "authors": "L Shastri", "journal": "", "ref_id": "b21", "title": "A computational model of tractable reasoning: Taking inspiration from cognition", "year": "1993" }, { "authors": "D Spiegelhalter; A Dawid; S Lauritzen; R Cowell", "journal": "Statistical Science", "ref_id": "b22", "title": "Bayesian analysis in expert systems", "year": "1993" }, { "authors": "C Stan Ll; D Waltz", "journal": "Communications of the ACM", "ref_id": "b23", "title": "Toward memory-based reasoning", "year": "1986" }, { "authors": "A Wilsky", "journal": "IEEE Trans. Signal Processing", "ref_id": "b24", "title": "Multiscale representation of markov random elds", "year": "1993" } ]
[ { "formula_coordinates": [ 4, 202.56, 105.12, 205.68, 195.32 ], "formula_id": "formula_0", "formula_text": "Y jX Z @ @ @ @ R M ZjX V M V jU X @ @ @ @ @ @ R M XjU U Figure 1: A segment of a causal tree." }, { "formula_coordinates": [ 4, 98.88, 643.44, 414.24, 30 ], "formula_id": "formula_1", "formula_text": "Pr(U = u; V = v; X = x; Y = y; Z = z) = Pr(U = u) Pr(V = vjU = u) Pr(X = xjU = u) Pr(Y = yjX = x) Pr(Z = zjX = x):" }, { "formula_coordinates": [ 6, 225.6, 372.12, 191.76, 37.32 ], "formula_id": "formula_2", "formula_text": "(X) = (M Y jX (Y )) (M ZjX (Z)) (X) = M T XjU (U) (M V jU (V ))" }, { "formula_coordinates": [ 7, 211.92, 577.56, 191.76, 37.56 ], "formula_id": "formula_3", "formula_text": "(X) = (M Y jX (Y )) (M ZjX (Z)) (X) = M T XjU (U) (M V jU (V ))" }, { "formula_coordinates": [ 8, 223.2, 336.24, 172.8, 33.6 ], "formula_id": "formula_4", "formula_text": "(x) = A i (x) (y) B i (x) (z) (x) = D i (x) ( (u) C i (x) (v))" }, { "formula_coordinates": [ 8, 90, 562.56, 432, 113.52 ], "formula_id": "formula_5", "formula_text": "(u) = A i (u) (v) B i (u) (x) = A i (u) (v) B i (u) (A i (x) (e) B i (x) (z)) = A i (u) (v) B i (u) Diag A i (x) (e) B i (x) (z) = A i (u) (v) B i (u) Diag A i (x) (e) B i (x) (z) = A i+1 (u) (v) B i+1 (u) (z) where A i+1 (u) = A i (u) and B i+1 (u) = B i (u) Diag A i (x) (e) B i (x). (Of course, the case" }, { "formula_coordinates": [ 9, 210.96, 152.64, 197.04, 33.6 ], "formula_id": "formula_6", "formula_text": "(z) = D i (z) ( (x) C i (z) (e)) = D i+1 (z) ( (u) C i+1 (z) (v)):" }, { "formula_coordinates": [ 9, 90, 205.92, 279.42, 18 ], "formula_id": "formula_7", "formula_text": "C i+1 (z) = C i (x) and D i+1 (z) = D i (z) Diag C i (z) (e) D i (x)" }, { "formula_coordinates": [ 9, 201.12, 219.6, 318, 90.96 ], "formula_id": "formula_8", "formula_text": ") = C i (x) follows. 
Furthermore, D i+1 (z) = B i+1 (u) T = (B i (u) Diag A i (x) (e) B i (x)) T = B i (x) T Diag A i (x) (e) B i (u) T = D i (z) Diag C i (z) (e) D i (x)" }, { "formula_coordinates": [ 9, 90, 352.8, 432, 71.28 ], "formula_id": "formula_9", "formula_text": "(v) = D i (v) ( (u) C i (v) (x)) = D i+1 (v) ( (u) C i+1 (v) (z)): By substituting for (x), this can be shown to be true if D i+1 (v) = D i (v) = A i (u) T = A i+1 (u) T and C i+1 (v) = C i (v) Diag A i (x) (e) B i (x) = B i+1" }, { "formula_coordinates": [ 10, 390.48, 555.48, 8.88, 56.64 ], "formula_id": "formula_10", "formula_text": "9 > > > = > > > ;" }, { "formula_coordinates": [ 11, 395.76, 382.68, 8.88, 50.88 ], "formula_id": "formula_11", "formula_text": "9 > > = > > ;" }, { "formula_coordinates": [ 13, 234.48, 213.6, 162.96, 17.04 ], "formula_id": "formula_12", "formula_text": "(x) = A i (x) (y) B i (x) (z)" }, { "formula_coordinates": [ 13, 223.2, 533.04, 172.8, 17.04 ], "formula_id": "formula_13", "formula_text": "(x) = D i (x) ( (u) C i (x) (v))" }, { "formula_coordinates": [ 14, 221.28, 343.8, 183.36, 40.2 ], "formula_id": "formula_14", "formula_text": "(x) = ( D i (x) ( (u) A i+1 (u) (v)) D i (x) ( (u) B i+1 (u) (v))" }, { "formula_coordinates": [ 14, 242.88, 526.08, 146.16, 17.04 ], "formula_id": "formula_15", "formula_text": "(z) = A i (z) (e) B i (z) (q)" }, { "formula_coordinates": [ 16, 199.44, 337.8, 198.24, 16.8 ], "formula_id": "formula_16", "formula_text": "B 1 (x 6 ) = B 0 (x 6 ) Diag A0(x8) (e2) B 0 (x 8 )" }, { "formula_coordinates": [ 16, 199.44, 393.48, 198, 29.28 ], "formula_id": "formula_17", "formula_text": "A 2 (x 4 ) = A 1 (x 4 ) Diag B1(x6) (e3) A 1 (x 6 ) A 3 (x 1 ) = A 2 (x 1 ) Diag B2(x4) (e5) A 2 (x 4 )" }, { "formula_coordinates": [ 17, 223.44, 229.68, 282, 42.48 ], "formula_id": "formula_18", "formula_text": ") < i such that C i \\ C j(i) = C i \\ (C 1 C 2 : : : C i 1 :)" }, { "formula_coordinates": [ 17, 137.28, 312.12, 65.04, 33.96 ], "formula_id": "formula_19", "formula_text": "Y i p(C i jC j(i) )" }, { "formula_coordinates": [ 18, 90, 263.04, 347.52, 61.32 ], "formula_id": "formula_20", "formula_text": "(B l i (u) B r i (u)) Diag (A l i (x) A r i (x)) (e) (B l i (x) B l i (x)); which, using associativity, is clearly equivalent to B l i (u) h ((B r i (u) Diag A l i (x) (A r i (x) (e)) ) B l i (x)) B l i (x) i :" }, { "formula_coordinates": [ 18, 90, 380.16, 177.36, 18 ], "formula_id": "formula_21", "formula_text": "updating B i+1 is O(KL 2 ) = O(k n+2c" } ]
Logarithmic-Time Updates and Queries in Probabilistic Networks
Traditional databases commonly support efficient query and update procedures that operate in time which is sublinear in the size of the database. Our goal in this paper is to take a first step toward dynamic reasoning in probabilistic databases with comparable efficiency. We propose a dynamic data structure that supports efficient algorithms for updating and querying singly connected Bayesian networks. In the conventional algorithm, new evidence is absorbed in time O(1) and queries are processed in time O(N), where N is the size of the network. We propose an algorithm which, after a preprocessing phase, allows us to answer queries in time O(log N) at the expense of O(log N) time per evidence absorption. The usefulness of sub-linear processing time manifests itself in applications requiring (near) real-time response over large probabilistic databases. We briefly discuss a potential application of dynamic probabilistic reasoning in computational biology.
Arthur L Delcher; Adam J Grove; Simon Kasif
[ { "figure_caption": "can perform the Query-Node operation in O(1) time although evidence absorption, i.e., the Update-Node operation, takes O(N) time where N is the size of the network. Alternatively, one can assume that the Update-Node operation takes O(1) time (by simply recording the change) and the Query-Node operation takes O(N) time (evaluating the entire network).", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3: A simple chain example.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Function to compute the value of a node.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The update procedure.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Example of tree contraction.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Example of causal tree model using pairs, showing protein segment GSAT with corresponding secondary structure cchh.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b32", "b14", "b47", "b26" ], "table_ref": [], "text": "Computation is ultimately a physical process (Landauer, 1991). That is, in practice the range of physically realizable devices determines what is computable and the resources, such as computer time, required to solve a given problem. Computing machines can exploit a variety of physical processes and structures to provide distinct trade-o s in resource requirements. An example is the development of parallel computers with their trade-o of overall computation time against the number of processors employed. E ective use of this trade-o can require algorithms that would be very ine cient if implemented serially.\nAnother example is given by hypothetical quantum computers (DiVincenzo, 1995). They o er the potential of exploiting quantum parallelism to trade computation time against the use of coherent interference among very many di erent computational paths. However, restrictions on physically realizable operations make this trade-o di cult to exploit for search problems, resulting in algorithms essentially equivalent to the ine cient method of generate-and-test. Fortunately, recent work on factoring (Shor, 1994) shows that better algorithms are possible. Here we continue this line of work by introducing a new quantum algorithm for some particularly di cult combinatorial search problems. While this algorithm represents a substantial improvement for quantum computers, it is particularly ine cient as a classical search method, both in memory and time requirements.\nWhen evaluating algorithms, computational complexity theory usually focuses on the scaling behavior in the worst case. Of particular theoretical concern is whether the search cost grows exponentially or polynomially. However, in many practical situations, typical or average behavior is of more interest. This is especially true because many instances of search problems are much easier to solve than is suggested by worst case analyses. In fact, recent studies have revealed an important regularity in the class of search problems. Speci cally, for a wide variety of search methods, the hard instances are not only rare but c 1996 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved. also concentrated near abrupt transitions in problem behavior analogous to physical phase transitions (Hogg, Huberman, & Williams, 1996). To exhibit this concentration of hard instances a search algorithm must exploit the problem constraints to prune unproductive search choices. Unfortunately, this is not easy to do within the range of allowable quantum computational operations. It is thus of interest to see if these results generalize to quantum search methods as well.\nIn this paper, the new algorithm is evaluated empirically to determine its average behavior. The algorithm is also shown to exhibit the phase transition, indicating it is indeed managing to, in e ect, prune unproductive search. This leaves for future work the analysis of its worst case performance.\nThis paper is organized as follows. First we discuss combinatorial search problems and the phase transitions where hard problem instances are concentrated. Second, after a brief summary of quantum computing, the new quantum search algorithm is motivated and described. In fact, there are a number of natural variants of the general algorithm. Two of these are evaluated empirically to exhibit the generality of the phase transition and their performance. 
Finally, some important caveats for the implementation of quantum computers and open issues are presented.

Combinatorial Search

Combinatorial search is among the hardest of common computational problems: the solution time can grow exponentially with the size of the problem (Garey & Johnson, 1979). Examples arise in scheduling, planning, circuit layout and machine vision, to name a few areas. Many of these examples can be viewed as constraint satisfaction problems (CSPs) (Mackworth, 1992). Here we are given a set of n variables, each of which can be assigned b possible values. The problem is to find an assignment for each variable such that together they satisfy some specified constraints. For instance, consider the small scheduling problem of selecting one of two periods in which to teach each of two classes that are taught by the same person. We can regard each class as a variable and its time slot as its value, i.e., here n = b = 2. The constraint is that the two classes are not assigned to the same time.

Fundamentally, the combinatorial search problem consists of finding those combinations of a discrete set of items that satisfy specified requirements. The number of possible combinations to consider grows very rapidly (e.g., exponentially or factorially) with the number of items, leading to potentially lengthy solution times and severely limiting the feasible size of such problems. For example, the number of possible assignments in a constraint problem is b^n, which grows exponentially with the problem size (given by the number of variables n).

Because of the exponentially large number of possibilities, it appears that the time required to solve such problems must grow exponentially in the worst case. However, for many such problems it is easy to verify that a proposed solution is in fact correct. These problems form the well-studied class of NP problems: informally, we say they are hard to solve but easy to check. One well-studied instance is graph coloring, where the variables represent nodes in a graph, the values are colors for the nodes, and the constraints are that each pair of nodes linked by an edge in the graph must have different colors. Another example is propositional satisfiability (SAT), where the variables take on logical values of true or false, and the assignment must satisfy a specified propositional formula involving the variables. Both these examples are instances of particularly difficult NP problems known as the class of NP-complete search problems (Garey & Johnson, 1979).

Phase Transitions

Much of the theoretical work on NP search problems examines their worst-case behavior. Although these search problems can be very hard in the worst case, there is a great deal of individual variation among problems and among different search methods. A number of recent studies of NP search problems have focused on regularities of the typical behavior (Cheeseman, Kanefsky, & Taylor, 1991; Mitchell, Selman, & Levesque, 1992; Williams & Hogg, 1994; Hogg et al., 1996; Hogg, 1994). This work has identified a number of common behaviors. Specifically, for large problems, a few parameters characterizing their structure determine the relative difficulty for a wide variety of common search methods, on average.
Moreover, changes in these parameters give rise to transitions, becoming more abrupt for larger problems, that are analogous to phase transitions in physical systems. In this case, the transition is from underconstrained to overconstrained problems, with the hardest cases concentrated in the transition region. One powerful result of this work is that this concentration of hard cases occurs at the same parameter values for a wide range of search methods. That is, this behavior is a property of the problems rather than of the details of the search algorithm.

This can be understood by viewing a search as making a series of choices until a solution is found. The overall search will usually be relatively easy (i.e., require few steps) if either there are many choices leading to solutions or else choices that do not lead to solutions can be recognized quickly as such, so that unproductive search is avoided. Whether this condition holds is in turn determined by how tightly constrained the problem is. When there are few constraints, almost all choices are good ones, leading quickly to a solution. With many constraints, on the other hand, there are few good choices, but the bad ones can be recognized very quickly as violating some constraints, so that not much time is wasted considering them. In between these two cases are the hard problems: enough constraints that good choices are rare, but few enough that bad choices are usually recognized only after a lot of additional search.

A more detailed analysis suggests a series of transitions (Hogg & Williams, 1994). With very few constraints, the average search cost scales polynomially. As more constraints are added, there is a transition to exponential scaling. The rate of growth of this exponential increases until the transition region described above is reached. Beyond that point, with its concentration of hard problems, the growth rate decreases. Eventually, for very highly constrained problems, the search cost again grows only polynomially with size.

The Combinatorial Search Space

A general view of the combinatorial search problem is that it consists of N items (for CSPs, these items are all possible variable-value pairs) and a requirement to find a solution, i.e., a set of L < N items that satisfies specified conditions or constraints. These conditions in turn can be described as a collection of nogoods, i.e., sets of items whose combination is inconsistent with the given conditions. In this context we define a good to be a set of items that is consistent with all the constraints of the problem. We also say a set is complete if it has L items, while smaller sets are partial or incomplete. Thus a solution is a complete good set. In addition, a partial solution is an incomplete good set.

A key property that makes this set representation conceptually useful is that if a set is nogood, so are all of its supersets. These sets, grouped by size and with each set linked to its immediate supersets and subsets, form a lattice structure. This structure for N = 4 is shown in Fig. 1, in which the sets are grouped into levels by size and lines are drawn between each set and its immediate supersets and subsets; the bottom of the lattice, level 0, represents the single set of size zero, the four points at level 1 represent the four singleton subsets, and so on. We say that the

N_i = (N choose i)    (1)

sets of size i are at level i in the lattice.
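A minimal sketch of this lattice and the upward propagation of nogoods may make the structure concrete; the code is our own illustration (using, for the constraints, nogoods that rule out items 1 and 3).

```python
from itertools import combinations

N, L = 4, 2
nogood_seeds = [{1}, {3}]                     # sets ruled out by the constraints

def is_nogood(s):
    return any(seed <= s for seed in nogood_seeds)  # supersets of a nogood are nogood

levels = [[set(c) for c in combinations(range(1, N + 1), i)]
          for i in range(L + 1)]
assert len(levels[2]) == 6                    # N_i = (N choose i) sets at level i
solutions = [s for s in levels[L] if not is_nogood(s)]  # -> [{2, 4}]
```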
As described below, the various paths through the lattice from levels near the bottom up to solutions, at level L, can be used to create quantum interference as the basis for a search algorithm.

As an example, consider a problem with N = 4 and L = 2, and suppose the constraints eliminate items 1 and 3. Then we have the sets {}, {2}, and {4} as partial goods, while {1} and {3} are partial nogoods. Among the 6 complete sets, only {2, 4} is good, as the others are supersets of {1} or {3} and hence nogood.

For the search problems studied here, the nogoods directly specified by the problem constraints will be small sets of items, e.g., of size two or three. On the other hand, the number of items and the size of the solutions will grow with the problem size. This gives a number of small nogoods, i.e., near the bottom of the lattice. Examples of such problems include binary constraint satisfaction, graph coloring and propositional satisfiability, mentioned above.

For CSPs, the items are just the possible variable-value pairs in the problem. Thus a CSP with n variables and b values for each has N = nb items. A solution consists of an assignment to each variable that satisfies whatever constraints are given in the problem. Thus a solution consists of a set of L = n items. In terms of the general framework for combinatorial search, these constraint satisfaction problems will also contain a number of problem-independent necessary nogoods, namely those corresponding to giving the same variable two different values. There are n (b choose 2) such necessary nogoods. For a nontrivial search we must have b ≥ 2, so we restrict our attention to the case where L ≤ N/2. This requirement is important in allowing the construction of the quantum search method described below.

Another example is given by a simple CSP consisting of n = 2 variables (v_1 and v_2), each of which can take on one of b = 2 values (1 or 2), and the single constraint that the two variables take on distinct values, i.e., v_1 ≠ v_2. Hence there are N = nb = 4 variable-value pairs v_1 = 1, v_1 = 2, v_2 = 1, v_2 = 2, which we denote as items 1, 2, 3, 4 respectively. The corresponding lattice is given in Fig. 1. What are the nogoods for this problem? First there are those due to the explicit constraint that the two variables have distinct values: {v_1 = 1, v_2 = 1} and {v_1 = 2, v_2 = 2}, i.e., {1, 3} and {2, 4}. In addition, there are necessary nogoods implied by the requirement that a variable take on a unique value, so that any set giving multiple assignments to the same variable is necessarily nogood, namely {v_1 = 1, v_1 = 2} and {v_2 = 1, v_2 = 2}, i.e., {1, 2} and {3, 4}. Referring to Fig. 1, we see that these four nogoods force all sets of size 3 and 4 to be nogood too. However, sets of size zero and one are goods, as are the remaining two sets of size two: {2, 3} and {1, 4}, corresponding to {v_1 = 2, v_2 = 1} and {v_1 = 1, v_2 = 2}, which are the solutions to this problem.

Search methods use various strategies for examining the sets in this lattice. For instance, methods such as simulated annealing (Kirkpatrick, Gelatt, & Vecchi, 1983), heuristic repair (Minton, Johnston, Philips, & Laird, 1992) and GSAT (Selman, Levesque, & Mitchell, 1992) move among complete sets, attempting to find a solution by a series of small changes to the sets. Generally these search techniques continue indefinitely if the problem has no solution, and thus they can never show that a problem is insoluble. Such methods are called incomplete.
In these methods, the search is repeated, from different initial conditions or making different random choices, until either a solution is found or some specified limit on the number of trials is reached. In the latter case, one cannot distinguish a problem with no solution at all from an unlucky series of choices for a soluble problem. Other search techniques attempt to build solutions starting from smaller sets, often by a process of extending a consistent set until either a solution is found or no further consistent extensions are possible. In the latter case the search backtracks to a previous decision point and tries another possible extension, until no further choices remain. By recording the pending choices at each decision point, these backtrack methods can determine that a problem is insoluble, i.e., they are complete or systematic search methods.

This description highlights two distinct aspects of the search procedure: a general method for moving among sets, independent of any particular problem, and a testing procedure that checks sets for consistency with the particular problem's requirements. Often, heuristics are used to make the search decisions depend on the problem structure, hoping to identify changes most likely to lead to a solution and to avoid unproductive regions of the search space. However, conceptually these aspects can be separated, as in the case of the quantum search algorithm presented below.

Quantum Search Methods

This section briefly describes the capabilities of quantum computers, why some straightforward attempts to exploit these capabilities for search are not particularly effective, and then motivates and describes a new search algorithm.

An Overview of Quantum Computers

The basic distinguishing feature of a quantum computer (Benioff, 1982; Bernstein & Vazirani, 1993; Deutsch, 1985, 1989; Ekert & Jozsa, 1995; Feynman, 1986; Jozsa, 1992; Kimber, 1992; Lloyd, 1993; Shor, 1994; Svozil, 1995) is its ability to operate simultaneously on a collection of classical states, thus potentially performing many operations in the time a classical computer would do just one. Alternatively, this quantum parallelism can be viewed as a large parallel computer requiring no more hardware than that needed for a single processor. On the other hand, the range of allowable operations is rather limited.

To describe this more concretely, we adopt the conventional ket notation from quantum mechanics (Dirac, 1958, section 6) to denote various states. That is, we use |ψ⟩ to denote the state of a computer described by ψ. At a low level of description, the state of a classical computer is described by the values of its bits. So for instance if it has n bits, then there are N = 2^n possible states for the machine, which can be associated with the numbers s_1 = 0, ..., s_N = 2^n − 1. We then say the computer is in state |s_i⟩ when the values of its bits correspond to the number i − 1. More commonly, a computer is described in terms of higher-level constructs formed from groups of bits, such as integers, character strings, sets and addresses of variables in a program.
For example, a state that could arise during a search is |{v_1 = 1, v_2 = 1}, soln = False⟩, corresponding to a set of assignments for variables in a CSP and a value of false for the program variable soln, e.g., used to represent whether a solution has been found. In these higher-level descriptions, there will often be aspects of the computer's state, e.g., stack pointers or values for various iteration counters, that are not explicitly mentioned.

The states presented so far, where each bit or higher-level construct has a definite value, apply both to classical and quantum computers. However, quantum computers have a far richer set of possible states. Specifically, if |s_1⟩, ..., |s_N⟩ are the possible states for a classical computer, the possible states of the corresponding quantum computer are all linear superpositions of these states, i.e., states of the form |s⟩ = Σ_i ψ_i |s_i⟩, where ψ_i is a complex number called the amplitude associated with the state |s_i⟩. The physical interpretation of the amplitudes comes from the measurement process. When a measurement is made on the quantum computer in state |s⟩, e.g., to determine the result of the computation represented by a particular configuration of the bits in a register, one of the possible classical states is obtained. Specifically, the classical state |s_i⟩ is obtained with probability |ψ_i|². Furthermore, the measurement process changes the state of the computer to exactly match the result. That is, the measurement is said to collapse the original superposition to the new superposition consisting of the single classical state (i.e., the amplitude of the returned state is 1 and all other amplitudes are zero). This means repeated measurements will always return the same result.

An important consequence of this interpretation results from the fact that probabilities must sum to one. Thus the amplitudes of any superposition of states must satisfy the normalization condition

Σ_i |ψ_i|² = 1    (2)

Another consequence is that the full state of a quantum computer, i.e., the superposition, is not itself an observable quantity. Nevertheless, by changing the amplitudes associated with different classical states, operations on the superposition can affect the probability with which various states are observed. This possibility is crucial for exploiting quantum computation, and makes it potentially more powerful than probabilistic classical machines, in which some choices in the program are made randomly.

These superpositions can also be viewed as vectors in a space whose basis is the individual classical states |s_i⟩, with ψ_i the component of the vector along the i-th basis element of the space. Such a state vector can also be specified by its components as ψ = (ψ_1, ..., ψ_N) when the basis is understood from context. The inner product of two such vectors is φ · ψ = Σ_{i=1}^{N} φ_i* ψ_i, where φ_i* denotes the complex conjugate of φ_i. In matrix notation, this can also be written as φ† ψ, where ψ is treated as a column vector and φ† is a row vector given by the transpose of φ with all entries changed to their complex-conjugate values. For these vectors, the normalization condition amounts to requiring that ψ† ψ = 1.

To complete this overview of quantum computers, it remains to describe how superpositions can be used within a program. In addition to the measurement process described above, there are two types of operations that can be performed on a superposition of states.
The first type is to run classical programs on the machine, and the second allows for creating and manipulating the amplitudes of a superposition. In both these cases, the key property of the superposition is its linearity: an operation on a superposition of states gives the superposition of that operation acting on each of those states individually. As described below, this property, combined with the normalization condition, greatly limits the range of physically realizable operations.

In the first case, a quantum computer can perform a classical program provided it is reversible, i.e., the final state contains enough information to recover the initial state. One way to achieve this is to retain the initial input as part of the output. To illustrate the linearity of operations, consider some reversible classical computation on these states, e.g., f(s_i), which produces a new state from a given input one. When applied to a superposition of states, the result is f(|s⟩) = Σ_i ψ_i |f(s_i)⟩. Why is reversibility required? Suppose the procedure f is not reversible, i.e., it maps at least two distinct states to the same result. For example, suppose f(s_1) = f(s_2) = s_3. Then for the superposition |s⟩ = (1/√2)(|s_1⟩ + |s_2⟩), linearity requires that f(|s⟩) = (1/√2)(|f(s_1)⟩ + |f(s_2)⟩), giving √2 |s_3⟩, a superposition that violates the normalization condition. Thus this irreversible classical operation is not physically realizable on a superposition, i.e., it cannot be used with quantum parallelism.

In contrast to this use of computations on individual states, the second type of operation modifies the amplitudes of various states within a superposition. That is, starting from |s⟩ = Σ_k ψ_k |s_k⟩, the operation, denoted by U, creates a new superposition |s′⟩ = U|s⟩ = Σ_j ψ′_j |s_j⟩. Because the operations are linear with respect to superpositions, the new amplitudes can be expressed in terms of the original ones by ψ′_j = Σ_k U_jk ψ_k, or in matrix notation by ψ′ = U ψ. That is, linearity means that an operation changing the amplitudes can be represented as a matrix. To satisfy the normalization condition, Eq. 2, this matrix must be such that (ψ′)† ψ′ = 1. In terms of the matrix U this condition becomes

1 = (Uψ)† (Uψ) = ψ† U† U ψ    (3)

which must hold for any initial state vector with ψ† ψ = 1. To see what this implies about the matrix A ≡ U† U, suppose ψ = ê_j = (..., 0, 1, 0, ...) is the j-th unit vector, corresponding to the superposition |s_j⟩ in which all amplitudes are zero except ψ_j = 1. In this case ψ† A ψ = A_jj, which must equal one by Eq. 3. That is, the diagonal elements of U† U must all be equal to one. For ψ = (1/√2)(ê_j + ê_k) with j ≠ k,

ψ† A ψ = (1/2)(ê_j + ê_k)† A (ê_j + ê_k) = (1/2)[A_jj + A_kk + A_jk + A_kj]    (4)

This must equal one by Eq. 3, and we already know that the diagonal terms equal one. Thus we conclude A_jk = −A_kj. A similar argument using ψ = (1/√2)(ê_j + i ê_k), a superposition with an imaginary value for the second amplitude, gives A_jk = A_kj. Together these conditions mean that A is the identity matrix, so U† U = I, i.e., the matrix U must be unitary to operate on superpositions. Moreover, this condition is sufficient to make any initial state satisfy Eq. 3. This shows how the restriction to linear unitary operations arises directly from the linearity of quantum mechanics and Eq. 2, the normalization condition for probabilities.
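The norm-preservation requirement is easy to check numerically; the sketch below contrasts a unitary rotation with Eq. 2 (illustrative code, with an arbitrary angle and state).

```python
import numpy as np

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])       # a unitary: U†U = I
psi = np.array([0.6, 0.8])                            # amplitudes with Σ|ψ_i|² = 1

assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.isclose(np.sum(np.abs(U @ psi) ** 2), 1.0)  # Eq. 2 is preserved
```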
The class of unitary matrices includes permutations, rotations and arbitrary phase changes (i.e., diagonal matrices in which each element on the diagonal is a complex number with magnitude equal to one).

Reversible classical programs, unitary operations on the superpositions and the measurement process are the basic ingredients used to construct a program for a quantum computer. As used in the search algorithm described below, such a program consists of first preparing an initial superposition of states, operating on those states with a series of unitary matrices in conjunction with a classical program to evaluate the consistency of various states with respect to the search requirements, and then making a measurement to obtain a definite final answer. The amplitudes of the superposition just before the measurement is made determine the probability of obtaining a solution. The overall structure is a probabilistic Monte Carlo computation (Motwani & Raghavan, 1995) in which at each trial there is some probability to get a solution, but no guarantee. This means the search method is incomplete: it can find a solution if one exists but can never guarantee that a solution doesn't exist.

An alternate conceptual view of these quantum programs is provided by the path-integral approach to quantum mechanics (Feynman, 1985). In this view, the final amplitude of a given state is obtained by a weighted sum over all possible paths that produce that state. In this way, the various possibilities involved in a computation can interfere with each other, either constructively or destructively. This differs from the classical combination of probabilities of different ways to reach the same outcome (e.g., as used in probabilistic algorithms): there the probabilities are simply added, giving no possibility for interference. Interference is also seen in classical waves, such as with sound or ripples on the surface of water. But these systems lack the capability of quantum parallelism. The various formulations of quantum mechanics, involving operators, matrices or sums over paths, are equivalent but suggest different intuitions about constructing possible quantum algorithms.

Example: A One-Bit Computer

A simple example of these ideas is given by a single bit. In this case there are two possible classical states, |0⟩ and |1⟩, corresponding to the values 0 and 1, respectively, for the bit. This defines a two-dimensional vector space of superpositions for a quantum bit. There are a number of proposals for implementing quantum bits, i.e., devices whose quantum mechanical properties can be controlled to produce desired superpositions of two classical values. One example (DiVincenzo, 1995; Lloyd, 1995) is an atom whose ground state corresponds to the value 0 and an excited state to the value 1. The use of lasers of appropriate frequencies can switch such an atom between the two states or create superpositions of the two classical states. This ability to manipulate quantum superpositions has been demonstrated in small cases (Zhu, Kleiman, Li, Lu, Trentelman, & Gordon, 1995). Another possibility is through the use of atomically precise manipulations (DiVincenzo, 1995) using a scanning tunneling or atomic force microscope. This possibility of precise manipulation of chemical reactions has also been demonstrated (Muller, Klein, Lee, Clarke, McEuen, & Schultz, 1995).
There are also a number of other proposals under investigation (Barenco, Deutsch, & Ekert, 1995; Sleator & Weinfurter, 1995; Cirac & Zoller, 1995), including the possibility of multiple simultaneous quantum operations (Margolus, 1990).

A simple computation on a quantum bit is the logical NOT operation, i.e., NOT(|0⟩) = |1⟩ and NOT(|1⟩) = |0⟩. This operator simply exchanges the state vector's components:

NOT(ψ_0 |0⟩ + ψ_1 |1⟩) = ψ_0 |1⟩ + ψ_1 |0⟩    (5)

This operation can also be represented as multiplication by the permutation matrix
( 0 1
  1 0 ).
Another operator is given by the rotation matrix

U(θ) = ( cos θ  −sin θ
         sin θ   cos θ )    (6)

This can be used to create superpositions from single classical states, e.g.,

U_{π/4} |0⟩ = (1/√2)(|0⟩ + |1⟩)    (7)

This rotation matrix can also be used to illustrate interference, an important way in which quantum computers differ from probabilistic classical algorithms. First, consider a classical algorithm with two methods for generating random bits, R_0 (producing a "0" with probability 3/4) and R_1 (producing a "0" with probability 1/4). Suppose a "0" represents a failure (e.g., a probabilistic search that does not find a solution) while "1" represents a success. Finally, let the classical algorithm consist of selecting one of these methods to use, with probability p to pick R_0. Then the overall probability to obtain a "0" as the final result is just (3/4)p + (1/4)(1 − p), or

P_classical = 1/4 + p/2    (8)

The best that can be done is to choose p = 0, giving a probability of 1/4 for failure.

A quantum analog of this simple calculation can be obtained from a rotation by π/6. Starting from the individual classical states, this gives the superpositions

U_{π/6} (1, 0)ᵀ = (1/2)(√3, 1)ᵀ        U_{π/6} (0, 1)ᵀ = (1/2)(−1, √3)ᵀ    (9)

which correspond to the generators R_0 and R_1 respectively, because of their respective probabilities of 3/4 and 1/4 to produce a "0" when measured. Starting instead from a superposition of the two classical states, (cos θ, sin θ)ᵀ, corresponds to the step of the classical algorithm where generator R_0 is selected with probability p = cos²θ. The resulting state after applying the rotation, U_{π/6} (cos θ, sin θ)ᵀ, has probability

P_quantum = 1/4 + (cos²θ)/2 − (√3/4) sin(2θ) = P_classical − (√3/4) sin(2θ)    (10)

to produce a "0" value. In this case the minimum value of the probability to obtain a "0" is not 1/4 but in fact can be made to equal 0, with the choice θ = π/3. In this case the amplitudes from the two original states exactly cancel each other, an example of destructive interference.

As a final example, illustrating the limits of operations on superpositions, consider the simple classical program that sets a bit to the value one. That is, SET(|0⟩) = |1⟩ and SET(|1⟩) = |1⟩. This operation is not reversible: knowing the result does not determine the original input. By linearity, SET((1/√2)(|0⟩ + |1⟩)) = (1/√2)(SET(|0⟩) + SET(|1⟩)), which in turn is (1/√2) · 2|1⟩ = √2 |1⟩. This state violates the normalization condition. Thus we see that this classical operation is not physically realizable for a quantum computer. Similarly, another common classical operation, making a copy of a bit, is also ruled out (Svozil, 1995), forming the basis for quantum cryptography (Bennett, 1992).
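The interference calculation is short enough to verify directly; the following illustrative numpy code uses the rotation angle reconstructed above (note that the angle in Eq. 9 is our reading of the garbled source, pinned down by Eqs. 9–10 and the θ = π/3 cancellation).

```python
import numpy as np

phi = np.pi / 6                                   # rotation angle used in eq. (9)
U = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

def p_fail(theta):                                # probability of measuring "0"
    return (U @ np.array([np.cos(theta), np.sin(theta)]))[0] ** 2

theta = np.pi / 3                                 # so p = cos^2(theta) = 1/4
p_classical = 0.25 + np.cos(theta) ** 2 / 2       # eq. (8): 3/8 here
assert np.isclose(p_fail(theta),
                  p_classical - np.sqrt(3) / 4 * np.sin(2 * theta))  # eq. (10)
assert np.isclose(p_fail(theta), 0.0)             # destructive interference
```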
Some Approaches to Search

A device consisting of n quantum bits allows for operations on superpositions of 2^n classical states. This ability to operate simultaneously on an exponentially large number of states with just a linear number of bits is the basis for quantum parallelism. In particular, repeating the operation of Eq. 7 n times, each on a different bit, gives a superposition with equal amplitudes in 2^n states. At first sight quantum computers would seem to be ideal for combinatorial search problems that are in the class NP. In such problems, there is an efficient procedure f(s) that takes a potential solution set s and determines whether s is in fact a solution, but there are exponentially many potential solutions, very few of which are in fact solutions. If s_1, ..., s_N are the potential sets to consider, we can quickly form the superposition (1/√N)(|s_1⟩ + ... + |s_N⟩) and then simultaneously evaluate f(s) for all these states, resulting in a superposition of the sets and their evaluations, i.e., (1/√N) Σ_i |s_i, soln = f(s_i)⟩. Here |s_i, soln = f(s_i)⟩ represents a classical search state considering the set s_i along with a variable soln whose value is true or false according to the result of evaluating the consistency of the set with respect to the problem requirements. At this point the quantum computer has, in a sense, evaluated all possible sets and determined which are solutions. Unfortunately, if we make a measurement of the system, we get each set with equal probability 1/N and so are very unlikely to observe a solution. This is thus no better than the slow classical search method of random generate-and-test, where sets are randomly constructed and tested until a solution is found. Alternatively, we can obtain a solution with high probability by repeating this operation O(N) times, either serially (taking a long time) or with multiple copies of the device (requiring a large amount of hardware or energy if, say, the computation is done by using multiple photons). This shows a trade-off between time and energy (or other physical resources), conjectured to apply more generally to solving these search problems (Cerny, 1993), and also seen in the trade-off of time and number of processors in parallel computers.

To be useful for combinatorial search, we can't just evaluate the various sets but instead must arrange for amplitude to be concentrated in the solution sets, so as to greatly increase the probability that a solution will be observed. Ideally this would be done with a mapping that gives constructive interference of amplitude in solutions and destructive interference in nonsolutions. Designing such maps is complicated by the fact that they must be linear unitary operators, as described above. Beyond this physical restriction, there is an algorithmic or computational requirement: the mapping should be efficiently computable (DiVincenzo & Smolin, 1994). For example, the map cannot require a priori knowledge of the solutions (otherwise constructing the map would require first doing the search). This computational requirement is analogous to the restriction on search heuristics: to be useful, the heuristic itself must not take a long time to compute. These requirements on the mapping trade off against each other. Ideally one would like to find a way to satisfy them all, so the map can be computed in polynomial time and gives, at worst, a polynomially small probability of obtaining a solution if the problem is soluble. One approach is to arrange for constructive interference in solutions while nonsolutions receive random contributions to their amplitude. While such random contributions are not as effective as complete destructive interference, they are easier to construct and form the basis for a recent factoring algorithm (Shor, 1994) as well as the method presented here.
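Classically simulating even the starting point of such methods illustrates the resource trade-off: building the uniform superposition by repeating the rotation of Eq. 7 over n bits costs 2^n amplitudes (an illustrative sketch, not part of the algorithm itself).

```python
import numpy as np
from functools import reduce

U = np.array([[1, -1], [1, 1]]) / np.sqrt(2)     # the pi/4 rotation of eq. (7)
n = 3
zero = np.array([1.0, 0.0])

state = reduce(np.kron, [U @ zero] * n)          # amplitudes over 2^n states
assert np.allclose(state, 1 / np.sqrt(2 ** n))   # equal amplitude everywhere
```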
While such random contributions are not as effective as a complete destructive interference, they are easier to construct and form the basis for a recent factoring algorithm (Shor, 1994) as well as the method presented here.

Classical search algorithms can suggest ways to combine the use of superpositions with interference. These include local repair styles of search, where complete assignments are modified, and backtracking search, where solutions are built up incrementally. Using superpositions, many possibilities could be simultaneously considered. However, these search methods have no a priori specification of the number of steps required to reach a solution, so it is unclear how to determine when enough amplitude might be concentrated into solution states to make a measurement worthwhile. Since the measurement process destroys the superposition, it is not possible to resume the computation at the point where the measurement was made if it does not produce a solution. A more subtle problem arises because different search choices lead to solutions in differing numbers of steps. Thus one would also need to maintain any amplitude already in solution states while the search continues. This is difficult due to the requirement for reversible computations.

While it may be fruitful to investigate these approaches further, the quantum method proposed below is based instead on a breadth-first search that incrementally builds up all solutions. Classically, such methods maintain a list of goods of a given size. At each step, the list is updated to include all goods with one additional variable. Thus at step $i$, the list consists of sets of size $i$ which are used to create the new list of sets of size $i+1$. For a CSP with $n$ variables, $i$ ranges from 0 to $n-1$, and after completing these $n$ steps the list will contain all solutions to the problem. Classically, this is not a useful method for finding a single solution because the list of partial assignments grows exponentially with the number of steps taken. A quantum computer, on the other hand, can handle such lists readily as superpositions. In the method described below, the superposition at step $i$ consists of all sets of size $i$, not just consistent ones, i.e., the sets at level $i$ in the lattice. There is no question of when to make the final measurement because the computation requires exactly $n$ steps. Moreover, there is an opportunity to use interference to concentrate amplitude toward goods. This is done by changing the phase of amplitudes corresponding to nogoods encountered while moving through the lattice.

As with the division of search methods into a general strategy (e.g., backtrack) and problem specific choices, the quantum mapping described below has a general matrix that corresponds to exploring all possible changes to the partial sets, and a separate, particularly simple, matrix that incorporates information on the problem specific constraints. More complex maps are certainly possible, but this simple decomposition is easier to design and describe. With this decomposition, the difficult part of the quantum mapping is independent of the details of the constraints in a particular problem. This suggests the possibility of implementing a special purpose quantum device to perform the general mapping. The constraints of a specific problem are used only to adjust phases as described below, a comparatively simple operation.
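For comparison, the classical breadth-first construction described above can be sketched as follows (illustrative code, with my own function and variable names; the nogood test stands in for a problem's constraints). The explicit list it maintains is exactly what a quantum superposition would represent implicitly:

from itertools import combinations

def breadth_first_solutions(N, L, is_nogood):
    # level i holds all i-element subsets of the N items
    level = [frozenset()]                    # level 0: the empty set
    for i in range(L):
        nxt = set()
        for s in level:
            for item in range(1, N + 1):     # extend each set by one item
                if item not in s:
                    nxt.add(s | {item})
        level = list(nxt)                    # all sets of size i + 1
    # complete sets containing no nogood subset are the solutions
    return [s for s in level if not is_nogood(s)]

# example: N = 4 items, a single nogood {1, 2}, solutions at level L = 2
nogoods = [frozenset({1, 2})]
print(breadth_first_solutions(4, 2, lambda s: any(g <= s for g in nogoods)))

The exponential growth of the list with $L$ is what makes this classically impractical, and what the quantum superposition avoids.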
For constraint satisfaction problems, a simple alternative representation to the full lattice structure is to use partial assignments only, i.e., sets of variable-value pairs that have no variable more than once. At first sight this might seem better in that it removes from consideration the necessary nogoods and hence increases the proportion of complete sets that are solutions. However, in this case the number of sets as a function of level in the lattice decreases before reaching the solution level, precluding the simple form of a unitary mapping described below for the quantum search algorithm. Another representation that avoids this problem is to consider assignments in only a single order for the variables (selected randomly or through the use of heuristics). This version of the set lattice has been previously used in theoretical analyses of phase transitions in search (Williams & Hogg, 1994). This may be useful to explore further for the quantum search, but is unlikely to be as effective. This is because in a fixed ordering some sets will become nogood only at the last few steps, resulting in less opportunity for interference based on nogoods to focus on solutions.

Motivation

To motivate the mapping described below, we consider an idealized version. It shows why paths through the lattice tend to interfere destructively for nonsolution states, provided the constraints are small.

The idealized map simply maps each set in the lattice equally to its supersets at the next level, while introducing random phases for sets found to be nogood. For this discussion we are concerned with the relative amplitude in solutions and nogoods so we ignore the overall normalization. Thus for instance, with N = 6, the state $|\{1,2\}\rangle$ will map to an unnormalized superposition of its four supersets of size 3, namely the state $|\{1,2,3\}\rangle + \cdots + |\{1,2,6\}\rangle$.

With this mapping, a good at level $j$ will receive equal contribution from each of its $j$ subsets at the prior level. Starting with amplitude of 1 at level 0 then gives an amplitude of $j!$ for goods at level $j$. In particular, $L!$ for solutions.

How does this compare with the contribution to nogoods, on average? This will depend on how many of the subsets are nogoods also. A simple case for comparison is when all sets in the lattice are nogood (starting with those at level $k$ given by the size of the constraints, e.g., $k = 2$ for problems with binary constraints). Let $r_j$ be the expected value of the magnitude of the amplitude for sets at level $j$. Each set at level $k$ will have $r_k = k!$ (and a zero phase) because all smaller subsets will be goods. A set $s$ at level $j > k$ will be a sum of $j$ contributions from (nogood) subsets, giving a total contribution of

$$\psi(s) = \sum_{m=1}^{j} \psi(s_m)\, e^{i\gamma_m} \tag{11}$$

where the $s_m$ are the subsets of $s$ of size $j-1$ and the phases $\gamma_m$ are randomly selected. The $\psi(s_m)$ have expected magnitude $r_{j-1}$ and some phase that can be combined with $\gamma_m$ to give a new random phase $\gamma'_m$. Ignoring the variation in the magnitude of the amplitudes at each level this gives

$$r_j = r_{j-1}\left\langle \left| \sum_{m=1}^{j} e^{i\gamma'_m} \right| \right\rangle = r_{j-1}\sqrt{j} \tag{12}$$

because the sum of $j$ random phases is equivalent to an unbiased random walk (Karlin & Taylor, 1975) with $j$ unit steps, which has expected net distance of $\sqrt{j}$. Thus $r_j = r_k \sqrt{j!/k!}$, or $r_j = \sqrt{j!\,k!}$ for $j > k$.
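The random-walk step behind Eq. 12 is easy to check numerically. The sketch below (an illustration under my own conventions, using numpy) verifies that the root-mean-square magnitude of a sum of $j$ unit-modulus terms with independent random phases is $\sqrt{j}$:

import numpy as np

rng = np.random.default_rng(0)

def rms_magnitude(j, trials=100_000):
    # sum j unit steps with random phases, repeated over many trials
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(trials, j))
    walks = np.exp(1j * phases).sum(axis=1)
    return np.sqrt(np.mean(np.abs(walks) ** 2))  # RMS net distance

for j in (2, 4, 8, 16):
    print(j, rms_magnitude(j), np.sqrt(j))       # the two values agree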
This crude argument gives a rough estimate of the relative probabilities for solutions compared to complete nogoods. Suppose there is only one solution. Then its relative probability is $(L!)^2$. The nogoods have relative probability $\left(\binom{N}{L} - 1\right) r_L^2 \approx \binom{N}{L}\, L!\, k!$, with $\binom{N}{L}$ given by Eq. 1. An interesting scaling regime is $L = N/b$ with fixed $b$, corresponding to a variety of well-studied constraint satisfaction problems. This gives

$$\ln\!\left(\frac{P_{\mathrm{soln}}}{P_{\mathrm{nogood}}}\right) = \ln\!\left(\frac{L!}{\binom{N}{L}\, k!}\right) \approx \frac{N}{b}\ln N + O(N) \tag{13}$$

This goes to infinity as problems get large, so the enhancement of solutions is more than enough to compensate for their rareness among sets at the solution level.

The main limitation of this argument is assuming that all subsets of a nogood are also nogood. For many nogoods, this will not be the case, resulting in less opportunity for cancellation of phases. The worst situation in this respect is when most subsets are goods. This could be because the constraints are large, i.e., they don't rule out states until many items are included. Even with small constraints, this could happen occasionally due to a poor ordering choice for adding items to the sets, hence suggesting that a lattice restricted to assignments in a single order will be much less effective in canceling amplitude in nogoods. For the problems considered here, with small constraints, a large nogood cannot have too many good subsets because to be nogood means a small subset violates a (small) constraint, and hence most subsets obtained by removing one element will still contain that bad subset, giving a nogood. In fact, some numerical experiments (with the class of unstructured problems described below) show that this mapping is very effective in canceling amplitude in the nogoods. Thus the assumptions made in this simplified argument seem to provide the correct intuitive description of the behavior.

Still, the assumption of many nogood subsets underlying the above argument does suggest the extreme cancellation derived above will least apply when the problem has many large partial solutions. This gives a simple explanation for the difficulty encountered with the full map described below at the phase transition point: this transition is associated with problems with relatively many large partial solutions but few complete solutions. Hence we can expect relatively less cancellation of at least some nogoods at the solution level and a lower overall probability to find a solution.

This discussion suggests why a mapping of sets to supersets along with random phases introduced at each inconsistent set can greatly decrease the contribution to nogoods at the solution level. However, this mapping itself is not physically realizable because it is not unitary. For example, the mapping from level 1 to 2 with N = 3 takes the states $|\{1\}\rangle, |\{2\}\rangle, |\{3\}\rangle$ to $|\{1,2\}\rangle, |\{1,3\}\rangle, |\{2,3\}\rangle$ with the matrix

$$M = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} \tag{14}$$

Here, the first column means the state $|\{1\}\rangle$ contributes equally to $|\{1,2\}\rangle$ and $|\{1,3\}\rangle$, its supersets, and gives no contribution to $|\{2,3\}\rangle$. We see immediately that the columns of this matrix are not orthogonal, though they can be easily normalized by dividing the entries by $\sqrt{2}$. More generally, this mapping takes each set at level $i$ to the $N-i$ sets with one more element. The corresponding matrix $M$ has one column for each $i$-set and one row for each $(i+1)$-set. In each column there will be exactly $N-i$ 1's (corresponding to the supersets of the given $i$-set) and the remaining entries will be 0.
Two columns will have at most a single nonzero value in common (and only when the two corresponding $i$-sets have all but one of their values in common: this is the only way they can share a superset). This means that as $N$ gets large, the columns of this matrix are almost orthogonal (provided $i < N/2$, the case of interest here). This fact is used below to obtain a unitary matrix that is fairly close to $M$.

A Search Algorithm

The general idea of the mapping introduced here is to move as much amplitude as possible to supersets (just as in classical breadth-first search, increments to partial sets give supersets). This is combined with a problem specific adjustment of phases based on testing partial states for consistency (this corresponds to a diagonal matrix and thus is particularly simple in that it does not require any mixing of the amplitudes of different states). The specific methods used are described in this section.

The Problem-Independent Mapping

To take advantage of the potential cancellation of amplitude in nogoods described above we need a unitary mapping whose behavior is similar to the ideal mapping to supersets. There are two general ways to adjust the ideal mapping of sets to supersets (mixtures of these two approaches are possible as well). First, we can keep some amplitude at the same level of the lattice instead of moving all the amplitude up to the next level. This allows using the ideal map described above (with suitable normalization) and so gives excellent discrimination between solutions and nonsolutions, but unfortunately not much amplitude reaches the solution level. This is not surprising: the use of random phases cancels the amplitude in nogoods but this doesn't add anything to solutions (because solutions are not a superset of any nogood and hence cannot receive any amplitude from them). Hence at best, even when all nogoods cancel completely, the amplitude in solutions will be no more than their relative number among complete sets, i.e., very small. Thus the random phases prevent much amplitude moving to nogoods high in the lattice, but instead of contributing to solutions this amplitude simply remains at lower levels of the lattice. Hence we have no better chance than random selection of finding a solution (but, when a solution is not found, instead of getting a nogood at the solution level, we are now likely to get a smaller set in the lattice). Thus we must arrange for amplitude taken from nogoods to contribute instead to the goods. This requires the map to take amplitude to sets other than just supersets, at least to some extent.

The second way to fix the nonunitary ideal map is to move amplitude also to nonsupersets. This can move all amplitude to the solution level. It allows some canceled amplitude from nogoods to go to goods, but also vice versa, resulting in less effective concentration into solutions. This can be done with a unitary matrix as close as possible to the nonunitary ideal map to supersets, and that also has a relatively simple form.
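Before constructing that unitary matrix, the near-orthogonality of the ideal map is easy to check numerically. The sketch below (illustrative code, with my own function names) builds the superset map for given $N$ and $i$ and measures the largest off-diagonal entry of the Gram matrix of its normalized columns, which shrinks as $N$ grows:

import numpy as np
from itertools import combinations

def superset_matrix(N, i):
    # one column per i-set, one row per (i+1)-set; entry 1 for supersets
    rows = list(combinations(range(N), i + 1))
    cols = list(combinations(range(N), i))
    M = np.zeros((len(rows), len(cols)))
    for c, small in enumerate(cols):
        for r, big in enumerate(rows):
            if set(small) <= set(big):
                M[r, c] = 1.0
    return M

print(superset_matrix(3, 1))        # reproduces Eq. 14

for N in (6, 12, 24):
    M = superset_matrix(N, 2)
    M /= np.linalg.norm(M, axis=0)  # normalize the columns
    G = M.T @ M                     # Gram matrix of the columns
    print(N, np.abs(G - np.eye(G.shape[0])).max())

Since two distinct $i$-sets share at most one superset, each off-diagonal entry of the Gram matrix is at most $1/(N-i)$, so the normalized columns approach orthogonality as $N$ grows.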
The general question here is: given $k$ linearly independent vectors in $m$ dimensional space, with $k \le m$, find $k$ orthonormal vectors in the space as close as possible to the $k$ original ones.

Restricting attention to the subspace defined by the original vectors, this can be obtained using the singular value decomposition (Golub & Loan, 1983) (SVD) of the matrix $M$ whose columns are the $k$ given vectors (I thank J. Gilbert for pointing out this technique, as a variant of the orthogonal Procrustes problem (Golub & Loan, 1983)). Specifically, this decomposition is $M = A^\dagger \Sigma B$, where $\Sigma$ is a diagonal matrix containing the singular values of $M$ and both $A^\dagger$ and $B$ have orthonormal columns. For a real matrix $M$, the matrices of the decomposition are also real-valued. The matrix $U = A^\dagger B$ has orthonormal columns and is the closest set of orthogonal vectors according to the Frobenius matrix norm. That is, this choice for $U$ minimizes $|U - M|^2 \equiv \sum_{rs} |U_{rs} - M_{rs}|^2$ among all unitary matrices. This construction fails if $k > m$ since an $m$-dimensional space cannot have more than $m$ orthogonal vectors. Hence we restrict consideration to mappings in the lattice at those levels $i$ where level $i+1$ has at least as many sets as level $i$, i.e., $\binom{N}{i} \le \binom{N}{i+1}$. Obtaining a solution requires mapping up to level $L$ so, from Eq. 1, this restricts consideration to problems where $L \le \lceil N/2 \rceil$.

For example, the mapping from level 1 to 2 with N = 3 given in Eq. 14 has the singular value decomposition $M = A^\dagger \Sigma B$ with this decomposition given explicitly as

$$A^\dagger \Sigma B = \begin{pmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & 0 & -\sqrt{\frac{2}{3}} \end{pmatrix} \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\ 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \sqrt{\frac{2}{3}} & -\frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{6}} \end{pmatrix} \tag{15}$$

The closest unitary matrix is then

$$U = A^\dagger B = \frac{1}{3}\begin{pmatrix} 2 & 2 & -1 \\ 2 & -1 & 2 \\ -1 & 2 & 2 \end{pmatrix} \tag{16}$$

While this gives a set of orthonormal vectors close to the original map, one might be concerned about the requirement to compute the SVD of exponentially large matrices. Fortunately, however, the resulting matrices have a particularly simple structure in that the entries depend only on the overlap between the sets. Thus we can write the matrix elements in the form $U_{r\sigma} = a_{|r \cap \sigma|}$ ($r$ is an $(i+1)$-set, $\sigma$ is an $i$-set). The overlap $|r \cap \sigma|$ ranges from $i$ when $r \supset \sigma$ to 0 when there is no overlap. Thus instead of exponentially many distinct values, there are only $i+1$, a linear number. This can be exploited to give a simpler method for evaluating the entries of the matrix as follows.

We can get expressions for the $a$ values for a given $N$ and $i$ since the resulting column vectors are orthonormal. Restricting attention to real values, this gives

$$1 = \left(U^\dagger U\right)_{\sigma\sigma} = \sum_{k=0}^{i} n_k\, a_k^2 \tag{17}$$

where

$$n_k = \binom{i}{k}\binom{N-i}{i+1-k} \tag{18}$$

is the number of ways to pick $r$ with the specified overlap. For the off-diagonal terms, suppose $|\sigma \cap \tau| = p < i$; then, for real values of the matrix elements,

$$0 = \left(U^\dagger U\right)_{\sigma\tau} = \sum_{j,k=0}^{i} n^{(p)}_{jk}\, a_j a_k \tag{19}$$

where

$$n^{(p)}_{jk} = \sum_{x} \binom{i-p}{k-x}\binom{p}{x}\binom{i-p}{j-x}\binom{N-2i+p}{i+1-j-k+x} \tag{20}$$

is the number of sets $r$ with the required overlaps with $\sigma$ and $\tau$, i.e., $|r \cap \sigma| = k \le i$ and $|r \cap \tau| = j \le i$. In this sum, $x$ is the number of items the set $r$ has in common with both $\sigma$ and $\tau$. Together these give $i+1$ equations for the values of $a_0, \ldots, a_i$, which are readily solved numerically. There are multiple solutions for this system of quadratic equations, each representing a possible unitary mapping. But there is a unique one closest to the ideal mapping to supersets, as given by the SVD. It is this solution we use for the quantum search algorithm, although it is possible some other solution, in conjunction with various choices of phases, performs better.
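For small cases the construction can be carried out directly with a numerical SVD. The sketch below (illustrative; it repeats the superset_matrix helper from the earlier sketch so it is self-contained) computes $U = A^\dagger B$ and recovers both Eq. 16 and the $a_k$ values for N = 3, i = 1:

import numpy as np
from itertools import combinations

def superset_matrix(N, i):
    rows = list(combinations(range(N), i + 1))
    cols = list(combinations(range(N), i))
    M = np.zeros((len(rows), len(cols)))
    for c, small in enumerate(cols):
        for r, big in enumerate(rows):
            if set(small) <= set(big):
                M[r, c] = 1.0
    return M, rows, cols

def closest_unitary(M):
    # M = A^dagger Sigma B; dropping Sigma gives the closest matrix
    # with orthonormal columns in the Frobenius norm
    A, _, B = np.linalg.svd(M, full_matrices=False)
    return A @ B

M, rows, cols = superset_matrix(3, 1)
U = closest_unitary(M)
print(np.round(3 * U))   # [[2, 2, -1], [2, -1, 2], [-1, 2, 2]], i.e., Eq. 16

# the entries depend only on the overlap |r & sigma|, giving the a_k values
a = {}
for r, row in zip(rows, U):
    for s, entry in zip(cols, row):
        a[len(set(r) & set(s))] = round(entry, 6)
print(a)                 # a_0 = -1/3, a_1 = 2/3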
Note that the number of values and equations grows only linearly with the level in the lattice, even though the number of sets at each level grows exponentially. When necessary to distinguish the values at different levels in the lattice, we use $a^{(i)}_k$ to mean the value of $a_k$ for the mapping from level $i$ to $i+1$. The example of Eq. 14, with N = 3 and i = 1, has $1 = a_0^2 + 2a_1^2$ for Eq. 17 and $0 = 2a_0 a_1 + a_1^2$ for Eq. 19. The solution of these unitarity conditions closest to Eq. 14 is $a_0 = -\frac{1}{3}$, $a_1 = \frac{2}{3}$, corresponding to Eq. 16.

A normalized version of the ideal map has $a^{(i)}_i = 1/\sqrt{n_i} = 1/\sqrt{N-i}$ and all other values equal to zero. The actual values for $a^{(i)}_k$ are fairly close to this (confirming that the ideal map is close to orthogonal already), and alternate in sign. To illustrate their behavior, it is useful to consider the scaled values $b^{(i)}_k \equiv (-1)^k a^{(i)}_{i-k} \sqrt{n_{i-k}}$, with $n_{i-k}$ evaluated using Eq. 18. The behavior of these values for N = 10 is shown in Fig. 2. Note that $b^{(i)}_0$ is close to one, and decreases slightly as higher levels in the lattice (i.e., larger $i$ values) are considered: the ideal mapping is closer to orthogonal at low levels in the lattice.

Despite the simple values for the example of Eq. 16, the $a_k$ values in general do not appear to have a simple closed form expression. This is suggested by obtaining exact solutions to Eqs. 17 and 19 using the Mathematica symbolic algebra program (Wolfram, 1991). In most cases this gives complicated expressions involving nested roots. Since such expressions could simplify, the $a_k$ values were also checked for being close to rational numbers and whether they are roots of single variable polynomials of low degree. Neither simplification was found to apply.

Finally we should note that this mapping only describes how the sets at level $i$ are mapped to the next level. The full quantum system will also perform some mapping on the remaining sets in the lattice. By changing the map at each step, most of the other sets can simply be left unchanged, but there will need to be a map of the sets at level $i+1$ other than the identity mapping to be orthogonal to the map from level $i$. Any orthogonal set mapping partly back to level $i$ and partly remaining in sets at level $i+1$ will be suitable for this: in our application there is no amplitude at level $i+1$ when the map is used and hence it doesn't matter what mapping is used. However, the choice of this part of the overall mapping remains a degree of freedom that could perhaps be exploited to minimize errors introduced by external noise.

Phases for Nogoods

In addition to the general mapping from one level to the next, there is the problem-specific aspect of the algorithm, namely the choice of phases for the nogood sets at each level. In the ideal case described above, random phases were given to each nogood, resulting in a great deal of cancellation for nogoods at the solution level. While this is a reasonable choice for the unitary mapping, other policies are possible as well. For example, one could simply invert the phase of each nogood (i.e., multiply its amplitude by -1), a choice suggested by J. Lamping. This choice doesn't work well for the idealized map to supersets only but, as shown below, is helpful for the unitary map.
It can be motivated by considering the coefficients shown in Fig. 2. Specifically, when a nogood is encountered for the first time on a path through the lattice, we would like to cancel phase to its supersets but at the same time enhance amplitude in other sets likely to lead to solutions. Because $a^{(i)}_{i-1}$ is negative, inverting the phase will tend to add to sets that differ by one element from the nogood. At least some of these will avoid violating the small constraint that produced this nogood set, and hence may contribute eventually to sets that do lead to solutions.

Moreover, one could use information on the sets at the next level to decide what to do with the phase: as currently described, the computation makes no use of testing the consistency of sets at the solution level itself, and hence is completely ineffective for problems where the test requires the complete set. Perhaps better would be to mark a state as nogood if it has no consistent extensions with one more item (this is simple to check since the number of extensions grows only linearly with problem size). Another possibility is for the phase to be adjusted based on how many constraints are violated, which could be particularly appropriate for partial constraint satisfaction problems (Freuder & Wallace, 1992) or other optimization searches.

Summary

The search algorithm starts by evenly dividing amplitude among the goods at a low level $K$ of the lattice. A convenient choice for binary CSPs is to start at level $K = 2$, where the number of sets is proportional to $N^2$. Then for each level from $K$ to $L-1$, we adjust the phases of the states depending on whether they are good or nogood and map to the next level. Thus if $\psi^{(j)}_\sigma$ represents the amplitude of the set $\sigma$ at level $j$, we have

$$\psi^{(j+1)}_r = \sum_{\sigma} U_{r\sigma}\, \rho_\sigma\, \psi^{(j)}_\sigma = \sum_k a^{(j)}_k \sum_{|r \cap \sigma| = k} \rho_\sigma\, \psi^{(j)}_\sigma \tag{21}$$

where $\rho_\sigma$ is the phase assigned to the set $\sigma$ after testing whether it is nogood, and the final inner sum is over all sets $\sigma$ that have $k$ items in common with $r$. That is, $\rho_\sigma = 1$ when $\sigma$ is a good set. For nogoods, $\rho_\sigma = -1$ when using the phase inversion method, and $\rho_\sigma = e^{i\gamma_\sigma}$ with $\gamma_\sigma$ uniformly selected from $[0, 2\pi)$ when using the random phase method. Finally we measure the state, obtaining a complete set. This set will be a solution with probability

$$p_{\mathrm{soln}} = \sum_s \left|\psi^{(L)}_s\right|^2 \tag{22}$$

with the sum over solution sets, depending on the particular problem and method for selecting the phases.

What computational resources are required for this algorithm? The storage requirements are quite modest: $N$ bits can produce a superposition of $2^N$ states, enough to represent all the possible sets in the lattice structure. Since each trial of this algorithm gives a solution only with probability $p_{\mathrm{soln}}$, on average it will need to be repeated $1/p_{\mathrm{soln}}$ times to find a solution. The cost of each trial consists of the time required to construct the initial superposition and then evaluate the mapping on each step from the level $K$ to the solution level $L$, a total of $L - K < N/2$ mappings. Because the initial state consists of sets of size $K$, there are only a polynomial number of them (i.e., $O\big(\binom{N}{K}\big)$) and hence the cost to construct the initial superposition will be relatively modest. The mapping from one level to the next will need to be produced by a series of more elementary operations that can be directly implemented in physical devices. Determining the required number of such operations remains an open question, though the particularly simple structure of the matrices should not require involved computations and should also be able to exploit special purpose hardware. At any rate, this mapping is independent of the structure of the problem and its cost does not affect the relative costs of different problem structures. Finally, determining the phases to use for the nogood sets involves testing the sets against the constraints, a relatively rapid operation for NP search problems. Thus to examine how the cost of this search algorithm depends on problem structure, the key quantity is the behavior of $p_{\mathrm{soln}}$.
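The whole procedure is compact enough to simulate classically for tiny cases. The following sketch (illustrative only; the paper's actual experiments used optimized C++ code, and all names here are my own) implements Eqs. 21 and 22 by combining the closest-unitary maps with a phase assignment for nogoods:

import numpy as np
from itertools import combinations

def superset_matrix(N, i):
    rows = list(combinations(range(N), i + 1))
    cols = list(combinations(range(N), i))
    M = np.zeros((len(rows), len(cols)))
    for c, small in enumerate(cols):
        for r, big in enumerate(rows):
            if set(small) <= set(big):
                M[r, c] = 1.0
    return M, rows, cols

def quantum_search(N, K, L, nogoods, invert_phase=True, rng=None):
    rng = rng or np.random.default_rng(0)
    is_nogood = lambda s: any(g <= frozenset(s) for g in nogoods)
    sets = list(combinations(range(N), K))
    goods = {s for s in sets if not is_nogood(s)}
    psi = np.array([1.0 if s in goods else 0.0 for s in sets], dtype=complex)
    psi /= np.linalg.norm(psi)        # equal amplitude in level-K goods
    for i in range(K, L):
        M, rows, cols = superset_matrix(N, i)
        A, _, B = np.linalg.svd(M, full_matrices=False)
        rho = np.array([1.0 if not is_nogood(s) else
                        (-1.0 if invert_phase else
                         np.exp(2j * np.pi * rng.random()))
                        for s in cols])
        psi = (A @ B) @ (rho * psi)   # Eq. 21
        sets = rows
    return sum(abs(amp) ** 2          # Eq. 22
               for s, amp in zip(sets, psi) if not is_nogood(s))

# N = 6 items, binary nogoods {0,1} and {2,3}; solutions are the 12 of
# the 20 complete 3-sets that avoid both nogoods
print(quantum_search(6, 2, 3, [frozenset({0, 1}), frozenset({2, 3})]))

This brute-force simulation is exponentially slower than the quantum device it mimics, but it makes the role of the phase choice easy to experiment with.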
An Example of Quantum Search

To illustrate the algorithm's operation and behavior, consider the small case of N = 3 with the map starting from level K = 0 and going up to level L = 2. Suppose that $\{3\}$ and its supersets are the only nogoods. We begin with all amplitude in the empty set, i.e., with the state $|\emptyset\rangle$. The map from level 0 to 1 gives equal amplitude to all singleton sets, producing $\frac{1}{\sqrt{3}}\left(|\{1\}\rangle + |\{2\}\rangle + |\{3\}\rangle\right)$. We then introduce a phase for the nogood set, giving $\frac{1}{\sqrt{3}}\left(|\{1\}\rangle + |\{2\}\rangle + e^{i\gamma}|\{3\}\rangle\right)$. Finally we use Eq. 16 to map this to the sets at level 2, giving the final state

$$\frac{1}{3\sqrt{3}}\left(\left(4 - e^{i\gamma}\right)|\{1,2\}\rangle + \left(1 + 2e^{i\gamma}\right)|\{1,3\}\rangle + \left(1 + 2e^{i\gamma}\right)|\{2,3\}\rangle\right) \tag{23}$$

At this level, only the set $\{1,2\}$ is good, i.e., a solution. Note that the algorithm does not make any use of testing the states at the solution level for consistency.

The probability to obtain a solution when the final measurement is made is determined by the amplitude of the solution set, so in this case Eq. 22 becomes

$$p_{\mathrm{soln}} = \left|\frac{4 - e^{i\gamma}}{3\sqrt{3}}\right|^2 = \frac{1}{27}\left(17 - 8\cos\gamma\right) \tag{24}$$

From this we can see the effect of different methods for selecting the phase for nogoods. If the phase is selected randomly, $p_{\mathrm{soln}} = 17/27 \approx 0.63$ because the average value of $\cos\gamma$ is zero. Inverting the phase of the nogood, i.e., using $\gamma = \pi$, gives $p_{\mathrm{soln}} = 25/27 \approx 0.93$. These probabilities compare with the 1/3 chance of selecting a solution by random choice. In this case, the optimal choice of phase is the same as that obtained by simple inversion. However this is not true in general: picking phases optimally will require knowledge about the solutions and hence is not a feasible mapping. Note also that even the optimal choice of phase doesn't guarantee a solution is found.
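This example is easy to verify numerically (an illustrative check, not from the paper):

import numpy as np

U = np.array([[2, 2, -1],
              [2, -1, 2],
              [-1, 2, 2]]) / 3.0          # Eq. 16; rows {1,2}, {1,3}, {2,3}

def p_soln(gamma):
    level1 = np.array([1.0, 1.0, np.exp(1j * gamma)]) / np.sqrt(3)
    return abs((U @ level1)[0]) ** 2      # {1,2} is the only solution

print(p_soln(np.pi), 25 / 27)             # phase inversion
gammas = np.random.default_rng(1).uniform(0.0, 2.0 * np.pi, 50_000)
print(np.mean([p_soln(g) for g in gammas]), 17 / 27)  # random phases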
Average Behavior of the Algorithm

In this section, the behavior of the quantum algorithm is evaluated for two classes of combinatorial search problems. The first class, of unstructured problems, is used to examine the phase transition in a particularly simple context using both random and inverted phases for nogoods. The second class, random propositional satisfiability (SAT), evaluates the robustness of the algorithm for problems with particular structure.

For classical simulation of this algorithm we explicitly compute the amplitude of all sets in the lattice up to the solution level and the mapping between levels. Unfortunately, this results in an exponential slowdown compared to the quantum implementation and severely limits the feasible size of these classical simulations. Moreover, determining the expected behavior of the random phase method requires repeating the search a number of times on each problem (10 tries in the experiments reported here). This further limits the feasible problem size.

As a simple check on the numerical errors of the calculation, we recorded the total normalization in all sets at the solution level. With double precision calculations on a Sun Sparc10, for the experiments reported here typically the norm was 1 to within a few times $10^{-11}$. As an indication of the execution time with unoptimized C++ code, a single trial for a problem with N = 14 and 16, with L = N/2, required about 70 and 1000 seconds, respectively. This uses a direct evaluation of the map from one level to the next as given by Eq. 21. A substantial reduction in compute time is possible by exploiting the simple structure of this mapping to give a recursive evaluation. Some additional improvement is possible by exploiting the fact that all amplitudes are real when using the method that inverts phases of nogoods. This reduced the execution time to about 1 and 6 seconds per trial for N of 14 and 16, respectively.

Unstructured Problems

To examine the typical behavior of this quantum search algorithm with respect to problem structure, we need a suitable class of problems. This is particularly important for average case analyses since one could inadvertently select a class of search problems dominated by easy cases. Fortunately the observed concentration of hard cases near phase transitions provides a method to generate hard test cases.

The phase transition behavior has been seen in a variety of search problem classes. Here we select a particularly simple class of problems by supposing the constraints specify nogoods randomly at level 2 in the lattice. This corresponds to binary constraint satisfaction problems (Prosser, 1996; Smith & Dyer, 1996), but ignores the detailed structure of the nogoods imposed by the requirement that variables have a unique assignment. By ignoring this additional structure, we are able to test a wider range of the number of specified nogoods for the problems than would be the case by considering only constraint satisfaction problems. This lack of additional structure is also likely to make the asymptotic behavior more readily apparent at the small problem sizes that are feasible with a classical simulation.

Furthermore, since the quantum search algorithm is appropriate only for soluble problems, we restrict attention to random problems with a solution. These could be obtained by randomly generating problems and rejecting any that have no solution (as determined using a complete classical search method). However, for overconstrained problems the soluble ones become quite rare and difficult to find by this method. Instead, we generate problems with a prespecified solution. That is, when randomly selecting nogoods to add to a problem, we do not pick any nogoods that are subsets of a prespecified solution set. This always produces problems with at least one solution. Although these problems tend to be a bit easier than randomly selected soluble problems, they nevertheless exhibit the same concentration of hard problems and at about the same location as general random problems (Cheeseman et al., 1991; Williams & Hogg, 1994). The quantum search is started at level 2 in the lattice.

[Figure 3: Average cost for a classical backtrack search on problems with a prespecified solution as a function of β = m/N for N = 10 (gray) and 20 (black) and L = N/2. Here m is the number of nogoods selected at level 2 of the search lattice. The cost is the average number of backtrack steps, starting from the empty set, required to find the first solution to the problem, averaged over 1000 problems. The error bars indicate the standard deviation of this estimate of the average value, and in most cases are smaller than the size of the plotted points. For comparison, the dashed curves show the probability for having a solution in randomly generated problems with the specified β value, ranging from 1 at the left to 0 at the right.]
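A sketch of this generation procedure (illustrative code under my own conventions; β = m/N as in Fig. 3) is:

import random
from itertools import combinations

def random_problem(N, L, m, seed=None):
    # pick a solution, then draw m level-2 nogoods avoiding its subsets
    rng = random.Random(seed)
    solution = frozenset(rng.sample(range(N), L))
    candidates = [frozenset(pair) for pair in combinations(range(N), 2)
                  if not frozenset(pair) <= solution]
    return solution, rng.sample(candidates, m)

# N = 10, L = N/2, and beta = 2.5 (m = 25 nogoods), near the transition
solution, nogoods = random_problem(N=10, L=5, m=25, seed=42)
print(sorted(solution), len(nogoods))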
Theory

For this class of problems, the phase transition behavior is illustrated in Fig. 3. Specifically, this shows the cost to solve the problem with a simple chronological backtrack search. The cost is given in terms of the number of search nodes considered until a solution is found. The minimum cost, for a search that proceeds directly to a solution with no backtrack, is L + 1. The parameter β distinguishing underconstrained from overconstrained problems is the ratio of the number of nogoods m at level 2 given by the constraints to the number of items N.

Even for these relatively small problems, a peak in the average search cost is evident. Moreover, this peak is near the transition region where random problems change from mostly soluble to mostly insoluble. A simple, but approximate, theoretical value for the location of the transition is given by the point where the expected number of solutions is equal to one (Smith & Dyer, 1996; Williams & Hogg, 1994). Applying this to the class of problems considered here is straightforward. Specifically, there are $\binom{N}{L}$ complete sets for the problem, as given by Eq. 1. A particular set $s$ of size $L$ will be good, i.e., a solution, only if none of the nogoods selected for the problem are a subset of $s$. Hence the probability it will be a solution is given by

$$p_L = \binom{\binom{N}{2} - \binom{L}{2}}{m} \bigg/ \binom{\binom{N}{2}}{m} \tag{25}$$

because there are $\binom{N}{2}$ sets of size 2 from which to choose the $m$ nogoods specified directly by the constraints. The average number of solutions is then just $\langle N_{\mathrm{soln}} \rangle = \binom{N}{L} p_L$. If we set $m = \beta N$ and $L = N/b$, for large $N$ this becomes

$$\ln \langle N_{\mathrm{soln}} \rangle \approx N\left[h\!\left(\frac{1}{b}\right) + \beta \ln\!\left(1 - \frac{1}{b^2}\right)\right] \tag{26}$$

where $h(x) \equiv -x \ln x - (1-x)\ln(1-x)$. The predicted transition point is then given by

$$\beta_{\mathrm{crit}} = -\frac{h(1/b)}{\ln\left(1 - 1/b^2\right)} \tag{27}$$

which is $\beta_{\mathrm{crit}} = 2.41$ for the case considered here (i.e., b = 2). This closely matches the location of the peak in the search cost for problems with prespecified solution, as shown in Fig. 3, but is about 20% larger than the location of the step in solubility. Furthermore, the theory predicts there is a regime of polynomial average cost for sufficiently few constraints (Hogg & Williams, 1994). This is determined by the condition that the expected number of goods at each level in the lattice is monotonically increasing. Repeating the above argument for smaller levels in the lattice, we find that this condition holds up to

$$\beta_{\mathrm{poly}} = \frac{b^2 - 1}{2b}\,\ln(b - 1) \tag{28}$$

which is $\beta_{\mathrm{poly}} = 0$ for b = 2.

While these estimates are only approximate, they do indicate that the class of random soluble problems defined here behaves qualitatively and quantitatively the same with respect to the transition behavior as a variety of other, perhaps more realistic, problem classes. This close correspondence with the theory (derived for the limit of large problems) suggests that we are observing the correct transition behavior even with these relatively small problems. Moreover, the above approximate theoretical argument suggests that the average cost of general classical search methods scales exponentially with the size of the problem over the full range of β > 0. Thus this provides a good test case for the average behavior of the quantum algorithm.
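These estimates are simple to evaluate numerically (an illustrative check):

import numpy as np

def h(x):
    return -x * np.log(x) - (1 - x) * np.log(1 - x)

def beta_crit(b):
    return -h(1 / b) / np.log(1 - 1 / b**2)        # Eq. 27

def beta_poly(b):
    return (b**2 - 1) / (2 * b) * np.log(b - 1)    # Eq. 28

print(beta_crit(2))                # ~2.41, matching the peak in Fig. 3
print(beta_poly(2))                # 0: no polynomial regime when b = 2
print(beta_crit(3), beta_poly(3))  # larger b opens a nonzero window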
As a final observation, it is important to obtain a sufficient number of samples, especially near the transition region. This is because there is considerable variation in problems near the transition, specifically a highly skewed distribution in the number of solutions. In this region, most problems have few solutions but a few have extremely many: enough in fact to give a substantial contribution to the average number of solutions even though such problems are quite rare.

[Figure 4: Expected number of trials ⟨T⟩ to find a solution for problems with a prespecified solution with binary constraints, using random phases for nogoods. The solid curve is for N = 10, with 100 samples per point. The gray curve is for N = 20 with 10 samples per point (but additional samples were used around the peak). The error bars indicate the standard error in the estimate of ⟨T⟩.]

Phase Transition

To see how problem structure affects this search algorithm, we evaluate $p_{\mathrm{soln}}$, the probability to find a solution for problems with different structures, ranging from underconstrained to overconstrained. Low values for this probability indicate relatively harder problems. The expected number of repetitions of the search required to find a solution is then given by $T = 1/p_{\mathrm{soln}}$. The results are shown in Figs. 4 and 5 for different ways of introducing phases for nogood sets. We see the general easy-hard-easy pattern in both cases. Another common feature of phase transitions is an increased variance around the transition region. The quantum search has this property as well, as shown in Fig. 6.

Scaling

An important question in the behavior of this search method is how its average performance scales with problem size. To examine this question, we consider the scaling with fixed β. This is shown in Figs. 7 and 8 for algorithms using random and inverted phases for nogoods, respectively. To help identify the likely scaling, we show the same results on both a log plot (where straight lines correspond to exponential scaling) and a log-log plot (where straight lines correspond to power-law or polynomial scaling).

It is difficult to make definite conclusions from these results for two reasons. First, the variation in behavior of different problems gives a statistical uncertainty to the estimates of the average values, particularly for the larger sizes where fewer samples are available. The standard errors in the estimates of the averages are indicated by the error bars in the figures (though in most cases, the errors are smaller than the size of the plotted points). Second, the scaling behavior could change as larger cases are considered. With these caveats in mind, the figures suggest that $p_{\mathrm{soln}}$ remains nearly constant for underconstrained problems, even though the fraction of complete sets that are solutions is decreasing exponentially.

[Figure 7: Scaling of the probability to find a solution using the random phase method, for β of 1 (solid), 2 (dashed), 3 (gray) and 4 (dashed gray). This is shown on log and log-log scales (left and right plots, respectively).]
[Figure 8: Scaling of the probability to find a solution using the phase inversion method, for β of 1 (solid), 2 (dashed), 3 (gray) and 4 (dashed gray). This is shown on log and log-log scales (left and right plots, respectively).]

This behavior is also seen in the overlap of the curves for small β in Figs. 4 and 5. For problems with more constraints, $p_{\mathrm{soln}}$ appears to decrease polynomially with the size of the problem, i.e., the curves are closer to linear in the log-log plots than in the log plots. This is confirmed quantitatively by making a least squares fit to the values and seeing that the residuals of the fit to a power-law are smaller than those for an exponential fit. An interesting observation in comparing the two phase choices is that the scaling is qualitatively similar, even though the phase inversion method performs better. This suggests the detailed values of the phase choices are not critical to the scaling behavior, and in particular high precision evaluation of the phases is not required. Finally we should note that this illustration of the average scaling leaves open the behavior for the worst case instances.

For the underconstrained cases in Figs. 7 and 8 there is a small additional difference between cases with an even and odd number of variables. This is due to oscillations in the amplitude in goods at each level of the lattice, and is discussed more fully in the context of SAT problems below.

[Figure 9: Ratio of the probability to find a solution with the quantum algorithm to the probability to find a solution by random selection at the solution level, using the phase inversion method, for β of 1 (solid), 2 (dashed), 3 (gray) and 4 (dashed gray). The curves are close to linear on this log scale indicating exponential improvement over the direct selection from among complete sets, with a higher enhancement for problems with more constraints.]

Another scaling comparison is to see how much this algorithm enhances the probability to find a solution beyond the simple quantum algorithm of evaluating all the complete sets and then making a measurement. As shown in Fig. 9, this quantum algorithm appears to give an exponential improvement in the concentration of amplitude into solutions. A more explicit view of this difference in behavior is shown in Fig. 10 for β = 2. In this figure, the dashed curve shows the behavior of $p_{\mathrm{soln}}$ for the phase inversion method, and is identical to the β = 2 curve of Fig. 8.

Random 3SAT

These experiments leave open the question of how additional problem structure might affect the scaling behaviors. While the universality of the phase transition behavior in other search methods suggests that the average behavior of this algorithm will also be the same for a wide range of problems, it is useful to check this empirically. To this end the algorithm was applied to the satisfiability (SAT) problem. This constraint satisfaction problem consists of a propositional formula with $n$ variables and the requirement to find an assignment (true or false) to each variable that makes the formula true. Thus there are b = 2 assignments for each variable and N = 2n possible variable-value pairs. We consider the well-studied NP-complete 3SAT problem where the formula is a conjunction of $c$ clauses, each of which is a disjunction of 3 (possibly negated) variables.

The SAT problem is readily represented by nogoods in the lattice of sets (Williams & Hogg, 1994). As described in Sec. 2.2, there will be $n$ necessary nogoods, each of size 2. In addition, each distinct clause in the proposition gives a single nogood of size 3. This case is thus of additional interest in having specified nogoods of two sizes. For evaluating the quantum algorithm, we start at level 3 in the lattice. Thus the smallest case for which the phase choices will influence the result is for n = 5.
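A sketch of this encoding (illustrative; items are variable-value pairs represented here as (variable, boolean) tuples of my own choosing) is:

def sat_nogoods(n, clauses):
    # necessary nogoods: no variable may take both of its values
    nogoods = [frozenset({(v, True), (v, False)}) for v in range(n)]
    # each clause is falsified by exactly one assignment to its three
    # variables; that assignment is the clause's size-3 nogood
    for clause in clauses:                  # clause: list of (var, sign)
        nogoods.append(frozenset((v, not sign) for v, sign in clause))
    return nogoods

# (x0 or x1 or x2) and ((not x0) or x1 or (not x2)) over n = 3 variables
clauses = [[(0, True), (1, True), (2, True)],
           [(0, False), (1, True), (2, False)]]
for g in sat_nogoods(3, clauses):
    print(sorted(g))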
We generate random problems with a given number of clauses by selecting that number of different nogoods of size 3 from among those sets not already excluded by the necessary nogoods. For random 3SAT, the hard problems are concentrated near the transition (Mitchell et al., 1992) at c = 4.2n. Finally, from among these randomly generated problems, we use only those that do in fact have a solution. Using randomly selected soluble problems results in somewhat harder problems than using a prespecified solution. Like other studies that need to examine many goods and nogoods in the lattice (Schrag & Crawford, 1996), these results are restricted to much smaller problems than in most studies of random SAT. Consequently, the transition region is rather spread out. Furthermore, the additional structure of the necessary nogoods and the larger size of the constraints, compared with the previous class of problems, makes it more likely that larger problems will be required to see the asymptotic scaling behavior. However, at least some asymptotic behaviors have been observed (Crawford & Auton, 1993) to persist quite accurately even for problems as small as n = 3, so some indication of the scaling behavior is not out of the question for the small problems considered here.

Phase Transition

The behavior of the algorithm as a function of the ratio of clauses to variables is shown in Fig. 11 using the phase inversion method. This shows the phase transition behavior. Comparing to Fig. 5, this also shows the class of random 3SAT problems is harder, on average, for the quantum algorithm than the class of unstructured problems.

[Figure 11: Probability to find a solution for random 3SAT as a function of c/n, using the phase inversion method. The curves correspond to n = 5 (black) and n = 10 (gray).]

[Figure 12: Scaling of the probability to find a solution, using the phase inversion method, as a function of the number of variables for random 3SAT problems. The curves correspond to different clause to variable ratios: 2 (dashed), 4 (solid), 6 (gray) and 8 (gray, dashed). This is shown on log and log-log scales (left and right plots, respectively).]

Scaling

The scaling of the probability to find a solution is shown in Fig. 12 using the phase inversion method. More limited experiments with the random phase method showed the same behavior as seen with the unstructured class of problems: somewhat worse performance but similar scaling behavior. The results here are less clear-cut than those of Fig. 8. For c/n = 2 the results are consistent with either polynomial or exponential scaling. For problems with more constraints, exponential scaling is a somewhat better fit.

In addition to the general scaling trend, there is also a noticeable difference in behavior between cases with an even and odd number of variables. This is due to the behavior of the amplitude at each step in the lattice.
Instead of a monotonic decrease in the concentration of amplitude into goods, there is an oscillatory behavior in which amplitude alternates between dispersing and being focused into goods at different levels. An extreme example of this behavior is shown in Fig. 13 for 3SAT problems with no constraints, i.e., c = 0. Specifically, at level $i$ this shows $\sum_s \big|\psi^{(i)}_s\big|^2$, where the sum is over all sets $s$ at level $i$ in the lattice that are consistent, which, for these problems with no constraints, are all assignments to $i$ variables. This is the probability that a good would be found if the algorithm were terminated at level $i$ and gives an indication of how well the algorithm concentrates amplitude among consistent states. In this case, the expanded search space of the quantum algorithm results in slightly worse performance than random selection from among complete assignments (all of which are solutions in this case). Each search starts with all amplitude in goods at level 3. Then the total probability in goods alternately decreases and increases as the map proceeds up to the solution level. Cases with an even number of variables (the black curves in the figure) end on a step that decreases the probability in goods, resulting in relatively lower performance compared to the odd variable cases (gray curves). Although this might suggest an improvement for the even $n$ cases by starting in level 2 rather than level 3, in fact this turns out not to be the case: starting in level 2 gives essentially the same behavior for the upper levels as starting the search from level 3 of the lattice due to one oscillation at intermediate levels that takes 2 steps to complete. Increasing the value of c/n, i.e., examining SAT problems with constraints, reduces the extent of the oscillations, particularly in higher levels of the lattice, and eventually results in monotonic decrease in probability as the search moves up the lattice. Nevertheless, for problems with a few constraints the existence of these oscillations gives rise to the observed difference in behavior between cases with an even and odd number of variables. These oscillations are also seen for underconstrained cases of unstructured problems in Figs. 7 and 8.

While Fig. 13 shows that the oscillatory behavior decreases for larger problems, it also suggests there may be more appropriate choices of the phases. Specifically, it may be possible to obtain a greater concentration of amplitude into solutions by allowing more dispersion into nogoods at intermediate levels of the lattice or using an initial condition with some amplitude in nogoods. If so, this would represent a new policy for selecting the phases that takes into account the problem-independent structure of the necessary nogoods. This would be somewhat analogous to focusing light with a lens: paths in many directions are modified by the lens to cause a convergence to a single point. More definite results are obtained for the improvement over random selection. Specifically, Fig. 14 shows an exponential improvement for both the phase inversion and random phase methods, corresponding to the behavior for unstructured problems in Fig. 9. Similar improvement is seen for other values of c/n as well: as in Fig. 9 the more highly constrained problems give larger improvements. A more stringent comparison is with random selection from among complete assignments (i.e., each variable given a single value) rather than from among all complete sets of variable-value pairs. This is also shown in Fig.
14, appearing to grow exponentially as well. This is particularly significant because the quantum algorithm uses a larger search space containing the necessary nogoods. Another view of this comparison is given in Fig. 15, showing the probabilities to find a solution with the quantum search and random selection from among complete assignments. We conclude from these results that the behavior with the additional structure of necessary nogoods and constraints of different sizes is qualitatively similar to that for unstructured random problems, but a detailed comparison of the scaling behaviors requires examining larger problem sizes.

Discussion

In summary, we have introduced a quantum search algorithm and evaluated its average behavior on a range of small search problems. It appears to increase the amplitude into solution states exponentially compared to evaluating and measuring a quantum superposition of potential solutions directly. Moreover, this method exhibits the same transition behavior, with its associated concentration of hard problems, as seen with many classical search methods. It thus extends the range of methods to which this phenomenon applies. More importantly, this indicates the algorithm is effectively exploiting the same structure of search problems as, say, classical backtrack methods, to prune unproductive search directions. It is thus a major improvement over the simple applications of quantum computing to search problems that behave essentially the same as classical generate-and-test, a method that completely ignores the possibility of pruning and hence doesn't exhibit the phase transition.

The transition behavior is readily understood because problems near the transition point have many large partial goods that do not lead to solutions (Williams & Hogg, 1994). Thus there will be a relatively high proportion of paths through the lattice that appear good for quite a while but eventually give deadends. A choice of phases based on detecting nogoods will not be able to work on these paths until near the solution level and hence give less chance to cancel out or move amplitude to those paths that do in fact lead to solutions. Hence problems with many large partial goods are likely to prove relatively difficult for any quantum algorithms that operate by distinguishing goods from nogoods of various sizes.

There remain many open questions. In the algorithm, the division between a problem-independent mapping through the lattice and a simple problem-specific adjustment to phases allows for a range of policies for selecting the phases. It would be useful to understand the effect of different policies in the hope of improving the concentration of amplitude into solutions. For example, the use of phases has two distinct jobs: first, to keep amplitude moving up along good sets rather than diffusing out to nogoods, and second, when a deadend is reached (i.e., a good set that has no good supersets) to send the amplitude at this deadend to a promising region of the search space, possibly very far from where the deadend occurred. These goals, of keeping amplitude concentrated on the one hand and sending it away on the other, are to some extent contradictory. Thus it may prove worthwhile to consider different phase choice policies for these two situations.
Furthermore, the mapping through the lattice is motivated by classical methods that incrementally build solutions by moving from sets to supersets in the lattice. Instead of using unitary maps at each step that are as close as possible to this classical behavior, other approaches could allow more significant spreading of the amplitude at intermediate levels in the lattice and only concentrate it into solutions in the last few steps. It may prove fruitful to consider another type of mapping based on local repair methods moving among neighbors of complete sets. In this case, sets are evaluated based on the number of constraints they violate, so an appropriate phase selection policy could depend on this number, rather than just whether the set is inconsistent or not. These possibilities may also suggest new probabilistic classical algorithms that might be competitive with existing heuristic search methods.

As a new example of a search method exhibiting the transition behavior, this work raises the same issues as prior studies of this phenomenon. For instance, to what extent does this behavior apply to more realistic classes of problems, such as those with clustering inherent in situations involving localized interactions (Hogg, 1996)? This will be difficult to check empirically due to the limitation to small problems that are feasible for a classical simulation of this algorithm. However the observation that this behavior persists for many classes of problems with other search methods suggests it will be widely applicable. It is also of interest to see if other phase transition phenomena appear in these quantum search algorithms, such as observed in optimization searches (Cheeseman et al., 1991; Pemberton & Zhang, 1996; Zhang & Korf, 1996; Gent & Walsh, 1995). There may also be transitions unique to quantum algorithms, for example in the required coherence time or sensitivity to environmental noise.

For the specific instances of the algorithm presented here, there are also some remaining issues. An important one is the cost of the mapping from one level to the next in terms of more basic operations that might be realized in hardware, although the simple structure of the matrices involved suggests this should not be too costly. The scaling behavior of the algorithm for larger cases is also of interest, which can perhaps be approached by examining the asymptotic nature of the matrix coefficients of Eqs. 17 and 19.

An important practical question is the physical implementation of quantum computers in general (Barenco et al., 1995; Sleator & Weinfurter, 1995; Cirac & Zoller, 1995), and the requirements imposed by the algorithm described here. Any implementation of a quantum computer will need to deal with two important difficulties (Landauer, 1994). First, there will be defects in the construction of the device. Thus even if an ideal design exactly produces the desired mapping, occasional manufacturing defects and environmental noise will introduce errors. We thus need to understand the sensitivity of the algorithm's behavior to errors in the mappings. Here the main difficulty is likely to be in the problem-independent mapping from one level of the lattice to the next, since the choice of phases in the problem-specific part doesn't require high precision. In this context we should note that standard error correction methods cannot be used with quantum computers in light of the requirement that all operations are reversible.
We also need to address the extent to which such errors can be minimized in the first place, thus placing less severe requirements on the algorithm. Particularly relevant in this respect is the possibility of drastically reducing defects in manufactured devices by atomically precise control of the hardware (Drexler, 1992; Eigler & Schweizer, 1990; Muller et al., 1995; Shen, Wang, Abeln, Tucker, Lyding, Avouris, & Walkup, 1995). There are also uniquely quantum mechanical approaches to controlling errors (Berthiaume, Deutsch, & Jozsa, 1994) based on partial measurements of the state. This work could substantially extend the range of ideal quantum algorithms that will be possible to implement.

The second major difficulty with constructing quantum computers is maintaining coherence of the superposition of states long enough to complete the computation. Environmental noise gradually couples to the state of the device, reducing the coherence and eventually limiting the time over which a superposition can perform useful computations (Unruh, 1995; Chuang, Laflamme, Shor, & Zurek, 1995). In effect, the coupling to the environment can be viewed as performing a measurement on the quantum system, destroying the superposition of states. This problem is particularly severe for proposed universal quantum computers that need to maintain superpositions for arbitrarily long times. In the method presented here, the number of steps is known in advance and could be implemented as a special purpose search device (for problems of a given size) rather than as a program running on a universal computer. Thus a given achievable coherence time would translate into a limit on feasible problem size. To the extent that this limit can be made larger than feasible for alternative classical search methods, the quantum search could be useful.

The open question of greatest theoretical interest is whether this algorithm or simple variants of it can concentrate amplitude into solutions sufficiently to give a polynomial, rather than exponential, decrease in the probability to find a solution of any NP search problem with small constraints. This is especially interesting since this class of problems includes many well-studied NP-complete problems such as graph coloring and propositional satisfiability. Even if this is not so in the worst case, it may be so on average for some classes of otherwise difficult real-world problems. While it is by no means clear to what extent quantum coherence provides more powerful computational behavior than classical machines, a recent proposal for rapid factoring (Shor, 1994) is an encouraging indication of its capabilities.

A more subtle question along these lines is how the average scaling behaves away from the transition region of hard problems. In particular, can such quantum algorithms expand the range of the polynomially scaling problems seen for highly underconstrained or overconstrained instances? If so, this would provide a class of problems of intermediate difficulty for which the quantum search is exponentially faster than classical methods, on average. This highlights the importance of broadening theoretical discussions of quantum algorithms to include typical or average behaviors in addition to worst case analyses. More generally, are there any differences in the phase transition behaviors or their location compared with the usual classical methods? These questions, involving the precise location of transition points, are not currently well understood even for classical search algorithms.
Thus a comparison with the behavior of this quantum algorithm may help shed light on the nature of the various phase transitions that seem to be associated with the intrinsic structure of the search problems rather than with specific search algorithms." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "I thank John Gilbert, John Lamping and Steve Vavasis for their suggestions and comments on this work. I have also benefited from discussions with Peter Cheeseman, Scott Clearwater, Bernardo Huberman, Don Kimber, Colin Williams, Andrew Yao and Michael Youssefmir." } ]
[ { "authors": "A Barenco; D Deutsch; A Ekert", "journal": "Physical Review Letters", "ref_id": "b0", "title": "Conditional quantum dynamics and logic gates", "year": "1995" }, { "authors": "P Benio", "journal": "J. Stat. Phys", "ref_id": "b1", "title": "Quantum mechanical hamiltonian models of Turing machines", "year": "1982" }, { "authors": "C H Bennett", "journal": "Science", "ref_id": "b2", "title": "Quantum cryptography: Uncertainty in the service of privacy", "year": "1992" }, { "authors": "E Bernstein; U Vazirani", "journal": "", "ref_id": "b3", "title": "Quantum complexity theory", "year": "1993" }, { "authors": "A Berthiaume; D Deutsch; R Jozsa", "journal": "IEEE Press", "ref_id": "b4", "title": "The stabilization of quantum computations", "year": "1994" }, { "authors": "V Cerny", "journal": "Physical Review A", "ref_id": "b5", "title": "Quantum computers and intractable (NP-complete) computing problems", "year": "1993" }, { "authors": "P Cheeseman; B Kanefsky; W M Taylor", "journal": "", "ref_id": "b6", "title": "Where the really hard problems are", "year": "1991" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b7", "title": "", "year": "" }, { "authors": "I L Chuang; R La Amme; P W Shor; W H Zurek", "journal": "Science", "ref_id": "b8", "title": "Quantum computers, factoring and decoherence", "year": "1995" }, { "authors": "J I Cirac; P Zoller", "journal": "Physical Review Letters", "ref_id": "b9", "title": "Quantum computations with cold trapped ions", "year": "1995" }, { "authors": "J M Crawford; L D Auton", "journal": "AAAI Press", "ref_id": "b10", "title": "Experimental results on the cross-over point in satis ability problems", "year": "1993" }, { "authors": "D Deutsch", "journal": "Proc. R. Soc. London A", "ref_id": "b11", "title": "Quantum theory, the Church-Turing principle and the universal quantum computer", "year": "1985" }, { "authors": "D Deutsch", "journal": "Proc. R. Soc. Lond., A", "ref_id": "b12", "title": "Quantum computational networks", "year": "1989" }, { "authors": "P A M Dirac", "journal": "", "ref_id": "b13", "title": "The Principles of Quantum Mechanics", "year": "1958" }, { "authors": "D P Divincenzo", "journal": "Science", "ref_id": "b14", "title": "Quantum computation", "year": "1995" }, { "authors": "D P Divincenzo; J Smolin", "journal": "IEEE Press", "ref_id": "b15", "title": "Results on two-bit gate design for quantum computers", "year": "1994" }, { "authors": "K E Drexler", "journal": "John Wiley", "ref_id": "b16", "title": "Nanosystems: Molecular Machinery, Manufacturing, and Computation", "year": "1992" }, { "authors": "D M Eigler; E K Schweizer", "journal": "Nature", "ref_id": "b17", "title": "Positioning single atoms with a scanning tunnelling microscope", "year": "1990" }, { "authors": "A Ekert; R Jozsa", "journal": "Rev. Mod. Phys", "ref_id": "b18", "title": "Shor's quantum algorithm for factorising numbers", "year": "1995" }, { "authors": "R P Feynman", "journal": "Foundations of Physics", "ref_id": "b19", "title": "Quantum mechanical computers", "year": "1986" }, { "authors": "R P Feynman", "journal": "Princeton Univ. Press", "ref_id": "b20", "title": "QED: The Strange Theory of Light and Matter", "year": "1985" }, { "authors": "E C Freuder; R J Wallace", "journal": "Arti cial Intelligence", "ref_id": "b21", "title": "Partial constraint satisfaction", "year": "1992" }, { "authors": "M R Garey; D S Johnson", "journal": "W. H. 
Freeman", "ref_id": "b22", "title": "A Guide to the Theory of NP-Completeness", "year": "1979" }, { "authors": "G H Golub; C F V Loan", "journal": "John Hopkins University Press", "ref_id": "b23", "title": "Matrix Computations", "year": "1983" }, { "authors": "T Hogg", "journal": "", "ref_id": "b24", "title": "Phase transitions in constraint satisfaction search", "year": "1994" }, { "authors": "T Hogg", "journal": "Arti cial Intelligence", "ref_id": "b25", "title": "Re ning the phase transitions in combinatorial search", "year": "1996" }, { "authors": "T Hogg; B A Huberman; C Williams", "journal": "Arti cial Intelligence", "ref_id": "b26", "title": "Phase transitions and the search problem", "year": "1996" }, { "authors": "T Hogg; C P Williams", "journal": "Arti cial Intelligence", "ref_id": "b27", "title": "The hardest constraint problems: A double phase transition", "year": "1994" }, { "authors": "R Jozsa", "journal": "IEEE Computer Society", "ref_id": "b28", "title": "Computation and quantum superposition", "year": "1992" }, { "authors": "S Karlin; H M Taylor", "journal": "Academic Press", "ref_id": "b29", "title": "A First Course in Stochastic Processes", "year": "1975" }, { "authors": "D Kimber", "journal": "", "ref_id": "b30", "title": "An introduction to quantum computation", "year": "1992" }, { "authors": "S Kirkpatrick; C D Gelatt; M P Vecchi", "journal": "Science", "ref_id": "b31", "title": "Optimization by simulated annealing", "year": "1983" }, { "authors": "R Landauer", "journal": "Physics Today", "ref_id": "b32", "title": "Information is physical", "year": "1991" }, { "authors": "R Landauer", "journal": "International Press", "ref_id": "b33", "title": "Is quantum mechanically coherent computation useful", "year": "1994" }, { "authors": "S Lloyd", "journal": "Science", "ref_id": "b34", "title": "A potentially realizable quantum computer", "year": "1993" }, { "authors": "S Lloyd", "journal": "Scienti c American", "ref_id": "b35", "title": "Quantum-mechanical computers", "year": "1995" }, { "authors": "A Mackworth", "journal": "Wiley", "ref_id": "b36", "title": "Constraint satisfaction", "year": "1992" }, { "authors": "N Margolus", "journal": "Addison-Wesley", "ref_id": "b37", "title": "Parallel quantum computation", "year": "1990" }, { "authors": "S Minton; M D Johnston; A B Philips; P Laird", "journal": "Arti cial Intelligence", "ref_id": "b38", "title": "Minimizing con icts: A heuristic repair method for constraint satisfaction and scheduling problems", "year": "1992" }, { "authors": "D Mitchell; B Selman; H Levesque", "journal": "AAAI Press", "ref_id": "b39", "title": "Hard and easy distributions of SAT problems", "year": "1992" }, { "authors": "R Motwani; P Raghavan", "journal": "Cambridge University Press", "ref_id": "b40", "title": "Randomized Algorithms", "year": "1995" }, { "authors": "W T Muller; D L Klein; T Lee; J Clarke; P L Mceuen; P G Schultz", "journal": "Science", "ref_id": "b41", "title": "A strategy for the chemical synthesis of nanostructures", "year": "1995" }, { "authors": "J C Pemberton; W Zhang", "journal": "Arti cial Intelligence", "ref_id": "b42", "title": "Epsilon-transformation: Exploiting phase transitions to solve combinatorial optimization problems", "year": "1996" }, { "authors": "P Prosser", "journal": "Arti cial Intelligence", "ref_id": "b43", "title": "An empirical study of phase transitions in binary constraint satisfaction problems", "year": "1996" }, { "authors": "R Schrag; J Crawford", "journal": "Arti cial Intelligence", "ref_id": "b44", 
"title": "Implicates and prime implicates in random 3-SAT", "year": "1996" }, { "authors": "B Selman; H Levesque; D Mitchell", "journal": "AAAI Press", "ref_id": "b45", "title": "A new method for solving hard satis ability problems", "year": "1992" }, { "authors": "T C Shen; C Wang; G C Abeln; J R Tucker; J W Lyding; P Avouris; R E Walkup", "journal": "Science", "ref_id": "b46", "title": "Atomic-scale desorption through electronic and vibrational excitation mechanisms", "year": "1995" }, { "authors": "P W Shor", "journal": "IEEE Press", "ref_id": "b47", "title": "Algorithms for quantum computation: Discrete logarithms and factoring", "year": "1994" }, { "authors": "T Sleator; H Weinfurter", "journal": "Physical Review Letters", "ref_id": "b48", "title": "Realizable universal quantum logic gates", "year": "1995" }, { "authors": "B M Smith; M E Dyer", "journal": "Arti cial Intelligence", "ref_id": "b49", "title": "Locating the phase transition in binary constraint satisfaction problems", "year": "1996" }, { "authors": "K Svozil", "journal": "Bulletin of the European Association of Theoretical Computer Sciences", "ref_id": "b50", "title": "Quantum computation and complexity theory I", "year": "1995" }, { "authors": "W G Unruh", "journal": "Physical Review A", "ref_id": "b51", "title": "Maintaining coherence in quantum computers", "year": "1995" }, { "authors": "C P Williams; T Hogg", "journal": "Arti cial Intelligence", "ref_id": "b52", "title": "Exploiting the deep structure of constraint problems", "year": "1994" }, { "authors": "S Wolfram", "journal": "Addison-Wesley", "ref_id": "b53", "title": "Mathematica: A System for Doing Mathematics by Computer", "year": "1991" }, { "authors": "W Zhang; R E Korf", "journal": "Arti cial Intelligence", "ref_id": "b54", "title": "A uni ed view of complexity transitions on the travelling salesman problem", "year": "1996" }, { "authors": "L Zhu; V Kleiman; X Li; S P Lu; K Trentelman; R J Gordon", "journal": "Science", "ref_id": "b55", "title": "Coherent laser control of the product distribution obtained in the photoexcitation of HI", "year": "1995" } ]
[ { "formula_coordinates": [ 4, 279.12, 560.88, 243.12, 30.88 ], "formula_id": "formula_0", "formula_text": "N i = N i (1)" }, { "formula_coordinates": [ 7, 277.2, 280.56, 245.04, 37.9 ], "formula_id": "formula_1", "formula_text": "X i j i j 2 = 1 (2)" }, { "formula_coordinates": [ 9, 163.92, 624.6, 275.28, 26.88 ], "formula_id": "formula_2", "formula_text": "NOT 0 1 NOT( 0 j0i + 1 j1i) = 0 j1i + 1 j0i 1 0" }, { "formula_coordinates": [ 10, 182.4, 189.72, 339.84, 31.28 ], "formula_id": "formula_3", "formula_text": "U 4 1 0 U 4 j0i = 1 p 2 (j0i + j1i) 1 p 2 1 1(7)" }, { "formula_coordinates": [ 10, 154.08, 318.84, 368.16, 43.76 ], "formula_id": "formula_4", "formula_text": "4 p + 1 4 (1 p) or P classical = 1 4 + p 2(8)" }, { "formula_coordinates": [ 10, 251.04, 402, 271.2, 69.56 ], "formula_id": "formula_5", "formula_text": "U 3 1 0 = 1 2 p 3 1 ! U 3 0 1 = 1 2 1 p 3 (9)" }, { "formula_coordinates": [ 10, 218.4, 554.04, 303.84, 66.08 ], "formula_id": "formula_6", "formula_text": "P quantum = 1 4 + cos 2 2 p 3 4 sin (2 ) = P classical p 3 4 sin (2 ) (10)" }, { "formula_coordinates": [ 13, 265.35, 553.68, 251.96, 37.66 ], "formula_id": "formula_7", "formula_text": "s) = j X m=1 (s m )e i m (11" }, { "formula_coordinates": [ 13, 517.31, 564.36, 4.92, 15.2 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 13, 231.35, 642, 290.89, 45.34 ], "formula_id": "formula_9", "formula_text": "r j = r j 1 * j X m=1 e i m + = r j 1 p j (12)" }, { "formula_coordinates": [ 14, 251.28, 195.84, 270.96, 42.46 ], "formula_id": "formula_10", "formula_text": "! = ln L! N L k! N b ln N + O(N)(13)" }, { "formula_coordinates": [ 14, 263.28, 622.08, 258.96, 50.6 ], "formula_id": "formula_11", "formula_text": "M = 0 @ 1 1 0 1 0 1 0 1 1 1 A (14)" }, { "formula_coordinates": [ 16, 150.96, 305.76, 371.28, 64.68 ], "formula_id": "formula_12", "formula_text": "A y B = 0 B B @ 1 p 3 1 p 2 1 p 6 1 p 3 1 p 2 1 p 6 1 p 3 0 q 2 3 1 C C A 0 @ 2 0 0 0 1 0 0 0 1 1 A 0 B B @ 1 p 3 1 p 3 1 p 3 0 1 p 2 1 p 2 q 2 3 1 p 6 1 p 6 1 C C A(15)" }, { "formula_coordinates": [ 16, 230.16, 386.16, 292.08, 50.6 ], "formula_id": "formula_13", "formula_text": "U = A y B = 1 3 0 @ 2 2 1 2 1 2 1 2 2 1 A (16)" }, { "formula_coordinates": [ 16, 90, 584.64, 432.24, 81.52 ], "formula_id": "formula_14", "formula_text": "1 = U y U = i X k=0 n k a 2 k (17) where n k = i k N i i + 1 k (18)" }, { "formula_coordinates": [ 17, 235.44, 124.08, 281.88, 38.14 ], "formula_id": "formula_15", "formula_text": "0 = U y U = i X j;k=0 n (p) jk a j a k (19" }, { "formula_coordinates": [ 17, 517.32, 135.96, 4.92, 15.2 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 17, 179.28, 171.84, 342.96, 37.18 ], "formula_id": "formula_17", "formula_text": "n (p) jk = X x i p k x p x i p j x N 2i + p i + 1 j k + x (20)" }, { "formula_coordinates": [ 19, 203.28, 286.8, 314.04, 38.76 ], "formula_id": "formula_18", "formula_text": "(j+1) r = X U r (j) = X k a (j) k X jr\\ j=k (j) (21" }, { "formula_coordinates": [ 19, 517.32, 302.76, 4.92, 15.2 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 23, 270.24, 100.86, 252, 45.04 ], "formula_id": "formula_20", "formula_text": "L = ( N 2 ) ( L 2 ) m ( N 2 ) m(25)" }, { "formula_coordinates": [ 23, 215.04, 198.12, 307.2, 30.28 ], "formula_id": "formula_21", "formula_text": "ln N soln N h 1 b + ln 1 1 b 2 (26)" }, { "formula_coordinates": [ 23, 299.28, 259.08, 218.04, 22.4 ], "formula_id": "formula_22", "formula_text": "ln (1 1=b 2 ) (27" }, { 
"formula_coordinates": [ 23, 517.32, 259.08, 4.92, 15.2 ], "formula_id": "formula_23", "formula_text": ")" }, { "formula_coordinates": [ 23, 257.28, 385.08, 264.96, 31.52 ], "formula_id": "formula_24", "formula_text": "poly = b 2 1 2b ln(b 1)(28)" } ]
Quantum Computing and Phase Transitions in Combinatorial Search
We introduce an algorithm for combinatorial search on quantum computers that is capable of significantly concentrating amplitude into solutions for some NP search problems, on average. This is done by exploiting the same aspects of problem structure as used by classical backtrack methods to avoid unproductive search choices. This quantum algorithm is much more likely to find solutions than the simple direct use of quantum parallelism. Furthermore, empirical evaluation on small problems shows this quantum algorithm displays the same phase transition behavior, and at the same location, as seen in many previously studied classical search methods. Specifically, difficult problem instances are concentrated near the abrupt change from underconstrained to overconstrained problems.
Tad Hogg
[ { "figure_caption": "Figure 1 :1Figure 1: Structure of the set lattice for a problem with four items. The subsets of f1; 2; 3; 4g", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "6.High precision values were obtained from the FindRoot function of Mathematica. 7. The values are given in Online Appendix 1. 8. Using the Mathematica function Rationalize and the package NumberTheory`Recognize`.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Behavior of b(i) k vs. k on a log scale for N = 10. The three curves show the values for i = 4 (black), 3 (dashed) and 2 (gray).", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The solid curves show the classical backtrack search cost for randomly generated", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Expected number of trials hTi to nd a solution vs. for random problems with", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: Expected number of trials hTi to nd a solution vs. for random problems with prespeci ed solution with binary constraints, using inverted phases for nogoods.The solid curve is for N = 10, with 1000 samples per point. The gray curve is for N = 20 with 100 samples per point (but additional samples were used around the peak). The error bars indicate the standard error in the estimate of hTi.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Standard deviation in the number of trials to nd a solution for N = 20 as a function of . The black curve is for random phases assigned to nogoods, and the gray one for inverting phases.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure9: Scaling of the ratio of the probability to nd a solution using the quantum algorithm to the probability to nd a solution by random selection at the solution level, using the phase inversion method, for of 1 (solid), 2 (dashed), 3 (gray) and 4 (dashed gray). The curves are close to linear on this log scale indicating exponential improvement over the direct selection from among complete sets, with a higher enhancement for problems with more constraints.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure10: Comparison of scaling of probability to nd a solution with the quantum algorithm using the phase inversion method (dashed curve) and by random selection at the solution level (solid curve) for = 2.", "figure_data": "", "figure_id": "fig_10", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Average number of tries to nd a solution with the quantum search algorithm", "figure_data": "", "figure_id": "fig_11", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure14: Scaling of the ratio of the probability to nd a solution using the quantum algorithm to the probability to nd a solution by random selection at the solution level as a function of the number of variables for random 3SAT problems with clause to variable ratio equal to 4. 
The solid and dashed curves correspond to using the phase inversion and random phase methods, respectively. The black curves compare to random selection among complete sets, while the gray compare to selection only from among complete assignments. The curves are close to linear on this log scale indicating exponential improvement over the direct selection from among complete sets.", "figure_data": "", "figure_id": "fig_13", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Comparison of scaling of probability to nd a solution with the quantum algorithm using the phase inversion method (solid curve) and by random selection from among complete assignments (gray curve) for c=n = 4.", "figure_data": "", "figure_id": "fig_14", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Probability in goods (i.e., consistent sets) as a function of level in the lattice for 3SAT problems with no constraints. This shows the behavior for n equal to 9 (gray dashed), 10 (black dashed), 11 (gray) and 12 (black). For each problem, the nal probability at level n is the probability a solution is obtained with the quantum algorithm.", "figure_data": "1probability0.9 0.950.853 4681012levelFigure 13:", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b21", "b16", "b1", "b24", "b21", "b8", "b20", "b0", "b22", "b18", "b12", "b10" ], "table_ref": [], "text": "Bayesian belief networks (Pearl, 1988;Lauritzen & Spiegelhalter, 1988) provide a rich graphical representation of probabilistic models. The nodes in these networks represent random variables, while the links represent causal in uences. These associations endow directed acyclic graphs (DAGs) with a precise probabilistic semantics. The ease of interpretation a orded by this semantics explains the growing appeal of belief networks, now widely used as models of planning, reasoning, and uncertainty.\nInference and learning in belief networks are possible insofar as one can e ciently compute (or approximate) the likelihood of observed patterns of evidence (Buntine, 1994;Russell, Binder, Koller, & Kanazawa, 1995). There exist provably e cient algorithms for computing likelihoods in belief networks with tree or chain-like architectures. In practice, these algorithms also tend to perform well on more general sparse networks. However, for networks in which nodes have many parents, the exact algorithms are too slow (Jensen, Kong, & Kjaeful , 1995). Indeed, in large networks with dense or layered connectivity, exact methods are intractable as they require summing over an exponentially large number of hidden states.\nOne approach to dealing with such networks has been to use Gibbs sampling (Pearl, 1988), a stochastic simulation methodology with roots in statistical mechanics (Geman & Geman, 1984). Our approach in this paper relies on a di erent tool from statistical mechanics|namely, mean eld theory (Parisi, 1988). The mean eld approximation is well known for probabilistic models that can be represented as undirected graphs|so-called Markov networks. For example, in Boltzmann machines (Ackley, Hinton, & Sejnowski, 1985), mean eld learning rules have been shown to yield tremendous savings in time and computation over sampling-based methods (Peterson & Anderson, 1987).\nThe main motivation for this work was to extend the mean eld approximation for undirected graphical models to their directed counterparts. Since belief networks can be transformed to Markov networks, and mean eld theories for Markov networks are well known, it is natural to ask why a new framework is required at all. The reason is that probabilistic models which have compact representations as DAGs may have unwieldy representations as undirected graphs. As we shall see, avoiding this complexity and working directly on DAGs requires an extension of existing methods.\nIn this paper we focus on sigmoid belief networks (Neal, 1992), for which the resulting mean eld theory is most straightforward. These are networks of binary random variables whose local conditional distributions are based on log-linear models. We develop a mean eld approximation for these networks and use it to compute a lower bound on the likelihood of evidence. Our method applies to arbitrary partial instantiations of the variables in these networks and makes no restrictions on the network topology. Note that once a lower bound is available, a learning procedure can maximize the lower bound; this is useful when the true likelihood itself cannot be computed e ciently. 
A similar approximation for models of continuous random variables is discussed by Jaakkola et al. (1995).

The idea of bounding the likelihood in sigmoid belief networks was introduced in a related architecture known as the Helmholtz machine (Hinton, Dayan, Frey, & Neal, 1995). A fundamental advance of this work was to establish a framework for approximation that is especially conducive to learning the parameters of layered belief networks. The close connection between this idea and the mean field approximation from statistical mechanics, however, was not developed.

In this paper we hope not only to elucidate this connection, but also to convey a sense of which approximations are likely to generate useful lower bounds while, at the same time, remaining analytically tractable. We develop here what is perhaps the simplest such approximation for belief networks, noting that more sophisticated methods (Jaakkola & Jordan, 1996a; Saul & Jordan, 1995) are also available. It should be emphasized that approximations of some form are required to handle the multilayer neural networks used in statistical pattern recognition. For these networks, exact algorithms are hopelessly intractable; moreover, Gibbs sampling methods are impractically slow.

The organization of this paper is as follows. Section 2 introduces the problems of inference and learning in sigmoid belief networks. Section 3 contains the main contribution of the paper: a tractable mean field theory. Here we present the mean field approximation for sigmoid belief networks and derive a lower bound on the likelihood of instantiated patterns of evidence. Section 4 looks at a mean field algorithm for learning the parameters of sigmoid belief networks. For this algorithm, we give results on a benchmark problem in pattern recognition: the classification of handwritten digits. Finally, section 5 presents our conclusions, as well as future issues for research." }, { "figure_ref": [ "fig_0" ], "heading": "Sigmoid Belief Networks", "publication_ref": [ "b18", "b17" ], "table_ref": [], "text": "The great virtue of belief networks is that they clearly exhibit the conditional dependencies of the underlying probability model. Consider a belief network defined over binary random variables $S = (S_1, S_2, \ldots, S_N)$. We denote the parents of $S_i$ by $\mathrm{pa}(S_i) \subseteq \{S_1, S_2, \ldots, S_{i-1}\}$; this is the smallest set of nodes for which

$$P(S_i \mid S_1, S_2, \ldots, S_{i-1}) = P(S_i \mid \mathrm{pa}(S_i)). \tag{1}$$

In sigmoid belief networks (Neal, 1992), the conditional distributions attached to each node are based on log-linear models. In particular, the probability that the ith node is activated is given by

$$P(S_i = 1 \mid \mathrm{pa}(S_i)) = \sigma\Big(\sum_j J_{ij} S_j + h_i\Big), \tag{2}$$

where $J_{ij}$ and $h_i$ are the weights and biases in the network, and

$$\sigma(z) = \frac{1}{1 + e^{-z}} \tag{3}$$

is the sigmoid function shown in Figure 1. In sigmoid belief networks, we have $J_{ij} = 0$ for $S_j \notin \mathrm{pa}(S_i)$; moreover, $J_{ij} = 0$ for $j \ge i$, since the network's structure is that of a directed acyclic graph. The sigmoid function in eq. (2) provides a compact parametrization of the conditional probability distributions used to propagate beliefs.1 In particular, $P(S_i \mid \mathrm{pa}(S_i))$ depends on $\mathrm{pa}(S_i)$ only through a sum of weighted inputs, where the weights may be viewed as the parameters in a logistic regression (McCullagh & Nelder, 1983).

1. The relation to noisy-OR models is discussed in appendix A.
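As a minimal sketch of eq. (2), the fragment below computes the conditional probability for one unit of a tiny hand-built network; the weights and biases are arbitrary illustrative values, not values from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # eq. (3)

def p_unit_on(i, s, J, h):
    """Eq. (2): probability that S_i = 1 given the parent values in s.
    J[i, j] is zero unless S_j is a parent of S_i, so summing over all j
    is the same as summing over pa(S_i)."""
    return sigmoid(J[i] @ s + h[i])

# Tiny network: S_0 and S_1 are roots, S_2 has parents S_0 and S_1.
J = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [1.5, -2.0, 0.0]])
h = np.array([-0.5, 0.2, 0.1])

s = np.array([1.0, 0.0, 0.0])  # parent configuration S_0 = 1, S_1 = 0
print(p_unit_on(2, s, J, h))   # sigma(1.5 - 0.0 + 0.1) = sigma(1.6), about 0.832
```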
The conditional probability distribution for $S_i$ may be summarized as:

$$P(S_i \mid \mathrm{pa}(S_i)) = \frac{\exp\left[\left(\sum_j J_{ij} S_j + h_i\right) S_i\right]}{1 + \exp\left[\sum_j J_{ij} S_j + h_i\right]}. \tag{4}$$

Note that substituting $S_i = 1$ in eq. (4) recovers the result in eq. (2). Combining eqs. (1) and (4), we may write the joint probability distribution over the variables in the network as:

$$P(S) = \prod_i P(S_i \mid \mathrm{pa}(S_i)) \tag{5}$$
$$= \prod_i \frac{\exp\left[\left(\sum_j J_{ij} S_j + h_i\right) S_i\right]}{1 + \exp\left[\sum_j J_{ij} S_j + h_i\right]}. \tag{6}$$

The denominator in eq. (6) ensures that the probability distribution is normalized to unity. We now turn to the problem of inference in sigmoid belief networks. Absorbing evidence divides the units in the belief network into two types, visible and hidden. The visible units (or "evidence nodes") are those for which we have instantiated values; the hidden units are those for which we do not. When there is no possible ambiguity, we will use H and V to denote the subsets of hidden and visible units. Using Bayes' rule, inference is done under the conditional distribution

$$P(H \mid V) = \frac{P(H, V)}{P(V)}, \tag{7}$$

where

$$P(V) = \sum_H P(H, V) \tag{8}$$

is the likelihood of the evidence V. In principle, the likelihood may be computed by summing over all $2^{|H|}$ configurations of the hidden units. Unfortunately, this calculation is intractable in large, densely connected networks. This intractability presents a major obstacle to learning parameters for these networks, as nearly all procedures for statistical estimation require frequent estimates of the likelihood. The calculations for exact probabilistic inference are beset by the same difficulties.

Unable to compute P(V) or work directly with P(H|V), we will resort to an approximation from statistical physics known as mean field theory." }, { "figure_ref": [ "fig_2" ], "heading": "Mean Field Theory", "publication_ref": [ "b11", "b20", "b3", "b3", "b9", "b6", "b19", "b26", "b23", "b21", "b16" ], "table_ref": [], "text": "The mean field approximation appears under a multitude of guises in the physics literature; indeed, it is "almost as old as statistical mechanics" (Itzykson & Drouffe, 1991). Let us briefly explain how it acquired its name and why it is so ubiquitous. In the physical models described by Markov networks, the variables $S_i$ represent localized magnetic moments (e.g., at the sites of a crystal lattice), and the sums $\sum_j J_{ij}S_j + h_i$ represent local magnetic fields. Roughly speaking, in certain cases a central limit theorem may be applied to these sums, and a useful approximation is to ignore the fluctuations in these fields and replace them by their mean value, hence the name "mean field" theory. In some models, this is an excellent approximation; in others, a poor one. Because of its simplicity, however, it is widely used as a first step in understanding many types of physical phenomena.

Though this explains the philological origins of mean field theory, there are in fact many ways to derive what amounts to the same approximation (Parisi, 1988). In this paper we present the formulation most appropriate for inference and learning in graphical models. In particular, we view mean field theory as a principled method for approximating an intractable graphical model by a tractable one. This is done via a variational principle that chooses the parameters of the tractable model to minimize an entropic measure of error.

The basic framework of mean field theory remains the same for directed graphs, though we have found it necessary to introduce extra mean field parameters in addition to the usual ones.
As in Markov networks, one finds a set of nonlinear equations for the mean field parameters that can be solved by iteration. In practice, we have found this iteration to converge fairly quickly and to scale well to large networks.

Let us now return to the problem posed at the end of the last section. There we found that for many belief networks, it was intractable to decompose the joint distribution as P(S) = P(H|V)P(V), where P(V) was the likelihood of the evidence V. For the purposes of probabilistic modeling, mean field theory has two main virtues. First, it provides a tractable approximation, Q(H|V) ≈ P(H|V), to the conditional distributions required for inference. Second, it provides a lower bound on the likelihoods required for learning.

Let us first consider the origin of the lower bound. Clearly, for any approximating distribution Q(H|V), we have the equality:

$$\ln P(V) = \ln \sum_H P(H, V) \tag{9}$$
$$= \ln \sum_H Q(H|V)\,\frac{P(H, V)}{Q(H|V)}. \tag{10}$$

To obtain a lower bound, we now apply Jensen's inequality (Cover & Thomas, 1991), pushing the logarithm through the sum over hidden states and into the expectation:

$$\ln P(V) \ge \sum_H Q(H|V)\,\ln\!\left[\frac{P(H, V)}{Q(H|V)}\right]. \tag{11}$$

It is straightforward to verify that the difference between the left and right hand sides of eq. (11) is the Kullback-Leibler divergence (Cover & Thomas, 1991):

$$\mathrm{KL}(Q\,\|\,P) = \sum_H Q(H|V)\,\ln\!\left[\frac{Q(H|V)}{P(H|V)}\right]. \tag{12}$$

Thus, the better the approximation to P(H|V), the tighter the bound on ln P(V).

Anticipating the connection to statistical mechanics, we will refer to Q(H|V) as the mean field distribution. It is natural to divide the calculation of the bound into two components, both of which are particular averages over this approximating distribution. These components are the mean field entropy and energy; the overall bound is given by their difference:

$$\ln P(V) \ge \left(-\sum_H Q(H|V)\,\ln Q(H|V)\right) - \left(-\sum_H Q(H|V)\,\ln P(H, V)\right). \tag{13}$$

Both terms have physical interpretations. The first measures the amount of uncertainty in the mean field distribution and follows the standard definition of entropy. The second measures the average value2 of -ln P(H,V); the name "energy" arises from interpreting the probability distributions in belief networks as Boltzmann distributions3 at unit temperature. In this case, the energy of each network configuration is given (up to a constant) by minus the logarithm of its probability under the Boltzmann distribution. In sigmoid belief networks, the energy has the form

$$-\ln P(H, V) = -\sum_{ij} J_{ij} S_i S_j - \sum_i h_i S_i + \sum_i \ln\!\left[1 + \exp\Big(\sum_j J_{ij} S_j + h_i\Big)\right], \tag{14}$$

as follows from eq. (6). The first two terms in this equation are familiar from Markov networks with pairwise interactions (Hertz, Krogh, & Palmer, 1991); the last term is peculiar to sigmoid belief networks. Note that the overall energy is neither a linear function of the weights nor a polynomial function of the units. This is the price we pay in sigmoid belief networks for identifying P(H|V) as a Boltzmann distribution and the likelihood P(V) as its partition function. Note that this identification was made implicitly in the form of eqs. (7) and (8).

The bound in eq. (11) is valid for any probability distribution Q(H|V). To make use of it, however, we must choose a distribution that enables us to evaluate the right hand side of eq. (11). Consider the factorized distribution

$$Q(H|V) = \prod_{i \in H} \mu_i^{S_i}\,(1 - \mu_i)^{1 - S_i}, \tag{15}$$

in which the binary hidden units $\{S_i\}_{i \in H}$ appear as independent Bernoulli variables with adjustable means $\mu_i$.
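Eqs. (11) and (15) can be exercised end to end on a network small enough to enumerate. The fragment below is a self-contained illustration (the weights and trial means are arbitrary, not values from the paper): it computes the exact ln P(V) of eq. (8) and checks that the right hand side of eq. (11), evaluated under a factorized Q, never exceeds it.

```python
import itertools
import numpy as np

def log_joint(s, J, h):
    """ln P(S) for a complete configuration s, from eqs. (4)-(6)."""
    z = J @ s + h
    return float(np.sum(z * s - np.log1p(np.exp(z))))

# Toy network: hidden S_0, S_1 are parents of the visible unit S_2.
J = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [2.0, -1.0, 0.0]])
h = np.array([0.3, -0.2, 0.5])
v = 1.0  # observed value of S_2

# Exact log-likelihood, eq. (8), by enumerating the 2^|H| hidden configurations.
configs = [np.array(hh + (v,)) for hh in itertools.product([0.0, 1.0], repeat=2)]
ln_pv = np.log(sum(np.exp(log_joint(s, J, h)) for s in configs))

# Right hand side of eq. (11) under the factorized Q of eq. (15).
mu = np.array([0.6, 0.4])  # arbitrary trial means
bound = 0.0
for s in configs:
    q = float(np.prod(np.where(s[:2] == 1.0, mu, 1.0 - mu)))
    bound += q * (log_joint(s, J, h) - np.log(q))
print(ln_pv, bound, bound <= ln_pv)  # the bound never exceeds ln P(V)
```

Any trial means in (0, 1) respect the inequality; the rest of this section derives the choice of means that makes the bound tightest.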
A mean field approximation is obtained by substituting the factorized distribution, eq. (15), for the true Boltzmann distribution, eq. (7). It may seem that this approximation replaces the rich probabilistic dependencies in P(H|V) by an impoverished assumption of complete factorizability. Though this is true to some degree, the reader should keep in mind that the values we choose for $\{\mu_i\}_{i \in H}$ (and hence the statistics of the hidden units) will depend on the evidence V.

The best approximation of the form, eq. (15), is found by choosing the mean values, $\{\mu_i\}_{i \in H}$, that minimize the Kullback-Leibler divergence, KL(Q||P). This is equivalent to minimizing the gap between the true log-likelihood, ln P(V), and the lower bound obtained from mean field theory. The mean field bound on the log-likelihood may be calculated by substituting eq. (15) into the right hand side of eq. (11). The result of this calculation is

$$\ln P(V) \ge \sum_{ij} J_{ij}\mu_i\mu_j + \sum_i h_i\mu_i - \sum_i \left\langle \ln\!\left[1 + e^{\sum_j J_{ij}S_j + h_i}\right]\right\rangle - \sum_i \left[\mu_i \ln \mu_i + (1 - \mu_i)\ln(1 - \mu_i)\right], \tag{16}$$

where ⟨·⟩ indicates an expectation value over the mean field distribution, eq. (15). The first three terms of eq. (16) represent the mean field energy, derived from eq. (14); the last represents the mean field entropy. In a slight abuse of notation, we have defined mean values $\mu_i$ for the visible units; these of course are set to the instantiated values $\mu_i \in \{0, 1\}$.

Note that to compute the average energy in the mean field approximation, we must find the expected value of $\langle\ln[1 + e^{z_i}]\rangle$, where $z_i = \sum_j J_{ij}S_j + h_i$ is the sum of weighted inputs to the ith unit in the belief network. Unfortunately, even under the mean field assumption that the hidden units are uncorrelated, this average does not have a simple closed form. This term does not arise in the mean field theory for Markov networks with pairwise interactions; again, it is peculiar to sigmoid belief networks.

In principle, the average may be performed by enumerating the possible states of pa(S_i). The result of this calculation, however, would be an extremely unwieldy function of the parameters in the belief network. This reflects the fact that in general, the sigmoid belief network defined by the weights $J_{ij}$ has an equivalent Markov network with Nth-order interactions, not pairwise ones. To avoid this complexity, we must develop a mean field theory that works directly on DAGs.

How we handle the expected value of $\langle\ln[1 + e^{z_i}]\rangle$ is what distinguishes our mean field theory from previous ones. Unable to compute this term exactly, we resort to another bound.

2. A similar average is performed in the E-step of an EM algorithm (Dempster, Laird, & Rubin, 1977); the difference here is that the average is performed over the mean field distribution, Q(H|V), rather than the true posterior, P(H|V). For a related discussion, see Neal & Hinton (1993).
3. Our terminology is as follows. Let S denote the degrees of freedom in a statistical mechanical system. The energy of the system, E(S), is a real-valued function of these degrees of freedom, and the Boltzmann distribution $P(S) = e^{-\beta E(S)}/\sum_S e^{-\beta E(S)}$ defines a probability distribution over the possible configurations of S. The parameter β is the inverse temperature; it serves to calibrate the energy scale and will be fixed to unity in our discussion of belief networks. Finally, the sum in the denominator, known as the partition function, ensures that the Boltzmann distribution is normalized to unity.
Note that for any random variable z and any real number ξ, we have the equality:

$$\langle \ln(1 + e^z)\rangle = \left\langle \ln\!\left[e^{\xi z}\left(e^{-\xi z} + e^{(1-\xi)z}\right)\right]\right\rangle \tag{17}$$
$$= \xi\langle z\rangle + \left\langle \ln\!\left[e^{-\xi z} + e^{(1-\xi)z}\right]\right\rangle. \tag{18}$$

Applying Jensen's inequality to the remaining expectation gives the upper bound:

$$\langle \ln(1 + e^z)\rangle \le \xi\langle z\rangle + \ln\left\langle e^{-\xi z} + e^{(1-\xi)z}\right\rangle. \tag{19}$$

Setting ξ = 0 in eq. (19) gives the standard bound: ⟨ln(1 + e^z)⟩ ≤ ln⟨1 + e^z⟩. A tighter bound (Seung, 1995) can be obtained, however, by allowing non-zero values of ξ. This is illustrated in Figure 2 for the special case where z is a Gaussian distributed random variable with zero mean and unit variance. The bound in eq. (19) has two useful properties which we state here without proof: (i) the right hand side is a convex function of ξ; (ii) the value of ξ which minimizes this function occurs in the interval ξ ∈ [0, 1]. Thus, provided it is possible to evaluate eq. (19) for different values of ξ, the tightest bound of this form can be found by a simple one-dimensional minimization.

Figure 2: Illustration of the bound, eq. (19), for the case where z is normally distributed with zero mean and unit variance. In this case, the exact result is ⟨ln(1 + e^z)⟩ = 0.806; the bound gives $\min_\xi \ln\left[e^{\frac{1}{2}\xi^2} + e^{\frac{1}{2}(1-\xi)^2}\right] = 0.818$. The standard bound from Jensen's inequality occurs at ξ = 0 and gives 0.974.

The above bound can be put to immediate use by attaching an extra mean field parameter ξ_i to each unit in the belief network. We can then upper bound the intractable terms in the mean field energy by

$$\left\langle \ln\!\left[1 + e^{\sum_j J_{ij}S_j + h_i}\right]\right\rangle \le \xi_i\Big(\sum_j J_{ij}\mu_j + h_i\Big) + \ln\left\langle e^{-\xi_i z_i} + e^{(1-\xi_i)z_i}\right\rangle, \tag{20}$$

where $z_i = \sum_j J_{ij}S_j + h_i$. The expectations inside the logarithm can be evaluated exactly for the factorial distribution, eq. (15); for example,

$$\langle e^{-\xi_i z_i}\rangle = e^{-\xi_i h_i}\prod_j\left[1 - \mu_j + \mu_j e^{-\xi_i J_{ij}}\right]. \tag{21}$$

A similar result holds for ⟨e^{(1-ξ_i)z_i}⟩. Though these averages are tractable, we will tend not to write them out in what follows. The reader, however, should keep in mind that these averages do not present any difficulty; they are simply averages over products of independent random variables, as opposed to sums.

Assembling the terms in eqs. (16) and (20) gives a lower bound ln P(V) ≥ L_V,

$$L_V = \sum_{ij} J_{ij}\mu_i\mu_j + \sum_i h_i\mu_i - \sum_i \xi_i\Big(\sum_j J_{ij}\mu_j + h_i\Big) - \sum_i \ln\left\langle e^{-\xi_i z_i} + e^{(1-\xi_i)z_i}\right\rangle - \sum_i\left[\mu_i\ln\mu_i + (1 - \mu_i)\ln(1 - \mu_i)\right], \tag{22}$$

on the log-likelihood of the evidence V. So far we have not specified the parameters $\{\mu_i\}_{i \in H}$ and $\{\xi_i\}$; in particular, the bound in eq. (22) is valid for any choice of parameters. We naturally seek the values that maximize the right hand side of eq. (22). Suppose we fix the mean values $\{\mu_i\}_{i \in H}$ and ask for the parameters $\{\xi_i\}$ that yield the tightest possible bound. Note that the right hand side of eq. (22) does not couple terms with ξ_i that belong to different units in the network. The minimization over $\{\xi_i\}$ therefore reduces to N independent minimizations over the interval [0, 1]. These can be done by any number of standard methods (Press, Flannery, Teukolsky, & Vetterling, 1986).

To choose the means, we set the gradients of the bound with respect to $\{\mu_i\}_{i \in H}$ equal to zero. To this end, let us define the intermediate matrix:

$$K_{ij} = -\frac{\partial}{\partial \mu_j}\,\ln\left\langle e^{-\xi_i z_i} + e^{(1-\xi_i)z_i}\right\rangle; \tag{23}$$

Figure 3: The Markov blanket of unit S_i includes its parents and children, as well as the other parents of its children.

where z_i is the weighted sum of inputs to the ith unit. Note that K_ij is zero unless S_j is a parent of S_i; in other words, it has the same connectivity as the weight matrix J_ij. Within the mean field approximation, K_ij measures the parental influence of S_j on S_i given the instantiated evidence V. The degree of correlation (positive or negative) is measured relative to the other parents of S_i. The matrix elements of K may be evaluated by expanding the expectations as in eq.
(21); a full derivation is given in appendix B. Setting the gradient $\partial L_V/\partial \mu_i$ equal to zero gives the final mean field equation:

$$\mu_i = \sigma\Big(h_i + \sum_j\left[J_{ij}\mu_j + J_{ji}(\mu_j - \xi_j) + K_{ji}\right]\Big), \tag{24}$$

where σ(·) is the sigmoid function. The argument of the sigmoid function may be viewed as an effective input to the ith unit in the belief network. This effective input is composed of terms from the unit's Markov blanket (Pearl, 1988), shown in Figure 3; in particular, these terms take into account the unit's internal bias, the values of its parents and children, and, through the matrix K_ji, the values of its children's other parents. In solving these equations by iteration, the values of the instantiated units are propagated throughout the entire network. An analogous propagation of information occurs in exact algorithms (Lauritzen & Spiegelhalter, 1988) to compute likelihoods in belief networks.

While the factorized approximation to the true posterior is not exact, the mean field equations set the parameters $\{\mu_i\}_{i \in H}$ to values which make the approximation as accurate as possible. This in turn translates into the tightest mean field bound on the log-likelihood. The overall procedure for bounding the log-likelihood thus consists of two alternating steps: (i) update $\{\xi_i\}$ for fixed $\{\mu_i\}$; (ii) update $\{\mu_i\}_{i \in H}$ for fixed $\{\xi_i\}$. The first step involves N independent minimizations over the interval [0, 1]; the second is done by iterating the mean field equations. In practice, the steps are repeated until the mean field bound on the log-likelihood converges to a desired degree of accuracy.

The quality of the bound depends on two approximations: the complete factorizability of the mean field distribution, eq. (15), and the logarithm bound, eq. (19). How reliable are these approximations in belief networks? To study this question, we performed numerical experiments on the three layer belief network shown in Figure 4. The advantage of working with such a small network (2x4x6) is that true likelihoods can be computed by exact enumeration. We considered the particular event that all the units in the bottom layer were instantiated to zero. For this event, we compared the mean field bound on the likelihood to its true value, obtained by enumerating the states in the top two layers. This was done for 10000 random networks whose weights and biases were uniformly distributed between -1 and 1. Figure 5 (left) shows the histogram of the relative error in log-likelihood, computed as $L_V/\ln P(V) - 1$; for these networks, the mean relative error is 1.6%. Figure 5 (right) shows the histogram that results from assuming that all states in the bottom layer occur with equal probability; in this case the relative error was computed as $\ln 2^{-6}/\ln P(V) - 1$. For this "uniform" approximation, the root mean square relative error is 22.6%. The large discrepancy between these results suggests that mean field theory can provide a useful lower bound on the likelihood in certain belief networks. Of course, what ultimately matters is the behavior of mean field theory in networks that solve meaningful problems. This is the subject of the next section.
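Before turning to learning, the alternating procedure is compact enough to sketch. The following is an illustrative implementation, not the authors' code: the grid search over ξ stands in for the one-dimensional minimizations, the damping factor is a pragmatic stabilizer, and K is computed from the appendix B expression.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def avg_exp(a, i, J, h, mu):
    # <e^{a z_i}> under the factorized distribution, as in eq. (21);
    # factors with J[i, j] = 0 contribute 1, so only parents matter.
    return np.exp(a * h[i]) * np.prod(1.0 - mu + mu * np.exp(a * J[i]))

def mean_field(J, h, clamped, n_iter=50, damp=0.5):
    """Alternate (i) 1-D minimizations for xi and (ii) damped iteration of
    eq. (24). clamped maps unit index -> instantiated value (the evidence)."""
    n = len(h)
    mu = np.full(n, 0.5)
    for i, val in clamped.items():
        mu[i] = float(val)
    xi = np.full(n, 0.5)
    grid = np.linspace(0.0, 1.0, 101)
    for _ in range(n_iter):
        for i in range(n):  # tighten each xi_i on [0, 1]
            vals = [x * (J[i] @ mu + h[i]) +
                    np.log(avg_exp(-x, i, J, h, mu) + avg_exp(1.0 - x, i, J, h, mu))
                    for x in grid]
            xi[i] = grid[int(np.argmin(vals))]
        gamma = np.array([avg_exp(1.0 - xi[i], i, J, h, mu) /
                          (avg_exp(-xi[i], i, J, h, mu) +
                           avg_exp(1.0 - xi[i], i, J, h, mu)) for i in range(n)])
        for i in range(n):
            if i in clamped:
                continue
            # K_{ji} for all children j, from the appendix B expression
            K_col = ((1 - gamma) * (1 - np.exp(-xi * J[:, i])) /
                     (1 - mu[i] + mu[i] * np.exp(-xi * J[:, i])) +
                     gamma * (1 - np.exp((1 - xi) * J[:, i])) /
                     (1 - mu[i] + mu[i] * np.exp((1 - xi) * J[:, i])))
            eff = h[i] + J[i] @ mu + J[:, i] @ (mu - xi) + K_col.sum()
            mu[i] = (1 - damp) * mu[i] + damp * sigmoid(eff)  # eq. (24), damped
    return mu, xi

# Example: hidden roots 0, 1 feeding visible unit 2 (arbitrary weights), S_2 = 1.
J = np.array([[0, 0, 0], [0, 0, 0], [2.0, -1.0, 0]], float)
h = np.array([0.3, -0.2, 0.5])
print(mean_field(J, h, {2: 1}))
```

A damped update (damp < 1) is a common way to stabilize fixed-point iterations of this kind; with damp = 1 the update is exactly eq. (24).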
" }, { "figure_ref": [ "fig_5" ], "heading": "Learning", "publication_ref": [ "b10", "b10" ], "table_ref": [ "tab_0", "tab_1" ], "text": "One attractive use of sigmoid belief networks is to perform density estimation in high dimensional input spaces. This is a problem in parameter estimation: given a set of patterns over particular units in the belief network, find the set of weights J_ij and biases h_i that assign high probability to these patterns. Clearly, the ability to compute likelihoods lies at the crux of any algorithm for learning the parameters in belief networks.

Figure 6: Relationship between the true log-likelihood and its lower bound during learning. One possibility (at left) is that both increase together. The other is that the true log-likelihood decreases, closing the gap between itself and the bound. The latter can be viewed as a form of regularization.

Mean field algorithms provide a strategy for discovering appropriate values of J_ij and h_i without resort to Gibbs sampling. Consider, for instance, the following procedure. For each pattern in the training set, solve the mean field equations for {μ_i, ξ_i} and compute the associated bound on the log-likelihood, L_V. Next, adapt the weights in the belief network by gradient ascent5 in the mean field bound,

$$\Delta J_{ij} = \eta\,\frac{\partial L_V}{\partial J_{ij}}, \tag{25}$$
$$\Delta h_i = \eta\,\frac{\partial L_V}{\partial h_i}, \tag{26}$$

where η is a suitably chosen learning rate. Finally, cycle through the patterns in the training set, maximizing their likelihoods6 for a fixed number of iterations or until one detects the onset of overfitting (e.g., by cross-validation).

The above procedure uses a lower bound on the log-likelihood as a cost function for training belief networks (Hinton, Dayan, Frey, & Neal, 1995). The fact that we have a lower bound on the log-likelihood, rather than an upper bound, is of course crucial to the success of this learning algorithm. Adjusting the weights to maximize this lower bound can affect the true log-likelihood in two ways (see Figure 6). Either the true log-likelihood increases, moving in the same direction as the bound, or the true log-likelihood decreases, closing the gap between these two quantities. For the purposes of maximum likelihood estimation, the first outcome is clearly desirable; the second, though less desirable, can also be viewed in a positive light. In this case, the mean field approximation is acting as a regularizer, steering the network toward simple, factorial solutions even at the expense of lower likelihood estimates.

We tested this algorithm by building a maximum-likelihood classifier for images of handwritten digits. The data consisted of 11000 examples of handwritten digits [0-9] compiled by the U.S. Postal Service Office of Advanced Technology. The examples were preprocessed to produce 8x8 binary images, as shown in Figure 7. For each digit, we divided the available data into a training set with 700 examples and a test set with 400 examples. We then trained a three layer network7 (see Figure 4) on each digit, sweeping through each training set five times with learning rate η = 0.05.

5. Expressions for the gradients of L_V are given in appendix B. 6. Of course, one can also incorporate prior distributions over the weights and biases and maximize an approximation to the log posterior probability of the training set. 7. There are many possible architectures that could be chosen for the purpose of density estimation; we used layered networks to permit a comparison with previous benchmarks on this data set.

Table 1: Confusion matrix for digit classification. The entry in the ith row and jth column counts the number of times that digit i was classified as digit j.
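To make the loop concrete before detailing the architecture, here is a toy version (my own illustration, not the paper's code): it ascends the exact log-likelihood of a network small enough to enumerate, using finite-difference gradients for the biases in place of the analytic expressions of appendix B; at the sizes used in the experiments one ascends the bound L_V with eqs. (25)-(26) instead.

```python
import itertools
import numpy as np

def log_joint(s, J, h):
    z = J @ s + h
    return float(np.sum(z * s - np.log1p(np.exp(z))))  # ln P(S), eq. (6)

def log_likelihood(v, J, h, n_hidden):
    # Exact ln P(V) by summing over hidden configurations (eq. 8); only
    # feasible for tiny networks.
    total = sum(np.exp(log_joint(np.concatenate([np.array(hh, float), v]), J, h))
                for hh in itertools.product([0.0, 1.0], repeat=n_hidden))
    return float(np.log(total))

rng = np.random.default_rng(1)
n_hidden, n_visible = 2, 3
n = n_hidden + n_visible
J = np.tril(rng.uniform(-1, 1, (n, n)), k=-1)  # DAG: strictly lower-triangular
h = rng.uniform(-1, 1, n)
data = [np.array(p, float) for p in [(1, 0, 1), (1, 1, 1), (0, 0, 1)]]

eta, eps = 0.05, 1e-5
for sweep in range(5):  # five sweeps through the training set, as in the text
    for v in data:
        base = log_likelihood(v, J, h, n_hidden)
        grad_h = np.zeros(n)
        for i in range(n):  # finite-difference stand-in for eq. (26)
            h[i] += eps
            grad_h[i] = (log_likelihood(v, J, h, n_hidden) - base) / eps
            h[i] -= eps
        h += eta * grad_h  # the weight update, eq. (25), is analogous for J
    print(sweep, sum(log_likelihood(v, J, h, n_hidden) for v in data))
```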
The networks had 8 units in the top layer, 24 units in the middle layer, and 64 units in the bottom layer, making them far too large to be treated with exact methods.

After training, we classified the digits in each test set by the network that assigned them the highest likelihood. Table 1 shows the confusion matrix in which the ijth entry counts the number of times digit i was classified as digit j. There were 184 errors in classification (out of a possible 4000), yielding an overall error rate of 4.6%. Table 2 gives the performance of various other algorithms on the same partition of this data set. Table 3 shows the average log-likelihood score of each network on the digits in its test set. (Note that these scores are actually lower bounds.) These scores are normalized so that a network with zero weights and biases (i.e., one in which all 8x8 patterns are equally likely) would receive a score of -1. As expected, digits with relatively simple constructions (e.g., zeros, ones, and sevens) are more easily modeled than the rest.

Both measures of performance, error rate and log-likelihood score, are competitive with previously published results (Hinton, Dayan, Frey, & Neal, 1995) on this data set. The success of the algorithm affirms both the strategy of maximizing a lower bound and the utility of the mean field approximation. Though similar results can be obtained via Gibbs sampling, this seems to require considerably more computation than methods based on maximizing a lower bound (Frey, Dayan, & Hinton, 1995)." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b2", "b4", "b5", "b5", "b13" ], "table_ref": [], "text": "Endowing networks with probabilistic semantics provides a unified framework for incorporating prior knowledge, handling missing data, and performing inference under uncertainty. Probabilistic calculations, however, can quickly become intractable, so it is important to develop techniques that approximate probability distributions in a flexible manner. This is especially true for networks with multilayer architectures and large numbers of hidden units. Exact algorithms and Gibbs sampling methods are not generally practical for such networks; approximations are required.

In this paper we have developed a mean field approximation for sigmoid belief networks. As a computational tool, our mean field theory has two main virtues: first, it provides a tractable approximation to the conditional distributions required for inference; second, it provides a lower bound on the likelihoods required for learning.

The problem of computing exact likelihoods in belief networks is NP-hard (Cooper, 1990); the same is true for approximating likelihoods to within a guaranteed degree of accuracy (Dagum & Luby, 1993). It follows that one cannot establish universal guarantees for the accuracy of the mean field approximation. For certain networks, clearly, the mean field approximation is bound to fail: it cannot capture logical constraints or strong correlations between fluctuating units. Our preliminary results, however, suggest that these worst-case results do not apply to all belief networks. It is worth noting, moreover, that all the above qualifications apply to Markov networks, and that in this domain, mean field methods are already well established.

The idea of bounding the likelihood in sigmoid belief networks was introduced in a related architecture known as the Helmholtz machine (Hinton, Dayan, Neal, & Zemel, 1995).
The formalism in this paper differs in a number of respects from the Helmholtz machine. Most importantly, it enables one to compute a rigorous lower bound on the likelihood. This cannot be said for the wake-sleep algorithm (Frey, Hinton, & Dayan, 1995), which relies on sampling-based methods, or the heuristic approximation of Dayan et al. (1995), which does not guarantee a rigorous lower bound. Also, our mean field theory, which takes the place of the "recognition model" of the Helmholtz machine, applies generally to sigmoid belief networks with or without layered structure. Moreover, it places no restrictions on the locations of visible units; they may occur anywhere within the network, an important feature for handling problems with missing data. Of course, these advantages are not accrued without extra computational demands and more complicated learning rules.

In recent work that builds on the theory presented here, we have begun to relax the assumption of complete factorizability in eq. (15). In general, one would expect more sophisticated approximations to the Boltzmann distribution to yield tighter bounds on the log-likelihood. The challenge here is to find distributions that allow for correlations between hidden units while remaining computationally tractable. By tractable, we mean that the choice of Q(H|V) must enable one to evaluate (or at least upper bound) the right hand side of eq. (13). Extensions of this kind include mixture models (Jaakkola & Jordan, 1996) and/or partially factorized distributions (Saul & Jordan, 1995) that exploit the presence of tractable substructures in the original network. Our approach in this paper has been to work out the simplest mean field theory that is computationally tractable, but clearly better results will be obtained by tailoring the approximation to the problem at hand." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We are especially grateful to P. Dayan, G. Hinton, B. Frey, R. Neal, and H. Seung for sharing early versions of their manuscripts and for providing many stimulating discussions about this work. The paper was also improved greatly by the comments of several anonymous reviewers. To facilitate comparisons with similar methods, the results reported in this paper used images that were preprocessed at the University of Toronto. The authors acknowledge support from NSF grant CDA-9404932, ONR grant N00014-94-1-0777, ATR Research Laboratories, and Siemens Corporation." }, { "figure_ref": [], "heading": "Appendix A. Sigmoid versus Noisy-OR", "publication_ref": [ "b21", "b14" ], "table_ref": [], "text": "The semantics of the sigmoid function are similar, but not identical, to the noisy-OR gates (Pearl, 1988) more commonly found in the belief network literature. Noisy-OR gates use the weights in the network to represent independent causal events. In this case, the probability that unit S_i is activated is given by

$$P(S_i = 1 \mid \mathrm{pa}(S_i)) = 1 - \prod_j (1 - p_{ij})^{S_j}, \tag{27}$$

where p_ij is the probability that S_j = 1 causes S_i = 1 in the absence of all other causal events. If we define the weights of a noisy-OR belief network by θ_ij = -ln(1 - p_ij), it follows that

$$P(S_i = 1 \mid \mathrm{pa}(S_i)) = \bar\sigma\Big(\sum_j \theta_{ij} S_j\Big), \tag{28}$$

where

$$\bar\sigma(z) = 1 - e^{-z} \tag{29}$$

is the noisy-OR gating function. Comparing this to the sigmoid function, eq. (3), we see that both model P(S_i|pa(S_i)) as a monotonically increasing function of a sum of weighted inputs. The main difference is that in noisy-OR networks, the weights θ_ij are constrained to be positive by an underlying set of probabilities, p_ij.
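As a quick numerical check that the product form and the gating form agree, here is a small fragment with arbitrary illustrative causal probabilities (equation numbers refer to the reconstruction above):

```python
import numpy as np

def noisy_or_gate(z):
    return 1.0 - np.exp(-z)  # the gating function, eq. (29)

# Arbitrary causal probabilities for one unit with three parents.
p = np.array([0.9, 0.5, 0.2])
theta = -np.log(1.0 - p)             # eq. (28): theta_ij = -ln(1 - p_ij)
parents = np.array([1.0, 0.0, 1.0])

direct = 1.0 - np.prod((1.0 - p) ** parents)  # product form, eq. (27)
gated = noisy_or_gate(theta @ parents)        # gating form, eqs. (28)-(29)
print(direct, gated)  # both give 1 - (1 - 0.9)(1 - 0.2) = 0.92
```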
Recently, Jaakkola and Jordan (1996b) have developed a mean field approximation for noisy-OR belief networks." }, { "figure_ref": [], "heading": "Appendix B. Gradients", "publication_ref": [], "table_ref": [], "text": "Here we provide expressions for the gradients that appear in eqs. (23), (25) and (26). As usual, let $z_i = \sum_j J_{ij}S_j + h_i$ denote the sum of inputs into unit S_i. Under the factorial distribution, eq. (15), we can compute the averages:

$$\langle e^{-\xi_i z_i}\rangle = e^{-\xi_i h_i} \prod_j\left[1 - \mu_j + \mu_j e^{-\xi_i J_{ij}}\right], \tag{30}$$
$$\langle e^{(1-\xi_i) z_i}\rangle = e^{(1-\xi_i) h_i} \prod_j\left[1 - \mu_j + \mu_j e^{(1-\xi_i) J_{ij}}\right]. \tag{31}$$

For each unit in the network, let us define the quantity

$$\gamma_i = \frac{\langle e^{(1-\xi_i) z_i}\rangle}{\langle e^{-\xi_i z_i}\rangle + \langle e^{(1-\xi_i) z_i}\rangle}. \tag{32}$$

Note that γ_i lies between zero and one. With this definition, we can write the matrix elements in eq. (23) as:

$$K_{ij} = (1 - \gamma_i)\,\frac{1 - e^{-\xi_i J_{ij}}}{1 - \mu_j + \mu_j e^{-\xi_i J_{ij}}} + \gamma_i\,\frac{1 - e^{(1-\xi_i) J_{ij}}}{1 - \mu_j + \mu_j e^{(1-\xi_i) J_{ij}}}. \tag{33}$$

The gradients in eqs. (25) and (26) are found by similar means. For the weights, we have

$$\frac{\partial L_V}{\partial J_{ij}} = (\mu_i - \xi_i)\,\mu_j + (1 - \gamma_i)\,\frac{\xi_i\,\mu_j\,e^{-\xi_i J_{ij}}}{1 - \mu_j + \mu_j e^{-\xi_i J_{ij}}} - \gamma_i\,\frac{(1 - \xi_i)\,\mu_j\,e^{(1-\xi_i) J_{ij}}}{1 - \mu_j + \mu_j e^{(1-\xi_i) J_{ij}}}. \tag{34}$$

Likewise, for the biases, we have

$$\frac{\partial L_V}{\partial h_i} = \mu_i - \gamma_i. \tag{35}$$

Finally, we note that one may obtain simpler gradients at the expense of introducing a weaker bound than eq. (19). This can be advantageous when speed of computation is more important than the quality of the bound. All the experiments in this paper used the bound in eq. (19)." } ]
[ { "authors": "D Ackley; G Hinton; T Sejnowski", "journal": "Cognitive Science", "ref_id": "b0", "title": "A learning algorithm for Boltzmann machines", "year": "1985" }, { "authors": "W Buntine", "journal": "Journal of Arti cial Intelligence Research", "ref_id": "b1", "title": "Operations for learning with graphical models", "year": "1994" }, { "authors": "G Cooper", "journal": "Arti cial Intelligence", "ref_id": "b2", "title": "Computational complexity of probabilistic inference using Bayesian belief networks", "year": "1990" }, { "authors": "T Cover; J Thomas", "journal": "John Wiley & Sons", "ref_id": "b3", "title": "Elements of Information Theory", "year": "1991" }, { "authors": "P Dagum; M Luby", "journal": "Arti cial Intelligence", "ref_id": "b4", "title": "Approximately probabilistic reasoning in Bayesian belief networks is NP-hard", "year": "1993" }, { "authors": "P Dayan; G Hinton; R Neal; R Zemel", "journal": "Neural Computation", "ref_id": "b5", "title": "The Helmholtz machine", "year": "1995" }, { "authors": "A Dempster; N Laird; D Rubin", "journal": "Journal of the Royal Statistical Society B", "ref_id": "b6", "title": "Maximum likelihood from incomplete data via the EM algorithm", "year": "1977" }, { "authors": "B Frey; G Hinton; P Dayan", "journal": "", "ref_id": "b7", "title": "Does the wake-sleep algorithm learn good density estimators", "year": "1995" }, { "authors": "S Geman; D Geman", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b8", "title": "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images", "year": "1984" }, { "authors": "J Hertz; A Krogh; R G Palmer", "journal": "Addison-Wesley", "ref_id": "b9", "title": "Introduction to the Theory of Neural Computation", "year": "1991" }, { "authors": "G Hinton; P Dayan; B Frey; R Neal", "journal": "Science", "ref_id": "b10", "title": "The wake-sleep algorithm for unsupervised neural networks", "year": "1995" }, { "authors": "C Itzykson; J M Drou E", "journal": "Cambridge University Press", "ref_id": "b11", "title": "Statistical Field Theory", "year": "1991" }, { "authors": "T Jaakkola; L Saul; M Jordan", "journal": "", "ref_id": "b12", "title": "Fast learning by bounding likelihoods in sigmoid-type belief networks", "year": "1995" }, { "authors": "T Jaakkola; M Jordan", "journal": "", "ref_id": "b13", "title": "Mixture model approximations for belief networks", "year": "1996" }, { "authors": "T Jaakkola; M Jordan", "journal": "", "ref_id": "b14", "title": "Computing upper and lower bounds on likelihoods in intractable networks", "year": "1996" }, { "authors": "C S Jensen; A Kong; U Kjaerul", "journal": "International Journal of Human Computer Studies", "ref_id": "b15", "title": "Blocking Gibbs sampling in very large probabilistic expert systems", "year": "1995" }, { "authors": "S Lauritzen; D Spiegelhalter", "journal": "Journal of the Royal Statistical Society B", "ref_id": "b16", "title": "Local computations with probabilities on graphical structures and their application to expert systems", "year": "1988" }, { "authors": "P Mccullagh; J A Nelder", "journal": "Chapman and Hall", "ref_id": "b17", "title": "Generalized Linear Models", "year": "1983" }, { "authors": "R Neal", "journal": "Arti cial Intelligence", "ref_id": "b18", "title": "Connectionist learning of belief networks", "year": "1992" }, { "authors": "R Neal; G Hinton", "journal": "", "ref_id": "b19", "title": "A new view of the EM algorithm that justi es incremental and other variants", 
"year": "1993" }, { "authors": "G Parisi", "journal": "Addison-Wesley", "ref_id": "b20", "title": "Statistical Field Theory", "year": "1988" }, { "authors": "J Pearl", "journal": "Morgan Kaufmann", "ref_id": "b21", "title": "Probabilistic Reasoning in Intelligent Systems", "year": "1988" }, { "authors": "C Peterson; J R Anderson", "journal": "Complex Systems", "ref_id": "b22", "title": "A mean eld theory learning algorithm for neural networks", "year": "1987" }, { "authors": "W H Press; B P Flannery; S A Teukolsky; W T Vetterling", "journal": "Cambridge University Press", "ref_id": "b23", "title": "Numerical Recipes", "year": "1986" }, { "authors": "S Russell; J Binder; D Koller; K Kanazawa", "journal": "", "ref_id": "b24", "title": "Local learning in probabilistic networks with hidden variables", "year": "1995" }, { "authors": "L Saul; M Jordan", "journal": "", "ref_id": "b25", "title": "Exploiting tractable substructures in intractable networks", "year": "1995" }, { "authors": "H Seung", "journal": "World Scienti c", "ref_id": "b26", "title": "Annealed theories of learning", "year": "1995" } ]
[ { "formula_coordinates": [ 2, 216.72, 516.24, 176.88, 46.44 ], "formula_id": "formula_0", "formula_text": "P (S i = 1jpa(S i )) = 0 @ X j J ij S j + h i 1 A ;" }, { "formula_coordinates": [ 2, 278.4, 591.84, 243.6, 28.08 ], "formula_id": "formula_1", "formula_text": "(z) = 1 1 + e z (3)" }, { "formula_coordinates": [ 3, 214.32, 332.88, 307.68, 49.56 ], "formula_id": "formula_2", "formula_text": "P (S i jpa(S i )) = exp h P j J ij S j + h i S i i 1 + exp h P j J ij S j + h i i :(4)" }, { "formula_coordinates": [ 3, 206.64, 409.68, 315.36, 76.2 ], "formula_id": "formula_3", "formula_text": "P (S) = Y i P (S i jpa(S i )) (5) = Y i 8 < : exp h P j J ij S j + h i S i i 1 + exp h P j J ij S j + h i i 9 = ; :(6)" }, { "formula_coordinates": [ 3, 90, 606.24, 432, 36.36 ], "formula_id": "formula_4", "formula_text": "where P (V ) = X H P (H; V ) (8)" }, { "formula_coordinates": [ 4, 214.08, 485.52, 307.92, 67.56 ], "formula_id": "formula_5", "formula_text": "ln P (V ) = ln X H P (H; V ) (9) = ln X H Q(HjV ) P (H; V ) Q(HjV ) :(10)" }, { "formula_coordinates": [ 4, 224.88, 583.68, 297.12, 36.12 ], "formula_id": "formula_6", "formula_text": "ln P (V ) X H Q(HjV ) ln P (H; V ) Q(HjV ) :(11)" }, { "formula_coordinates": [ 4, 219.84, 651.12, 302.16, 36.12 ], "formula_id": "formula_7", "formula_text": "KL(QjjP) = X H Q(HjV ) ln Q(HjV ) P (HjV ) : (12)" }, { "formula_coordinates": [ 5, 150.72, 137.04, 371.28, 43.8 ], "formula_id": "formula_8", "formula_text": "ln P (V ) X H Q(HjV ) lnQ(HjV ) ! X H Q(HjV ) lnP(H; V ) ! :(13)" }, { "formula_coordinates": [ 5, 145.44, 258, 376.56, 46.44 ], "formula_id": "formula_9", "formula_text": "lnP(H; V ) = X ij J ij S i S j X i h i S i + X i ln 2 4 1 + exp 0 @ X j J ij S j + h i 1 A 3 5 ; (14)" }, { "formula_coordinates": [ 5, 238.08, 420, 136.08, 36.12 ], "formula_id": "formula_10", "formula_text": "Q(HjV ) = Y i2H Si i (1 i ) 1 Si ;" }, { "formula_coordinates": [ 6, 168.96, 115.2, 353.04, 67.56 ], "formula_id": "formula_11", "formula_text": "P (V ) X ij J ij i j + X i h i i X i ln 1 + e P j Jij Sj +hi (16) X i i ln i + (1 i ) ln(1 i )] ;" }, { "formula_coordinates": [ 7, 144, 476.16, 378, 77.4 ], "formula_id": "formula_14", "formula_text": "L V = X ij J ij i j + X i h i i X i i 0 @ X j J ij j + h i 1 A (22) X i ln D e izi + e (1 i)zi E + X i i ln i + (1 i ) ln(1 i )] ;" }, { "formula_coordinates": [ 7, 228, 673.2, 289.56, 33.96 ], "formula_id": "formula_15", "formula_text": "K ij = @ @ j ln D e i zi + e (1 i)zi E ; (23" }, { "formula_coordinates": [ 7, 517.56, 686.4, 4.44, 14.4 ], "formula_id": "formula_16", "formula_text": ") S i" }, { "formula_coordinates": [ 8, 207.6, 361.92, 314.4, 46.44 ], "formula_id": "formula_17", "formula_text": "i = 0 @ h i + X j J ij j + J ji ( j j ) + K ji ] 1 A ;(24)" }, { "formula_coordinates": [ 10, 275.04, 376.44, 246.96, 51.6 ], "formula_id": "formula_18", "formula_text": "J ij = @L V @J ij (25) h i = @L V @h i ; (26" }, { "formula_coordinates": [ 10, 517.56, 407.28, 4.44, 14.4 ], "formula_id": "formula_19", "formula_text": ")" } ]
Mean Field Theory for Sigmoid Belief Networks
We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition: the classification of handwritten digits.
Lawrence K Saul; Tommi Jaakkola
[ { "figure_caption": "Figure 1 :1Figure 1: Sigmoid function (z) = 1 + e z ] 1 . If z is the sum of weighted inputs to node S, then P (S = 1jz) = (z) is the conditional probability that node S is activated.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "bound the right hand side by applying Jensen's inequality in the opposite direction as before, pulling the logarithm outside the expectation: hln 1 + e z ]i hzi + ln D e z + e (1 )z E :", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Bound in eq. (19) for the case where z is normally distributed with zero mean and", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :Figure 5 :45Figure 4: Three layer belief network (2x4x6) with top-down propagation of beliefs. To model the images of handwritten digits in section 4, we used 8x24x64 networks where units in the bottom layer encoded pixel values in 8x8 bitmaps.", "figure_data": "", "figure_id": "fig_3", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Binary images of handwritten digits: two and ve.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "). Classi cation error rates for the data set of handwritten digits. The rst three were reported byHinton et al (1995).", "figure_data": "algorithm nearest neighbor back-propagation wake-sleep mean eldclassi cation error 6.7% 5.6% 4.8% 4.6%digit log-likelihood score 0 -0.447 1 -0.296 2 -0.636 3 -0.583 4 -0.574 5 -0.565 6 -0.515 7 -0.434 8 -0.569 9 -0.495 all -0.511", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Normalized log-likelihood score for each network on the digits in its test set. To obtain the raw score, multiply by 400 64 ln2. The last row shows the score averaged across all digits.", "figure_data": "", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b0", "b3" ], "table_ref": [], "text": "Most empirical learning systems are given a set of pre-classi ed cases, each described by a vector of attribute values, and construct from them a mapping from attribute values to classes. The attributes used to describe cases can be grouped into continuous attributes, whose values are numeric, and discrete attributes with unordered nominal values. For example, the description of a person might include weight in kilograms, with a value such as 73.5, and color of eyes whose value is one of `brown', `blue', etc.\nC4.5 (Quinlan, 1993) is one such system that learns decision-tree classi ers. Several authors have recently noted that C4.5's performance is weaker in domains with a preponderance of continuous attributes than for learning tasks that have mainly discrete attributes. For example, Auer, Holte, and Maass (1995) describe T2, a system that searches for good two-level decision trees, and comment: \\The accuracy of T2's trees rivalled or surpassed C4.5's on 8 of the 15] datasets, including all but one of the datasets having only continuous attributes.\"\nDiscussing the e ect of replacing continuous attributes by discrete attributes, each of whose values corresponds to an interval of the continuous attribute, Dougherty, Kohavi, and Sahami (1995) write: \\C4.5's performance was signi cantly improved on two datasets : : : using the entropy discretization method and did not signi cantly degrade on any dataset.\n: : : We conjecture that the C4.5 induction algorithm is not taking full advantage of possible local discretization.\"\nThis paper explores a new version of C4.5 that changes the relative desirability of using continuous attributes. Section 2 sketches the current system, while the following section describes the modi cations. Results from a comprehensive set of trials, reported in Section 4, show that the modi cations lead to trees that are both smaller and more accurate. Section 5 compares the performance of the new version to results obtained with the two alternative methods of exploiting continuous attributes quoted above." }, { "figure_ref": [], "heading": "Constructing Decision Trees", "publication_ref": [ "b8", "b11", "b4" ], "table_ref": [], "text": "C4.5 uses a divide-and-conquer approach to growing decision trees that was pioneered by Hunt and his co-workers (Hunt, Marin, & Stone, 1966). Only a brief description of the method is given here; see Quinlan (1993) for a more complete treatment.\nThe following algorithm generates a decision tree from a set D of cases:\nIf D satis es a stopping criterion, the tree for D is a leaf associated with the most frequent class in D. One reason for stopping is that D contains only cases of this class, but other criteria can also be formulated (see below). Some test T with mutually exclusive outcomes T 1 ; T 2 ; : : : ; T k is used to partition D into subsets D 1 ; D 2 ; : : : ; D k , where D i contains those cases that have outcome T i . The tree for D has test T as its root with one subtree for each outcome T i that is constructed by applying the same procedure recursively to the cases in D i .\nProvided that there are no cases with identical attribute values that belong to di erent classes, any test T that produces a non-trivial partition of D will eventually lead to singleclass subsets as above. 
However, in the expectation that smaller trees are preferable (being easier to understand and often more accurate predictors), a family of possible tests is examined and one of them chosen to maximize the value of some splitting criterion. The default tests considered by C4.5 are:\nA = ? for a discrete attribute A, with one outcome for each value of A.\nA <= t for a continuous attribute A, with two outcomes, true and false. To find the threshold t that maximizes the splitting criterion, the cases in D are sorted on their values of attribute A to give ordered distinct values v_1, v_2, ..., v_N. Every pair of adjacent values suggests a potential threshold t = (v_i + v_{i+1})/2 and a corresponding partition of D.(1) The threshold that yields the best value of the splitting criterion is then selected.\n[Footnote 1: Fayyad and Irani (1992) prove that, for convex splitting criteria such as information gain, it is not necessary to examine all such thresholds. If all cases with value v_i and with adjacent value v_{i+1} belong to the same class, a threshold between them cannot lead to a partition that has the maximum value of the criterion.]\nThe default splitting criterion used by C4.5 is gain ratio, an information-based measure that takes into account different numbers (and different probabilities) of test outcomes. Let C denote the number of classes and p(D, j) the proportion of cases in D that belong to the jth class. The residual uncertainty about the class to which a case in D belongs can be expressed as
$$Info(D) = -\sum_{j=1}^{C} p(D, j) \log_2 (p(D, j)),$$
and the corresponding information gained by a test T with k outcomes as
$$Gain(D, T) = Info(D) - \sum_{i=1}^{k} \frac{|D_i|}{|D|}\, Info(D_i).$$
The information gained by a test is strongly affected by the number of outcomes and is maximal when there is one case in each subset D_i. On the other hand, the potential information obtained by partitioning a set of cases is based on knowing the subset D_i into which a case falls; this split information
$$Split(D, T) = -\sum_{i=1}^{k} \frac{|D_i|}{|D|} \log_2 \frac{|D_i|}{|D|}$$
tends to increase with the number of outcomes of a test. The gain ratio criterion assesses the desirability of a test as the ratio of its information gain to its split information. The gain ratio of every possible test is determined and, among those with at least average gain, the split with maximum gain ratio is selected.\nIn some situations, every possible test splits D into subsets that have the same class distribution. All tests then have zero gain, and C4.5 uses this as an additional stopping criterion.\nThe recursive partitioning strategy above results in trees that are consistent with the training data, if this is possible. In practical applications data are often noisy: attribute values are incorrectly recorded and cases are misclassified. Noise leads to overly complex trees that attempt to account for these anomalies. Most systems prune the initial tree, identifying subtrees that contribute little to predictive accuracy and replacing each by a leaf." }, { "figure_ref": [], "heading": "Modified Assessment of Continuous Attributes", "publication_ref": [ "b13", "b12" ], "table_ref": [], "text": "We return now to the selection of a threshold for a continuous attribute A. If there are N distinct values of A in the set of cases D, there are N - 1 thresholds that could be used for a test on A. Each threshold gives unique subsets D_1 and D_2 and so the value of the splitting criterion is a function of the threshold.
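As a concrete illustration of the criteria just defined, the following sketch (our illustration, not C4.5's source) computes Info(D) and searches the N - 1 candidate thresholds of a continuous attribute for the one with maximum information gain; the Fayyad-Irani shortcut of footnote 1 is omitted for clarity.

```python
import numpy as np
from collections import Counter

def info(labels):
    """Residual class uncertainty Info(D), in bits."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def best_threshold(values, labels):
    """Try every midpoint of adjacent distinct values for a test
    'A <= t' and return (threshold, information gain)."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    base, n = info(labels), len(labels)
    distinct = np.unique(values)
    best_t, best_gain = None, -np.inf
    for v_lo, v_hi in zip(distinct[:-1], distinct[1:]):
        t = (v_lo + v_hi) / 2.0
        left, right = labels[values <= t], labels[values > t]
        gain = base - len(left) / n * info(left) - len(right) / n * info(right)
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain
```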
The ability to choose the threshold t so as to maximize this value gives a continuous attribute A an advantage over a discrete attribute (which has no similar parameter that adjusts the partition of D), and also over other continuous attributes that have fewer distinct values in D. That is, the choice of a test will be biased towards continuous attributes with numerous distinct values. This paper proposes a correction for this bias that consists of two modifications to C4.5. The first of these, inspired by the Minimum Description Length principle (Rissanen, 1983), adjusts the apparent information gain from a test of a continuous attribute. Discussion of this change is prefaced by a brief introduction to MDL.\nFollowing Quinlan and Rivest (1989), let a sender and a receiver both possess an ordered list of the cases in the training data showing each case's attribute values. The sender also knows the class to which each case belongs and must transmit this information to the receiver. He or she first encodes and sends a theory of how to classify the cases. Since this theory might be imperfect, the sender must also identify the exceptions to the theory that occur in the training cases and state how their classes predicted by the theory should be corrected. The total length of the transmission is thus the number of bits required to encode the theory (the theory cost) plus the bits needed to identify and correct the exceptions (the exceptions cost). The sender may have a choice among several alternative theories, some being simple but leaving many errors to be corrected while others are more elaborate but more accurate. The MDL principle may then be stated as: Choose the theory that minimizes the sum of the theory and exceptions costs.\nMDL thus provides a framework for trading off the complexity of a theory against its accuracy on the training data D. The exceptions cost associated with a set of cases D is asymptotically equivalent to |D| x Info(D), so that |D| x Gain(D, T) measures the reduction in exceptions cost when D is partitioned by a test T. Partitioning D in this way, however, requires transmission of a more complex theory that includes the definition of T. Whereas a test A = ? on a discrete attribute A can be specified by nominating the attribute involved, a test A <= t must also include the threshold t; if there are N - 1 possible thresholds for A, this will take an additional log_2(N - 1) bits. The first modification is to \"charge\" this increased cost associated with a test on a continuous attribute to the apparent gain achieved by the test, so reducing the (per-case) information Gain(D, T) by log_2(N - 1)/|D|.\nA test on a continuous attribute with numerous distinct values will now be less likely to have the maximum value of the splitting criterion among the family of possible tests, and so is less likely to be selected. Further, if all thresholds t on a continuous attribute A have an adjusted gain that is less than zero, attribute A is effectively ruled out. The consequences of this first change are thus a re-ranking of potential tests and the possible exclusion of some of them.\nThe second change is much more straightforward. Recall that the gain ratio criterion divides the apparent gain by the information available from a split. This latter varies as a function of the threshold t and is maximal when there are as many cases above t as below.
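Under our reading, the first modification amounts to a one-line adjustment on top of the threshold search sketched earlier. The following is a hedged illustration, not the Rel 8 source:

```python
import numpy as np

def penalized_assessment(values, labels):
    """Choose t by gain (the second modification), then charge the
    MDL cost log2(N - 1)/|D| of transmitting the threshold (the
    first modification). Builds on best_threshold() above."""
    t, gain = best_threshold(values, labels)
    n_distinct = len(np.unique(values))
    if t is None or n_distinct < 2:
        return None, -np.inf
    adjusted = gain - np.log2(n_distinct - 1) / len(labels)
    # adjusted < 0 for every threshold effectively rules the attribute out
    return t, adjusted
```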
If the gain ratio criterion is used to select t, the effect of the penalty described above will also vary with t, having the least impact when t divides the cases equally. This seems to be an unnecessary complication, so the threshold t is chosen instead to maximize gain.\nOnce the threshold is chosen, however, the final selection of the attribute to be used for the test is still made on the basis of the gain ratio criterion using the adjusted gain." }, { "figure_ref": [], "heading": "Empirical Evaluation", "publication_ref": [], "table_ref": [], "text": "The effects of these changes were assessed empirically in a series of \"before and after\" experiments with a substantial number of learning tasks. Release 7 of C4.5 (abbreviated here as Rel 7) was modified as above to produce a new version (Rel 8). Both systems were applied to twenty databases from the UCI Repository that involve continuous attributes, either alone or in combination with discrete attributes. A summary of the characteristics of these data sets appears in Appendix A. In all the following experiments, both versions of C4.5 were run with the same default settings for all parameters; no attempt was made to tune either system for these tasks." }, { "figure_ref": [], "heading": "Initial experiments", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Table 1 displays the results of the first trials, consisting of ten complete ten-fold cross-validations with each task. The figure shown for each system is the mean of the ten cross-validation results where the error rates and tree sizes refer to C4.5's pruned trees; the standard error of this mean appears in small font. The column headed 'w/d/l' shows the number of complete cross-validations in which Rel 8 gives a lower error rate, the same error rate, or a higher error rate than Rel 7. The figures under 'ratio' present results for Rel 8 divided by the corresponding figure for Rel 7.\nAs the overall averages at the foot of the table indicate, the trees produced by Rel 8 in these trials are 4% more accurate and 12% smaller than those generated by Rel 7. Rel 8 is less accurate than Rel 7 on only four of the twenty tasks; for the smallest data set (labor, with 57 cases), however, the trees produced by Rel 8 are substantially less accurate. The pruned trees generated by Rel 8 for some tasks are a great deal smaller than their Rel 7 counterparts: diabetes is a particularly notable example.\nI do not recommend the use of the unpruned trees constructed initially by C4.5 but, for the sake of completeness, the corresponding figures for the unpruned trees were also determined. The average ratio of the error rate of Rel 8 to that of Rel 7 is 0.95, while the ratio of tree size is 0.94. For the unpruned trees, then, the modifications incorporated in Rel 8 lead to a 5% reduction in error and a 6% reduction in the size." }, { "figure_ref": [], "heading": "Adding irrelevant attributes", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In practical applications, it is unlikely that an analyst would knowingly add irrelevant attributes to the data! However, even an attribute that is relevant for some parts of the tree might be quite irrelevant for others. The bias towards continuous attributes inherent in Rel 7 implies that the system should occasionally select a test on an irrelevant continuous attribute in preference to tests on relevant discrete attributes.\nTo explore this potential deficiency, the twenty data sets were modified by adding irrelevant attributes.
Ten of these were continuous attributes, each having uniformly distributed random values x, 0 <= x < 1. (Since only the order of values of a continuous attribute is important, the distribution of these values does not matter: use of another distribution such as the Gaussian N(0, 1) should produce comparable results.) As Kohavi (personal communication, 1995) points out, it is unfair to compare Rel 8 to Rel 7 on data sets to which only irrelevant continuous-valued attributes have been added, since the modifications incorporated in Rel 8 make it less likely to choose tests involving any continuous attributes. To circumvent this problem, a further ten discrete attributes with ten equiprobable values were added, giving twenty irrelevant attributes in all. The experiments above were repeated on the enlarged data sets, with the results shown in Table 2.\nThese results highlight the effects of the modifications implemented in Rel 8. Addition of irrelevant attributes increases the error of the Rel 7 trees by an average of 12%, but has a much smaller impact on those produced by Rel 8. The head-to-head comparison on the altered data sets, presented in the table, shows that the pruned trees found by Rel 8 have 10% lower error on average, and are also a great deal smaller. Any split on a random continuous attribute is unlikely to generate sufficient gain to \"pay for\" the threshold, so such tests will tend to be filtered out by Rel 8 but not by Rel 7. Consequently, Rel 7 is more prone to split the data (uselessly) on a random attribute, leading to larger trees and higher error rates." }, { "figure_ref": [], "heading": "Ablation experiments", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "The effects of the modifications implemented in Rel 8 can be factored into choosing (slightly) different thresholds using gain rather than gain ratio, excluding attributes for which no threshold gives sufficient gain to offset the penalty, and re-ranking potential tests by penalizing those that involve continuous attributes. To ascertain the contributions of each, two intermediate versions of C4.5 were constructed:\n7G differs from Rel 7 only in that the threshold t is chosen to maximize information gain rather than gain ratio;\n7GS also chooses thresholds on gain; if the gain of the best threshold is less than the penalty log_2(N - 1)/|D|, however, the test is excluded.\nThe only difference between 7GS and Rel 8 is the latter's application of the penalty when determining the relative desirability of possible tests. The trials were repeated using the same cross-validation blocks as before for these intermediate versions. Average error rates, tree sizes, and ratios (again computed with respect to Rel 7) are presented in Table 3 and summarized graphically in Figure 1.\nSelection of thresholds by gain rather than gain ratio (7G) has very little impact: the average error rate and tree size ratios with respect to Rel 7 are both very close to one. There are non-trivial changes for some tasks, however; for instance, the error rate on the segment data is considerably lower and the trees found for the breast-w task are noticeably larger. Use of the penalty to filter tests on continuous attributes (7GS) produces more noticeable differences. Ruling out some tests on continuous attributes accounts for most of the reduction in tree size observed with Rel 8. In some cases, the trees are markedly smaller: for the diabetes data, the 7GS trees are on average only one-third of the size of those produced by Rel 7.
This change also accounts for about half of Rel 8's improvement in error rate, the diabetes data again providing the greatest change from 7G.\nFinally, the use by Rel 8 of the penalty to re-rank the attributes yields a further improvement in error rate and a small decrease in average tree size. This re-ranking may be beneficial even when all attributes are continuous: the average error rate of Rel 8 is about 1% lower than that of 7GS on the nine tasks of this kind, in only two of which does 7GS give a lower error rate than Rel 8." }, { "figure_ref": [], "heading": "Related Research", "publication_ref": [ "b3", "b2", "b5" ], "table_ref": [], "text": "This section examines the two alternative methods for utilizing continuous attributes that were mentioned in the introduction, and compares them empirically with C4.5 Rel 8. Dougherty et al. (1995) consider various ways of converting a continuous attribute to a discrete one by dividing its values into intervals, each of which becomes a separate value for the replacement discrete attribute. The method found to give the best results, entropy discretization, was first investigated by Catlett (1991) as a means of reducing the time required to construct a tree. Fayyad and Irani (1993) subsequently introduced a clever refinement that led to the final form used by Dougherty et al. and in the experiments reported here." }, { "figure_ref": [], "heading": "Global discretization", "publication_ref": [], "table_ref": [ "tab_3", "tab_3" ], "text": "To find the set of intervals, the training cases are first sorted on the value of the continuous attribute in question. The procedure outlined in Section 2 is used to find the threshold t that maximizes information gain. The same process is repeated for the corresponding subsets of cases with attribute values below and above t. (Since the cases are not reordered, they need not be re-sorted, and this is the source of the reduced learning times.) If w thresholds are found, the continuous attribute is mapped to a discrete attribute with w+1 values, one for each interval.\nSome stopping criterion is required to prevent this process from resulting in a very large number of intervals (which could become as numerous as the training cases if all values of the attribute are distinct). Catlett uses a four-pronged heuristic criterion, but Fayyad and Irani developed an elegant test based on the MDL principle (Section 3). They view a discretization rule as a classifying theory that uses a single attribute and that associates a class with each interval. Introduction of an additional threshold, increasing the complexity of the discretization rule, is allowed only if the greater theory coding cost is more than offset by the consequent reduction in the exceptions cost. This scheme generally leads to few thresholds in regions where the cases' classes do not vary much and to finer divisions when required.\nSimilar experiments to those described by Dougherty et al. were carried out on the learning tasks of Section 4. In each trial, the training data are used to find discretization rules to convert every continuous attribute to a discrete attribute. C4.5(4) is invoked to find a tree that is evaluated on the test data, using the same discretization intervals found from the training data. As before, each data set is subjected to ten cross-validations using the same blocks of cases as previously.\nResults of these trials, summarized in Table 4, show that the comments of Dougherty et al. quoted in the introduction do not apply to Rel 8.
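A rough sketch of the recursive entropy discretization described above (our illustration): a simple depth cap and a positive-gain check stand in for Catlett's heuristics and the Fayyad-Irani MDL test, whose details the text only summarizes.

```python
import numpy as np

def entropy_discretize(values, labels, max_depth=4):
    """Recursively split on the gain-maximizing threshold; returns a
    sorted list of thresholds defining w + 1 intervals. Reuses
    best_threshold() from the earlier sketch."""
    values, labels = np.asarray(values, float), np.asarray(labels)
    thresholds = []

    def split(mask, depth):
        if depth == 0 or len(np.unique(values[mask])) < 2:
            return
        t, gain = best_threshold(values[mask], labels[mask])
        if t is None or gain <= 0:         # stand-in for the MDL test
            return
        thresholds.append(t)
        split(mask & (values <= t), depth - 1)
        split(mask & (values > t), depth - 1)

    split(np.ones(len(values), dtype=bool), max_depth)
    return sorted(thresholds)
```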
Discretization leads to improved accuracy on eight of the tasks and to a degradation on 12 of them. Most of the improvements are modest, however, while several tasks exhibit a marked increase in error; the average value of the error ratio indicates a strong advantage for the local threshold selection employed in Rel 8 over the global thresholding used by discretization. Kohavi (personal communication, 1996) suggests that there might be a \"middle ground\" in which thresholds are determined locally until the subsets of cases are relatively small, at which point subsequent possible thresholds would be found using the discretization strategy above. Evidence in support of this idea is provided by Figure 2 where, for each task, the error ratio that appears in Table 4 is plotted against the size of the data set (on a logarithmic scale). [Footnote 4: Since there are no continuous attributes, Rel 7 and Rel 8 give identical results on these discretized tasks.] The clear trend shows that global discretization degrades performance more as data sets become larger, but can be beneficial for tasks with fewer cases." }, { "figure_ref": [], "heading": "Multi-threshold splits", "publication_ref": [ "b0", "b7" ], "table_ref": [ "tab_4" ], "text": "In contrast, T2 (Auer et al., 1995) determines thresholds locally but allows the values of a continuous attribute to be partitioned into multiple intervals. These intervals are not found heuristically by a recursive application of binary splitting, as above. Instead, a more thorough exploration is carried out to find the set of up to m intervals that minimizes error on the training set. (The default value of m is C+1 where there are C classes in the data.) Search for these intervals is expensive, so T2 restricts decision trees to two levels of tests (in the spirit of one-level decision \"stumps\" described by Holte, 1993) where only the second level employs non-binary splits of continuous attributes. Within this restricted theory language, however, T2 is guaranteed to find a tree that misclassifies as few of the training cases as possible.\nEven so, the computational cost of T2 using the default value of m is proportional to $C^4 (C+1)^2 a^2$, where a is the number of attributes (Auer, personal communication, 1996). For example, the time required to process the small auto data set with six classes and 25 attributes is four orders of magnitude greater than that needed by C4.5. This effectively rules out trials of T2 on some of the learning tasks used above, specifically those with more than four classes. For the remaining 14 tasks, experiments following the same pattern as before and using the same cross-validation blocks were carried out and are reported in Table 5. T2 produces trees with error rates much lower than those generated by Rel 8 on two tasks, slightly lower on two more, and higher on the remaining ten. As reflected in the average error ratio, the trials still favor C4.5 Rel 8 overall. (Had it been possible to run the tasks with larger numbers of classes, T2's restricted theory language would perhaps have caused an even more noticeable increase in error.) It is worth noting that T2's trees are much smaller than those found by C4.5: less than half the size, on average. This is despite the fact that tests in T2 have one more outcome (for unknown values) than the corresponding tests in C4.5."
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b9", "b10", "b1", "b6", "b11" ], "table_ref": [], "text": "The results of Section 4 show that the straightforward changes to C4.5's use of continuous attributes lead to an overall improvement in its performance on the twenty learning tasks investigated here.(5) The pruned trees are substantially smaller and somewhat more accurate, especially in the presence of irrelevant attributes. As the tasks are a representative selection from those in the UCI Repository that involve continuous attributes, similar learning tasks should also benefit. Of course, C4.5's performance on domains with continuous attributes can also be improved in other complementary ways, such as by selecting attributes (John, Kohavi, & Pfleger, 1994), exploring the space of parameter settings (Kohavi & John, 1995), or generating multiple classifiers (Breiman, 1996; Freund & Schapire, 1996).\nComparisons with a well-known global discretization scheme, and with a system that carries out a thorough search over the space of two-level decision trees, also favor the modified C4.5. However, both suggest further ways in which the system might be improved. Non-binary splits on continuous attributes make the trees easier to understand and also seem to lead to more accurate trees in some domains. It would also be interesting to investigate Kohavi's suggestion to use discretization within a tree when the local number of training cases is small.\nOn another tack, C4.5 has an option that affects tests on discrete attributes. Instead of the default, in which each value of the attribute is associated with a separate subtree, the values are grouped into subsets and one tree formed for each subset. Many possible subsets are explored, just as many possible thresholds for a continuous attribute are considered. The argument for the application of a penalty to tests on continuous attributes would seem to apply also to such subset tests.\n[Footnote 5: The files necessary to update C4.5 Release 5 (available with Quinlan, 1993) to the new Release 8 can be obtained by anonymous ftp from ftp.cs.su.oz.au, file pub/ml/patch.tar.Z.]" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research was made possible by a grant from the Australian Research Council. The T2 system was programmed by Peter Auer and made available for these comparisons by Rob Holte. Thanks to Thierry Van de Merckt for comments on the results that led to the ablation experiments. I am also grateful for suggestions regarding the paper's content and presentation made by Ron Kohavi, Usama Fayyad, and Pat Langley. The UCI Data Repository owes its existence to David Aha and Patrick Murphy. The breast cancer data (breast-w) was provided to the Repository by Dr William H. Wolberg." }, { "figure_ref": [], "heading": "Appendix A. Description of learning tasks", "publication_ref": [], "table_ref": [], "text": "" } ]
[ { "authors": "P Auer; R C Holte; W Maass", "journal": "Morgam Kaufmann", "ref_id": "b0", "title": "Theory and application of agnostic paclearning with small decision trees", "year": "1995" }, { "authors": "L Breiman", "journal": "", "ref_id": "b1", "title": "Bagging predictors. Machine Learning", "year": "1996" }, { "authors": "J Catlett", "journal": "Springer Verlag", "ref_id": "b2", "title": "On changing continuous attributes into ordered discrete attributes", "year": "1991" }, { "authors": "J Dougherty; R Kohavi; M Sahami", "journal": "Morgan Kaufmann", "ref_id": "b3", "title": "Supervised and unsupervised discretization of continuous features", "year": "1995" }, { "authors": "U M Fayyad; K B Irani", "journal": "Machine Learning", "ref_id": "b4", "title": "On the handling of continuous-valued attributes in decision tree generation", "year": "1992" }, { "authors": "U M Fayyad; K B Irani", "journal": "Morgan Kaufmann", "ref_id": "b5", "title": "Multi-interval discretization of continuous-valued attributes for classi cation learning", "year": "1993" }, { "authors": "Y Freund; R E Schapire", "journal": "", "ref_id": "b6", "title": "A decision-theoretic generalization of on-line learning and an application to boosting", "year": "1996" }, { "authors": "R C Holte", "journal": "Machine Learning", "ref_id": "b7", "title": "Very simple classi cation rules perform well on most commonly used datasets", "year": "1993" }, { "authors": "E B Hunt; J Marin; P J Stone", "journal": "Academic Press", "ref_id": "b8", "title": "Experiments in Induction", "year": "1966" }, { "authors": "G H John; R Kohavi; K ", "journal": "Morgan Kaufmann", "ref_id": "b9", "title": "Irrelevant features and the subset selection problem", "year": "1994" }, { "authors": "R Kohavi; G H John", "journal": "Morgan Kaufmann", "ref_id": "b10", "title": "Automatic parameter selection by minimizing estimated error", "year": "1995" }, { "authors": "J R Quinlan", "journal": "Morgan Kaufmann", "ref_id": "b11", "title": "C4.5: Programs for Machine Learning", "year": "1993" }, { "authors": "J R Quinlan; R L Rivest", "journal": "Information and Computation", "ref_id": "b12", "title": "Inferring decision trees using the minimum description length principle", "year": "1989" }, { "authors": "J Rissanen", "journal": "Annals of Statistics", "ref_id": "b13", "title": "A universal prior for integers and estimation by minimum description length", "year": "1983" } ]
[ { "formula_coordinates": [ 2, 93.72, 614.16, 304.8, 59.05 ], "formula_id": "formula_0", "formula_text": "Info(D) = C X j=1 p(D; j) log 2 (p(D; j)) 1." }, { "formula_coordinates": [ 3, 195.96, 112.68, 220.08, 36.84 ], "formula_id": "formula_1", "formula_text": "Gain(D; T) = Info(D) k X i=1 jD i j jDj Info(D i ) :" } ]
Improved Use of Continuous Attributes in C4.5
A reported weakness of C4.5 in domains with continuous attributes is addressed by modifying the formation and evaluation of tests on continuous attributes. An MDL-inspired penalty is applied to such tests, eliminating some of them from consideration and altering the relative desirability of all tests. Empirical trials show that the modifications lead to smaller decision trees with higher predictive accuracies. Results also confirm that a new version of C4.5 incorporating these changes is superior to recent approaches that use global discretization and that construct small trees with multi-interval splits.
J R Quinlan
[ { "figure_caption": "Figure 2: E ect of discretization vs data set size.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Results using modi ed (Rel 8) and previous (Rel 7) C4.5.", "figure_data": "Error Rate Rel 7 w/d/l ratio Rel 8 7.67 .12 7.49 .16 3/2/5 1.02 75.2 .7 70.1 1.1 1.07 Tree Size Rel 8 Rel 7 ratio 17.7 .5 23.8 .6 10/0/0 .74 63.7 .4 62.9 .5 1.01 breast-w 5.26 .19 5.29 .09 5/1/4 .99 25.0 .5 20.3 .5 1.23 anneal auto colic 15.0 .2 15.1 .4 5/2/3 .99 9.7 .2 20.0 .5 .49 credit-a 14.7 .2 15.8 .3 7/1/2 .93 33.2 1.1 57.3 1.2 .58 credit-g 28.4 .3 28.9 .3 5/1/4 .98 124 2 155 2 .80 diabetes 25.4 .3 28.3 .3 10/0/0 .90 44.0 1.6 128.2 1.8 .34 glass 32.5 .8 32.1 .5 4/1/5 1.01 45.7 .4 51.3 .4 .89 heart-c 23.0 .5 24.9 .4 8/0/2 .92 39.9 .4 45.3 .3 .88 heart-h 21.5 .2 21.6 .5 4/0/6 1.00 19.1 .6 29.7 1.2 .64 hepatitis 20.4 .6 21.7 .8 6/1/3 .94 17.8 .3 15.5 .4 1.15 hypo .48 .01 .49 .02 6/3/1 .97 27.5 .1 25.3 .1 1.09 iris 4.80 .17 4.87 .20 3/3/4 .99 8.5 .0 9.3 .1 .91 labor 19.1 1.0 16.7 .9 1/2/7 1.15 7.0 .3 7.3 .1 .96 letter 12.0 .0 12.2 .0 10/0/0 .98 2328 4 2370 4 .98 segment 3.21 .08 3.77 .07 9/1/0 .85 82.9 .5 83.5 .6 .99 sick 1.34 .03 1.29 .03 2/1/7 1.04 50.8 .5 51.5 .5 .99 sonar 25.6 .7 28.4 .6 8/0/2 .90 28.4 .2 33.1 .5 .86 vehicle 27.1 .4 29.1 .3 10/0/0 .93 135 2 181 1 .75 waveform 27.3 .3 28.1 .6 6/2/2 .97 44.6 .4 49.2 .4 .91 average .96 .88", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results after addition of irrelevant attributes.", "figure_data": "Error Rate Rel 7 w/d/l ratio Rel 8 7.72 .23 8.13 .18 9/0/1 Rel 8 .95 74.3 1.1 84.0 1.6 .88 Tree Size Rel 7 ratio 18.7 .5 26.0 .7 10/0/0 .72 63.4 .7 62.3 .6 1.02 breast-w+ 5.69 .11 6.17 .13 8/0/2 anneal+ auto+ .92 16.8 .4 25.0 .4 .67 colic+ 15.1 .2 20.1 .3 10/0/0 .75 8.9 .2 39.9 1.1 .22 credit-a+ 13.6 .3 16.4 .3 10/0/0 .83 34.7 .7 58.4 .9 .60 credit-g+ 28.5 .3 32.4 .4 10/0/0 .88 111 3 174 2 .64 diabetes+ 26.9 .3 30.3 .5 10/0/0 .89 43.6 2.1 115.5 1.8 .38 glass+ 37.0 .5 35.9 .8 3/0/7 1.03 31.0 .6 46.2 .9 .67 heart-c+ 22.6 .7 30.3 .4 10/0/0 .75 24.9 .8 52.0 .5 .48 heart-h+ 20.3 .4 24.9 .5 10/0/0 .82 19.9 .5 32.0 .9 .62 hepatitis+ 19.1 .6 23.9 .7 10/0/0 .80 5.6 .4 20.4 .6 .28 hypo+ .47 .02 .49 .02 7/2/1 .96 27.8 .2 25.9 .2 1.08 iris+ 5.67 .15 5.73 .41 4/0/6 .99 7.6 .1 9.7 .1 .79 labor+ 19.1 .8 24.6 .7 9/1/0 .78 6.8 .2 10.6 .1 .64 letter+ 12.7 .1 13.3 .1 10/0/0 .95 2300 3 2372 6 .97 segment+ 3.91 .09 3.85 .05 4/0/6 1.01 69.2 .6 88.2 .6 .78 sick+ 1.61 .05 1.57 .05 4/1/5 1.02 37.1 .8 54.8 .7 .68 sonar+ 25.5 .8 29.3 .6 9/1/0 .87 20.1 .4 34.0 .5 .59 vehicle+ 28.7 .3 28.8 .2 5/3/2 .99 109 1 162 1 .67 waveform+ 30.1 .7 28.0 .6 4/0/6 1.08 27.9 1.0 48.9 .5 .57 average .90 .667G di ers from Rel 7 only in that the threshold t is chosen to maximize information gain rather than gain ratio;", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results for intermediate systems 7G and 7GS.", "figure_data": "Error Rate ratio 7GS ratio 7.73 .16 1.03 7.62 .16 1.02 73.9 .7 1.05 73.4 .7 1.05 Tree Size 7G 7G ratio 7GS ratio 23.0 .9 .97 22.7 .8 .95 59.5 .9 .95 59.0 .9 .94 breast-w 5.21 .23 .98 5.32 .17 1.01 24.4 .3 1.21 24.1 .5 1.19 anneal auto colic 15.0 .4 .99 15.0 .3 .99 20.2 .5 1.01 17.9 .6 .90 credit-a 14.7 .3 .93 14.1 .2 .89 50.1 1.0 .88 38.3 1.0 .67 credit-g 29.7 .3 1.03 29.1 .2 1.01 148 2 .96 138 2 .89 diabetes 27.1 .4 .96 25.2 .3 .89 127 2 .99 45.4 2.0 .35 glass 30.9 .5 .96 31.3 .7 .97 50.3 .3 .98 46.2 .6 .90 heart-c 
25.0 .4 1.01 23.8 .5 .96 44.5 .8 .98 42.2 .9 .93 heart-h 22.2 .4 1.03 20.9 .3 .97 30.2 1.1 1.02 19.3 .7 .65 hepatitis 22.0 .8 1.01 21.4 .6 .98 17.3 .4 1.12 15.2 .4 .98 hypo .49 .02 1.00 .50 .01 1.02 25.7 .2 1.02 27.0 .1 1.07 iris 4.93 .23 1.01 4.80 .17 .99 8.5 .1 .92 8.5 .0 .92 labor 18.8 1.1 1.13 19.5 1.0 1.17 7.8 .1 1.06 7.6 .1 1.04 letter 12.3 .0 1.00 12.2 .0 1.00 2327 3 .98 2330 3 .98 segment 3.39 .08 .90 3.36 .07 .89 83.7 .3 1.00 82.8 .3 .99 sick 1.34 .02 1.04 1.34 .02 1.04 50.4 .3 .98 51.4 .3 1.00 sonar 28.8 1.2 1.02 26.6 1.1 .94 32.6 .4 .98 28.5 .3 .86 vehicle 28.1 .2 .97 27.6 .3 .95 178 1 .99 145 2 .80 waveform 28.1 .8 1.00 27.3 .6 .97 46.8 .4 .95 44.4 .6 .90 average 1.00 .98 1.00 .90Error RateTree SizeRel 77G7GSRel 8.9.951.0 .8.91.0", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison with C4.5 using global discretization (Discr).", "figure_data": "Error Rate Discr w/d/l ratio Rel 8 7.67 .12 9.48 .14 10/0/0 Rel 8 .81 75.2 .7 68.1 .5 1.11 Tree Size Discr ratio 17.7 .5 23.8 .6 9/1/0 .74 63.7 .4 94.8 1.8 .67 breast-w 5.26 .19 5.38 .15 6/0/4 anneal auto .98 25.0 .5 19.9 .5 1.25 colic 15.0 .2 15.1 .1 6/2/2 .99 9.7 .2 7.8 .2 1.23 credit-a 14.7 .2 14.0 .1 0/1/9 1.05 33.2 1.1 22.3 .6 1.49 credit-g 28.4 .3 28.1 .4 5/1/4 1.01 124 2 82 1 1.50 diabetes 25.4 .3 25.5 .3 5/0/5 .99 44.0 1.6 19.6 .7 2.25 glass 32.5 .8 28.4 .3 1/0/9 1.14 45.7 .4 35.8 .3 1.28 heart-c 23.0 .5 21.7 .6 2/1/7 1.06 39.9 .4 25.9 .4 1.54 heart-h 21.5 .2 20.8 .4 3/0/7 1.04 19.1 .6 9.7 .6 1.97 hepatitis 20.4 .6 19.6 .8 3/1/6 1.04 17.8 .3 11.5 .5 1.55 hypo .48 .01 .72 .03 10/0/0 .67 27.5 .1 45.1 .3 .61 iris 4.80 .17 5.47 .29 6/3/1 .88 8.5 .0 6.2 .1 1.36 labor 19.1 1.0 20.0 .9 6/0/4 .96 7.0 .3 5.2 .1 1.34 letter 12.0 .0 21.1 .0 10/0/0 .57 2328 4 9600 12 .24 segment 3.21 .08 5.65 .10 10/0/0 .57 82.9 .5 296.4 2.6 .28 sick 1.34 .03 2.14 .03 10/0/0 .63 50.8 .5 32.8 .4 1.55 sonar 25.6 .7 24.6 .7 3/1/6 1.04 28.4 .2 28.6 .5 .99 vehicle 27.1 .4 31.5 .5 10/0/0 .86 135 2 175 2 .78 waveform 27.3 .3 26.5 .6 4/0/6 1.03 44.6 .4 42.2 .8 1.06 average .90 1.20", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison with T2.", "figure_data": "Error Rate T2 w/d/l ratio Rel 8 breast-w 5.26 .19 4.06 .09 0/0/10 1.30 25.0 .5 10.0 .0 2.50 Tree Size Rel 8 T2 ratio colic 15.0 .2 16.2 .2 10/0/0 .92 9.7 .2 15.5 .2 .63 credit-a 14.7 .2 16.6 .2 10/0/0 .89 33.2 1.1 46.1 .4 .72 credit-g 28.4 .3 32.2 .2 10/0/0 .88 124 2 49 1 2.51 diabetes 25.4 .3 24.9 .2 3/0/7 1.02 44.0 1.6 11.5 .0 3.81 heart-c 23.0 .5 26.8 .6 10/0/0 .86 39.9 .4 20.5 .0 1.94 heart-h 21.5 .2 26.1 .3 10/0/0 .82 19.1 .6 16.3 .3 1.18 hepatitis 20.4 .6 24.8 .3 10/0/0 .82 17.8 .3 13.7 .2 1.30 iris 4.80 .17 4.60 .35 3/1/6 1.04 8.5 .0 12.0 .0 .71 labor 19.1 1.0 15.3 1.6 3/0/7 1.25 7.0 .3 14.9 .1 .47 sick 1.34 .03 2.21 .01 10/0/0 .61 50.8 .5 12.0 .0 4.23 sonar 25.6 .7 28.4 .7 8/0/2 .90 28.4 .2 11.1 .0 2.56 vehicle 27.1 .4 38.1 .3 10/0/0 .71 135 2 16 0 8.46 waveform 27.3 .3 35.2 .6 10/0/0 .78 44.6 .4 13.9 .0 3.21 average .91 2.44", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" } ]
null
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b12", "b22", "b30", "b17", "b26", "b7", "b5", "b24", "b18", "b5" ], "table_ref": [], "text": "The goal of machine learning is to create systems that can improve their performance at some task as they acquire experience or data. In many natural learning tasks, this experience or data is gained interactively, by taking actions, making queries, or doing experiments. Most machine learning research, however, treats the learner as a passive recipient of data to be processed. This \\passive\" approach ignores the fact that, in many situations, the learner's most powerful tool is its ability to act, to gather data, and to in uence the world it is trying to understand. Active learning is the study of how to use this ability e ectively.\nFormally, active learning studies the closed-loop phenomenon of a learner selecting actions or making queries that in uence what data are added to its training set. Examples include selecting joint angles or torques to learn the kinematics or dynamics of a robot arm, selecting locations for sensor measurements to identify and locate buried hazardous wastes, or querying a human expert to classify an unknown word in a natural language understanding problem.\nWhen actions/queries are selected properly, the data requirements for some problems decrease drastically, and some NP-complete learning problems become polynomial in computation time (Angluin, 1988;Baum & Lang, 1991). In practice, active learning o ers its greatest rewards in situations where data are expensive or di cult to obtain, or when the environment is complex or dangerous. In industrial settings each training point may take days to gather and cost thousands of dollars; a method for optimally selecting these points could o er enormous savings in time and money.\nThere are a number of di erent goals which one may wish to achieve using active learning. One is optimization, where the learner performs experiments to nd a set of inputs that maximize some response variable. An example of the optimization problem would be nding the operating parameters that maximize the output of a steel mill or candy factory. There is an extensive literature on optimization, examining both cases where the learner has some prior knowledge of the parameterized functional form and cases where the learner has no such knowledge; the latter case is generally of greater interest to machine learning practitioners. The favored technique for this kind of optimization is usually a form of response surface methodology (Box & Draper, 1987), which performs experiments that guide hill-climbing through the input space.\nA related problem exists in the eld of adaptive control, where one must learn a control policy by taking actions. In control problems, one faces the complication that the value of a speci c action may not be known until many time steps after it is taken. Also, in control (as in optimization), one is usually concerned with the performing well during the learning task and must trade of exploitation of the current policy for exploration which may improve it. The sub eld of dual control (Fe'ldbaum, 1965) is speci cally concerned with nding an optimal balance of exploration and control while learning.\nIn this paper, we will restrict ourselves to examining the problem of supervised learning:\nbased on a set of potentially noisy training examples D = f(x i ; y i )g m i=1 , where x i 2 X and y i 2 Y , we wish to learn a general mapping X ! Y . 
In robot control, the mapping may be state x action -> new state; in hazard location it may be sensor reading -> target position.\nIn contrast to the goals of optimization and control, the goal of supervised learning is to be able to efficiently and accurately predict y for a given x.\nIn active learning situations, the learner itself is responsible for acquiring the training set. Here, we assume it can iteratively select a new input x̃ (possibly from a constrained set), observe the resulting output ỹ, and incorporate the new example (x̃, ỹ) into its training set. This contrasts with related work by Plutowski and White (1993), which is concerned with filtering an existing data set. In our case, x̃ may be thought of as a query, experiment, or action, depending on the research field and problem domain. The question we will be concerned with is how to choose which x̃ to try next.\nThere are many heuristics for choosing x̃, including choosing places where we don't have data (Whitehead, 1991), where we perform poorly (Linden & Weber, 1993), where we have low confidence (Thrun & Möller, 1992), where we expect it to change our model (Cohn, Atlas, & Ladner, 1990, 1994), and where we previously found data that resulted in learning (Schmidhuber & Storck, 1993). In this paper we will consider how one may select x̃ in a statistically \"optimal\" manner for some classes of machine learning algorithms. We first briefly review how the statistical approach can be applied to neural networks, as described in earlier work (MacKay, 1992; Cohn, 1994). Then, in Sections 3 and 4 we consider two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. Section 5 presents the empirical results of applying statistically-based active learning to these architectures. While optimal data selection for a neural network is computationally expensive and approximate, we find that optimal data selection for the two statistical models is efficient and accurate." }, { "figure_ref": [], "heading": "Active Learning – A Statistical Approach", "publication_ref": [ "b13" ], "table_ref": [], "text": "We begin by defining P(x, y) to be the unknown joint distribution over x and y, and P(x) to be the known marginal distribution of x (commonly called the input distribution). We denote the learner's output on input x, given training set D, as ŷ(x; D). We can then write the expected error of the learner as follows:
$$\int_x E_T\big[ (\hat y(x; D) - y(x))^2 \mid x \big]\, P(x)\, dx, \quad (1)$$
where $E_T[\cdot]$ denotes expectation over P(y|x) and over training sets D. The expectation inside the integral may be decomposed as follows (Geman, Bienenstock, & Doursat, 1992):
$$E_T\big[ (\hat y(x; D) - y(x))^2 \mid x \big] = E\big[ (y(x) - E[y|x])^2 \big] + \big( E_D[\hat y(x; D)] - E[y|x] \big)^2 + E_D\big[ (\hat y(x; D) - E_D[\hat y(x; D)])^2 \big], \quad (2)$$
where $E_D[\cdot]$ denotes the expectation over training sets D and the remaining expectations on the right-hand side are expectations with respect to the conditional density P(y|x). It is important to remember here that in the case of active learning, the distribution of D may differ substantially from the joint distribution P(x, y). The first term in Equation 2 is the variance of y given x: it is the noise in the distribution, and does not depend on the learner or on the training data. The second term is the learner's squared bias, and the third is its variance; these last two terms comprise the mean squared error of the learner with respect to the regression function E[y|x]. When the second term of Equation 2 is zero, we say that the learner is unbiased.
We shall assume that the learners considered in this paper are approximately unbiased; that is, that their squared bias is negligible when compared with their overall mean squared error. Thus we focus on algorithms that minimize the learner's error by minimizing its variance:
$$\sigma^2_{\hat y} \equiv \sigma^2_{\hat y}(x) = E_D\big[ (\hat y(x; D) - E_D[\hat y(x; D)])^2 \big]. \quad (3)$$
(For readability, we will drop the explicit dependence on x and D; unless denoted otherwise, ŷ and $\sigma^2_{\hat y}$ are functions of x and D.) In an active learning setting, we will have chosen the x-component of our training set D; we indicate this by rewriting Equation 3 as
$$\sigma^2_{\hat y} = \big\langle (\hat y - \langle \hat y \rangle)^2 \big\rangle,$$
where $\langle \cdot \rangle$ denotes $E_D[\cdot]$ given a fixed x-component of D. When a new input x̃ is selected and queried, and the resulting (x̃, ỹ) added to the training set, $\sigma^2_{\hat y}$ should change. We will denote the expectation (over values of ỹ) of the learner's new variance as
$$\langle \tilde\sigma^2_{\hat y} \rangle = E_{D \cup (\tilde x, \tilde y)}\big[ \sigma^2_{\hat y} \mid \tilde x \big]. \quad (4)$$" }, { "figure_ref": [], "heading": "Selecting Data to Minimize Learner Variance", "publication_ref": [], "table_ref": [], "text": "In this paper we consider algorithms for active learning which select data in an attempt to minimize the value of Equation 4, integrated over X. Intuitively, the minimization proceeds as follows: we assume that we have an estimate of $\sigma^2_{\hat y}$, the variance of the learner at x. If, for some new input x̃, we knew the conditional distribution P(ỹ|x̃), we could compute an estimate of the learner's new variance at x given an additional example at x̃. While the true distribution P(ỹ|x̃) is unknown, many learning architectures let us approximate it by giving us estimates of its mean and variance. Using the estimated distribution of ỹ, we can estimate $\langle \tilde\sigma^2_{\hat y} \rangle$, the expected variance of the learner after querying at x̃.\nGiven the estimate of $\langle \tilde\sigma^2_{\hat y} \rangle$, which applies to a given x and a given query x̃, we must integrate x over the input distribution to compute the integrated average variance of the learner. In practice, we will compute a Monte Carlo approximation of this integral, evaluating $\langle \tilde\sigma^2_{\hat y} \rangle$ at a number of reference points drawn according to P(x). By querying an x̃ that minimizes the average expected variance over the reference points, we have a solid statistical basis for choosing new examples." }, { "figure_ref": [], "heading": "Example: Active Learning with a Neural Network", "publication_ref": [ "b11", "b18", "b5", "b18", "b5", "b5", "b20" ], "table_ref": [], "text": "In this section we review the use of techniques from Optimal Experiment Design (OED) to minimize the estimated variance of a neural network (Fedorov, 1972; MacKay, 1992; Cohn, 1994). We will assume we have been given a learner $\hat y = f_{\hat w}(\cdot)$, a training set $D = \{(x_i, y_i)\}_{i=1}^{m}$ and a parameter vector estimate ŵ that maximizes some likelihood measure given D. If, for example, one assumes that the data were produced by a process whose structure matches that of the network, and that noise in the process outputs is normal and independently identically distributed, then the negative log likelihood of ŵ given D is proportional to
$$S^2 = \frac{1}{m} \sum_{i=1}^{m} (y_i - \hat y(x_i))^2.$$
The maximum likelihood estimate for ŵ is that which minimizes $S^2$.\nThe estimated output variance of the network is
$$\sigma^2_{\hat y} \approx S^2 \left( \frac{\partial \hat y(x)}{\partial w} \right)^T \left( \frac{\partial^2 S^2}{\partial w^2} \right)^{-1} \left( \frac{\partial \hat y(x)}{\partial w} \right) \quad \text{(MacKay, 1992)},$$
where the true variance is approximated by a second-order Taylor series expansion around $S^2$. This estimate makes the assumption that $\partial \hat y / \partial w$ is locally linear.
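The variance estimate above is easy to state in code. In the sketch below (an illustration under the paper's assumptions, not MacKay's implementation), the gradient of the output and the Hessian of S² are assumed to be supplied by the network's own differentiation machinery:

```python
import numpy as np

def estimated_output_variance(grad_y_x, hess_S2, S2):
    """sigma^2_y(x) ~= S^2 * g^T H^{-1} g, where g = dy(x)/dw and
    H = d^2 S^2 / dw^2 (both with respect to the weights w)."""
    g = np.asarray(grad_y_x, dtype=float)
    H = np.asarray(hess_S2, dtype=float)
    # solving H u = g avoids forming the explicit inverse
    return float(S2 * g @ np.linalg.solve(H, g))
```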
Combined with the assumption that P(y|x) is Gaussian with constant variance for all x, one can derive a closed form expression for $\langle \tilde\sigma^2_{\hat y} \rangle$. See Cohn (1994) for details.\nIn practice, $\partial \hat y / \partial w$ may be highly nonlinear, and P(y|x) may be far from Gaussian; in spite of this, empirical results show that it works well on some problems (Cohn, 1994). It has the advantage of being grounded in statistics, and is optimal given the assumptions. Furthermore, the expectation is differentiable with respect to x̃. As such, it is applicable in continuous domains with continuous action spaces, and allows hill-climbing to find the x̃ that minimizes the expected model variance.\nFor neural networks, however, this approach has many disadvantages. In addition to relying on simplifications and assumptions which hold only approximately, the process is computationally expensive. Computing the variance estimate requires inversion of a |w| x |w| matrix for each new example, and incorporating new examples into the network requires expensive retraining. Paass and Kindermann (1995) discuss a Markov-chain based sampling approach which addresses some of these problems. In the rest of this paper, we consider two \"non-neural\" machine learning architectures that are much more amenable to optimal data selection." }, { "figure_ref": [ "fig_0" ], "heading": "Mixtures of Gaussians", "publication_ref": [ "b28", "b3", "b19", "b25", "b14", "b10", "b14" ], "table_ref": [], "text": "The mixture of Gaussians model is a powerful estimation and prediction technique with roots in the statistics literature (Titterington, Smith, & Makov, 1985); it has, over the last few years, been adopted by researchers in machine learning (Cheeseman et al., 1988; Nowlan, 1991; Specht, 1991; Ghahramani & Jordan, 1994). The model assumes that the data are produced by a mixture of N multivariate Gaussians $g_i$, for i = 1, ..., N (see Figure 1).\nIn the context of learning from random examples, one begins by producing a joint density estimate over the input/output space $X \times Y$ based on the training set D. The EM algorithm (Dempster, Laird, & Rubin, 1977) can be used to efficiently find a locally optimal fit of the Gaussians to the data. It is then straightforward to compute ŷ given x by conditioning the joint distribution on x and taking the expected value. One benefit of learning with a mixture of Gaussians is that there is no fixed distinction between inputs and outputs: one may specify any subset of the input-output dimensions, and compute expectations on the remaining dimensions. If one has learned a forward model of the dynamics of a robot arm, for example, conditioning on the outputs automatically gives a model of the arm's inverse dynamics. With the mixture model, it is also straightforward to compute the mode of the output, rather than its mean, which obviates many of the problems of learning direct inverse models (Ghahramani & Jordan, 1994).\nFor each Gaussian $g_i$ we will denote the input/output means as $\mu_{x,i}$ and $\mu_{y,i}$ and variances and covariances as $\sigma^2_{x,i}$, $\sigma^2_{y,i}$ and $\sigma_{xy,i}$ respectively. We can then express the probability of point (x, y), given $g_i$, as
$$P(x, y \mid i) = \frac{1}{2\pi\sqrt{|\Sigma_i|}} \exp\Big( -\frac{1}{2} (\mathbf{x} - \mu_i)^T \Sigma_i^{-1} (\mathbf{x} - \mu_i) \Big), \quad (5)$$
where we have defined $\mathbf{x} = (x, y)^T$, $\mu_i = (\mu_{x,i}, \mu_{y,i})^T$ and $\Sigma_i$ as the 2x2 covariance matrix with diagonal entries $\sigma^2_{x,i}$, $\sigma^2_{y,i}$ and off-diagonal entries $\sigma_{xy,i}$. Conditioning $g_i$ on x gives the estimate $\hat y_i(x) = \mu_{y,i} + \frac{\sigma_{xy,i}}{\sigma^2_{x,i}} (x - \mu_{x,i})$ with conditional variance $\sigma^2_{y|x,i} = \sigma^2_{y,i} - \frac{\sigma^2_{xy,i}}{\sigma^2_{x,i}}$; the variance of this estimate is
$$\sigma^2_{\hat y_i} = \frac{\sigma^2_{y|x,i}}{n_i} \Big( 1 + \frac{(x - \mu_{x,i})^2}{\sigma^2_{x,i}} \Big). \quad (6)$$
Here, $n_i$ is the amount of \"support\" for the Gaussian $g_i$ in the training data.
Active Learning with a Mixture of Gaussians

In the context of active learning, we are assuming that the input distribution $P(x)$ is known. With a mixture of Gaussians, one interpretation of this assumption is that we know $\mu_{x,i}$ and $\sigma^2_{x,i}$ for each Gaussian. In that case, our application of EM will estimate only $\mu_{y,i}$, $\sigma^2_{y,i}$, and $\sigma_{xy,i}$.

Generally, however, knowing the input distribution will not correspond to knowing the actual $\mu_{x,i}$ and $\sigma^2_{x,i}$ for each Gaussian. We may simply know, for example, that $P(x)$ is uniform, or that it can be approximated by some set of sampled inputs. In such cases, we must use EM to estimate $\mu_{x,i}$ and $\sigma^2_{x,i}$ in addition to the parameters involving $y$. If we simply estimate these values from the training data, though, we will be estimating the joint distribution $\hat P(x, y \mid i)$ of the training set instead of the true $P(x, y \mid i)$. To obtain a proper estimate, we must correct Equation 5 as follows:

$$P(x, y \mid i) = \hat P(x, y \mid i)\,\frac{P(x \mid i)}{\hat P(x \mid i)}. \tag{8}$$

Here, $\hat P(x \mid i)$ is computed by applying Equation 7 given the mean and $x$ variance of the training data, and $P(x \mid i)$ is computed by applying the same equation using the mean and $x$ variance of a set of reference data drawn according to $P(x)$.

If our goal in active learning is to minimize variance, we should select training examples $\tilde x$ to minimize $\langle\tilde\sigma^2_{\hat y}\rangle$. With a mixture of Gaussians, we can compute $\langle\tilde\sigma^2_{\hat y}\rangle$ efficiently. The model's estimated distribution of $\tilde y$ given $\tilde x$ is explicit:

$$P(\tilde y \mid \tilde x) = \sum_{i=1}^{N} \tilde h_i\,\mathcal{N}\!\left(\hat y_i(\tilde x),\ \sigma^2_{y|\tilde x,i}\right),$$

where $\tilde h_i \equiv h_i(\tilde x)$, and $\mathcal{N}(\mu, \sigma^2)$ denotes the normal distribution with mean $\mu$ and variance $\sigma^2$. Given this, we can model the change in each $g_i$ separately, calculating its expected variance given a new point sampled from $P(\tilde y \mid \tilde x, i)$, and weight this change by $\tilde h_i$. The new expectations combine to form the learner's new expected variance

$$\left\langle\tilde\sigma^2_{\hat y}\right\rangle = \sum_{i=1}^{N} h_i^2\,\frac{\left\langle\tilde\sigma^2_{y|x,i}\right\rangle}{n_i + \tilde h_i}\left(1 + \frac{(x - \mu_{x,i})^2}{\sigma^2_{x,i}}\right), \tag{9}$$

where the expectation can be computed exactly in closed form:

$$\left\langle\tilde\sigma^2_{y,i}\right\rangle = \frac{n_i\,\sigma^2_{y,i}}{n_i + \tilde h_i} + \frac{n_i\,\tilde h_i\left(\sigma^2_{y|\tilde x,i} + (\hat y_i(\tilde x) - \mu_{y,i})^2\right)}{(n_i + \tilde h_i)^2},$$

$$\left\langle\tilde\sigma^2_{y|x,i}\right\rangle = \left\langle\tilde\sigma^2_{y,i}\right\rangle - \frac{\left\langle\tilde\sigma^2_{xy,i}\right\rangle}{\sigma^2_{x,i}},$$

$$\left\langle\tilde\sigma_{xy,i}\right\rangle = \frac{n_i\,\sigma_{xy,i}}{n_i + \tilde h_i} + \frac{n_i\,\tilde h_i(\tilde x - \mu_{x,i})(\hat y_i(\tilde x) - \mu_{y,i})}{(n_i + \tilde h_i)^2},$$

$$\left\langle\tilde\sigma^2_{xy,i}\right\rangle = \left\langle\tilde\sigma_{xy,i}\right\rangle^2 + \frac{n_i^2\,\tilde h_i^2\,\sigma^2_{y|\tilde x,i}\,(\tilde x - \mu_{x,i})^2}{(n_i + \tilde h_i)^4}.$$

If, as discussed earlier, we are also estimating $\mu_{x,i}$ and $\sigma^2_{x,i}$, we must take into account the effect of the new example on those estimates, and must replace $\mu_{x,i}$ and $\sigma^2_{x,i}$ in the above equations with

$$\tilde\mu_{x,i} = \frac{n_i\,\mu_{x,i} + \tilde h_i\,\tilde x}{n_i + \tilde h_i}, \qquad \tilde\sigma^2_{x,i} = \frac{n_i\,\sigma^2_{x,i}}{n_i + \tilde h_i} + \frac{n_i\,\tilde h_i(\tilde x - \mu_{x,i})^2}{(n_i + \tilde h_i)^2}.$$

We can use Equation 9 to guide active learning. By evaluating the expected new variance over a reference set for each candidate $\tilde x$, we can select the $\tilde x$ giving the lowest expected model variance. Note that in high-dimensional spaces, it may be necessary to evaluate an excessive number of candidate points to get good coverage of the potential query space. In these cases, it is more efficient to differentiate Equation 9 and hillclimb on $\partial\langle\tilde\sigma^2_{\hat y}\rangle/\partial\tilde x$ to find a locally optimal $\tilde x$. See, for example, Cohn (1994).
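The following minimal sketch evaluates Equation 9 for one reference point and one candidate query, for scalar $x$ and $y$, under the assumption that $\mu_{x,i}$ and $\sigma^2_{x,i}$ are known (so the replacement estimates above are not needed); parameter arrays are as in the earlier mixture sketch.

```python
import numpy as np

def expected_new_variance(x_ref, x_query, mu_x, mu_y, var_x, var_y,
                          cov_xy, n):
    """Expected model variance at x_ref after querying at x_query
    (Equation 9 and its component expectations), for a 1-D mixture."""
    def weights(x):
        p = np.exp(-(x - mu_x)**2 / (2 * var_x)) / np.sqrt(2 * np.pi * var_x)
        return p / np.sum(p)

    h = weights(x_ref)                    # h_i at the reference point
    h_q = weights(x_query)                # h~_i at the candidate query
    var_y_given_x = var_y - cov_xy**2 / var_x
    y_i = mu_y + (cov_xy / var_x) * (x_query - mu_x)   # yhat_i(x~)

    denom = n + h_q
    exp_var_y = n * var_y / denom \
        + n * h_q * (var_y_given_x + (y_i - mu_y)**2) / denom**2
    exp_cov = n * cov_xy / denom \
        + n * h_q * (x_query - mu_x) * (y_i - mu_y) / denom**2
    exp_cov_sq = exp_cov**2 \
        + n**2 * h_q**2 * var_y_given_x * (x_query - mu_x)**2 / denom**4
    exp_var_y_given_x = exp_var_y - exp_cov_sq / var_x

    return np.sum(h**2 * exp_var_y_given_x / denom
                  * (1 + (x_ref - mu_x)**2 / var_x))
```

A function of this form is exactly what the generic selection loop sketched earlier needs as its `expected_new_variance` method.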
Locally Weighted Regression

Model-based methods, such as neural networks and the mixture of Gaussians, use the data to build a parameterized model. After training, the model is used for predictions and the data are generally discarded. In contrast, "memory-based" methods are non-parametric approaches that explicitly retain the training data, and use it each time a prediction needs to be made. Locally weighted regression (LWR) is a memory-based method that performs a regression around a point of interest using only training data that are "local" to that point. One recent study demonstrated that LWR was suitable for real-time control by constructing an LWR-based system that learned a difficult juggling task (Schaal & Atkeson, 1994).

We consider here a form of locally weighted regression that is a variant of the LOESS model (Cleveland, Devlin, & Grosse, 1988). The LOESS model performs a linear regression on points in the data set, weighted by a kernel centered at $x$ (see Figure 2). The kernel shape is a design parameter for which there are many possible choices: the original LOESS model uses a "tricubic" kernel; in our experiments we have used a Gaussian

$$h_i(x) \equiv h(x - x_i) = \exp\!\left(-k(x - x_i)^2\right),$$

where $k$ is a smoothing parameter. In Section 4.1 we will describe several methods for automatically setting $k$.

For brevity, we will drop the argument $x$ from $h_i(x)$, and define $n = \sum_i h_i$. We can then write the estimated means and covariances as:

$$\mu_x = \frac{\sum_i h_i x_i}{n}, \qquad \sigma^2_x = \frac{\sum_i h_i (x_i - \mu_x)^2}{n}, \qquad \sigma_{xy} = \frac{\sum_i h_i (x_i - \mu_x)(y_i - \mu_y)}{n},$$

$$\mu_y = \frac{\sum_i h_i y_i}{n}, \qquad \sigma^2_y = \frac{\sum_i h_i (y_i - \mu_y)^2}{n}, \qquad \sigma^2_{y|x} = \sigma^2_y - \frac{\sigma^2_{xy}}{\sigma^2_x}.$$

We use the data covariances to express the conditional expectation and its estimated variance:

$$\hat y = \mu_y + \frac{\sigma_{xy}}{\sigma^2_x}(x - \mu_x), \qquad \sigma^2_{\hat y} = \frac{\sigma^2_{y|x}}{n^2}\left(\sum_i h_i^2 + \frac{(x - \mu_x)^2}{\sigma^2_x}\sum_i \frac{h_i^2 (x_i - \mu_x)^2}{\sigma^2_x}\right). \tag{10}$$
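A direct transcription of these estimates (a minimal sketch, for scalar inputs and outputs; function names are ours) is:

```python
import numpy as np

def loess_stats(x, xs, ys, k):
    """Kernel-weighted means and (co)variances around x."""
    h = np.exp(-k * (xs - x)**2)                 # kernel weights h_i(x)
    n = np.sum(h)
    mu_x = np.sum(h * xs) / n
    mu_y = np.sum(h * ys) / n
    var_x = np.sum(h * (xs - mu_x)**2) / n
    var_y = np.sum(h * (ys - mu_y)**2) / n
    cov_xy = np.sum(h * (xs - mu_x) * (ys - mu_y)) / n
    return h, n, mu_x, mu_y, var_x, var_y, cov_xy

def loess_predict(x, xs, ys, k):
    """Prediction y_hat and its estimated variance (Equation 10)."""
    h, n, mu_x, mu_y, var_x, var_y, cov_xy = loess_stats(x, xs, ys, k)
    var_y_given_x = var_y - cov_xy**2 / var_x
    y_hat = mu_y + (cov_xy / var_x) * (x - mu_x)
    var_yhat = (var_y_given_x / n**2) * (
        np.sum(h**2)
        + ((x - mu_x)**2 / var_x) * np.sum(h**2 * (xs - mu_x)**2) / var_x)
    return y_hat, var_yhat
```

Evaluating `var_yhat` at a set of reference points for a range of $k$ values and keeping the minimizer gives the variance-based method for setting $k$ described next.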
Setting the Smoothing Parameter k

There are a number of ways one can set $k$, the smoothing parameter. The method used by Cleveland et al. (1988) is to set $k$ such that the reference point being predicted has a predetermined amount of support; that is, $k$ is set so that $n$ is close to some target value. This has the disadvantage of requiring assumptions about the noise and smoothness of the function being learned. Another technique, used by Schaal and Atkeson (1994), sets $k$ to minimize the cross-validated error on the training set. A disadvantage of this technique is that it assumes the distribution of the training set is representative of $P(x)$, which it may not be in an active learning situation. A third method, also described by Schaal and Atkeson (1994), is to set $k$ so as to minimize the estimate of $\sigma^2_{\hat y}$ at the reference points. As $k$ decreases, the regression becomes more global. The total weight $n$ will increase (which decreases $\sigma^2_{\hat y}$), but so will the conditional variance $\sigma^2_{y|x}$ (which increases $\sigma^2_{\hat y}$). At some value of $k$, these two quantities will balance to produce a minimum estimated variance (see Figure 3). This estimate can be computed for arbitrary reference points in the domain, and the user has the option of using either a different $k$ for each reference point or a single global $k$ that minimizes the average $\sigma^2_{\hat y}$ over all reference points. Empirically, we found that the variance-based method gave the best performance.

Active Learning with Locally Weighted Regression

As with the mixture of Gaussians, we want to select $\tilde x$ to minimize $\langle\tilde\sigma^2_{\hat y}\rangle$. To do this, we must estimate the mean and variance of $P(\tilde y \mid \tilde x)$. With locally weighted regression, these are explicit: the mean is $\hat y(\tilde x)$ and the variance is $\sigma^2_{y|\tilde x}$. The estimate of $\langle\tilde\sigma^2_{\hat y}\rangle$ is also explicit. Defining $\tilde h$ as the weight assigned to $\tilde x$ by the kernel, we can compute these expectations exactly in closed form. For the LOESS model, the learner's expected new variance is

$$\left\langle\tilde\sigma^2_{\hat y}\right\rangle = \frac{\left\langle\tilde\sigma^2_{y|x}\right\rangle}{(n + \tilde h)^2}\left[\sum_i h_i^2 + \tilde h^2 + \frac{(x - \tilde\mu_x)^2}{\tilde\sigma^2_x}\left(\sum_i \frac{h_i^2 (x_i - \tilde\mu_x)^2}{\tilde\sigma^2_x} + \frac{\tilde h^2 (\tilde x - \tilde\mu_x)^2}{\tilde\sigma^2_x}\right)\right]. \tag{11}$$

Note that, since $\sum_i h_i^2 (x_i - \mu_x)^2 = \sum_i h_i^2 x_i^2 + \mu_x^2 \sum_i h_i^2 - 2\mu_x \sum_i h_i^2 x_i$, the new expectation of Equation 11 may be efficiently computed by caching the values of $\sum_i h_i^2 x_i^2$ and $\sum_i h_i^2 x_i$. This obviates the need to recompute the entire sum for each new candidate point. The component expectations in Equation 11 are computed as follows:

$$\left\langle\tilde\sigma^2_{y|x}\right\rangle = \left\langle\tilde\sigma^2_y\right\rangle - \frac{\left\langle\tilde\sigma^2_{xy}\right\rangle}{\tilde\sigma^2_x}, \qquad \left\langle\tilde\sigma^2_y\right\rangle = \frac{n\,\sigma^2_y}{n + \tilde h} + \frac{n\,\tilde h\left(\sigma^2_{y|\tilde x} + (\hat y(\tilde x) - \mu_y)^2\right)}{(n + \tilde h)^2},$$

$$\tilde\mu_x = \frac{n\,\mu_x + \tilde h\,\tilde x}{n + \tilde h}, \qquad \left\langle\tilde\sigma_{xy}\right\rangle = \frac{n\,\sigma_{xy}}{n + \tilde h} + \frac{n\,\tilde h(\tilde x - \mu_x)(\hat y(\tilde x) - \mu_y)}{(n + \tilde h)^2},$$

$$\tilde\sigma^2_x = \frac{n\,\sigma^2_x}{n + \tilde h} + \frac{n\,\tilde h(\tilde x - \mu_x)^2}{(n + \tilde h)^2}, \qquad \left\langle\tilde\sigma^2_{xy}\right\rangle = \left\langle\tilde\sigma_{xy}\right\rangle^2 + \frac{n^2\,\tilde h^2\,\sigma^2_{y|\tilde x}\,(\tilde x - \mu_x)^2}{(n + \tilde h)^4}.$$

Just as with the mixture of Gaussians, we can use the expectation in Equation 11 to guide active learning.
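The following minimal sketch evaluates Equation 11 for a single reference point and candidate query, reusing `loess_stats` and `loess_predict` from the previous sketch; the caching optimization noted above is omitted for clarity.

```python
import numpy as np

def loess_expected_new_variance(x_ref, x_query, xs, ys, k):
    """Expected variance of the LOESS estimate at x_ref after querying
    at x_query (Equation 11 with its component expectations)."""
    # Statistics under the kernel centered at the reference point
    h, n, mu_x, mu_y, var_x, var_y, cov_xy = loess_stats(x_ref, xs, ys, k)

    # Estimated mean and conditional variance of P(y~|x~) at the query
    y_q, _ = loess_predict(x_query, xs, ys, k)
    _, _, _, _, vx_q, vy_q, cxy_q = loess_stats(x_query, xs, ys, k)
    var_y_given_q = vy_q - cxy_q**2 / vx_q

    hq = np.exp(-k * (x_query - x_ref)**2)   # h~: query weight at x_ref
    denom = n + hq

    # Component expectations of Equation 11
    exp_var_y = n * var_y / denom \
        + n * hq * (var_y_given_q + (y_q - mu_y)**2) / denom**2
    mu_x_new = (n * mu_x + hq * x_query) / denom
    var_x_new = n * var_x / denom + n * hq * (x_query - mu_x)**2 / denom**2
    exp_cov = n * cov_xy / denom \
        + n * hq * (x_query - mu_x) * (y_q - mu_y) / denom**2
    exp_cov_sq = exp_cov**2 \
        + n**2 * hq**2 * var_y_given_q * (x_query - mu_x)**2 / denom**4
    exp_var_y_given_x = exp_var_y - exp_cov_sq / var_x_new

    return (exp_var_y_given_x / denom**2) * (
        np.sum(h**2) + hq**2
        + ((x_ref - mu_x_new)**2 / var_x_new)
        * (np.sum(h**2 * (xs - mu_x_new)**2) / var_x_new
           + hq**2 * (x_query - mu_x_new)**2 / var_x_new))
```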
Experimental Results

For an experimental testbed, we used the "Arm2D" problem described by Cohn (1994). The task is to learn the kinematics of a toy 2-degree-of-freedom robot arm (see Figure 4). The inputs are joint angles $(\theta_1, \theta_2)$, and the outputs are the Cartesian coordinates of the tip $(X_1, X_2)$. One of the implicit assumptions of both models described here is that the noise is Gaussian in the output dimensions. To test the robustness of the algorithm to this assumption, we ran experiments using no noise, using additive Gaussian noise in the outputs, and using additive Gaussian noise in the inputs. The results of each were comparable; we report here the results using additive Gaussian noise in the inputs. Gaussian input noise corresponds to the case where the arm effectors or joint angle sensors are noisy, and results in non-Gaussian errors in the learner's outputs. The input distribution $P(x)$ is assumed to be uniform.

We compared the performance of the variance-minimizing criterion by comparing the learning curves of a learner using the criterion with those of one learning from random samples. The learning curves plot the mean squared error and variance of the learner as its training set size increases. The curves are created by starting with an initial sample, measuring the learner's mean squared error or estimated variance on a set of "reference" points (independent of the training set), selecting and adding a new example to the training set, retraining the learner on the augmented set, and repeating. On each step, the variance-minimizing learner chose a set of 64 unlabeled reference points drawn from the input distribution $P(x)$. It then selected a query $\tilde x = (\theta_1, \theta_2)$ that it estimated would minimize $\langle\tilde\sigma^2_{\hat y}\rangle$ over the reference set. In the experiments reported here, the best $\tilde x$ was selected from another set of 64 "candidate" points drawn at random on each iteration.
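A minimal sketch of one such run, with `learner`, `oracle`, and `sample_input` as hypothetical interfaces and `select_query` as sketched earlier:

```python
import numpy as np

def learning_curve(learner, oracle, sample_input, n_steps,
                   n_ref=64, n_cand=64, seed=0):
    """One run: at each step, draw reference and candidate sets, query
    the candidate minimizing average expected variance, label it via
    the oracle, retrain, and record the reference-set variance."""
    rng = np.random.default_rng(seed)
    variances = []
    for _ in range(n_steps):
        reference = [sample_input(rng) for _ in range(n_ref)]
        candidates = [sample_input(rng) for _ in range(n_cand)]
        x_next = select_query(learner, candidates, reference)
        learner.add_example(x_next, oracle(x_next))   # query and retrain
        variances.append(np.mean([learner.variance(x) for x in reference]))
    return variances
```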
Experiments with Mixtures of Gaussians

With the mixture of Gaussians model, there are three design parameters that must be considered: the number of Gaussians, their initial placement, and the number of iterations of the EM algorithm. We set these parameters by optimizing them on the learner using random examples, then used the same settings on the learner using the variance-minimization criterion.

Models with fewer Gaussians have the obvious advantage of requiring less storage space and computation. Intuitively, a small model should also have the advantage of avoiding overfitting, which is thought to occur in systems with extraneous parameters. Empirically, as we increased the number of Gaussians, generalization improved monotonically with diminishing returns (for a fixed training set size and number of EM iterations). The test error of the larger models generally matched that of the smaller models on small training sets (where overfitting would be a concern), and continued to decrease on large training sets where the smaller models "bottomed out." We therefore preferred the larger mixtures, and report here our results with mixtures of 60 Gaussians.

We selected the initial placement of the Gaussians randomly, chosen uniformly from the smallest hypercube containing all current training examples. We arbitrarily chose the identity matrix as an initial covariance matrix.

The learner was surprisingly sensitive to the number of EM iterations. We examined a range of 5 to 40 iterations of the EM algorithm per step. Small numbers of iterations (5-10) appear insufficient to allow convergence with large training sets, while large numbers of iterations (30-40) degraded performance on small training sets. An ideal training regime would employ some form of regularization, or would examine the degree of change between iterations to detect convergence; in our experiments, however, we settled on a fixed regime of 20 iterations per step.

Figure 5 plots the variance and MSE learning curves for a mixture of 60 Gaussians trained on the Arm2D domain with 1% input noise added. The estimated model variance using the variance-minimizing criterion is significantly better than that of the learner selecting data at random. The mean squared error, however, exhibits even greater improvement, with an error that is consistently 1/3 that of the randomly sampling learner.

Experiments with LOESS Regression

With LOESS, the design parameters are the size and shape of the kernel. As described earlier, we arbitrarily chose to work with a Gaussian kernel; we used the variance-based method for automatically selecting the kernel size.

In the case of LOESS, both the variance and the MSE of the learner using the variance-minimizing criterion are significantly lower than those of the learner selecting data randomly (Figure 6). It is worth noting that on the Arm2D domain, this form of locally weighted regression also significantly outperforms both the mixture of Gaussians and the neural networks discussed by Cohn (1994).

Computation Time

One obvious concern about the criterion described here is its computational cost. In situations where obtaining new examples may take days and cost thousands of dollars, it is clearly wise to expend computation to ensure that those examples are as useful as possible. In other situations, however, new data may be relatively inexpensive, so the computational cost of finding optimal examples must be considered. Table 1 summarizes the computation times for the two learning algorithms discussed in this paper.³ Note that, with the mixture of Gaussians, training time depends linearly on the number of examples, but prediction time is independent of it. Conversely, with locally weighted regression, there is no "training time" per se, but the cost of additional examples accrues when predictions are made using the training set.

While the training time incurred by the mixture of Gaussians may make it infeasible for selecting optimal learning actions in real-time control, it is certainly fast enough to be used in many applications. Optimized, parallel implementations will also enhance its utility.⁴ Locally weighted regression is certainly fast enough for many control applications, and may be made faster still by optimized, parallel implementations. It is worth noting that, since the prediction speed of these learners depends on their training set size, optimal data selection is doubly important, as it creates a parsimonious training set that allows faster predictions on future points.

3. The times reported are "per reference point" and "per candidate per reference point"; overall time must be computed from the number of candidates and reference points examined. In the case of the LOESS model, for example, with 100 training points, 64 reference points and 64 candidate points, the time required to select an action would be (58 + 0.16 × 100) × 4096 μsec, or about 0.3 seconds.

4. It is worth mentioning that approximately half of the training time for the mixture of Gaussians is spent computing the correction factor in Equation 8. Without the correction, the learner still computes P(y|x), but does so by modeling the training set distribution rather than the reference distribution. We have found, however, that for the problems examined, the performance of such "uncorrected" learners does not differ appreciably from that of the "corrected" learners.

Discussion

Mixtures of Gaussians and locally weighted regression are two statistical models that offer elegant representations and efficient learning algorithms. In this paper we have shown that they also offer the opportunity to perform active learning in an efficient and statistically correct manner. The criteria derived here can be computed cheaply and, for problems tested, demonstrate good predictive power.
In industrial settings, where gathering a single data point may take days and cost thousands of dollars, the techniques described here have the potential for enormous savings.

In this paper, we have only considered function approximation problems. Problems requiring classification could be handled analogously with the appropriate models. For learning classification with a mixture model, one would select examples so as to maximize discriminability between Gaussians; for locally weighted regression, one would use a logistic regression instead of the linear one considered here (Weisberg, 1985).

Our future work will proceed in several directions. The most important is active bias minimization. As noted in Section 2, the learner's error is composed of both bias and variance. The variance-minimizing strategy examined here ignores the bias component, which can lead to significant errors when the learner's bias is non-negligible. Work in progress examines effective ways of measuring and optimally eliminating bias (Cohn, 1995); future work will examine how to jointly minimize both bias and variance to produce a criterion that truly minimizes the learner's expected error.

Another direction for future research is the derivation of variance- (and bias-) minimizing techniques for other statistical learning models. Of particular interest is the class of models known as "belief networks" or "Bayesian networks" (Pearl, 1988; Heckerman, Geiger, & Chickering, 1994). These models have the advantage of allowing the inclusion of domain knowledge and prior constraints while still adhering to a statistically sound framework. Current research in belief networks focuses on algorithms for efficient inference and learning; it would be an important step to derive the proper criteria for learning actively with these models.

Acknowledgements

David Cohn's current address is: Harlequin, Inc., One Cambridge Center, Cambridge, MA 02142 USA. Zoubin Ghahramani's current address is: Department of Computer Science, University of Toronto, Toronto, Ontario M5S 1A4 CANADA. This work was funded by NSF grant CDA-9309300, the McDonnell-Pew Foundation, ATR Human Information Processing Laboratories and Siemens Corporate Research. We are deeply indebted to Michael Titterington and Jim Kay, whose careful attention and continued kind help allowed us to make several corrections to an earlier version of this paper.
References

Angluin, D. (1988). Queries and concept learning. Machine Learning.

Baum, E., & Lang, K. (1991). Neural network algorithms that learn in polynomial time from examples and queries. IEEE Transactions on Neural Networks.

Box, G., & Draper, N. (1987). Empirical Model-Building and Response Surfaces. Wiley.

Cheeseman, P., Self, M., Kelly, J., Taylor, W., Freeman, D., & Stutz, J. (1988). Bayesian classification. AAAI Press.

Cleveland, W., Devlin, S., & Grosse, E. (1988). Regression by local fitting. Journal of Econometrics.

Cohn, D. (1994). Neural network exploration using optimal experiment design. Morgan Kaufmann.

Cohn, D. (1995). Minimizing statistical bias with queries.

Cohn, D., Atlas, L., & Ladner, R. (1990). Training connectionist networks with queries and selective sampling. Morgan Kaufmann.

Cohn, D., Atlas, L., & Ladner, R. (1994). Improving generalization with active learning. Machine Learning.

Dempster, A., Laird, N., & Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B.

Fedorov, V. (1972). Theory of Optimal Experiments. Academic Press.

Fe'ldbaum, A. A. (1965). Optimal Control Systems. Academic Press.

Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation.

Ghahramani, Z., & Jordan, M. (1994). Supervised learning from incomplete data via an EM approach. Morgan Kaufmann.

Heckerman, D., Geiger, D., & Chickering, D. (1994). Learning Bayesian networks: The combination of knowledge and statistical data.

Linden, A., & Weber, F. (1993). Implementing inner drive by competence reflection. MIT Press.

MacKay, D. J. (1992). Information-based objective functions for active data selection. Neural Computation.

Nowlan, S. (1991). Soft competitive adaptation: Neural network learning algorithms based on fitting statistical mixtures.

Paass, G., & Kindermann, J. (1995). Bayesian query construction for neural network models. MIT Press.

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann.

Plutowski, M., & White, H. (1993). Selecting concise training sets from clean data. IEEE Transactions on Neural Networks.

Schaal, S., & Atkeson, C. (1994). Robot juggling: An implementation of memory-based learning. Control Systems.

Schmidhuber, J., & Storck, J. (1993). Reinforcement driven information acquisition in nondeterministic environments.

Specht, D. (1991). A general regression neural network. IEEE Transactions on Neural Networks.

Thrun, S., & Möller, K. (1992). Active exploration in dynamic environments. Morgan Kaufmann.

Titterington, D., Smith, A., & Makov, U. (1985). Statistical Analysis of Finite Mixture Distributions. Wiley.

Weisberg, S. (1985). Applied Linear Regression. Wiley.

Whitehead, S. (1991). A study of cooperative mechanisms for faster reinforcement learning.
Active Learning with Statistical Models
For many types of machine learning algorithms, one can compute the statistically "optimal" way to select training data. In this paper, we review how optimal data selection techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are computationally expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate. Empirically, we observe that the optimality criterion sharply decreases the number of training examples the learner needs in order to achieve good performance.
David A. Cohn; Zoubin Ghahramani
Figure 1: Using a mixture of Gaussians to compute ŷ. The Gaussians model the data density. Predictions are made by mixing the conditional expectations of each Gaussian given the input x.

Figure 2: In locally weighted regression, points are weighted by proximity to the current x in question using a kernel. A regression is then computed using the weighted points.

Figure 3: The estimator variance is minimized when the kernel includes as many training points as can be accommodated by the model. Here the linear LOESS model is shown. Too large a kernel includes points that degrade the fit; too small a kernel neglects points that increase confidence in the fit.

Figure 4: The arm kinematics problem. The learner attempts to predict tip position given a set of joint angles (θ₁, θ₂).

Figure 5: Variance and MSE learning curves for a mixture of 60 Gaussians trained on the Arm2D domain. Dotted lines denote standard error for the average of 10 runs, each started with one initial random example.

Figure 6: Variance and MSE learning curves for the LOESS model trained on the Arm2D domain. Dotted lines denote standard error for the average of 60 runs, each started with a single initial random example.

Table of notation (partially recovered): σ̃²_ŷ, the new variance of ŷ after example (x̃, ỹ) has been added; ⟨σ̃²_ŷ⟩, the expected value of σ̃²_ŷ; hᵢ(x), the weight of Gaussian i given x; P(x, y|i), the joint distribution of an input-output pair given Gaussian i; P(x|i), the distribution of x given Gaussian i; h̃ᵢ, the weight of the new point (x̃, ỹ) attributed to Gaussian i; hᵢ, the weight given to example i by the kernel centered at x; n, the sum of weights given to all points by the kernel; μ_x, the mean of inputs weighted by the kernel centered at x; μ_y, the mean of outputs weighted by the kernel centered at x; h̃, the weight of the new point (x̃, ỹ) given the kernel centered at x.

Table 1: Computation times on a Sparc 10 as a function of training set size m. Mixture model had 60 Gaussians trained for 20 iterations. Reference times are per reference point; candidate times are per candidate per reference point.

             Training          Evaluating reference   Evaluating candidates
  Mixture    3.9 + 0.05m sec   15000 μsec             1300 μsec
  LOESS      -                 92 + 9.7m μsec         58 + 0.16m μsec